{"id":100,"date":"2024-08-28T03:36:36","date_gmt":"2024-08-28T03:36:36","guid":{"rendered":"https:\/\/audioai.gfsoftware.com\/wp\/?p=100"},"modified":"2024-08-28T03:38:55","modified_gmt":"2024-08-28T03:38:55","slug":"running-tensorflow-and-openvino-models-in-c-for-audio-processing","status":"publish","type":"post","link":"https:\/\/audioai.gfsoftware.com\/wp\/2024\/08\/28\/running-tensorflow-and-openvino-models-in-c-for-audio-processing\/","title":{"rendered":"Running TensorFlow and OpenVINO Models in C# for Audio Processing"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\">Introduction<\/h3>\n\n\n\n<p>Here\u2019s an article written mostly by chatGPT based on my code. I modified some things that were wrong, and added some details. Both C# files are included in this post.<\/p>\n\n\n\n<p>In this article, we will explore two distinct model wrapper implementations in C#: `<strong>TensorFlowModelWrapper<\/strong>` and `<strong>OpenVINOModelWrapper<\/strong>`. These wrappers are designed to facilitate the integration of TensorFlow and <strong>OpenVINO<\/strong> models into a C# application, and are used for my audio player. We will examine the key functionalities, the differences in data handling, and the unique requirements of each wrapper.<\/p>\n\n\n\n<p>Both classes implements interface <strong>ITensorFlowModelWrapper <\/strong>so they can be swapped easily during development.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1. Overview of TensorFlowModelWrapper<\/h3>\n\n\n\n<p>The `<strong>TensorFlowModelWrapper<\/strong>` class interfaces with TensorFlow models using ML.NET, providing a structured way to integrate these models into C# applications.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Key Characteristics:<\/h3>\n\n\n\n<p>&#8211; **<strong>Model Initialization and Data Handling<\/strong>**: Unlike OpenVINO, TensorFlow models in ML.NET require a dedicated class for input and output data. 
This involves defining a schema that represents the model&#8217;s input and output formats, which is then used to handle data preprocessing and postprocessing. If the model has a dynamic batch size, the wrapper configures it for a batch size of 1.<\/p>\n\n\n\n<p>&#8211; <strong>Fitting Process<\/strong>: While TensorFlow models are generally pre-trained, ML.NET requires an explicit &#8220;fit&#8221; step to prepare the model for inference. This step configures the input pipeline and ensures that the model is ready to handle the input data effectively.<\/p>\n\n\n\n<p>&#8211; <strong>Inference Execution<\/strong>: New input\/output objects must be created before each inference call.<\/p>\n\n\n\n<p>&#8211; <strong>Memory Management<\/strong>: TensorFlow models in ML.NET may require explicit memory management techniques, such as invoking garbage collection, to keep memory usage under control, particularly when dealing with large models or high-frequency inference requests. In my case, I had a memory leak after each call; I switched to <strong>OpenVINO<\/strong> for this reason.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2. Overview of OpenVINOModelWrapper<\/h3>\n\n\n\n<p>The <strong>OpenVINOModelWrapper<\/strong> class is designed to work with models optimized by the OpenVINO toolkit, which provides accelerated inference on various Intel hardware.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Key Characteristics:<\/h3>\n\n\n\n<p>&#8211; <strong>Simpler Data Handling<\/strong>: <strong>OpenVINO<\/strong> models do not require a separate class to handle input and output data. Instead, the data is set directly into the model&#8217;s tensor, simplifying the data handling process. 
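<\/p>\n\n\n\n<p>A minimal sketch of that flow with the OpenVINO C# API (abbreviated from the <strong>OpenVinoModel.cs<\/strong> listing at the end of this post; the path and shape values are illustrative):<\/p>

```csharp
using OpenVinoSharp;

// Abbreviated sketch: fix a dynamic batch dimension to 1,
// then write the audio samples directly into the input tensor.
Core core = new Core();
Model model = core.read_model("saved_model.xml");   // illustrative path

// NWC layout: { batch = 1, samples, channels }
model.reshape(new PartialShape(3, new long[] { 1, 196608, 2 }));

CompiledModel compiled = core.compile_model(model, "AUTO");
InferRequest request = compiled.create_infer_request();

// No Input/Output schema classes: the data goes straight into the tensor.
float[] samples = new float[196608 * 2];
using (Tensor input = request.get_input_tensor(0))
    input.set_data(samples);

request.infer();

using Tensor output = request.get_output_tensor(0);
float[] result = output.get_data<float>((int)output.get_size());
```

<p>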
This approach is more straightforward than TensorFlow&#8217;s requirement for data schema definitions.<br>However, if your model has a dynamic batch size, you must reshape the model to the desired batch size.<\/p>\n\n\n\n<p>&#8211; <strong>No Fitting Required<\/strong>: Unlike TensorFlow models in ML.NET, OpenVINO models do not require a fitting process. The models are pre-trained and optimized for direct inference. This reduces the setup overhead and speeds up the integration process.<\/p>\n\n\n\n<p>&#8211; <strong>Optimized Inference<\/strong>: OpenVINO is designed to optimize model inference across different hardware configurations. The <strong>OpenVINOModelWrapper<\/strong> handles the model compilation and inference request setup automatically, making it more efficient for running AI models on various Intel platforms, including CPUs, GPUs, VPUs, and FPGAs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3. Key Differences in Data Handling and Model Integration<\/h3>\n\n\n\n<p>There are several fundamental differences between how <strong>TensorFlowModelWrapper<\/strong> and <strong>OpenVINOModelWrapper<\/strong> handle data and model integration:<\/p>\n\n\n\n<p>&#8211; <strong>Input and Output Data Classes<\/strong>: TensorFlow in ML.NET requires defining classes that describe the structure of input and output data. These classes are essential for converting between C# data types and the tensor formats expected by TensorFlow models. 
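<\/p>\n\n\n\n<p>To make the difference concrete, here is roughly how the same inference call looks in each API (condensed from the two Run() implementations at the end of this post; mlContext, transformer, inferRequest, and inputTensor are assumed to be set up as in the full listings):<\/p>

```csharp
// ML.NET: the data must round-trip through the schema classes.
var inputList = new List<Input> { new Input { Data = inputTensor } };
IDataView inputDv = mlContext.Data.LoadFromEnumerable(inputList);
IDataView transformed = transformer.Transform(inputDv);
float[] mlOutput = mlContext.Data
    .CreateEnumerable<Output>(transformed, reuseRowObject: false)
    .Single().Data;

// OpenVINO: the float[] goes straight into the tensor.
using (Tensor input = inferRequest.get_input_tensor(0))
    input.set_data(inputTensor);
inferRequest.infer();
using Tensor output = inferRequest.get_output_tensor(0);
float[] ovOutput = output.get_data<float>((int)output.get_size());
```

<p>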
In contrast, OpenVINO handles input and output tensors directly without needing separate data classes, streamlining the data processing pipeline.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Conclusion<\/h3>\n\n\n\n<p>With the current state of the ML.NET TensorFlow API, it is safer to use OpenVINO through the <a href=\"https:\/\/github.com\/guojin-yan\">guojin-yan<\/a> <strong><a href=\"https:\/\/github.com\/guojin-yan\/OpenVINO-CSharp-API\">OpenVINO-CSharp-API<\/a>.<\/strong> This, however, requires first converting the model to OpenVINO&#8217;s IR (Intermediate Representation) format.<\/p>\n\n\n\n<p>On the other hand, the ML.NET TensorFlow API can load TensorFlow models directly, which is useful if the model is being updated constantly.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">TensorFlowModel.cs<\/h4>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<pre class=\"wp-block-code\"><code>using System;\nusing System.Collections.Generic;\nusing System.Diagnostics;\nusing System.IO; \/\/ needed for Path\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\n\nusing Microsoft.ML;\nusing Microsoft.ML.Data;\nusing Microsoft.ML.Transforms;\n\nnamespace AIAudioPlayer.ML\n{\n\n    public enum TensorFormat_1D_Enum : int\n    {\n        NWC, \/\/ tensorflow standard\n        NCW  \/\/ torch ?\n    }\n\n    public interface ITensorFlowModelWrapper\n    {\n        bool Initialized { get; }\n        bool FitCalled { get; }\n        int InputChannels { get; }\n        int OutputChannels { get; }\n\n        \/\/\/ &lt;summary>\n        \/\/\/ In-memory format of the tensor\n        \/\/\/ &lt;\/summary>\n        TensorFormat_1D_Enum TensorFormat { get; }\n\n        int DataSampleRate { get; }\n        int DataLengthSamples { get; } \/\/ Used for both input and output\n        int DataTrimSize { get; } \/\/ Samples\n\n        bool Init(string modelPath = \"\");\n        bool Fit(float&#91;] inputTensor);\n        bool Run(float&#91;] 
inputTensor, ref float&#91;] outputTensor);\n        bool Run_wLock(float&#91;] inputTensor, ref float&#91;] outputTensor);\n\n        \/\/\/ &lt;summary>\n        \/\/\/ Returns true if everything was loaded correctly\n        \/\/\/ &lt;\/summary>\n        \/\/\/ &lt;returns>&lt;\/returns>\n        bool IsReady();\n    }\n\n    internal class TensorFlowModelWrapper : ITensorFlowModelWrapper\n    {\n#if DEBUG\n        protected string modelPath = \"H:\\\\Training\\\\WaveUnet\\\\saves\\\\model529\\\\\"; \/\/ 58 = nice one\n        \/\/protected string modelPath = \"H:\\\\Training\\\\WaveUnet\\\\saves\\\\modelBypass\\\\\"; \/\/ To test the communication between .NET and TensorFlow\n#else\n        protected string modelPath = \"{AppExe}\/modelB\"; \n#endif\n        protected MLContext mlContext = new MLContext();\n        protected TensorFlowModel? model = null;\n        protected TensorFlowTransformer? transformer = null;\n\n        protected TensorFlowEstimator? pipeline = null;\n\n        private bool _fitCalled = false;\n\n        public  int tensor_InputChannels = 2;\n        public  int tensor_OutputChannels = 8;\n\n        public  int tensor_DataLength = 196608;\n        public  int tensor_SampleRate = 44100;\n        \n        private  TensorFormat_1D_Enum _tensorFormat = TensorFormat_1D_Enum.NWC;\n        public TensorFormat_1D_Enum TensorFormat { get { return _tensorFormat; } }\n\n        public bool Initialized { get { return pipeline != null; } }\n\n        public bool FitCalled { get => _fitCalled; }\n\n        public int InputChannels { get => tensor_InputChannels; }\n        public int OutputChannels { get => tensor_OutputChannels; }\n        public int DataLengthSamples { get => tensor_DataLength; }\n\n        public int DataSampleRate { get => tensor_SampleRate; }\n        \n        \/\/\/ &lt;summary>\n        \/\/\/ Trim size in samples (for each side);\n        \/\/\/ -1 = use default\n        \/\/\/ &lt;\/summary>\n        public int DataTrimSize { get { 
return -1; } }\n\n         public const string modelOutputName = \"StatefulPartitionedCall\"; \/\/ \n\n#if DEBUG\n        public const string modelInputName = \"serving_default_input_4\";\n#else\n        public const string modelInputName = \"serving_default_input_3\"; \/\/ ModelA = 9, modelB\n#endif\n        private string _toNativeSeparators(string path)\n        {\n            return Path.GetFullPath(path);\n        }\n\n        public bool Init(string modelPath = \"\")\n        {\n\n            if (modelPath != \"\")\n                throw new ArgumentException(\"Custom modelPath not supported in this class\");\n\n            string AppExePath = Path.GetDirectoryName(Application.ExecutablePath);\n\n            \/\/ Use the field, not the (always empty) parameter\n            string wModelPath = _toNativeSeparators(this.modelPath.Replace(\"{AppExe}\", AppExePath));\n\n            \/\/ Load the TensorFlow model\n            model = mlContext.Model.LoadTensorFlowModel(wModelPath);\n            Debug.Print(\"Model loaded from {0}\", wModelPath);\n\n            \/\/var schema = tensorFlowModel.GetInputSchema();\n            var schema = model.GetModelSchema();\n\n            foreach (var column in schema)\n            {\n                Console.WriteLine($\"{column.Name}: {column.Type}\");\n            }\n\n            \/\/ Define the schema of the TensorFlow model\n            pipeline = model.ScoreTensorFlowModel(\n                outputColumnNames: new&#91;] { modelOutputName },\n                inputColumnNames: new&#91;] { modelInputName }, \/\/ 11 - model  42, 2- model 44, 4 -model 55\n                addBatchDimensionInput: false);\n\n            return true;\n        }\n\n        public bool Fit(float&#91;] inputTensor)\n        {\n            if (model == null)\n            {\n                throw new Exception(\"Init not called\");\n            }\n            if (_fitCalled)\n            {\n                throw new Exception(\"Fit already called\");\n            
}\n\n            _fitCalled = true;\n\n            \/\/ Calls transform to check the model\n            var input = new Input { Data = inputTensor };\n            var inputList = new List&lt;Input>() { input }; \/\/ Create a list so it implements IEnumerable\n\n            var dataView = mlContext.Data.LoadFromEnumerable(inputList);\n\n            transformer = pipeline.Fit(dataView);\n\n            return true;\n        }\n\n        public bool Run(float&#91;] inputTensor, ref float&#91;] outputTensor)\n        {\n            if (transformer == null) \/\/ transformer is set by Fit()\n            {\n                Debug.Print(\"Fit not called\");\n                return false;\n            }\n\n            \/\/ Prepare input data\n            var input = new Input { Data = inputTensor };\n            var inputList = new List&lt;Input>() { input };\n\n            \/\/ Load data into IDataView\n            var inputDv = mlContext.Data.LoadFromEnumerable(inputList);\n\n            \/\/ Transform data\n            var transformedValues = transformer.Transform(inputDv);\n\n            \/\/ Extract output\n            var output = mlContext.Data.CreateEnumerable&lt;Output>(transformedValues, reuseRowObject: false).Single();\n            outputTensor = output.Data;\n\n            \/\/ Explicitly nullify objects to aid garbage collection\n            inputDv = null;\n            transformedValues = null;\n\n            \/\/ Optionally force garbage collection\n            GC.Collect();\n            GC.WaitForPendingFinalizers();\n\n            return true;\n        }\n\n\n        \/\/\/ &lt;summary>\n        \/\/\/ Runs the model using a sync lock\n        \/\/\/ &lt;\/summary>\n        public bool Run_wLock(float&#91;] inputTensor, ref float&#91;] outputTensor)\n        {\n            lock (this)\n            {\n                return Run(inputTensor, ref outputTensor);\n            }\n        }\n\n        public bool IsReady()\n        {\n            return _fitCalled;\n        }\n\n        public class 
Input\n        {\n            &#91;VectorType(196608 * 2)] \/\/ Our model takes a three-dimensional tensor as input, but ML.NET takes a flattened vector as input\n            \/\/&#91;VectorType(tensorBHWC_height, tensorBHWC_width, tensor_InputChannels)]\n            &#91;ColumnName(modelInputName)] \/\/ This name must match the input node's name\n            public float&#91;] Data { get; set; }\n        }\n\n        public class Output\n        {\n            &#91;VectorType(196608 * 8)] \/\/ Again, a flattened three-dimensional tensor\n            \/\/&#91;VectorType(tensorBHWC_height, tensorBHWC_width, tensor_OutputChannels)]\n            &#91;ColumnName(modelOutputName)] \/\/ This name must match the output node's name\n            public float&#91;] Data { get; set; }\n        }\n    }\n}\n\n<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">OpenVinoModel.cs<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>using System;\nusing System.CodeDom;\nusing System.Diagnostics;\nusing System.IO;\nusing OpenVinoSharp;\n\nnamespace AIAudioPlayer.ML\n{\n    \/\/\/ &lt;summary>\n    \/\/\/ Loads an OpenVINO model; implements ITensorFlowModelWrapper\n    \/\/\/ &lt;\/summary>\n    internal class OpenVINOModelWrapper : ITensorFlowModelWrapper\n    {\n#if DEBUG\n        protected string DefaultModelPath = \"H:\\\\Training\\\\WaveUnet\\\\saves\\\\model529\\\\saved_model.xml\";\n        \/\/protected string modelPath = \"H:\\\\Training\\\\WaveUnet\\\\saves\\\\modelBypass\\\\saved_model.xml\";\n#else\n        protected string DefaultModelPath = \"H:\\\\Training\\\\WaveUnet\\\\saves\\\\model529\\\\saved_model.xml\";\n        protected string modelPath = \"{AppExe}\/modelB.xml\";\n#endif\n        private static Core? 
_core;\n        private Model _model;\n        private CompiledModel _compiledModel;\n        private InferRequest _inferRequest;\n\n        private bool _fitCalled = false;\n\n        \/\/\/ &lt;summary>\n        \/\/\/ The input tensor name from the model; if empty, get_input() is called to get the default input\n        \/\/\/ &lt;\/summary>\n        protected string model_InputTensor = \"\";\n\n        protected int model_InputChannels = 2;\n        protected int model_OutputChannels = 8;\n\n        protected int model_DataLength = 196608;\n        protected int model_SampleRate = 44100;\n\n        \/\/\/ &lt;summary>\n        \/\/\/ Models that don't have the batch dimension set to a fixed size need\n        \/\/\/ this to be TRUE\n        \/\/\/ &lt;\/summary>\n        protected bool model_needsReshape = true;\n\n        \/\/ These are for models that take multiple inputs, in case they can work correctly \n        \/\/ with just one. Otherwise, use a dedicated model\n        protected ulong model_inputN = 0;\n        protected ulong model_OutputN = 0;\n\n        private TensorFormat_1D_Enum _tensorFormat = TensorFormat_1D_Enum.NWC;\n        public TensorFormat_1D_Enum TensorFormat { get { return _tensorFormat; } }\n\n        public bool Initialized { get { return _compiledModel != null; } }\n\n        public bool FitCalled { get => _fitCalled; }\n\n        public int InputChannels { get => model_InputChannels; }\n        public int OutputChannels { get => model_OutputChannels; }\n        public int DataLengthSamples { get => model_DataLength; }\n\n        public int DataSampleRate { get => model_SampleRate; }\n\n        public int DataTrimSize { get { return -1; } }\n\n        public void SetModelSettings(string _defaultPath = \"\",\n                                     int _model_InputChannels = 2, int _model_OutputChannels = 8, int _model_DataLength = 196608,\n                                     int _model_SampleRate = 44100, 
TensorFormat_1D_Enum _model_TensorFormat = TensorFormat_1D_Enum.NWC,\n                                     bool _model_needsReshape = true,\n                                     string _model_InputTensor = \"\",\n                                     ulong _model_inputN = 0,\n                                     ulong _model_outputN = 0)\n        {\n            if (_model != null)\n                throw new Exception(\"Init already called\");\n\n            if (_defaultPath != \"\")\n                DefaultModelPath = _defaultPath;\n\n            this.model_InputChannels = _model_InputChannels;\n            this.model_OutputChannels = _model_OutputChannels;\n            this.model_DataLength = _model_DataLength;\n            this.model_SampleRate = _model_SampleRate;\n            this._tensorFormat = _model_TensorFormat;\n            this.model_InputTensor = _model_InputTensor;\n            this.model_needsReshape = _model_needsReshape;\n            this.model_inputN = _model_inputN;\n            this.model_OutputN = _model_outputN;\n        }\n\n        public bool Init(string modelPath = \"\")\n        {\n\n            if (modelPath == \"\")\n                modelPath = DefaultModelPath;\n\n            string AppExePath = Path.GetDirectoryName(System.Windows.Forms.Application.ExecutablePath);\n            string wModelPath = modelPath.Replace(\"{AppExe}\", AppExePath); \/\/ use the resolved path, not DefaultModelPath\n\n            if (OpenVINOModelWrapper._core == null)\n            {\n                OpenVINOModelWrapper._core = new Core();\n            }\n\n            _model = OpenVINOModelWrapper._core.read_model(wModelPath);\n            \n            \/\/ ---- Get some info (number of inputs, input shape)\n            ulong inputSize = _model.get_inputs_size();\n            ulong outputSize = _model.get_outputs_size();\n            PartialShape inputShape;\n            if (model_InputTensor != \"\")\n            {\n                inputShape = _model.get_input(model_InputTensor).get_partial_shape();\n   
         }\n            else\n            { \/\/ Use the default input (from the model)\n                inputShape = _model.get_input().get_partial_shape();\n            }\n\n            string inputShapeSTRING = inputShape.to_string();\n\n\n            \/\/ ------------ Reshape model (set batch dimension to 1)\n            if (model_needsReshape)\n            {\n\n                PartialShape newShape = new PartialShape(3, new long&#91;] { 1, DataLengthSamples, InputChannels }); \/\/ NWC\n\n                if (_tensorFormat == TensorFormat_1D_Enum.NCW)\n                    newShape = new PartialShape(3, new long&#91;] { 1, InputChannels, DataLengthSamples }); \/\/ NCW\n\n                string staticShape2 = newShape.to_string();\n\n\n                if (inputSize == 1)\n                { \/\/ Only reshape the input when the model has a single input\n                    _model.reshape(newShape);\n                }\n                else\n                    throw new Exception(\"Can't reshape a model with multiple inputs\");\n            }\n\n            \/\/ --------- Compile model\n\n            _compiledModel = OpenVINOModelWrapper._core.compile_model(_model, \"AUTO\");\n\n            \/\/ ---- List inputs of compiled model\n            \/\/ ... List inputs\n            for (ulong i = 0; i &lt; inputSize; i++)\n            {\n                using OpenVinoSharp.Input tmpInput = _compiledModel.input(i);\n                Debug.Print(\"Input {0} Name: {1} Shape: {2} Index {3}\", i, tmpInput.get_any_name(), tmpInput.get_shape().to_string(), tmpInput.get_index());\n            }\n            \/\/ ... 
List outputs\n            for (ulong i = 0; i &lt; outputSize; i++)\n            {\n                using OpenVinoSharp.Output tmpOutput = _compiledModel.output(i);\n                Debug.Print(\"Output {0} Name: {1} Shape: {2} Index {3}\", i, tmpOutput.get_any_name(), tmpOutput.get_shape().to_string(), tmpOutput.get_index());\n            }\n\n            \/\/ Create inference request\n            _inferRequest = _compiledModel.create_infer_request();\n\n            Debug.Print(\"Model loaded and compiled from {0}\", wModelPath);\n\n            return true;\n        }\n\n        public bool Fit(float&#91;] inputTensor)\n        {\n            if (_compiledModel == null || _inferRequest == null)\n            {\n                throw new Exception(\"Init not called\");\n            }\n            if (_fitCalled)\n            {\n                throw new Exception(\"Fit already called\");\n            }\n\n            _fitCalled = true;\n            \/\/ No actual fitting is necessary, as we're only running inference\n\n            return true;\n        }\n\n        public bool Run(float&#91;] inputTensor, ref float&#91;] outputTensor)\n        {\n            if (_compiledModel == null || _inferRequest == null)\n            {\n                Debug.Print(\"Init not called\");\n                return false;\n            }\n\n            \/\/ Prepare input tensor\n            using Tensor inputTensorObj = _inferRequest.get_input_tensor(model_inputN);\n            \n            \/\/ Shape input_shape = inputTensorObj.get_shape();\n            \/\/ Debug.Print(\"Input tensor Size: {0} {1}\", (int)inputTensorObj.get_size(), inputTensorObj.ToString());\n            inputTensorObj.set_data(inputTensor);\n\n            \/\/ Perform inference\n            _inferRequest.infer();\n\n            \/\/ Retrieve output tensor\n            using Tensor outputTensorObj = _inferRequest.get_output_tensor(model_OutputN);\n        
    \/\/ Debug.Print(\"Output tensor Size: {0} \", (int)outputTensorObj.get_size());\n            outputTensor = outputTensorObj.get_data&lt;float>((int)outputTensorObj.get_size());\n\n            return true;\n        }\n\n\n        public bool Run_wLock(float&#91;] inputTensor, ref float&#91;] outputTensor)\n        {\n            lock (this)\n            {\n                return Run(inputTensor, ref outputTensor);\n            }\n        }\n\n        public bool IsReady()\n        {\n            if (_compiledModel == null || _inferRequest == null)\n            {\n                Debug.Print(\"IsReady = False\");\n                return false;\n            }\n            return true;\n        }\n    }\n}\n<\/code><\/pre>\n<\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Here\u2019s an article written mostly by chatGPT based on my code. I modified some things that were wrong, and added some details. Both C#&#8230;<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_siteseo_robots_primary_cat":"none","footnotes":""},"categories":[10],"tags":[],"class_list":["post-100","post","type-post","status-publish","format-standard","hentry","category-programming"],"_links":{"self":[{"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/posts\/100","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/comments?post=100"}],"version-history":[{"count":2,"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/posts\/100\/revisions"}],"predecessor-version":[{"id":104,"href":"https:\/\/a
udioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/posts\/100\/revisions\/104"}],"wp:attachment":[{"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/media?parent=100"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/categories?post=100"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/audioai.gfsoftware.com\/wp\/wp-json\/wp\/v2\/tags?post=100"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}