A New Way to Call Python ML Libraries from .NET Code

We have already shown how to call Python code from .NET using Javonet, both on a local machine and inside Docker containers. Until now, we have been using intermediate Python code; this time, we will demonstrate how to interact directly with Python libraries used for machine learning.

Introduction

Sometimes there is no need for additional middleware code to call Python ML methods from a .NET application. We will demonstrate this approach.

Docker container for Python

Start by creating a directory for your Python code. Next, download the files libJavonetCppSdk.so and jcg from our website.

A short explanation:

  • libJavonetCppSdk.so – the Javonet SDK for C++, compiled for Linux
  • jcg – the Javonet Code Gateway, a TCP/IP server hosted on the node that allows remote communication

Your directory structure should look like this:

PythonCode
│   Dockerfile
│   jcg
└───libJavonetCppSdk.so

Our Dockerfile should look like this:

# the slim image provides a minimal Python environment
FROM python:3.9-slim

# installing required system libraries
RUN apt-get update && apt-get install -y \
    libxml2-dev openssl libxmlsec1-dev libxmlsec1-openssl && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

RUN pip install --upgrade pip

# installing required packages
RUN pip install --no-cache-dir transformers torch

# setting working directory where the application will run
WORKDIR /usr/local/app

# ensure the application can locate necessary shared libraries
ENV LD_LIBRARY_PATH=/usr/local/app 

# copying files
COPY libJavonetCppSdk.so ./
COPY jcg ./

# expose port 8080 for application access
EXPOSE 8080

# defining the entry point command to execute jcg, specifying the license key and runtime as Python
CMD ./jcg -licenseKey "your-license-key" -runtime python

And that’s it! Now we can proceed to the .NET part.

.NET client code

Project structure

Now, let’s create a simple web API using .NET. Your directory structure should look like this:

App
│   docker-compose.yml
├───PythonCode
│       ...
└───DotTorchDocker
    │   DotTorchDocker.sln
    │   docker-compose.yml
    └───DotTorchDocker.Api
            appsettings.json
            Dockerfile
            javonetconf.json
            Program.cs

Javonetconf.json configuration file

You might have noticed the javonetconf.json file. As you may have guessed, this file contains the configuration for the Javonet connection. A typical configuration might look like this:

{
  "licenseKey": "your-license-key",
  "loggingLevel": "runtimeInfo",
  "runtimes": {
    "python": [
      {
        "name": "pythondocker",
        "customOptions": "",
        "modules": "",
        "channel": { "type": "tcp", "host": "127.0.0.1", "port": 8080 }
      },
      {
        "name": "pythoninsidedocker",
        "customOptions": "",
        "modules": "",
        "channel": { "type": "tcp", "host": "python_service", "port": 8080 }
      }
    ]
  }
}

The first part (pythondocker) can be used to connect to a dockerized Python instance from local code (e.g., for debugging in Visual Studio). Our goal is to use the second one (pythoninsidedocker).
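To make this selection concrete, here is a small Python sketch (a hypothetical helper, not part of Javonet) that picks a runtime entry by name from a config shaped like javonetconf.json:

```python
import json

# Config shaped like javonetconf.json (license key and other fields omitted)
CONFIG = json.loads("""
{
  "runtimes": {
    "python": [
      {"name": "pythondocker",
       "channel": {"type": "tcp", "host": "127.0.0.1", "port": 8080}},
      {"name": "pythoninsidedocker",
       "channel": {"type": "tcp", "host": "python_service", "port": 8080}}
    ]
  }
}
""")

def pick_runtime(config: dict, name: str) -> dict:
    """Return the channel of the python runtime entry with the given name."""
    for entry in config["runtimes"]["python"]:
        if entry["name"] == name:
            return entry["channel"]
    raise KeyError(f"no runtime named {name!r}")

channel = pick_runtime(CONFIG, "pythoninsidedocker")
print(channel["host"], channel["port"])  # python_service 8080
```

When the code runs inside the compose network, the host resolves to the container name (python_service); from local debugging it falls back to 127.0.0.1, which is exactly why two entries exist.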

.NET code

Let’s move on to the .NET code. First, you need to create a new web application and add the Javonet NuGet package:

dotnet add package Javonet.Netcore.Sdk -s https://api.nuget.org/v3/index.json

For more information, see the Javonet documentation.

Now we can proceed to the code. I’ve added a short /check HTTP method to verify the connection. So we should have something like this in our Program.cs file:

namespace DotTorchDocker
{
    using Javonet.Netcore.Sdk;
    using System.Text.Json;

    public record TextRequest(string Text);

    internal class Program
    {
        private static void Main(string[] args)
        {
            var builder = WebApplication.CreateBuilder(args);
            builder.Services.AddEndpointsApiExplorer();
            builder.Services.AddSwaggerGen();

            var app = builder.Build();
            if (app.Environment.IsDevelopment())
            {
                app.UseSwagger();
                app.UseSwaggerUI();
            }
            app.UseHttpsRedirection();

            app.MapGet("/check", () => { return "ok"; })
                .WithName("check")
                .WithOpenApi();

            app.Run();
        }
    }
}

You need to use the “old” Program.cs style, with a Program class and a Main(string[] args) method.

Adding Javonet configuration

Next, we need to access the Javonet configuration. This can be done using the WithConfig() extension:

RuntimeContext _context = Javonet.WithConfig("./javonetconf.json").Python("pythoninsidedocker");

Earlier, when we discussed the javonetconf.json file, we mentioned using the pythoninsidedocker section. Here, we specify to Javonet where to locate the correct configuration. As a reminder, this is the relevant part:

{
  "licenseKey": "your-license-key",
  "loggingLevel": "runtimeInfo",
  "runtimes": {
    "python": [
      { ... },
      {
        "name": "pythoninsidedocker",   <== Javonet docker config
        "customOptions": "",
        "modules": "",
        "channel": { "type": "tcp", "host": "python_service", "port": 8080 }
      }
    ]
  }
}

Let’s add a simple DTO class to our code to store the results returned by the Python script:

public class PredictionResult
{
    public string Label { get; set; } = string.Empty;
    public double Score { get; set; }
}

and a record to use in the HTTP request:

public record TextRequest(string Text);

Javonet call

At this point, we need to add some code to call Javonet and the ML methods inside the Python Docker container. We can create a dedicated .NET method in our Program class. Let’s call it CallPython():

private static PredictionResult CallPython(RuntimeContext context, string message)
{
    var pipeline = context
        .GetType("transformers.pipeline")
        .CreateInstance("sentiment-analysis", "distilbert-base-uncased-finetuned-sst-2-english")
        .Execute();

    var analyzer = pipeline.CreateInstance(message).Execute();

    return new PredictionResult();
}

Now, let’s break down this piece of code step by step. We need to pass a RuntimeContext object and the message from the request to this method.
Then we can get the transformers.pipeline Python type from the RuntimeContext instance

var pipeline = context.GetType("transformers.pipeline")

and create an instance of this type:

    .CreateInstance("sentiment-analysis", "distilbert-base-uncased-finetuned-sst-2-english")
    .Execute();

Now we have a pipeline instance to use in our .NET code, so we can pass the message to it:

 var analyzer = pipeline.CreateInstance(message).Execute();

Extracting data from Python

As you can see, our method doesn’t return any meaningful data at this point, so we need to extract values from the analyzer variable.
We can build another method. I called it ClassWrapper():

private static PredictionResult ClassWrapper(InvocationContext data)
{
    var labelData = data.GetIndex(0).GetIndex("label").Execute();
    var scoreData = data.GetIndex(0).GetIndex("score").Execute();

    return new PredictionResult
    {
        Label = (string)labelData.GetValue(),
        Score = (double)scoreData.GetValue(),
    };
}

And again, let’s break it down. This time our parameter is an InvocationContext instance, and we need to read values from it. Our data sits at index 0 inside an array, so we call the .GetIndex(0) extension. From the previous article we know that the values of our prediction are called label and score in Python, which is why we use .GetIndex() again, this time for the ‘label’ and ‘score’ values. The last step of this part is executing the code, so we call .Execute().
Now we can extract the values by calling the .GetValue() method, casting them to the correct types, and wrapping them in a DTO object:

return new PredictionResult
{
    Label = (string)labelData.GetValue(),
    Score = (double)scoreData.GetValue(),
};
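For reference, the transformers sentiment pipeline on the Python side returns a list of dictionaries, so the equivalent extraction in plain Python looks like this (the sample values are illustrative, not real model output):

```python
# Illustrative shape of a transformers sentiment-analysis result:
# one dict per analyzed text, wrapped in a list.
result = [{"label": "POSITIVE", "score": 0.9998}]

# .GetIndex(0).GetIndex("label") in Javonet mirrors result[0]["label"] here
label = result[0]["label"]
score = result[0]["score"]
print(label, score)  # POSITIVE 0.9998
```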

The last line of the CallPython() method should now look like this:

private static PredictionResult CallPython(RuntimeContext context, string message)
{
    [...]
    // return new PredictionResult();
    return ClassWrapper(analyzer);
}

Using Javonet methods inside HTTP method

It’s time to use our Javonet code. Let’s add a new POST method:

app.MapPost("/analyze", (TextRequest request) =>
{
    var prediction = CallPython(_context, request.Text);

    var result = new
    {
        OriginalText = request.Text,
        Label = prediction.Label,
        Score = prediction.Score
    };
    return Results.Ok(result);
});

So the Program.cs file should look like this:

namespace DotTorchDocker
{
    using Javonet.Netcore.Sdk;

    public record TextRequest(string Text);

    public class PredictionResult
    {
        public string Label { get; set; } = string.Empty;
        public double Score { get; set; }
    }

    internal class Program
    {
        private static void Main(string[] args)
        {
            var builder = WebApplication.CreateBuilder(args);
            builder.Services.AddEndpointsApiExplorer();
            builder.Services.AddSwaggerGen();

            var app = builder.Build();
            if (app.Environment.IsDevelopment())
            {
                app.UseSwagger();
                app.UseSwaggerUI();
            }
            app.UseHttpsRedirection();

            var _context = Javonet.WithConfig("./javonetconf.json").Python("pythoninsidedocker");

            app.MapPost("/analyze", (TextRequest request) =>
            {
                var prediction = CallPython(_context, request.Text);

                var result = new
                {
                    OriginalText = request.Text,
                    Label = prediction.Label,
                    Score = prediction.Score
                };
                return Results.Ok(result);
            });

            app.MapGet("/check", () => { return "ok"; })
                .WithName("check")
                .WithOpenApi();

            app.Run();
        }

        private static PredictionResult CallPython(RuntimeContext context, string message)
        {
            var pipeline = context
                .GetType("transformers.pipeline")
                .CreateInstance("sentiment-analysis", "distilbert-base-uncased-finetuned-sst-2-english")
                .Execute();

            var analyzer = pipeline.CreateInstance(message).Execute();

            return ClassWrapper(analyzer);
        }

        private static PredictionResult ClassWrapper(InvocationContext data)
        {
            var labelData = data.GetIndex(0).GetIndex("label").Execute();
            var scoreData = data.GetIndex(0).GetIndex("score").Execute();

            return new PredictionResult
            {
                Label = (string)labelData.GetValue(),
                Score = (double)scoreData.GetValue(),
            };
        }
    }
}

Dockerize .NET application

Here is the Dockerfile for our .NET code:

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER app
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["DotTorchDocker.Api/DotTorchDocker.Api.csproj", "DotTorchDocker.Api/"]
RUN dotnet restore "./DotTorchDocker.Api/DotTorchDocker.Api.csproj"
COPY . .
WORKDIR "/src/DotTorchDocker.Api"
RUN dotnet build "./DotTorchDocker.Api.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./DotTorchDocker.Api.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
COPY --from=build /src/Binaries ./Binaries
ENTRYPOINT ["dotnet", "DotTorchDocker.Api.dll"]

And docker-compose.yml:

services:
  python-service:
    restart: always
    build:
      context: ../PythonCode
      dockerfile: Dockerfile
    container_name: python_service
    networks:
      - app_network
    environment:
      - LD_LIBRARY_PATH=/usr/local/app
    expose:
      - 8080
    ports:
      - "8080:8080"
  dottorchdocker.api:
    image: ${DOCKER_REGISTRY-}dottorchdockerapi
    build:
      context: .
      dockerfile: DotTorchDocker.Api/Dockerfile
    networks:
      - app_network
    depends_on:
      - python-service
    ports:
      - "53675:8081"

networks:
  app_network:
    driver: bridge

Running the dockerized application

We can run it with:

docker-compose up -d
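Before testing the API, it can be useful to confirm that the JCG port is actually reachable. A minimal sketch using only the Python standard library (host and port values assume the compose file above, checked from the host machine):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # The compose file publishes the Python service on localhost:8080
    print("jcg reachable:", port_open("127.0.0.1", 8080))
```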

Testing the solution

Once the containers are built, we can test the entire solution. You can call your .NET app with curl:

curl --location 'https://localhost:53675/analyze' \
--header 'Content-Type: application/json' \
--data '{
  "text": "Very good item"
}'

Received response

You should receive a similar response:

{
  "originalText": "Very good item",
  "label": "POSITIVE",
  "score": 0.99987197
}
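On the client side, this response can be consumed with any JSON parser; a quick Python sketch (the payload string mirrors the response above, with the camelCase field names produced by ASP.NET’s default serializer):

```python
import json

payload = '{"originalText": "Very good item", "label": "POSITIVE", "score": 0.99987197}'
prediction = json.loads(payload)

# Act on the model's verdict; 0.5 is an arbitrary example threshold
if prediction["label"] == "POSITIVE" and prediction["score"] > 0.5:
    print("Positive review:", prediction["originalText"])
```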

Conclusion

Integrating Python machine learning libraries directly into .NET applications opens up powerful new possibilities for developers. This innovative approach lets you combine the efficiency and scalability of .NET with Python’s rich ecosystem of ML tools. Whether you’re building data-driven applications, intelligent automation workflows, or real-time prediction engines, this integration provides flexibility and efficiency without compromising performance.

By bridging these two ecosystems seamlessly, you simplify the deployment and maintenance of AI-powered solutions, making it easier to bring advanced machine learning models to production within your .NET projects.

We’re excited to see how you’ll leverage this approach! Are there specific ML libraries or solutions you plan to integrate into your .NET applications? Share your ideas, challenges, or successes, and let’s collaborate to explore the new frontier of .NET and Python together.