
Quickstart with .NET (C#)

Call your PromptHelm prompts from any .NET application.

This quickstart walks you through minting an API token, installing the PromptHelm .NET SDK, and making your first call from C#. By the end you will have a working completion, an IAsyncEnumerable streaming example, and a checklist for shipping to ASP.NET Core or any other .NET host.

SDK status

The .NET SDK is in pre-release. The PromptHelm.Sdk NuGet package is being staged on nuget.org; track Runivox/prompt-helm-sdk-dotnet for the release announcement. The public API below is stable.

Prerequisites

  • .NET 8 SDK or newer (the package multi-targets net8.0 and netstandard2.0).
  • A PromptHelm account. Join the waitlist if you need an invite.
  1. Sign in to the dashboard and open Settings → API tokens. Click New token, name it (e.g. dotnet-service-dev), and copy the value immediately — tokens are revealed exactly once.

    Store the token in user-secrets for local development, and in your secrets manager (Azure Key Vault, AWS Secrets Manager, HashiCorp Vault) for staging and production:

    dotnet user-secrets init
    dotnet user-secrets set "PromptHelm:ApiKey" "ph_live_your_token_here"
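    Once stored, the key surfaces through the standard .NET configuration stack. A minimal sketch using Microsoft.Extensions.Configuration (this assumes you have added the Microsoft.Extensions.Configuration.UserSecrets package; in an ASP.NET Core app the host builder wires user-secrets up for you in Development):

    ```csharp
    using System;
    using Microsoft.Extensions.Configuration;

    // Layer user-secrets over environment variables, mirroring the
    // ASP.NET Core default ordering for the Development environment.
    var config = new ConfigurationBuilder()
        .AddEnvironmentVariables()
        .AddUserSecrets<Program>()   // reads the "PromptHelm:ApiKey" entry set above
        .Build();

    var apiKey = config["PromptHelm:ApiKey"]
        ?? throw new InvalidOperationException("PromptHelm:ApiKey is not configured.");
    ```

    The colon in the key name maps to configuration sections, so the same value can also be bound to an options class later.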
  2. Add the package via the .NET CLI, the Visual Studio NuGet UI, or by editing your .csproj.

    dotnet add package PromptHelm.Sdk
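    Equivalently, if you prefer editing the project file directly, the CLI command above amounts to a PackageReference entry (the version shown here is illustrative; check nuget.org for the current pre-release):

    ```xml
    <ItemGroup>
      <PackageReference Include="PromptHelm.Sdk" Version="0.1.0-preview" />
    </ItemGroup>
    ```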
  3. In the dashboard, navigate to Prompts → New prompt. Give it a slug (for example, welcome), pick a default model, and use {{ variable_name }} syntax for runtime variables. Saving publishes v1 on the main environment.

    Learn more

    See Concepts → Prompts for versions, environments, and promotion semantics.
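    As an illustration, the body of a welcome prompt might look like this (the template text is hypothetical; anything between {{ }} becomes a runtime variable you supply at call time):

    ```
    Write a short, friendly welcome message for {{ name }}.
    Keep it under two sentences.
    ```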

  4. Build the client with a configuration object and await the first response. The example below reads the API key from the process environment for local runs; in production, swap to your DI-managed configuration provider.

    Program.cs
    using PromptHelm.Sdk;
    
    // Fail fast with a clear message if the key is missing, rather than
    // deferring a null to the first API call.
    var apiKey = Environment.GetEnvironmentVariable("PROMPTHELM_API_KEY")
        ?? throw new InvalidOperationException("PROMPTHELM_API_KEY is not set.");
    
    var ph = new PromptHelmClient(new PromptHelmConfig
    {
        ApiKey = apiKey,
    });
    
    var response = await ph.ExecuteAsync(new ExecuteRequest
    {
        PromptSlug = "welcome",
        Variables = new Dictionary<string, string> { ["name"] = "World" },
    });
    
    Console.WriteLine(response.Output);

    The call appears in the dashboard's Logs view with the full request/response payload, the per-call cost, and the round-trip latency.

  5. For chat-style endpoints, the SDK exposes an IAsyncEnumerable<StreamEvent> that integrates with await foreach, ASP.NET Core minimal APIs, and SignalR hubs out of the box.

    Stream.cs
    await foreach (var ev in ph.StreamAsync(new ExecuteRequest
    {
        PromptSlug = "welcome",
        Variables = new Dictionary<string, string> { ["name"] = "World" },
    }))
    {
        switch (ev)
        {
            case ChunkEvent chunk:
                Console.Write(chunk.Content);
                break;
            case DoneEvent done:
                Console.WriteLine($"\n{done.TotalTokens} tokens, ${done.Cost}");
                break;
            case ErrorEvent err:
                Console.Error.WriteLine($"Error {err.ErrorCode}: {err.Message}");
                break;
        }
    }

    Pass a CancellationToken (for example HttpContext.RequestAborted) into StreamAsync so the SSE connection closes when the caller disconnects.
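    In an ASP.NET Core minimal API, that might look like the following sketch, which relays chunks to the caller as server-sent events. It assumes StreamAsync accepts a CancellationToken as its second argument and that the client was registered via AddPromptHelm (see the checklist below):

    ```csharp
    using PromptHelm.Sdk;

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddPromptHelm(opts =>
        opts.ApiKey = builder.Configuration["PromptHelm:ApiKey"]!);
    var app = builder.Build();

    app.MapGet("/welcome/{name}", async (string name, PromptHelmClient ph, HttpContext ctx) =>
    {
        ctx.Response.ContentType = "text/event-stream";

        // RequestAborted fires when the browser disconnects, which cancels
        // the upstream SSE connection to PromptHelm as well.
        await foreach (var ev in ph.StreamAsync(new ExecuteRequest
        {
            PromptSlug = "welcome",
            Variables = new Dictionary<string, string> { ["name"] = name },
        }, ctx.RequestAborted))
        {
            if (ev is ChunkEvent chunk)
            {
                await ctx.Response.WriteAsync($"data: {chunk.Content}\n\n", ctx.RequestAborted);
                await ctx.Response.Body.FlushAsync(ctx.RequestAborted);
            }
        }
    });

    app.Run();
    ```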

  6. Before you point real traffic at PromptHelm, run through this checklist:

    • Register the client through DI. In ASP.NET Core, prefer the built-in extension over new PromptHelmClient(...):

      builder.Services.AddPromptHelm(opts =>
          opts.ApiKey = builder.Configuration["PromptHelm:ApiKey"]!);

      This wires up a singleton with a pooled HttpClient and graceful shutdown.

    • Propagate CancellationToken everywhere. Pass HttpContext.RequestAborted (or the worker's stopping token) to every ExecuteAsync / StreamAsync call so cancellation reaches the SDK.

    • Catch typed errors. Every call throws PromptHelmException with a stable ErrorCode. Map known codes to retries or user-facing responses; let unknown codes bubble to your error reporter.

    • Source the API key from configuration. Use user-secrets in development, environment variables in containers, and a managed secrets store in production. Never check the key into source control.

    • Pin an environment. Reserve main for production traffic, and promote new prompt versions through dev and staging before they reach it.
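    The error-handling bullet above can be sketched as a wrapper around ExecuteAsync. The specific ErrorCode values ("rate_limited", "prompt_not_found") are illustrative placeholders, not confirmed SDK constants:

    ```csharp
    async Task<string> ExecuteWelcomeAsync(PromptHelmClient ph, ExecuteRequest request, CancellationToken ct)
    {
        try
        {
            var response = await ph.ExecuteAsync(request, ct);
            return response.Output;
        }
        catch (PromptHelmException ex) when (ex.ErrorCode == "rate_limited")
        {
            // Transient: rethrow so a retry policy (e.g. Polly) can re-run the call.
            throw;
        }
        catch (PromptHelmException ex) when (ex.ErrorCode == "prompt_not_found")
        {
            // Permanent: retries cannot fix a missing slug; degrade gracefully instead.
            return "Welcome!";
        }
        // Any other PromptHelmException bubbles up to the global error reporter.
    }
    ```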
