
Quickstart with Node.js

Get your first PromptHelm response in under 60 seconds with the official Node.js SDK.

This quickstart walks you through minting an API token, installing @prompt-helm/sdk, and making your first call from a Node.js process. By the end you will have a working completion against a published prompt, plus a streaming example and a short production checklist.

Prerequisites

  • Node.js 18 or newer (we test against 18, 20, and 22).
  • A PromptHelm account. If you do not have one, join the waitlist.

Steps

  1. Sign in to the dashboard and open Settings → API tokens. Click New token, give it a memorable name (for example, local-dev-quickstart), and copy the value immediately — tokens are revealed exactly once.

    One reveal only

    PromptHelm never stores plaintext tokens. If you lose the value, revoke the token and mint a new one.

    Store the token in an environment variable so it never lands in source control:

    .env
    PROMPTHELM_API_KEY=ph_live_your_token_here
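    If you want to fail fast before making any calls, a small guard like the hypothetical requireApiKey helper below can check the variable at startup (the ph_ prefix check is an assumption based on the token format shown above):

```typescript
// Fail fast if the token never made it into the environment.
// The "ph_" prefix check assumes the ph_live_... token format shown above.
function requireApiKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.PROMPTHELM_API_KEY;
  if (!key || !key.startsWith("ph_")) {
    throw new Error(
      "PROMPTHELM_API_KEY is missing or malformed; mint one under Settings → API tokens.",
    );
  }
  return key;
}
```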
  2. Add @prompt-helm/sdk to your project. We officially support npm, pnpm, and yarn:

    npm install @prompt-helm/sdk

    The SDK ships ESM-first with full TypeScript types. CommonJS works via the auto-generated dual-export entrypoint.

  3. In the dashboard, navigate to Prompts → New prompt. Give it a slug (for example, support-triage), pick a default model, and write the prompt body. Use {{ variable_name }} syntax for runtime variables.

    When you click Save, PromptHelm publishes a v1 on the main environment. The slug + environment combination is the contract your SDK calls will reference.

    Learn more

    See Concepts → Prompts for the full data model: versions, environments, and promotion flows.
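    Conceptually, variable substitution works like the sketch below. This is a minimal stand-in to illustrate the {{ variable_name }} syntax, not the SDK's or the server's actual implementation:

```typescript
// Illustrative sketch of {{ variable_name }} substitution. Placeholders with
// no matching variable are left untouched here; real rendering happens
// server-side when the prompt executes.
function renderTemplate(
  body: string,
  variables: Record<string, string>,
): string {
  return body.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in variables ? variables[name] : match,
  );
}

const body = "Classify this support ticket: {{ ticket }}";
console.log(renderTemplate(body, { ticket: "Password reset email never arrived." }));
```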

  4. Create a small script and import the SDK. The client picks up PROMPTHELM_API_KEY from process.env automatically.

    src/quickstart.ts
    import { PromptHelm } from "@prompt-helm/sdk";
    
    const client = new PromptHelm({
      // apiKey defaults to process.env.PROMPTHELM_API_KEY.
      // Specify it explicitly only when running outside Node (e.g. edge runtimes).
    });
    
    async function main() {
      const result = await client.execute({
        promptSlug: "support-triage",
        environment: "main",
        variables: {
          ticket: "Password reset email never arrived.",
        },
      });
    
      console.log("Response:", result.output);
      console.log("Cost (USD):", result.usage.costUsd);
      console.log("Latency (ms):", result.metrics.latencyMs);
    }
    
    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });

    Run it. The script is TypeScript, so plain node cannot execute it directly: use a runner such as tsx (npm install --save-dev tsx), or on Node 22.6+ pass --experimental-strip-types instead. Note that --env-file requires Node 20.6 or newer:

    node --env-file=.env --import tsx src/quickstart.ts

    You should see the model output, the per-call cost, and the round-trip latency. The same call is recorded in the dashboard's Logs view with the full request/response payload.

  5. For interactive UIs, switch to the streaming API. The SDK exposes an AsyncIterable, so you can for await over the chunks and forward them straight to the client.

    src/stream.ts
    import { PromptHelm } from "@prompt-helm/sdk";
    
    const client = new PromptHelm();
    
    const stream = await client.stream({
      promptSlug: "support-triage",
      environment: "main",
      variables: {
        ticket: "How do I rotate my API key?",
      },
    });
    
    for await (const chunk of stream) {
      if (chunk.type === "delta") {
        process.stdout.write(chunk.text);
      }
      if (chunk.type === "done") {
        process.stdout.write("\n");
        console.log("Cost (USD):", chunk.usage.costUsd);
      }
    }

    Streaming uses Server-Sent Events under the hood and works in modern Node.js runtimes, as well as edge environments such as Workers and Edge Functions.
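    To make the chunk protocol concrete, here is a self-contained sketch that uses a mock async generator in place of client.stream(). The Chunk union mirrors the delta/done shapes consumed in the loop above; the type names and mock values are illustrative, not the SDK's real exports:

```typescript
// Illustrative chunk shapes, mirroring the delta/done events the streaming
// loop consumes. Values are mock data, not real SDK output.
type Chunk =
  | { type: "delta"; text: string }
  | { type: "done"; usage: { costUsd: number } };

async function* mockStream(): AsyncGenerator<Chunk> {
  yield { type: "delta", text: "Rotate the key under " };
  yield { type: "delta", text: "Settings → API tokens." };
  yield { type: "done", usage: { costUsd: 0.0004 } };
}

// Accumulate deltas the same way the streaming loop does.
async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    if (chunk.type === "delta") text += chunk.text;
  }
  return text;
}

collect(mockStream()).then((text) => console.log(text));
```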

  6. Before you point real traffic at PromptHelm, run through this checklist:

    • Store the API token in your secrets manager. Never commit it.
    • Wrap calls with error handling. Every SDK method throws a typed PromptHelmError with a stable code. Map known codes to retries or user-facing messages.
    • Set timeouts. Pass a signal: AbortSignal.timeout(15_000) (or the SDK's built-in timeoutMs option) so a slow provider does not pin a Node worker.
    • Pin an environment. Reserve main for production traffic and promote new prompt versions through dev and staging first.
    • Watch the cost dashboard. PromptHelm tags every request with tenantId, promptSlug, and environment for slice-and-dice reporting.
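    The error-handling and retry advice above can be sketched as a small wrapper. PromptHelmError and the retryable codes here are assumptions modeled on the checklist, not the SDK's real exports:

```typescript
// Hedged sketch of the checklist's error-handling advice. The error class and
// code strings are assumptions for illustration.
class PromptHelmError extends Error {
  constructor(public code: string, message: string) {
    super(message);
    this.name = "PromptHelmError";
  }
}

const RETRYABLE = new Set(["rate_limited", "provider_unavailable"]);

// Retry only on known-retryable codes, with exponential backoff between tries;
// anything else is rethrown immediately for the caller to handle.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retryable =
        err instanceof PromptHelmError && RETRYABLE.has(err.code);
      if (!retryable || attempt >= attempts - 1) throw err;
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt),
      );
    }
  }
}
```

    In a real integration, fn would be something like () => client.execute({ ... }) with a signal or timeoutMs set, so each attempt is individually bounded.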

Next steps
