@helicone/helicone

Helicone OpenAI v4+ Node.js Library

This package is a simple and convenient way to log all requests made through the OpenAI API with Helicone. You can easily track and manage your OpenAI API usage and monitor your GPT models' cost, latency, and performance on the Helicone platform.

Proxy Setup

Installation and Setup

  1. To get started, install the @helicone/helicone package:

    npm install @helicone/helicone
  2. Set HELICONE_API_KEY as an environment variable:

    export HELICONE_API_KEY=<your Helicone API key>

    ℹ️ You can also set the Helicone API Key in your code (see below).

  3. Replace:

    import { ClientOptions, OpenAI } from "openai";

    with:

    import {
      HeliconeProxyOpenAI as OpenAI,
      IHeliconeProxyClientOptions as ClientOptions,
    } from "@helicone/helicone";
  4. Make a request. Chat, completion, embedding, and other endpoints are used exactly as in the OpenAI package (see the embeddings sketch after this step).

    const openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
      heliconeMeta: {
        apiKey: process.env.HELICONE_API_KEY, // Can be set as env variable
        // ... additional helicone meta fields
      },
    });
    
    const chatCompletion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello world" }],
    });

    console.log(chatCompletion.choices[0].message);
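
Other endpoints follow the same pattern. A minimal sketch of an embeddings call, assuming the wrapper mirrors the OpenAI v4 embeddings.create surface as stated above:

const embedding = await openai.embeddings.create({
  model: "text-embedding-ada-002",
  input: "Hello world",
});

// One embedding vector per input string.
console.log(embedding.data[0].embedding.length);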

Send Feedback

Ensure you store the helicone-id header returned in the original response.

// HeliconeFeedbackRating is exported from the package alongside the client classes.
import { HeliconeFeedbackRating } from "@helicone/helicone";

const { data, response } = await openai.chat.completions
  .create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello world" }],
  })
  .withResponse();

const heliconeId = response.headers.get("helicone-id");

if (heliconeId) {
  await openai.helicone.logFeedback(heliconeId, HeliconeFeedbackRating.Positive); // or Negative
}

HeliconeMeta options

interface IHeliconeMeta {
  apiKey?: string;
  properties?: { [key: string]: any };
  cache?: boolean;
  retry?: boolean | { [key: string]: any };
  rateLimitPolicy?: string | { [key: string]: any };
  user?: string;
  baseUrl?: string;
  onFeedback?: OnHeliconeFeedback; // Callback after feedback was processed
}

type OnHeliconeLog = (response: Response) => Promise<void>;
type OnHeliconeFeedback = (result: Response) => Promise<void>;

Advanced Features Example

const options: IHeliconeProxyClientOptions = {
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    cache: true,
    retry: true,
    properties: {
      Session: "24",
      Conversation: "support_issue_2",
    },
    rateLimitPolicy: {
      quota: 10,
      time_window: 60,
      segment: "Session",
    },
  },
};
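
The options object can then be handed straight to the aliased client constructor. A minimal sketch, assuming the same OpenAI/ClientOptions aliases from the setup steps above:

const openai = new OpenAI(options);

// Requests now flow through the Helicone proxy with caching, retries,
// custom properties, and the rate-limit policy applied.
const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello world" }],
});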

Async Setup

Installation and Setup

  1. To get started, install the @helicone/helicone package:

    npm install @helicone/helicone
  2. Set HELICONE_API_KEY as an environment variable:

    export HELICONE_API_KEY=<your Helicone API key>

    ℹ️ You can also set the Helicone API Key in your code (see below).

  3. Replace:

    import { ClientOptions, OpenAI } from "openai";

    with:

    import {
      HeliconeAsyncOpenAI as OpenAI,
      IHeliconeAsyncClientOptions as ClientOptions,
    } from "@helicone/helicone";
  4. Make a request. Chat, completion, embedding, and other endpoints are used exactly as in the OpenAI package.

    const openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
      heliconeMeta: {
        apiKey: process.env.HELICONE_API_KEY, // Can be set as env variable
        // ... additional helicone meta fields
      },
    });
    
    const chatCompletion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello world" }],
    });

    console.log(chatCompletion.choices[0].message);

Send Feedback

With async logging, you must retrieve the helicone-id header from the logging response, not from the LLM response.

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    onLog: async (response: Response) => {
      const heliconeId = response.headers.get("helicone-id");
      if (!heliconeId) return; // header missing; nothing to rate
      await openai.helicone.logFeedback(
        heliconeId,
        HeliconeFeedbackRating.Positive
      );
    },
  },
});

HeliconeMeta options

Async logging does not support some proxy-only features such as caching, rate limits, and retries.

interface IHeliconeMeta {
  apiKey?: string;
  properties?: { [key: string]: any };
  user?: string;
  baseUrl?: string;
  onLog?: OnHeliconeLog;
  onFeedback?: OnHeliconeFeedback;
}

type OnHeliconeLog = (response: Response) => Promise<void>;
type OnHeliconeFeedback = (result: Response) => Promise<void>;
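
The onFeedback callback pairs with logFeedback the same way onLog pairs with request logging. A minimal sketch following the OnHeliconeFeedback type above:

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    onFeedback: async (result: Response) => {
      // Invoked after Helicone has processed a logFeedback call.
      console.log("Feedback response status:", result.status);
    },
  },
});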

For more information, see our documentation.
