September 10, 2024

Changelog Image

Slack Alerts

Bringing alerts to your workspace when something goes wrong

Helicone now supports sending alerts to your Slack workspace in addition to email, so you can act on important notifications faster. Get started by heading to the Alerts page to create an alert or edit an existing one.

August 29, 2024

Changelog Image

#1 Product of the Day on Product Hunt

Helicone Reaches #1 on Product Hunt!

This achievement reflects our team’s hard work and the incredible support from our community. We’re thrilled about the boost in visibility for our platform!

Highlights:

  • #1 on Product Hunt’s daily leaderboard
  • Positive feedback from the open-source community
  • Surge in new user sign-ups and engagement

Product Hunt Results

A huge thank you to everyone who upvoted, commented, and shared Helicone. Your support motivates us to keep improving!

For more on our Product Hunt journey, check out our blog posts.

Links:

Product Hunt: Helicone on Product Hunt

August 25, 2024

Changelog Image

Docker images on Docker Hub

We’ve started publishing Docker images on Docker Hub.

This update simplifies Helicone deployment on platforms that don’t natively support the Google Container Registry. For detailed instructions, please refer to our updated self-hosting guide.

Links:

Docker Hub: helicone

August 12, 2024

Changelog Image

New hpstatic Function for Static Prompts in LLM Applications

We’ve added a new hpstatic function to our Helicone Prompt Formatter (HPF) package. This function allows users to create static prompts that don’t change between requests, which is particularly useful for system prompts or other constant text. The hpstatic function wraps the text in <helicone-prompt-static> tags, indicating to Helicone that this part of the prompt should not be treated as variable input.

Here’s a quick example of how to use hpstatic:

import OpenAI from "openai";
import { hpf, hpstatic } from "@helicone/prompts";

// Route requests through the Helicone proxy so prompt tracking works
const openai = new OpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const character = "a curious robot";
const systemPrompt = hpstatic`You are a helpful assistant.`;
const userPrompt = hpf`Write a story about ${{ character }}`;

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
    },
  }
);

This new feature enhances our prompt management capabilities, allowing for more flexible and efficient prompt structuring in your applications.

Start Using Static Prompts 🚀

August 9, 2024

Changelog Image

Ragas Integration for RAG System Evaluation

We’re excited to announce our integration with Ragas, an open-source framework for evaluating Retrieval-Augmented Generation (RAG) systems. This integration allows you to:

  • Monitor and analyze the performance of your RAG pipelines
  • Gain insights into RAG effectiveness using metrics like faithfulness, answer relevancy, and context precision
  • Easily identify areas for improvement in your RAG systems

Check out this quick video overview of the Ragas integration:

To get started with the Ragas integration, visit our documentation for step-by-step instructions and code examples.

August 6, 2024

Changelog Image

Optimistic Updates & Asynchronous Loading in Requests Page

We’ve improved data loading in the Requests page of the Helicone platform. By fetching metadata and request bodies separately and loading data asynchronously, we’ve reduced the time it takes to render large tables by almost 6x, improving both speed and UX.
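The pattern described above can be sketched as follows. This is a minimal illustration, not Helicone’s actual implementation: the fetchers are stubs and the field names are ours.

```typescript
// Sketch: fetch lightweight row metadata first so the table can render
// immediately, then fill in the heavy request bodies asynchronously.
interface RequestRow {
  id: string;
  createdAt: string;
  body?: string; // loaded lazily, after first paint
}

async function fetchMetadata(): Promise<RequestRow[]> {
  // Stub: in practice this would hit a metadata-only endpoint
  return [
    { id: "r1", createdAt: "2024-08-06" },
    { id: "r2", createdAt: "2024-08-06" },
  ];
}

async function fetchBody(id: string): Promise<string> {
  // Stub: in practice this would hit a per-request body endpoint
  return `body for ${id}`;
}

async function loadRequestsPage(): Promise<RequestRow[]> {
  // The table renders from metadata alone...
  const rows = await fetchMetadata();
  // ...while the bodies load in parallel and fill in as they arrive
  await Promise.all(
    rows.map(async (row) => {
      row.body = await fetchBody(row.id);
    })
  );
  return rows;
}
```

Because the table never waits on the large request bodies, the first render cost scales with the metadata payload rather than the full row contents.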

July 26, 2024

Changelog Image

New Assistants UI Playground

We’re thrilled to announce a major update to our Assistants UI Playground! Head to the Playground and click the “Try New Playground” button to explore the latest improvements:

  • Streamed responses for real-time interaction
  • Enhanced tool rendering for better visualization
  • Improved reliability for a smoother experience

Coming soon:

  • Expanded model support
  • Advanced prompt management
  • Integrated Markdown editor

Try out the new Playground today and elevate your LLM testing experience!

July 24, 2024

Changelog Image

Fireworks AI + Helicone

We’re excited to announce our integration with Fireworks AI, the high-performance LLM platform! Enhance your AI applications with Helicone’s powerful observability tools in just two easy steps:

  1. Generate a write-only API key in your Helicone account.
  2. Update your Fireworks AI base URL to:
    https://fireworks.helicone.ai
    

That’s all it takes! Now you can monitor, analyze, and optimize your Fireworks AI models with Helicone’s comprehensive insights.
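The two steps above can be sketched as building your requests against the Helicone proxy URL instead of Fireworks AI directly. The path and header layout below follow Helicone’s usual proxy conventions (Fireworks key in `Authorization`, Helicone key in `Helicone-Auth`) and are an assumption, not taken verbatim from this announcement:

```typescript
// Sketch: constructing a Fireworks AI request routed through Helicone
const HELICONE_FIREWORKS_BASE = "https://fireworks.helicone.ai";

interface ProxiedRequest {
  url: string;
  headers: Record<string, string>;
}

function buildFireworksRequest(
  path: string,
  fireworksKey: string,
  heliconeKey: string
): ProxiedRequest {
  return {
    url: `${HELICONE_FIREWORKS_BASE}${path}`,
    headers: {
      // Fireworks authenticates the request itself
      Authorization: `Bearer ${fireworksKey}`,
      // Helicone reads its write-only key from its own header
      "Helicone-Auth": `Bearer ${heliconeKey}`,
      "Content-Type": "application/json",
    },
  };
}
```

Everything else about the request body stays exactly as it was; only the host changes.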

For more details, check out our Fireworks AI integration guide.

July 23, 2024

Changelog Image

Dify + Helicone

We’re thrilled to announce our integration with Dify, the open-source LLM app development platform! Now you can easily add Helicone’s powerful observability features to your Dify projects in just two simple steps:

  1. Generate a write-only API key in your Helicone account.
  2. Set your API base URL in Dify to:
    https://oai.helicone.ai/<API_KEY>
    

That’s it! Enjoy comprehensive logs and insights for your Dify LLM applications.
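Note that because Dify exposes a single base-URL field, the write-only Helicone key is embedded in the URL path rather than sent as a header. A trivial sketch of step 2 (the helper name is ours):

```typescript
// Sketch: the base URL to paste into Dify, with the Helicone
// write-only key embedded in the path
function difyBaseUrl(heliconeApiKey: string): string {
  return `https://oai.helicone.ai/${heliconeApiKey}`;
}
```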

Check out our integration guide for more details.

July 22, 2024

Changelog Image

Prompts package

We’re excited to announce the release of our new @helicone/prompts package! This lightweight library simplifies prompt formatting for Large Language Models, offering features like:

  • Automated versioning with change detection
  • Support for chat-like prompt templates
  • Efficient variable handling and extraction

Check it out on GitHub and enhance your LLM workflow today!