The Production-Ready
Open Source AI Framework

Get Started

Highly
customizable

Don’t just use Haystack, build on top of it.

The flexible components-and-pipelines architecture lets you build around your own specifications and use cases, whether you're building a simple retrieval-augmented generation (RAG) app or a complex architecture with many moving parts.

Build with leading LLM providers and AI tools

Have the freedom of choice.

Thanks to our partnerships with leading LLM providers, vector databases, and AI tools, including OpenAI, Mistral, Weaviate, Pinecone, and many more, you can pick the stack that fits your use case.

Production is where it gets real

Get your application in front of the world.

Haystack 2.0 is built from the ground up with production in mind. Our pipelines are fully serializable and well suited to Kubernetes-native workflows. Logging and monitoring integrations give you the transparency you need, and our deployment guides walk you through full-scale deployments on all major clouds and on-prem.

Learn how to extend Haystack with deepset Cloud for faster building, easier iteration and instant deployment.

People in our community work for:

AWS
Nvidia
IBM
Intel

Haystack Use Cases

Join the community

Join our Discord

Our community on Discord is for everyone interested in NLP, whether you already use Haystack or are just getting started!

GET STARTED

Upcoming Haystack Events

Haystack Community Talks

AWS Summit Berlin 2023: Building Generative AI Applications on AWS featuring deepset

Building Applications with LLM-Based Agents

Open NLP Meetup #13: Hosting LLM Apps @ Scale with Haystack, Titan ML & Jina AI