AI Prompt for RAG Pipelines
Prompt and verifier for extracting verifiable citations from a RAG answer over medical records, scored by a Claude Sonnet 4.5 rubric scorer.
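As a sketch of what such a verifier might check (the function name and data shapes here are assumptions for illustration, not the scored prompt's actual format), a citation can be marked verifiable when its quoted span appears verbatim in the chunk it cites:

```python
def verify_citations(citations, source_chunks):
    """Mark each citation verifiable only if its quoted text appears
    verbatim in the chunk it points to.

    citations: {cite_id: (chunk_id, quoted_text)}
    source_chunks: {chunk_id: chunk_text}
    """
    return {
        cite_id: quote in source_chunks.get(chunk_id, "")
        for cite_id, (chunk_id, quote) in citations.items()
    }
```

A rubric scorer can then penalize any answer whose citations fail this exact-match check before judging style or completeness.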
More prompts for RAG Pipelines.
Implement query decomposition to improve retrieval recall over support tickets, using jina-embeddings-v3 with multi-vector (per-chunk) indexing.
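Production query decomposition typically prompts an LLM to split a compound question; as a minimal stand-in that shows the shape of the step (the splitting heuristic below is an assumption, not the recipe's actual prompt), each sub-query is then embedded and retrieved independently:

```python
import re

def decompose_query(query: str) -> list[str]:
    # Heuristic stand-in for an LLM decomposition call: split a compound
    # support-ticket question on conjunctions and question marks so each
    # sub-query can be embedded and retrieved on its own.
    parts = re.split(r"\?\s*|\band also\b|\band\b", query)
    return [p.strip().rstrip("?") + "?" for p in parts if p.strip()]

subqueries = decompose_query(
    "How do I reset my password and also update my billing email?"
)
# Each sub-query gets its own retrieval pass; results are merged afterward.
```

Retrieving per sub-query raises recall because a single embedding of the compound question tends to sit between the two topics and match neither chunk well.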
Production RAG recipe: recursive character chunking, mxbai-embed-large embeddings, Redis Vector storage, Voyage rerank-2 reranking. Includes retrieval evals.
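The recursive character chunking step can be sketched as follows (separator hierarchy and max length are illustrative defaults, not the recipe's tuned settings):

```python
def recursive_chunk(text, max_len=200, separators=("\n\n", "\n", ". ", " ")):
    """Split on the coarsest separator present, packing pieces into chunks
    no longer than max_len; recurse with finer separators on oversized
    pieces, and hard-cut only when no separator applies."""
    if len(text) <= max_len:
        return [text]
    for sep in separators:
        if sep in text:
            chunks, buf = [], ""
            for piece in text.split(sep):
                candidate = (buf + sep + piece) if buf else piece
                if len(candidate) <= max_len:
                    buf = candidate
                elif len(piece) > max_len:
                    if buf:
                        chunks.append(buf)
                    chunks.extend(recursive_chunk(piece, max_len, separators))
                    buf = ""
                else:
                    if buf:
                        chunks.append(buf)
                    buf = piece
            if buf:
                chunks.append(buf)
            return chunks
    # No separator present at all: fall back to a hard character cut.
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]
```

Preferring coarse separators (paragraphs, then sentences, then words) keeps chunks semantically intact, which matters more for embedding quality than hitting an exact length.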
Production RAG recipe: semantic (embedding-based) chunking, stella_en_1.5B_v5 embeddings, Chroma storage, mxbai-rerank-large reranking. Includes retrieval evals.
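Semantic chunking starts a new chunk wherever adjacent sentences stop being similar. A minimal sketch of that logic, using a bag-of-words stand-in for the real embedding model (stella_en_1.5B_v5 in the recipe) and an illustrative threshold:

```python
import math
import re
from collections import Counter

def toy_embed(sentence):
    # Stand-in for a real embedding model: a bag-of-words count vector,
    # good enough to illustrate the boundary-detection logic.
    return Counter(re.findall(r"[a-z']+", sentence.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_chunk(text, threshold=0.2):
    # Split into sentences, then start a new chunk at every point where
    # adjacent sentences fall below the similarity threshold.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    chunks, current = [], [sentences[0]]
    for prev, sent in zip(sentences, sentences[1:]):
        if cosine(toy_embed(prev), toy_embed(sent)) < threshold:
            chunks.append(" ".join(current))
            current = [sent]
        else:
            current.append(sent)
    chunks.append(" ".join(current))
    return chunks
```

With a real model, the threshold is usually set from a percentile of observed adjacent-sentence distances rather than a fixed constant.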
Hybrid BM25 + dense retrieval architecture with Cohere Rerank 3.5 cross-encoder reranking, tuned for customer interview transcripts.
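A common way to merge the BM25 and dense candidate lists before the cross-encoder stage is reciprocal rank fusion; a minimal sketch (the document IDs and k value are illustrative):

```python
def rrf_fuse(rankings, k=60):
    # Reciprocal rank fusion: each ranked list contributes 1 / (k + rank)
    # per document, so items ranked highly by either retriever surface
    # in the fused list handed to the reranker.
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["t7", "t2", "t9"]   # lexical ranking over ticket IDs
dense_hits = ["t2", "t4", "t7"]  # embedding-based ranking
fused = rrf_fuse([bm25_hits, dense_hits])  # → ["t2", "t7", "t4", "t9"]
```

The fused list is only a candidate set; the cross-encoder reranker (Cohere Rerank 3.5 here) scores each query-document pair directly and produces the final ordering.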
Production RAG recipe: token-based sliding window chunking, stella_en_1.5B_v5 embeddings, Weaviate storage, mxbai-rerank-large reranking. Includes retrieval evals.
Production RAG recipe: token-based sliding window chunking, cohere-embed-multilingual-v3 embeddings, pgvector storage, Cohere Rerank 3.5 reranking. Includes retrieval evals.
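Both sliding-window recipes share the same chunking step: fixed-size token windows with a small overlap so content at a boundary lands intact in at least one chunk. A minimal sketch (window and overlap sizes are illustrative, not the recipes' tuned settings):

```python
def sliding_window_chunks(tokens, window=256, overlap=32):
    # Fixed-size windows that advance by (window - overlap) tokens,
    # so each boundary region appears whole in at least one chunk.
    if window <= overlap:
        raise ValueError("window must exceed overlap")
    step = window - overlap
    chunks = [tokens[i:i + window] for i in range(0, len(tokens), step)]
    # Drop a trailing stub fully contained in the previous window.
    if len(chunks) > 1 and len(chunks[-1]) <= overlap:
        chunks.pop()
    return chunks
```

In practice `tokens` comes from the embedding model's own tokenizer, so chunk lengths line up with the model's context limit rather than with character counts.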
Replace the bracketed placeholders with your own context before running the prompt:
[c-1]: fill in your specific c-1.
[c-2]: fill in your specific c-2.
[c-3]: fill in your specific c-3.
[c-4]: fill in your specific c-4.
[c-N]: fill in your specific c-N.
["list of parts of the question not answered by context"]: fill in your specific list of parts of the question not answered by the context.