Claude Prompt for RAG Pipelines
Prompt and verifier for extracting verifiable citations from a RAG answer over financial filings (10-K/10-Q), scored by a Claude Sonnet 4.5 rubric scorer.
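The source page does not include the verifier itself, so here is a minimal, self-contained sketch of what such a citation verifier could look like. It assumes the answer cites context chunks with inline tags like [c-1], and it checks each cited claim against its chunk with a crude lexical-overlap score; a production verifier would use entailment or a rubric scorer (as the page describes) instead. All function names and the 0.6 threshold are illustrative assumptions.

```python
import re


def extract_citations(answer: str) -> list[tuple[str, list[str]]]:
    """Split the answer into sentences and collect the [c-N] tags each one cites."""
    pairs = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        tags = re.findall(r"\[(c-\d+|c-N)\]", sentence)
        if tags:
            # Drop the citation tags to recover the bare claim text.
            claim = re.sub(r"\s*\[c-[\dN]+\]", "", sentence).strip()
            pairs.append((claim, tags))
    return pairs


def support_score(claim: str, chunk: str) -> float:
    """Crude lexical support: fraction of claim tokens that appear in the chunk."""
    claim_tokens = set(re.findall(r"[a-z0-9%$]+", claim.lower()))
    chunk_tokens = set(re.findall(r"[a-z0-9%$]+", chunk.lower()))
    return len(claim_tokens & chunk_tokens) / max(len(claim_tokens), 1)


def verify(answer: str, chunks: dict[str, str], threshold: float = 0.6) -> list[dict]:
    """For each cited claim, report the best support score across its cited chunks."""
    report = []
    for claim, tags in extract_citations(answer):
        best = max((support_score(claim, chunks.get(t, "")) for t in tags), default=0.0)
        report.append({
            "claim": claim,
            "citations": tags,
            "score": round(best, 2),
            "supported": best >= threshold,
        })
    return report
```

A claim whose cited chunk contains most of its tokens is marked supported; a fabricated claim (e.g. one citing an unrelated chunk) scores near zero and is flagged for review.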
Replace the bracketed placeholders with your own context before running the prompt:
- [c-1] → fill in your specific c-1.
- [c-2] → fill in your specific c-2.
- [c-3] → fill in your specific c-3.
- [c-4] → fill in your specific c-4.
- [c-N] → fill in your specific c-N.
- ["list of parts of the question not answered by context"] → fill in your specific list of parts of the question not answered by context.

More prompts for RAG Pipelines
- Implement query decomposition to improve retrieval recall for support tickets using jina-embeddings-v3 + multi-vector (per chunk).
- Production RAG recipe: recursive character chunking, mxbai-embed-large embeddings, Redis Vector storage, Voyage rerank-2 reranking. Includes retrieval evals.
- Production RAG recipe: semantic (embedding-based) chunking, stella_en_1.5B_v5 embeddings, Chroma storage, mxbai-rerank-large reranking. Includes retrieval evals.
- Hybrid BM25 + dense retrieval architecture with Cohere Rerank 3.5 cross-encoder reranking, tuned for customer interview transcripts.
- Production RAG recipe: token-based sliding window chunking, stella_en_1.5B_v5 embeddings, Weaviate storage, mxbai-rerank-large reranking. Includes retrieval evals.
- Production RAG recipe: token-based sliding window chunking, cohere-embed-multilingual-v3 embeddings, pgvector storage, Cohere Rerank 3.5 reranking. Includes retrieval evals.
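Several of the entries above combine a lexical retriever with a dense one. None of the named stacks (Weaviate, pgvector, Cohere Rerank) are needed to see the shape of that hybrid step; below is a self-contained sketch with a from-scratch BM25, a toy bag-of-words stand-in for a real dense embedder, and reciprocal rank fusion (RRF) to merge the two rankings. Note the entries above fuse with a cross-encoder reranker rather than RRF; RRF is swapped in here only because it needs no model.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())


class BM25:
    """Okapi BM25 over a small in-memory corpus."""

    def __init__(self, docs: list[str], k1: float = 1.5, b: float = 0.75):
        self.k1, self.b = k1, b
        self.docs = [tokenize(d) for d in docs]
        self.n = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.n
        self.df = Counter()
        for d in self.docs:
            self.df.update(set(d))  # document frequency per term

    def score(self, query: str, idx: int) -> float:
        doc, tf, s = self.docs[idx], Counter(self.docs[idx]), 0.0
        for term in tokenize(query):
            if term not in tf:
                continue
            idf = math.log(1 + (self.n - self.df[term] + 0.5) / (self.df[term] + 0.5))
            denom = tf[term] + self.k1 * (1 - self.b + self.b * len(doc) / self.avgdl)
            s += idf * tf[term] * (self.k1 + 1) / denom
        return s

    def rank(self, query: str) -> list[int]:
        return sorted(range(self.n), key=lambda i: self.score(query, i), reverse=True)


def cosine_rank(query: str, docs: list[str]) -> list[int]:
    """Toy 'dense' ranking: cosine over bag-of-words count vectors.
    A real pipeline would call an embedding model here instead."""
    qv = Counter(tokenize(query))
    dvs = [Counter(tokenize(d)) for d in docs]

    def cos(dv: Counter) -> float:
        num = sum(qv[t] * dv[t] for t in qv)
        den = math.sqrt(sum(v * v for v in qv.values())) * math.sqrt(sum(v * v for v in dv.values()) or 1)
        return num / den if den else 0.0

    return sorted(range(len(docs)), key=lambda i: cos(dvs[i]), reverse=True)


def rrf(rankings: list[list[int]], k: int = 60) -> list[int]:
    """Reciprocal rank fusion: score each doc by sum of 1/(k + rank + 1)."""
    scores = Counter()
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return [doc_id for doc_id, _ in scores.most_common()]
```

Fusing the two rankings this way rewards documents that both retrievers place near the top, which is the recall benefit the hybrid entries above are after; the cross-encoder reranker then reorders the fused shortlist.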