Fix: LangChain.js Not Working — Chain Not Executing, Retriever Returning Empty, or Memory Lost Between Calls
Quick Answer
How to fix LangChain.js issues — model setup, prompt templates, RAG with vector stores, conversational memory, structured output, tool agents, and streaming with LangChain Expression Language.
The Problem
A chain is created but invoking it returns nothing:
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
const prompt = ChatPromptTemplate.fromTemplate('Tell me about {topic}');
const chain = prompt.pipe(model);
const result = await chain.invoke({ topic: 'TypeScript' });
// Error: OpenAI API key not found
Or a retriever returns zero results even though documents were added:
const results = await vectorStore.similaritySearch('what is TypeScript?', 5);
// [] — empty despite adding documents
Or conversation memory resets between requests:
// First request: "My name is Alice"
// Second request: "What is my name?" → "I don't know your name"
Why This Happens
LangChain.js is a framework for building applications powered by language models. Its modular architecture has several common failure points:
- Each model provider is a separate package — @langchain/openai, @langchain/anthropic, and @langchain/google-genai must be installed individually. The core langchain package doesn't include any models, and each provider's API key must be set as an environment variable.
- Vector stores need embeddings and a backend — similarity search requires documents to be embedded (converted to vectors) and stored. If the embedding model or vector store isn't configured correctly, documents are stored without vectors and searches return empty.
- Memory is in-process by default — BufferMemory lives in JavaScript memory. In serverless environments (Vercel, Lambda), each request may get a fresh process, so memory from previous requests is lost. Persistent memory requires an external store (Redis, a database).
- LCEL (LangChain Expression Language) composes chains with .pipe() — the older LLMChain and SequentialChain classes still work but are deprecated, and mixing the old and new APIs causes confusion.
Fix 1: Basic Chat with LCEL
npm install langchain @langchain/openai @langchain/core
// Basic chain with LCEL (LangChain Expression Language)
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
// Model — reads OPENAI_API_KEY from environment
const model = new ChatOpenAI({
modelName: 'gpt-4o',
temperature: 0.7,
maxTokens: 1000,
});
// Prompt template
const prompt = ChatPromptTemplate.fromMessages([
['system', 'You are a helpful assistant that explains {topic} concepts clearly.'],
['human', '{question}'],
]);
// Chain: prompt → model → parse output as string
const chain = prompt.pipe(model).pipe(new StringOutputParser());
// Invoke
const result = await chain.invoke({
topic: 'TypeScript',
question: 'What are generics and why are they useful?',
});
console.log(result); // String response
// Stream
const stream = await chain.stream({
topic: 'TypeScript',
question: 'Explain type narrowing',
});
for await (const chunk of stream) {
process.stdout.write(chunk);
}
Fix 2: RAG (Retrieval-Augmented Generation)
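The text splitter used in step 2 below chunks by characters with overlap, so a sentence that straddles a chunk boundary survives intact in at least one chunk. A dependency-free sketch of the idea (not the library's exact algorithm, which also respects separators like paragraphs and sentences):

```typescript
// Simplified character chunking with overlap — illustrates what
// RecursiveCharacterTextSplitter's chunkSize/chunkOverlap control.
function chunkText(text: string, chunkSize: number, chunkOverlap: number): string[] {
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap; // how far each chunk advances
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```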
npm install @langchain/openai langchain @langchain/community
# Vector store — pick one:
npm install faiss-node # Local (FAISS)
# Or: npm install @pinecone-database/pinecone # Cloud (Pinecone)
# Or: Use MemoryVectorStore for development
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { Document } from '@langchain/core/documents';
// 1. Prepare documents
const docs = [
new Document({
pageContent: 'TypeScript 5.0 introduced decorators that align with the TC39 proposal.',
metadata: { source: 'ts-docs', topic: 'decorators' },
}),
new Document({
pageContent: 'The satisfies operator lets you validate a type without widening it.',
metadata: { source: 'ts-docs', topic: 'satisfies' },
}),
// ... more documents
];
// 2. Split large documents into chunks
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const splitDocs = await splitter.splitDocuments(docs);
// 3. Create embeddings and vector store
const embeddings = new OpenAIEmbeddings({
modelName: 'text-embedding-3-small',
});
const vectorStore = await MemoryVectorStore.fromDocuments(splitDocs, embeddings);
// 4. Create retriever
const retriever = vectorStore.asRetriever({
k: 4, // Return top 4 matches
searchType: 'similarity',
});
// 5. Build RAG chain
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based on the following context. If the context doesn't contain the answer, say "I don't have information about that."
Context: {context}
Question: {input}
`);
const documentChain = await createStuffDocumentsChain({ llm: model, prompt });
const ragChain = await createRetrievalChain({ combineDocsChain: documentChain, retriever });
// 6. Query
const response = await ragChain.invoke({
input: 'What does the satisfies operator do in TypeScript?',
});
console.log(response.answer);
// Also available: response.context (retrieved documents)
Fix 3: Conversational Memory
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { RunnableWithMessageHistory } from '@langchain/core/runnables';
import { ChatMessageHistory } from 'langchain/stores/message/in_memory';
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
const prompt = ChatPromptTemplate.fromMessages([
['system', 'You are a helpful assistant.'],
new MessagesPlaceholder('history'),
['human', '{input}'],
]);
const chain = prompt.pipe(model);
// In-memory message store (per session)
const messageHistories: Record<string, ChatMessageHistory> = {};
function getMessageHistory(sessionId: string) {
if (!messageHistories[sessionId]) {
messageHistories[sessionId] = new ChatMessageHistory();
}
return messageHistories[sessionId];
}
// Wrap chain with message history
const withHistory = new RunnableWithMessageHistory({
runnable: chain,
getMessageHistory,
inputMessagesKey: 'input',
historyMessagesKey: 'history',
});
// Each invocation includes the session's history
const config = { configurable: { sessionId: 'user-123' } };
const r1 = await withHistory.invoke({ input: 'My name is Alice' }, config);
console.log(r1.content); // "Hello Alice!"
const r2 = await withHistory.invoke({ input: 'What is my name?' }, config);
console.log(r2.content); // "Your name is Alice"
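The in-memory Record above grows without bound, and every stored message is re-sent to the model on each turn. One mitigation is to cap history at the last N messages — an illustrative helper, not a LangChain API (newer versions of @langchain/core also ship a token-aware trimMessages utility):

```typescript
// Keep only the most recent messages to bound prompt size.
type StoredMessage = { role: 'human' | 'ai'; content: string };

function trimHistory(messages: StoredMessage[], maxMessages: number): StoredMessage[] {
  return messages.length <= maxMessages ? messages : messages.slice(-maxMessages);
}
```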
// Persistent memory with Redis (for serverless)
import { RedisChatMessageHistory } from '@langchain/redis';
function getRedisHistory(sessionId: string) {
return new RedisChatMessageHistory({
sessionId,
config: { url: process.env.REDIS_URL },
});
}
Fix 4: Structured Output
import { ChatOpenAI } from '@langchain/openai';
import { z } from 'zod';
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
// Method 1: withStructuredOutput (recommended)
const schema = z.object({
title: z.string().describe('Article title'),
summary: z.string().describe('One-paragraph summary'),
tags: z.array(z.string()).describe('Relevant tags'),
sentiment: z.enum(['positive', 'negative', 'neutral']),
confidence: z.number().min(0).max(1).describe('Confidence score'),
});
const structuredModel = model.withStructuredOutput(schema);
const result = await structuredModel.invoke(
'Analyze this review: "Great product, fast shipping, but packaging could be better"'
);
// result = { title: "...", summary: "...", tags: [...], sentiment: "positive", confidence: 0.85 }
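withStructuredOutput validates against the zod schema before returning, but if the result later crosses a serialization boundary (queue, cache, database) a defensive runtime check can still help. A hand-rolled guard for the shape above, as an illustration (zod's own schema.parse does the same job more concisely):

```typescript
interface Analysis {
  title: string;
  summary: string;
  tags: string[];
  sentiment: 'positive' | 'negative' | 'neutral';
  confidence: number;
}

// Narrow an unknown value to the Analysis shape at runtime.
function isAnalysis(value: unknown): value is Analysis {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.title === 'string' &&
    typeof v.summary === 'string' &&
    Array.isArray(v.tags) && v.tags.every((t) => typeof t === 'string') &&
    ['positive', 'negative', 'neutral'].includes(v.sentiment as string) &&
    typeof v.confidence === 'number' && v.confidence >= 0 && v.confidence <= 1
  );
}
```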
// Method 2: In a chain with prompt
import { ChatPromptTemplate } from '@langchain/core/prompts';
const prompt = ChatPromptTemplate.fromTemplate(
'Extract key entities from: {text}'
);
const entitySchema = z.object({
people: z.array(z.string()),
organizations: z.array(z.string()),
locations: z.array(z.string()),
dates: z.array(z.string()),
});
const entityChain = prompt.pipe(model.withStructuredOutput(entitySchema));
const entities = await entityChain.invoke({
text: 'Apple CEO Tim Cook announced at WWDC 2024 in San Jose that...',
});
Fix 5: Tool-Using Agents
import { ChatOpenAI } from '@langchain/openai';
import { tool } from '@langchain/core/tools';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { z } from 'zod';
// Define tools
const searchTool = tool(
async ({ query }) => {
// searchWeb is a placeholder — substitute your own search API client
const results = await searchWeb(query);
return JSON.stringify(results.slice(0, 3));
},
{
name: 'web_search',
description: 'Search the web for current information',
schema: z.object({ query: z.string().describe('Search query') }),
},
);
const calculatorTool = tool(
async ({ expression }) => {
try {
return String(eval(expression)); // Use a proper math parser in production
} catch {
return 'Invalid expression';
}
},
{
name: 'calculator',
description: 'Evaluate a mathematical expression',
schema: z.object({ expression: z.string().describe('Math expression like "2 + 2"') }),
},
);
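The calculator tool above falls back on eval, which executes arbitrary JavaScript — a real risk when the input comes from a model. A minimal recursive-descent evaluator for + - * / and parentheses, as a sketch (a library such as mathjs is the sturdier production choice):

```typescript
// Minimal recursive-descent evaluator for + - * / ( ) — no eval, no code execution.
function evaluate(expression: string): number {
  const input = expression.replace(/\s+/g, '');
  let pos = 0;

  function parseExpr(): number { // handles + and -
    let value = parseTerm();
    while (input[pos] === '+' || input[pos] === '-') {
      const op = input[pos++];
      const rhs = parseTerm();
      value = op === '+' ? value + rhs : value - rhs;
    }
    return value;
  }

  function parseTerm(): number { // handles * and /
    let value = parseFactor();
    while (input[pos] === '*' || input[pos] === '/') {
      const op = input[pos++];
      const rhs = parseFactor();
      value = op === '*' ? value * rhs : value / rhs;
    }
    return value;
  }

  function parseFactor(): number { // numbers, parentheses, unary minus
    if (input[pos] === '-') { pos++; return -parseFactor(); }
    if (input[pos] === '(') {
      pos++; // consume '('
      const value = parseExpr();
      if (input[pos] !== ')') throw new Error('Expected closing parenthesis');
      pos++; // consume ')'
      return value;
    }
    const match = /^\d+(\.\d+)?/.exec(input.slice(pos));
    if (!match) throw new Error(`Unexpected character at position ${pos}`);
    pos += match[0].length;
    return Number(match[0]);
  }

  const result = parseExpr();
  if (pos !== input.length) throw new Error('Trailing characters in expression');
  return result;
}
```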
// Create agent
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
const agent = createReactAgent({
llm: model,
tools: [searchTool, calculatorTool],
});
// Run agent
const result = await agent.invoke({
messages: [{ role: 'user', content: 'What is the current population of Japan and what is 10% of it?' }],
});
// Agent will: 1) search for Japan population, 2) use calculator for 10%
console.log(result.messages[result.messages.length - 1].content);
Fix 6: Multiple Providers
npm install @langchain/anthropic @langchain/google-genai
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatAnthropic } from '@langchain/anthropic';
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';
// Each provider reads its own API key from environment
// OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY
const openai = new ChatOpenAI({ modelName: 'gpt-4o' });
const claude = new ChatAnthropic({ modelName: 'claude-sonnet-4-20250514' });
const gemini = new ChatGoogleGenerativeAI({ modelName: 'gemini-2.0-flash' });
// All share the same interface — swap freely
const prompt = ChatPromptTemplate.fromTemplate('Explain {concept} simply');
const openaiChain = prompt.pipe(openai).pipe(new StringOutputParser());
const claudeChain = prompt.pipe(claude).pipe(new StringOutputParser());
const geminiChain = prompt.pipe(gemini).pipe(new StringOutputParser());
// Use whichever chain you want
const result = await claudeChain.invoke({ concept: 'quantum computing' });
Still Not Working?
“API key not found” even though it’s set — each provider reads a specific environment variable: OPENAI_API_KEY for OpenAI, ANTHROPIC_API_KEY for Anthropic, GOOGLE_API_KEY for Google. If you use a different variable name, pass it explicitly: new ChatOpenAI({ openAIApiKey: process.env.MY_KEY }).
Vector search returns empty results — the embeddings might not have been generated. Check that the embedding model API key is valid and that MemoryVectorStore.fromDocuments() completed without error. Also verify the search query is semantically similar to the stored documents — keyword matching won’t work with vector search.
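Vector search ranks documents by embedding similarity (cosine similarity, in MemoryVectorStore's case), not by shared keywords — which is why a query can match documents it shares no words with, and why a store whose documents were never embedded matches nothing. The ranking step, conceptually:

```typescript
// Cosine similarity between two embedding vectors — the measure behind
// similaritySearch's ranking. 1 = same direction, 0 = unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```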
Memory resets on each request in serverless — in-memory stores are lost when the function cold-starts. Use RedisChatMessageHistory, UpstashRedisChatMessageHistory, or store messages in your database. Pass the session ID in each request to load the correct history.
Chain returns AIMessage instead of a string — pipe a StringOutputParser() at the end of your chain: prompt.pipe(model).pipe(new StringOutputParser()). Without the parser, the chain returns the full AIMessage object including metadata.
For related AI and API issues, see Fix: Vercel AI SDK Not Working and Fix: Next.js App Router Not Working.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.