Fix: LangChain Python Not Working — ImportError, Pydantic, and Deprecated Classes
Quick Answer
How to fix LangChain Python errors — ImportError from package split, Pydantic v2 compatibility, AgentExecutor deprecated, ConversationBufferMemory removed, LCEL output type mismatches, and tool calling failures.
The Error
You install LangChain and the import fails immediately:
ImportError: cannot import name 'ChatOpenAI' from 'langchain'

Or you update LangChain and existing code breaks with Pydantic errors:

pydantic.v1.error_wrappers.ValidationError: 1 validation error for LLMChain

Or your agent runs partway through a task and then stops without finishing:

# No error, just stops mid-task
{'output': 'Agent stopped due to iteration limit or time limit.'}

Or memory doesn't persist between calls:

# Second message has no context from the first
chain.invoke({"input": "My name is Alice"})
chain.invoke({"input": "What's my name?"})  # Returns: "I don't know your name"

LangChain has gone through significant structural changes — its package architecture, internal dependencies, and core APIs have all shifted. Most errors come from code that worked six months ago hitting APIs that no longer exist in the same location.
Why This Happens
LangChain split from a monolithic package into several focused packages (langchain-core, langchain-openai, langchain-community, etc.) and upgraded its internal Pydantic version requirement from v1 to v2. At the same time, it deprecated several high-level abstractions (AgentExecutor, ConversationBufferMemory, ConversationChain) in favor of LangGraph and LCEL patterns. Code written for LangChain 0.1 frequently fails on 0.3+ without changes.
Current stable version as of April 2026: LangChain 1.2.x (with langchain-core 1.2.x).
Fix 1: ImportError — Package Split
The most common error when upgrading or installing LangChain. Classes that used to live in langchain now live in separate packages.
Before (old, broken):
from langchain.chat_models import ChatOpenAI # ❌
from langchain.document_loaders import PyPDFLoader # ❌
from langchain.tools import tool, BaseTool # ❌
from langchain.prompts import ChatPromptTemplate # ❌
from langchain.output_parsers import JsonOutputParser # ❌

After (current):
from langchain_openai import ChatOpenAI # ✓
from langchain_anthropic import ChatAnthropic # ✓
from langchain_community.document_loaders import PyPDFLoader # ✓
from langchain_core.tools import tool, BaseTool, StructuredTool # ✓
from langchain_core.prompts import ChatPromptTemplate # ✓
from langchain_core.output_parsers import JsonOutputParser # ✓
from langchain_core.runnables import RunnablePassthrough # ✓

Install the packages you actually need:
pip install langchain-core # Always required
pip install langchain-openai # For OpenAI/Azure models
pip install langchain-anthropic # For Claude models
pip install langchain-google-genai # For Gemini models
pip install langchain-community # For document loaders, vector stores, etc.
pip install langchain # High-level chains and agents
pip install langgraph # Agent runtime (recommended over AgentExecutor)

Quick reference — where things live now:
| What you need | Package | Import |
|---|---|---|
| ChatOpenAI | langchain-openai | from langchain_openai import ChatOpenAI |
| ChatAnthropic | langchain-anthropic | from langchain_anthropic import ChatAnthropic |
| ChatPromptTemplate | langchain-core | from langchain_core.prompts import ChatPromptTemplate |
| @tool decorator | langchain-core | from langchain_core.tools import tool |
| StrOutputParser | langchain-core | from langchain_core.output_parsers import StrOutputParser |
| PyPDFLoader | langchain-community | from langchain_community.document_loaders import PyPDFLoader |
| FAISS vector store | langchain-community | from langchain_community.vectorstores import FAISS |
Pro Tip: If you’re getting ModuleNotFoundError on first install rather than after an upgrade, check that you’re installing into the right virtual environment. See Python ModuleNotFoundError venv for environment isolation fixes.
Automated migration: For large codebases, the LangChain CLI can update most imports automatically:
pip install "langchain-cli>=0.0.22"
langchain-cli migrate ./your_project/

Run it multiple times — it may need a second pass to catch all imports.
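Before running the migration tool on a large codebase, it can help to inventory the stale imports yourself. A minimal sketch in plain Python — the `find_stale_imports` helper and its mapping are illustrative (not part of langchain-cli); the mapping mirrors the quick-reference table above:

```python
import re

# Old-style imports that the migration tool should rewrite.
OLD_IMPORT = re.compile(r"^from langchain\.(\w+) import", re.MULTILINE)

# Where each old submodule's classes now live (per the table in Fix 1).
NEW_HOME = {
    "chat_models": "langchain-openai / langchain-anthropic (provider packages)",
    "document_loaders": "langchain-community",
    "tools": "langchain-core",
    "prompts": "langchain-core",
    "output_parsers": "langchain-core",
}

def find_stale_imports(source: str) -> list[tuple[str, str]]:
    """Return (old submodule, suggested replacement package) pairs."""
    return [(m, NEW_HOME.get(m, "check the migration table"))
            for m in OLD_IMPORT.findall(source)]

sample = (
    "from langchain.chat_models import ChatOpenAI\n"
    "from langchain.prompts import ChatPromptTemplate\n"
)
print(find_stale_imports(sample))
```

Point it at each file's contents before and after running `langchain-cli migrate` to confirm nothing was missed.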
Fix 2: Pydantic v2 Compatibility Errors
langchain-core 0.3.0 and later requires Pydantic v2. If you’re seeing pydantic.v1.error_wrappers.ValidationError or UserWarning: Pydantic v1 is no longer supported, you’re either on an old version of LangChain or mixing v1 and v2 imports.
Check your installed versions:
pip show pydantic langchain-core

langchain-core 1.x requires pydantic>=2.7.4. If you're on Pydantic 1.x, upgrade:
pip install --upgrade pydantic langchain-core

If the error message mentions pydantic.v1, you're on an old LangChain version or mixing v1 and v2 imports. See Python pydantic validation error for a deeper breakdown of Pydantic v2 model errors.
Update your own Pydantic models:
# Old (Pydantic v1 style):
from pydantic import BaseModel, validator

class MyModel(BaseModel):
    name: str

    @validator('name')
    def name_must_not_be_empty(cls, v):
        if not v:
            raise ValueError('Name cannot be empty')
        return v

# New (Pydantic v2 style):
from pydantic import BaseModel, field_validator

class MyModel(BaseModel):
    name: str

    @field_validator('name')
    @classmethod
    def name_must_not_be_empty(cls, v):
        if not v:
            raise ValueError('Name cannot be empty')
        return v

For tool input schemas specifically, nested Pydantic v2 models can fail when LangChain generates the JSON schema. If a tool's args_schema produces wrong or empty schema output, flatten the input structure or use StructuredTool.from_function() with an explicit schema:
from pydantic import BaseModel, Field
from langchain_core.tools import StructuredTool

class SearchInput(BaseModel):
    query: str = Field(description="The search query")
    max_results: int = Field(default=5, description="Max results to return")

def search(query: str, max_results: int = 5) -> str:
    return f"Results for '{query}'"

# Explicit args_schema bypasses auto-detection issues
search_tool = StructuredTool.from_function(
    func=search,
    args_schema=SearchInput,
    name="search",
    description="Search the web"
)

Fix 3: AgentExecutor Deprecated — Migrate to LangGraph
AgentExecutor from langchain.agents is deprecated and only receives critical bug fixes. If your agent is hitting iteration limits, silently stopping, or behaving unpredictably, migrating to LangGraph gives you more control and better error handling.
Old pattern (deprecated):
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate

agent = create_openai_tools_agent(model, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    max_iterations=15,  # Agent stops here silently
    verbose=True
)
result = executor.invoke({"input": "Do a multi-step task"})

New pattern with LangGraph:
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")
agent = create_react_agent(model, tools)

# recursion_limit is a config key passed at invoke time, not a
# constructor argument — it replaces AgentExecutor's max_iterations
result = agent.invoke(
    {"messages": [("user", "Do a multi-step task")]},
    config={"recursion_limit": 25}
)
print(result["messages"][-1].content)
For streaming (much better with LangGraph):
async for event in agent.astream_events(
    {"messages": [("user", "Do a task")]},
    version="v2"
):
    if event["event"] == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="", flush=True)

Fix 4: Memory Not Persisting — ConversationBufferMemory Removed
ConversationBufferMemory, ConversationChain, and related memory classes are fully deprecated in LangChain 1.x. They don’t work with LCEL chains at all.
The replacement is RunnableWithMessageHistory:
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

# 1. Store per-session history
store: dict[str, InMemoryChatMessageHistory] = {}

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

# 2. Prompt with a history placeholder
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),  # Must match history_messages_key
    ("human", "{input}")  # Must match input_messages_key
])

chain = prompt | model

# 3. Wrap with history
chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",       # Matches the human template variable
    history_messages_key="history"    # Matches MessagesPlaceholder variable_name
)

# 4. Always pass session_id in config
result1 = chain_with_history.invoke(
    {"input": "My name is Alice"},
    config={"configurable": {"session_id": "user-123"}}
)
result2 = chain_with_history.invoke(
    {"input": "What's my name?"},
    config={"configurable": {"session_id": "user-123"}}
)
# result2 now knows the name is Alice

Common mistakes:
- MessagesPlaceholder(variable_name="history") — the variable_name must exactly match history_messages_key
- Forgetting config={"configurable": {"session_id": "..."}} — without this, all conversations share the same history slot
- Using a different key name than the template variable
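The shared-history mistake is easy to reproduce with the store pattern alone. Below is a pure-Python sketch of the same lazy-create store (plain lists instead of InMemoryChatMessageHistory, no LangChain required), showing why every caller lands in one history slot when the session id collapses to a shared constant:

```python
# Minimal stand-in for the per-session store: session_id -> message list
store: dict[str, list[str]] = {}

def get_session_history(session_id: str) -> list[str]:
    # Same lazy-create pattern as the InMemoryChatMessageHistory version
    if session_id not in store:
        store[session_id] = []
    return store[session_id]

# Two users with distinct session ids keep separate histories
get_session_history("user-123").append("My name is Alice")
get_session_history("user-456").append("My name is Bob")

# What happens when every caller falls back to one hard-coded id:
# both messages land in the same list, so conversations bleed together
get_session_history("default").append("message from request A")
get_session_history("default").append("message from request B")

print(store["user-123"])  # only Alice's message
print(store["default"])   # both "default" callers share one history
```

This is exactly why a missing session_id in config merges every conversation: the store key is the only thing separating users.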
Fix 5: LCEL Chain — Type Mismatch and Output Errors
LCEL chains (prompt | model | parser) are strict about types between steps. A common pattern that breaks:
from langchain_core.output_parsers import JsonOutputParser

# model outputs an AIMessage; JsonOutputParser reads its .content
# and expects it to contain valid JSON
chain = prompt | model | JsonOutputParser()
result = chain.invoke({"input": "Return a JSON object"})
# May raise OutputParserException if the model wraps JSON in backticks

JsonOutputParser handles markdown-wrapped JSON, but only if the JSON is inside a valid ```json ... ``` block. Raw JSON with no wrapper also parses fine. What fails is extra text before the JSON block.
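If the model insists on prepending text, one workaround is to extract the fenced JSON yourself before handing it to the parser. A hedged sketch in plain Python — the extract_json helper is illustrative, not a LangChain API:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Parse JSON from a model reply that may wrap it in a ```json fence
    or prepend explanatory text. Falls back to parsing the raw string."""
    fence = re.search(r"```(?:json)?\s*(\{.*\})\s*```", text, re.DOTALL)
    if fence:
        return json.loads(fence.group(1))
    return json.loads(text)

# Handles all three shapes described above:
print(extract_json('{"ok": true}'))                                   # raw JSON
print(extract_json('```json\n{"ok": true}\n```'))                     # fenced
print(extract_json('Sure! Here it is:\n```json\n{"ok": true}\n```'))  # text + fence
```

You could wrap this in a RunnableLambda as a last step of the chain, but fixing the prompt (below) is the cleaner first move.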
Fix: Be explicit in the prompt:
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
parser = JsonOutputParser()
prompt = ChatPromptTemplate.from_messages([
("system", "Return a JSON object only. No markdown, no explanation. Just raw JSON."),
("human", "{input}")
])
chain = prompt | model | parser

Using StrOutputParser when you just need text:
from langchain_core.output_parsers import StrOutputParser
# Most lenient — extracts .content from AIMessage, returns plain string
chain = prompt | model | StrOutputParser()

with_retry() for transient parsing failures:
chain = (prompt | model | JsonOutputParser()).with_retry(
    stop_after_attempt=3
)

Fix 6: Tool Calling — bind_tools() and @tool Errors
bind_tools() after with_structured_output() fails:
model = ChatOpenAI()
structured_model = model.with_structured_output(MySchema)
# AttributeError: 'RunnableSequence' object has no attribute 'bind_tools'
model_with_tools = structured_model.bind_tools(tools)

with_structured_output() returns a RunnableSequence, not a model. You can't chain further model methods on it. Use separate chains for tool calling and structured output.
Correct @tool definition in LangChain 1.x:
Type hints are required — they generate the JSON schema the LLM uses to decide how to call the tool:
from langchain_core.tools import tool
@tool
def calculate_area(length: float, width: float) -> float:
    """Calculate the area of a rectangle.

    Args:
        length: The length of the rectangle in meters.
        width: The width of the rectangle in meters.

    Returns:
        The area in square meters.
    """
    return length * width

Debug tool schema output:
# Inspect what the LLM actually sees
print(calculate_area.name) # "calculate_area"
print(calculate_area.description) # Docstring
print(calculate_area.args) # {"length": {...}, "width": {...}}If args is empty or missing required parameters, the LLM can’t call the tool correctly. This usually means the type hints are missing or the docstring format isn’t being parsed.
Still Not Working?
LangSmith Tracing Shows Nothing
Set all three environment variables before running your code:
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls_xxxxxxxxxxxx
export LANGCHAIN_PROJECT=my-project # Optional

Or set them in Python before importing LangChain:
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls_xxxxxxxxxxxx"
# Import LangChain AFTER setting the env vars
from langchain_openai import ChatOpenAI

LangChain reads these at import time — setting them after importing has no effect.
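The import-order pitfall is generic Python: module-level code runs once, and any os.environ read it performs snapshots whatever is set at that moment. A self-contained illustration that simulates two import orders with exec on a toy module (not LangChain itself):

```python
import os

# Toy stand-in for a library that reads configuration at import time.
# The module-level assignment runs once, when the module is first loaded.
module_source = """
import os
TRACING = os.environ.get("LANGCHAIN_TRACING_V2", "false")
"""

# Case 1: env var set AFTER "import" — the snapshot misses it
os.environ.pop("LANGCHAIN_TRACING_V2", None)
namespace_late = {}
exec(module_source, namespace_late)          # "import" happens here
os.environ["LANGCHAIN_TRACING_V2"] = "true"  # too late

# Case 2: env var set BEFORE "import"
namespace_early = {}
exec(module_source, namespace_early)

print(namespace_late["TRACING"])   # "false" — setting after import had no effect
print(namespace_early["TRACING"])  # "true"
```

The same logic applies to real modules: Python caches them in sys.modules, so re-setting the variable and re-importing doesn't re-run the module-level read.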
RuntimeError: asyncio.run() cannot be called from a running event loop
This happens when you call asyncio.run() inside FastAPI (which already runs an event loop) or inside Jupyter notebooks. If you’re running multiple async chains concurrently, also check Python asyncio gather error for exception propagation patterns.
# WRONG — inside a FastAPI route or Jupyter cell
result = asyncio.run(chain.ainvoke(input_dict))
# CORRECT — use await directly
result = await chain.ainvoke(input_dict)

Every LangChain runnable has both invoke() (sync) and ainvoke() (async). In FastAPI routes, always use await chain.ainvoke(). See Python asyncio not running for the full set of event loop pitfalls.
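You can reproduce the error without LangChain or FastAPI. In the sketch below, fake_ainvoke is a stand-in for chain.ainvoke, and route_handler plays the part of an async route already running inside an event loop:

```python
import asyncio

async def fake_ainvoke(input_dict):
    # Stand-in for chain.ainvoke — any coroutine works for the demo
    return {"output": input_dict["input"].upper()}

async def route_handler():
    # CORRECT inside an already-running loop: await the coroutine directly
    result = await fake_ainvoke({"input": "hello"})

    # WRONG: asyncio.run() refuses to start a loop inside a running one
    coro = fake_ainvoke({"input": "hello"})
    try:
        asyncio.run(coro)
        error = None
    except RuntimeError as exc:
        coro.close()  # silence the "never awaited" warning
        error = str(exc)
    return result, error

result, error = asyncio.run(route_handler())
print(result)  # {'output': 'HELLO'}
print(error)   # asyncio.run() cannot be called from a running event loop
```

The outer asyncio.run() is fine because it starts from synchronous code; only the nested call fails.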
OpenAI Rate Limit Errors Propagating Through Chains
LangChain doesn’t catch or wrap model errors — they pass through unchanged. Catch them at the call site:
from openai import RateLimitError
from langchain_core.runnables import RunnableLambda
def handle_rate_limit(input_dict):
    try:
        return chain.invoke(input_dict)
    except RateLimitError as exc:
        # Check exc.body["error"]["type"]:
        # "insufficient_quota" = billing issue, don't retry
        # "rate_limit_exceeded" = backoff and retry
        raise
For retry logic on rate limits, see OpenAI API not working — the same patterns apply when OpenAI is called through LangChain.
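A sketch of the retry logic those comments describe: exponential backoff on rate_limit_exceeded, fail-fast on insufficient_quota. The RateLimitError class below is a stand-in so the example runs without the OpenAI SDK; with the SDK installed you would catch openai.RateLimitError instead:

```python
import time

class RateLimitError(Exception):
    """Stand-in for openai.RateLimitError, mimicking its .body payload."""
    def __init__(self, error_type: str):
        super().__init__(error_type)
        self.body = {"error": {"type": error_type}}

def invoke_with_backoff(invoke, input_dict, attempts: int = 3, base_delay: float = 0.01):
    """Retry rate-limited calls with exponential backoff; re-raise quota errors."""
    for attempt in range(attempts):
        try:
            return invoke(input_dict)
        except RateLimitError as exc:
            if exc.body["error"]["type"] == "insufficient_quota":
                raise  # billing issue — retrying won't help
            if attempt == attempts - 1:
                raise  # retryable, but out of attempts
            time.sleep(base_delay * 2 ** attempt)

# Simulated chain: fails twice with a retryable error, then succeeds
calls = {"n": 0}
def flaky_invoke(input_dict):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("rate_limit_exceeded")
    return {"output": "done"}

print(invoke_with_backoff(flaky_invoke, {"input": "hi"}))  # {'output': 'done'}
```

In production you'd use a longer base_delay (a second or more) and possibly jitter; the structure stays the same.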
pip install langchain Installed an Old Version
LangChain releases multiple times per week. If you installed it a few weeks ago without pinning, you may be on an older minor version that’s missing fixes:
pip install --upgrade langchain langchain-core langchain-openai
# Or pin to current stable
pip install "langchain==1.2.15" "langchain-core==1.2.26" "langchain-openai>=0.3"Check what’s currently installed:
pip show langchain langchain-core langchain-openai | grep -E "Name|Version"

Deprecation Warnings Flooding Output
LangChain emits LangChainDeprecationWarning for deprecated class usage. These are warnings, not errors, but they indicate code that will break in a future version. To see which lines are triggering them:
import warnings
warnings.filterwarnings("error", category=DeprecationWarning)
# Now deprecated usage raises an error with a traceback showing you exactly where it is

Then fix the flagged imports using the table in Fix 1.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.