
Fix: SQLAlchemy Not Working — DetachedInstanceError, Pool Exhausted, and MissingGreenlet

FixDevs

Quick Answer

How to fix SQLAlchemy 2.x errors — DetachedInstanceError from lazy loading, QueuePool limit exceeded, MissingGreenlet in async context, N+1 queries, IntegrityError rollback, and Alembic migration failures.

The Error

You access a relationship on an object after the session closes and get this:

sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <User at 0x7f04...>
is not bound to a Session; lazy load operation of attribute 'orders' cannot proceed

Or your app runs fine under low load then falls over under traffic:

sqlalchemy.pool.QueuePool - QueuePool limit of size 5 overflow 10 reached,
connection timed out, timeout 30

Or you migrate to async SQLAlchemy and hit this immediately:

sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called;
can't call await_only() here — asyncio event loop is not accessible

Or your database slows to a crawl with hundreds of queries for what should be a single request:

# echo=True output: 1 query for the users + 50 lazy loads = 51 queries total
SELECT * FROM users;
SELECT * FROM orders WHERE user_id = 1;
SELECT * FROM orders WHERE user_id = 2;
# ... 48 more

Each of these comes from a different failure mode in SQLAlchemy’s ORM. Understanding session lifecycle, lazy loading, and connection pool behavior is the key to fixing all of them.

Why This Happens

SQLAlchemy 2.x is strict about session ownership. Every ORM object belongs to a session, and accessing an unloaded attribute triggers a database query through that session. If the session is already closed, you get DetachedInstanceError. If the session is async but you’re calling sync ORM methods, you get MissingGreenlet. If connections aren’t returned to the pool after use, the pool eventually runs dry.

The root of most SQLAlchemy bugs is the same: objects outlive their sessions, and lazy loading attempts fail when there’s no session left to issue the query.

Fix 1: DetachedInstanceError — Load Relationships Before the Session Closes

This is the most common SQLAlchemy error. It happens when you access a lazy-loaded relationship (user.orders, product.tags) after the session that loaded the parent object has already closed.

In FastAPI, this is the exact pattern that triggers it — the session closes at the end of the request handler but response serialization happens after:

# WRONG — session closes before orders are accessed
async def get_user(user_id: int, session: AsyncSession = Depends(get_session)):
    result = await session.execute(select(User).where(User.id == user_id))
    user = result.scalar_one()
    return user  # orders hasn't been loaded — DetachedInstanceError during serialization

Fix: Use selectinload() to load relationships in the same query:

from sqlalchemy import select
from sqlalchemy.orm import selectinload

async def get_user(user_id: int, session: AsyncSession = Depends(get_session)):
    stmt = (
        select(User)
        .where(User.id == user_id)
        .options(selectinload(User.orders))  # Load orders now, not later
    )
    result = await session.execute(stmt)
    return result.scalar_one()

selectinload() issues a second SELECT ... WHERE id IN (...) query to load all related records up front. After that, user.orders is available even after the session closes.

When to use joinedload() instead:

For many-to-one relationships (loading the parent side), joinedload() uses a JOIN and loads everything in a single query:

# Loading orders and their associated user — many-to-one, use joinedload
stmt = select(Order).options(joinedload(Order.user))
# Loading a user's orders — one-to-many, use selectinload
stmt = select(User).options(selectinload(User.orders))

Rule: selectinload for one-to-many and many-to-many. joinedload for many-to-one and one-to-one. joinedload on a one-to-many multiplies your result rows, which wastes memory.

Alternative: set lazy="selectin" at the model level to eager-load by default for every query:

from sqlalchemy.orm import Mapped, relationship
from typing import List

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    orders: Mapped[List["Order"]] = relationship(
        back_populates="user",
        lazy="selectin"  # Eager-load whenever User is queried
    )

For objects that need to remain accessible after session.commit(), set expire_on_commit=False:

from sqlalchemy.ext.asyncio import async_sessionmaker

async_session = async_sessionmaker(
    engine,
    expire_on_commit=False  # Attributes don't expire after commit
)

Without this, SQLAlchemy expires all attributes on commit, meaning any access after commit() triggers a fresh query — which fails on a closed session.

Fix 2: QueuePool Limit Exceeded — Connection Pool Exhausted

The default pool has pool_size=5 and max_overflow=10 — a maximum of 15 simultaneous connections. Under concurrency, if any code path holds a connection open longer than expected (slow query, missing with block, exception that skips cleanup), the pool drains.

Fix: Always use context managers to guarantee connection return:

# WRONG — if an exception occurs, session.close() may never run
session = Session(engine)
result = session.execute(select(User))
session.close()  # Never reached on exception

# CORRECT — context manager guarantees cleanup
with Session(engine) as session:
    result = session.execute(select(User))
    # Session closes and connection returns to pool here, even on exception

Async equivalent:

from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker

async_session = async_sessionmaker(engine)

async def get_users():
    async with async_session() as session:
        result = await session.execute(select(User))
        return result.scalars().all()
        # Connection returns to pool here

Tune pool size for your workload. For a FastAPI app with 10 concurrent workers, the default pool of 5+10 is usually fine. For high-concurrency APIs:

from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/db",
    pool_size=20,        # Permanent connections
    max_overflow=40,     # Burst capacity
    pool_pre_ping=True,  # Test connections before checkout (catches stale ones)
    pool_recycle=3600,   # Recycle connections after 1 hour
)

pool_pre_ping=True is important for MySQL and long-running services. MySQL closes idle connections after 8 hours (wait_timeout). pool_pre_ping sends a SELECT 1 before checkout — if the connection is dead, SQLAlchemy replaces it transparently instead of handing you a broken connection.

pool_recycle prevents connections from being used past the server’s idle timeout. Set it below your database server’s wait_timeout — for MySQL, this means pool_recycle=3600 (1 hour) is a safe default.

Pro Tip: To debug pool exhaustion in production, check how many connections are active before you hit the limit:

from sqlalchemy import event

@event.listens_for(engine.sync_engine, "checkout")
def log_pool_status(dbapi_conn, connection_record, connection_proxy):
    # QueuePool.status() reports pool size, checked-in, overflow,
    # and checked-out connection counts on every checkout
    print(engine.sync_engine.pool.status())

If the checked-out count approaches the pool limit during normal operation, either increase the pool size or audit for sessions that aren’t being closed.
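To see the failure mode in isolation, here is a sketch that deliberately exhausts a tiny pool. With pool_timeout=1 the second checkout fails fast with sqlalchemy.exc.TimeoutError instead of waiting the default 30 seconds:

```python
from sqlalchemy import create_engine
from sqlalchemy.exc import TimeoutError as PoolTimeoutError
from sqlalchemy.pool import QueuePool

# Deliberately tiny pool: 1 connection, no overflow, fail-fast timeout
engine = create_engine(
    "sqlite://",
    poolclass=QueuePool,
    pool_size=1,
    max_overflow=0,
    pool_timeout=1,
)

held = engine.connect()   # checks out the pool's only connection
exhausted = False
try:
    engine.connect()      # nothing left to hand out
except PoolTimeoutError:
    exhausted = True      # "QueuePool limit ... reached, connection timed out"
finally:
    held.close()

print(exhausted)  # True
```

Any code path that checks out a connection and never returns it behaves exactly like the `held` connection above, just spread across requests.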

Fix 3: MissingGreenlet — Sync ORM Calls in Async Context

This error means synchronous SQLAlchemy operations (a lazy load, a sync Session call) are hitting the async engine’s driver without the greenlet bridge in place. SQLAlchemy’s async layer uses greenlets to run the sync ORM internals on top of an async driver; drive that engine through the sync API directly and the bridge is missing.

# WRONG — sync Session on the async engine's sync facade
from sqlalchemy.orm import Session

async def get_users():
    session = Session(engine.sync_engine)  # sync execution path, no greenlet bridge
    users = session.execute(select(User)).scalars().all()  # MissingGreenlet here
    return users

Fix: Use AsyncSession and create_async_engine throughout:

from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker, AsyncSession

# Async engine — note the +asyncpg driver
engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")
async_session = async_sessionmaker(engine, expire_on_commit=False)

async def get_users():
    async with async_session() as session:
        result = await session.execute(select(User))
        return result.scalars().all()

Common Mistake: Using create_engine (sync) where an async engine is needed. The URL prefix matters: postgresql+asyncpg:// (or postgresql+psycopg:// with psycopg 3) for async. postgresql+psycopg2:// is sync only.

Even with AsyncSession, implicit lazy loading still triggers MissingGreenlet: the async ORM will not issue I/O from a plain attribute access, so every relationship you touch must be loaded eagerly. Use selectinload() in your queries or set lazy="raise" on relationships to surface the problem immediately during development:

class User(Base):
    __tablename__ = "users"
    orders: Mapped[List["Order"]] = relationship(
        back_populates="user",
        lazy="raise"  # Raises immediately if you forget selectinload()
    )

With lazy="raise", any unintentional lazy load raises an error at development time instead of silently working in sync code and breaking in async. For production, switch to lazy="selectin".

If you need to reload attributes on an already-loaded object inside an async session:

async def refresh_user(user_id: int):
    async with async_session() as session:
        user = await session.get(User, user_id)
        await session.refresh(user, ["orders"])  # Explicit reload
        return user.orders

For the async setup to work at all, install the async dependencies. See Python asyncio not running if you’re hitting event loop conflicts on top of this:

pip install "sqlalchemy[asyncio]"  # Installs greenlet
pip install asyncpg                # PostgreSQL async driver
# or
pip install aiosqlite              # SQLite async driver (for testing)

Fix 4: N+1 Queries — Lazy Loading in a Loop

This is a performance problem, not an exception. Your code works, but slowly. The symptom is hundreds of queries in your logs when you expected a handful.

Enable echo=True to see what’s happening:

engine = create_engine("postgresql+psycopg2://user:pass@localhost/db", echo=True)

If you’re loading 50 users and see 51 queries in the output, you have an N+1 problem.

The problematic pattern:

users = session.execute(select(User)).scalars().all()  # 1 query
for user in users:
    print(user.orders)  # 1 query per user = 50 more queries

Fix: Load the relationship up front with selectinload():

stmt = select(User).options(selectinload(User.orders))
users = session.execute(stmt).scalars().all()  # 2 queries total
for user in users:
    print(user.orders)  # Already in memory — no queries

Nested relationships (loading a user’s orders and each order’s items) chain the loaders:

stmt = (
    select(User)
    .options(
        selectinload(User.orders).selectinload(Order.items)
    )
)

This issues 3 queries total regardless of how many users, orders, or items exist.

contains_eager() for when you’re joining manually:

stmt = (
    select(User)
    .join(User.orders)
    .where(Order.status == "pending")
    .options(contains_eager(User.orders))
)

contains_eager() tells SQLAlchemy to populate the relationship from the JOIN you already wrote, rather than issuing a separate IN query.

Fix 5: IntegrityError — Constraint Violations and Session Recovery

When SQLAlchemy raises IntegrityError (unique violation, foreign key error, not-null violation), the session enters a pending rollback state. Any further operation on that session raises PendingRollbackError until you explicitly call session.rollback().

from sqlalchemy.exc import IntegrityError

try:
    user = User(email="[email protected]", name="Alice")
    session.add(user)
    session.commit()
except IntegrityError as e:
    session.rollback()  # Required — resets session state
    print(f"Duplicate entry: {e.orig}")
    # Session is now usable again

For batch inserts where you want to skip duplicates rather than abort, use begin_nested() to create a savepoint. An error rolls back to the savepoint, leaving the outer transaction intact:

from sqlalchemy import exc

with session.begin():
    for record in records:
        try:
            with session.begin_nested():  # Creates a savepoint
                session.add(User(**record))
        except exc.IntegrityError:
            pass  # Rolls back to savepoint, continues outer transaction
# All non-duplicate records committed here

PostgreSQL upsert — if your goal is insert-or-update rather than insert-or-skip:

from sqlalchemy.dialects.postgresql import insert as pg_insert

stmt = pg_insert(User).values(email="[email protected]", name="Alice")
stmt = stmt.on_conflict_do_update(
    index_elements=["email"],
    set_={"name": "Alice Updated"}
)
session.execute(stmt)
session.commit()

If you’re seeing IntegrityError from model validation rather than database constraints, check Python pydantic validation error — for APIs using SQLAlchemy models with Pydantic schemas, validation errors and constraint errors often stack.

Fix 6: Alembic Migration Failures

Target database is not up to date means your code has migration files that haven’t been applied to the database. Run:

alembic upgrade head

If you’re seeing this error on every application startup, your startup code is checking the database revision against head (or running migrations at boot) while the database is behind. To see exactly where the mismatch is:

alembic current   # What the database thinks its version is
alembic heads     # What the latest migration file is
alembic history   # Full migration chain

Autogenerate not detecting your changes is a common source of confusion. The most frequent cause is that env.py isn’t importing your model metadata:

# In alembic/env.py — this must import ALL your models
from myapp.models import Base  # Must reflect the actual module path

target_metadata = Base.metadata

If alembic revision --autogenerate produces an empty migration even though you added columns, it means env.py isn’t seeing your updated models. Import the module that defines them, not just Base.

Common Mistake: Generating a new migration without running alembic upgrade head first. Autogenerate diffs your models against the live database schema — if the database is behind, the diff re-emits changes that already exist in unapplied migration files, producing duplicate operations. Always run alembic upgrade head before alembic revision --autogenerate.

If you have multiple head revisions from branch merges:

alembic merge heads -m "merge branches"
alembic upgrade head

This creates a merge migration that resolves the fork.

Still Not Working?

PendingRollbackError on Every Request

sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back
due to a previous exception during flush. To begin a new transaction with this Session,
first issue Session.rollback().

This means an earlier exception wasn’t caught and the session wasn’t rolled back. Every subsequent operation on that session raises this error. The fix is to structure your session usage so exceptions always trigger a rollback — either explicitly in a try/except or by using context managers (with session.begin()).

For FastAPI, the standard pattern using a dependency handles this automatically:

from fastapi import Depends
from sqlalchemy.ext.asyncio import AsyncSession

async def get_session():
    async with async_session() as session:
        try:
            yield session
            await session.commit()
        except Exception:
            await session.rollback()
            raise

Async SQLAlchemy with asyncpg — cannot perform operation: another operation is in progress

An asyncpg connection can run only one query at a time, and an AsyncSession holds a single connection. This error surfaces when two queries run concurrently on the same session:

# WRONG — concurrent queries on same session
result1, result2 = await asyncio.gather(
    session.execute(select(User)),
    session.execute(select(Order))
)

Use a separate session (and therefore a separate pooled connection) per concurrent query. See Python async sync mix for patterns when you need to combine sync and async SQLAlchemy code.

SQLite database is locked with Multiple Writers

SQLite uses file-level locking: only one writer at a time. Concurrent async writers contend for the lock and fail with "database is locked". For production use, switch to PostgreSQL. For testing, use aiosqlite and serialize writes:

# Testing config — single writer
engine = create_async_engine(
    "sqlite+aiosqlite:///./test.db",
    connect_args={"check_same_thread": False}
)

For SQLite locking in production contexts, see SQLite database is locked for the full breakdown of WAL mode and concurrent reader patterns.

sqlalchemy.exc.OperationalError — Database Connection Dropped

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError)
SSL SYSCALL error: EOF detected

This means a pooled connection went stale: it sat idle longer than the server (or a proxy in between) allows, and the server closed it. Enable pool_pre_ping=True and set pool_recycle to less than your server’s idle timeout:

engine = create_async_engine(
    DATABASE_URL,
    pool_pre_ping=True,   # Detects dead connections before use
    pool_recycle=1800,    # Recycle after 30 minutes
)

If you’re seeing this specifically after deployments or database restarts, see PostgreSQL connection refused — the connection refused error is different from a dropped connection mid-operation but has overlapping pool fixes.

Checking SQLAlchemy Version Compatibility

SQLAlchemy 2.0 introduced breaking changes from 1.x. If you’re seeing AttributeError: 'Engine' object has no attribute 'execute' or similar, you may be mixing 1.x and 2.x syntax. Check your version:

pip show sqlalchemy

SQLAlchemy 2.x requires the select() construct and session.execute() instead of session.query(). The legacy Query API still works in 2.x under session.query() but is deprecated. Migrate to the new-style API:

# Old style (SQLAlchemy 1.x, deprecated in 2.x):
users = session.query(User).filter(User.active == True).all()

# New style (SQLAlchemy 2.x):
users = session.execute(select(User).where(User.active == True)).scalars().all()

FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
