Asynchronous Programming
Harness the full potential of FastAPI's asynchronous capabilities to build high-performance applications.
Async Database Operations in FastAPI — The Slightly Reckless, Very Practical Guide
"Asynchronous endpoints are awesome — until your database laughs and says, 'I only speak blocking.'" — Your future debugging self
You already know how to use async/await and how concurrency in FastAPI lets many requests mingle without crashing the party. Now let’s give your database the VIP treatment it deserves: true non-blocking DB operations, correct dependency wiring, testing tactics, and the things that will bite you at 3 AM if ignored.
Why this matters (and why your CPU wants it)
If your endpoint is async but your DB client is synchronous, each DB call blocks a worker thread and defeats much of the async benefit. That means less throughput, higher latency, and a sad developer. Use async DB drivers (or deliberately wrap sync calls) so your event loop stays free to handle other requests.
This follows naturally from our previous lessons on async/await and concurrency in FastAPI: if we let blocking I/O sit in the event loop, it's like asking a bouncer to read War and Peace while the club fills up.
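A stdlib-only sketch makes the cost concrete, no FastAPI or database required: five "requests" that block the loop run one after another, while five that await run overlapped.

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.2)  # sync I/O: freezes the whole event loop

async def friendly_handler():
    await asyncio.sleep(0.2)  # yields control; other tasks keep running

async def serve(handler, n=5):
    # simulate n concurrent requests hitting the same endpoint
    start = time.perf_counter()
    await asyncio.gather(*(handler() for _ in range(n)))
    return time.perf_counter() - start

blocked = asyncio.run(serve(blocking_handler))     # ~1.0 s: serialized
overlapped = asyncio.run(serve(friendly_handler))  # ~0.2 s: concurrent
```

Swap `time.sleep` for a sync DB call and `asyncio.sleep` for an async driver call and this is exactly the trade-off your endpoints make.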
Big picture: options and trade-offs
| Approach | Driver/Tools | Pros | Cons |
|---|---|---|---|
| Native async DB | asyncpg, SQLAlchemy (1.4+ async), databases, Tortoise | True non-blocking, better throughput | Slightly different APIs; some ORMs lag features |
| Sync DB in threadpool | SQLAlchemy (sync), Django ORM with run_in_threadpool | Drop-in for existing code | Thread overhead, connection pool complexity |
| Hybrid | Use async clients for most, run small sync work in threadpool | Pragmatic | Complexity, easy to make mistakes |
Practical patterns (copy-paste, but read the comments)
1) Use an async DB client when possible (SQLAlchemy + asyncpg example)
```python
# db.py
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker

DATABASE_URL = "postgresql+asyncpg://user:pass@localhost/db"

engine = create_async_engine(DATABASE_URL, pool_size=20)
AsyncSessionLocal = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)

async def get_db():
    async with AsyncSessionLocal() as session:
        yield session
```

```python
# in routes
from sqlalchemy import select
from fastapi import Depends

@app.get('/items')
async def read_items(db: AsyncSession = Depends(get_db)):
    result = await db.execute(select(Item))
    items = result.scalars().all()
    return items
```
Notes:
- Use `expire_on_commit=False` to avoid lazy loads that hit the DB after the session closes.
- Use `selectinload`/`joinedload` to avoid N+1 queries.
2) If you must use a sync ORM, isolate it from the event loop
```python
from starlette.concurrency import run_in_threadpool

def query_sync(session, query):
    return session.execute(query).fetchall()

@app.get('/legacy')
async def legacy_route():
    rows = await run_in_threadpool(query_sync, sync_session, "SELECT ...")
    return rows
```
This is a stopgap. Threadpool protects the loop, but every blocking call consumes a worker thread.
3) Short-lived sessions per request + dependency injection
Always create sessions per request (dependency) and close them properly. Don’t stash request-scoped DB sessions into globals or background tasks.
If a background task needs DB access, create a fresh session for it.
Transactions, timeouts, and cancellations (the important details)
- Use explicit transactions for multi-step changes: `async with session.begin(): ...`
- Set timeouts for long queries (DB or driver-level). Unbounded queries will block connections and starve the pool.
- Handle cancellations: if a request is cancelled, the coroutine may raise `asyncio.CancelledError`. Your DB driver should support cancellation; otherwise connections may hang or leak.
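Two layers help here. Server-side, asyncpg lets you set PostgreSQL's `statement_timeout` when the engine connects (commented out below, since it needs a real database; 5000 ms is illustrative). Client-side, a plain `asyncio` timeout cancels the coroutine, which is exactly the `CancelledError` path described above:

```python
import asyncio

# server-side guard for PostgreSQL via asyncpg (value in ms; illustrative):
# engine = create_async_engine(
#     DATABASE_URL,
#     connect_args={"server_settings": {"statement_timeout": "5000"}},
# )

async def slow_query():
    await asyncio.sleep(10)  # stand-in for an unbounded query

async def guarded():
    try:
        await asyncio.wait_for(slow_query(), timeout=0.1)
    except asyncio.TimeoutError:
        # slow_query received CancelledError internally; release resources here
        return "timed out"

result = asyncio.run(guarded())
```

The server-side limit is the real safety net: it frees the connection even if the client vanished without cancelling anything.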
Example transaction:
```python
async def transfer_money(db: AsyncSession, a_id, b_id, amount):
    async with db.begin():
        a = await db.get(Account, a_id)
        b = await db.get(Account, b_id)
        a.balance -= amount
        b.balance += amount
```
Performance tips (so it's not just correct — it's fast)
- Batch writes when possible instead of many tiny inserts.
- Use prepared statements / parameterized queries (asyncpg does this well).
- Avoid N+1 queries: use eager loading (`selectinload`) or explicit joins.
- Tune the DB pool size for your workload. Small pool + many concurrent requests = queueing.
Testing async DB operations (builds on your testing practices)
You already created tests for endpoints. Now make them async-aware and DB-safe.
Checklist:
- Use pytest-asyncio or any async test runner.
- Use httpx.AsyncClient to call your app in tests.
- Use a transactional fixture to rollback DB changes after each test, or use a disposable test database (Docker) seeded per run.
- For SQLAlchemy async, use nested transactions / SAVEPOINTs to rollback cleanly.
Example fixtures:
```python
# conftest.py
import pytest
from httpx import AsyncClient
from myapp.main import app
from myapp.db import AsyncSessionLocal

# Async fixtures need pytest-asyncio; in its default strict mode,
# decorate them with @pytest_asyncio.fixture instead of @pytest.fixture.

@pytest.fixture
async def async_client():
    async with AsyncClient(app=app, base_url='http://test') as c:
        yield c

@pytest.fixture
async def db_session():
    async with AsyncSessionLocal() as session:
        async with session.begin_nested():
            yield session
            # roll back to the SAVEPOINT so each test starts clean
            await session.rollback()
```
Test example:
```python
@pytest.mark.asyncio
async def test_create_and_read(async_client, db_session):
    # seed data through db_session, or create it via app endpoints
    res = await async_client.get('/items')
    assert res.status_code == 200
```
Caveats:
- SQLite in-memory with async drivers can be tricky: a single connection may be required for the lifetime of the test process. Prefer a real test DB in CI.
- Ensure migrations (Alembic) are run against the test DB schema or use a fixture to create models.
Debugging checklist (when things go sideways)
- Are you accidentally calling blocking code in an async endpoint? Search for heavy CPU or sync DB calls.
- Check connection pool exhaustion: are you leaking sessions (not closing) or the pool size too small?
- Are transactions left open due to exceptions? Use `async with session.begin()` to be safe.
- Do cancellation traces show the DB driver waiting? That hints at driver-level cancellation issues.
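For the first checklist item, asyncio's debug mode can catch blocking calls for you: it logs a warning whenever one task step holds the loop longer than `slow_callback_duration`. A stdlib-only sketch that captures that warning (the thresholds are illustrative):

```python
import asyncio
import logging
import time

records = []

class Capture(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

log = logging.getLogger("asyncio")
log.addHandler(Capture())
log.setLevel(logging.WARNING)

async def handler():
    time.sleep(0.2)  # simulated sync DB call inside an async endpoint

async def main():
    asyncio.get_running_loop().slow_callback_duration = 0.05  # default 0.1 s
    await handler()

asyncio.run(main(), debug=True)
slow = [m for m in records if "took" in m]
```

For pool exhaustion, SQLAlchemy's `engine.pool.status()` returns a one-line summary of checked-out vs. available connections, which is handy to log when requests start queueing.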
Quick reference: do this, not that
- Do: Use async DB driver (asyncpg, SQLAlchemy async) for production throughput.
- Do: Provide the session via a FastAPI dependency with `async with` and `yield`.
- Do: Create fresh sessions for background tasks.
- Don't: Call sync DB code directly in an async endpoint without run_in_threadpool.
- Don't: Reuse request-scoped session across threads or background tasks.
Final thought (because you deserve a mic-drop)
Async endpoints without async DB calls are like a Tesla idling in traffic — fancy tech doing nothing. Use async DB drivers, manage sessions per request, test with async clients and transactional rollbacks, and keep an eye on connection pools and cancellations. Do this, and your app will actually behave the way you promised your stakeholders it would.
"Write async like you mean it — especially when your DB is the bottleneck."
Summary of key takeaways:
- Prefer native async DB drivers when possible.
- Use per-request async sessions as FastAPI dependencies.
- If forced to use sync drivers, isolate with run_in_threadpool and be mindful of thread pools.
- Test using pytest-asyncio + httpx AsyncClient and transactional fixtures or disposable test DBs.
- Watch connection pools, transactions, and cancellation behavior.
Version note: This builds on our earlier async/await and concurrency lessons and extends the testing strategies from "Testing FastAPI Applications" to include async DB-specific fixtures and patterns.