Fix: MySQL Slow Query — Diagnosis and Optimization with EXPLAIN
Quick Answer
How to diagnose and fix slow MySQL queries — enabling the slow query log, reading EXPLAIN output, adding indexes, fixing N+1 queries, and optimizing JOINs and ORDER BY.
The Problem
A MySQL query runs much slower than expected:
SELECT * FROM orders WHERE customer_id = 42;
-- Takes 12 seconds on a table with 5 million rows

Or an API endpoint shows the database query is the bottleneck:

GET /api/orders?customer=42 — 12,340ms
DB query: SELECT * FROM orders WHERE customer_id = 42 — 11,980ms

Or a query was fast in development but slow in production with real data:

Works fine locally (100 rows) → 15 seconds in production (5M rows)

Why This Happens
Slow MySQL queries almost always come down to a few root causes:
- Full table scan (no index) — MySQL reads every row to find matching records. A table with 5 million rows scanned for a single value is inherently slow.
- Wrong index or no index on JOIN/WHERE columns — a column appears in a `WHERE`, `JOIN ON`, or `ORDER BY` clause but has no index, forcing a sequential scan.
- N+1 query pattern — the application makes one query to fetch N records, then N more queries to fetch related data for each, resulting in hundreds or thousands of round trips.
- Selecting more data than needed — `SELECT *` fetches all columns, including large TEXT/BLOB columns that aren't used. Fetching unnecessary columns increases I/O.
- Filesort on large result sets — `ORDER BY` on a column without an index causes MySQL to sort in memory (or on disk), which is slow for large datasets.
- Outdated table statistics — MySQL's query optimizer uses statistics to choose query plans. Stale statistics lead to poor plan choices, such as a full table scan when an index would be faster.
- Implicit type conversion — comparing a VARCHAR column to an integer causes MySQL to cast every value and ignore the index.
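The full-scan and missing-index causes are easy to reproduce end to end. Below is a minimal sketch using Python's built-in sqlite3 as a stand-in for MySQL (both expose a query plan; the table and index names are illustrative), showing the same query flip from a full scan to an index search once an index exists:

```python
import sqlite3

# In-memory database standing in for MySQL; the plan-reading idea is the same
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN is SQLite's rough equivalent of MySQL's EXPLAIN
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)   # last column holds the plan detail

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)   # no index yet -> full table scan ("SCAN ...")

conn.execute("CREATE INDEX idx_orders_customer_id ON orders (customer_id)")
after = plan(query)    # now an index seek ("SEARCH ... USING INDEX ...")

print(before)
print(after)
```

The same diagnostic loop applies to MySQL: run EXPLAIN, change the schema, run EXPLAIN again and confirm the plan actually changed.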
Step 1: Enable and Check the Slow Query Log
The slow query log records queries that exceed a time threshold. Enable it to find the worst offenders:
-- Check current slow query log settings
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';
-- Enable slow query log (session or global)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1; -- Log queries taking > 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
-- Also log queries that don't use indexes
SET GLOBAL log_queries_not_using_indexes = 'ON';

Or enable it permanently in my.cnf:
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1

Analyze the slow query log with mysqldumpslow:
# Top 10 slowest queries
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log
# Top 10 most frequent slow queries
mysqldumpslow -s c -t 10 /var/log/mysql/slow.log
# Queries taking more than 5 seconds
mysqldumpslow -t 10 -l 5 /var/log/mysql/slow.log

Use pt-query-digest (Percona Toolkit) for richer analysis:

pt-query-digest /var/log/mysql/slow.log | head -100

Fix 1: Read EXPLAIN Output
EXPLAIN shows how MySQL executes a query — which indexes it uses, how many rows it scans, and which join type it applies:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

Sample output (bad — full table scan):
+----+-------------+--------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+------+---------------+------+---------+------+---------+-------------+
| 1 | SIMPLE | orders | ALL | NULL | NULL | NULL | NULL | 4987234 | Using where |
+----+-------------+--------+------+---------------+------+---------+------+---------+-------------+

Key columns to interpret:
| Column | What to look for |
|---|---|
| `type` | `ALL` = full table scan (bad). `ref`, `eq_ref`, `range` = index used (good). `const` = single-row lookup (best). |
| `key` | The index MySQL chose. `NULL` means no index was used. |
| `rows` | Estimated rows scanned. A high count with few result rows signals inefficiency. |
| `Extra` | `Using filesort` = slow sort without an index. `Using temporary` = temp table created. `Using index` = index-only scan (fast). |
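The heuristics in this table can be wrapped in a small triage helper. This is a sketch that takes one EXPLAIN row as a plain dict, the way a DictCursor-style driver would return it; the warning texts are mine, not MySQL's:

```python
def explain_warnings(row):
    """Return human-readable warnings for one EXPLAIN output row (as a dict)."""
    warnings = []
    if row.get("type") == "ALL":
        warnings.append("full table scan — consider an index on the filter columns")
    if row.get("key") is None:
        warnings.append("no index used")
    extra = row.get("Extra") or ""
    if "Using filesort" in extra:
        warnings.append("filesort — sort column is not covered by an index")
    if "Using temporary" in extra:
        warnings.append("temporary table created")
    return warnings

# The bad plan from above: type=ALL, key=NULL
bad = {"type": "ALL", "key": None, "rows": 4987234, "Extra": "Using where"}
good = {"type": "ref", "key": "idx_orders_customer_id", "rows": 150, "Extra": None}
print(explain_warnings(bad))
print(explain_warnings(good))
```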
After adding an index:
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

+----+-------------+--------+------+------------------------+------------------------+---------+-------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+------+------------------------+------------------------+---------+-------+------+-------+
| 1 | SIMPLE | orders | ref | idx_orders_customer_id | idx_orders_customer_id | 4 | const | 150 | NULL |
+----+-------------+--------+------+------------------------+------------------------+---------+-------+------+-------+

`type: ref` and `rows: 150` — MySQL now scans 150 rows instead of 5 million.
Use EXPLAIN FORMAT=JSON for detailed cost estimates:
EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 42\G

Use EXPLAIN ANALYZE (MySQL 8.0.18+) for actual execution times alongside estimates:
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
-- Shows actual rows, actual loops, and actual time per operation

Fix 2: Add the Right Indexes
Different query patterns need different index strategies:
Equality lookup — single column:
-- Slow: full scan
SELECT * FROM users WHERE email = '[email protected]';
-- Add index on the filter column
CREATE INDEX idx_users_email ON users (email);
-- Or for unique values:
CREATE UNIQUE INDEX idx_users_email ON users (email);

Composite index — multiple filter columns:
-- Query filters on status AND created_at
SELECT * FROM orders WHERE status = 'pending' AND created_at > NOW() - INTERVAL 7 DAY;
-- Order: equality columns first, range columns last
CREATE INDEX idx_orders_status_created ON orders (status, created_at);
-- MySQL can use this index for both the equality filter and the range filter

Common Mistake: Putting the range column first in a composite index. MySQL can use a composite index for prefix columns, but stops using it at the first range condition. `(status, created_at)` works for `WHERE status = 'x' AND created_at > y`; `(created_at, status)` only uses the index for `created_at`, not for the equality filter on `status`.
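To see why the ordering rule holds, picture the index as what it is: a sorted list of column tuples, usable only while the matching rows stay contiguous. A small model with Python's bisect (no database involved; the data is invented for illustration) shows that (status, created_at) keeps the target rows in one contiguous run, while (created_at, status) scatters them:

```python
from bisect import bisect_left, bisect_right

rows = [("pending", d) for d in (1, 5, 9)] + [("shipped", d) for d in (2, 5, 8)]

# Index (status, created_at): sorted by status first, then created_at.
# All rows with status='pending' AND created_at > 4 form one contiguous run,
# so a B-tree can seek once and scan forward.
idx1 = sorted((status, created) for status, created in rows)
lo = bisect_right(idx1, ("pending", 4))   # first pending row with created_at > 4
hi = bisect_left(idx1, ("shipped",))      # end of the 'pending' run
matches1 = idx1[lo:hi]

# Index (created_at, status): rows matching the same predicate are
# interleaved with 'shipped' rows — no single contiguous run to seek to.
idx2 = sorted((created, status) for status, created in rows)
positions = [i for i, (created, status) in enumerate(idx2)
             if status == "pending" and created > 4]

print(matches1)    # one contiguous slice of the first index
print(positions)   # scattered positions in the second ordering
```

In a real B-tree the contiguous run is what lets MySQL do one seek plus a short range scan; in the scattered case it falls back to scanning and filtering.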
Covering index — all needed columns in the index:
-- Query only needs id, status, total
SELECT id, status, total FROM orders WHERE customer_id = 42;
-- Covering index includes all selected columns
CREATE INDEX idx_orders_customer_covering ON orders (customer_id, status, total);
-- EXPLAIN Extra: "Using index" — no table row access at all

Partial index (prefix index for TEXT/VARCHAR):
-- Index only the first 20 characters of a long URL column
CREATE INDEX idx_posts_url ON posts (url(20));
-- Useful when full-column indexing is too large

Remove redundant indexes. Multiple overlapping indexes waste write performance:
-- Find duplicate/redundant indexes
SELECT * FROM sys.schema_redundant_indexes;
-- Find unused indexes (MySQL 8.0 with performance_schema)
SELECT * FROM sys.schema_unused_indexes;

Fix 3: Fix N+1 Query Problems
N+1 is the most common application-level performance issue. It’s invisible in EXPLAIN because each individual query is fast — the problem is volume:
# N+1 in Python — 1 query + 100 separate queries = 101 total
customers = db.execute("SELECT * FROM customers LIMIT 100").fetchall()
for customer in customers:
    orders = db.execute(
        "SELECT * FROM orders WHERE customer_id = %s", (customer['id'],)
    ).fetchall()
    customer['orders'] = orders

Fix — batch the second query:
# 2 queries total — fetch all related records at once
customers = db.execute("SELECT * FROM customers LIMIT 100").fetchall()
customer_ids = [c['id'] for c in customers]

# Use IN to fetch all orders in one query
placeholders = ','.join(['%s'] * len(customer_ids))
orders = db.execute(
    f"SELECT * FROM orders WHERE customer_id IN ({placeholders})",
    customer_ids
).fetchall()

# Group orders by customer_id in Python
from collections import defaultdict
orders_by_customer = defaultdict(list)
for order in orders:
    orders_by_customer[order['customer_id']].append(order)
for customer in customers:
    customer['orders'] = orders_by_customer[customer['id']]

Or use a JOIN:
SELECT c.id, c.name, o.id AS order_id, o.total, o.status
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
LIMIT 100;

Note that the LIMIT applies to joined rows, not distinct customers — wrap the customer query in a subquery if you need exactly 100 customers.

In SQLAlchemy:
# N+1 — accessing relationship without eager loading
customers = session.query(Customer).limit(100).all()
for c in customers:
    print(c.orders)  # Each access fires a new query

# Fix — eager load with joinedload
from sqlalchemy.orm import joinedload
customers = (
    session.query(Customer)
    .options(joinedload(Customer.orders))
    .limit(100)
    .all()
)
# One query with JOIN — no extra queries when accessing c.orders

Detect N+1 with query logging. Enable the MySQL general query log and look for repeated queries with different parameter values:
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
-- Run your application, then check the log for repeated patterns
SET GLOBAL general_log = 'OFF';

Fix 4: Fix Slow ORDER BY and GROUP BY
Sorting without an index causes a filesort — MySQL sorts all matching rows in memory or on disk:
EXPLAIN SELECT * FROM posts ORDER BY created_at DESC LIMIT 20;
-- Extra: Using filesort ← slow for large tables

Add an index that matches the sort order:
CREATE INDEX idx_posts_created_at ON posts (created_at DESC);
-- Note: descending index keys are honored in MySQL 8.0+; older versions parse
-- DESC but ignore it (an ascending index can still be scanned backward)
EXPLAIN SELECT * FROM posts ORDER BY created_at DESC LIMIT 20;
-- Extra: NULL ← index used for ordering, no filesort

For filtered + sorted queries, the index must cover both:
-- Query: filter on status, sort by created_at
SELECT * FROM posts WHERE status = 'published' ORDER BY created_at DESC LIMIT 20;
-- Index must include both columns in the right order
CREATE INDEX idx_posts_status_created ON posts (status, created_at DESC);
-- Now MySQL uses the index for both the WHERE and ORDER BY

Avoid ORDER BY RAND() — it's extremely slow on large tables because it assigns a random value to every row, then sorts all of them:
-- SLOW — scans entire table, assigns random value, sorts
SELECT * FROM posts ORDER BY RAND() LIMIT 5;
-- FASTER — jump to a random point via the primary key (note: returns consecutive
-- rows and assumes ids without large gaps, so the sample is only roughly random)
SELECT * FROM posts
WHERE id >= (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM posts)))
ORDER BY id
LIMIT 5;

For GROUP BY queries, ensure the grouped column is indexed:
-- Slow GROUP BY — no index on category
SELECT category, COUNT(*) FROM products GROUP BY category;
-- Add index
CREATE INDEX idx_products_category ON products (category);

Fix 5: Optimize Pagination — Avoid Large OFFSET
OFFSET for pagination forces MySQL to read and discard all rows before the offset position:
-- SLOW — must read and discard 100,000 rows
SELECT * FROM posts ORDER BY id DESC LIMIT 20 OFFSET 100000;

Use keyset (cursor-based) pagination instead:
-- First page
SELECT * FROM posts ORDER BY id DESC LIMIT 20;
-- Returns rows with IDs e.g. 5000 down to 4981
-- Next page — use the last seen ID as cursor
SELECT * FROM posts WHERE id < 4981 ORDER BY id DESC LIMIT 20;
-- MySQL uses the index on id — no rows discarded

For multi-column sorts, include all sort columns in the cursor:
-- Sort by created_at DESC, then id DESC for tie-breaking
SELECT * FROM posts
WHERE (created_at, id) < ($last_created_at, $last_id)
ORDER BY created_at DESC, id DESC
LIMIT 20;
-- Index: (created_at DESC, id DESC)
CREATE INDEX idx_posts_cursor ON posts (created_at DESC, id DESC);

Real-world scenario: An e-commerce site with 500,000 products used OFFSET pagination. Page 1000 (OFFSET 20000) took 2 seconds. Switching to keyset pagination reduced it to 8ms — the index seek takes the same time regardless of page depth.
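The keyset pattern is easy to verify end to end. Here is a sketch using Python's built-in sqlite3 as a stand-in for MySQL (the SQL is the same for this query shape), paging through 100 posts, 20 at a time, with the last seen id as the cursor:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (id, title) VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(1, 101)])

def fetch_page(last_id=None, page_size=20):
    """Keyset pagination: seek past the cursor instead of discarding OFFSET rows."""
    if last_id is None:
        sql, args = "SELECT id FROM posts ORDER BY id DESC LIMIT ?", (page_size,)
    else:
        sql, args = ("SELECT id FROM posts WHERE id < ? ORDER BY id DESC LIMIT ?",
                     (last_id, page_size))
    return [row[0] for row in conn.execute(sql, args)]

pages = []
cursor = None
while True:
    page = fetch_page(cursor)
    if not page:
        break
    pages.append(page)
    cursor = page[-1]       # last seen id becomes the next cursor

print([p[0] for p in pages])   # first id of each page
```

Every page is a pure index seek on the primary key, so the fiftieth page costs the same as the first.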
Fix 6: Fix Implicit Type Conversion
When the data type of a query parameter doesn’t match the column type, MySQL casts every row and ignores the index:
-- Column: user_id VARCHAR(20), indexed
-- Query passes an integer — type mismatch!
SELECT * FROM sessions WHERE user_id = 12345;
-- EXPLAIN type: ALL — index ignored due to type conversion
-- Fix: match the column type
SELECT * FROM sessions WHERE user_id = '12345';
-- EXPLAIN type: ref — index used

Check column types before writing queries:
DESCRIBE sessions;
-- user_id: varchar(20)
-- → Always pass strings for varchar columns

With ORMs, ensure parameter types match:
# SQLAlchemy — Python int vs VARCHAR column
# Wrong
session.query(Session).filter(Session.user_id == 12345)
# Correct
session.query(Session).filter(Session.user_id == str(12345))

Fix 7: Use performance_schema for Real-Time Monitoring
MySQL’s performance_schema shows which queries are consuming the most resources right now:
-- Find the slowest queries currently running
SELECT * FROM information_schema.PROCESSLIST
WHERE TIME > 5
ORDER BY TIME DESC;
-- Kill a specific slow query
KILL QUERY <process_id>;
-- Find queries with most total execution time (performance_schema)
SELECT
SCHEMA_NAME,
DIGEST_TEXT,
COUNT_STAR AS calls,
ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_seconds,
ROUND(AVG_TIMER_WAIT / 1e9, 2) AS avg_ms
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;

Check table I/O hotspots:
SELECT
OBJECT_SCHEMA,
OBJECT_NAME,
COUNT_READ,
COUNT_WRITE,
ROUND(SUM_TIMER_READ / 1e12, 2) AS read_seconds,
ROUND(SUM_TIMER_WRITE / 1e12, 2) AS write_seconds
FROM performance_schema.table_io_waits_summary_by_table
ORDER BY SUM_TIMER_READ + SUM_TIMER_WRITE DESC
LIMIT 10;

Use sys schema shortcuts (MySQL 5.7+):
-- Top 10 slowest statements
SELECT * FROM sys.statement_analysis ORDER BY avg_latency DESC LIMIT 10;
-- Tables with most full table scans
SELECT * FROM sys.schema_tables_with_full_table_scans LIMIT 10;
-- Index usage stats
SELECT * FROM sys.schema_index_statistics ORDER BY rows_selected DESC LIMIT 20;

Still Not Working?
Force MySQL to use a specific index to test whether the index is actually helping:
-- Restrict the optimizer to this index (FORCE INDEX is the stronger variant)
SELECT * FROM orders USE INDEX (idx_orders_customer_id) WHERE customer_id = 42;
-- Ignore a specific index (test without it)
SELECT * FROM orders IGNORE INDEX (idx_orders_customer_id) WHERE customer_id = 42;

Update table statistics if the query planner makes poor choices:
ANALYZE TABLE orders;
-- Forces MySQL to recompute statistics for better plan decisions

Check if the table needs optimization after many deletes/updates (fragmentation):
OPTIMIZE TABLE orders;
-- Rebuilds the table, reclaims space, updates statistics
-- Note: locks the table during optimization on MyISAM; InnoDB uses an online rebuild

Increase the InnoDB buffer pool for frequently accessed data. If the buffer pool is too small, MySQL reads from disk on every query:
# my.cnf — set to 70-80% of available RAM
[mysqld]
innodb_buffer_pool_size = 4G

-- Check buffer pool hit rate (should be > 99%)
SELECT
(1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)) * 100
AS buffer_pool_hit_rate
FROM (
SELECT
SUM(VARIABLE_VALUE) AS Innodb_buffer_pool_reads
FROM performance_schema.global_status
WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads'
) reads,
(
SELECT
SUM(VARIABLE_VALUE) AS Innodb_buffer_pool_read_requests
FROM performance_schema.global_status
WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests'
) read_requests;

Check for lock contention if queries are fast alone but slow under concurrent load:
-- Show current locks and waiting queries (MySQL 5.7 and earlier; removed in 8.0)
SELECT * FROM information_schema.INNODB_LOCKS;
SELECT * FROM information_schema.INNODB_LOCK_WAITS;
-- MySQL 8.0+
SELECT * FROM performance_schema.data_locks;
SELECT * FROM performance_schema.data_lock_waits;

For related database issues, see Fix: PostgreSQL Slow Query, Fix: PostgreSQL Deadlock Detected, and Fix: PostgreSQL Max Connections Exceeded.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.