
Fix: Python subprocess Not Working — Output Empty, Command Not Found, or Permission Denied

FixDevs

Quick Answer

Pass capture_output=True, text=True to subprocess.run() to get stdout/stderr as strings, use sys.executable or absolute paths to sidestep PATH differences, set timeout= so long commands can't hang, and never use shell=True with user-supplied input.

The Problem

subprocess.run() runs but the output is empty or None:

import subprocess

result = subprocess.run(['ls', '-la'])
print(result.stdout)  # None

Or the command isn’t found even though it works in the terminal:

result = subprocess.run(['python', 'script.py'])
# FileNotFoundError: [Errno 2] No such file or directory: 'python'
# Works fine in the terminal — why not in subprocess?

Or the command succeeds but stdout and stderr are mixed or lost:

result = subprocess.run(['npm', 'install'], capture_output=True, text=True)
print(result.stdout)  # Empty — output went to stderr

Or a long-running command hangs the Python process indefinitely:

result = subprocess.run(['ffmpeg', '-i', 'large_video.mp4', 'out.mp4'])
# Hangs forever — no timeout

Why This Happens

subprocess.run() has deliberately conservative defaults that often don't match what you expect:

  • Output not captured by default — without capture_output=True or stdout=PIPE, output goes directly to the terminal and result.stdout is None.
  • text=False by default — without text=True (or encoding=), captured output is bytes, not str.
  • shell=False by default — commands are executed directly, not through a shell. Shell builtins like cd, source, and pipe operators (|, &&) don’t work without shell=True.
  • PATH differences — the Python process may have a different PATH than your interactive shell, especially inside virtual environments, Docker, or cron jobs.
  • stderr is separate — stdout and stderr are captured independently. If the command writes to stderr, result.stdout is still empty.
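
The first two defaults are easy to see directly. A minimal sketch (assuming a Unix echo on PATH):

```python
import subprocess

# No capture: output goes straight to the terminal, result.stdout is None
plain = subprocess.run(['echo', 'hi'])
print(plain.stdout)        # None

# Captured but not decoded: bytes by default
raw = subprocess.run(['echo', 'hi'], capture_output=True)
print(type(raw.stdout))    # <class 'bytes'>

# Captured and decoded: text=True gives str
text = subprocess.run(['echo', 'hi'], capture_output=True, text=True)
print(type(text.stdout))   # <class 'str'>
```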

Fix 1: Capture Output Correctly

Always pass capture_output=True, text=True when you need the output as a string:

import subprocess

# WRONG — stdout goes to terminal, result.stdout is None
result = subprocess.run(['ls', '-la'])
print(result.stdout)  # None

# CORRECT — capture stdout and stderr as text
result = subprocess.run(
    ['ls', '-la'],
    capture_output=True,
    text=True
)
print(result.stdout)   # Directory listing as string
print(result.stderr)   # Any errors
print(result.returncode)  # 0 for success

# Equivalent explicit form (pre-Python 3.7):
result = subprocess.run(
    ['ls', '-la'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    encoding='utf-8'
)

Capture both stdout and stderr together:

# Merge stderr into stdout
result = subprocess.run(
    ['npm', 'install'],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # Redirect stderr to stdout
    text=True
)
print(result.stdout)  # Contains both stdout and stderr

Check return code and raise on failure:

# check=True raises CalledProcessError if return code != 0
try:
    result = subprocess.run(
        ['git', 'pull'],
        capture_output=True,
        text=True,
        check=True  # Raises if command fails
    )
    print(result.stdout)
except subprocess.CalledProcessError as e:
    print(f"Command failed with exit code {e.returncode}")
    print(f"stderr: {e.stderr}")

Fix 2: Handle PATH and Environment Issues

The subprocess inherits the parent process’s environment, which may differ from your interactive shell:

import subprocess
import os
import sys

# WRONG — 'python' may not be in PATH or points to wrong version
result = subprocess.run(['python', 'script.py'], capture_output=True, text=True)

# CORRECT — use sys.executable to run the same Python interpreter
result = subprocess.run(
    [sys.executable, 'script.py'],
    capture_output=True,
    text=True
)

# CORRECT — use full path when you know where the binary is
result = subprocess.run(
    ['/usr/local/bin/node', 'server.js'],
    capture_output=True,
    text=True
)

# Debug PATH issues — print what subprocess sees
result = subprocess.run(
    ['env'],  # prints the environment the child actually sees
    capture_output=True,
    text=True
)
print(result.stdout)

Pass a custom environment:

import os

# Extend the current environment with extra variables
env = os.environ.copy()
env['MY_VAR'] = 'my_value'
env['PATH'] = f"/custom/bin:{env['PATH']}"

result = subprocess.run(
    ['my_command'],
    capture_output=True,
    text=True,
    env=env
)

Set the working directory:

result = subprocess.run(
    ['npm', 'run', 'build'],
    capture_output=True,
    text=True,
    cwd='/path/to/project'  # Run in this directory
)

Fix 3: Use shell=True Correctly (and Safely)

shell=True runs the command through /bin/sh, enabling shell features like pipes, redirection, and glob expansion — but it introduces security risks:

# shell=True — enables shell features
result = subprocess.run(
    'ls -la | grep ".py" | wc -l',
    shell=True,
    capture_output=True,
    text=True
)

# Same pipeline with the shell named explicitly (bash instead of the default /bin/sh)
result = subprocess.run(
    ['bash', '-c', 'ls -la | grep ".py" | wc -l'],
    capture_output=True,
    text=True
)

Security warning — never use shell=True with user input:

# DANGEROUS — command injection vulnerability
user_input = "file.txt; rm -rf /"
result = subprocess.run(f"cat {user_input}", shell=True)  # Executes rm -rf /

# SAFE — pass user input as list argument (no shell injection possible)
result = subprocess.run(['cat', user_input], capture_output=True, text=True)

Warning: Only use shell=True when the command string is fully under your control and doesn’t include any user-supplied input.
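
If you genuinely must build a shell string around variable input, shlex.quote() from the standard library escapes it so the shell treats it as a single literal argument. A sketch (the malicious filename is this example's own assumption):

```python
import shlex
import subprocess

user_input = "file.txt; rm -rf /"

# Quoted, the whole string becomes one argument to cat --
# the shell never interprets the ';' as a command separator.
cmd = f"cat {shlex.quote(user_input)}"
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
print(result.returncode)  # Non-zero: no file is literally named "file.txt; rm -rf /"
```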

When you actually need shell features, use pipes explicitly:

import subprocess

# Instead of: "ps aux | grep python"
ps = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE)
grep = subprocess.Popen(
    ['grep', 'python'],
    stdin=ps.stdout,
    stdout=subprocess.PIPE,
    text=True
)
ps.stdout.close()  # Allow ps to receive SIGPIPE if grep exits early
output, _ = grep.communicate()
print(output)

Fix 4: Set Timeouts to Prevent Hanging

Long-running commands can hang indefinitely without a timeout:

import subprocess

# WRONG — hangs if command never finishes
result = subprocess.run(['ping', '-c', '1000', 'example.com'])

# CORRECT — raises TimeoutExpired after 10 seconds
try:
    result = subprocess.run(
        ['ping', '-c', '1000', 'example.com'],
        capture_output=True,
        text=True,
        timeout=10
    )
except subprocess.TimeoutExpired as e:
    print(f"Command timed out after {e.timeout}s")
    print(f"Partial stdout: {e.stdout}")

Timeouts with Popen for streaming output:

import subprocess
import threading

def run_with_timeout(cmd, timeout):
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True
    )

    try:
        stdout, stderr = proc.communicate(timeout=timeout)
        return proc.returncode, stdout, stderr
    except subprocess.TimeoutExpired:
        proc.kill()
        stdout, stderr = proc.communicate()
        raise subprocess.TimeoutExpired(cmd, timeout, output=stdout, stderr=stderr)

returncode, out, err = run_with_timeout(['long_command'], timeout=30)

Fix 5: Stream Output in Real Time with Popen

subprocess.run() buffers all output until the command finishes. Use Popen when you need to see output as it arrives:

import subprocess

# subprocess.run() — waits for everything, then returns
# Good for: short commands where you need the full output at once

# Popen — gives you a process handle for streaming
# Good for: long commands, progress updates, interactive processes

def run_streaming(cmd):
    with subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1  # Line-buffered
    ) as proc:
        for line in proc.stdout:
            print(line, end='')  # Print each line as it arrives
        proc.wait()
        return proc.returncode

# Usage — see build output in real time
returncode = run_streaming(['make', 'build'])

Async streaming with asyncio:

import asyncio

async def run_async(cmd):
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )

    async def read_stream(stream, callback):
        while True:
            line = await stream.readline()
            if not line:
                break
            callback(line.decode())

    await asyncio.gather(
        read_stream(proc.stdout, lambda l: print('OUT:', l, end='')),
        read_stream(proc.stderr, lambda l: print('ERR:', l, end='')),
    )

    await proc.wait()
    return proc.returncode

# Run async subprocess
asyncio.run(run_async(['npm', 'run', 'build']))

Fix 6: Common Subprocess Patterns

Run a command and get output as a list of lines:

def get_lines(cmd):
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip().splitlines()

files = get_lines(['git', 'diff', '--name-only'])
print(files)  # ['src/main.py', 'tests/test_main.py']

Check if a command exists:

import shutil

def command_exists(cmd):
    return shutil.which(cmd) is not None

if command_exists('ffmpeg'):
    subprocess.run(['ffmpeg', '-version'])
else:
    print("ffmpeg not installed")

Run a command with input (stdin):

import sys

# Pass input to the process's stdin (sys.executable avoids the PATH issues above)
result = subprocess.run(
    [sys.executable, '-c', 'import sys; data = sys.stdin.read(); print(data.upper())'],
    input='hello world\n',
    capture_output=True,
    text=True
)
print(result.stdout)  # HELLO WORLD

Retry logic for flaky commands:

import time

def run_with_retry(cmd, retries=3, delay=2):
    for attempt in range(retries):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result
        if attempt < retries - 1:
            print(f"Attempt {attempt + 1} failed, retrying in {delay}s...")
            time.sleep(delay)
    raise subprocess.CalledProcessError(result.returncode, cmd, result.stdout, result.stderr)

Still Not Working?

Output buffering causes empty or delayed output — some programs buffer their output when not connected to a terminal. If streaming output appears empty or delayed, force line buffering with PYTHONUNBUFFERED=1 (for Python subprocesses) or use the program’s own unbuffered flag (e.g., python -u). For C programs, stdbuf -oL command forces line buffering.
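
A sketch of forcing unbuffered output from a Python child with the -u flag (the throwaway inline script stands in for a real program):

```python
import subprocess
import sys

child_script = (
    "import time\n"
    "for i in range(3):\n"
    "    print('line', i)\n"
    "    time.sleep(0.1)\n"
)

# -u disables the child's output buffering, so each line arrives
# as soon as it is printed instead of when the process exits.
proc = subprocess.Popen(
    [sys.executable, '-u', '-c', child_script],
    stdout=subprocess.PIPE,
    text=True,
)
lines = []
for line in proc.stdout:
    lines.append(line)
    print('got:', line, end='')
proc.wait()
```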

Unicode errors in output — if the command outputs non-UTF-8 characters, text=True raises a UnicodeDecodeError. Use encoding='latin-1' or errors='replace' to handle it:

result = subprocess.run(
    ['some_command'],
    capture_output=True,
    encoding='utf-8',
    errors='replace'  # Replace undecodable bytes with U+FFFD (�)
)

FileNotFoundError on Windows for commands that work in CMD — on Windows, some commands like dir, copy, and del are shell builtins and require shell=True. Also, executable extensions (.exe, .bat, .cmd) may need to be included explicitly:

# Windows shell builtin
result = subprocess.run('dir', shell=True, capture_output=True, text=True)

# Explicit .exe extension
result = subprocess.run(['python.exe', 'script.py'], capture_output=True, text=True)

Process inherits open file descriptors — child processes inherit the parent’s standard streams, but other file descriptors are closed by default (close_fds=True has been the default on POSIX since Python 3.2). If the child needs to keep a specific descriptor open, whitelist it with pass_fds=(fd, ...).
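
A POSIX-only sketch of passing one extra descriptor explicitly (the pipe here is just this example's scaffolding):

```python
import os
import subprocess
import sys

r, w = os.pipe()

# pass_fds keeps only the listed descriptors open in the child (and marks
# them inheritable); every other non-standard FD is closed.
proc = subprocess.Popen(
    [sys.executable, '-c',
     f'import os; print(os.read({r}, 100).decode())'],
    pass_fds=(r,),
    stdout=subprocess.PIPE,
    text=True,
)
os.close(r)                       # parent no longer needs the read end
os.write(w, b'hello from parent')
os.close(w)
out, _ = proc.communicate()
print(out.strip())                # hello from parent
```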

For related Python issues, see Fix: Python asyncio Not Running and Fix: Python Logging Not Working.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
