When print stops being enough, Python's interactive debugger takes over. pdb (Python Debugger) lets you pause execution at a chosen line, inspect local and global variables, step through the program one statement at a time, and evaluate any Python expression right there in the frozen frame. Once you get comfortable with it, it replaces dozens of speculative prints.
The modern entry point is the built-in breakpoint(). Call it anywhere in your code and the program stops there and drops into pdb. You can configure the debugger via the PYTHONBREAKPOINT environment variable; setting it to 0 turns every breakpoint() call into a no-op without removing it from the code, which is handy in production.
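A minimal sketch of how PYTHONBREAKPOINT interacts with breakpoint(). Here the variable is set from within the script purely for demonstration; normally you would export it in the shell before launching:

```python
import os

# Setting PYTHONBREAKPOINT=0 turns every breakpoint() call into a no-op.
# sys.breakpointhook() consults the environment variable on each call,
# so this works even when set at runtime. In the shell you would write:
#   PYTHONBREAKPOINT=0 python script.py
os.environ["PYTHONBREAKPOINT"] = "0"

def compute(x):
    y = x * 2
    breakpoint()  # skipped because PYTHONBREAKPOINT=0
    return y + 1

print(compute(20))  # prints 41
```

With the variable unset (or set to `pdb.set_trace`), the same call pauses into pdb; setting it to another importable callable swaps in a different debugger.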
Inside pdb the essential commands are n (next line), s (step into function), c (continue until next breakpoint or end), l (list source around the current line), p expr (print an expression), and q (quit). You can also just type any Python expression to evaluate it in the current frame, which is often the fastest way to inspect state.
Beyond pdb, IDE debuggers (VS Code, PyCharm) provide visual stepping and variable inspection with minimal setup. For post-mortem investigation of a crashing script, python -m pdb script.py runs it under the debugger from the start; pdb.post_mortem() opens the debugger on a traceback after the fact. Profilers (cProfile) and memory tools (tracemalloc, memory_profiler) are related diagnostics for performance and resource issues.
breakpoint() and the main commands
Drop breakpoint() where you want to pause. When execution stops there, the (Pdb) prompt appears. Start by typing l to see the surrounding code, p <name> to inspect specific values, and n/s to move forward.
Conditional breakpoints: inside pdb, break file.py:lineno, condition (or the same command in a .pdbrc command file) pauses only when the condition holds — invaluable when the bug happens on the 10_000th iteration.
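The same effect can be had without pdb command syntax by guarding the breakpoint() call in code — a sketch with a hypothetical condition:

```python
def process(items):
    total = 0
    for i, item in enumerate(items):
        # Hypothetical suspicion: something goes wrong at a negative
        # value late in the run. Pause only then, instead of stepping
        # through thousands of healthy iterations by hand.
        if i >= 10_000 and item < 0:
            breakpoint()  # only fires when the condition holds
        total += item
    return total

print(process([1, 2, 3]))  # condition never triggers on this input
```

The guard costs almost nothing per iteration and can be deleted (or disabled via PYTHONBREAKPOINT=0) once the bug is found.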
Post-mortem and profilers
When a program crashes, python -m pdb my_script.py runs it under the debugger; once the uncaught exception propagates, pdb drops you into post-mortem mode at the exception frame. pdb.post_mortem() can be called programmatically inside an except block to do the same thing.
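A sketch of the programmatic form. The pdb.post_mortem call is commented out so the script stays non-interactive; the traceback inspection just shows which frame you would land in:

```python
import pdb
import sys

def fragile(d):
    return d["missing"]  # raises KeyError

try:
    fragile({})
except KeyError:
    tb = sys.exc_info()[2]
    # pdb.post_mortem(tb)  # uncomment to inspect the crashed frame
    print("would debug frame:", tb.tb_next.tb_frame.f_code.co_name)
```

Inside post_mortem you get the usual pdb prompt, but positioned in the frame where the exception was raised, with all its locals intact.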
python -m cProfile script.py produces a per-function profile; visualize the output with snakeviz, or use py-spy to sample a live process without code changes. For memory, tracemalloc (stdlib) gives you allocation snapshots. Use these when performance, not correctness, is the problem.
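A small self-contained snapshot comparison with tracemalloc (exact sizes vary by platform and Python version):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Allocate something noticeable between the two snapshots
data = [list(range(100)) for _ in range(1_000)]

after = tracemalloc.take_snapshot()
top = after.compare_to(before, "lineno")

# The biggest difference points at the allocating line above
for stat in top[:3]:
    print(stat)
tracemalloc.stop()
```

compare_to sorts by the size of the allocation difference, so the list comprehension shows up first; in a real leak hunt you would take snapshots before and after the suspect operation.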
Debugging and profiling tools.
| Tool | Purpose |
|---|---|
| `pdb` (module) | Interactive debugger. |
| `breakpoint()` (built-in) | Drop into the debugger here. |
| `pdb.post_mortem()` (function) | Open pdb on a traceback. |
| `cProfile` (module) | Call-based CPU profiler. |
| `tracemalloc` (module) | Memory allocation snapshots. |
| `sys.settrace` (function) | Low-level tracing hook (rarely needed). |
| `py-spy` (tool) | Sampling profiler, no code changes. |
| `viztracer` (tool) | Timeline trace visualizer. |
Using Tools to Debug Code — code example
The script profiles two implementations with cProfile and shows where you would typically place a breakpoint().
```python
# Lesson: Using Tools to Debug Code
import cProfile
import pstats
from io import StringIO

def slow_sum(n: int) -> int:
    total = 0
    for i in range(n):
        for j in range(100):
            total += i * j
    return total

def fast_sum(n: int) -> int:
    # Same result, O(n) not O(n*100)
    return sum(i * (100 * 99 // 2) for i in range(n))

# Profile both
pr = cProfile.Profile()
pr.enable()
slow_result = slow_sum(500)
fast_result = fast_sum(500)
pr.disable()
assert slow_result == fast_result

buf = StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())

# Where you would place breakpoint() in a real session
def buggy(xs):
    # Suppose we suspect xs[-1] was already mutated
    # breakpoint()  # uncomment to step through
    return sum(xs) / len(xs)

print("avg:", buggy([1, 2, 3, 4]))
```
Points to notice:
1) `cProfile` tells you where the program spent its time — per function.
2) `pstats.Stats.sort_stats('cumulative')` sorts by total time including calls.
3) `breakpoint()` is commented out; uncomment when debugging interactively.
4) Profiling usually drives the rewrite (`fast_sum`), not guesswork.
A minimal pdb session you can try by hand:
```python
# Save as demo.py and run: python -m pdb demo.py
# Pdb commands to try:
#   l      list source lines
#   n      next line (step over)
#   s      step into function
#   p var  print variable
#   c      continue
#   q      quit

def add(a, b):
    return a + b

def main():
    x = 1
    y = 2
    breakpoint()  # pause here when running normally
    print(add(x, y))

if __name__ == "__main__":
    main()
```
A quick demonstration of using assert to lock in known behavior:
```python
def avg(xs):
    assert xs, "xs must not be empty"
    return sum(xs) / len(xs)

assert avg([2, 4, 6]) == 4

try:
    avg([])
except AssertionError:
    pass
else:
    raise AssertionError("should have raised")
```
Running prints something like (profile stats abbreviated):
```text
         1004 function calls in 0.024 seconds

   Ordered by: cumulative time
   List reduced from 6 to 5 due to restriction <5>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.023    0.023    0.023    0.023 script.py:7(slow_sum)
        1    0.000    0.000    0.001    0.001 script.py:13(fast_sum)
      500    0.001    0.000    0.001    0.000 script.py:14(<genexpr>)
   ...

avg: 2.5
```