Using External Code Libraries

A library is code somebody else has already written, tested, and published so you don't have to. Python's strength is its ecosystem: the Python Package Index (PyPI) hosts hundreds of thousands of libraries covering HTTP (requests), data analysis (pandas), web apps (fastapi), testing (pytest), and almost anything else. Pulling in a small, well-maintained library is almost always faster and safer than rolling your own from scratch.

Libraries arrive through pip, Python's official package manager. pip install requests downloads the latest version from PyPI, along with any dependencies it needs, and puts them on the import path of the current Python. After that, import requests just works. The companion command pip show requests prints the installed version and location; pip list shows every package in the active environment.
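As a quick sanity check that an install actually landed on the import path, the stdlib importlib.util.find_spec locates a package without fully importing it. A minimal sketch, using pip itself only because it ships with most Python installations:

```python
# Locate an installed package on the import path without importing it.
from importlib.util import find_spec

spec = find_spec("pip")  # pip ships with most Python installations
if spec is not None:
    print("found at:", spec.origin)
else:
    print("pip is not importable from this interpreter")
```

If find_spec returns None, the package either is not installed or was installed for a different interpreter, which is a common symptom of mixing environments.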

Installing a library globally is tempting but brittle: two projects end up sharing the same set of versions, and updating one breaks the other. The safe habit is to create a virtual environment per project (python -m venv .venv), activate it, and install everything inside. Each project then has its own list of dependencies that can be reproduced exactly from a requirements.txt.
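The same setup can be scripted: the stdlib venv module is what python -m venv runs under the hood. A minimal sketch, creating a throwaway environment in a temporary directory so nothing is left behind:

```python
# Create a throwaway virtual environment with the stdlib venv module.
import tempfile
import venv
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / ".venv"
    venv.create(target, with_pip=False)  # with_pip=True also bootstraps pip (slower)
    print("created:", (target / "pyvenv.cfg").exists())  # prints: created: True
```

For a real project you would point target at a .venv directory inside the project and pass with_pip=True so the environment can install packages.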

Choosing a library is partly a research step. Read the README, look at the last release date, scan the issue tracker. Prefer libraries with clear docs, active maintenance, and a BSD/MIT/Apache licence compatible with your project. For one-off scripts the bar is lower; for code that ships to users, treat each dependency as a long-term commitment.

Finding, installing, and importing

pip install name is the baseline. You can pin a version with pip install name==1.2.3, a range with name>=1.2,<2, and install from a git URL for unpublished code: pip install git+https://github.com/user/repo.git. pip install -r requirements.txt installs a whole list at once.
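A requirements.txt tying these specifier styles together might look like this (package names and pins are illustrative, not recommendations):

```
# requirements.txt: one requirement per line; '#' starts a comment
requests==2.31.0          # exact pin
pandas>=2.0,<3            # range: any 2.x release
pytest                    # unpinned: latest at install time
git+https://github.com/user/repo.git   # unpublished code from git
```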

Once installed, the library is imported by its package name (often, but not always, the same as the install name). The docs are the source of truth. import requests, import numpy as np, and from PIL import Image are typical examples.

Keeping an environment reproducible

Freeze the installed set with pip freeze > requirements.txt. Commit that file. On a new machine, python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt reproduces the same set of package versions (exact bytes can differ across platforms, but the version set is identical).
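What pip freeze collects can be approximated from Python itself: each installed distribution contributes one name==version line. A sketch, not a replacement for the real command:

```python
# Approximate `pip freeze`: one pinned name==version line per distribution.
from importlib import metadata

pins = sorted(
    f"{d.metadata['Name']}=={d.version}"
    for d in metadata.distributions()
    if d.metadata["Name"]  # skip entries with broken metadata
)
print(len(pins), "pinned requirements")
print(pins[:3])
```

The real pip freeze additionally handles editable installs and URL-based requirements, which this sketch ignores.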

Tools like pip-tools, Poetry, uv and hatch go further by separating your direct dependencies from their transitive ones. For small scripts, plain requirements.txt is still the most common and least surprising choice.

The everyday tools for external libraries.

Tool                            Kind           Purpose
pip install name                command        Installs a package from PyPI.
pip show name                   command        Prints metadata and location of an installed package.
pip list                        command        Lists every package in the environment.
pip freeze > requirements.txt   command        Writes pinned versions to a file.
python -m venv .venv            module         Creates a virtual environment.
importlib.metadata              module         Reads installed package metadata at runtime.
PyPI                            registry       Central index for published Python packages.
name>=1,<2                      version spec   Version range allowed by pip.

Using External Code Libraries: a code example

The script below uses importlib.metadata to inspect installed libraries without making any network calls, so it works in any environment.

# Lesson: Using External Code Libraries
from importlib import metadata

# List every installed distribution, sorted (skip entries with broken metadata)
names = sorted(d.metadata["Name"] for d in metadata.distributions() if d.metadata["Name"])
print("installed count:", len(names))
print("first 5:        ", names[:5])

# Inspect a specific package (pip is always installed with Python)
try:
    pip_version = metadata.version("pip")
except metadata.PackageNotFoundError:
    pip_version = "not installed"
print("pip version:", pip_version)

# What are pip's top-level requirements?
try:
    reqs = metadata.requires("pip") or []
except metadata.PackageNotFoundError:
    reqs = []
print("pip requires (first 3):", reqs[:3])

# Distribution metadata: summary, homepage, author
meta = metadata.metadata("pip") if pip_version != "not installed" else None
if meta:
    print("summary:    ", meta["Summary"])
    print("home-page:  ", meta["Home-page"] or meta.get("Project-URL", "-"))
    print("requires py:", meta["Requires-Python"])

# A library is just importable Python: try a safe import
try:
    import pip as _pip
    print("import ok:", _pip.__name__)
except ImportError as err:
    print("import failed:", err)

Walk through each block:

1) `distributions()` lists every package; great for auditing the environment.
2) `metadata.version(name)` is the programmatic equivalent of `pip show name`, returning just the version string.
3) `metadata.requires(name)` exposes dependency specs to your own code.
4) Importing the installed package is the normal way you use it afterwards.

Practice inspecting and guarding optional imports.

# Example A: optional dependency
try:
    import requests
    have_requests = True
except ImportError:
    have_requests = False
print("requests available:", have_requests)

# Example B: version check with packaging logic
from importlib.metadata import version, PackageNotFoundError
def installed_version(name: str) -> str:
    try:
        return version(name)
    except PackageNotFoundError:
        return "missing"
print(installed_version("pip"))
print(installed_version("definitely-not-a-real-library-xyz"))

Sanity checks that do not require network access.

from importlib import metadata
assert metadata.version("pip")
assert metadata.metadata("pip")["Name"].lower() == "pip"
try:
    metadata.version("not-a-real-pkg-qwe")
except metadata.PackageNotFoundError:
    pass
else:
    raise AssertionError("expected PackageNotFoundError")

Output varies by environment but looks like:

installed count: 42
first 5:         ['certifi', 'charset-normalizer', 'idna', 'pip', 'requests']
pip version: 24.0
pip requires (first 3): []
summary:     The PyPA recommended tool for installing Python packages.
home-page:   https://pip.pypa.io/
requires py: >=3.7
import ok: pip