Writing is the mirror image of reading, with one extra responsibility: you decide what ends up on disk and in what shape. Python gives you three write modes — "w" truncates then writes, "a" appends to the end, and "x" fails if the file already exists. Choosing the right mode on purpose is the first line of defense against accidentally overwriting important data.
For small files the shortcut is path.write_text(data, encoding="utf-8"). It opens, writes and closes in one call. For anything larger or line-by-line, open with "w" and call f.write(line) or f.writelines(iterable_of_lines). Note that writelines does not add newlines for you — each element must already include "\n" if you want line breaks.
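A minimal sketch of both approaches (the scratch file `notes.txt` is a hypothetical name, written under the temp directory so the example cleans up after itself):

```python
from pathlib import Path
from tempfile import gettempdir

p = Path(gettempdir()) / "notes.txt"  # hypothetical scratch file

# One-call write for a small file: opens, writes, closes
p.write_text("first line\n", encoding="utf-8")

# Line-by-line: writelines does NOT append "\n" for you
lines = ["alpha", "beta", "gamma"]
with open(p, "w", encoding="utf-8") as f:
    f.writelines(line + "\n" for line in lines)

written = p.read_text(encoding="utf-8")
p.unlink()
print(written.splitlines())  # ['alpha', 'beta', 'gamma']
```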
Newline handling is the subtle part. On Windows, text mode translates "\n" into "\r\n" on write and reverses the translation on read. That is usually what you want, but it means you should never open a CSV file with the default newline; pass newline="" so the csv module can control the line terminator itself. Binary files always need "wb" and never translate.
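To see why `newline=""` matters, note that the csv module terminates rows with `"\r\n"` itself; with `newline=""` those bytes reach the file untouched on every platform (`rows.csv` is a hypothetical scratch file):

```python
import csv
from pathlib import Path
from tempfile import gettempdir

p = Path(gettempdir()) / "rows.csv"  # hypothetical scratch file

# newline="" hands line-ending control to the csv module,
# which terminates rows with "\r\n" (the excel dialect default)
with open(p, "w", encoding="utf-8", newline="") as f:
    csv.writer(f).writerow(["a", "b"])

raw = p.read_bytes()
p.unlink()
print(raw)  # b'a,b\r\n'
```

Without `newline=""`, text-mode translation on Windows would turn that `"\r\n"` into `"\r\r\n"`, producing blank rows in spreadsheet apps.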
Durability is the last concern. f.write() pushes bytes into a buffer; f.flush() forces the buffer to the OS; os.fsync(f.fileno()) asks the OS to move them to stable storage. Most applications rely on the with block's implicit close, which flushes automatically. If your program cannot afford to lose the last few writes on a crash, use fsync or the "write to temp, then rename" pattern.
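The buffer-to-disk chain can be sketched like this (the log file name `journal.log` is hypothetical):

```python
import os
from pathlib import Path
from tempfile import gettempdir

p = Path(gettempdir()) / "journal.log"  # hypothetical log file

with open(p, "w", encoding="utf-8") as f:
    f.write("critical record\n")  # bytes land in Python's buffer
    f.flush()                     # buffer -> OS page cache
    os.fsync(f.fileno())          # OS page cache -> stable storage

data = p.read_text(encoding="utf-8")
p.unlink()
```

Each step narrows the window in which a crash can lose the write; only after `fsync` returns is the data expected to survive a power failure.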
Choosing the right write mode
"w" is destructive: it truncates the file on open. Use it when the new content is authoritative. "x" is protective: it raises FileExistsError if the path already exists — perfect for "create, do not clobber" workflows like generating a daily report.
For binary payloads like images or pickled objects, always use "wb" and drop the encoding argument. Mixing text and binary modes is the biggest source of corrupted files.
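For example, a pickled object round-trips through `"wb"`/`"rb"` with no encoding or newline arguments (the cache file name is hypothetical):

```python
import pickle
from pathlib import Path
from tempfile import gettempdir

p = Path(gettempdir()) / "cache.pkl"  # hypothetical cache file

payload = {"scores": [91, 78]}
with open(p, "wb") as f:   # binary mode: no encoding, no newline translation
    pickle.dump(payload, f)

with open(p, "rb") as f:
    restored = pickle.load(f)

p.unlink()
```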
Structured writers and atomic replace
csv.writer and csv.DictWriter handle quoting for you; json.dump(obj, f, indent=2) writes pretty-printed JSON with one call. Both of these pair naturally with the with statement.
For data you cannot afford to lose, write to a temporary file next to the target and then Path(tmp).replace(target). The replace is atomic on every major OS, so readers of the target file always see a fully-written version.
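Combining the pattern with `fsync` gives the strongest version: the data is durable before the rename makes it visible. A sketch, with a hypothetical `config.json` target:

```python
import os
from pathlib import Path
from tempfile import gettempdir

target = Path(gettempdir()) / "config.json"  # hypothetical target
tmp = target.with_suffix(".json.tmp")

with open(tmp, "w", encoding="utf-8") as f:
    f.write('{"version": 2}\n')
    f.flush()
    os.fsync(f.fileno())   # durable before the rename

tmp.replace(target)        # atomic: readers see old or new, never partial
result = target.read_text(encoding="utf-8")
target.unlink()
```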
The core tools for turning Python data into a file:
| Tool | Kind | Purpose |
|---|---|---|
| `Path.write_text(s)` | method | Writes an entire small text file in one call. |
| `open(p, 'w')` | built-in | Truncates then writes text. |
| `open(p, 'a')` | built-in | Appends to an existing file. |
| `open(p, 'x')` | built-in | Fails if the file already exists. |
| `csv.writer` / `DictWriter` | class | Writes CSV rows with proper quoting. |
| `json.dump(obj, f)` | function | Writes an object as JSON to a file. |
| `os.fsync(fd)` | function | Forces an OS-level flush for durability. |
| `Path.replace(target)` | method | Atomically renames a file. |
Writing Data to Files code example
The script creates a small report in text, CSV and JSON forms and demonstrates the atomic-rename pattern.
```python
# Lesson: Writing Data to Files
import csv
import json
from pathlib import Path
from tempfile import gettempdir

root = Path(gettempdir())
txt = root / "report.txt"
csv_ = root / "report.csv"
jsn = root / "report.json"

rows = [
    {"name": "ana", "score": 91},
    {"name": "ben", "score": 78},
]

# Plain text, one line per row
with open(txt, "w", encoding="utf-8") as f:
    f.write("name\tscore\n")
    f.writelines(f"{r['name']}\t{r['score']}\n" for r in rows)

# CSV via DictWriter
with open(csv_, "w", encoding="utf-8", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["name", "score"])
    w.writeheader()
    w.writerows(rows)

# JSON, pretty printed
with open(jsn, "w", encoding="utf-8") as f:
    json.dump({"rows": rows}, f, indent=2)

# Atomic replace: safer for "must not lose" files
target = root / "safe.txt"
tmp = target.with_suffix(".txt.tmp")
tmp.write_text("important\n", encoding="utf-8")
tmp.replace(target)

for p in (txt, csv_, jsn, target):
    print(f"{p.name:12s}", p.stat().st_size, "bytes")
    p.unlink()
```
Notice each write pattern:
1) Plain text write uses `f.write` and `f.writelines` for one-line-per-row output.
2) `DictWriter` handles headers and escaping for CSV data.
3) `json.dump(..., indent=2)` writes human-readable JSON.
4) The temp-then-replace pattern is the simplest atomic-write approach.
Practice writing CSV and appending to a log.
```python
import csv
from pathlib import Path
from tempfile import gettempdir

p = Path(gettempdir()) / "todo.csv"

# Example A: create with DictWriter
with open(p, "w", encoding="utf-8", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["item", "done"])
    w.writeheader()
    w.writerow({"item": "write docs", "done": False})

# Example B: append a single line
with open(p, "a", encoding="utf-8", newline="") as f:
    csv.writer(f).writerow(["ship release", False])

print(p.read_text(encoding="utf-8"))
p.unlink()
```
Small checks that avoid touching disk.
```python
import json

data = {"a": 1, "b": [1, 2]}
dumped = json.dumps(data)
assert json.loads(dumped) == data

rows = [{"n": 1}, {"n": 2}]
assert sum(r["n"] for r in rows) == 3
assert ",".join(["a", "b"]) == "a,b"
```
Running prints something like:
```
report.txt   25 bytes
report.csv   28 bytes
report.json  122 bytes
safe.txt     10 bytes
```