Completing a Guided Project

A guided project is the bridge between “I know the syntax” and “I can build something useful”. In this lesson we walk through a complete small program end-to-end: a command-line task tracker that stores tasks on disk, lets you add and complete them, lists them, and prints simple statistics. Every step uses only the language and standard library you already know.

The project is intentionally modest. It has a CLI (argparse), a data model (dataclass), persistence (JSON file), a couple of tests (unittest), and a single entry point. The goal is not to be impressive; it's to show that a well-shaped program is made of small, familiar pieces wired together carefully.

Follow along by writing the code yourself in a new file, not by copy-pasting. Run it after every logical step. Resist the urge to add features before the basic ones work. After the guided version, pick one of the suggested extensions to make it your own: due dates, tags, searching, or colorized output.

The workflow you will use here (plan → model → implement the simplest path → test → iterate) is the same one you will use on real projects. The habits matter more than the particular program. Finish this one, then pick your own next project with similar scope and repeat.

Design and data model

A task has an id, a title, a done flag, and a created_at timestamp. Tasks live in a JSON file in the user's home directory. The CLI supports add, list, done, stats.
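For concreteness, here is what a two-task file could look like on disk. The titles and timestamps are illustrative, not output from the program:

```json
[
  {
    "id": 1,
    "title": "write part 5 intro",
    "done": false,
    "created_at": "2024-01-15T09:30:00+00:00"
  },
  {
    "id": 2,
    "title": "review transcripts",
    "done": true,
    "created_at": "2024-01-15T09:31:00+00:00"
  }
]
```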

Serialization is manual: created_at is captured as an ISO 8601 string, so records serialize to JSON without a custom encoder, and datetime.fromisoformat recovers a real datetime whenever one is needed. Using JSON keeps the file human-readable; a corrupted task list is fixable with a text editor.
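The timestamp round-trip is two standard-library calls; a minimal sketch:

```python
from datetime import datetime, timezone

# Write side: a timezone-aware timestamp becomes a plain string.
stamp = datetime.now(timezone.utc).isoformat()

# Read side: datetime.fromisoformat restores the aware datetime,
# so comparisons and arithmetic work after a reload.
parsed = datetime.fromisoformat(stamp)
print(parsed.tzinfo)  # UTC
```

Because isoformat() and fromisoformat() are designed as inverses, the string in the JSON file loses no information.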

Implementation plan

1) Define the dataclass and I/O helpers.
2) Wire up argparse subcommands.
3) Implement each command against the helpers.
4) Write two unit tests (add and complete).
5) Polish the output.

The script uses a temporary file rather than a real home-directory file so the demo is safe to run anywhere.

Building blocks used in this project:

- argparse (module): CLI parser.
- dataclasses (module): tiny records with methods.
- json (module): readable on-disk format.
- pathlib (module): cross-platform paths.
- datetime (module): timestamps.
- unittest (module): stdlib test runner.
- tempfile (module): temp files for tests.
- sys.exit (function): return a status code from main.

Code example

The script implements the full tracker, runs a small demo, and executes its tests inline.

# Lesson: Completing a Guided Project  (Task Tracker)
import json
import unittest
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from pathlib import Path
from tempfile import TemporaryDirectory


@dataclass
class Task:
    id: int
    title: str
    done: bool
    created_at: str   # ISO 8601

    @classmethod
    def new(cls, tid: int, title: str) -> "Task":
        return cls(
            id=tid, title=title, done=False,
            created_at=datetime.now(timezone.utc).isoformat(),
        )


def load(path: Path) -> list[Task]:
    if not path.exists():
        return []
    return [Task(**row) for row in json.loads(path.read_text(encoding="utf-8"))]


def save(path: Path, tasks: list[Task]) -> None:
    path.write_text(
        json.dumps([asdict(t) for t in tasks], indent=2), encoding="utf-8"
    )


def add(path: Path, title: str) -> Task:
    tasks = load(path)
    tid = (max((t.id for t in tasks), default=0)) + 1
    task = Task.new(tid, title)
    tasks.append(task)
    save(path, tasks)
    return task


def complete(path: Path, tid: int) -> Task:
    tasks = load(path)
    for t in tasks:
        if t.id == tid:
            t.done = True
            save(path, tasks)
            return t
    raise KeyError(tid)


def stats(path: Path) -> dict:
    tasks = load(path)
    return {
        "total": len(tasks),
        "done": sum(1 for t in tasks if t.done),
        "open": sum(1 for t in tasks if not t.done),
    }


# ---- demo run ----
with TemporaryDirectory() as tmp:
    db = Path(tmp) / "tasks.json"
    add(db, "write part 5 intro")
    add(db, "review transcripts")
    add(db, "push to production")
    complete(db, 2)

    for t in load(db):
        mark = "[x]" if t.done else "[ ]"
        print(f"{mark} {t.id} {t.title}")

    print("stats:", stats(db))


# ---- tests ----
class TestTracker(unittest.TestCase):
    def setUp(self):
        self.tmp = TemporaryDirectory()
        self.db = Path(self.tmp.name) / "t.json"

    def tearDown(self):
        self.tmp.cleanup()

    def test_add_and_list(self):
        t = add(self.db, "hello")
        self.assertEqual(t.id, 1)
        self.assertEqual([x.title for x in load(self.db)], ["hello"])

    def test_complete(self):
        add(self.db, "x")
        complete(self.db, 1)
        self.assertTrue(load(self.db)[0].done)


unittest.TextTestRunner(verbosity=2).run(
    unittest.TestLoader().loadTestsFromTestCase(TestTracker)
)

The project as a walk-through:

1) Dataclass + JSON (de)serialization give us a persistent model in ~15 lines.
2) Each operation (add/complete/stats) is a pure function of the file path.
3) Tests use `tmp_path`-style isolation via `TemporaryDirectory`.
4) A CLI is one `argparse` parser away; the core is already stable.
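Point 1 in miniature: `asdict` plus `**` unpacking is the whole (de)serialization story. A sketch with `Note`, a throwaway stand-in for `Task`:

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class Note:  # hypothetical stand-in for Task
    id: int
    text: str


# dataclass -> dict -> JSON string, and back again
raw = json.dumps([asdict(Note(1, "hi"))])
restored = [Note(**row) for row in json.loads(raw)]
print(restored)  # [Note(id=1, text='hi')]
```

Because dataclasses generate `__eq__` by default, the restored objects compare equal to the originals, which is exactly what the unit tests rely on.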

Wrap the functions in an argparse CLI.

import argparse
import sys
from pathlib import Path

def main(argv=None):
    p = argparse.ArgumentParser(description="Task tracker")
    # Parent options must appear before the subcommand on the command line.
    p.add_argument("--db", default="tasks.json", type=Path)
    sub = p.add_subparsers(dest="cmd", required=True)
    s_add = sub.add_parser("add")
    s_add.add_argument("title")
    s_done = sub.add_parser("done")
    s_done.add_argument("id", type=int)
    sub.add_parser("stats")
    args = p.parse_args(argv)
    if args.cmd == "add":
        print(add(args.db, args.title))
    elif args.cmd == "done":
        print(complete(args.db, args.id))
    elif args.cmd == "stats":
        print(stats(args.db))
    return 0

# sys.exit(main())
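One argparse subtlety worth internalizing: options defined on the parent parser (like the database path flag above) must precede the subcommand name on the command line, because everything after the subcommand is handed to that subparser. A self-contained sketch, using a hypothetical `--db` flag and `add` subcommand:

```python
import argparse
from pathlib import Path

p = argparse.ArgumentParser()
p.add_argument("--db", default="tasks.json", type=Path)
sub = p.add_subparsers(dest="cmd", required=True)
sub.add_parser("add").add_argument("title")

# Parent option first, then the subcommand and its arguments.
args = p.parse_args(["--db", "work.json", "add", "buy milk"])
print(args.cmd, args.db, args.title)  # add work.json buy milk
```

Passing an explicit argv list, as here and in `main(argv=None)`, is also what makes the CLI testable without touching `sys.argv`.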

End-to-end contract checks.

from tempfile import TemporaryDirectory
from pathlib import Path
with TemporaryDirectory() as d:
    db = Path(d)/"t.json"
    assert add(db, "a").id == 1
    assert stats(db) == {"total": 1, "done": 0, "open": 1}
    complete(db, 1)
    assert stats(db) == {"total": 1, "done": 1, "open": 0}

Running prints:

[ ] 1 write part 5 intro
[x] 2 review transcripts
[ ] 3 push to production
stats: {'total': 3, 'done': 1, 'open': 2}
test_add_and_list (__main__.TestTracker.test_add_and_list) ... ok
test_complete (__main__.TestTracker.test_complete) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.005s

OK