Metadata-Version: 2.4
Name: tlm-cli
Version: 0.1.9
Summary: TLM — AI Tech Lead that enforces TDD, tests, and spec compliance in Claude Code.
Author-email: TLM <hello@tlm.dev>
License-Expression: MIT
Project-URL: Homepage, https://tlm.dev
Project-URL: Source, https://github.com/tlm-dev/tlm
Classifier: Development Status :: 4 - Beta
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Intended Audience :: Developers
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: httpx>=0.27.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"

# TLM — AI Tech Lead for Claude Code

> The annoying agent that makes Claude do the right thing.

TLM sits inside [Claude Code](https://claude.ai/code) and enforces TDD, tests, and spec compliance — automatically. It interviews you before coding, generates specs, and blocks commits that don't meet your project's quality bar.

## Quick Start

```bash
pipx install tlm-cli
tlm signup
cd your-project
tlm install
```

That's it. Open Claude Code and start working — TLM activates automatically.

## What Happens

1. **`tlm signup`** — Create your account (one-time)
2. **`tlm install`** — Scans your project, generates enforcement rules for your interactive approval, and installs Claude Code hooks
3. **Work in Claude Code** — TLM interviews you before features, enforces TDD, checks spec compliance before commits
4. **`tlm check`** — Manual quality gate (runs your approved checks mechanically)

## How It Works

TLM scans your project and asks Claude to figure out your stack — test framework, linter, deploy targets, coverage thresholds. It generates an enforcement config and presents it for your approval:

```
TLM Enforcement Config

  Quality Checks (3):
    ● Tests pass [pre_commit]
      pytest tests/ -v
    ● Linting [pre_commit]
      flake8 src/
    ○ Security audit [pre_commit]
      pip-audit

Does this look right?
  yes — Approve and save
  [correction] — Tell me what's wrong
```

You can correct anything conversationally. Once approved, enforcement is mechanical — commands and exit codes, no LLM opinions.
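In spirit, a mechanical gate is just this: run each approved command, record its exit code, and block on any nonzero result. The sketch below is illustrative only; the `CHECKS` list and `run_gate` helper are hypothetical, not TLM's actual internals or config format.

```python
import subprocess

# Hypothetical approved checks, mirroring the config shown above.
# In TLM these come from the approved enforcement config, not a
# hardcoded list; the shape here is purely illustrative.
CHECKS = [
    ("Tests pass", ["pytest", "tests/", "-v"]),
    ("Linting", ["flake8", "src/"]),
]

def run_gate(checks):
    """Run each check; pass/fail is decided by exit code alone."""
    failed = []
    for name, cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed.append(name)
    return failed  # empty list means the gate passes
```

The point of this model is auditability: the gate can only run the commands you approved, and a failure is always traceable to a concrete exit code rather than a model's judgment.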

## Commands

| Command | What it does |
|---------|-------------|
| `tlm signup` | Create account |
| `tlm auth <key>` | Save API key |
| `tlm install` | Full setup: scan, approve config, install hooks |
| `tlm uninstall` | Remove TLM integration (keeps data) |
| `tlm check` | Run quality gate manually |
| `tlm status` | Project stats + enforcement status |
| `tlm learn` | Analyze recent commits for patterns |
| `tlm learn --all` | Full history analysis |

## Philosophy

1. **Claude knows your stack.** TLM doesn't hardcode detection for any framework. Claude figures it out.
2. **You approve everything.** Nothing becomes a rule until you say yes.
3. **Enforcement is mechanical.** Commands and exit codes. No LLM opinions during enforcement.
4. **Annoying, not a prison.** Every block has an `OVERRIDE`. But you have to be explicit.

## License

MIT
