Metadata-Version: 2.4
Name: ralph-code
Version: 0.6.2
Summary: Automated task implementation with Claude Code and Codex
Author: Ralph Coding
License: MIT
Project-URL: Homepage, https://github.com/yourusername/ralph-code
Project-URL: Repository, https://github.com/yourusername/ralph-code
Project-URL: Issues, https://github.com/yourusername/ralph-code/issues
Keywords: claude,codex,ai,automation,coding,cli
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Code Generators
Classifier: Topic :: Software Development :: Build Tools
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: rich
Requires-Dist: click
Requires-Dist: jsonschema
Requires-Dist: platformdirs
Requires-Dist: questionary
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: mypy; extra == "dev"
Requires-Dist: types-pygments; extra == "dev"
Requires-Dist: types-jsonschema; extra == "dev"
Dynamic: license-file
Dynamic: requires-python

# ralph-code

Automated task implementation with Claude Code and Codex for "Ralph Coding". What is [Ralph Coding](https://ghuntley.com/ralph/)? It's a coding method that avoids context rot by controlling which information is retained. Instead of accumulating all prompts, thinking, and response tokens, it re-invokes Claude or Codex for each task and passes the requirements, acceptance tests, and any progress made (or roadblocks and challenges faced) through files. This tends to mean more requests and some duplicated token work, but fairly consistent performance, and best of all it can largely run unattended.

Ralph now defaults to continuing past PRD-to-task conversion instead of pausing there, and non-interactive harness calls are bounded by timeouts and turn caps, so stuck agent runs fail fast instead of hanging forever. A Claude Max account (or the Codex equivalent) is recommended, but be aware that the slow reasoning and responses of GPT-5 through GPT-5.2 make this ponderous; it's fine overnight.

Because LLMs are carrying out the work, we can specify a job such as "Find all the Python files in the project that directly or indirectly access SQLAlchemy objects, and upgrade the code to work with SQLAlchemy 2.*". This will probably result in a single-task project, but that one task might add 50 other tasks (one per file) to the backlog, which are then processed sequentially.
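The changelog below mentions structured `tasks.json` generation. As a purely hypothetical illustration (the actual schema is not documented here, and the field names are invented for this sketch), the backlog produced from the job above might look something like:

```json
[
  {"id": 1, "name": "Upgrade models/user.py to SQLAlchemy 2.*", "status": "pending"},
  {"id": 2, "name": "Upgrade models/order.py to SQLAlchemy 2.*", "status": "pending"}
]
```

Each entry would then be handed to a fresh agent invocation, so no task depends on conversation state from the previous one.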

## Installation

```bash
pipx install ralph-code
```

Or with pip:

```bash
pip install ralph-code
```

## Usage

```bash
ralph [OPTIONS] [DIRECTORY]
```

### Options

- `--debug`: Enable debug logging; logs are saved to the `.ralph` subdirectory of the project
- `DIRECTORY`: Target project directory (defaults to the current directory)

## Recent changes

Version `0.6.2` includes:
- Bounded non-interactive harness execution with timeout and turn limits
- Structured `tasks.json` generation for more reliable PRD conversion
- Automatic continuation after task generation by default
- `PRDs/` as the standard task directory, with legacy `PRD/` compatibility
- Refreshed model catalogs and current defaults

## Workflow

First create a task in `PRDs/`: give the task a short name (used for the branch that commits will be added to), then a description.
Then run `ralph`. It will produce a `.md` specification file, which is broken into small tasks stored in a `tasks.json` file. Each task is then worked on independently.
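A minimal sketch of the steps above, using the SQLAlchemy upgrade job from earlier. The file name and contents are illustrative assumptions, not a documented format:

```shell
# Create the standard task directory and a task file with a short name.
# (The exact PRD file format is not specified here; this is a sketch.)
mkdir -p PRDs
cat > PRDs/upgrade-sqlalchemy.md <<'EOF'
Find all the Python files in the project that directly or indirectly
access SQLAlchemy objects, and upgrade the code to work with SQLAlchemy 2.*.
EOF

# Then run ralph against the project directory:
# ralph .
```

From there, ralph expands the PRD into a `tasks.json` backlog and works through it task by task.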

## Requirements

- Python 3.10+
- Claude Code or Codex CLI installed and configured

## License

MIT
