Metadata-Version: 2.4
Name: reprompt-cli
Version: 2.1.0
Summary: Discover, analyze, and optimize your prompts from AI coding sessions
Project-URL: Homepage, https://github.com/reprompt-dev/reprompt
Project-URL: Repository, https://github.com/reprompt-dev/reprompt
Project-URL: Issues, https://github.com/reprompt-dev/reprompt/issues
Project-URL: Changelog, https://github.com/reprompt-dev/reprompt/blob/main/CHANGELOG.md
License: MIT
License-File: LICENSE
Keywords: ai,analytics,claude-code,cli,llm,prompt
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: pydantic-settings>=2.0
Requires-Dist: rich>=13.0
Requires-Dist: scikit-learn>=1.4
Requires-Dist: typer>=0.9
Provides-Extra: chinese
Requires-Dist: jieba>=0.42; extra == 'chinese'
Provides-Extra: dev
Requires-Dist: jieba>=0.42; extra == 'dev'
Requires-Dist: mypy>=1.0; extra == 'dev'
Requires-Dist: pytest-cov>=5.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: ruff>=0.4; extra == 'dev'
Provides-Extra: local
Requires-Dist: sentence-transformers>=2.0; extra == 'local'
Provides-Extra: mcp
Requires-Dist: fastmcp>=2.0; extra == 'mcp'
Provides-Extra: ollama
Requires-Dist: requests>=2.31; extra == 'ollama'
Provides-Extra: openai
Requires-Dist: openai>=1.0; extra == 'openai'
Description-Content-Type: text/markdown

# `re:prompt`

**Score, rewrite, and optimize your AI prompts** -- a rule-based CLI that improves your prompts automatically. No LLM needed.

[![PyPI version](https://img.shields.io/pypi/v/reprompt-cli)](https://pypi.org/project/reprompt-cli/)
[![Python 3.10+](https://img.shields.io/pypi/pyversions/reprompt-cli)](https://pypi.org/project/reprompt-cli/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Tests](https://img.shields.io/badge/tests-passing-brightgreen)](https://github.com/reprompt-dev/reprompt/actions)
[![Coverage](https://img.shields.io/badge/coverage-95%25-brightgreen)](https://github.com/reprompt-dev/reprompt)

---

![reprompt demo](docs/demo.gif)

## See it in action

```bash
$ pip install reprompt-cli

# Rewrite a weak prompt into a better one (no LLM, rule-based)
$ reprompt rewrite "I was wondering if you could maybe help me fix the auth bug"
  34 → 52 (+18)

  ╭─ Rewritten ────────────────────────────────────────────────╮
  │ Help me fix the auth bug.                                  │
  ╰────────────────────────────────────────────────────────────╯

  Changes
  ✓ Removed filler (24% shorter)
  ✓ Removed hedging language

  You should also
  → Add actual code snippets or error messages for context
  → Reference specific files or functions by name
  → Add constraints (e.g., "Do not modify existing tests")

# Score any prompt instantly (research-backed, 30+ features)
$ reprompt score "Fix the auth bug in src/login.ts where JWT expires"
  Score: 40/100  (Fair)
  Tip: Include the error message -- debug prompts with errors are 3.7x more effective

# Compress prompts to save tokens
$ reprompt compress "I was wondering if you could please help me refactor this code. Basically what I need is to split this function into smaller helpers."
  Before: 28 tokens → After: 14 tokens (50% saved)

# Your personal dashboard
$ reprompt
  ╭─ Prompt Dashboard ─────────────────────────────────────────╮
  │  Prompts: 1,063 (295 unique)   Sessions: 890               │
  │  Avg Score: 68/100             Top: debug (31%), impl (24%)│
  ╰────────────────────────────────────────────────────────────╯
```
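The rewrite pass is pure pattern matching, which is why it needs no model call. A minimal sketch of the idea in Python -- the filler and hedging patterns below are invented for illustration, not reprompt's actual rule set:

```python
import re

# Illustrative filler/hedging patterns -- NOT reprompt's actual rules.
FILLERS = [
    r"\bI was wondering if you could\b",
    r"\bmaybe\b",
    r"\bbasically\b",
]

def rewrite(prompt: str) -> str:
    """Strip filler phrases, collapse whitespace, restore the leading capital."""
    for pattern in FILLERS:
        prompt = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    prompt = re.sub(r"\s+", " ", prompt).strip()
    return prompt[:1].upper() + prompt[1:]

print(rewrite("I was wondering if you could maybe help me fix the auth bug"))
# → Help me fix the auth bug
```

The real rewriter also restructures prompts and emits follow-up suggestions, as shown in the demo output above.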

## What it does

### Analyze

| Command | Description |
|---------|-------------|
| `reprompt` | Instant dashboard -- prompts, sessions, avg score, top categories |
| `reprompt scan` | Auto-discover prompts from 7 AI tools (2 more via `reprompt import`) |
| `reprompt score "prompt"` | Research-backed 0-100 scoring with 30+ features |
| `reprompt compare "a" "b"` | Side-by-side prompt analysis (or `--best-worst` for auto-selection) |
| `reprompt insights` | Personal patterns vs research-optimal benchmarks |
| `reprompt style` | Prompting fingerprint with `--trends` for evolution tracking |
| `reprompt agent` | Agent workflow analysis -- error loops, tool patterns, session efficiency |
| `reprompt sessions` | Session quality scores with frustration signal detection |
| `reprompt repetition` | Cross-session repetition detection -- spot recurring prompts |
| `reprompt projects` | Per-project quality breakdown -- sessions, scores, frustration signals |

### Optimize

| Command | Description |
|---------|-------------|
| `reprompt rewrite "prompt"` | **Rewrite prompts to score higher** -- filler removal, restructuring, hedging cleanup |
| `reprompt compress "prompt"` | 4-layer prompt compression (40-60% token savings typical) |
| `reprompt distill` | Extract important turns from conversations with 6-signal scoring |
| `reprompt distill --export` | Recover context when a session runs out -- paste into new session |
| `reprompt lint` | Configurable prompt quality linter with CI/GitHub Action support |
| `reprompt init` | Generate `.reprompt.toml` config for your project |
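To make the layered compression idea concrete, here is a toy two-layer pass -- the word lists and layers are made up for this sketch and are not reprompt's actual four-layer pipeline:

```python
# Toy two-layer compressor -- filler and hedge lists are illustrative only.
FILLER = {"basically", "actually", "just", "really", "please", "kindly"}
HEDGES = {"maybe", "perhaps", "somewhat"}

def layer_filler(words: list[str]) -> list[str]:
    """Layer 1: drop low-information filler words."""
    return [w for w in words if w.lower().strip(".,") not in FILLER]

def layer_hedges(words: list[str]) -> list[str]:
    """Layer 2: drop hedging words."""
    return [w for w in words if w.lower().strip(".,") not in HEDGES]

def compress(prompt: str) -> str:
    words = prompt.split()
    for layer in (layer_filler, layer_hedges):
        words = layer(words)
    return " ".join(words)

print(compress("Basically just fix the bug please"))
# → fix the bug
```

Each layer is a pure function over the token list, so layers can be stacked or disabled independently.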

### Manage

| Command | Description |
|---------|-------------|
| `reprompt privacy` | See what data you sent where -- file paths, errors, PII exposure |
| `reprompt privacy --deep` | Scan for sensitive content: API keys, tokens, passwords, PII |
| `reprompt report` | Full analytics: hot phrases, clusters, patterns (`--html` for dashboard) |
| `reprompt digest` | Weekly summary comparing current vs previous period |
| `reprompt wrapped` | Prompt DNA report -- persona, scores, shareable card |
| `reprompt template save\|list\|use` | Save and reuse your best prompts |

## Prompt Science

Scoring is calibrated against 4 research papers covering 30+ features across 5 dimensions:

| Dimension | What it measures | Paper |
|-----------|-----------------|-------|
| **Structure** | Markdown, code blocks, explicit constraints | Prompt Report 2406.06608 |
| **Context** | File paths, error messages, technical specificity | Google 2512.14982 |
| **Position** | Instruction placement relative to context | Stanford 2307.03172 |
| **Repetition** | Redundancy that degrades model attention | Google 2512.14982 |
| **Clarity** | Readability, sentence length, ambiguity | SPELL (EMNLP 2023) |

All analysis runs locally in <1ms per prompt. No LLM calls, no network requests.
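As an illustration of how a rule-based scorer can stay this fast, here is a sketch of feature-weighted scoring. The checks and weights are invented for the example; they are not reprompt's calibrated 30+ feature set:

```python
import re

# Invented feature checks and weights -- reprompt's calibrated model differs.
FEATURES = {
    "has_code_block":    (lambda p: "```" in p, 15),
    "has_file_path":     (lambda p: bool(re.search(r"[\w./-]+\.(py|ts|js|go|rs)\b", p)), 15),
    "has_error_text":    (lambda p: bool(re.search(r"Error|Exception|Traceback", p)), 20),
    "has_constraint":    (lambda p: bool(re.search(r"\b(do not|don't|must|only)\b", p, re.I)), 10),
    "reasonable_length": (lambda p: 40 <= len(p) <= 2000, 10),
}

def score(prompt: str) -> int:
    """Base score plus the weight of every feature the prompt exhibits."""
    total = 30 + sum(weight for check, weight in FEATURES.values() if check(prompt))
    return min(total, 100)

print(score("Fix the auth bug in src/login.ts where the JWT expires"))
# → 55
```

Because every check is a local regex or length test, scoring is a single pass over the string with no network round-trip.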

## Conversation Distillation

`reprompt distill` scores every turn in a conversation using 6 signals:

- **Position** -- first/last turns carry framing and conclusions
- **Length** -- substantial turns contain more information
- **Tool trigger** -- turns that cause tool calls are action-driving
- **Error recovery** -- turns that follow errors show problem-solving
- **Semantic shift** -- topic changes mark conversation boundaries
- **Uniqueness** -- novel phrasing vs repetitive follow-ups

Session type (debugging, feature-dev, exploration, refactoring) is auto-detected, and the signal weights adapt accordingly.
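The scheme amounts to a weighted sum per turn. Everything below -- the signal functions, the weights, and the turn fields -- is a hypothetical illustration, not reprompt's implementation:

```python
def score_turn(turn: dict, index: int, total: int, weights: dict) -> float:
    """Weighted sum of six turn-level signals (all illustrative)."""
    signals = {
        "position":       1.0 if index in (0, total - 1) else 0.0,
        "length":         min(len(turn["text"]) / 500, 1.0),
        "tool_trigger":   1.0 if turn.get("triggered_tool") else 0.0,
        "error_recovery": 1.0 if turn.get("follows_error") else 0.0,
        "semantic_shift": 1.0 if turn.get("topic_changed") else 0.0,
        "uniqueness":     turn.get("novelty", 0.5),
    }
    return sum(weights[name] * value for name, value in signals.items())

# Hypothetical weights for a debugging session: error recovery dominates.
DEBUG_WEIGHTS = {"position": 0.10, "length": 0.10, "tool_trigger": 0.20,
                 "error_recovery": 0.35, "semantic_shift": 0.15, "uniqueness": 0.10}

turns = [
    {"text": "Fix the login bug", "triggered_tool": True},
    {"text": "Still failing with a TypeError", "follows_error": True},
    {"text": "ok thanks"},
]
# Rank turn indices by score, most important first.
ranked = sorted(range(len(turns)),
                key=lambda i: score_turn(turns[i], i, len(turns), DEBUG_WEIGHTS),
                reverse=True)
print(ranked)
```

With these weights the error-recovery turn outranks the opener, which matches the intuition that debugging sessions hinge on what happens after failures.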

## Supported AI tools

| Tool | Format | Auto-discovered by `scan` |
|------|--------|--------------------------|
| Claude Code | JSONL | Yes |
| Codex CLI | JSONL | Yes |
| Cursor | .vscdb | Yes |
| Aider | Markdown | Yes |
| Gemini CLI | JSON | Yes |
| Cline (VS Code) | JSON | Yes |
| OpenClaw / OpenCode | JSON | Yes |
| ChatGPT | JSON | Via `reprompt import` |
| Claude.ai | JSON/ZIP | Via `reprompt import` |

## Installation

```bash
pip install reprompt-cli            # core (all features, zero config)
pip install "reprompt-cli[chinese]" # + Chinese prompt analysis (jieba)
pip install "reprompt-cli[mcp]"     # + MCP server for Claude Code / Continue.dev / Zed
```

### Quick start

```bash
reprompt scan                       # discover prompts from installed AI tools
reprompt                            # see your dashboard
reprompt score "your prompt here"   # score any prompt instantly
reprompt distill --last 1           # distill your most recent conversation
```

### Auto-scan after every session

```bash
reprompt install-hook               # adds post-session hook to Claude Code
```

### Browser extension

Capture prompts from ChatGPT, Claude.ai, and Gemini directly in your browser. Live score badge shows prompt quality as you type.

1. **Install the extension** from [Chrome Web Store](https://chromewebstore.google.com/detail/reprompt/ojdccpagaanchmkninlbgbgemdcjckhn) or [Firefox Add-ons](https://addons.mozilla.org/addon/reprompt-cli/)
2. **Connect to the CLI:** `reprompt install-extension`
3. **Verify:** `reprompt extension-status`

Captured prompts sync locally via Native Messaging -- nothing leaves your machine.

### CI integration

#### GitHub Action

```yaml
# .github/workflows/prompt-lint.yml
name: Prompt lint
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: reprompt-dev/reprompt@main
        with:
          score-threshold: 50   # fail if avg prompt score < 50
          strict: true          # fail on warnings too
          comment-on-pr: true   # post quality report as PR comment
```

#### pre-commit

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/reprompt-dev/reprompt
    rev: v2.1.0
    hooks:
      - id: reprompt-lint
```

#### Direct CLI

```bash
reprompt lint --score-threshold 50  # exit 1 if avg score < 50
reprompt lint --strict              # exit 1 on warnings
reprompt lint --json                # machine-readable output
```

#### Project configuration

```bash
reprompt init   # generates .reprompt.toml with all rules documented
```

```toml
# .reprompt.toml (or [tool.reprompt.lint] in pyproject.toml)
[lint]
score-threshold = 50       # fail if avg score < 50

[lint.rules]
min-length = 20            # error if prompt < 20 chars (0 = off)
short-prompt = 40          # warning if < 40 chars (0 = off)
vague-prompt = true        # error on vague prompts like "fix it" (false = off)
debug-needs-reference = true
```

## Privacy

- All analysis runs locally. No prompts leave your machine.
- `reprompt privacy` shows exactly what you've sent to which AI tool.
- Optional telemetry sends only anonymous 26-dimension feature vectors -- never prompt text.
- Open source: audit exactly what's collected.

[Privacy policy](https://getreprompt.dev/privacy)

## Links

- **Website:** [getreprompt.dev](https://getreprompt.dev)
- **Chrome Extension:** [Chrome Web Store](https://chromewebstore.google.com/detail/reprompt/ojdccpagaanchmkninlbgbgemdcjckhn)
- **Firefox Add-on:** [Firefox Add-ons](https://addons.mozilla.org/addon/reprompt-cli/)
- **PyPI:** [reprompt-cli](https://pypi.org/project/reprompt-cli/)
- **Changelog:** [CHANGELOG.md](CHANGELOG.md)
- **Privacy:** [getreprompt.dev/privacy](https://getreprompt.dev/privacy)

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and guidelines.

## License

MIT
