Metadata-Version: 2.4
Name: ai-code-review-cli
Version: 2.6.0
Summary: AI-powered code review tool with local Git, remote MR/PR analysis, and CI integration (GitLab, GitHub or Forgejo)
Project-URL: Homepage, https://gitlab.com/redhat/edge/ci-cd/ai-code-review
Project-URL: Repository, https://gitlab.com/redhat/edge/ci-cd/ai-code-review
Project-URL: Issues, https://gitlab.com/redhat/edge/ci-cd/ai-code-review/-/issues
Author-email: Juanje Ojeda <juanje@redhat.com>
License-Expression: MIT
License-File: LICENSE
Keywords: ai,assistant,automation,code quality,code review,coding,developer tools,devops,git,github,github actions,gitlab,gitlab-ci,llm,static code analysis,workflows
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.12
Requires-Dist: aiohttp>=3.9.0
Requires-Dist: anthropic<1.0.0,>=0.40.0
Requires-Dist: click>=8.1.0
Requires-Dist: gitpython>=3.1.40
Requires-Dist: grpcio>=1.75.0
Requires-Dist: httpx>=0.28.1
Requires-Dist: jinja2<4.0.0,>=3.1.0
Requires-Dist: langchain-anthropic<2.0.0,>=1.0.0
Requires-Dist: langchain-google-genai<5.0.0,>=4.0.0
Requires-Dist: langchain-google-vertexai<4.0.0,>=3.0.0
Requires-Dist: langchain-ollama<2.0.0,>=1.0.0
Requires-Dist: langchain-openai<1.0.0,>=0.3.0
Requires-Dist: langchain<3.0.0,>=1.0.0
Requires-Dist: ollama>=0.2.0
Requires-Dist: protobuf>=6.0.0
Requires-Dist: pydantic-core<3.0.0,>=2.33.0
Requires-Dist: pydantic-settings>=2.10.1
Requires-Dist: pydantic<3.0.0,>=2.12.0
Requires-Dist: pyforgejo>=2.0.0
Requires-Dist: pygithub>=2.1.0
Requires-Dist: python-gitlab>=7.0.0
Requires-Dist: pyyaml>=6.0.0
Requires-Dist: structlog>=23.2.0
Requires-Dist: unidiff>=0.7.0
Provides-Extra: dev
Requires-Dist: mypy>=1.19.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.1.0; extra == 'dev'
Requires-Dist: pytest-httpx>=0.30.0; extra == 'dev'
Requires-Dist: pytest-mock>=3.11.0; extra == 'dev'
Requires-Dist: pytest>=9.0.0; extra == 'dev'
Requires-Dist: ruff>=0.14.0; extra == 'dev'
Requires-Dist: types-pyyaml; extra == 'dev'
Requires-Dist: types-requests; extra == 'dev'
Description-Content-Type: text/markdown

# AI Code Review

AI-powered code review tool with **3 powerful use cases**:

- 🤖 **CI Integration** - Automated reviews in your CI/CD pipeline (GitLab, GitHub, or Forgejo)
- 🔍 **Local Reviews** - Review your local changes before committing
- 🌐 **Remote Reviews** - Analyze existing MRs/PRs from the terminal

## 📑 Table of Contents

- [🚀 Primary Use Case: CI/CD Integration](#-primary-use-case-cicd-integration)
- [⚙️ Secondary Use Cases](#️-secondary-use-cases)
  - [Local Usage (Container)](#local-usage-container)
  - [Local Usage (CLI Tool)](#local-usage-cli-tool)
  - [Remote Reviews](#remote-reviews)
- [🔧 Configuration](#-configuration)
- [⚡ Smart Skip Review](#-smart-skip-review)
- [For Developers](#for-developers)
- [📁 Project Context: The Highest-Impact Improvement You Can Make](#-project-context-the-highest-impact-improvement-you-can-make)
- [🔧 Common Issues](#-common-issues)
- [📖 Documentation](#-documentation)
- [🤖 AI Tools Disclaimer](#-ai-tools-disclaimer)
- [📄 License](#-license)
- [👥 Author](#-author)

## 🚀 Primary Use Case: CI/CD Integration

This is the primary and recommended way to use the AI Code Review tool.

### GitLab CI

Add to `.gitlab-ci.yml`:
```yaml
ai-review:
  stage: code-review
  image: registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest
  variables:
    AI_API_KEY: $GEMINI_API_KEY  # Set in CI/CD variables
  script:
    - ai-code-review --post
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  allow_failure: true
```

### GitHub Actions

Add to `.github/workflows/ai-review.yml`:
```yaml
name: AI Code Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    continue-on-error: true
    permissions:
      contents: read
      pull-requests: write
    container:
      image: registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest
    steps:
      - name: Run AI Review
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          AI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
        run: ai-code-review --pr-number ${{ github.event.pull_request.number }} --post
```

### Forgejo Actions

Add to `.forgejo/workflows/ai-review.yml`:
```yaml
name: AI Code Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  ai-review:
    runs-on: codeberg-tiny  # adjust for non-codeberg instances
    continue-on-error: true
    permissions:
      contents: read
      pull-requests: write
    container:
      image: registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest
    steps:
      - name: Run AI Review
        env:
          AI_API_KEY: ${{ secrets.GEMINI_API_KEY }}  # set in Forgejo Actions secrets
        run: ai-code-review --pr-number ${{ github.event.pull_request.number }} --post
```

## ⚙️ Secondary Use Cases

### Local Usage (Container)

This is the recommended way to use the tool locally, as it doesn't require any installation on your system.

```bash
# Review local changes
podman run -it --rm -v .:/app -w /app \
       registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest \
       ai-code-review --local

# Review a remote MR
podman run -it --rm -e GITLAB_TOKEN=$GITLAB_TOKEN -e AI_API_KEY=$AI_API_KEY \
       registry.gitlab.com/redhat/edge/ci-cd/ai-code-review:latest \
       ai-code-review group/project 123
```

> **Note**: You can use `docker` instead of `podman`; the commands work the same.

### Local Usage (CLI Tool)

This is a good option if you have Python installed and want to use the tool as a CLI command.

> **Note on package vs. command name:** The package is registered on PyPI as `ai-code-review-cli`, but for ease of use, the command to execute remains `ai-code-review`.

`pipx` installs the package into an isolated environment and exposes the `ai-code-review` command on your PATH, handling the package-vs-command name difference automatically.

```bash
# Install pipx
pip install pipx
pipx ensurepath

# Install the package
pipx install ai-code-review-cli

# Run the command
ai-code-review --local
```

### Remote Reviews

You can also analyze existing MRs/PRs from your terminal.

```bash
# GitLab MR
ai-code-review group/project 123

# GitHub PR
ai-code-review --platform-provider github owner/repo 456

# Save to file
ai-code-review group/project 123 -o review.md

# Post the review to the MR/PR
ai-code-review group/project 123 --post
```

## 🔧 Configuration

### Required Setup

#### 1. Platform Token (Not needed for local reviews)

```bash
# For GitLab remote reviews
export GITLAB_TOKEN=glpat_xxxxxxxxxxxxxxxxxxxx

# For GitHub remote reviews
export GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx

# For Forgejo remote reviews
export FORGEJO_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Local reviews don't need platform tokens! 🎉
```

#### 2. AI API Key

```bash
# Get key from: https://makersuite.google.com/app/apikey
export AI_API_KEY=your_gemini_api_key_here
```
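
Since both values are read as plain environment variables, a pre-flight guard in your own CI script can fail fast with a clear message when one is missing, instead of a confusing API error later. This helper is a sketch and is not part of the tool:

```shell
# Hypothetical pre-flight guard (not part of ai-code-review itself)
require_var() {
  # $1: name of the environment variable to check
  if [ -z "$(printenv "$1")" ]; then
    echo "Missing required variable: $1" >&2
    return 1
  fi
}

# Example usage in a CI script:
# require_var AI_API_KEY && ai-code-review --local
```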

### Configuration Methods (Priority Order)

The tool supports **4 configuration methods** with the following priority:

1. **🔴 CLI Arguments** (highest priority) - `--ai-provider anthropic --ai-model claude-sonnet-4-5`
2. **🟡 Environment Variables** - `export AI_PROVIDER=anthropic`
3. **🟢 Configuration File** - `.ai_review/config.yml`
4. **⚪ Field Defaults** (lowest priority) - Built-in defaults
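
The resolution order can be sketched in plain shell. This is an illustration of the rule, not the tool's actual implementation, and the `gemini` fallback is an assumption:

```shell
# Illustrative precedence: CLI flag > environment variable > config file > default
resolve_provider() {
  cli_flag="$1"       # value of --ai-provider, may be empty
  config_value="$2"   # value parsed from .ai_review/config.yml, may be empty
  if [ -n "$cli_flag" ]; then
    echo "$cli_flag"
  elif [ -n "${AI_PROVIDER:-}" ]; then
    echo "$AI_PROVIDER"
  elif [ -n "$config_value" ]; then
    echo "$config_value"
  else
    echo "gemini"     # built-in default (assumed)
  fi
}
```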

### Configuration File

Create a YAML configuration file for persistent settings:

```bash
# Create from template
cp .ai_review/config.yml.example .ai_review/config.yml

# Edit your project settings
nano .ai_review/config.yml
```

**Key benefits:**
- ✅ **Project-specific settings** - Different configs per repository
- ✅ **Team sharing** - Commit to git for consistent team settings
- ✅ **Reduced typing** - Set common options once
- ✅ **Layered override** - CLI arguments still override everything

**File locations:**
- **Auto-detected**: `.ai_review/config.yml` (loaded automatically if exists)
- **Custom path**: `--config-file path/to/custom.yml`
- **Disable loading**: `--no-config-file` flag
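
As a rough sketch, a minimal config file might look like this. The `ai_provider` key name is an assumption mirroring the `--ai-provider` flag; the review-context keys appear later in this README under "Intelligent Review Context". Treat `config.yml.example` as the authoritative reference:

```yaml
# .ai_review/config.yml — illustrative sketch only
ai_provider: gemini          # assumed key; mirrors the --ai-provider CLI flag
enable_review_context: true
enable_review_synthesis: true
```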

### Environment Variables

For sensitive data and CI/CD environments:
```bash
# Copy template
cp env.example .env

# Then edit .env and set your tokens:
GITLAB_TOKEN=glpat_xxxxxxxxxxxxxxxxxxxx
AI_API_KEY=your_gemini_api_key_here
```

### Common Options

```bash
# Different AI providers
ai-code-review group/project 123 --ai-provider anthropic  # Claude
ai-code-review group/project 123 --ai-provider ollama     # Local Ollama

# Custom server URLs
ai-code-review group/project 123 --gitlab-url https://gitlab.company.com

# Output options
ai-code-review group/project 123 -o review.md          # Save to file
ai-code-review group/project 123 2>logs.txt            # Logs go to stderr
```

**For all configuration options, troubleshooting, and advanced usage → see [User Guide](docs/user-guide.md)**

### Team/Organization Context

For teams working on multiple projects, you can specify a **shared team context** that applies organization-wide:

```bash
# Remote team context (recommended - stored in central repo)
export TEAM_CONTEXT_FILE=https://gitlab.com/org/standards/-/raw/main/review.md
ai-code-review --local

# Or use CLI option
ai-code-review project/123 --team-context-file https://company.com/standards/review.md --post

# Local team context file
ai-code-review --team-context-file ../team-standards.md --local
```

**Use cases:**
- Organization-wide coding standards
- Security requirements and compliance rules
- Team conventions shared across projects
- Industry-specific guidelines (HIPAA, GDPR, etc.)

**Priority order:** Team context → Project context → Commit history

This allows maintaining org standards while individual projects add specific guidelines.

**See [User Guide - Team Context](docs/user-guide.md#-teamorganization-context) for complete documentation.**

### Intelligent Review Context (Two-Phase Synthesis)

The tool uses a **two-phase approach** to incorporate previous reviews and avoid repeating mistakes:

**Phase 1 - Synthesis (automatic):**
- Fetches **ALL** comments and reviews (including resolved ones)
- Uses a fast model (e.g., `gemini-3-flash-preview`) to synthesize key insights
- Identifies author corrections to previous AI reviews
- Generates a concise summary (under 500 words)

**Phase 2 - Main Review:**
- Uses synthesis as context to avoid repeating mistakes
- Focuses on code changes with awareness of discussions

**Benefits:**
- ✅ Prevents repeating invalidated suggestions
- ✅ Reduces token usage (synthesis is much shorter than raw comments)
- ✅ Lower costs (fast model for preprocessing)
- ✅ Better quality (focused insights vs raw data)

**Configuration:**
```yaml
# Enable/disable (default: enabled)
enable_review_context: true
enable_review_synthesis: true

# Custom synthesis model (optional)
synthesis_model: "gemini-3-flash-preview"  # Default for Gemini
# synthesis_model: "claude-haiku-4-5"  # For Anthropic
# synthesis_model: "gpt-4o-mini"  # For OpenAI
```

**Skips automatically when:**
- No comments/reviews exist (first review)
- Feature is disabled

## ⚡ Smart Skip Review

**AI Code Review automatically skips unnecessary reviews** to reduce noise and costs:

- 🔄 **Dependency updates** (`chore(deps): bump lodash 4.1.0 to 4.2.0`)
- 🤖 **Bot changes** (from `dependabot[bot]`, `renovate[bot]`)
- 📝 **Documentation-only** changes (if enabled)
- 🏷️ **Tagged PRs/MRs** (`[skip review]`, `[automated]`)
- 📝 **Draft/WIP PRs/MRs** (work in progress)

**Result:** Reviews focus on meaningful changes, API costs drop, and CI/CD pipelines run faster.
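
The idea behind these checks can be sketched in a few lines of shell. This is purely illustrative; the tool's real detection is configurable and more robust:

```shell
# Illustrative sketch of the skip heuristics (not the tool's actual logic)
should_skip() {
  title="$1"   # MR/PR title
  author="$2"  # MR/PR author username
  case "$title" in
    *"[skip review]"*|*"[automated]"*) return 0 ;;  # explicit tags
    "chore(deps):"*) return 0 ;;                    # dependency updates
    "Draft:"*|"WIP:"*) return 0 ;;                  # work in progress
  esac
  case "$author" in
    *"[bot]") return 0 ;;                           # bot authors
  esac
  return 1
}
```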

> **📖 Learn more:** Configuration, customization, and CI integration → [User Guide - Skip Review](docs/user-guide.md#smart-skip-review)

## For Developers

### Development Setup

```bash
# Install using uv (recommended)
uv sync --all-extras

# Or with pip
pip install -e ".[dev]"
```

> To install or learn more about `uv`, see [uv](https://docs.astral.sh/uv).

## 📁 Project Context: The Highest-Impact Improvement You Can Make

Adding a `.ai_review/project.md` file to your repository is the single most effective thing
you can do to improve review quality. Without it, the AI reviewer sees only the diff — it
doesn't know your architecture, your internal libraries, or the conventions your team follows.
With it, the quality difference is dramatic.

**Three specific problems a context file solves:**

1. **Stale knowledge** — LLMs have a training cutoff. Without a context file, reviewers suggest
   outdated library versions, flag current versions as "non-existent", and recommend deprecated
   APIs. A context file with your actual dependency versions fixes this completely.

2. **Diff-only visibility** — The reviewer sees only the changed lines, not how they connect to
   the rest of the system. A context file explains your architecture, patterns, and abstractions
   so the reviewer can judge whether a change fits or conflicts with the codebase.

3. **Internal/proprietary knowledge** — Your internal libraries, custom frameworks, and team
   conventions are unknown to any LLM. Without documentation, the reviewer will make wrong
   assumptions about them. The context file tells it what's internal and how it works.
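
To make this concrete, here is a minimal sketch of what a `.ai_review/project.md` might contain. Every detail below is invented for illustration:

```markdown
# Project Context

## Stack
- Python 3.12, FastAPI 0.115, SQLAlchemy 2.0  <!-- your real, current versions -->

## Architecture
- Hexagonal: adapters in `app/adapters/`, domain logic in `app/core/`

## Internal libraries
- `acme-auth`: in-house auth client; tokens are refreshed by middleware,
  so handlers must NOT refresh them manually

## Conventions
- All public functions have type hints and Google-style docstrings
```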

**How to create one:** Use the
[context-generator skill](https://github.com/juanje/context-generator) — a standalone AI
agent skill for Claude Code and Cursor that generates and maintains `.ai_review/project.md`
automatically. Install it once and ask your assistant to generate the file:

```bash
# Install for Claude Code
git clone git@github.com:juanje/context-generator.git ~/.claude/skills/context-generator

# Install for Cursor
git clone git@github.com:juanje/context-generator.git ~/.cursor/skills-cursor/context-generator
```

Then: *"Generate a context file for this project"*

Commit `.ai_review/project.md` to your repository. The CI/CD review job picks it up
automatically on every MR/PR — no extra configuration needed.

---

## 🔧 Common Issues

For troubleshooting common issues, see the [User Guide](docs/user-guide.md).

## 📖 Documentation

- **[User Guide](docs/user-guide.md)** - Complete usage, configuration, and troubleshooting
- **[Context Generator Guide](docs/context-generator.md)** - How to create context files for better reviews
- **[Developer Guide](docs/developer-guide.md)** - Development setup, architecture, and contributing

## 🤖 AI Tools Disclaimer

<details>
<summary>This project was developed with the assistance of artificial intelligence tools</summary>

**Tools used:**
- **Cursor**: Code editor with AI capabilities
- **Claude-Sonnet-4.5**: Anthropic's latest language model (claude-sonnet-4-5)

**Division of responsibilities:**

**AI (Cursor + Claude-Sonnet-4.5)**:
- 🔧 Initial code prototyping
- 📝 Generation of examples and test cases
- 🐛 Assistance in debugging and error resolution
- 📚 Documentation and comments writing
- 💡 Technical implementation suggestions

**Human (Juanje Ojeda)**:
- 🎯 Specification of objectives and requirements
- 🔍 Critical review of code and documentation
- 💬 Iterative feedback and solution refinement
- ✅ Final validation of concepts and approaches

**Crotchety old human (Adam Williamson)**:
- 👴🏻 Adapted GitHub client and tests for Forgejo using 100% artisanal human brainpower

**Collaboration philosophy**: AI tools served as a highly capable technical assistant, while all design decisions, educational objectives, and project directions were defined and validated by the human.
</details>

## 📄 License

MIT License - see LICENSE file for details.

## 👥 Author

- **Author:** Juanje Ojeda
- **Email:** juanje@redhat.com
- **URL:** <https://gitlab.com/redhat/edge/ci-cd/ai-code-review>
