Metadata-Version: 2.4
Name: pixie-qa
Version: 0.8.6
Summary: Automated quality assurance for AI applications
Project-URL: Homepage, https://github.com/yiouli/pixie-qa
Project-URL: Repository, https://github.com/yiouli/pixie-qa
Project-URL: Documentation, https://yiouli.github.io/pixie-qa/
Project-URL: Bug Tracker, https://github.com/yiouli/pixie-qa/issues
License: MIT License
        
        Copyright (c) 2026 Yiou Li
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
License-File: LICENSE
Keywords: ai,evals,llm,observability,opentelemetry,testing
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Testing
Requires-Python: >=3.10
Requires-Dist: autoevals>=0.1.0
Requires-Dist: jsonpickle>=4.0.0
Requires-Dist: openai>=2.29.0
Requires-Dist: openinference-instrumentation>=0.1.44
Requires-Dist: opentelemetry-api>=1.27.0
Requires-Dist: opentelemetry-sdk>=1.27.0
Requires-Dist: pydantic>=2.0
Requires-Dist: python-dotenv>=1.2.2
Requires-Dist: starlette>=1.0.0
Requires-Dist: uvicorn>=0.42.0
Requires-Dist: watchfiles>=1.1.1
Provides-Extra: all
Requires-Dist: openinference-instrumentation-anthropic; extra == 'all'
Requires-Dist: openinference-instrumentation-dspy; extra == 'all'
Requires-Dist: openinference-instrumentation-google-genai; extra == 'all'
Requires-Dist: openinference-instrumentation-langchain; extra == 'all'
Requires-Dist: openinference-instrumentation-openai; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: openinference-instrumentation-anthropic; extra == 'anthropic'
Provides-Extra: dspy
Requires-Dist: openinference-instrumentation-dspy; extra == 'dspy'
Provides-Extra: google
Requires-Dist: openinference-instrumentation-google-genai; extra == 'google'
Provides-Extra: langchain
Requires-Dist: openinference-instrumentation-langchain; extra == 'langchain'
Provides-Extra: openai
Requires-Dist: openinference-instrumentation-openai; extra == 'openai'
Description-Content-Type: text/markdown

# Pixie-QA

[![Skill](https://img.shields.io/badge/Skill-eval--driven--dev-blueviolet?style=flat&logo=anthropic&logoColor=white)](https://skills.sh/github/awesome-copilot/eval-driven-dev)
[![PyPI package](https://img.shields.io/pypi/v/pixie-qa?logo=pypi&logoColor=white&style=flat)](https://badge.fury.io/py/pixie-qa)

## Agent skill for Evaluation Driven Development

Pixie-QA is an agent skill that lets your coding agent systematically improve the quality of your AI application using an Evaluation Driven Development (EDD) approach. With the skill, your coding agent carries out the evaluate -> analyze -> implement cycle for you.

## Why Pixie-QA?

You've probably spent a lot of time tweaking the implementation of your AI feature, re-testing the same inputs, and still not knowing whether things actually got better.

You might have looked at eval products and decided they aren't worth the hassle: they are good at giving you fancy metrics and dashboards, but provide little help with actually improving your application.

Pixie-QA takes a different approach, focusing on producing actionable insights — specific action items that you or your coding agent can investigate further or directly implement in your code.

And because Pixie-QA runs locally inside your codebase, your data stays private and you're not locked into another platform.

## Demo

[Demo Video](https://github.com/user-attachments/assets/74565bd2-a7fc-4f31-909d-9697642e033d)

## How it Works

The skill guides your coding agent (Claude Code, Cursor, GitHub Copilot, etc.) through a 6-step pipeline:

1. **Analyze the app** — The agent reads your codebase, identifies entry points, maps capabilities, and defines eval criteria based on real failure modes (not generic quality checklists).

2. **Instrument data boundaries** — Lightweight `wrap()` calls are added where your app reads external data (databases, APIs, caches) and where it produces output. This lets the eval harness inject controlled inputs and capture results — without changing your app's logic.

3. **Build a Runnable** — A thin adapter that lets the eval harness invoke your app the same way a real user would. Your app runs its real code path, makes real LLM calls — nothing is mocked.

4. **Define evaluators** — Each eval criterion maps to a scoring function: LLM-as-judge for semantic quality, deterministic checks for structural requirements, or custom evaluators for domain-specific rules.

5. **Build a dataset** — Test cases with realistic inputs, pre-captured external data, and expected behavior. Each entry specifies which evaluators to run and what passing looks like.

6. **Run `pixie test` and analyze** — The harness runs all entries concurrently, scores them, and the agent analyzes results: which entries failed, why, and what to fix — in the app or in the eval setup itself. Each `pixie test` result directory should be fully analyzed before the next rerun starts.
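To make steps 4–6 concrete, here is a minimal sketch of a deterministic evaluator and a dataset entry. The function name `has_required_sections` and the dictionary shape of `entry` are invented for illustration; they are not pixie-qa's actual API.

```python
# Illustrative sketch only -- the function name and dataset shape below are
# invented for this example and are NOT pixie-qa's actual API.

def has_required_sections(output: str, required: list[str]) -> float:
    """Deterministic evaluator: fraction of required section headers present."""
    found = sum(1 for section in required if section in output)
    return found / len(required)

# A dataset entry pairs an input with pre-captured external data,
# the evaluators to run, and what passing looks like.
entry = {
    "input": "Summarize ticket #123",
    "captured_data": {"ticket_body": "App crashes on login since v2.1"},
    "evaluators": ["has_required_sections"],
    "expected": {"min_score": 1.0},
}

score = has_required_sections(
    "## Summary\nCrash on login.\n## Next steps\nBisect v2.1.",
    required=["## Summary", "## Next steps"],
)
```

Structural requirements like this are scored deterministically; semantic criteria ("is the summary faithful to the ticket?") would instead map to an LLM-as-judge evaluator.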

The output is a working eval pipeline and a detailed analysis plus action plan that you or your coding agent can implement.
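The instrumentation idea in step 2 can be illustrated in plain Python. This shows the general data-boundary pattern, not pixie-qa's actual `wrap()` signature or implementation:

```python
# Conceptual sketch of a data-boundary wrapper -- the general pattern behind
# step 2, NOT pixie-qa's actual wrap() implementation or signature.
from typing import Callable

_injected: dict[str, object] = {}  # inputs the eval harness wants to inject

def boundary(name: str, fetch: Callable[[], object]) -> object:
    """At a data boundary, return injected test data if the harness provided
    any; otherwise call the real data source unchanged."""
    if name in _injected:
        return _injected[name]
    return fetch()

# Production call site: reads the real data source when nothing is injected.
user = boundary("user_record", lambda: {"id": 1, "plan": "free"})

# During an eval run, the harness pre-loads controlled data for the same key:
_injected["user_record"] = {"id": 1, "plan": "enterprise"}
user_under_test = boundary("user_record", lambda: {"id": 1, "plan": "free"})
```

The point of the pattern is that production code paths stay unchanged: the same call site serves both real traffic and controlled eval inputs.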

## Get Started

Add the skill to your coding agent:

```bash
npx skills add yiouli/pixie-qa
```

Then simply talk to your coding agent in your project, e.g.:

- "Set up evals"
- "Improve my agent's output quality"
- "The AI response is wrong when ..., please fix"

## Privacy

pixie-qa records anonymous usage events to understand how the tool is used in
practice. No personal data, file contents, project names, or identifying
information are collected.

To opt out:

```bash
export PIXIE_NO_TELEMETRY=1
```
