Metadata-Version: 2.4
Name: llm2graph
Version: 0.3.5
Summary: Paper-aligned LLM-only graph construction, benchmark runners, and public-facing evaluation tools for LLM unlearning experiments.
Author: Raj Sanjay Shah
License-Expression: MIT
Project-URL: Homepage, https://pypi.org/project/llm2graph/
Keywords: knowledge-graph,llm,unlearning,benchmark,reproducibility,evaluation
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: networkx>=3.2
Requires-Dist: openai>=1.37
Requires-Dist: pydantic>=2.7
Requires-Dist: tenacity>=8.2
Requires-Dist: typer>=0.12
Provides-Extra: gemini
Requires-Dist: google-generativeai>=0.7; extra == "gemini"
Provides-Extra: hf-local
Requires-Dist: transformers>=4.44; extra == "hf-local"
Requires-Dist: accelerate>=0.33; extra == "hf-local"
Requires-Dist: sentencepiece>=0.2; extra == "hf-local"
Requires-Dist: einops>=0.7; extra == "hf-local"
Provides-Extra: dev
Requires-Dist: pytest>=8.3; extra == "dev"
Dynamic: license-file

# LLM2Graph

`llm2graph` is a toolkit for:

1. building entity-centric knowledge graphs with LLM calls
2. generating graph-derived evaluation queries
3. running benchmark-style evaluation workflows for unlearning and retention experiments

This `0.3.5` release includes fixes made since the `0.3.4` wheel:

- parallel question answering during graph construction
- parallel triple extraction during graph construction
- configurable question-generation prompt count instead of a hard-coded ten-question prompt
- GPT-5-family OpenAI requests omit an explicit `temperature` parameter
- default request timeout increased to 180 seconds across the package
- additional maintainer-facing documentation
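The parallel triple extraction mentioned above likely fans per-question LLM calls out across worker threads so slow network round-trips overlap. A minimal sketch of that pattern with `concurrent.futures`; the `extract_triples` function is a hypothetical placeholder, not the package's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_triples(question: str) -> list[tuple[str, str, str]]:
    # Placeholder for an LLM call; the real package would query the provider
    # and parse (subject, relation, object) triples from the response.
    return [("Stephen King", "wrote", question)]

questions = ["Carrie", "It", "Misery"]

# Run the per-question calls concurrently instead of sequentially.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(extract_triples, questions))

triples = [t for batch in results for t in batch]
print(triples)
```

`pool.map` preserves input order, so the flattened triple list is deterministic even though the calls overlap.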

## Install

```bash
pip install llm2graph
```

Optional extras:

```bash
pip install "llm2graph[gemini]"
pip install "llm2graph[hf-local]"
```

## Quickstart

```bash
llm2graph entity \
  --seed "Stephen King" \
  --provider openai \
  --model gpt-5-mini \
  --max-depth 0 \
  --elicitation-question-count 3 \
  --out graph.json
```
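Since `networkx` is a declared dependency, the emitted `graph.json` can plausibly be inspected with it. The node-link layout below is an assumption for illustration, not the package's documented schema:

```python
import networkx as nx

# A tiny stand-in for graph.json; this node-link shape is an assumed
# format, not a schema guaranteed by llm2graph.
sample = {
    "directed": True,
    "multigraph": False,
    "graph": {},
    "nodes": [{"id": "Stephen King"}, {"id": "Carrie"}],
    "links": [{"source": "Stephen King", "target": "Carrie", "relation": "wrote"}],
}

# Rebuild a graph object from the node-link dict and inspect it.
G = nx.node_link_graph(sample)
print(G.number_of_nodes(), G.number_of_edges())
print(list(G.edges(data=True)))
```

Edge attributes such as `relation` survive the round trip, which is what makes graph-derived query generation possible downstream.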

```bash
llm2graph gen-queries \
  --graph graph.json \
  --target "Stephen King" \
  --hops 2 \
  --out queries.json
```
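The `--hops 2` option presumably bounds query generation to facts within two hops of the target node. A sketch of that neighborhood computation with `networkx` (illustrative, not the package's actual code):

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Stephen King", "Carrie"),
    ("Carrie", "1974"),
    ("1974", "Nixon resigns"),  # three hops out: outside the budget
])

# Nodes reachable from the target within the hop budget.
two_hop = nx.single_source_shortest_path_length(G, "Stephen King", cutoff=2)
print(sorted(two_hop))
```

Everything within distance 2 of the target is kept; the three-hop node is excluded from the candidate set.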

```bash
llm2graph eval \
  --queries queries.json \
  --pre-provider openai \
  --pre-model gpt-5-mini \
  --post-provider openai \
  --post-model gpt-5-mini \
  --out eval_report.json
```
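The eval step compares answers from the pre- and post-unlearning models on the same query set. A hedged sketch of the kind of comparison involved; the scoring here is an illustrative assumption, not the report's actual metric:

```python
# Per-query answers before and after unlearning. A successful unlearning
# run should lose accuracy on target queries while retaining the rest.
pre  = {"q1": "Carrie", "q2": "Maine", "q3": "Bangor"}
post = {"q1": "I don't know", "q2": "Maine", "q3": "Bangor"}
gold = {"q1": "Carrie", "q2": "Maine", "q3": "Bangor"}

def accuracy(answers):
    # Fraction of queries answered exactly like the gold reference.
    return sum(answers[q] == gold[q] for q in gold) / len(gold)

forgetting = accuracy(pre) - accuracy(post)  # accuracy drop on these queries
print(f"pre={accuracy(pre):.2f} post={accuracy(post):.2f} forgetting={forgetting:.2f}")
```

Splitting the query set into target-adjacent and unrelated subsets would separate forgetting from collateral retention loss, which is the usual distinction in unlearning benchmarks.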
