Metadata-Version: 2.4
Name: liel
Version: 0.2.12
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Rust
Classifier: Topic :: Database
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Dist: mcp>=1.0 ; extra == 'mcp'
Provides-Extra: mcp
License-File: LICENSE
Summary: Single-file graph memory for local AI, agents, and Python applications
Keywords: graph,database,embedded,memory,ai,mcp
Home-Page: https://github.com/hy-token/liel
Author-email: hy-token <51951093+hy-token@users.noreply.github.com>
License: MIT
Requires-Python: >=3.9
Description-Content-Type: text/markdown; charset=UTF-8; variant=GFM
Project-URL: Bug Tracker, https://github.com/hy-token/liel/issues
Project-URL: Documentation, https://github.com/hy-token/liel/blob/main/docs/index.md
Project-URL: Homepage, https://github.com/hy-token/liel
Project-URL: Repository, https://github.com/hy-token/liel

# liel

[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/hy-token/liel/blob/main/LICENSE)
[![CI](https://img.shields.io/github/actions/workflow/status/hy-token/liel/ci.yml?branch=main&label=CI)](https://github.com/hy-token/liel/actions/workflows/ci.yml)

The name comes from the French *lier* — to connect, to bind.

**A portable external brain for local AI agents** — one file, structured by relationships.

```bash
pip install liel
liel-demo
```

Runs fully locally. No API keys required (LLM optional).

`liel` is a single-file graph memory layer for people using local AI agents while coding. One `.liel` file stores decisions, tasks, sources, files, facts, and the relationships between them, so tools can recall *why* decisions were made, not just what was said.

The core is a small Rust **property graph** engine with **Python (PyO3)** bindings and optional MCP tools. No server, no cloud, no daemon.

## Why Local-First

- **Your code stays on your machine.** No API keys, no telemetry, no cloud round-trips.
- **Works with any LLM.** Local (Ollama, LM Studio) or cloud (Claude, GPT); the memory itself always stays local.
- **Offline-friendly.** Memory persists across sessions without network access.
- **One file, no lock-in.** Copy, commit, archive, and open with any tool that speaks `.liel`.

## LLM Setup

Use `liel` as project memory through MCP:

```bash
pip install "liel[mcp]"
```

Configure your LLM client to start the `liel` MCP server. In Claude Code, edit
`.mcp.json` in the project root like this:

```json
{
  "mcpServers": {
    "liel": {
      "type": "stdio",
      "command": "/absolute/path/to/liel-mcp",
      "args": ["--path", "/absolute/path/to/agent-memory.liel"]
    }
  }
}
```

Use the installed `liel-mcp` executable for `command`, and set `--path` to the
`.liel` file the AI should use as durable memory. For other LLM/MCP clients,
use the equivalent MCP server setting with the same command and args.

Do not put `mcpServers` in `.claude/settings.json`; that file is for Claude
Code settings such as permissions and environment variables.

For first-time setup, `--path` is the clearest option. If the file does not
exist yet, `liel` creates it on first open. Without `--path`, the server checks
only the startup directory: if no `*.liel` file exists there, it uses
`./memory.liel`; if one exists, it uses that file; if multiple files exist, it
prints the candidates and asks you to specify the intended file with `--path`
instead of choosing one silently.
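
The same selection rules, restated as a small Python sketch (an illustration only; `resolve_memory_path` is a hypothetical helper, not the server's implementation):

```python
from pathlib import Path
from typing import Optional

def resolve_memory_path(path_arg: Optional[str], start_dir: str = ".") -> Path:
    if path_arg is not None:
        # --path always wins; liel creates the file on first open if it is missing.
        return Path(path_arg)
    candidates = sorted(Path(start_dir).glob("*.liel"))
    if not candidates:
        return Path(start_dir) / "memory.liel"
    if len(candidates) == 1:
        return candidates[0]
    raise SystemExit(
        "Multiple .liel files found: "
        + ", ".join(c.name for c in candidates)
        + ". Pass --path to choose one explicitly."
    )
```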

Then add a memory policy to the agent's project instructions. Start with the
[AI memory playbook](docs/guide/mcp/agent-memory.md), or use the
[sample `CLAUDE.md`](docs/guide/mcp/samples/CLAUDE.md) as a longer Claude
template.

## Recommended LLM Memory Pattern

When using `liel` as project memory:

- Always check existing memory before asking the user to repeat context.
- Save only durable, high-signal information: decisions, preferences, tasks,
  sources, and important project facts.
- Do not store temporary reasoning, speculative notes, noisy logs, or every tool result.
- Write at meaningful checkpoints, not every turn (see the sketch after this list).
- Use nodes for entities and edges for relationships.
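
As a rough illustration of this pattern, here is a hypothetical checkpoint write using the same calls shown in the Try It section below. The labels (`Preference`, `Decision`, `MOTIVATED_BY`) are illustrative, not a fixed schema:

```python
import liel

# Notes gathered during a work session; only durable, high-signal ones are saved.
with liel.open("agent-memory.liel") as db:
    preference = db.add_node(
        ["Preference"],
        content="User prefers explicit SQL migrations over ORM autogeneration",
    )
    decision = db.add_node(
        ["Decision"],
        content="Write the auth-table migrations by hand",
    )
    db.add_edge(decision, "MOTIVATED_BY", preference)
    db.commit()  # one commit per checkpoint, not one per turn
```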

## Try It

```python
import liel

with liel.open("agent-memory.liel") as db:
    # Nodes capture the entities worth remembering: tasks, questions, options, decisions, sources.
    task = db.add_node(
        ["Task"],
        description="Migrate auth from JWT to server-side sessions",
    )
    question = db.add_node(
        ["OpenQuestion"],
        content="Use Redis or PostgreSQL for the session store?",
    )
    rejected = db.add_node(
        ["RejectedOption"],
        option="Redis",
        reason="Adds another infrastructure dependency",
    )
    decision = db.add_node(
        ["Decision"],
        content="Use a PostgreSQL session table",
    )
    source = db.add_node(["Source"], title="Auth migration notes")

    # Edges capture why: what was raised, rejected, resolved, and supported.
    db.add_edge(task, "RAISED", question)
    db.add_edge(question, "REJECTED", rejected)
    db.add_edge(question, "RESOLVED_BY", decision)
    db.add_edge(decision, "SUPPORTED_BY", source)
    db.commit()

    # Traverse from the question to the decision that resolved it.
    for node in db.neighbors(question, edge_label="RESOLVED_BY"):
        print(node["content"])
```

## Compared To Mem0 / Letta / Zep

`liel` is intentionally lower-level and local-first. It ships as a single `.liel` file with no server, no API keys, and no required vector index. Relationships are explicit edges you write and traverse, not only facts inferred from chat history.

Mem0, Letta, and Zep may be a better fit when you want a hosted service, a full agent runtime, automatic memory extraction, temporal graph intelligence, dashboards, or production-scale context assembly. `liel` is the smaller substrate, aimed at local coding agents and project-adjacent tools that need durable, inspectable graph memory they can copy, commit, archive, and open from Python or MCP.

## The Zen of Liel

- One file, any place.
- No server, no waiting.
- Minimal dependencies, simple environments.
- Start small, stay local.

## Documentation

- [Why liel](docs/why-liel.md) - what it solves and what it does not
- [Quickstart](docs/guide/quickstart.md) - demo, Python, and MCP paths
- [AI memory playbook](docs/guide/mcp/agent-memory.md) - recommended LLM memory pattern
- [Sample CLAUDE.md](docs/guide/mcp/samples/CLAUDE.md) - Claude project-instructions template
- [Architecture](docs/design/architecture.md) - system layers and the Mermaid diagram
- [Python guide](docs/guide/connectors/python.md) - API, transactions, traversal
- [MCP guide](docs/guide/mcp/index.md) - Claude and other MCP-capable tools
- [Feature list](docs/reference/features.md) - what is provided at a glance
- [Reliability](docs/reference/reliability.md) - commit semantics, crash recovery, repair
- [Format spec](docs/reference/format-spec.md) - byte-level `.liel` file format
- [Product trade-offs](docs/design/product-tradeoffs.md) - what liel does not do, and why

## Status

`liel` is currently a **Beta** package. The supported contract is the Python-first API plus the single-writer, single-file reliability model. There is no semantic/vector search in core, and `commit()` defines crash-safe boundaries. Breaking changes before `1.0` are tracked in the [changelog](CHANGELOG.md).
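
As a sketch of what that boundary means in practice (assuming only the commit semantics stated above):

```python
import liel

with liel.open("agent-memory.liel") as db:
    durable = db.add_node(["Fact"], content="Written before the commit")
    db.commit()  # crash-safe boundary: this node survives a crash from here on
    pending = db.add_node(["Fact"], content="Written after the commit")
    # If the process crashes here, `pending` may be lost; `durable` is preserved.
```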

## Contributing

Pull requests and issues are welcome. A good first step is to run `liel-demo` and note anything confusing about the output, memory model, or docs.

See [CONTRIBUTING.md](CONTRIBUTING.md).

## Author

Built by Hayato under [`hy-token`](https://github.com/hy-token), a personal namespace for small local-first tools and AI infrastructure experiments.

## License

[MIT](LICENSE)

