Metadata-Version: 2.4
Name: hyperstack-langgraph
Version: 1.1.0
Summary: Knowledge graph memory for LangGraph agents. Portable memory across tools, multi-agent coordination, zero LLM cost, time-travel debugging.
Author-email: CascadeAI <deeq.yaqub1@gmail.com>
License: MIT
Project-URL: Homepage, https://cascadeai.dev/hyperstack
Project-URL: Repository, https://github.com/deeqyaqub1-cmd/hyperstack-langgraph
Project-URL: Documentation, https://cascadeai.dev/hyperstack
Keywords: langgraph,memory,knowledge-graph,ai-agents,hyperstack,langchain,multi-agent,portable-memory
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: langgraph
Requires-Dist: langgraph>=0.2; extra == "langgraph"
Requires-Dist: langchain-core>=0.3; extra == "langgraph"
Dynamic: license-file

# hyperstack-langgraph

Knowledge graph memory for LangGraph agents. Developer-controlled, zero LLM cost, time-travel debugging.

## Install

```bash
pip install hyperstack-langgraph langchain-core langgraph
```

## Quick Start (3 lines)

```python
from hyperstack_langgraph import create_memory_agent
from langchain_openai import ChatOpenAI

agent = create_memory_agent(ChatOpenAI(model="gpt-4o"))
```

That's it. Your agent now has persistent knowledge graph memory. It will:
- **Search memory** at the start of every conversation
- **Store important facts** when decisions are made (with user confirmation)
- **Traverse the graph** to answer "what depends on X?" or "who decided Y?"
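Conceptually, answering "what depends on X?" is a breadth-first walk over cards and their links. A toy illustration in plain Python (the card ids and relations here are made up for the example; the real traversal runs server-side via `hyperstack_graph`):

```python
from collections import deque

# Toy knowledge graph: card id -> list of (target, relation) links.
# These cards and relations are illustrative, not real HyperStack data.
links = {
    "auth-service": [("use-clerk", "depends_on")],
    "use-clerk": [("alice", "decided_by")],
    "alice": [],
}

def traverse(start, depth):
    """Breadth-first walk up to `depth` hops from a starting card."""
    seen, frontier, result = {start}, deque([(start, 0)]), []
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # reached the hop limit along this path
        for target, relation in links.get(node, []):
            result.append((node, relation, target))
            if target not in seen:
                seen.add(target)
                frontier.append((target, d + 1))
    return result

print(traverse("auth-service", 2))
# [('auth-service', 'depends_on', 'use-clerk'), ('use-clerk', 'decided_by', 'alice')]
```

The `depth` parameter mirrors the `depth=` argument shown in the client examples below: depth 1 returns direct links only, depth 2 follows links of links, and so on.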

## Environment Variables

```bash
export HYPERSTACK_API_KEY=hs_your_key    # Get free at cascadeai.dev/hyperstack
export HYPERSTACK_WORKSPACE=default
export OPENAI_API_KEY=sk-...             # For your LLM
```
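A quick way to fail fast on missing configuration before constructing the agent. The variable names come from the list above; the check itself is just a suggestion, not part of the package:

```python
import os

# Required configuration, per the environment variables listed above.
REQUIRED = ("HYPERSTACK_API_KEY", "OPENAI_API_KEY")

def missing_env(env=os.environ):
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Demonstrated with an explicit dict instead of the real environment:
print(missing_env({"HYPERSTACK_API_KEY": "hs_test"}))  # ['OPENAI_API_KEY']
```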

## Usage: Add Memory Tools to Existing Agent

```python
from hyperstack_langgraph import create_hyperstack_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Create memory tools
memory_tools = create_hyperstack_tools()

# Add to your existing tools
my_tools = [my_calculator, my_web_search] + memory_tools

# Create agent with memory
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), my_tools)

# Use it
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What do we know about our auth setup?"}]},
    config={"configurable": {"thread_id": "session-1"}}
)
```

## Usage: Full Agent with Session Memory

```python
from hyperstack_langgraph import create_memory_agent
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver

agent = create_memory_agent(
    ChatOpenAI(model="gpt-4o"),
    checkpointer=MemorySaver(),  # Session memory (optional)
)

config = {"configurable": {"thread_id": "project-alpha"}}

# First message — agent searches HyperStack for context
agent.invoke({"messages": [{"role": "user", "content": "Let's work on the auth system"}]}, config)

# Agent remembers within session (MemorySaver) AND across sessions (HyperStack)
agent.invoke({"messages": [{"role": "user", "content": "We decided to use Clerk"}]}, config)
```

## Usage: Direct API Client

```python
from hyperstack_langgraph import HyperStackClient

client = HyperStackClient()

# Store
client.store("use-clerk", "Use Clerk for Auth", "Chose Clerk over Auth0",
             card_type="decision", keywords=["clerk", "auth"],
             links=[{"target": "alice", "relation": "decided"}])

# Search
client.search("authentication")

# Graph traversal
client.graph("use-clerk", depth=2)

# Time-travel (Pro+)
client.graph("use-clerk", depth=2, at="2026-02-01T00:00:00Z")
```
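If you build card payloads dynamically (e.g. from agent output), it can help to sanity-check their shape before calling `client.store`. A minimal validator, assuming only the fields shown in the example above (`card_type`, `keywords`, and `links` entries with `target`/`relation`); the specific rules are illustrative, not an official schema:

```python
def validate_card(card_id, title, content, card_type="fact", keywords=(), links=()):
    """Lightweight shape check for a card before storing it.

    Mirrors the arguments shown in the store() example above; the
    checks here are an assumption, not an official HyperStack schema.
    """
    errors = []
    if not card_id or " " in card_id:
        errors.append("card_id must be a non-empty slug without spaces")
    if not title or not content:
        errors.append("title and content are required")
    for link in links:
        if not {"target", "relation"} <= set(link):
            errors.append(f"link missing target/relation: {link!r}")
    return errors

print(validate_card("use-clerk", "Use Clerk for Auth", "Chose Clerk over Auth0",
                    links=[{"target": "alice", "relation": "decided"}]))
# []
```

An empty list means the payload is safe to pass through; anything else can be surfaced back to the agent before the store call is made.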

## Tools Provided

| Tool | Description |
|------|-------------|
| `hyperstack_search` | Search memory for relevant context |
| `hyperstack_store` | Save a fact, decision, preference, or person |
| `hyperstack_graph` | Traverse knowledge graph (impact analysis, decision trails) |
| `hyperstack_list` | List all stored memories |
| `hyperstack_delete` | Remove outdated memories |
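If you want a read-only agent (one that can recall but never write), you can filter the tool list by name before passing it to `create_react_agent`. A sketch assuming each tool object exposes a `name` attribute, as LangChain tools do; the stand-in objects below are only for illustration:

```python
from types import SimpleNamespace

# Tools from the table above that cannot modify stored memories.
READ_ONLY = {"hyperstack_search", "hyperstack_graph", "hyperstack_list"}

def read_only_tools(tools):
    """Keep only the tools that cannot write to or delete from memory."""
    return [t for t in tools if t.name in READ_ONLY]

# Stand-in objects for illustration; in practice pass create_hyperstack_tools().
tools = [SimpleNamespace(name=n) for n in
         ("hyperstack_search", "hyperstack_store", "hyperstack_delete")]
print([t.name for t in read_only_tools(tools)])  # ['hyperstack_search']
```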

## Why HyperStack?

- **You control the graph.** No LLM auto-extraction. No phantom relationships. Your agent explicitly defines cards and links.
- **Zero LLM cost per memory op.** Mem0/Zep charge ~$0.002 per operation. HyperStack: $0.
- **Time-travel debugging.** See the graph as it existed at any point in time. "Git blame for agent memory."
- **30-second setup.** No Neo4j, no Docker, no OpenSearch. One API key, done.

## Pricing

| Plan | Cards | Graph | Price |
|------|-------|-------|-------|
| Free | 10 | ❌ | $0 |
| Pro | 100 | ✅ + time-travel | $29/mo |
| Team | 500 | ✅ | $59/mo |
| Business | 2,000 | ✅ | $149/mo |

Get a free API key at [cascadeai.dev/hyperstack](https://cascadeai.dev/hyperstack)

## License

MIT
