Metadata-Version: 2.4
Name: catalyst-brain
Version: 1.0.0
Summary: Catalyst Brain: O(1) Holographic Key-Value Cache, Quantum Attention, and Metacognitive Engine.
License-Expression: LicenseRef-Research-Eval
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Rust
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Dynamic: license-file

<div align="center">
  <h1>Catalyst Brain SDK</h1>
  <p><b>O(1) Holographic Memory, Grover-Amplified Attention, and Metacognitive Self-Improvement</b></p>
  <p>
    <a href="https://pypi.org/project/catalyst-brain/">
      <img src="https://img.shields.io/pypi/v/catalyst-brain?color=blue&label=PyPI" alt="PyPI">
    </a>
    <a href="https://pypi.org/project/catalyst-brain/">
      <img src="https://img.shields.io/pypi/pyversions/catalyst-brain" alt="Python">
    </a>
    <img src="https://img.shields.io/badge/Rust-1.75+-orange" alt="Rust">
    <img src="https://img.shields.io/badge/License-Research_%26_Eval-red" alt="License">
  </p>
</div>

---

**catalyst-brain** is a Rust+PyO3 SDK providing hyperdimensional computing primitives for AI agents. Install from PyPI:

```bash
pip install catalyst-brain
```

Then import the SDK:

```python
import catalyst_hdc as hdc

# Core HDC primitives
a = hdc.rand_bipolar(4096)
b = hdc.rand_bipolar(4096)
score = hdc.resonance(a, b)     # → ~0.5 for quasi-orthogonal vectors
bound = hdc.hdc_bind(a, b)     # XOR-like binding (self-inverse)
bundled = hdc.hdc_bundle(a, b)  # majority-vote superposition
shifted = hdc.hdc_permute(a, 3) # circular shift

# O(1) cognitive memory
from catalyst_hdc import PyHoloCPUScheduler
cpu = PyHoloCPUScheduler(dim=4096, quantum_capacity=8)
cpu.store_memory("user_pref_dark_mode")
assert cpu.recall("user_pref_dark_mode")
cpu.process_dopamine_hit(0.95)
print(cpu.dopamine_level())  # → 0.95
```

---

## SDK Reference

### Core HDC Primitives

Raw hypervector algebra. All other SDK classes are built on these.

| Function | Signature | Description |
|---|---|---|
| `rand_bipolar` | `(dim: int) → list[float]` | Random `{−1, +1}` hypervector |
| `resonance` | `(a, b) → float` | Cosine similarity normalized to [0, 1] |
| `hdc_bind` | `(a, b) → list[float]` | XOR-like binding (self-inverse: `bind(bind(a,b),b) == a`) |
| `hdc_bundle` | `(a, b) → list[float]` | Majority-vote superposition |
| `hdc_permute` | `(v, n) → list[float]` | Circular shift by `n` positions |
| `normalise_bipolar` | `(v) → list[float]` | Normalize to bipolar range |

```python
a = hdc.rand_bipolar(4096)
b = hdc.rand_bipolar(4096)

# Self-inverse binding (XOR)
assert 0.4 < hdc.resonance(a, b) < 0.6     # quasi-orthogonal (≈ 0.5)
bound = hdc.hdc_bind(a, b)
recovered = hdc.hdc_bind(bound, b)
assert hdc.resonance(a, recovered) > 0.99  # (a ⊕ b) ⊕ b = a (bit-exact)

# Bundle N vectors using reduce
from functools import reduce
vectors = [hdc.rand_bipolar(4096) for _ in range(4)]
superposition = reduce(hdc.hdc_bundle, vectors)
```

---

### HoloCPU SDK — Cognitive Compute Engine

O(1) semantic memory with Grover-amplified attention routing.

```python
from catalyst_hdc import PyHoloCPUScheduler
import catalyst_hdc as hdc

cpu = PyHoloCPUScheduler(dim=4096, quantum_capacity=8)
```

#### Memory

```python
# Store and recall — O(1) regardless of how many memories exist
cpu.store_memory("user_preference_dark_mode")
cpu.store_memory("last_query")

assert cpu.recall("user_preference_dark_mode")
assert not cpu.recall("nonexistent_key")

# Export entire cognitive state as a single 4096-float hypervector (16 KB constant)
state = cpu.export_holographic_state()
assert len(state) == 4096  # always 16 KB
```

#### Dopamine Feedback (RLHF replacement)

```python
# Signal quality of an inference result (0.0 = bad, 1.0 = perfect)
cpu.process_dopamine_hit(0.95)  # positive outcome
print(cpu.dopamine_level())     # → 0.95 (elevated from baseline 0.5)

cpu.process_dopamine_hit(0.1)   # negative outcome
print(cpu.dopamine_level())     # → drops toward baseline
```

#### Grover-Amplified Attention

```python
# quantum_grover_search takes a hypervector query + lists of key/value hypervectors
query = hdc.rand_bipolar(4096)
keys  = [hdc.rand_bipolar(4096) for _ in range(8)]
values = [hdc.rand_bipolar(4096) for _ in range(8)]

output = cpu.quantum_grover_search(query, keys, values)
# Returns a 4096-dim output vector from Grover-amplified routing
assert len(output) == 4096
```

#### Role Vectors

```python
# Generate orthogonal role hypervectors for structured binding
agent   = cpu.generate_role("agent")
user    = cpu.generate_role("user")
system  = cpu.generate_role("system")

# Use for structured message encoding: message = bind(content, agent_role)
```
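The structured-encoding pattern in the comment above can be sketched without the SDK. Below is a plain-Python illustration of role binding; the `rand_bipolar`, `bind`, and `similarity` helpers are illustrative stand-ins (elementwise multiplication over {−1, +1} is XOR-like and self-inverse), not SDK functions:

```python
import random

def rand_bipolar(dim, seed=None):
    """Random {-1, +1} hypervector (plain-Python stand-in, not the SDK call)."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(dim)]

def bind(a, b):
    """Elementwise multiply over {-1, +1}: XOR-like and self-inverse."""
    return [x * y for x, y in zip(a, b)]

def similarity(a, b):
    """Cosine similarity of bipolar vectors, mapped to [0, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    return (dot / len(a) + 1.0) / 2.0

dim = 4096
agent_role = rand_bipolar(dim, seed=1)
content = rand_bipolar(dim, seed=2)

message = bind(content, agent_role)           # encode: bind content to a role
recovered = bind(message, agent_role)         # decode: unbind the same role
assert similarity(recovered, content) == 1.0  # self-inverse, bit-exact
assert similarity(message, content) < 0.6     # bound form hides the content
```

Unbinding with the wrong role vector would instead yield noise quasi-orthogonal to the content, which is what makes role/filler structures separable.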

#### API Reference

| Method | Signature | Description |
|---|---|---|
| `dimension()` | `→ int` | Hypervector dimensionality |
| `quantum_capacity()` | `→ int` | Qubit depth |
| `store_memory(key)` | `(str) → None` | Encode and store a semantic key |
| `recall(key)` | `(str) → bool` | O(1) key existence check |
| `export_holographic_state()` | `→ list[float]` | Full state as 4096 floats (16 KB) |
| `process_dopamine_hit(hit)` | `(float) → None` | RLHF signal (0.0–1.0) |
| `dopamine_level()` | `→ float` | Current dopamine level |
| `quantum_grover_search(query, keys, values)` | `(Vec, list[Vec], list[Vec]) → list[float]` | Grover attention |
| `run_audit_integrity_check()` | `→ bool` | System health check |
| `generate_role(label)` | `(str) → list[float]` | Orthogonal role vector |

---

### HoloGen SDK — Geometric Hypervector Engine

Encode 3D geometry, materials, and photon states directly into hypervector space.

```python
from catalyst_hdc import PyHoloGenEngine

engine = PyHoloGenEngine(dim=10_000)
```

#### Pixel Geometry

```python
# Map screen coordinates to hypervector addresses
pixel_hv = engine.generate_pixel_geometry(64, 64)
# → list[int8], quasi-orthogonal per unique (x, y) pair

pixel_a = engine.generate_pixel_geometry(100, 200)
pixel_b = engine.generate_pixel_geometry(100, 201)  # adjacent pixel
# pixel_a and pixel_b are quasi-orthogonal — no hash collisions
```
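One way such deterministic, collision-free addressing can work is to seed a PRNG on the pixel coordinates. The sketch below is a plain-Python illustration of the idea (the coordinate-mixing integer seed is an illustrative choice, not the SDK's actual scheme):

```python
import random

def pixel_hv(x, y, dim=4096):
    """Deterministic quasi-orthogonal address for a pixel.

    Seeds a PRNG on the coordinates (illustrative mixing, not the SDK's scheme).
    """
    rng = random.Random(x * 100_000 + y)
    return [rng.choice((-1, 1)) for _ in range(dim)]

def resonance(a, b):
    """Cosine similarity of bipolar vectors, mapped to [0, 1]."""
    dot = sum(p * q for p, q in zip(a, b))
    return (dot / len(a) + 1.0) / 2.0

a = pixel_hv(100, 200)
b = pixel_hv(100, 201)   # adjacent pixel

assert pixel_hv(100, 200) == a      # deterministic: same pixel, same address
assert 0.4 < resonance(a, b) < 0.6  # distinct pixels are quasi-orthogonal
```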

#### Surface Materials

```python
# A metallic surface at position (10, 0, 5) facing upward
surface_hv = engine.generate_material_mapping(
    position=[10.0, 0.0, 5.0],  # [f32; 3]
    normal=[0.0, 1.0, 0.0],     # surface normal [f32; 3]
    material_id=42
)
```

#### Photon State

```python
# Encode photon color as a semantic hypervector
photon_hv = engine.generate_photon("blue")
# Supported: "blue", "red", "green", "yellow", "white", etc.
```

#### BVH Nodes

```python
# encode_bvh_node(min_bounds, max_bounds, left_hv, right_hv)
# left_hv and right_hv must be bipolar hypervectors as list[int8]
# Convert: [int(x) for x in hdc.rand_bipolar(dim)]

left_hv  = [int(x) for x in hdc.rand_bipolar(4096)]
right_hv = [int(x) for x in hdc.rand_bipolar(4096)]

bvh_node = engine.encode_bvh_node(
    [0.0, -10.0, 0.0],   # min_bounds [f32; 3]
    [10.0, 10.0, 10.0],  # max_bounds [f32; 3]
    left_hv,
    right_hv,
)
```

#### Counterfactual Physics

```python
# Ask "what would have happened had the agent acted differently?"
actual_state    = {"action": "jump", "outcome": "reward"}
intervention    = {"action": "crouch", "outcome": "reward"}

alt_reality = engine.simulate_counterfactual(actual_state, intervention)
# Returns a hypervector encoding the hypothetical deviation
```

#### API Reference

| Method | Signature | Description |
|---|---|---|
| `structural_dimension()` | `→ int` | Hypervector dimensionality |
| `generate_pixel_geometry(x, y)` | `(u32, u32) → list[int8]` | Pixel coords → HDC address |
| `generate_material_mapping(position, normal, material_id)` | `([f32;3], [f32;3], u32) → list[int8]` | Surface → HDC |
| `generate_photon(color)` | `(str) → list[int8]` | Color string → HDC |
| `encode_bvh_node(min_bounds, max_bounds, left_hv, right_hv)` | `([f32;3], [f32;3], Vec<i8>, Vec<i8>) → list[int8]` | BVH node |
| `simulate_counterfactual(state, intervention)` | `(dict, dict) → list[int8]` | Counterfactual physics |

---

### Metacognition & Self-Audit

Biological self-improvement loop: observe → recommend → apply → audit.

```python
from catalyst_hdc import PyMetacognition, PyOptimizer, PySelfAudit
import catalyst_hdc as hdc

meta = PyMetacognition(dim=4096)
```

#### Record Observations

```python
# Record inference outcomes with resonance, coherence, accuracy
hv = hdc.rand_bipolar(4096)
meta.record(res=0.85, coh=0.90, acc=0.75, context=hv, hash=12345)
meta.record(res=0.92, coh=0.88, acc=0.81, context=hv, hash=12346)
meta.record(res=0.61, coh=0.72, acc=0.55, context=hv, hash=12347)
```

#### Query State

```python
print(f"success_rate:  {meta.success_rate():.3f}")   # ratio of high-resonance successes
print(f"avg_resonance: {meta.avg_resonance():.3f}")  # mean resonance score
recs = meta.recommend()
# → [("serotonin_increase", 0.05, "success rate > 80%, reinforce"), ...]
```

#### Apply Recommendations

```python
opt = PyOptimizer()
opt.apply("serotonin_increase", 0.05, "success rate above 80%")
params = opt.get_params()
# → {"dopamine": 0.6, "serotonin": 0.5, "acetylcholine": 0.55, "identity_lr": 0.01}
opt.rollback()  # revert last parameter change
```

#### Audit Integrity

```python
audit = PySelfAudit(dim=4096)
hv = hdc.rand_bipolar(4096)
score, passed, issues = audit.full_audit(hv)
# → score=1.0, passed=True, issues=[]
```

#### API Reference

| Class | Method | Signature | Description |
|---|---|---|---|
| `PyMetacognition` | `record(res, coh, acc, context, hash)` | `(float, float, float, Vec, u64)` | Log observation |
| `PyMetacognition` | `success_rate()` | `→ float` | Ratio of high-res successes |
| `PyMetacognition` | `avg_resonance()` | `→ float` | Mean resonance |
| `PyMetacognition` | `recommend()` | `→ list[tuple]` | Parameter recommendations |
| `PyOptimizer` | `apply(action, delta, reason)` | `(str, float, str)` | Apply parameter delta |
| `PyOptimizer` | `get_params()` | `→ dict` | Current parameters |
| `PyOptimizer` | `rollback()` | `→ None` | Revert last change |
| `PySelfAudit` | `full_audit(hv)` | `(Vec) → (float, bool, list)` | Integrity check |

---

### Quantum Attention Head

Drop-in replacement for standard softmax attention using Grover-amplified routing.

```python
from catalyst_hdc import PyQuantumAttentionHead
import catalyst_hdc as hdc

head = PyQuantumAttentionHead(dim=512, nqubits=4)

query  = hdc.rand_bipolar(512)
keys   = [hdc.rand_bipolar(512) for _ in range(10)]
values = [hdc.rand_bipolar(512) for _ in range(10)]

output = head.compute(query, keys, values)
# Returns 512-dim output vector
```
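For reference, the classical scaled dot-product attention this head replaces can be written in a few lines of plain Python. This is the standard softmax formula over the same shapes, shown only for comparison; it is not SDK code:

```python
import math
import random

def softmax_attention(query, keys, values):
    """Classical scaled dot-product attention: softmax(q·k / sqrt(d)) weighted sum."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                                # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum over the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Same shapes as the PyQuantumAttentionHead example above
rng = random.Random(0)
query  = [rng.choice((-1.0, 1.0)) for _ in range(512)]
keys   = [[rng.choice((-1.0, 1.0)) for _ in range(512)] for _ in range(10)]
values = [[rng.choice((-1.0, 1.0)) for _ in range(512)] for _ in range(10)]

out = softmax_attention(query, keys, values)
assert len(out) == 512
```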

| Method | Signature | Description |
|---|---|---|
| `compute(query, keys, values)` | `(Vec, list[Vec], list[Vec]) → list[float]` | Grover attention |

> **Note:** `amplify()` does not exist as a standalone method. Grover amplification for large memory stores is implemented inside `PyHoloCPUScheduler.quantum_grover_search()`. `PyQuantumAttentionHead` is for fine-grained per-layer attention.

---

## Benchmarks

### Memory Footprint

Catalyst state is **constant** — it never grows with token count.

| Tokens | Standard FP16 KV-Cache | Catalyst HKVC | Reduction |
|---|---|---|---|
| 1,000 | 1,220.70 MB | **0.15 MB** | **8,000x** |
| 5,000 | 6,103.52 MB | **0.15 MB** | **40,000x** |
| 10,000 | 12,207.03 MB | **0.15 MB** | **80,000x** |
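The FP16 column is reproducible from first principles under an assumed transformer config (32 layers, model width 10,000, 2 bytes per element, K and V both cached). These parameters are illustrative choices that happen to match the table, not documented model specs:

```python
# Assumed transformer config (illustrative -- chosen to reproduce the table above)
layers, d_model, bytes_fp16 = 32, 10_000, 2

def kv_cache_mb(tokens):
    """FP16 KV-cache size: K and V matrices, per layer, per token."""
    return 2 * layers * d_model * bytes_fp16 * tokens / (1024 ** 2)

print(f"{kv_cache_mb(1_000):.2f} MB")   # → 1220.70 MB
print(f"{kv_cache_mb(10_000):.2f} MB")  # → 12207.03 MB
```

The standard cache grows linearly with `tokens`, while the Catalyst column stays at a fixed footprint, which is where the growing reduction factors come from.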

### Bit-Exact Recovery

Bind/unbind is **provably lossless** — XOR is its own inverse.

| Operation | Fidelity | Tested depth |
|---|---|---|
| BCV bind/unbind | **100.00% bit-exact** | 1,000 trials |
| Chained composition (depth 2–100) | **100.00% bit-exact** | 6 depths |
| HMK serialization | **100.00% bit-exact** | 100 trials |
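The self-inverse property is easy to verify in plain Python: over {−1, +1}, elementwise multiplication is isomorphic to XOR on bits, so recovery stays bit-exact at any chaining depth. A standalone sketch, independent of the SDK:

```python
import random

rng = random.Random(42)
dim = 4096

def bind(a, b):
    """Elementwise product over {-1, +1} (isomorphic to XOR on {0, 1})."""
    return [x * y for x, y in zip(a, b)]

a = [rng.choice((-1, 1)) for _ in range(dim)]
keys = [[rng.choice((-1, 1)) for _ in range(dim)] for _ in range(100)]

# Chain 100 binds, then unbind with the same keys: recovery is bit-exact
v = a
for k in keys:
    v = bind(v, k)
for k in reversed(keys):
    v = bind(v, k)
assert v == a  # bit-exact at depth 100: each key cancels itself
```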

### Multi-Item Superposition

Multi-item bundling maintains a constant **98.4% bit accuracy** regardless of item count (up to ~7,213 items at D=10,000).
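A plain-Python sketch of N-way majority-vote bundling shows why bundled items stay detectable: each stored item resonates with the bundle well above the ~0.5 chance level. This is a toy run at small scale, not the SDK benchmark:

```python
import random

rng = random.Random(0)
dim = 10_000
n_items = 15  # odd count avoids majority-vote ties

def rand_bipolar():
    return [rng.choice((-1, 1)) for _ in range(dim)]

def bundle(vectors):
    """N-way majority vote per dimension."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*vectors)]

def resonance(a, b):
    """Cosine similarity of bipolar vectors, mapped to [0, 1]."""
    return (sum(x * y for x, y in zip(a, b)) / dim + 1.0) / 2.0

items = [rand_bipolar() for _ in range(n_items)]
memory = bundle(items)

# Every bundled item resonates well above the ~0.5 chance level...
assert all(resonance(memory, item) > 0.55 for item in items)
# ...while an unrelated vector does not
assert resonance(memory, rand_bipolar()) < 0.55
```

Per-item resonance degrades gracefully as more items are bundled, which is the capacity trade-off the benchmark above characterizes at larger scale.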

---

## Build from Source

```bash
# Requires Rust 1.75+ and Python 3.10+
git clone https://github.com/quantium-rock/catalyst-brain.git
cd catalyst-brain

# Install Python package (builds Rust extension via setuptools-rust)
pip install .

# Run tests
cargo test --workspace
```

---

## Architecture

```
catalyst-brain/
├── src/
│   ├── lib.rs        # Core HDC: CausalMemory, bind/unbind, resonance
│   └── py_api.rs     # PyO3 bindings: all Python-facing classes
├── holocpu_sdk/      # O(1) scheduler + Grover search
├── hologen_sdk/      # Geometric encoding facade
├── quantum_heads/    # Grover attention head
├── metalearning/     # Metacognition, SelfAudit, Optimizer
├── hkvc_graphics/    # BVH, ray tracing, frame buffer
├── Cargo.toml        # Workspace manifest
└── pyproject.toml    # Python package config
```

---

## License

**Research & Evaluation License v1.0** — see LICENSE file.

| Use | Permitted? |
|---|---|
| Academic research | ✅ Free |
| Personal experimentation | ✅ Free |
| Benchmarking & evaluation | ✅ Free |
| Publishing results (with attribution) | ✅ Free |
| Production / commercial deployment | ❌ Requires Commercial License |
| SaaS / hosted API | ❌ Requires Commercial License |

**Patent:** U.S. Provisional Patent Application CATALYST-2026-001 covers holographic key-value caching, BlockCodeVector binding, resonant superposition memory, and Grover-amplified attention routing.

Contact: licensing@strategic-innovations.ai

---

Copyright © 2026 Strategic Innovations AI. Built with Rust 🦀 + PyO3 🐍.
