Metadata-Version: 2.4
Name: deepsigma
Version: 0.9.0
Summary: Σ OVERWATCH — Reality Await Layer (RAL) control plane for agentic AI
Author-email: Bryan David White <8ryanWh1t3@gmail.com>
License: MIT
Project-URL: Homepage, https://github.com/8ryanWh1t3/DeepSigma
Project-URL: Repository, https://github.com/8ryanWh1t3/DeepSigma
Project-URL: Wiki, https://github.com/8ryanWh1t3/DeepSigma/wiki
Project-URL: Issues, https://github.com/8ryanWh1t3/DeepSigma/issues
Keywords: agentic-ai,governance,coherence-ops,decision-episodes,drift-patch
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: jsonschema
Requires-Dist: referencing>=0.35.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: otel
Requires-Dist: opentelemetry-api>=1.20.0; extra == "otel"
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == "otel"
Requires-Dist: opentelemetry-exporter-otlp>=1.20.0; extra == "otel"
Provides-Extra: exhaust-llm
Requires-Dist: anthropic>=0.40.0; extra == "exhaust-llm"
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: pytest-benchmark; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Provides-Extra: langgraph
Requires-Dist: langgraph>=0.2.0; extra == "langgraph"
Provides-Extra: azure
Requires-Dist: msal>=1.25.0; extra == "azure"
Provides-Extra: snowflake
Requires-Dist: cryptography>=42.0; extra == "snowflake"
Provides-Extra: rdf
Requires-Dist: rdflib>=7.0; extra == "rdf"
Requires-Dist: pyshacl>=0.25; extra == "rdf"
Provides-Extra: excel
Requires-Dist: openpyxl>=3.1.0; extra == "excel"
Dynamic: license-file

[![CI](https://github.com/8ryanWh1t3/DeepSigma/actions/workflows/ci.yml/badge.svg)](https://github.com/8ryanWh1t3/DeepSigma/actions/workflows/ci.yml)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](./LICENSE)
[![Python 3.10+](https://img.shields.io/badge/python-3.10%2B-blue.svg)](https://www.python.org/downloads/)

<div align="center">

# Institutional Decision Infrastructure

**Truth · Reasoning · Memory**

[🚀 Start Here](START_HERE.md) · [🔁 Hero Demo](HERO_DEMO.md) · [🏢 Boardroom Brief](category/boardroom_brief.md) · [📜 Specs](canonical/) · [🗺️ Navigation](NAV.md) · [🔬 RAL](ABOUT.md)

</div>

---

## The Problem

Your organization makes thousands of decisions. Almost none are structurally recorded with their reasoning, evidence, or assumptions.

- **Leader leaves** → their rationale leaves with them.
- **Conditions change** → nobody detects stale assumptions.
- **Incident occurs** → root-cause analysis becomes guessing.
- **AI accelerates decisions 100×** → governance designed for human speed fails silently.

This is not a documentation gap. It is a **missing infrastructure layer**.

Every institution pays this cost — in re-litigation, audit overhead, governance drag, and silent drift. The question is whether to keep paying in consequences or to invest in prevention.

→ [Full economic tension analysis](category/economic_tension.md) · [Boardroom brief](category/boardroom_brief.md) · [Risk model](category/risk_model.md)

---

## The Solution

**Σ OVERWATCH** fills the void between systems of record and systems of engagement with a **system of decision**.

Every decision flows through three primitives:

| Primitive | Artifact | What It Captures |
|-----------|----------|------------------|
| **Truth** | Decision Ledger Record (DLR) | What was decided, by whom, with what evidence |
| **Reasoning** | Reasoning Scaffold (RS) | Why this choice — claims, counter-claims, weights |
| **Memory** | Decision Scaffold + Memory Graph (DS + MG) | Reusable templates + queryable institutional memory |

When assumptions decay, **Drift** fires.
When drift exceeds tolerance, a **Patch** corrects it.
This is the **Drift → Patch loop** — continuous self-correction.

---

## Progressive Escalation

Coherence Ops scales from a single decision loop to institutional credibility infrastructure:

| Level | Scale | What It Proves |
|-------|-------|---------------|
| **Mini Lattice** | 12 nodes | Mechanics: one claim, three evidence streams, TTL, drift, patch, seal |
| **Enterprise Lattice** | ~500 nodes | Complexity: K-of-N quorum, correlation groups, regional validators, sync nodes |
| **Credibility Engine** | 30,000–40,000 nodes | Survivability: multi-region, automated drift, continuous sealing, hot/warm/cold |

Same primitives. Same artifacts. Same loop. Different scale.

> Examples: [Mini Lattice](examples/01-mini-lattice/) · [Enterprise Lattice](examples/02-enterprise-lattice/) · [Credibility Engine Scale](examples/03-credibility-engine-scale/) · [Full docs](docs/credibility-engine/)
>
> Demo: [Credibility Engine Cockpit](dashboard/credibility-engine-demo/) — static dashboard, 7 panels, 30 seconds to institutional state
>
> Stage 2: [Simulated Engine](sim/credibility-engine/) — live simulation driver, 4 scenarios (Day0–Day3), 2-second ticks
>
> Stage 3: [Runtime Engine](credibility_engine/) — real engine with JSONL persistence + API endpoints

---

## Why Scale Changes Everything

At 12 nodes, a human can trace every dependency. At 500, hidden correlations emerge. At 40,000, manual governance is impossible.

| Principle | Why It Matters at Scale |
|-----------|------------------------|
| **Truth decays** | Evidence has a shelf life. Without TTL discipline, stale assertions masquerade as current truth. |
| **Silence is signal** | A lattice that stops producing drift signals is not healthy — it is blind. Watch for instability, not absence. |
| **Independence must be enforced** | Sources that appear independent may share infrastructure. Correlation groups make hidden dependencies visible. |
| **Drift is normal** | 100–400 drift events per day is steady state at production scale. Drift is maintenance fuel, not crisis. |
| **Seal authority matters** | No single region should control institutional truth. Authority distribution (no region >40%) prevents capture. |

At every scale, the same question: **can the institution trust its own assertions right now?** The Credibility Index answers it with a number. The Drift→Patch→Seal loop keeps that number honest.
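
The seal-authority principle in the last row is easy to check mechanically. A sketch — the 40% ceiling comes from the table above; the function name and input shape are assumptions:

```python
def capture_risk(seal_counts: dict[str, int], ceiling: float = 0.40) -> list[str]:
    """Return regions whose share of seal authority exceeds the ceiling."""
    total = sum(seal_counts.values())
    return [region for region, n in seal_counts.items() if n / total > ceiling]

# Balanced authority: no region can capture institutional truth.
assert capture_risk({"us-east": 35, "eu-west": 33, "ap-south": 32}) == []

# One region sealing 70% of packets is a capture risk.
assert capture_risk({"us-east": 70, "eu-west": 20, "ap-south": 10}) == ["us-east"]
```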

---

## Stage 2 — Simulated Credibility Engine

Run the simulation driver to power the dashboard with live synthetic data:

```bash
# Terminal 1: Start simulation (Day0 = stable baseline)
python sim/credibility-engine/runner.py --scenario day0

# Terminal 2: Serve dashboard
python -m http.server 8000
```

Visit: [http://localhost:8000/dashboard/credibility-engine-demo/](http://localhost:8000/dashboard/credibility-engine-demo/)

Four scenarios model progressive institutional entropy: Day0 (stable), Day1 (entropy emerges), Day2 (coordinated darkness), Day3 (external mismatch + recovery). The dashboard updates every 2 seconds.

> [Simulation docs](sim/credibility-engine/) · [Dashboard](dashboard/credibility-engine-demo/)

---

## Stage 3 — Multi-Tenant Credibility Engine (v0.8.0)

Run the API server to serve live credibility state:

```bash
uvicorn dashboard.api_server:app --reload
```

The engine persists live state under `data/credibility/{tenant_id}/`. The dashboard supports tenant + role selection in API mode (`DATA_MODE = "API"` in `app.js`).

**Tenant-scoped API routes:** `/api/{tenant_id}/credibility/*`

| Endpoint | Description |
|----------|-------------|
| `GET /api/tenants` | List all registered tenants |
| `GET /api/{tenant_id}/credibility/snapshot` | Credibility Index, band, components, trend |
| `GET /api/{tenant_id}/credibility/claims/tier0` | Tier 0 claims with quorum and TTL |
| `GET /api/{tenant_id}/credibility/drift/24h` | Drift events by severity, category, region |
| `GET /api/{tenant_id}/credibility/correlation` | Correlation cluster map |
| `GET /api/{tenant_id}/credibility/sync` | Sync plane integrity |
| `POST /api/{tenant_id}/credibility/packet/generate` | Generate credibility packet (any role) |
| `POST /api/{tenant_id}/credibility/packet/seal` | Seal packet (requires `coherence_steward`) |

Alias routes at `/api/credibility/*` remain for backward compatibility (serve default tenant).
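
A minimal client sketch against the tenant-scoped routes. The route shapes come from the table above; the base URL is uvicorn's default bind, and the commented fetch assumes the server is running:

```python
def route(tenant_id: str, resource: str, base: str = "http://127.0.0.1:8000") -> str:
    """Build a tenant-scoped credibility route from the table above."""
    return f"{base}/api/{tenant_id}/credibility/{resource}"

assert route("acme", "snapshot") == "http://127.0.0.1:8000/api/acme/credibility/snapshot"

# With the server running (uvicorn dashboard.api_server:app --reload):
# import json, urllib.request
# snap = json.load(urllib.request.urlopen(route("acme", "snapshot")))
```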

**Quick start:**
1. `uvicorn dashboard.api_server:app --reload`
2. Open dashboard, select tenant + role
3. Generate + seal packet

> [Runtime Engine docs](credibility_engine/) · [API Reference](docs/credibility-engine/API_V0_8.md) · [Tenancy Spec](docs/credibility-engine/TENANCY_SPEC.md)

---

## Try It (5 Minutes)

```bash
git clone https://github.com/8ryanWh1t3/DeepSigma.git && cd DeepSigma
pip install -r requirements.txt

# Score coherence (0–100, A–F)
python -m coherence_ops score ./coherence_ops/examples/sample_episodes.json --json

# Full pipeline: episodes → DLR → RS → DS → MG → report
python -m coherence_ops.examples.e2e_seal_to_report

# Why did we make this decision?
python -m coherence_ops iris query --type WHY --target ep-001
```

**Drift → Patch in 60 seconds** (v0.3.0):

```bash
python -m coherence_ops.examples.drift_patch_cycle
# BASELINE 90.00 (A) → DRIFT 85.75 (B) → PATCH 90.00 (A)
```

👉 Full walkthrough: [HERO_DEMO.md](HERO_DEMO.md) — 8 steps, every artifact touched.

---

## Golden Path (v0.5.1)

One command. One outcome. No ambiguity. Proves the full 7-step loop end-to-end:
Connect → Normalize → Extract → Seal → Drift → Patch → Recall.

```bash
# Local (fixture mode — no credentials)
deepsigma golden-path sharepoint \
  --fixture demos/golden_path/fixtures/sharepoint_small --clean

# Or via the coherence CLI
coherence golden-path sharepoint \
  --fixture demos/golden_path/fixtures/sharepoint_small

# Docker
docker compose --profile golden-path run --rm golden-path
```

Output: `golden_path_output/` with per-step JSON artifacts and `summary.json`.

👉 Details: [demos/golden_path/README.md](demos/golden_path/README.md)

---

## Trust Scorecard (v0.6.0)

Measurable SLOs from every Golden Path run. Generated automatically in CI.

```bash
python -m tools.trust_scorecard \
  --input golden_path_ci_out --output trust_scorecard.json

# With coverage
python -m tools.trust_scorecard \
  --input golden_path_ci_out --output trust_scorecard.json --coverage 85.3
```

Output: `trust_scorecard.json` with metrics, SLO checks, and timing data.

👉 Spec: [specs/trust_scorecard_v1.md](specs/trust_scorecard_v1.md) · Dashboard: **Trust Scorecard** tab

---

## Creative Director Suite (v0.6.2)

Excel-first Coherence Ops — govern creative decisions in a shared workbook that any team can edit in SharePoint. No code required.

```bash
# Generate the governed workbook
pip install -e ".[excel]"
python tools/generate_cds_workbook.py

# Explore the sample dataset
ls datasets/creative_director_suite/samples/
```

The workbook includes a `BOOT` sheet (LLM system prompt), 7 named governance tables (tblTimeline, tblDeliverables, tblDLR, tblClaims, tblAssumptions, tblPatchLog, tblCanonGuardrails), and a Coherence Index dashboard.

**Quickstart:**
1. Download the template workbook from `templates/creative_director_suite/`
2. Fill `BOOT!A1` (or use the pre-filled template)
3. Attach workbook to your LLM app (ChatGPT, Claude, Copilot)
4. Respond to: **"What Would You Like To Do Today?"**
5. Paste write-back rows into Excel tables

> Docs: [Excel-First Guide](docs/excel-first/multi-dim-prompting-for-teams/README.md) · [Boot Protocol](docs/excel-first/WORKBOOK_BOOT_PROTOCOL.md) · [Table Schemas](docs/excel-first/TABLE_SCHEMAS.md) · [Dataset](datasets/creative_director_suite/README.md)

---

## Excel-first Money Demo (v0.6.3)

One command. Deterministic Drift→Patch proof — no LLM, no network.

```bash
python -m demos.excel_first --out out/excel_money_demo

# Or via console entry point
excel-demo --out out/excel_money_demo
```

Output: `workbook.xlsx`, `run_record.json`, `drift_signal.json`, `patch_stub.json`, `coherence_delta.txt`

> Docs: [Money Demo](docs/excel-first/MONEY_DEMO.md) · [BOOT Validator](tools/validate_workbook_boot.py) · [MDPT Power App Pack](docs/excel-first/multi-dim-prompting-for-teams/POWER_AUTOMATE_FLOWS.md)

---

## MDPT Beta Kit (v0.6.4)

Registry index, product CLI, and Power App starter kit for governed prompt operations.

```mermaid
flowchart TB
    subgraph SharePoint["SharePoint Lists"]
        PC[PromptCapabilities<br/>Master Registry]
        PR[PromptRuns<br/>Execution Log]
        DP[DriftPatches<br/>Patch Queue]
    end

    subgraph Generator["MDPT Index Generator"]
        CSV[CSV Export] --> GEN[generate_prompt_index.py]
        GEN --> IDX[prompt_index.json]
        GEN --> SUM[prompt_index_summary.md]
    end

    subgraph Lifecycle["Prompt Lifecycle"]
        direction LR
        INDEX[1. Index] --> CATALOG[2. Catalog]
        CATALOG --> USE[3. Use]
        USE --> LOG[4. Log]
        LOG --> DRIFT[5. Drift]
        DRIFT --> PATCH[6. Patch]
        PATCH -.->|refresh| INDEX
    end

    PC -->|export| CSV
    INDEX -.-> PC
    USE -.-> PR
    DRIFT -.-> DP
    PATCH -.-> DP

    style SharePoint fill:#0078d4,stroke:#0078d4,color:#fff
    style Generator fill:#16213e,stroke:#0f3460,color:#fff
    style Lifecycle fill:#0f3460,stroke:#0f3460,color:#fff
```

```bash
# Generate MDPT Prompt Index from SharePoint export
deepsigma mdpt index --csv prompt_export.csv --out out/mdpt

# Product CLI
deepsigma doctor                                    # Environment health check
deepsigma demo excel --out out/excel_money_demo     # Excel-first Money Demo
deepsigma validate boot <file.xlsx>                 # BOOT contract validation
deepsigma golden-path sharepoint --fixture ...      # 7-step Golden Path
```

> Docs: [CLI Reference](docs/CLI.md) · [MDPT](mdpt/README.md) · [Power App Starter Kit](mdpt/powerapps/STARTER_KIT.md)

---

## Credibility Engine (v0.6.4)

Institutional-scale claim lattice with formal credibility scoring, evidence synchronization, and automated drift governance.

**Credibility Index** — composite 0–100 score from 6 components:

| Component | What It Measures |
|-----------|-----------------|
| Tier-weighted claim integrity | Higher-tier claims weigh more |
| Drift penalty | Active drift signals reduce score |
| Correlation risk penalty | Shared source dependencies penalized |
| Quorum margin compression | Thin redundancy penalized |
| TTL expiration penalty | Stale evidence penalized |
| Independent confirmation bonus | 3+ independent sources rewarded |

| Score | Band | Action |
|-------|------|--------|
| 95–100 | Stable | Monitor |
| 85–94 | Minor drift | Review |
| 70–84 | Elevated risk | Patch required |
| 50–69 | Structural degradation | Immediate remediation |
| <50 | Compromised | Halt dependent decisions |
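
The band table maps directly to code. The thresholds below are taken from the table; the function name is an illustration, not the library's API:

```python
def band(score: float) -> tuple[str, str]:
    """Map a 0-100 Credibility Index to its band and action (thresholds from the table)."""
    if score >= 95:
        return ("Stable", "Monitor")
    if score >= 85:
        return ("Minor drift", "Review")
    if score >= 70:
        return ("Elevated risk", "Patch required")
    if score >= 50:
        return ("Structural degradation", "Immediate remediation")
    return ("Compromised", "Halt dependent decisions")

assert band(97) == ("Stable", "Monitor")
assert band(72)[1] == "Patch required"
assert band(42)[0] == "Compromised"
```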

**Institutional Drift Categories** — 5 scale-level patterns composed from 8 runtime drift types: timing entropy, correlation drift, confidence volatility, TTL compression, external mismatch.

**Sync Plane** — evidence timing infrastructure. Sync nodes are evidence about evidence. Event time vs. ingest time, monotonic sequences, independent beacons, watermark logic.

**Category Definition:**
Coherence Ops is not monitoring, observability, or compliance. It is the operating layer that prevents institutions from lying to themselves over time.

**Deployment:**
- MVP: 6–8 engineers, $1.5M–$3M/year
- Production: 30k–40k nodes, 3+ regions, $6M–$10M/year (~$170–$280/node/year)

> Docs: [Credibility Engine](docs/credibility-engine/) · [Credibility Index](docs/credibility-engine/credibility_index.md) · [Sync Plane](docs/credibility-engine/sync_plane.md) · [Deployment Patterns](docs/credibility-engine/deployment_patterns.md)
>
> Diagrams: [Lattice Architecture](mermaid/38-lattice-architecture.md) · [Drift Loop](mermaid/39-drift-loop.md)
>
> Examples: [Mini Lattice](examples/01-mini-lattice/) · [Enterprise Lattice](examples/02-enterprise-lattice/) · [Scale](examples/03-credibility-engine-scale/)

**Guardrails:** Abstract model for institutional credibility infrastructure. Not domain-specific. Not modeling real-world weapons. Pure decision infrastructure.

---

## Repo Structure

```
DeepSigma/
├── START_HERE.md         # Front door
├── HERO_DEMO.md          # 5-min hands-on walkthrough
├── NAV.md                # Navigation index
├── category/             # Economic tension, boardroom brief, risk model
├── canonical/            # Normative specs: DLR, RS, DS, MG, Prime Constitution
├── coherence_ops/        # Python library + CLI + examples
├── deepsigma/cli/        # Unified product CLI (doctor, demo, validate, mdpt, golden-path)
├── mdpt/                 # MDPT tools, templates, Power App starter kit
├── specs/                # JSON schemas (11 schemas)
├── examples/             # Episodes, drift events, demo data
├── llm_data_model/       # LLM-optimized canonical data model
├── datasets/             # Creative Director Suite sample data (8 CSVs)
├── docs/                 # Extended docs (vision, IRIS, policy packs, Excel-first)
├── docs/credibility-engine/ # Credibility Index, Sync Plane, deployment patterns
├── templates/            # Excel workbook templates
├── mermaid/              # 39+ architecture & flow diagrams
├── engine/               # Compression, degrade ladder, supervisor
├── dashboard/            # React dashboard + mock API
├── adapters/             # MCP, OpenClaw, SharePoint, Power Platform, AskSage, Snowflake, LangChain
├── demos/                # Golden Path end-to-end demo + fixtures
└── release/              # Release readiness checklist
```

---

## CLI Quick Reference

| Command | Purpose |
|---------|---------|
| `python -m coherence_ops audit <path>` | Cross-artifact consistency audit |
| `python -m coherence_ops score <path> [--json]` | Coherence score (0–100, A–F) |
| `python -m coherence_ops mg export <path> --format=json` | Export Memory Graph |
| `python -m coherence_ops iris query --type WHY --target <id>` | Why was this decided? |
| `python -m coherence_ops iris query --type WHAT_DRIFTED --json` | What assumptions decayed? |
| `python -m coherence_ops demo <path>` | Score + IRIS in one command |
| `coherence reconcile <path> [--auto-fix] [--json]` | Reconcile cross-artifact inconsistencies |
| `coherence schema validate <file> --schema <name>` | Validate JSON against named schema |
| `coherence dte check <path> --dte <spec>` | Check episodes against DTE constraints |
| `deepsigma doctor` | Environment health check |
| `deepsigma demo excel [--out DIR]` | Excel-first Money Demo |
| `deepsigma validate boot <file.xlsx>` | BOOT contract validation |
| `deepsigma mdpt index --csv <file>` | Generate MDPT Prompt Index |
| `deepsigma golden-path <source> [--fixture <path>]` | 7-step end-to-end Golden Path |

---

## Connectors (v0.6.0)

All connectors conform to the [Connector Contract v1.0](specs/connector_contract_v1.md) — a standard interface with a canonical Record Envelope for provenance, hashing, and access control.

| Connector | Transport | MCP Tools | Docs |
|-----------|-----------|-----------|------|
| SharePoint | Graph API | `sharepoint.list` / `get` / `sync` | [docs/26](docs/26-sharepoint-connector.md) |
| Power Platform | Dataverse Web API | `dataverse.list` / `get` / `query` | [docs/27](docs/27-power-platform-connector.md) |
| AskSage | REST API | `asksage.query` / `models` / `datasets` / `history` | [docs/28](docs/28-asksage-connector.md) |
| Snowflake | Cortex + SQL API | `cortex.complete` / `embed` / `snowflake.query` / `tables` / `sync` | [docs/29](docs/29-snowflake-connector.md) |
| LangChain | Callback | Governance + Exhaust handlers | [docs/23](docs/23-langgraph-adapter.md) |
| OpenClaw | HTTP | Dashboard API client | [adapters/openclaw/](adapters/openclaw/) |

---

## Key Links

| Resource | Path |
|----------|------|
| Reality Await Layer (RAL) | [ABOUT.md](ABOUT.md) |
| Front door | [START_HERE.md](START_HERE.md) |
| Hero demo | [HERO_DEMO.md](HERO_DEMO.md) |
| Boardroom brief | [category/boardroom_brief.md](category/boardroom_brief.md) |
| Economic tension | [category/economic_tension.md](category/economic_tension.md) |
| Risk model | [category/risk_model.md](category/risk_model.md) |
| Canonical specs | [/canonical/](canonical/) |
| JSON schemas | [/specs/](specs/) |
| Python library | [/coherence_ops/](coherence_ops/) |
| IRIS docs | [docs/18-iris.md](docs/18-iris.md) |
| Docs map | [docs/99-docs-map.md](docs/99-docs-map.md) |

---

## Operations

| Resource | Purpose |
|----------|---------|
| [OPS_RUNBOOK.md](OPS_RUNBOOK.md) | Run Money Demo, tests, diagnostics, incident playbooks |
| [TROUBLESHOOTING.md](TROUBLESHOOTING.md) | Top 20 issues — symptom → cause → fix → verify |
| [CONFIG_REFERENCE.md](CONFIG_REFERENCE.md) | All CLI args, policy pack schema, environment variables |
| [STABILITY.md](STABILITY.md) | What's stable, what's not, versioning policy, v1.0 criteria |
| [TEST_STRATEGY.md](TEST_STRATEGY.md) | Test tiers, SLOs, how to run locally, coverage |

**Run with coverage:**
```bash
pytest --cov=coherence_ops --cov-report=term-missing
```

---

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md). All contributions must maintain consistency with Truth · Reasoning · Memory and the four canonical artifacts (DLR / RS / DS / MG).

## License

See [LICENSE](LICENSE).

---

<div align="center">

**Σ OVERWATCH**
*We don't sell agents. We sell the ability to trust them.*

</div>
