Metadata-Version: 2.4
Name: omniai-framework
Version: 0.1.0
Summary: A unified, modular framework for building, training, and deploying all AI/ML/LLM architectures.
Project-URL: Homepage, https://github.com/omniai/omniai
Project-URL: Documentation, https://omniai.readthedocs.io
Project-URL: Repository, https://github.com/omniai/omniai
Author: OmniAI Contributors
License-Expression: Apache-2.0
Keywords: ai,deep-learning,diffusion,gan,llm,machine-learning,neuromorphic,quantum,reinforcement-learning,transformers
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Requires-Dist: numpy>=1.24
Requires-Dist: pyyaml>=6.0
Requires-Dist: tqdm>=4.65
Requires-Dist: typing-extensions>=4.5
Provides-Extra: all
Requires-Dist: omniai[classical,dev,diffusion,eval,graph,llm,rl,serving,torch,tracking,viz]; extra == 'all'
Provides-Extra: classical
Requires-Dist: catboost>=1.2; extra == 'classical'
Requires-Dist: lightgbm>=4.0; extra == 'classical'
Requires-Dist: scikit-learn>=1.3; extra == 'classical'
Requires-Dist: xgboost>=2.0; extra == 'classical'
Provides-Extra: dev
Requires-Dist: mypy>=1.7; extra == 'dev'
Requires-Dist: pre-commit>=3.5; extra == 'dev'
Requires-Dist: pytest-cov>=4.1; extra == 'dev'
Requires-Dist: pytest>=7.4; extra == 'dev'
Requires-Dist: ruff>=0.1; extra == 'dev'
Provides-Extra: diffusion
Requires-Dist: diffusers>=0.24; extra == 'diffusion'
Requires-Dist: torch>=2.0; extra == 'diffusion'
Provides-Extra: eval
Requires-Dist: nltk>=3.8; extra == 'eval'
Requires-Dist: rouge-score>=0.1; extra == 'eval'
Requires-Dist: sacrebleu>=2.3; extra == 'eval'
Provides-Extra: graph
Requires-Dist: torch-geometric>=2.4; extra == 'graph'
Requires-Dist: torch>=2.0; extra == 'graph'
Provides-Extra: jax
Requires-Dist: jax>=0.4; extra == 'jax'
Requires-Dist: jaxlib>=0.4; extra == 'jax'
Provides-Extra: llm
Requires-Dist: accelerate>=0.25; extra == 'llm'
Requires-Dist: safetensors>=0.4; extra == 'llm'
Requires-Dist: sentencepiece>=0.1.99; extra == 'llm'
Requires-Dist: tiktoken>=0.5; extra == 'llm'
Requires-Dist: tokenizers>=0.15; extra == 'llm'
Requires-Dist: torch>=2.0; extra == 'llm'
Requires-Dist: transformers>=4.35; extra == 'llm'
Provides-Extra: neuromorphic
Requires-Dist: snntorch>=0.7; extra == 'neuromorphic'
Requires-Dist: torch>=2.0; extra == 'neuromorphic'
Provides-Extra: quantum
Requires-Dist: pennylane>=0.33; extra == 'quantum'
Requires-Dist: qiskit>=0.45; extra == 'quantum'
Provides-Extra: rl
Requires-Dist: gymnasium>=0.29; extra == 'rl'
Requires-Dist: torch>=2.0; extra == 'rl'
Provides-Extra: safety
Requires-Dist: detoxify>=0.5; extra == 'safety'
Provides-Extra: serving
Requires-Dist: fastapi>=0.104; extra == 'serving'
Requires-Dist: onnx>=1.15; extra == 'serving'
Requires-Dist: onnxruntime>=1.16; extra == 'serving'
Requires-Dist: uvicorn>=0.24; extra == 'serving'
Provides-Extra: tensorflow
Requires-Dist: tensorflow>=2.13; extra == 'tensorflow'
Provides-Extra: thermodynamic
Requires-Dist: scipy>=1.11; extra == 'thermodynamic'
Requires-Dist: torch>=2.0; extra == 'thermodynamic'
Provides-Extra: torch
Requires-Dist: torch>=2.0; extra == 'torch'
Provides-Extra: tracking
Requires-Dist: mlflow>=2.9; extra == 'tracking'
Requires-Dist: tensorboard>=2.15; extra == 'tracking'
Requires-Dist: wandb>=0.16; extra == 'tracking'
Provides-Extra: viz
Requires-Dist: matplotlib>=3.8; extra == 'viz'
Requires-Dist: plotly>=5.18; extra == 'viz'
Description-Content-Type: text/markdown

# 🧠 OmniAI

**A unified, modular framework for building, training, and deploying all AI/ML/LLM architectures.**

[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://python.org)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-green.svg)](https://opensource.org/licenses/Apache-2.0)

---

## ⚡ Quick Start

```python
from omniai import Model, Config, Trainer, registry

# Config-driven model creation
config = Config.from_yaml("configs/llama_7b.yaml")
model = Model.build(config)

# Or programmatic
model = Model.build(
    arch="transformer.llama",
    hidden_size=4096,
    num_layers=32,
)

# Unified training
trainer = Trainer(model=model, train_data=loader, optimizer="adamw")
trainer.fit(epochs=3)

# Inference & export
output = model.predict(input_data)
model.export("onnx", path="model.onnx")
```
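
The contents of the YAML file referenced above aren't shown in this README. A hypothetical `configs/llama_7b.yaml` mirroring the programmatic `Model.build` call might look like the following (field names are taken from that call and are illustrative only):

```
# Illustrative config sketch -- not a file shipped with the package.
arch: transformer.llama
hidden_size: 4096
num_layers: 32
```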

## 📦 Installation

```bash
# Core (NumPy, PyYAML, tqdm only)
pip install omniai

# With PyTorch (deep learning, transformers, LLMs)
pip install "omniai[torch]"

# Specific domains
pip install "omniai[llm]"          # LLM tools + transformers + tokenizers
pip install "omniai[diffusion]"    # Diffusion models
pip install "omniai[rl]"           # Reinforcement learning
pip install "omniai[quantum]"      # Quantum ML
pip install "omniai[classical]"    # scikit-learn, XGBoost, LightGBM, CatBoost

# Everything
pip install "omniai[all]"
```

The extras are quoted so the square brackets survive shells (such as zsh) that would otherwise treat them as glob patterns.

## 🏗️ Architecture

```
omniai/
├── core/              # Base classes, config, registry, callbacks
├── backends/          # PyTorch, JAX, TF, NumPy abstraction
├── classical/         # SVM, trees, clustering, Bayesian
├── deep/              # MLP, CNN, RNN, autoencoders
├── transformers/      # BERT, GPT, LLaMA, Mamba, ViT, ...
├── llm/               # Tokenizers, generation, serving
├── diffusion/         # DDPM, Stable Diffusion, DiT, ...
├── gan/               # StyleGAN, CycleGAN, normalizing flows
├── rl/                # DQN, PPO, SAC, RLHF, DPO
├── graph/             # GCN, GAT, GraphSAGE, equivariant GNNs
├── thermodynamic/     # Thermodynamic computing, equilibrium prop
├── quantum/           # VQE, QAOA, tensor networks
├── neuromorphic/      # Spiking NNs, reservoir computing
├── evolutionary/      # NAS, genetic algorithms, CMA-ES
├── memory/            # NTM, DNC, Hopfield networks
├── energy/            # RBM, DBN, contrastive learning
├── neurosymbolic/     # Neural theorem provers, concept bottleneck
├── metalearning/      # MAML, ProtoNet, hypernetworks
├── continual/         # EWC, progressive networks
├── selfsupervised/    # MLM, MAE, JEPA, BYOL
├── multimodal/        # CLIP, text-to-image/video/audio
├── agents/            # ReAct, CoT, tool-using agents
├── optimization/      # AdamW, Lion, Sophia, schedulers
├── distributed/       # DDP, FSDP, DeepSpeed
├── compression/       # Pruning, quantization, LoRA/PEFT
├── data/              # Multi-modal data pipelines
├── evaluation/        # BLEU, FID, mAP, perplexity
├── serving/           # REST API, ONNX, vLLM-style inference
└── safety/            # Guardrails, RLHF, bias detection
```

## 🔑 Key Features

| Feature | Description |
|---------|-------------|
| **Unified API** | Consistent `.build()`, `.fit()`, `.predict()`, `.export()` across all architectures |
| **Backend-agnostic** | Write once, run on PyTorch, JAX, TensorFlow, or NumPy |
| **Hardware-aware** | Auto-detects CPU, CUDA, ROCm, Apple MPS, TPU |
| **Config-driven** | YAML/JSON/dataclass configs with inheritance and validation |
| **Model Registry** | Global registry with `@register` decorator, search, categories |
| **Callbacks** | EarlyStopping, ModelCheckpoint, LR monitoring, custom hooks |
| **Production-ready** | Type hints, comprehensive error messages, logging, checkpointing |

## 🧪 Running Tests

```bash
pip install omniai[dev]
pytest tests/ -v
```

## 📖 Examples

```bash
python examples/quickstart.py
```

## 📄 License

Apache 2.0 — see [LICENSE](LICENSE) for details.
