Metadata-Version: 2.4
Name: neuralnode
Version: 2.0.7
Summary: Comprehensive AI Framework with 50+ LLM Providers, Advanced Agents, Chains, Memory, RAG, and 100+ Tools
Project-URL: Homepage, https://assem.cloud/
Project-URL: Documentation, https://neuralnode.readthedocs.io
Project-URL: Repository, https://github.com/assemsabry/neuralnode
Project-URL: Issues, https://github.com/assemsabry/neuralnode/issues
Project-URL: Changelog, https://github.com/assemsabry/neuralnode/blob/main/CHANGELOG.md
Author-email: Assem Sabry <assemsabryy@outlook.com>
Maintainer-email: Assem Sabry <assemsabryy@outlook.com>
License: MIT
License-File: LICENSE
Keywords: agent,ai,anthropic,automation,embeddings,framework,google,langchain,llm,ollama,openai,rag,tools,vector-stores
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Requires-Dist: numpy>=1.24.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: requests>=2.28.0
Provides-Extra: all
Requires-Dist: accelerate>=0.24.0; extra == 'all'
Requires-Dist: anthropic>=0.8.0; extra == 'all'
Requires-Dist: beautifulsoup4>=4.12.0; extra == 'all'
Requires-Dist: bitsandbytes>=0.41.0; extra == 'all'
Requires-Dist: chromadb>=0.4.0; extra == 'all'
Requires-Dist: cohere>=4.0.0; extra == 'all'
Requires-Dist: edge-tts>=6.1.0; extra == 'all'
Requires-Dist: einops>=0.7.0; extra == 'all'
Requires-Dist: faiss-cpu>=1.7.4; extra == 'all'
Requires-Dist: google-generativeai>=0.3.0; extra == 'all'
Requires-Dist: groq>=0.4.0; extra == 'all'
Requires-Dist: huggingface-hub>=0.23.0; extra == 'all'
Requires-Dist: llama-cpp-python>=0.2.0; extra == 'all'
Requires-Dist: mistralai>=0.0.12; extra == 'all'
Requires-Dist: ollama>=0.1.0; extra == 'all'
Requires-Dist: openai>=1.0.0; extra == 'all'
Requires-Dist: protobuf>=3.20.0; extra == 'all'
Requires-Dist: psutil>=5.9.0; extra == 'all'
Requires-Dist: pydub>=0.25.1; extra == 'all'
Requires-Dist: pypdf>=3.17.0; extra == 'all'
Requires-Dist: python-docx>=0.8.11; extra == 'all'
Requires-Dist: python-telegram-bot>=20.0; extra == 'all'
Requires-Dist: scipy>=1.10.0; extra == 'all'
Requires-Dist: sentence-transformers>=2.2.0; extra == 'all'
Requires-Dist: sentencepiece>=0.1.99; extra == 'all'
Requires-Dist: speechrecognition>=3.10.0; extra == 'all'
Requires-Dist: sqlalchemy>=2.0.0; extra == 'all'
Requires-Dist: torch>=2.0.0; extra == 'all'
Requires-Dist: transformers>=4.35.0; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.8.0; extra == 'anthropic'
Provides-Extra: aws
Requires-Dist: boto3>=1.34.0; extra == 'aws'
Provides-Extra: azure
Requires-Dist: openai>=1.0.0; extra == 'azure'
Provides-Extra: chroma
Requires-Dist: chromadb>=0.4.0; extra == 'chroma'
Provides-Extra: cohere
Requires-Dist: cohere>=4.0.0; extra == 'cohere'
Provides-Extra: dev
Requires-Dist: black>=23.0.0; extra == 'dev'
Requires-Dist: mypy>=1.7.0; extra == 'dev'
Requires-Dist: pytest>=7.4.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Provides-Extra: discord
Requires-Dist: discord-py>=2.3.0; extra == 'discord'
Provides-Extra: embeddings
Requires-Dist: fastembed>=0.1.0; extra == 'embeddings'
Requires-Dist: instructorembedding>=1.0.1; extra == 'embeddings'
Requires-Dist: sentence-transformers>=2.2.0; extra == 'embeddings'
Requires-Dist: tiktoken>=0.5.0; extra == 'embeddings'
Provides-Extra: faiss
Requires-Dist: faiss-cpu>=1.7.4; extra == 'faiss'
Provides-Extra: flash-attn
Requires-Dist: flash-attn>=2.3.0; extra == 'flash-attn'
Provides-Extra: google
Requires-Dist: google-cloud-aiplatform>=1.38.0; extra == 'google'
Requires-Dist: google-generativeai>=0.3.0; extra == 'google'
Provides-Extra: groq
Requires-Dist: groq>=0.4.0; extra == 'groq'
Provides-Extra: horus
Requires-Dist: accelerate>=0.24.0; extra == 'horus'
Requires-Dist: bitsandbytes>=0.41.0; extra == 'horus'
Requires-Dist: einops>=0.7.0; extra == 'horus'
Requires-Dist: huggingface-hub>=0.23.0; extra == 'horus'
Requires-Dist: llama-cpp-python>=0.2.0; extra == 'horus'
Requires-Dist: protobuf>=3.20.0; extra == 'horus'
Requires-Dist: scipy>=1.10.0; extra == 'horus'
Requires-Dist: sentencepiece>=0.1.99; extra == 'horus'
Requires-Dist: torch>=2.0.0; extra == 'horus'
Requires-Dist: transformers>=4.35.0; extra == 'horus'
Provides-Extra: local
Requires-Dist: accelerate>=0.24.0; extra == 'local'
Requires-Dist: huggingface-hub>=0.23.0; extra == 'local'
Requires-Dist: llama-cpp-python>=0.2.0; extra == 'local'
Requires-Dist: protobuf>=3.20.0; extra == 'local'
Requires-Dist: sentencepiece>=0.1.99; extra == 'local'
Requires-Dist: torch>=2.0.0; extra == 'local'
Requires-Dist: transformers>=4.35.0; extra == 'local'
Provides-Extra: local-all
Requires-Dist: accelerate>=0.24.0; extra == 'local-all'
Requires-Dist: bitsandbytes>=0.41.0; extra == 'local-all'
Requires-Dist: einops>=0.7.0; extra == 'local-all'
Requires-Dist: huggingface-hub>=0.23.0; extra == 'local-all'
Requires-Dist: llama-cpp-python>=0.2.0; extra == 'local-all'
Requires-Dist: protobuf>=3.20.0; extra == 'local-all'
Requires-Dist: scipy>=1.10.0; extra == 'local-all'
Requires-Dist: sentencepiece>=0.1.99; extra == 'local-all'
Requires-Dist: torch>=2.0.0; extra == 'local-all'
Requires-Dist: transformers>=4.35.0; extra == 'local-all'
Provides-Extra: milvus
Requires-Dist: pymilvus>=2.3.0; extra == 'milvus'
Provides-Extra: mistral
Requires-Dist: mistralai>=0.0.12; extra == 'mistral'
Provides-Extra: ollama
Requires-Dist: ollama>=0.1.0; extra == 'ollama'
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == 'openai'
Provides-Extra: pinecone
Requires-Dist: pinecone-client>=2.2.0; extra == 'pinecone'
Provides-Extra: qdrant
Requires-Dist: qdrant-client>=1.6.0; extra == 'qdrant'
Provides-Extra: rag
Requires-Dist: beautifulsoup4>=4.12.0; extra == 'rag'
Requires-Dist: markdown>=3.5.0; extra == 'rag'
Requires-Dist: openpyxl>=3.1.0; extra == 'rag'
Requires-Dist: pypdf>=3.17.0; extra == 'rag'
Requires-Dist: python-docx>=0.8.11; extra == 'rag'
Provides-Extra: redis
Requires-Dist: redis>=5.0.0; extra == 'redis'
Provides-Extra: replica
Requires-Dist: edge-tts>=6.1.0; extra == 'replica'
Provides-Extra: slack
Requires-Dist: slack-sdk>=3.21.0; extra == 'slack'
Provides-Extra: speech
Requires-Dist: pydub>=0.25.1; extra == 'speech'
Requires-Dist: speechrecognition>=3.10.0; extra == 'speech'
Provides-Extra: telegram
Requires-Dist: edge-tts>=6.1.0; extra == 'telegram'
Requires-Dist: pydub>=0.25.1; extra == 'telegram'
Requires-Dist: pypdf>=3.17.0; extra == 'telegram'
Requires-Dist: python-docx>=0.8.11; extra == 'telegram'
Requires-Dist: python-telegram-bot>=20.0; extra == 'telegram'
Requires-Dist: speechrecognition>=3.10.0; extra == 'telegram'
Provides-Extra: tools
Requires-Dist: psutil>=5.9.0; extra == 'tools'
Requires-Dist: pymongo>=4.6.0; extra == 'tools'
Requires-Dist: serpapi>=0.1.0; extra == 'tools'
Requires-Dist: sqlalchemy>=2.0.0; extra == 'tools'
Requires-Dist: wikipedia>=1.4.0; extra == 'tools'
Provides-Extra: turboquant
Provides-Extra: weaviate
Requires-Dist: weaviate-client>=3.0.0; extra == 'weaviate'
Description-Content-Type: text/markdown

# NeuralNode v2.0.7

NeuralNode is a Python AI framework focused on:
- cloud providers with working implementations
- local model execution with Transformers, Ollama, llama.cpp, and Horus
- agents, memory, and RAG
- Replica text-to-speech and speech recognition
- Telegram integration

## Install

```bash
pip install neuralnode           # core install
pip install "neuralnode[all]"    # all optional extras
```

Useful extras:

```bash
pip install "neuralnode[horus]"
pip install "neuralnode[telegram]"
pip install "neuralnode[replica]"
pip install "neuralnode[speech]"
pip install "neuralnode[turboquant]"
```

## Quick Start

```python
import neuralnode as nn

ai = nn.NeuralNode(
    provider="groq",
    model="llama-3.1-70b-versatile",
    api_key="YOUR_GROQ_API_KEY",
)

print(ai.chat("Hello from NeuralNode"))
```

## Horus

All Horus models share:
- a unified chat template: `horus_unified`
- a unified context window: `8192` tokens

```python
import neuralnode as nn

model = nn.HorusModel(
    model_id="tokenaii/horus/Horus-1.0-4B",
    turboquant=True,
    turboquant_bits=4,
).load()

response = model.chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain Horus briefly."},
])

print(response.content)
```

### Available Horus model IDs

```python
from neuralnode import HorusModel

print(HorusModel.list_available_models())
```

Supported IDs currently include:
- `tokenaii/horus`
- `tokenaii/horus/Horus-1.0-4B`
- `tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-Q4_K_M.gguf`
- `tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-Q5_K_M.gguf`
- `tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-Q6_K.gguf`
- `tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-Q8_0.gguf`
- `tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-F16.gguf`
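
For scripted model selection, the GGUF IDs above can be keyed by their quantization suffix. A minimal sketch (the dictionary mirrors the list above verbatim, including the `Hours` spelling of the repository name; `gguf_id` is an illustrative helper, not part of the NeuralNode API):

```python
# Map quantization tags to the GGUF model IDs listed above.
GGUF_IDS = {
    "Q4_K_M": "tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-Q4_K_M.gguf",
    "Q5_K_M": "tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-Q5_K_M.gguf",
    "Q6_K": "tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-Q6_K.gguf",
    "Q8_0": "tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-Q8_0.gguf",
    "F16": "tokenaii/Hours-1.0-4B-GGUF/Horus-1.0-4B-F16.gguf",
}

def gguf_id(quant: str) -> str:
    """Look up the GGUF model ID for a quantization tag."""
    try:
        return GGUF_IDS[quant]
    except KeyError:
        raise ValueError(
            f"unknown quantization {quant!r}; choose from {sorted(GGUF_IDS)}"
        )
```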

## Replica TTS

Replica exposes 20 curated `edge_tts` voices through custom voice IDs.

```python
import neuralnode as nn

print(nn.replica_voice_list())

tts = nn.ReplicaTTS(voice_id="replic-salma-language{ar-eg}")
tts.save_to_file("مرحبا من نيورال نود", "reply.mp3")  # "Hello from NeuralNode" in Arabic
```

Voice mapping docs:
- [docs/replica_voice_ids.md](docs/replica_voice_ids.md)
- [docs/replica_voice_ids.csv](docs/replica_voice_ids.csv)
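
The two IDs shown in this README follow the pattern `replic-<name>-language{<locale>}`. Assuming that pattern holds for the other curated voices (the CSV above is the authoritative mapping), a small parser sketch:

```python
import re

# Hedged sketch: parse the Replica voice-ID pattern seen in the examples
# above ("replic-<name>-language{<locale>}"); the real scheme may differ.
VOICE_ID_RE = re.compile(
    r"^replic-(?P<name>[a-z]+)-language\{(?P<locale>[a-z]{2}-[a-z]{2})\}$"
)

def parse_voice_id(voice_id: str):
    """Split a Replica voice ID into (name, locale)."""
    m = VOICE_ID_RE.match(voice_id)
    if m is None:
        raise ValueError(f"unrecognized Replica voice id: {voice_id!r}")
    return m.group("name"), m.group("locale")
```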

## Horus + Replica

```python
import neuralnode as nn

model = nn.HorusModel(
    model_id="tokenaii/horus/Horus-1.0-4B",
    enable_tts=True,
    tts_voice_id="replic-salma-language{ar-eg}",
).load()

result = model.chat_and_speak(
    [{"role": "user", "content": "قل لي جملة ترحيب قصيرة"}],
    output_file="horus_reply.mp3",
)

print(result["response"].content)
print(result["audio_path"])
```

## Telegram

Use a BotFather token to connect an agent to Telegram.

```python
import neuralnode as nn

ai = nn.NeuralNode(provider="horus", model="tokenaii/horus/Horus-1.0-4B")
agent = ai.agent(agent_type="simple", thinking=False)

bot = nn.TelegramBot(
    token="YOUR_BOTFATHER_TOKEN",
    agent=agent,
    config=nn.TelegramBotConfig(
        token="YOUR_BOTFATHER_TOKEN",
        enable_voice=True,
        enable_documents=True,
        reply_mode="both",  # text | voice | both
        voice_reply_voice_id="replic-aria-language{en-us}",
    ),
)

bot.start()
```

The Telegram integration supports:
- text chat
- voice transcription
- optional Replica voice replies
- document download and analysis

## RAG

```python
import neuralnode as nn

ai = nn.NeuralNode(provider="ollama", model="llama3.2")
rag = ai.rag(store="memory")
rag.add_documents(["notes.txt", "report.pdf"])

print(rag.query("Summarize the key findings"))
```

If the current LLM does not support embeddings, RAG automatically tries, in order:
1. a dedicated embedding provider
2. sentence-transformers
3. lexical fallback embeddings
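
The fallback chain above can be sketched roughly as follows; the function and module names here are illustrative, not NeuralNode's actual internals:

```python
def lexical_embed(texts):
    """Fallback 3: crude lexical embeddings (hashed bag-of-words)."""
    dim = 256
    vectors = []
    for text in texts:
        vec = [0.0] * dim
        for token in text.lower().split():
            vec[hash(token) % dim] += 1.0
        vectors.append(vec)
    return vectors

def choose_embedder(llm_supports_embeddings: bool = False):
    """Return the first available embedding backend, in priority order."""
    if llm_supports_embeddings:
        return "llm"                             # use the LLM's own embeddings
    try:
        import dedicated_embedding_provider      # 1. hypothetical dedicated provider
        return "dedicated-provider"
    except ImportError:
        pass
    try:
        import sentence_transformers             # 2. sentence-transformers, if installed
        return "sentence-transformers"
    except ImportError:
        pass
    return lexical_embed                         # 3. lexical fallback, always works
```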

## Supported Provider Surface

Only implemented providers are exposed through the public provider registry.

Cloud chat providers:
- `anthropic`
- `google`
- `cohere`
- `mistral`
- `groq`
- `deepseek`
- `perplexity`
- `ai21`
- `together`
- `fireworks`
- `bedrock`
- `vertexai`

Local providers:
- `ollama`
- `llamacpp`
- `transformers`
- `llamafile`
- `koboldcpp`
- `textgenwebui`
- `exllama`
- `autogptq`
- `autoawq`
- `vllm`
- `tgi`
- `deepspeed`
- `rayserve`
- `mlflow`
- `bentoml`
- `triton`
- `mlx`
- `horus`

## TurboQuant

TurboQuant is integrated into:
- `HorusModel(..., turboquant=True, turboquant_bits=4)`
- `create_provider("transformers", turboquant=True, turboquant_bits=4, ...)`

If the `turboquant` package is not installed or the backend cannot use it, NeuralNode automatically falls back to standard, unquantized generation.
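
That fallback behavior amounts to an optional-dependency check along these lines (a hedged sketch; the real logic inside NeuralNode may differ):

```python
def resolve_quantization(requested: bool, bits: int = 4):
    """Return (use_turboquant, bits); fall back if the package is missing."""
    if not requested:
        return False, None
    try:
        import turboquant  # optional dependency
    except ImportError:
        return False, None  # not installed: plain, unquantized generation
    return True, bits
```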

## Notes

- OpenAI is intentionally blocked in NeuralNode.
- GGUF Horus models require `llama-cpp-python`.
- Horus downloads from Hugging Face require `huggingface_hub`.
