Metadata-Version: 2.4
Name: bee2bee
Version: 3.3.9
Summary: Decentralized P2P network for AI model hosting and inference
Author-email: ConnectIT Team <loaiabdalslam@gmail.com>
Keywords: p2p,ai,machine-learning,distributed,inference,transformers
Classifier: Development Status :: 3 - Alpha
Classifier: License :: OSI Approved :: MIT License
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: System :: Distributed Computing
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: click>=8.0
Requires-Dist: typer>=0.12
Requires-Dist: rich>=13.7
Requires-Dist: websockets>=12.0
Requires-Dist: psutil>=5.9
Requires-Dist: numpy>=1.24
Requires-Dist: fastapi>=0.104.0
Requires-Dist: uvicorn[standard]>=0.24.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: loguru>=0.7.2
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: huggingface-hub>=0.23.0
Requires-Dist: httpx>=0.27.0
Provides-Extra: hf
Requires-Dist: transformers>=4.40; extra == "hf"
Requires-Dist: datasets>=2.17; extra == "hf"
Provides-Extra: onnx
Requires-Dist: onnx>=1.14; extra == "onnx"
Requires-Dist: onnxruntime>=1.17; extra == "onnx"
Provides-Extra: torch
Requires-Dist: torch>=2.1; (platform_system != "Windows" or platform_machine != "ARM64") and extra == "torch"
Provides-Extra: dht
Requires-Dist: kademlia>=2.2.2; extra == "dht"
Provides-Extra: nat
Requires-Dist: miniupnpc>=2.2.5; extra == "nat"
Requires-Dist: aiortc>=1.9.0; extra == "nat"
Provides-Extra: test
Requires-Dist: pytest; extra == "test"
Requires-Dist: httpx; extra == "test"
Provides-Extra: all
Requires-Dist: transformers>=4.40; extra == "all"
Requires-Dist: datasets>=2.17; extra == "all"
Requires-Dist: onnx>=1.14; extra == "all"
Requires-Dist: onnxruntime>=1.17; extra == "all"
Requires-Dist: torch>=2.1; (platform_system != "Windows" or platform_machine != "ARM64") and extra == "all"
Requires-Dist: kademlia>=2.2.2; extra == "all"
Requires-Dist: miniupnpc>=2.2.5; extra == "all"
Requires-Dist: aiortc>=1.9.0; extra == "all"

# 🐝 Bee2Bee: The Neural Consensus P2P Network

[![PyPI version](https://badge.fury.io/py/bee2bee.svg)](https://badge.fury.io/py/bee2bee)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![FastAPI](https://img.shields.io/badge/FastAPI-005571?style=flat&logo=fastapi)](https://fastapi.tiangolo.com)

**Bee2Bee** is a decentralized, peer-to-peer neural consensus engine designed to make AI inference accessible, transparent, and resilient. It allows anyone to contribute compute to a global mesh and anyone to consume it through a unified, high-performance API.

---

## 🏗️ Architecture

Bee2Bee operates as a **Decentralized Mesh** in which every node is a consumer and, potentially, a provider:

-   **Neural Nodes**: Host models (via Ollama, Hugging Face Transformers, or remote inference) and register via the **Global Registry**.
-   **Consensus Router**: Intelligently routes requests to the lowest-latency, highest-reliability nodes (a scoring sketch follows this list).
-   **API Sidecar**: Every node optionally hosts a **FastAPI** management layer for real-time telemetry.
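
To make the routing idea concrete, here is a minimal sketch of how such a router *could* rank providers. It is illustrative only: the `Provider` shape and the `latency_ms`/`reliability` fields are assumptions for the sketch, not the package's actual API.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    id: str
    latency_ms: float   # assumed: measured round-trip time to the node
    reliability: float  # assumed: fraction of recent requests served OK

def pick_provider(providers: list[Provider]) -> Provider:
    # Prefer high reliability first, then low latency.
    return min(providers, key=lambda p: (-p.reliability, p.latency_ms))

best = pick_provider([
    Provider("node-a", 120.0, 0.99),
    Provider("node-b", 40.0, 0.99),
    Provider("node-c", 15.0, 0.70),
])
print(best.id)  # -> node-b: as reliable as node-a, but faster
```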

---

## 🚀 Quick Start (Production)

### 1. Installation
```bash
pip install bee2bee
```

### 2. Launch a Worker Node (The Consumer/Provider)
Run a node that automatically joins the network and hosts a model:
```bash
# Deploys a Llama3 provider via Ollama with an API sidecar on port 8000
python -m bee2bee serve-ollama --model llama3 --api-port 8000
```

### 3. Connect to the Mesh
To join the global discovery map on the [Chatit.cloud Dashboard](https://chatit.cloud):
```bash
python -m bee2bee config bootstrap_url ws://bootstrap.chatit.cloud:4003
```

---

## 🛠️ Developer Guide

### Developing Solutions (Python)
Integrate P2P inference directly into your apps using the async Python SDK:

```python
import asyncio
from bee2bee import P2PNode

async def main():
    # Initialize P2P entrypoint
    node = P2PNode()
    await node.start()
    
    # Discover providers currently serving the requested model
    providers = await node.discover_providers(model="llama3")
    if not providers:
        print("No llama3 providers found on the mesh")
        return

    # Route the request to the first discovered provider
    response = await node.request_generation(
        provider_id=providers[0].id,
        prompt="Synthesize a response for decentralized governance."
    )
    print(f"P2P Intelligence: {response['text']}")

asyncio.run(main())
```

### Interactive Learning
Check out our [Jupyter Guide](notebook/developing_with_p2p.ipynb) for a step-by-step walkthrough of the P2P architecture.

---

## 🌐 FastAPI Integration
Bee2Bee nodes now support a native **FastAPI** sidecar, which lets you monitor your node's health and peer list via standard HTTP requests (see the example after the list):

-   **Health Check**: `GET http://localhost:8000/`
-   **Peers List**: `GET http://localhost:8000/peers`
-   **Cloud Inference**: `POST http://localhost:8000/chat`
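
For example, the sidecar can be polled with `httpx` (already a Bee2Bee dependency). The response shapes and the `/chat` payload field below are assumptions for illustration; check your node's `/docs` page for the authoritative schema.

```python
import httpx

BASE = "http://localhost:8000"  # the --api-port your node was launched with

with httpx.Client(base_url=BASE, timeout=30.0) as client:
    # Health check: the root endpoint answers if the sidecar is up
    print(client.get("/").json())

    # Peer list: the nodes this node currently sees on the mesh
    print(client.get("/peers").json())

    # Inference: the {"prompt": ...} payload is an assumption, see /docs
    print(client.post("/chat", json={"prompt": "Hello, mesh!"}).json())
```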

---

## 🤝 Community & Support
- **Developer Support**: +201211268396
- **Dashboard**: [Bee2Bee Live Map](https://chatit.cloud)

Built with ❤️ by the ConnectIT Team for a decentralized future.

## 🚀 Full Setup Guide

### 1. Main Point (The Supervisor)

This runs the core API server. Every network needs at least one Main Point.

**Run Locally:**
```bash
# Starts the API on Port 4002 and P2P Server on Port 4003
python -m bee2bee api
```
*Output:*
-   **API**: `http://127.0.0.1:4002` (Docs: `/docs`)
-   **P2P**: `ws://127.0.0.1:4003`
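
A quick way to confirm both endpoints are reachable before attaching workers, using the `httpx` and `websockets` packages Bee2Bee already depends on (this only verifies connectivity, not the P2P protocol itself):

```python
import asyncio
import httpx
import websockets

async def check() -> None:
    # API: the interactive docs page should return HTTP 200
    async with httpx.AsyncClient() as client:
        r = await client.get("http://127.0.0.1:4002/docs")
        print("API:", r.status_code)

    # P2P: a successful WebSocket handshake means the server is listening
    async with websockets.connect("ws://127.0.0.1:4003"):
        print("P2P: handshake OK")

asyncio.run(check())
```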

---

### 2. Desktop App (The Dashboard)

A modern UI to visualize the network and chat with models.

**Prerequisites:** Node.js 20+

**Run Locally:**
```bash
cd electron-app
npm install      # First time only
npm run dev
```
*Usage:*
- Open the App.
- It connects to `http://localhost:4002` by default.
- Go to "Chat" to talk to available providers.
- See [MANUAL_TESTING.md](MANUAL_TESTING.md) for detailed testing steps.

---

### 3. Worker Node (The AI Provider)

Run this on any machine (or the same machine) to share an AI model.

**Step A: Configure** (Tell the node where the Main Point is)
```bash
# If running on the SAME machine as Main Point:
python -m bee2bee config bootstrap_url ws://127.0.0.1:4003

# If running on a DIFFERENT machine (LAN/WAN):
python -m bee2bee config bootstrap_url ws://<MAIN_POINT_IP>:4003
```

**Step B: Deploy Model**

**Option 1: Hugging Face (Default)**
Uses `transformers` to run models like GPT-2, Llama, etc. on CPU/GPU.
```bash
# Deploys distilgpt2 (CPU friendly)
python -m bee2bee deploy-hf --model distilgpt2
```

**Option 2: Ollama (Universal)**
Uses your local Ollama instance to serve models like Llama3, Mistral, Gemma, etc.
*Prerequisite: Install and run [Ollama](https://ollama.com)*
```bash
# Serve a model (e.g., llama3)
python -m bee2bee serve-ollama --model llama3
```
*Note: This creates a separate peer node on your machine.*

**Option 3: Remote Inference (Cloud)**
Execute models entirely on Hugging Face's servers via the Inference API. **No GPU required** on your local machine!
```bash
# Deploys Zephyr 7B (Runs on Hugging Face Cloud)
python -m bee2bee deploy-hf --model HuggingFaceH4/zephyr-7b-beta --remote --token YOUR_HF_TOKEN
```
*Note: The node acts as a proxy/gateway to the remote model.*

---

### 4. Bee2Bee Cloud (Google Colab)

Run a powerful node on Google's free GPU infrastructure using our **Hybrid Tunneling** setup.

**Notebook Location**: `notebook/ConnectIT_Cloud_Node.ipynb`

**How it Works (Hybrid Tunneling):**
To bypass Colab's network restrictions, we use two tunnels:
1.  **API Tunnel (Cloudflare)**: Provides a stable HTTPS URL (`trycloudflare.com`) for the Desktop App to connect to.
2.  **P2P Tunnel (Bore)**: Provides a raw WebSocket URL (`bore.pub`) for other Worker Nodes to connect to.

**Instructions:**
1.  Open the Notebook in Google Colab.
2.  Run **"Install Dependencies"**.
3.  Run **"Configure Hybrid Tunnels"** (Installs `cloudflared` & `bore`).
    - *Wait for it to output the URLs.*
4.  Run **"Run Bee2Bee Node"**.
    - *It automatically configures itself to announce the Bore address.*

**Connecting your Desktop App to Colab:**
1.  Copy the **Cloudflare URL** (e.g., `https://funny-remote-check.trycloudflare.com`).
2.  Open Desktop App -> Settings.
3.  Paste into "Main Point URL".

---

## 🛠 Advanced Configuration

### Environment Variables
You can override settings using environment variables (a loading sketch follows the table):

| Variable | Description | Default |
| :--- | :--- | :--- |
| `BEE2BEE_PORT` | Port for P2P Server | `4003` |
| `BEE2BEE_HOST` | Bind Interface | `0.0.0.0` |
| `BEE2BEE_ANNOUNCE_HOST` | Public Hostname (for NAT/Tunnel) | Auto-detected |
| `BEE2BEE_ANNOUNCE_PORT` | Public Port (for NAT/Tunnel) | Auto-detected |
| `BEE2BEE_BOOTSTRAP` | URL of Main Point | `None` |
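
Because `python-dotenv` is installed as a dependency, one convenient pattern is to keep these in a `.env` file and load it before starting a node. A minimal sketch (whether Bee2Bee auto-loads `.env` itself is not assumed here):

```python
# .env in the working directory, e.g.:
#   BEE2BEE_BOOTSTRAP=ws://bootstrap.chatit.cloud:4003
#   BEE2BEE_ANNOUNCE_HOST=my-public-host.example.com
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env into os.environ (won't override existing vars)
print(os.getenv("BEE2BEE_BOOTSTRAP"))
```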

### Troubleshooting
-   **"Connection Refused"**: Ensure the `bootstrap_url` is correct and reachable (try `ping`).
-   **"0 Nodes Connected"**: Check if the Worker Node can reach the Main Point's P2P address (WSS).
-   **Colab Disconnects**: Ensure the Colab tab stays open. Tunnels change if you restart the notebook.

---

## 🤝 Contributing
Contributions are welcome! Please open an issue or PR on [GitHub](https://github.com/Chatit-cloud/BEE2BEE).
