Metadata-Version: 2.2
Name: macfleet
Version: 2.1.1
Summary: Pool Apple Silicon Macs for distributed compute and ML training
Author: MacFleet Contributors
License: MIT
Project-URL: Homepage, https://github.com/vikranthreddimasu/MacFleet
Project-URL: Documentation, https://github.com/vikranthreddimasu/MacFleet#readme
Project-URL: Repository, https://github.com/vikranthreddimasu/MacFleet
Project-URL: Issues, https://github.com/vikranthreddimasu/MacFleet/issues
Keywords: distributed,machine-learning,apple-silicon,mps,mlx,pytorch,training,gpu-pooling,data-parallel
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: MacOS
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: zeroconf>=0.131.0
Requires-Dist: rich>=13.0.0
Requires-Dist: click>=8.1.0
Requires-Dist: numpy>=1.24.0
Requires-Dist: msgpack>=1.0.0
Requires-Dist: cloudpickle>=3.0.0
Provides-Extra: torch
Requires-Dist: torch>=2.1.0; extra == "torch"
Provides-Extra: mlx
Requires-Dist: mlx>=0.5.0; extra == "mlx"
Provides-Extra: yaml
Requires-Dist: pyyaml>=6.0; extra == "yaml"
Provides-Extra: all
Requires-Dist: torch>=2.1.0; extra == "all"
Requires-Dist: mlx>=0.5.0; extra == "all"
Requires-Dist: pyyaml>=6.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23.0; extra == "dev"
Requires-Dist: ruff>=0.3.0; extra == "dev"
Requires-Dist: mypy>=1.8.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"

# MacFleet

**Pool Apple Silicon Macs into a distributed ML training cluster.**

Turn spare MacBooks, Mac minis, and Mac Studios into one big GPU. MacFleet connects them over Thunderbolt, Ethernet, or WiFi and splits training across all of them automatically.

```
  macfleet join              macfleet join            macfleet join
 ┌──────────────┐          ┌──────────────┐          ┌──────────────┐
 │  MacBook Pro │◄────────►│  MacBook Air │◄────────►│  Mac Studio  │
 │  M4 Pro      │  WiFi /  │  M4          │  WiFi /  │  M4 Ultra    │
 │  16 GPU cores│  ETH /   │  10 GPU cores│  ETH /   │  60 GPU cores│
 │  48 GB RAM   │  TB4     │  16 GB RAM   │  TB4     │  192 GB RAM  │
 └──────────────┘          └──────────────┘          └──────────────┘
         ▲                          ▲                          ▲
         └──────────────────────────┴──────────────────────────┘
                        Ring AllReduce (gradient sync)
```

## Install

```bash
pip install macfleet              # core
pip install "macfleet[torch]"     # + PyTorch
pip install "macfleet[mlx]"       # + Apple MLX
pip install "macfleet[all]"       # everything
```

The extras are quoted so that zsh, the default macOS shell, doesn't treat the brackets as a glob pattern.

## Quick Start

**1. Join the pool** (run on each Mac):

```bash
macfleet join
```

No config files, no IP addresses. Macs find each other automatically via mDNS/Bonjour.
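
Under the hood this is ordinary mDNS service browsing via the `zeroconf` dependency. A minimal sketch of the pattern, assuming a `_macfleet._tcp.local.` service type for illustration (not necessarily what MacFleet actually registers):

```python
# Sketch: browse the LAN for mDNS-advertised peers with python-zeroconf.
# The service type below is an illustrative assumption.
import time
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class PeerListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info and info.parsed_addresses():
            print(f"found peer {name} at {info.parsed_addresses()[0]}:{info.port}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"peer left: {name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass  # required by the ServiceListener interface

zc = Zeroconf()
browser = ServiceBrowser(zc, "_macfleet._tcp.local.", PeerListener())
try:
    time.sleep(10)  # browse for ten seconds
finally:
    zc.close()
```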

**2. Train:**

```python
import macfleet
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Example data: 10k flattened 28x28 images with integer class labels
X_train = torch.randn(10_000, 784)
y_train = torch.randint(0, 10, (10_000,))

with macfleet.Pool() as pool:
    result = pool.train(model=model, dataset=(X_train, y_train), epochs=10)
```

## Features

- **Dual engine** — PyTorch (MPS) and Apple MLX, same pool infrastructure
- **Zero config** — mDNS discovery, no coordinator setup, no config files
- **Adaptive compression** — auto-selects TopK + FP16 based on link speed (1x–200x reduction)
- **Heterogeneous scheduling** — faster Macs get bigger batches, with adjustment for thermal throttling (sketched after this list)
- **Secure by default** — auto-generated fleet tokens, HMAC mutual auth, mandatory TLS, gradient validation
- **Framework-agnostic core** — communication layer uses only numpy, never imports torch or mlx
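
To make the scheduling bullet concrete, here is a minimal sketch of proportional batch splitting. The heuristic and the throughput numbers are illustrative assumptions, not MacFleet's actual policy:

```python
# Sketch: split a global batch across nodes in proportion to measured
# throughput (samples/sec). Heuristic and numbers are assumptions.
def split_batch(global_batch: int, samples_per_sec: dict[str, float]) -> dict[str, int]:
    total = sum(samples_per_sec.values())
    shares = {node: round(global_batch * rate / total)
              for node, rate in samples_per_sec.items()}
    # Hand any rounding remainder to the fastest node
    fastest = max(samples_per_sec, key=samples_per_sec.get)
    shares[fastest] += global_batch - sum(shares.values())
    return shares

print(split_batch(512, {"studio": 4000.0, "pro": 1800.0, "air": 1000.0}))
# {'studio': 301, 'pro': 136, 'air': 75}  -- sums to 512
```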

## Security

Security is enabled by default. The first `macfleet join` auto-generates a fleet token and saves it to `~/.macfleet/fleet-token`:

```bash
macfleet join                    # auto-generates token, prints it
macfleet join --token <token>    # join with a specific token (copy from first node)
macfleet join --fleet-id lab     # isolate by fleet name
macfleet join --open             # disable security (not recommended)
```

What's protected:
- **Fleet isolation** — nodes with different tokens are invisible to each other on the network
- **Mutual authentication** — HMAC-SHA256 challenge-response on every connection (see the sketch after this list)
- **Encryption** — TLS enabled automatically (mandatory with auth)
- **Authenticated heartbeat** — HMAC-signed liveness probes, replay-resistant
- **Gradient validation** — rejects NaN, Inf, and extreme magnitudes (anti-poisoning)
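
For the curious, mutual authentication here is the classic HMAC challenge-response pattern. A minimal sketch of that pattern using the shared fleet token as the key (MacFleet's actual wire format is not shown):

```python
# Sketch: HMAC-SHA256 challenge-response with a shared secret.
import hashlib
import hmac
import secrets

FLEET_TOKEN = b"contents of ~/.macfleet/fleet-token"  # shared secret

def make_challenge() -> bytes:
    return secrets.token_bytes(32)  # fresh random nonce per connection

def respond(challenge: bytes, token: bytes = FLEET_TOKEN) -> bytes:
    return hmac.new(token, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, token: bytes = FLEET_TOKEN) -> bool:
    expected = hmac.new(token, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = make_challenge()
assert verify(challenge, respond(challenge))  # peer proves it holds the token
```

A fresh nonce per connection is what makes the exchange replay-resistant: a captured response is useless against any other challenge.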

## CLI

```
macfleet join       Join the pool (auto-discovers peers)
macfleet status     Show pool members and network info
macfleet info       Show local hardware profile
macfleet train      Run training (demo or custom script)
macfleet bench      Benchmark compute, network, or allreduce
macfleet diagnose   System health check
```

## How It Works

MacFleet uses **data parallelism**: every Mac holds a full copy of the model, trains on a weighted portion of the data, and averages gradients via Ring AllReduce after each step.
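
To see why every node ends up with identical averaged gradients, here is a toy in-process simulation of Ring AllReduce (pure numpy, no networking; the chunk indexing follows the textbook algorithm, not necessarily MacFleet's internals):

```python
# Sketch: simulate Ring AllReduce across n in-memory "nodes".
import numpy as np

def ring_allreduce(grads: list[np.ndarray]) -> list[np.ndarray]:
    n = len(grads)
    chunks = [list(np.array_split(g.astype(np.float64), n)) for g in grads]

    # Reduce-scatter: after n-1 steps node i holds the full sum of chunk (i+1) % n
    for step in range(n - 1):
        sends = [(i, (i + 1) % n, (i - step) % n) for i in range(n)]
        payloads = [chunks[src][c].copy() for src, _, c in sends]
        for (src, dst, c), payload in zip(sends, payloads):
            chunks[dst][c] = chunks[dst][c] + payload

    # Allgather: circulate each completed chunk once around the ring
    for step in range(n - 1):
        sends = [(i, (i + 1) % n, (i + 1 - step) % n) for i in range(n)]
        payloads = [chunks[src][c].copy() for src, _, c in sends]
        for (src, dst, c), payload in zip(sends, payloads):
            chunks[dst][c] = payload

    return [np.concatenate(node) / n for node in chunks]  # sum -> average

grads = [np.arange(8.0) * (i + 1) for i in range(3)]
for out in ring_allreduce(grads):
    assert np.allclose(out, np.mean(grads, axis=0))
```

Each node transmits roughly 2(n−1)/n times the gradient size in total, independent of cluster size, which is what makes the ring topology attractive.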

| Network       | Compression     | 100 MB of gradients becomes |
|---------------|-----------------|-------------------------|
| Thunderbolt 4 | None            | 100 MB                  |
| Ethernet      | TopK 10% + FP16 | ~5 MB                   |
| WiFi          | TopK 1% + FP16  | ~500 KB                 |
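
The TopK + FP16 rows boil down to keeping only the largest-magnitude gradient entries and halving their precision. A minimal numpy sketch, assuming magnitude-based selection (MacFleet's exact encoding may differ):

```python
# Sketch: TopK + FP16 gradient compression, assuming magnitude-based TopK.
import numpy as np

def compress(grad: np.ndarray, k_frac: float):
    flat = grad.ravel()
    k = max(1, int(flat.size * k_frac))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest magnitudes
    return idx.astype(np.uint32), flat[idx].astype(np.float16), grad.shape

def decompress(idx, vals, shape):
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = vals.astype(np.float32)            # dropped entries stay zero
    return flat.reshape(shape)

g = np.random.randn(1000, 100).astype(np.float32)  # 400 KB of FP32 gradients
idx, vals, shape = compress(g, k_frac=0.01)        # the WiFi row: TopK 1% + FP16
print(idx.nbytes + vals.nbytes)                    # ~6 KB on the wire
```

The indices travel alongside the values, so real-world ratios depend on how both are encoded.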

## Requirements

- macOS with Apple Silicon (M1/M2/M3/M4)
- Python 3.11+
- PyTorch 2.1+ or MLX 0.5+

## Development

```bash
git clone https://github.com/vikranthreddimasu/MacFleet.git
cd MacFleet
pip install -e ".[dev,all]"
make test       # 373 tests
make lint       # ruff + mypy
```

## License

MIT
