Metadata-Version: 2.4
Name: deepfund
Version: 0.1.0
Summary: Python SDK for deep.fund — the statistical-rigor research scaffold for systematic strategies.
Project-URL: Homepage, https://deep.fund
Project-URL: Documentation, https://api.deep.fund/docs
Project-URL: Repository, https://github.com/simu-ai/deep.fund
Author: deep.fund
License: MIT
License-File: LICENSE
Keywords: backtest,finance,quant,sharpe,statistics
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Financial and Insurance Industry
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Office/Business :: Financial :: Investment
Requires-Python: >=3.10
Requires-Dist: httpx>=0.27
Requires-Dist: pandas>=2.0
Requires-Dist: pydantic>=2.7
Provides-Extra: dev
Requires-Dist: build>=1.2; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-httpx>=0.30; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Description-Content-Type: text/markdown

# deepfund

Python SDK for [deep.fund](https://deep.fund) — the statistical-rigor research scaffold for systematic trading strategies.

`deepfund` is a thin client over the canonical `/v1` API at `api.deep.fund`. The same surface that powers the deep.fund web app is exposed here in idiomatic Python: strategies, versions, datasets, and typed Pydantic models for every test result in the catalogue.

```bash
pip install deepfund
```

## Quickstart

```python
import deepfund as df
import pandas as pd

df.configure(api_key="dfp_...")  # or set DEEPFUND_API_KEY in your env

# 1. Create a strategy
s = df.Strategy.create(name="Momentum", description="cross-sectional 1m")

# 2. Add a version with a returns DataFrame (date + value columns)
returns = pd.read_csv("backtest.csv", parse_dates=["date"])
v = s.add_version(name="v1", returns=returns)

# 3. Run the full v1 statistical catalogue (8 tests, blocks until done)
t = v.run_tearsheet()

print(f"Sharpe: {t.sharpe.value:.2f}  (95% CI: {t.sharpe.ci_low:.2f}, {t.sharpe.ci_high:.2f})")
print(f"PSR:    {t.psr.psr:.2%}")
print(f"Deflated Sharpe: {t.deflated_sharpe.deflated_sharpe:.2f}")
print(f"Placebo p-value: {t.placebo.p_value:.3f}")
```

## Authentication

`deepfund` authenticates with an API key in the `Authorization: Bearer dfp_...` header.

To get a key:
- Sign in to [deep.fund](https://deep.fund), go to **/api-keys**, and create one. The secret is shown only once, so copy it immediately.
- Or, programmatically: `df.ApiKey.create(name="my-bot")` returns the secret on the response.

Set the key for your session in any of three ways:

```python
df.configure(api_key="dfp_...")              # explicit
df.login(api_key="dfp_...")                  # alias of configure()
# or set the env var before importing — the SDK auto-configures
#   export DEEPFUND_API_KEY=dfp_...
#   export DEEPFUND_BASE_URL=https://api.deep.fund   # optional
```

To work with multiple accounts in a single process, instantiate `df.Client` objects directly and call the lower-level methods on them.

## Resources

| Object       | Use it for                                                                    |
| ------------ | ----------------------------------------------------------------------------- |
| `Strategy`   | top-level container — name, description, list of versions                     |
| `Version`    | one snapshot of returns + (optional) benchmark; owns the tearsheet            |
| `Tearsheet`  | structured access to one version's 8 test results, each as a typed model      |
| `ApiKey`     | mint / list / revoke API keys                                                 |
| `df.tests`   | list the test catalogue and look up formulas                                  |
| `df.me()`    | the authenticated user                                                        |

### Strategies and versions

```python
df.Strategy.list()                              # newest first
df.Strategy.get(strategy_id)                    # one strategy
s.versions()                                    # list its versions

s.add_version(
    name="v2",
    returns=pd.DataFrame({"date": [...], "value": [...]}),
    benchmark=pd.DataFrame({"date": [...], "value": [...]}),  # optional
    description="dropped illiquid names",
)
```

DataFrames must have a `date` column and a `value` column containing **period arithmetic returns** (not log returns). `return` is accepted as a synonym for `value`.
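For instance, to derive a valid upload from a daily close series (plain pandas; the `date`/`value` column names are the only contract):

```python
import pandas as pd

# Prices in, period arithmetic returns out. Only the "date" and
# "value" column names matter to the API.
prices = pd.DataFrame({
    "date": pd.date_range("2024-01-02", periods=5, freq="B"),
    "close": [100.0, 101.0, 99.99, 101.50, 102.01],
})

returns = pd.DataFrame({
    "date": prices["date"],
    "value": prices["close"].pct_change(),  # arithmetic, not log
}).dropna()  # first row is NaN; the API wants clean values
```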

### The Tearsheet

`v.run_tearsheet()` enqueues the full v1 catalogue and blocks until every test reaches a terminal state (`complete` or `failed`). Each test gets a typed property:

```python
t = v.run_tearsheet(timeout=300)

t.sharpe              # SharpeJkResult: value, ci_low, ci_high, se, n_obs, ...
t.psr                 # PsrResult: sharpe, psr, n_obs, benchmark_sr, ...
t.deflated_sharpe     # DeflatedSharpeResult: discounted-for-multiple-testing
t.max_drawdown        # MaxDrawdownResult: realized + bootstrap CI
t.placebo             # PlaceboResult: realized_sharpe + Monte-Carlo p_value
t.info_ratio          # InfoRatioComputed | InfoRatioSkipped (no benchmark)
t.distribution        # DistributionResult: skew, kurt, Jarque-Bera
t.bootstrap_sharpe    # BootstrapSharpeResult: stationary-bootstrap CI
```

Each property returns `None` if that test hasn't completed (use `t.tearsheet()` for non-blocking status, or inspect `t.status()` and `t.runs()`).

The `info_ratio` property is a discriminated union — when no benchmark was uploaded, you get an `InfoRatioSkipped` with a `reason`; otherwise an `InfoRatioComputed`. Check with `isinstance(t.info_ratio, df.InfoRatioSkipped)` or test `t.info_ratio.status == "skipped"`.
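Because each result is a plain Pydantic model, the usual v2 tooling (`.model_dump()`, `.model_dump_json()`, validation on construction) applies. A stand-in sketch mirroring the documented `SharpeJkResult` fields (a hand-written illustration, not the SDK's own class):

```python
from pydantic import BaseModel

class SharpeJkResultSketch(BaseModel):
    """Stand-in mirroring the documented SharpeJkResult fields."""
    value: float
    ci_low: float
    ci_high: float
    se: float
    n_obs: int

# Construction validates types; serialization is one call away.
r = SharpeJkResultSketch(value=1.42, ci_low=0.81, ci_high=2.03, se=0.31, n_obs=252)
print(r.model_dump_json())
```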

For details on each test (formulas, assumptions, when to use it), see the catalogue:

```python
for info in df.tests.list():
    print(info.name, "—", info.description)

print(df.tests.get("deflated_sharpe_v1").long_description)
```

Or browse the rendered formulas at <https://api.deep.fund/docs>.

### API keys

```python
df.ApiKey.list()                               # active and revoked
key = df.ApiKey.create(name="ci-bot")          # secret only on .secret
print(key.secret)                              # copy this — once
key.revoke()                                   # idempotent
```

## Errors

All API errors are typed and inherit from `df.APIError`. Catch the base class for any API failure, or specific subclasses for finer control:

```python
try:
    s = df.Strategy.create(name="dup")
except df.ConflictError as e:
    print(e.code, e.message)        # e.g. "duplicate_name"
except df.ValidationError as e:
    print(e.fields)                 # per-field errors from the API
except df.APIError as e:
    print(e.status_code, e.code)
```

| Exception          | HTTP status |
| ------------------ | ----------- |
| `AuthError`        | 401         |
| `ForbiddenError`   | 403         |
| `NotFoundError`    | 404         |
| `ConflictError`    | 409         |
| `ValidationError`  | 422         |
| `RateLimitError`   | 429         |
| `ServerError`      | 5xx         |

Every exception carries `.code`, `.message`, `.status_code`, and (for 422s) `.fields` from the API's error envelope.

## Architecture notes

deep.fund is API-first. The web app, this SDK, and any future mobile or third-party clients all consume the same `/v1` surface. If you need something the SDK doesn't expose, check <https://api.deep.fund/docs> — the endpoint is probably already there, and you can drop down to `df.Client.get(...)` to call it directly.

## Links

- [deep.fund](https://deep.fund) — the product
- [api.deep.fund/docs](https://api.deep.fund/docs) — full API reference with rendered formulas
- [GitHub](https://github.com/simu-ai/deep.fund) — source

## License

MIT.
