Metadata-Version: 2.4
Name: phi_engine
Version: 1.0.0
Summary: Golden Continuum φ-Engine — a unified calculus engine using factorial structure exploitation. Arbitrary precision in constant time.
Author-email: Purrplexia <mathsisbeautiful@proton.me>
License-Expression: GPL-3.0-or-later
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: mpmath>=1.3.0
Dynamic: license-file

# φ-Engine (Golden Continuum Executable Core)

![Status: Active Research](https://img.shields.io/badge/Status-Active%20Research-42FEEC?style=rounded-square)
![Exact Calculus](https://img.shields.io/badge/Exact%20Rational-Calculus-hotpink?style=round-square)
![PGP Signed](https://img.shields.io/badge/PGP%20Signed-verify-hotpink.svg?style=rounded-square)
[![PyPI](https://img.shields.io/pypi/v/phi-engine.svg?color=hotpink)](https://pypi.org/project/phi-engine/)
![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)
![License: CC-BY-SA 4.0](https://img.shields.io/badge/License-CC--BY--SA--4.0-blue)
![Python 3.10+](https://img.shields.io/badge/python-3.10-blue)

A constructive, gridless **exact** golden-ratio calculus engine using factorial moment annihilation and **exact rational** β-stream contraction.

---

![Phi Engine Demo](https://raw.githubusercontent.com/Purrplexia/LettersToMyHeroes/main/Cantor_GoldenContinuum/code/phi_engine/phi-engine.gif)

---

```bash
pip install phi-engine
```

---

A high-precision, gridless, analytic calculus engine based on factorial contraction layers
and β-stream evaluation. The φ-engine computes derivatives, and integrals
as **exact rationals** with superfactorial precision growth and effectively constant
evaluation cost.

---

Dependencies: [mpmath >= 1.3.0]

Quick Demo:  
```python
from mpmath import mp
from phi_engine import PhiEngine
mp.dps = 30
eng = PhiEngine()
print(eng.differentiate(lambda x: mp.cos(x**2), 2))
```

---

## φ-Calculus — Engineering Summary

φ-Calculus is a constructive, mathematically exact contraction engine based on
factorial layers. It introduces zero internal numerical error. The only source
of inaccuracy is the backend library evaluating f(c ± φᵢ).

Each factorial layer uses an intrinsic displacement φᵢ = 1/(Fᵢ⁺!).  
Where Fᵢ⁺ is the (i-1)th Fibonacci number.  
These radii are exact rationals, and all β-weights are exact rationals from the moment laws.
No step size exists. All algebra is performed over ℚ.

The result: the φ-operators extract the exact analytic contribution encoded in
the samples they receive. Backend precision determines only how many digits
can be printed, not how many digits are correct.

---

## Where Numerical Error Actually Comes From

φ-Calculus does not approximate internally. All error comes from:

1. Backend evaluation of f(c + φᵢ) and f(c − φᵢ)
2. The numeric precision of those evaluations
3. Rounding limits of IEEE-754 / MPFR / etc.
4. Decimal conversion of the final rational contraction

φ-Calculus never amplifies rounding error. It only contracts the values given.

If backend evaluations were exact, φ-Calculus would output fully exact
derivatives and integrals at arbitrary depth.

---

## Notation Reference

The letter uses φᵢ.  
The engine historically used xᵢ. They are the same quantity.

$$
x_i = \phi_i^2 = \frac{1}{(F_i^+!)^2}
$$

This naming mismatch is legacy and will be unified. All mathematics is identical.

---

## Moment Laws and Exactness Model

The φ-operators are defined by finite moment systems involving φᵢ².

Derivative:
$$
\sum_{i=1}^{N} \beta_i \ \phi_i^{2\ell} = \delta_{\ell 0}
$$

Integral:
$$
\sum_{i=1}^{N} \beta_i \ \phi_i^{2\ell} = \frac{1}{2\ell+1}
$$

The βᵢ are exact rational numbers obtained from solving these algebraic systems.
These identities force all Taylor terms through order (2N−2) to cancel exactly
for every analytic function.

The first possible surviving Taylor term is order (2N−1).

---

## φ-Operator Definitions (Exact)

Derivative:
$$
D_N[f;c] =
\sum_{i=1}^{N} \beta_i^{(der)}
\cdot \frac{f(c + \phi_i) - f(c - \phi_i)}{2\phi_i}
$$

Integral over [a,b], center c = (a+b)/2:
$$
I_N[f;a,b] =
(b-a)\sum_{i=1}^{N} \beta_i^{(int)}
\cdot \frac{f(c + \phi_i) + f(c - \phi_i)}{2}
$$

Both are exact linear maps over ℚ applied to the numeric samples of f.

---

## Structural Error Bound (Guaranteed Digit Floor)

For analytic f:

$$
|E| \le M_f \ C_N \ (F_N^+!)^{-(2N-1)}
$$

Taking −log₁₀:

$$
\text{Guaranteed digits} \ge
(2N-1) \ \log_{10}(F_N^+!) \ - \ \log_{10}(M_f C_N)
$$

The second term is tiny relative to the superfactorial growth of F_N⁺!.

This guarantee is independent of:
• floating-point precision  
• function conditioning  
• rounding behavior  
• step-size choices (none exist)

---

## Practical Workflow

1. Compute β[N] once (exact rational).  
2. Evaluate f(c ± φᵢ) using any backend.  
3. Contract the samples with the β-weights.  
4. Print as many digits as the backend can reliably produce.

Precision controls only how many digits of the exact rational contraction
can be represented, not how many are correct.

---

## Superfactorial Structure (Why φ-Calculus Works)

Each layer uses:

$$ \phi_i^2 = \frac{1}{(F_i^+!)^2}
\approx \left(\left(\frac{\phi^i}{\sqrt{5}}\right)!\right)^{-2} 
\Longrightarrow \phi_{i+1} \ll \phi_i^{K} 
\text{ for every fixed } K. 
$$

Values collapse superfactorially:

| i | F⁺ᵢ | (F⁺ᵢ!)² | φᵢ²       |
|---|-----|---------|-----------|
| 1 | 1   | 1       | 1         |
| 2 | 2   | 4       | 0.25      |
| 3 | 3   | 36      | 2.7e-2    |
| 4 | 5   | 14400   | 6.9e-5    |
| 5 | 8   | 1.63e10 | 6.1e-11   |
| 6 | 13  | 4.3e22  | ~1e-23    |

φᵢ outpaces:  
• exponentials  
• factorials  
• any fixed-height factorial tower  

Only the first few layers contribute. Higher ones vanish analytically.

---

## Taylor Cancellation and the First Surviving Term (2N−1)

The β-stream enforces:  
• degree 0 matched  
• degrees 1 ... (2N−2) eliminated  

At the level of a single layer displacement φᵢ, the remaining Taylor contribution
after cancellation has the form

$$
f(c+\phi_i)
= f(c) + \frac{f^{(2N-1)}(\xi_i)}{(2N-1)!}\,\phi_i^{2N-1},
$$

for some point ξᵢ between c and (c + φᵢ). All terms of degree ≤ 2N−2 have been
removed structurally by the moment laws; the first potentially nonzero term is
order 2N−1.

Diagnostics reflect this exactly:
```text
φ-structural guarantee:
  • Exact through Taylor degree 2N−2
  • First possible surviving term: 2N−1
```

This is algebraic cancellation, not numerical approximation.

---

## Asymptotic Cost

Layer i contributes at scale ~1/(Fᵢ⁺!)².

Using Stirling and Fᵢ ≈ φⁱ/√5:

(Fᵢ⁺!)² > 10ᴰ  ⇒  i ≈ log D − log log D

Thus required layers grow as:

$$
O(\log D - \log \log D)
$$

In practice: 8–10 layers produce hundreds or thousands of certified digits.

---

## Full Proof

Complete mathematical foundation is given in:

**[Letter to Cantor — Full Proof (PDF)](https://github.com/Purrplexia/LettersToMyHeroes/blob/main/Cantor_GoldenContinuum/LetterToCantor.pdf)**  

---

## ▶ Quick Start Demos

### Install
```bash
pip install phi-engine
```

### Minimal Example

```python
from phi_engine import PhiEngine
from mpmath import mp

# Instantiate with defaults
eng = PhiEngine()

# Function and its true derivative to compare
f = lambda x: mp.cos(x*x)
f_prime = lambda x: -2*x*mp.sin(x*x)

# Set dps for mp evaluations raise engine_dps and fib_count for more digits >;)
mp.dps = 200

# Target point
x0 = mp.mpf('0.25')

# Evaluate derivative using φ-engine contraction
est = eng.differentiate(f, x0)

print("Estimated f'(x0):", mp.nstr(est, 12))
print("True value      :", mp.nstr(f_prime(x0), 12))
print("Abs error       :", mp.nstr(mp.fabs(est - f_prime(x0)), 12))
```
  
Result:
```text
Estimated f'(x0): -0.0312296589212
True value      : -0.0312296589212
Abs error       : 2.70462665204e-162
```

At first glance this looks trivial — a closed-form derivative evaluated beyond machine precision.
But it wasn't a closed form. It was computed using **exact β-stream contraction** from the default factorial ladder:

* **fib_count = 9**
* **8 symmetric layers** $(F_i!)^{-2}$ from the Fibonacci ladder.
* **8 function evaluations**

With those defaults, the φ-engine routinely obtains **150–200 digits** of stability on analytic functions.
No step size, **no grids**, no tuning, no quadrature, no symbolic solving — just factorial moment symmetries.  

This level of precision is **not achievable** with any finite-difference or grid-based method.
The accuracy comes entirely from the β-moment relations, not from numerical finesse.

$$
\boxed{\textbf{8 evaluations} \longrightarrow \textbf{162 digits of accuracy}}
$$

---

### Extreme Example

This example is *not* representative of normal usage.
It simply demonstrates how fast the factorial–Fibonacci contraction grows.

With 16 factorial layers F⁺₁₅! = 987!, the φ-engine internally stabilizes
~**69,430 digits** of precision while evaluating a smooth analytic derivative.

```python
from phi_engine import PhiEngine, PhiEngineConfig
from fractions import Fraction
from mpmath import mp

# Initialize state configuration
cfg = PhiEngineConfig(
    base_dps=1000,
    fib_count=16,
    timing=True,
    return_diagnostics=True,
    show_error=True,
    per_term_guard=True,    # Adaptive precision (explained in next demo)
    max_dps=80000
)

mp.dps = 100000  # f(x) evaluation dps not φ-Engine dps

# Instantiate the engine from the config
eng = PhiEngine(cfg)

# Define an analytic function sin(septillion x**2) for fun
SEPT = 1_000_000_000_000_000_000_000_000
f = lambda x: mp.sin(SEPT * x**2)
f_truth = lambda x: 2 * SEPT * x * mp.cos(SEPT * x**2)

x_0 = mp.mpf('0.25')
# Compute operator call
res, diag = eng.differentiate(f, x_0, name="sin(septillion x^2)")

# Inject context into the engine for report generation
diag["error"] = abs(res - f_truth(x_0))
diag["operation"] = "Differentiation"
diag["x0"] =  x_0
diag["result"] = res

# Call engine report method
eng.report(diag)
```
After about 6 seconds on a decent gaming PC:
```text
======================================================================
φ-Engine Diagnostic Report
======================================================================
Operation          : Differentiation
Function           : sin(septillion x^2)
Result             : 4.639486276917095e+22
Fib Number         : 16
Terms evaluated    : 15
Max used precision : 69430 dps
Hit precision cap  : No
Total time         : 5.920155 s
Absolute error     : 0.0000000000000000e+00
φ-structural guarantee:
  • Exact through Taylor degree 2N−2 = 28.
  • First possible surviving term: order 2N−1 = 29.
  • Higher Taylor terms are killed superfactorially
    by powers of xᵢ = 1/(F⁺ᵢ!)².
To hide this footer set suppress_guarantee=True in PhiEngineConfig.
======================================================================
```
That is ≥ 100k digit bitwise agreement vs the closed form evaluation from mpmath.

**Why the demo takes 6s?**
The runtime in this extreme example is not dominated by the φ-Engine itself.
The differentiator must evaluate your function at extremely high precision:
```python
# ΔF adaptive passes the user-supplied function F_eval
deltaF, used_dps = _eval_deltaF_adaptive(F_eval, ...)

# inside ΔF evaluation
with mp.workdps(dps):
    h_mp = mp.mpf(h_frac.numerator) / mp.mpf(h_frac.denominator)
    cur = F_eval(x0 + h_mp) - F_eval(x0 - h_mp)
```
At `max_dps = 80_000`, mpmath performs hundreds of thousands of 80k-digit big-float ops, twice per layer.  
This accounts for most of the runtime.  
The φ-Engine's internal synthesis (even at fib_count = 16, using the (987!) factorial ladder) takes only a few seconds if not milliseconds.  
The heavy part is evaluating the user's function at extreme precision.  

### Precision Efficiency

Even though the verification target was computed with `mp.dps=100000`,  
the φ-Engine itself only required:  
`Max used precision: 69430 dps`  
Yet it still achieved:  
Absolute error = 0 (over the entire 100,000-digit verification window)

That means φ-Engine produced 100k correct digits,  
while internally only using ~69,430 digits.  
This is possible because the contraction is superfactorially stable  
and annihilates all Taylor terms up through degree 2N−2.

Numerical methods normally need **more** working precision than they output.  
φ-Engine needs **less**.

---

## 🤔 How Does the Engine Think?

It utilizes a user controlled `per_term_guard` which, when enabled, allows the engine to be self-aware of its precision.  

The engine performs analytic precision budgeting per factorial layer.  
From the engine code:
```python
# . . . inside main differentiation loop . . .

    if self._config.per_term_guard:                     # Checks for user controlled guard
        digits_beta = _digits_fraction_abs(betas[idx])  # Estimate decimal digits of |num/den|
        digits_m = _digits_factorial_int(m_int)         # Gets bit-length of integer factorial
        digits_h = _digits_from_h(h_frac)               # digits to resolve an O(h) difference uses log10(1/|h|)
        margin = 30
        needed_dps = min(
            self._config.max_dps,
            max(self._config.base_dps, digits_beta + digits_m + digits_h + margin)
        )
    else:
        needed_dps = self._config.base_dps
    needed_dps_list.append(needed_dps)

    # ΔF adaptive
    deltaF, used_dps = _eval_deltaF_adaptive(  # Symmetrics kill all even taylor terms
        F_eval,
        x0_mp,
        h_frac,
        start_dps=needed_dps,
        max_dps=self._config.max_dps,
        dps_step=self._config.dps_step,
    )
    used_dps_list.append(used_dps)
```
This stores a list of used and needed dps for diagnostics output.  

### The φ-Engine doesn't “guess” precision.
It analytically **computes the exact decimal precision required** for every factorial layer, based on:  
* β-coefficient bitlength
* Fibonacci factorial mass $F^+$!
* Layer size 1/(2Fᵢ⁺!)
* analytic moment laws

By running the following demo we can expose the brain of the engine:  
```bash
python -m phi_engine.examples.example_per_term_guard
```

Result:  
```text
==============================================================================
 φ-Engine Per-Term Precision Budget Demonstration (Derivative)
==============================================================================
Function : sin(x)
x0       : 0.25
Operation: Differentiation
Taylor Terms (annihilated): 6

Each row corresponds to one factorial layer in the symmetric ladder.

  f            m=f!           h (deriv)         |β|    needed_dps    used_dps
------------------------------------------------------------------------------
  1             1.0                 1.0       1e-34            71          71
  2             2.0                 0.5       1e-31            68          68
  3             6.0      0.166666666667       1e-28            66          66
  5           120.0    0.00833333333333       1e-20            69          69
  8         40320.0    2.48015873016e-5       1e-10            84          84
 13    6227020800.0   1.60590438368e-10        1e+0           136         136

φ-Engine derivative: 0.96891242171064478414
True derivative     : 0.96891242171064478414
Abs error           : 2.901730433e-49
==============================================================================
Note: per-term adaptive precision ensures deep-layer stability.
------------------------------------------------------------------
```
### *This is calculus as a compiler pass.*

Here you can see the adaptive precision guarding the exactness of each term.  
The β-coefficients explode in log scale, precision budget increases, and symmetrics are exact rationals.  
Fibonacci factorials super-explode to kill the tail, while the symmetrics annihilate terms up to order 2N-2 where N=6 is the number of taylor terms in the demo (layers). The first surviving term is of order 2N-1 = 11.  

#### Error only enters the system from the initial F_eval call and the float-conversion, both from using the mpmath library. 

---

### Integration Demo

```python
from phi_engine import PhiEngine, PhiEngineConfig
from fractions import Fraction
from mpmath import mp  # Optional import for truth comparison with abs err

# Initialize state configuration
cfg = PhiEngineConfig(
    base_dps=50,
    fib_count=9,
    timing=True,
    return_diagnostics=True,
    show_error=True,
    max_dps=1000
)

mp.dps = 500
# Instantiate the engine from the config
eng = PhiEngine(cfg)

# Define an analytic function (for integration f must be analytic on [a,b])
f = lambda x: mp.e**(-x**2)
# if integrating define an interval using python's Fraction module for rational representation
a = Fraction(0)
b = Fraction(1)

# Compute operator call
res, diag = eng.integrate(f, a, b, name="Gaussian", dyadic_depth=7)  # Info on dyadic depth below
# Build truth comparison using hardcoded antiderivative or mp.quad
truth = mp.sqrt(mp.pi) / 2 * mp.erf(1)

# Inject context into the engine for report generation
diag["error"] = abs(res - truth)
diag["operation"] = "Integration"
diag["interval"] = f"[{0}, {1}]"
diag["result"] = res

# Call engine report method
eng.report(diag)
```

Result:  
```text
======================================================================
φ-Engine Diagnostic Report
======================================================================
Operation          : Integration
Interval           : [0, 1]
Function           : Gaussian
Result             : 0.746824132812427
Fib Number         : 9
Terms evaluated    : 1024
Max used precision : 650 dps
Hit precision cap  : No
Total time         : 0.237425 s
Absolute error     : 2.7592639792695019e-47
φ-structural guarantee:
  • Exact through Taylor degree 2N−2 = 14.
  • First possible surviving term: order 2N−1 = 15.
  • Higher Taylor terms are killed superfactorially
    by powers of xᵢ = 1/(F⁺ᵢ!)².
To hide this footer set suppress_guarantee=True in PhiEngineConfig.
======================================================================
```

### Dyadic Paneling (Exact Local Integral → Global Numerical Integral)

The φ-Engine's integral operator is **exact on each local panel**:
every local contraction is a rational evaluation with superfactorial annihilation.

Global integration is done by **dyadic subdivision**:

1. Subdivide an interval into dyadic panels.
2. Run the exact φ-integral operator on each panel.
3. Accumulate rational results.
4. Convert to float only at the end.

```python
# Dyadic tiling of [a, b] into 2**d panels
N = 1 << dyadic_depth                  # number of panels
width = (b - a) / N                    # rational panel width

total = mp.mpf('0')

for j in range(N):
    a_j = a + j * width
    b_j = a_j + width
    # Each panel is integrated by the exact φ-local operator
    panel_value = phi_local_integral(f, a_j, b_j)   # conceptual call
    total += panel_value

return total
```

This is structurally similar to adaptive quadrature,
except the per-panel integral is **mathematically exact** for analytic inputs.  

### Also, critically, there is NO grid in φ-Engine
There are end points, but nothing between the end points is ever touched.  

Using any **non-dyadic** panel OR integration bound, introduces new prime divisors;
adding prime divisors dramatically destroys precision: ≥ 60% digit loss in my testing.  

This is **NOT** a trait unique to φ-Engine. This is a trait **ALL** numeric libraries  
suffer from without realizing why.

Simpson, Clenshaw–Curtis, FFT-based quadrature, Gaussian refinements,  
Romberg, Richardson, adaptive solvers, etc., all benefit from dyadic refinement.  

Traditionally, this phenomenon was blamed on machine artifacts:
    • binary hardware  
    • floating-point representation  
    • BigFloat rounding behavior  
    • stability of uniform dyadic meshes  
    • step-size heuristics  
    • better conditioning of polynomial fits  

### The φ-Engine's and my Letter shows the actual reason: **loss of prime control.**

Core structural facts:
    • Odd-power Taylor terms vanish by symmetry.  
    • Only even-power terms contribute to the φ-integral operator.  
    • Radii obey φᵢ^(2ℓ), which have exploding 2-adic valuation.  
    • A dyadic interval $(b-a)=M/2^d$ introduces **no new odd primes**.  
      All scaling remains inside the same prime-support as the factorial radii.  
    • A non-dyadic interval injects new odd primes into the expansion,  
      and those primes do **not** cancel against φᵢ^(2ℓ),  
      breaking prime control and destroying precision.

Other methods observed the symptoms and blamed binary hardware.  
The φ-Engine exposes the cause.

---

## Supports Mass Batch Runs

This demo shows how φ-Calculus performs over a batch of analytic functions using the same operator.

```python
from phi_engine import PhiEngine, PhiEngineConfig
from mpmath import mp


cfg = PhiEngineConfig(
    base_dps=50,
    fib_count=12,                # Enough to nail 6k digits
    timing=True,
    return_diagnostics=True,
    show_error=True,
    per_term_guard=True,         # adaptive precision brain
    max_dps=5000,                # to exceed the working precision used but less than global
    display_digits=12,           
    report_col_width=24,         # expandible col width
    header_keys=("global_dps", "num_fibs", "evaluation_point", "max_dps_used"),  # custom user keys
)

eng = PhiEngine(cfg)

# Batch test list in format List[Tuple(name, f, f_prime)]
tests = [ 
    # 1. Triple Exponential Growth (absurd entire-function curvature)
    ("exp(exp(exp(x^2)))",
     lambda x: mp.e ** (mp.e ** (mp.e ** (x ** 2))),
     lambda x: mp.e ** (mp.e ** (mp.e ** (x ** 2))) *
               mp.e ** (mp.e ** (x ** 2)) *
               mp.e ** (x ** 2) * (2 * x)),

    # 2. Ultra–High-Frequency Oscillation (1e24 x^2) lol...
    ("sin(1e24 * x^2)",
     lambda x: mp.sin(1_000_000_000_000_000_000_000_000 * x**2),
     lambda x: 2_000_000_000_000_000_000_000_000 *
               x * mp.cos(1_000_000_000_000_000_000_000_000 * x**2)),

    # 3. Bessel J0 of x^2 (deep special-function structure)
    ("besselj(0, x^2)",
     lambda x: mp.besselj(0, x**2),
     lambda x: -2*x * mp.besselj(1, x**2)),

    # 4. Erf(x^2) (analytic, smooth, stiff tails)
    ("erf(x^2)",
     lambda x: mp.erf(x**2),
     lambda x: 2*x*(2/mp.sqrt(mp.pi))*mp.e**(-(x**2)**2)),

    # 5. Mixed analytic: exp(log(x+3)*log(x+5))
    ("exp(log(x+3)*log(x+5))",
     lambda x: mp.e ** (mp.log(x+3) * mp.log(x+5)),
     lambda x: mp.e ** (mp.log(x+3)*mp.log(x+5)) *
               ((1/(x+3))*mp.log(x+5) + (1/(x+5))*mp.log(x+3))),

    # 6. Brutal curvature & oscillation: exp(-x^2)*cos(x^5)
    ("exp(-x^2) * cos(x^5)",
     lambda x: mp.e**(-x**2) * mp.cos(x**5),
     lambda x: -2*x*mp.e**(-x**2)*mp.cos(x**5) +
               mp.e**(-x**2)*(-mp.sin(x**5)*5*x**4)),
]

# Evaluation point
x0 = mp.mpf('0.25')

# Visible precision for truth printing (φ uses its own internal dps)
mp.dps = 6000

diags = []

used_dps_maxs_list = []
for label, f, f_truth in tests:
    # φ-Engine differentiation
    res, diag = eng.differentiate(f, x0, name=label)
    used_dps_maxs_list.append(diag.get("used_dps_max", 0))
    
    # Closed-form truth for comparison
    truth = f_truth(x0)
    abs_err = abs(res - truth)
    
    # Some things you can do with diagnostics
    diag.update({
        "function": label,
        "operation": "Differentiation",
        "result": res,
        "truth": truth,
        "error": abs_err,
        "global_dps": mp.dps,
        "num_fibs": eng.config.fib_count,
        "evaluation_point": x0
    })
    diags.append(diag)

diags[0].update({"max_dps_used": max(used_dps_maxs_list)})
eng.report(diags, batch=True)
```

After a few seconds you'll see:
```text
=========================================================================================
φ-Engine Batch Diagnostic Summary  (11-term factorial ladder)
=========================================================================================

Operation: Differentiation
Global dps: 6000
Num fibs: 12
Evaluation point: 0.25
Max dps used: 4959

-----------------------------------------------------------------------------------------
Function                  φ-time(s)                   Result                   AbsErr
-----------------------------------------------------------------------------------------
exp(exp(exp(x^2)))         0.267180            28.0284520744       0.000000000000e+00
sin(1e24 * x^2)            0.031423        4.63948627692e+22       0.000000000000e+00
besselj(0, x^2)            0.148513         -0.0156173718471       0.000000000000e+00
erf(x^2)                   0.225760           0.561990016813       0.000000000000e+00
exp(log(x+3)*log(x+5))     0.197499            5.18736704645       0.000000000000e+00
exp(-x^2) * cos(x^5)       0.112425          -0.469724225313       0.000000000000e+00
-----------------------------------------------------------------------------------------
φ-structural guarantee:
  • Exact through Taylor degree 2N−2 = 20.
  • First possible surviving term: order 2N−1 = 21.
  • Higher Taylor terms are killed superfactorially
    by powers of xᵢ = 1/(F⁺ᵢ!)².
To hide this footer set suppress_guarantee=True in PhiEngineConfig.
=========================================================================================
```
*Exact bitwise agreement on all 6000 digits mpmath produced for every function in the batch.*

φ-Engine never needed more than ~5k internal digits (`max_dps_used = 4959`) to match all 6k digits that mpmath produced for the closed forms.

### Operator Caching

**ALL demo results in this doc compute β-streams on-the-fly.**

Operators call:
```python
betas = self.get_betas("operator_key", fib_count)
```

--->
```python
def get_betas(...):
        key = (kind, fib_count) 
        if key in self._beta_cache:
            return self._beta_cache[key]    # If already cached, return and run 

        if self._config.beta_source == "cert" and self._config.cert_path:   
        # . . .  Load from precomputed certificate on user request . . .

        # -- Default compute mode --
        ops = beta_operator_fractions(fib_count, ...)  # Compute exact rationals
        self._beta_cache[key] = ops[kind]              # Cache and return
        return self._beta_cache[key]
```
Nothing was precomputed — **although it can be**.

*The first call computes and caches the β-stream.
All subsequent functions reuse the cached β-operator and run at full speed.*

---

## φ-Calculus is not matching mpmath, mpmath is converging to φ-Calculus

That said, the φ-Engine is not a competitor to mpmath.  
φ-Calculus does not compete with numerical libraries.  
It competes with numerical calculus itself.  
mpmath is an excellent arbitrary-precision library, and I use it everywhere in this project because it's the **best tool** available for trustworthy, high-precision truth-comparison. Any places where φ-calculus appears to “beat” mpmath are not critiques — they are simply showing what factorial analytic contraction can do.

If the mpmath team ever wants to adopt or hybridize φ-operators, I would be **thrilled to collaborate** and help integrate this research.  
This engine is meant to be a mathematical proof companion, not a bubble wrapped product.  

---

### Practical note: big rationals and I/O limits

The φ-engine always computes **exact rationals** internally.  
The main practical limitation is not the engine, but whatever you use to *read* those rationals:

- Large amplitudes, long intervals, or very high precision can produce huge numerator/denominator pairs.
- Converting those into `mp.mpf` (or printing them as decimals) can become the bottleneck.
- In extreme cases, like d/dx sin(2↑↑5 · x^2), the float/decimal layer will lose precision or underflow/overflow **even though the internal rational is still exact**.  
- φ-Engine computes this absurd function in milliseconds, but there's no way to get a truth comparison.

If you want to experiment with very large amplitudes or ultra-high precision, treat:

- **the engine's rational output as ground truth** (assuming `fib_count` and `max_dps` are high enough),  
and any decimal/`mpmath` conversion as a *view* of that truth, limited by numeric backend.

---

## φ Certificates (My Solution to the I/O Problem I Created)

**φ-certificates let you distribute exact analytic operators (β-streams) with reproducible cryptographic integrity.**

Once the engine started returning *exact* rationals, the bottleneck stopped being calculus and became **I/O**:
how do you store, share, and re-use enormous β-streams without recomputing them or losing precision?

That's what **φ-certificates** are for.

A φ-certificate is a small, deterministic JSON (optionally gzipped for size) file that packages:

* the **exact β-stream** as `[(num, den), ...]` integer pairs (no floats, no rounding),
* the **factorial node structure** (Fibonacci ladder and node formula),
* the **moment law** it satisfies (`"derivative"`, `"integral"`),
* a **canonical SHA-256 hash** over the math payload (type, moment, fibs, node formula, encoding, payload),
* optional **metadata** (timestamp, generator, version, etc).

The workflow is:

* Generate β-streams once from a configured `PhiEngine` (`emit_cert_from_engine`).
* Store them as φ-certs on disk (optionally gzipped).
* Later, **load** them, **verify** the hash and moment laws (`verify_hash`, `verify_moments`),
  and then **use** them directly to drive evaluation on any analytic $f$.

That gives you:

* **Exactness** – β's are stored symbolically as `Fraction`s.
* **Determinism** – canonical JSON encoding → same hash on any machine, any OS.
* **Reproducibility** – anyone can clone the repo, load the cert, and verify both the hash and the moment identities.
* **Separation of concerns** – certificates live on the math side; any numeric error is *only* in the final contraction step (e.g. `mpmath`, IEEE-754).

φ-certificates freeze the operator side of calculus (the β-streams and factorial layers).
The actual contraction against a specific function $f$ still happens at runtime.

### Why Engineers and Cryptographers Care

A φ-certificate is not “just a cache” or “just a convenience.”

It is a **complete, canonical, and cryptographically sealed representation of an analytic operator**:

- exact rational β-streams,
- exact factorial radii,
- exact moment law,
- canonical JSON encoding,
- SHA-256 commitment over the entire mathematical payload.

This makes a φ-certificate a **trustless, platform-independent analytic primitive**.  
Anyone can load the certificate, verify the hash, re-check the moment identities,  
and reproduce the operator exactly on any machine, forever.

It is effectively a deterministic **analytic virtual machine instruction**,  
sealed by a 32-byte hash.

In other words:

> the φ-engine solves calculus; φ-certificates solve **shipping the solution around** without ever re-doing the math.

---

### Quick Certificate Demo 

You can run the certificate example directly from the installed package:
```bash
python -m phi_engine.examples.example_certificates
```

You should see something like this but larger:
```text
✓ Emitted φ-certificate → certs/phi_cert_integral_fib7.json.gz
✓ Verified φ-certificate: certs/phi_cert_integral_fib7.json.gz
{
  "type": "phi_beta/v1",
  "moment": "derivative",
  "fib_count": 5,
  "fibs": [
    1,
    2,
    3,
    5
  ],
  "encoding": "exact_rational",
  "node_formula": "x_i = 1/(F_i!)^2",
  "hash": {
    "alg": "sha256",
    "root": "cf83e5170d9eada1a8f374a5b3637b0f0ddbb3f9394e7f10a578b4fb925ea15f"
  },
  "meta": {
    "author": "Purrplexia",
    "generated": "2025-11-22T22:52:44.171720Z"
  }
}

β_rationals preview (first few):
  i=0: -1/1511895
  i=1: 1/21594
  i=2: -27/9310
  i=3: 6912000000/6892326133

β count: 4
```

---

## Engine Configuration Overview

Engine config object:
```python
@dataclass(frozen=True)
class PhiEngineConfig:
```
| **Category**      | **Key**              | **Purpose**                                                                      |
|-------------------|----------------------|----------------------------------------------------------------------------------|
| **Precision**     | `base_dps`           | Starting decimal precision for backend function evaluations                      |
|                   | `max_dps`            | Hard ceiling on adaptive precision growth                                        |
|                   | `dps_step`           | Step size when increasing mpmath precision                                       |
|                   | `per_term_guard`     | Enable analytic per-layer precision budgeting **(this is OP use it)**            |
|                   | `rtol`, `atol`       | Optional numeric tolerances for safety checks                                    |
|                   | `suppress_guarantee` | Hide the φ-digit lower bound in diagnostic reports                               |
| **Structure**     | `fib_count`          | Number of factorial layers - 1 (`N-1` → kills Taylor terms up to degree `2N−2`)  |
| **Diagnostics**   | `timing`             | Print φ-timing information                                                       |
|                   | `show_error`         | Show error in the diagnostics report if user `error` key in diag dict            |
|                   | `return_diagnostics` | Return diagnostics dictionaries from every engine call                           |
|                   | `display_digits`     | Number of digits to print in reports                                             |
|                   | `header_keys`        | Custom keys to include in batch-report headers                                   |
| **Certification** | `beta_source`        | `"compute"` = build β-streams on demand; `"cert"` = load from φ-certificate      |
|                   | `cert_mode`          | `"emit"` = write certs, `"verify"` = verify, `"require"` = refuse uncertified βs |
|                   | `cert_path`          | Path to certificate file when using `"cert"` mode                                |
|                   | `gzip_cert`          | Compress certificate output as `.json.gz`                                        |
|                   | `cert_meta`          | Optional metadata to embed inside emitted certificates                           |
|                   | `ensure_moments`     | Re-verify all moment identities when loading a φ-certificate (expensive).        |


### Instantiating Configurations

A diagnostic-rich configuration suitable for research and debugging:
```python
from phi_engine import PhiEngine, PhiEngineConfig

cfg = PhiEngineConfig(
    base_dps=300,
    fib_count=10,
    per_term_guard=True,        # analytic precision guard
    timing=True,                # show φ-timings
    show_error=True,            # print "truth" comparison if provided
    return_diagnostics=True,    # return full diagnostics dict
    display_digits=12           # report-visible digits
)

eng = PhiEngine(cfg)
```

### Using Presets:
```python
cfg = PhiEngineConfig.preset_fast()      
# → Very lightweight exploration (6 fibs, modest precision) great for laptops that like digits

cfg = PhiEngineConfig.preset_accurate()
# → research mode (12 fibs, adaptive precision)

cfg = PhiEngineConfig.preset_cert_only(
  "certs/phi_cert_integral_fib12.json.gz")
# → formal verification: requires φ-certificates

cfg = PhiEngineConfig.preset_diagnostics()
# → introspection mode: timing + error + per-term precision budgets
```
All presets accept overrides:
```python
cfg = PhiEngineConfig.preset_accurate(return_diagnostics=True)
```

### 🔒 Security Note:  
`preset_cert_only` locks users out of semantically hacking the input.  
The certificate is **authoritative**.  
```python
    def preset_cert_only(cls, path: str, **kwargs):
        forbidden = {"beta_source", "cert_mode", "cert_path"}
        if any(k in forbidden for k in kwargs):
            raise ValueError("preset_cert_only does not allow overriding certification parameters.")
        # . . .
```
The frozen dataclass protects the data from being tampered with after instantiation;  
the preset protects the data at the time of instantiation.  
**Together they guarantee the engine cannot load uncertified β-streams.**


### Editing Configurations
All configs are immutable; clone them via `with_updates`:
```python
cfg2 = cfg.with_updates(fib_count=12, max_dps=6000)
eng = PhiEngine(cfg2)
```

### 💾 Save / Load

```python
s = cfg.to_json()
cfg_restored = PhiEngineConfig.from_json(s)
```

---

## Diagnostics: what `return_diagnostics=True` actually returns

If `return_diagnostics=True` in your `PhiEngineConfig`, every operator returns:

* `differentiate(...)  → (result, diag)`
* `integrate(...)      → (result, diag)`
* batch mode           → `eng.report(diags, batch=True)` expects a list of these dicts

The engine always populates the following core keys:

```python
diag = {
    "fib_count": fib_count,                         # effective Fibonacci depth used
    "beta_time": beta_time,                         # time spent synthesizing β (0 if cached)
    "terms": ladder_len or len(used_dps_list),      # number of factorial layers actually evaluated
    "used_dps_list": list(used_dps_list),           # per-layer working precision
    "needed_dps_list": list(needed_dps_list),       # per-layer requested precision
    "used_dps_max": max(used_dps_list) or base_dps,
    "used_dps_min": min(used_dps_list) or base_dps,
    "used_dps_avg": avg(used_dps_list) or base_dps,
    "hit_ceiling": any(d >= max_dps for d in used_dps_list),
    "timing_s": self._stop_timer(t0),               # total time for this operator call
    "function": name or "",                         # user-supplied label (e.g. "exp(-x^2)")
}
```

Those are always present whenever diagnostics are enabled.

### Optional / user-populated fields

The engine **does not** automatically stuff everything into the diagnostics dict.
Instead, it *looks for* certain keys if you choose to add them before calling `eng.report(...)`:

* `"result"`

  * Your final value (derivative / integral).
  * Not added by default to avoid duplicating huge rationals or very long decimal prints.

* `"x0"`

  * Evaluation point for differentiation.

* `"interval"`

  * For integration, e.g. `"[0, 1]"`.

* `"error"`

  * Absolute error vs some trusted reference (e.g. closed form or `mp.quad`).
  * If present **and** `show_error=True` in the config, it gets printed in the report.

* `"timing_s_alt_"`

  * Optional timing if you benchmark against any other system.
  * You attach this yourself; the engine will just display what is after the `alt_` it if you include it.
  * Example: desired display `simpson(s)`, correct usage: `"timing_s_alt_simpson"`

* `"operation"`

  * A human-readable tag like `"Differentiation"` or `"Integration"` used in reports, especially in batch mode.

You can also define `"header_keys"` **in the config**, not in `diag`:

```python
cfg = PhiEngineConfig(
    ...,
    header_keys=("global_dps", "num_fibs", "evaluation_point", "max_dps_used"),
)
```

Any key you later inject into each `diag` with that name will be shown in the batch header.

### Typical pattern

```python
res, diag = eng.differentiate(f, x0, name="exp(-x^4)")

truth = f_truth(x0)

diag["result"] = res
diag["x0"] = x0
diag["operation"] = "Differentiation"
diag["error"] = abs(res - truth)

eng.report(diag)
```

For batches:

```python
diags = []
for label, f, f_truth in tests:
    res, diag = eng.differentiate(f, x0, name=label)
    truth = f_truth(x0)
    diag.update({
        "operation": "Differentiation",
        "result": res,
        "truth": truth,
        "error": abs(res - truth),
        "evaluation_point": x0,
        "global_dps": mp.dps,
        "num_fibs": eng.config.fib_count,
    })
    diags.append(diag)

eng.report(diags, batch=True)
```

That's it: no hidden keys, no magic.
The engine guarantees the core fields above; everything else is *your* telemetry layered on top.


---

### Design philosophy

The φ-Engine treats calculus as **execution**, not approximation.  
All numeric behavior emerges from factorial structure and β-moment laws.  
Each coefficient file under `phi_engine/certs/` contains a reproducible “proof”.  

Config → Engine → β-stream contraction → mpmath evaluation → Output

All without grids.  
No points.  
No delta x → 0.  
Just factorial structure.  

This engine turns calculus into an I/O problem:  
the operator side (β-stream, factorial ladder, moment laws) is exact rational math;  
the only error is in your f(x) backend and decimal printing.  
$$
\boxed{\text{Now go solve it!}}
$$

---

### License

- All code: [GPLv3-or-later](https://github.com/Purrplexia/LettersToMyHeroes/blob/main/LICENSE-code)
- All documents: CC BY-SA 4.0

---

### Authorship

This project was written — start to finish — by Alex B (Purrplexia) alone.  
No collaborators, no editors, no institutional support.  
Every idea, proof, and line of code is original work.  

I believe in **freedom of information**: anyone may inspect, reuse, or extend this work under GPLv3-or-later.

---

## Contact / Commentary

Issues can be opened directly in [this repository](https://github.com/Purrplexia/LettersToMyHeroes).
Public peer commentary is greatly encouraged!
Pull requests improving clarity, code efficiency, or validation coverage are also welcome.

Formal inquiries may be directed to [mathsisbeautiful@proton.me](mailto:mathsisbeautiful@proton.me)

---

### 🩷 Support the Work (Optional)

This project will always remain free, open-source, and publicly signed.

If you'd like to support ongoing research and future *Letters*:

- **GitHub Sponsors:** https://github.com/sponsors/Purrplexia
- **ETH (MetaMask):**`0x663D8288b4Aa6F3A72FF4FE67d1a7B080cD5097d`
- **BTC (Electrum):** `bc1q0ntv4zvpdprlxexvz20mf7ajdydmp9ke9ulgep`

Never required — always appreciated. Every dollar goes toward building, teaching,  
and releasing more of this work openly.

I will never gate mathematical truth behind paywalls or NDAs.  
This work belongs to everyone.  
**Forever.**

### *Trust, but verify;*
