Closing theoretical companion to SESSION_FINDINGS_2026_04_11_12.md.
The Platonic object is not the compressor, not the swarm, not the Hamiltonian, and not the state space. It is the weighted space of admissible histories with boundary conditions. Everything else is a chart on it.
Or, harder:
The Platonic object is the history space; everything else is a chart.
𝔥 = (Γ_adm, ∂_in, ∂_out, A, C)
where Γ_adm is the set of admissible histories (those passing the guards: U, R, and every applicable Hoare-style corridor), ∂_in and ∂_out the boundary maps, A the action, and C the coupled field.

The Lean symbolic side of Γ_adm, ∂_in, ∂_out, A is already in Proof/Scattering.lean:

- `History X Sym U E`
- `WellFormed D γ` + `Admissible D g γ`
- `inboundary γ`, `outboundary γ`
- `action L γ` with `action_nil`, `action_cons`, `action_append`

What the history space adds is the totality: not one γ, but the indexed family of all admissible γ with fixed boundary conditions.
Z(λ; b^in, b^out) = Σ_{γ ∈ Γ_adm : ∂_in γ = b^in, ∂_out γ = b^out} exp(−λ · A(γ))
At λ = 1 this is the unnormalised scattering weight already proven correct for the deterministic codec case in research/tlc/scattering.py. As a function of λ it is a spectral object: its asymptotics encode entropy rate, its poles encode phase transitions, its derivatives encode expected action.
When Γ_adm collapses to a single admissible path (a deterministic codec with fixed (b^in, b^out) by encode/decode correctness), Z is a single exp(−A). When the guards allow branching — nondeterministic routing, stochastic corrections, MCTS lookahead — the sum has real content.
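The collapse-vs-branching distinction can be made concrete on a toy slice of Γ_adm. The sketch below (a hypothetical two-symbol alphabet, toy guard, and toy bit-flip action, not the session's codec) shows Z reducing to a single exp(−A) when the guard forces one history, and acquiring multiple terms once branching is allowed:

```python
import math
from itertools import product

def admissible(history, b_in, b_out, branching):
    """Toy guard on histories = tuples over {0, 1} with fixed endpoints."""
    if history[0] != b_in or history[-1] != b_out:
        return False
    if not branching:
        # deterministic corner: interior symbols are forced to equal b_in
        return all(s == b_in for s in history[:-1])
    return True

def action(history):
    """Toy action: one bit of cost per symbol change along the history."""
    return sum(1 for a, b in zip(history, history[1:]) if a != b)

def Z(lam, b_in, b_out, length=4, branching=True):
    """Sum exp(-lam * A) over the admissible slice with these boundaries."""
    return sum(math.exp(-lam * action(h))
               for h in product((0, 1), repeat=length)
               if admissible(h, b_in, b_out, branching))

# Deterministic corner: one admissible history (0,0,0,1), so Z = exp(-1).
z_det = Z(1.0, 0, 1, branching=False)
# Branching: four admissible histories contribute.
z_branch = Z(1.0, 0, 1, branching=True)
```

With branching off, the only admissible history is the forced one, so Z is a single Boltzmann factor; with branching on, Z strictly exceeds it because every extra admissible history adds a positive term.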
Each file in research/tlc/ and each theorem in Proof/ is a LOCAL description of 𝔥. None of them is 𝔥 itself.
| Artefact | Chart on 𝔥 | What it exposes |
|---|---|---|
| `tlc_compressor.py`, `ArithCoder.Model` | single deterministic γ | boundary coding of one history |
| `dual_compressor.py` | the same γ in ℚ[ε]/(ε²) | multiplicative action form |
| `casimir_compressor.py` | families of γ with different guard structure | ΔL as a comparison of two charts |
| `atlas_spectrum.py`, `RepresentationalAtlas.lean` | ladder of `FiniteFamily` + advantage curves | the measured projection of 𝔥 onto rungs |
| `hyperbolic_compressor.py` | local chart where branching is geometrically cheap | hyperbolic coordinates |
| `open_kernel.py` | `EncoderKernel` / `DecoderKernel` | the same γ with swapped ports |
| `scattering.py`, `Scattering.lean` | enumerator + Σ exp(−A) over a finite slice of Γ_adm | the sum-over-histories directly |
| `swarm_model.py`, `SwarmModel` | measure on a local chart of Γ_adm | Bayesian filter over charts |
| `charged_swarm.py` | the same measure + field coupling | ρ → φ → drift, adds topological charge |
| `field_coupled_swarm.py` | the same + field feeds into emission | C feeding back into A |
| `guarded_mcts.py` | PUCT exploration of a subtree of Γ_adm | search within the history space |
| `thermal_swarm.py` | Z(λ) at various λ | finite-temperature slices |
| `quantum_swarm.py` | Σ exp replaced with \|Σ e^{iΦ}\|² | Born-rule version of Z |
| `solomonoff_swarm.py` | 2^(−L) prior over generators of histories | universal-prior sum |
| `adversarial_swarm.py` | saddle point of Z over boundary data | minimax on 𝔥 |
| `Proof.ArithCoder.Model.encode_decode_id` | γ uniquely determined by (∂_in, ∂_out) | the deterministic-codec corner |
| `Proof.RepresentationalAtlas` | `advantageCurve`, `coordinateSusceptibility` | ladder-relative projections of Z |
| `Proof.CasimirBridge.casimirRatio_lt_one_iff_width_lt` | ordering of charts ↔ ordering of Z-ratios | the ℚ↔ℕ handshake |
In this reading, the whole session’s output is one book of charts. The book is finite and the reader can pick any chart for any local measurement, but the object described is not any of the charts. It is their colimit.
The stance: stop asking “what is the next state?” and ask “how does the space of admissible histories count?” The questions that become sensible are:
Dirichlet series of admissible histories.
D(s; b^in, b^out) = Σ_γ A(γ)^(−s)
Well-defined when A(γ) has discrete values (integer bit counts or bounded ℚ grids). Its abscissa of convergence encodes the growth rate of admissible histories by action. For a codec on a bounded alphabet this number is the byte-conditional entropy rate.
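On any finite slice with integer actions, the partial Dirichlet sums are directly computable. A minimal sketch, reusing the toy binary histories and a bit-flip action shifted to stay ≥ 1 (both hypothetical, not the session's codec), showing that larger s damps high-action histories harder:

```python
from itertools import product

def action(h):
    """Toy integer action: 1 + number of symbol changes (kept >= 1
    so that A(gamma)^(-s) is well-defined)."""
    return 1 + sum(1 for a, b in zip(h, h[1:]) if a != b)

def dirichlet_D(s, length=8):
    """Partial sum of D(s) = sum over histories of A(gamma)^(-s),
    over all binary histories of a fixed length."""
    return sum(action(h) ** (-s) for h in product((0, 1), repeat=length))

d2 = dirichlet_D(2.0)   # moderate damping of high-action terms
d4 = dirichlet_D(4.0)   # stronger damping, so a smaller sum
```

At s = 0 the sum simply counts the slice (here 2^8 = 256 histories), which is the sense in which the abscissa of convergence tracks the growth rate of admissible histories by action.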
Euler product over local scattering vertices. Each step (x_t, b^in_t, u_t, ε_t) → (x_{t+1}, b^out_{t+1}) is a local vertex. If the guards factor through local-at-each-step constraints, then Z factorises as a product over steps, with each factor a finite sum. That is the Euler side of the history-zeta:
Z = ∏_{step} Σ_{u, ε valid here} exp(−L(·))

Modular / dual symmetry between encode and decode. The open-kernel port-swap already operational in open_kernel.py is a Z₂ symmetry: relabel boundary in ↔︎ out and run the same dynamics. In the generating function this should be an involution Z(λ; b^in, b^out) ↔ Z(λ; b^out, b^in), a functional equation for the coder/decoder pair. The Lean statement candidate:
casimirRatio m₁ m₂ = (casimirRatio m₂ m₁)⁻¹
(trivially true from the definition, but semantically: encoder and decoder are charts of each other.)
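The Euler factorisation itself is easy to verify by brute force on a toy step-local model (hypothetical per-step choices and local cost `local_cost`, standing in for the (u, ε) vertex data): when the weight of a history is a product of per-step factors and the guard is step-local, the global sum over all histories equals the product over steps of finite local sums.

```python
import math
from itertools import product

CHOICES = (0, 1, 2)   # toy per-step choices, standing in for valid (u, eps)
STEPS = 5

def local_cost(step, u):
    # hypothetical step-local cost L(step, u); any such function works
    return 0.3 * u + 0.1 * step

def Z_bruteforce(lam):
    """Global sum over all histories = all sequences of per-step choices."""
    return sum(
        math.exp(-lam * sum(local_cost(t, u) for t, u in enumerate(seq)))
        for seq in product(CHOICES, repeat=STEPS)
    )

def Z_euler(lam):
    """Euler side: product over steps of a finite local sum."""
    z = 1.0
    for t in range(STEPS):
        z *= sum(math.exp(-lam * local_cost(t, u)) for u in CHOICES)
    return z
```

The two agree to floating-point precision because exp(−λ Σ_t L_t) = ∏_t exp(−λ L_t) and the sum over independent per-step choices distributes over the product; a guard that couples distant steps would break exactly this factorisation.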
Poles of Z(λ). Thermal-swarm style scan of λ = 1/T on a non-stationary corpus. A pole in (extrapolated) Z(λ) at some λ_c means a phase transition of the computation — a regime switch. The session’s thermal_swarm.py measured no peak on stationary 1 KB because one particle already dominates; the prediction is that on a regime-mixed corpus Z(λ) will show real critical structure.
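The λ-scan itself is cheap once a slice of actions is enumerated: expected action is −∂_λ log Z and its variance is ∂²_λ log Z, so a peak in the variance across λ is the finite-size signature of critical structure. A sketch on a hypothetical regime-mixed action profile (a few cheap histories plus a large cluster of expensive ones, not measured session data):

```python
import math

def gibbs_stats(lam, actions):
    """Mean and variance of the action under weights exp(-lam * A)."""
    w = [math.exp(-lam * a) for a in actions]
    Z = sum(w)
    mean = sum(wi * a for wi, a in zip(w, actions)) / Z
    var = sum(wi * (a - mean) ** 2 for wi, a in zip(w, actions)) / Z
    return mean, var

# Hypothetical regime-mixed slice: 10 cheap histories, 1000 expensive ones.
actions = [1.0] * 10 + [8.0] * 1000

# Scan lambda = 0.1 .. 3.9 and locate the variance peak (the crossover).
peak_lam = max((k / 10 for k in range(1, 40)),
               key=lambda lam: gibbs_stats(lam, actions)[1])
```

On a single-regime slice the variance curve is flat and featureless, which matches the null result on the stationary 1 KB corpus; the mixed profile above produces a clear peak where the two regimes exchange dominance.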
Functional equation for the codec. A dual relation like
Z(λ; b^in, b^out) = f(λ) · Z(1 − λ; b^out, b^in)
would be the sharp form of encoder-decoder duality. Speculative; no concrete candidate yet.
The preceding sessions produced a long list of related artefacts: the verified codec (decode_message_correct), the scattering sum (scattering.py), the representational ladder (RepresentationalAtlas.lean), the swarm measure, the hyperbolic chart, the Casimir shifts. None of these is the underlying object. Each lives inside the others in some way: the scattering sum contains the codec as its deterministic corner; the swarm measure projects onto the scattering sum; the ladder projects the swarm onto its coarse costs; the hyperbolic chart gives a local parametrisation of the branching tree; the Casimir shifts are comparisons between two choices of bulk geometry.
The smallest object that contains all of them as projections is the history space 𝔥. It is the only thing in this session that is not a chart on something else.
Three types of inquiry that become well-defined once 𝔥 is the target:
Counting instead of sampling. The MCTS tree search, the scattering sum, and the Solomonoff prior are all already sums over sub-regions of Γ_adm. Any quantitative improvement comes from computing those sums more cleanly.
Boundary fibres as the natural unit of analysis. A Casimir measurement is a comparison of two boundary fibres (two choices of ∂_in/∂_out structure). A compression ratio is the action of one γ normalised by the log-size of one fibre. An entropy rate is the log-growth of fibres.
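A toy computation makes the fibre-normalised reading of compression ratio concrete. The sketch below (hypothetical binary histories and bit-flip action again, not the session's codec) enumerates one boundary fibre, selects the min-action γ in it, and normalises that action by the log-size of the fibre:

```python
import math
from itertools import product

def fibre(b_in, b_out, length=6):
    """Boundary fibre: all binary histories with the given fixed endpoints."""
    return [h for h in product((0, 1), repeat=length)
            if h[0] == b_in and h[-1] == b_out]

def action(h):
    """Toy action: number of symbol changes along the history."""
    return sum(1 for a, b in zip(h, h[1:]) if a != b)

F = fibre(0, 1)
gamma = min(F, key=action)                  # the selected (min-action) history
ratio = action(gamma) / math.log2(len(F))   # action per bit of fibre entropy
```

Here the fibre has 2^4 = 16 elements (four free interior symbols), the cheapest history flips exactly once, and the ratio 1/4 says the selected γ spends one bit of action against four bits of fibre entropy.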
Asymptotic spectral data. Instead of asking “does this compressor beat gzip at 10M?”, ask “what is the asymptotic leading singularity of Z(λ) at λ → λ_c?” The former is a measurement on one chart; the latter is an invariant of 𝔥 itself.
Computation       = choice or sum over γ ∈ Γ_adm
Execution         = min-action γ ∈ Γ_adm
Generation        = sample from e^(−A) / Z
Compression       = boundary coding of a selected γ
Physics-analogy   = interpretation of (Γ_adm, A) as open scattering
Training          = parameter tuning so data histories sit low in A
Search (MCTS, …)  = partial enumeration of Γ_adm
Field dynamics    = C evolving through the history, feeding back into A
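The Generation row is directly executable on any finite slice: draw γ with probability e^(−A(γ)) / Z. A minimal sketch over a hypothetical four-history slice with a toy popcount action (standard-library sampling, nothing from the session code):

```python
import math
import random

def sample_history(histories, action, rng):
    """Draw one gamma with probability exp(-A(gamma)) / Z over a finite slice."""
    weights = [math.exp(-action(h)) for h in histories]
    return rng.choices(histories, weights=weights, k=1)[0]

histories = ["00", "01", "10", "11"]
action = lambda h: h.count("1")   # toy action: number of 1-bits

# Repeated draws concentrate on low-action histories, as e^(-A)/Z dictates.
draws = [sample_history(histories, action, random.Random(i)) for i in range(200)]
```

Low-action histories dominate the sample ("00" carries weight 1 against e^(−2) for "11"), which is the Generation = Gibbs-sampling reading of the dictionary in miniature.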
And a final compact formalism:
Z(λ; b^in, b^out) = Σ_{γ ∈ Γ_adm}
exp(−λ A(γ))
1[∂_in γ = b^in]
1[∂_out γ = b^out]
The session’s job was to build one concrete coordinate system on this object. The next session’s job — if taken this direction — is to study 𝔥 itself: its Dirichlet series, its Euler factorisation, its dualities, its spectral data. The arithmetic of admissible histories.
State, field, swarm, charge, code, and action are all coordinate systems on one history object.
And the compact form from which the whole session unfolds:
The Platonic object is the space of admissible histories weighted by their action.