Predictability

Conditional entropy, memory depth, sample entropy
dynamical · information · 5 metrics

What It Measures

How much does knowing the past help predict the future?

Computes conditional entropy H(X_t | X_{t-1}, ..., X_{t-k}) at increasing depths k = 1, 2, 4, 8. If the conditional entropy drops as you add more history, the signal has memory — past values constrain the future. Also includes sample entropy (SampEn), a phase-space regularity measure.
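The depth sweep can be sketched with plug-in (histogram) entropy estimates. This is a minimal illustration, not the atlas's implementation: it assumes a coarsely quantized signal (binary noise and an 8-level ramp below), because a naive histogram over byte-valued 9-grams would be hopelessly undersampled.

```python
from collections import Counter
from math import log2
import random

def block_entropy(xs, k):
    """Plug-in estimate of the joint entropy of length-k blocks, in bits."""
    counts = Counter(tuple(xs[i:i + k]) for i in range(len(xs) - k + 1))
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

def conditional_entropy(xs, k):
    """H(X_t | X_{t-1}, ..., X_{t-k}) = H(k+1-block) - H(k-block)."""
    return block_entropy(xs, k + 1) - block_entropy(xs, k)

random.seed(0)
noise = [random.randrange(2) for _ in range(10_000)]   # memoryless coin flips
sawtooth = [i % 8 for i in range(10_000)]              # period-8 ramp

for k in (1, 2, 4, 8):
    print(k, round(conditional_entropy(noise, k), 3),
             round(conditional_entropy(sawtooth, k), 3))
```

The noise column stays near 1 bit at every depth (the past tells you nothing), while the sawtooth drops to ~0 already at k = 1 (the previous value determines the next).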

Metrics

excess_predictability

Total information gain from knowing the past 8 values: H(X) - H(X | past_8). De Bruijn sequence scores 3.0 (maximally predictable — it's constructed so every 8-bit pattern appears exactly once, making the next bit deterministic given the last 7). Hilbert walk (2.90) and sawtooth (2.89) are nearly as predictable. White noise scores 0.0 (the past tells you nothing).
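The de Bruijn property is easy to verify directly. The sketch below uses the standard FKM (Lyndon-word) algorithm to build the binary order-8 de Bruijn cycle and checks that every 8-bit window occurs exactly once, which is what makes the next symbol deterministic once enough history is known. (The atlas's de Bruijn source may be constructed differently; this only demonstrates the property.)

```python
def de_bruijn(k, n):
    """FKM algorithm: the lexicographically least de Bruijn sequence B(k, n),
    built by concatenating Lyndon words whose length divides n."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

seq = de_bruijn(2, 8)                    # binary, order 8: 256 bits
cyclic = seq + seq[:7]                   # wrap around to close the cycle
windows = [tuple(cyclic[i:i + 8]) for i in range(256)]
assert len(set(windows)) == 256          # every 8-bit pattern exactly once
```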

sample_entropy

Regularity of the phase-space trajectory. Low = self-similar, predictable. High = complex, unpredictable. Pi digits (2.22) and BSL residues (2.21) are the most irregular signals in the atlas — close to the theoretical maximum for byte-valued data. Devil's staircase scores 0.019 (nearly zero — its long constant plateaus create trivially self-similar trajectories). Constants score exactly 0.0.
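A brute-force SampEn sketch (O(n²), fine for short signals). It assumes the common m = 2, r = 0.2·σ parameterization, which may not match the atlas's exact settings:

```python
import math
import random

def sample_entropy(xs, m=2, r=None):
    """SampEn(m, r) = -ln(A/B): B counts template pairs of length m that match
    within Chebyshev tolerance r; A counts the same pairs extended to length
    m+1. Self-matches are excluded by iterating over i < j."""
    n = len(xs)
    if r is None:
        mean = sum(xs) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    def matches(k):
        hits = 0
        for i in range(n - m):           # same template count for both lengths
            for j in range(i + 1, n - m):
                if max(abs(a - b) for a, b in zip(xs[i:i + k], xs[j:j + k])) <= r:
                    hits += 1
        return hits
    B, A = matches(m), matches(m + 1)
    return -math.log(A / B) if A and B else float("inf")

period8 = [i % 8 for i in range(300)]           # self-similar -> SampEn near 0
random.seed(1)
noise = [random.random() for _ in range(300)]   # irregular -> high SampEn
print(sample_entropy(period8), sample_entropy(noise))
```

Every length-2 match in the periodic ramp extends to a length-3 match, so A/B = 1 and SampEn collapses to zero, exactly the "trivially self-similar trajectories" effect described for the devil's staircase.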

entropy_decay_rate

How fast does conditional entropy decrease with depth? The baker map has the steepest positive slope (0.26): it reveals more structure at each depth. De Bruijn has the steepest negative slope (-0.31): it becomes maximally predictable at depth 7, where the conditional entropy collapses. A value near zero means either unpredictable at every depth (noise) or already fully predicted at depth 1 (simple periodic signals).

cond_entropy_k1 / cond_entropy_k8

Conditional entropy at depths 1 and 8. White noise scores ~3.0 at both (maximal uncertainty on this scale; knowing the past doesn't help). Logistic period-3 scores 0.0 at both (the past fully determines the future). The gap between k1 and k8 reveals hidden long-range dependencies: PRNG outputs score identically at k1 and k8 (memoryless), while the baker map drops from 2.8 to 2.3 (its 2D structure creates long-range predictability that is invisible at lag 1).
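Combining the depth sweep into the two summary numbers might look like the sketch below. The depth grid, the least-squares fit, and the slope's sign convention are all assumptions about the atlas's conventions (flagged in the comments), so exact values will differ; it reuses the plug-in block-entropy estimate on a coarsely quantized signal.

```python
from collections import Counter
from math import log2

def block_entropy(xs, k):
    """Plug-in joint entropy of length-k blocks, in bits."""
    counts = Counter(tuple(xs[i:i + k]) for i in range(len(xs) - k + 1))
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

def predictability_summary(xs, depths=(1, 2, 4, 8)):
    """excess_predictability = H(X) - H(X | past max-depth);
    decay rate = negated least-squares slope of H(X | past k) over the depth
    index (the sign and spacing here are assumptions, not the atlas's spec)."""
    h = [block_entropy(xs, k + 1) - block_entropy(xs, k) for k in depths]
    excess = block_entropy(xs, 1) - h[-1]
    idx = range(len(h))
    xbar, ybar = sum(idx) / len(h), sum(h) / len(h)
    slope = (sum((i - xbar) * (v - ybar) for i, v in zip(idx, h))
             / sum((i - xbar) ** 2 for i in idx))
    return excess, -slope

sawtooth = [i % 8 for i in range(4096)]   # fully predicted already at depth 1
excess, decay = predictability_summary(sawtooth)
print(excess, decay)
```

For the sawtooth, the excess comes out near the 3-bit ceiling while the decay rate sits near zero, matching the "already fully predicted at depth 1" case.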

Atlas Rankings

cond_entropy_k1
| Source | Domain | Value |
|---|---|---|
| Wichmann-Hill | binary | 2.9976 |
| Arnold Cat Map | chaos | 2.9976 |
| White Noise | noise | 2.9976 |
| ··· | ··· | ··· |
| Constant 0xFF | noise | 0.0000 |
| Constant 0x00 | noise | 0.0000 |
| Logistic r=3.83 (Period-3 Window) | chaos | 0.0000 |
cond_entropy_k8
| Source | Domain | Value |
|---|---|---|
| MINSTD (Park-Miller) | binary | 2.9997 |
| XorShift32 | binary | 2.9997 |
| Baker Map | chaos | 2.9997 |
| ··· | ··· | ··· |
| Constant 0xFF | noise | 0.0000 |
| Constant 0x00 | noise | 0.0000 |
| Logistic r=3.83 (Period-3 Window) | chaos | 0.0000 |
entropy_decay_rate
| Source | Domain | Value |
|---|---|---|
| Baker Map | chaos | 0.2637 |
| Sunspot Number | astro | 0.2015 |
| Noisy Period-2 | chaos | 0.1343 |
| ··· | ··· | ··· |
| De Bruijn Sequence | number_theory | -0.3130 |
| Champernowne | number_theory | -0.2123 |
| Collatz Trajectory | number_theory | -0.1853 |
excess_predictability
| Source | Domain | Value |
|---|---|---|
| De Bruijn Sequence | number_theory | 3.0000 |
| Hilbert Walk | exotic | 2.8955 |
| Sawtooth Wave | waveform | 2.8851 |
| ··· | ··· | ··· |
| White Noise | noise | 0.0000 |
| Constant 0xFF | noise | 0.0000 |
| Pink Noise | noise | 0.0000 |
sample_entropy
| Source | Domain | Value |
|---|---|---|
| Pi Digits | number_theory | 2.2151 |
| BSL Residues | number_theory | 2.2132 |
| Neural Net (Dense) | binary | 2.2121 |
| ··· | ··· | ··· |
| Constant 0xFF | noise | 0.0000 |
| Constant 0x00 | noise | 0.0000 |
| Devil's Staircase | exotic | 0.0191 |

When It Lights Up

Predictability's sample_entropy and entropy_decay_rate were key discriminators in the negative re-evaluation study (2026-03-05). Standard map (44 significant metrics), Arnold cat (19), and GARCH (29) were reclassified from negative to positive detections largely on Predictability metrics. Sample entropy distinguishes deterministic chaos (moderate, ~1.5) from true noise (high, ~2.2) and periodicity (low, <0.5).
