How much does knowing the past help predict the future?
Computes conditional entropy H(X_t | X_{t-1}, ..., X_{t-k}) at increasing depths k = 1, 2, 4, 8. If the conditional entropy drops as you add more history, the signal has memory — past values constrain the future. Also includes sample entropy (SampEn), a phase-space regularity measure.
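The depth-k conditional entropy can be estimated with plug-in counts over (context, next-symbol) pairs, using the chain rule H(X_t | C) = H(C, X_t) - H(C). A minimal sketch, assuming 8-level (3-bit) symbols to match the 3.0-bit ceiling in the tables below; `cond_entropy` is an illustrative helper, not the atlas's implementation:

```python
from collections import Counter
from math import log2
import random

def cond_entropy(seq, k):
    """Plug-in estimate of H(X_t | X_{t-1}, ..., X_{t-k}) in bits,
    via the chain rule H(X | C) = H(C, X) - H(C)."""
    joint = Counter(tuple(seq[i - k:i + 1]) for i in range(k, len(seq)))
    ctx = Counter(tuple(seq[i - k:i]) for i in range(k, len(seq)))
    def h(counts):
        n = sum(counts.values())
        return -sum(v / n * log2(v / n) for v in counts.values())
    return h(joint) - h(ctx)

random.seed(0)
noise = [random.randrange(8) for _ in range(50_000)]  # 3-bit white noise
saw = [i % 8 for i in range(50_000)]                  # period-8 sawtooth
print(cond_entropy(noise, 1))  # near 3.0: the past doesn't help
print(cond_entropy(saw, 1))    # ~0: the next value is fully determined
```

Note the estimator is biased low when the context alphabet is undersampled, which is why depth-8 estimates need either a small alphabet or a long sample.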
Total information gain from knowing the past 8 values: H(X) - H(X | past_8). The De Bruijn sequence scores 3.0 (maximally predictable: it is constructed so that every 8-bit pattern appears exactly once, so the next bit is fully determined by the last 8). Hilbert walk (2.90) and sawtooth (2.89) are nearly as predictable. White noise scores 0.0 (the past tells you nothing).
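The De Bruijn claim above can be checked directly on a binary de Bruijn sequence built with the standard FKM construction. A hedged sketch (`de_bruijn` and `information_gain` are illustrative helpers, not the atlas's code); the gain saturates at the 1-bit entropy of a binary alphabet, whereas the atlas reports gain on its own symbol alphabet with a 3.0-bit ceiling:

```python
from collections import Counter
from math import log2

def de_bruijn(k, n):
    """FKM construction of the de Bruijn sequence B(k, n) as a list of ints."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def entropy(counts):
    n = sum(counts.values())
    return -sum(v / n * log2(v / n) for v in counts.values())

def information_gain(seq, k):
    """H(X) - H(X | past_k), via the chain rule H(X | C) = H(C, X) - H(C)."""
    joint = Counter(tuple(seq[i - k:i + 1]) for i in range(k, len(seq)))
    ctx = Counter(tuple(seq[i - k:i]) for i in range(k, len(seq)))
    return entropy(Counter(seq)) - (entropy(joint) - entropy(ctx))

bits = de_bruijn(2, 8) * 8              # cycle B(2, 8) so every window recurs
print(round(information_gain(bits, 8), 6))  # 1.0: the last 8 bits fix the next
```

Every 8-bit context in the cycled sequence has exactly one successor, so the conditional entropy at depth 8 is zero and the gain equals the full marginal entropy.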
Regularity of the phase-space trajectory. Low = self-similar, predictable. High = complex, unpredictable. Pi digits (2.22) and BSL residues (2.21) are the most irregular signals in the atlas — close to the theoretical maximum for byte-valued data. Devil's staircase scores 0.019 (nearly zero — its long constant plateaus create trivially self-similar trajectories). Constants score exactly 0.0.
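Sample entropy can be sketched as the negative log of the ratio between template matches of length m+1 and length m under a Chebyshev tolerance r (here the conventional r = 0.2·σ). A simplified illustration, not the atlas's implementation:

```python
import math
import random

def sample_entropy(x, m=2, r=None):
    """Simplified SampEn(m, r): -ln(A/B), where B counts pairs of length-m
    templates within Chebyshev tolerance r and A does the same for m+1."""
    if r is None:
        mean = sum(x) / len(x)
        r = 0.2 * (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
    def matches(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(
            max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
            for i in range(len(t)) for j in range(i + 1, len(t))
        )
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

random.seed(2)
periodic = [i % 8 for i in range(300)]             # period-8 sawtooth
noisy = [random.random() * 7 for _ in range(300)]  # continuous white noise
print(sample_entropy(periodic))  # low: the trajectory repeats itself
print(sample_entropy(noisy))     # high: few templates recur
```

The periodic signal's matching m-templates almost always extend to matching (m+1)-templates, driving A/B toward 1 and SampEn toward 0; for noise the extra dimension prunes most matches.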
How fast does conditional entropy decrease with depth? The baker map has the steepest positive slope (0.26): each added depth reveals more structure. The De Bruijn sequence has the steepest negative slope (-0.31): it becomes maximally predictable by depth 8, where the conditional entropy collapses. A slope near zero means either unpredictable at every depth (noise) or already fully predicted at depth 1 (simple periodic signals).

cond_entropy_k1 / cond_entropy_k8 — Conditional entropy at depths 1 and 8. White noise scores ~3.0 at both (the full 3 bits of uncertainty; knowing the past doesn't help). Logistic period-3 scores 0.0 at both (the past fully determines the future). The gap between k1 and k8 reveals hidden long-range dependencies: PRNG outputs score identically at k1 and k8 (memoryless), while the baker map drops from 2.8 to 2.3 because its 2D structure creates long-range predictability invisible at lag 1.
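One plausible reading of the decay rate is a least-squares slope fitted to the conditional-entropy profile over the depth schedule k = 1, 2, 4, 8. A minimal sketch under that assumption (`cond_entropy` and `decay_slope` are hypothetical helpers, and the atlas's exact fit and sign convention may differ); the second signal plants a hidden lag-5 dependence that only deeper contexts expose, mirroring the k1-versus-k8 gap described above:

```python
from collections import Counter
from math import log2
import random

def cond_entropy(seq, k):
    """Plug-in H(X_t | X_{t-1..t-k}) in bits, via H(X | C) = H(C, X) - H(C)."""
    joint = Counter(tuple(seq[i - k:i + 1]) for i in range(k, len(seq)))
    ctx = Counter(tuple(seq[i - k:i]) for i in range(k, len(seq)))
    def h(counts):
        n = sum(counts.values())
        return -sum(v / n * log2(v / n) for v in counts.values())
    return h(joint) - h(ctx)

def decay_slope(seq, depths=(1, 2, 4, 8)):
    """Least-squares slope of H(X_t | past_k) across the depth schedule,
    negated so a positive value means entropy falls as depth grows.
    A hypothetical stand-in for the atlas's entropy_decay_rate."""
    ys = [cond_entropy(seq, k) for k in depths]
    xs = list(range(len(ys)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den

random.seed(3)
bits = [random.randrange(2) for _ in range(60_000)]  # memoryless noise
lagged = [random.randrange(2) for _ in range(5)]
for _ in range(60_000):                              # hidden lag-5 copy, 90%
    lagged.append(lagged[-5] if random.random() < 0.9 else random.randrange(2))

print(decay_slope(bits))    # ~0: no structure at any depth
print(decay_slope(lagged))  # clearly positive: depth 8 exposes the lag-5 memory
```

Binary symbols keep the depth-8 contexts (256 of them) well sampled at this length, so the slope reflects real structure rather than estimator bias.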
| Source | Domain | cond_entropy_k1 |
|---|---|---|
| Wichmann-Hill | binary | 2.9976 |
| Arnold Cat Map | chaos | 2.9976 |
| White Noise | noise | 2.9976 |
| ··· | | |
| Constant 0xFF | noise | 0.0000 |
| Constant 0x00 | noise | 0.0000 |
| Logistic r=3.83 (Period-3 Window) | chaos | 0.0000 |
| Source | Domain | cond_entropy_k8 |
|---|---|---|
| MINSTD (Park-Miller) | binary | 2.9997 |
| XorShift32 | binary | 2.9997 |
| Baker Map | chaos | 2.9997 |
| ··· | | |
| Constant 0xFF | noise | 0.0000 |
| Constant 0x00 | noise | 0.0000 |
| Logistic r=3.83 (Period-3 Window) | chaos | 0.0000 |
| Source | Domain | entropy_decay_rate |
|---|---|---|
| Baker Map | chaos | 0.2637 |
| Sunspot Number | astro | 0.2015 |
| Noisy Period-2 | chaos | 0.1343 |
| ··· | | |
| De Bruijn Sequence | number_theory | -0.3130 |
| Champernowne | number_theory | -0.2123 |
| Collatz Trajectory | number_theory | -0.1853 |
| Source | Domain | Total information gain |
|---|---|---|
| De Bruijn Sequence | number_theory | 3.0000 |
| Hilbert Walk | exotic | 2.8955 |
| Sawtooth Wave | waveform | 2.8851 |
| ··· | | |
| White Noise | noise | 0.0000 |
| Constant 0xFF | noise | 0.0000 |
| Pink Noise | noise | 0.0000 |
| Source | Domain | sample_entropy |
|---|---|---|
| Pi Digits | number_theory | 2.2151 |
| BSL Residues | number_theory | 2.2132 |
| Neural Net (Dense) | binary | 2.2121 |
| ··· | | |
| Constant 0xFF | noise | 0.0000 |
| Constant 0x00 | noise | 0.0000 |
| Devil's Staircase | exotic | 0.0191 |
Predictability's sample_entropy and entropy_decay_rate were key discriminators in the negative re-evaluation study (2026-03-05). The standard map (44 significant metrics), Arnold cat map (19), and GARCH (29) were reclassified from negative to positive detections largely on the strength of Predictability metrics. Sample entropy separates deterministic chaos (moderate, ~1.5) from true noise (high, ~2.2) and periodicity (low, <0.5).