Full-pattern reconstruction from partial measurement. Encode a pattern into a polycontextural interference field, measure only a subset of qubits, and reconstruct the full pattern—a distinct architectural layer for distributed, holographic-style information processing.
Quantum-native task framing →
How we present results: prepare & probe a hologram in Hilbert space; NISQ noise vs task definition.
QPC-SWR v1 executive results →
Structure witness & recovery on Fez: plain-language verdict vs holographic line.
The holographic memory benchmark demonstrates a distinct QPC capability: information is encoded in a distributed way across qubits (multi-context phases and spreading), then only part of the system is measured (e.g. 16 of 32 qubits). From that partial readout we reconstruct the full pattern. This mirrors optical holography, where a fragment of the hologram can still reconstruct the full image.
Generate a binary pattern, encode it into a QPC circuit (superposition + phase encoding + spreading layers), then measure only k of N qubits on real hardware. From the k-bit outcome distribution, compute marginals and infer the full N-bit pattern. Compare reconstructed to original; report full, observed-only, and unobserved-only accuracy.
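For concreteness, here is a minimal sketch of such an encoding circuit in Qiskit. The layer structure (superposition + pattern phases, CX spreading chain, second phase layer, partial measurement) follows the description above; the function name and the specific rotation angles are illustrative assumptions, not the exact implementation in qpc_holographic_memory.py.

```python
# Minimal sketch of the three-layer QPC encoding described above.
# The function name and rotation angles are illustrative, not the script's own.
import numpy as np
from qiskit import QuantumCircuit

def qpc_holographic_circuit(pattern, k):
    """Encode an N-bit pattern, then measure only the first k qubits."""
    n = len(pattern)
    qc = QuantumCircuit(n, k)
    # Contexture 0: global superposition, pattern written as phases
    qc.h(range(n))
    for i, bit in enumerate(pattern):
        qc.rz((np.pi / 2) * (2 * bit - 1), i)  # illustrative: bit -> +/- pi/2 phase
    qc.barrier()
    # Contexture 1: spreading via nearest-neighbor entanglement (CX chain) + RZ
    for i in range(n - 1):
        qc.cx(i, i + 1)
    for i in range(n):
        qc.rz(np.pi / 4, i)                    # illustrative angle
    qc.barrier()
    # Contexture 2: second phase layer (polycontextural interference)
    for i in range(n):
        qc.rz(np.pi / 8, i)                    # illustrative angle
    qc.barrier()
    # Partial readout: only k of N qubits are measured
    qc.measure(range(k), range(k))
    return qc
```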
Standard benchmarks measure all qubits. Here we show that QPC can support a different regime: partial observation with full reconstruction. That is a distinct architectural layer—distributed storage and holographic-style recovery—aligned with research directions in quantum holography, associative memory, and distributed representation (including ideas from brain microtubule and interference-based memory models).
The same idea—encode once, read partially, reconstruct fully—applies wherever information is stored in a distributed way and must be recovered from incomplete or partial observation.
In optical holography, a 3D scene is encoded into an interference pattern on a plate. Even a fragment of the hologram can reconstruct the full image when illuminated. Here we do the computational analog: the pattern is encoded into the quantum interference field; we measure only a subset of qubits (a “fragment” of the full state). The decoder recovers the full pattern from that partial signal. So the test is a proof of concept for any application where full information must be recoverable from partial readout—imaging, compressed sensing, or redundant distributed storage.
In Hopfield-like associative memory, patterns are stored in a distributed way across many units; presenting a partial or noisy cue can recall the full pattern. The holographic benchmark implements a similar idea in a QPC circuit: the pattern is spread across qubits via phases and entanglement; partial measurement (a “cue”) is used to reconstruct the full pattern. So the capability is relevant for associative recall, content-addressable memory, or any setting where “full state from partial view” is required.
Theories of quantum cognition and brain microtubule models propose that neural information can be represented in a distributed, interference-based way—closer to holographic storage than to local registers. The QPC holographic benchmark is a computational analog: it shows that a quantum circuit can store a pattern in a distributed manner and recover it from partial observation. So it fits the research direction of quantum-inspired or quantum-supported distributed representation and holographic information processing.
In sensor or agent networks, you often have access only to a subset of nodes (failed links, limited bandwidth, or privacy). The question is: can we still infer the full system state? The benchmark demonstrates that QPC can encode a full pattern and then “observe” only part of the system (k qubits) and still reconstruct the whole. So it is a proof of concept for full-state inference from partial sensors when the encoding is distributed and interference-based.
In holographic storage, redundancy means that damage to part of the medium does not destroy the stored information—the rest can still reconstruct it. Here, “measuring only k qubits” simulates having access to only part of the stored state. If reconstruction accuracy remains meaningful, the architecture supports robustness to partial loss or partial access, useful for fault-tolerant or distributed memory designs.
Bottom line: The holographic memory test shows that QPC supports a distinct capability layer: distributed encoding and full-pattern reconstruction from partial measurement. The same idea applies to holography-style imaging, associative memory, quantum-inspired distributed representation, sensor-network inference, and robust storage.
How the holographic computation maps to a model of tubulins and microtubules inside neurons—with Kenograms, Contextures, entanglement, and transitions between them.
The QPC holographic script is a computational analog of a possible holographic process. Here we explain how you can interpret and use it in a brain-inspired model where information is stored in tubulins and microtubules (e.g. as in quantum cognition and microtubule-based memory theories), using the QPC notions of Kenograms (contextual units / state carriers) and Contextures (distinct logical layers). We map each step of the script to this model and clarify entanglement and transitions between contextures. This is a theoretical and computational mapping—not a claim that the brain runs this exact circuit.
Inside neurons, microtubules are cylindrical polymers made of tubulin subunits. In some theories (e.g. Orch-OR, quantum cognition), tubulins can occupy different conformational or dipole states and may sustain quantum coherence over short times and distances. Information is then not stored in a single tubulin but distributed across many tubulins along a microtubule (and possibly across microtubules). The system behaves more like a hologram: the “pattern” is written into the collective phase and entanglement of many units, so that partial observation (e.g. only some tubulins coupled to the environment or “read” by downstream biochemistry) could in principle still allow reconstruction of the full pattern. The holographic script implements this idea in a minimal quantum circuit.
Kenograms (in QPC) are the minimal contextual units that carry state and participate in logical operations—here they correspond to sites that can hold a binary or phase state. In the script, each qubit is one such site (N qubits = N sites). In the tubulin model, each tubulin (or each lattice site along a microtubule) is one Kenogram: it can be in different states (e.g. 0/1 or phase-encoded), and the pattern is the initial configuration of these states (which tubulins are “on” or “phase-marked”).
Contextures are distinct logical or operational layers that coexist and interact. In the script there are three contextural layers (three “contexts”), which map to the tubulin/microtubule model as follows:

- Contexture 0 (kenogrammatic): initial state preparation. The pattern is written as phases (or conformational/dipole states) on the tubulins, after a global superposition (all sites in a coherent mix).
- Contexture 1 (spreading / morphogrammatic): information is spread along the microtubule by entangling neighboring tubulins (the chain of CX gates in the script). This creates correlations so that the pattern is no longer local but distributed across the chain.
- Contexture 2: a second phase layer adds another contextual interference; the full state is then a polycontextural interference field.

Transitions between contextures are the passage from one layer to the next (e.g. from “pattern written” to “spreading” to “second phase”). In the circuit these are the barriers between layers; in the biological analogy they can be read as temporal or structural transitions (e.g. conformational waves or coupling changes along the microtubule).
In the script, entanglement is created by the chain of two-qubit (CX) gates in Contexture 1: each site is coupled to its neighbor, so the state of one qubit depends on the others. In the tubulin model, entangled tubulins would mean that the state of one tubulin is quantum-correlated with others (e.g. along the same microtubule or across junctions). The script thus gives a concrete recipe: a linear chain of sites (one microtubule, or one strand of tubulins) with nearest-neighbor coupling that spreads the initial pattern into a distributed, non-local state. The more the information is spread, the more “holographic” the storage: no single tubulin holds the full pattern; the pattern is in the joint state. That is why partial readout can still recover it—the statistics of the observed sites (marginals) carry information about the whole.
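This spreading claim can be checked numerically on a small instance. The sketch below uses the same layer structure as above (angles still illustrative) and computes the von Neumann entropy of the reduced state of the observed sites; a positive value means they are entangled with, and therefore carry information about, the rest of the chain.

```python
# Sketch: check that the CX chain entangles observed sites with the rest.
# Small example (4 sites, 2 observed); pattern and angles are illustrative.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace, entropy

n, k = 4, 2
pattern = [1, 0, 1, 1]
qc = QuantumCircuit(n)
qc.h(range(n))
for i, bit in enumerate(pattern):
    qc.rz((np.pi / 2) * (2 * bit - 1), i)  # Contexture 0: pattern as phase
for i in range(n - 1):
    qc.cx(i, i + 1)                        # Contexture 1: spreading chain
for i in range(n):
    qc.rz(np.pi / 8, i)                    # Contexture 2 (illustrative angle)

state = Statevector.from_instruction(qc)
rho = partial_trace(state, list(range(k, n)))  # keep the k observed sites
print(entropy(rho))  # > 0: observed sites are entangled with the unobserved ones
```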
The script measures only k of N qubits (e.g. 16 of 32). In the brain model, not all tubulins are “observed” at once—only a subset might couple to the environment, to molecular motors, or to downstream signaling at a given time. So partial measurement in the script models this: only k sites (tubulins) are “read”; the rest remain unobserved. The device returns a distribution over k-bit outcomes (e.g. 4096 shots). From that we compute marginals—the probability that each observed site is in state 1—and from marginals we decode the full N-bit pattern (observed sites from marginals, unobserved by interpolation from nearest observed). That decoding step is the holographic reconstruction: full pattern from partial readout. In the biological analogy, “reconstruction” would be the process by which the rest of the pattern (the state of unobserved tubulins) is inferred or functionally recovered from the partial signal.
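A minimal sketch of that decode step, assuming a standard Qiskit-style counts dictionary; the function and variable names, the 0.5 threshold, and the nearest-observed interpolation rule are illustrative readings of the description above, not the script's own code.

```python
# Sketch of the classical decode: per-site marginals from the k-bit counts,
# thresholding for observed sites, nearest-observed copy for the rest.
def decode_full_pattern(counts, observed, n):
    """counts: {bitstring: shots} over the k observed qubits (Qiskit order:
    rightmost character = classical bit 0). observed: the k qubit indices,
    ascending. n: total number of qubits/sites."""
    shots = sum(counts.values())
    k = len(observed)
    # Marginal P(site = 1) for each observed qubit
    marg = [0.0] * k
    for bits, c in counts.items():
        for j in range(k):
            if bits[-(j + 1)] == '1':
                marg[j] += c / shots
    full = [None] * n
    # Observed sites: threshold the marginal (0.5 is an illustrative choice)
    for j, q in enumerate(observed):
        full[q] = 1 if marg[j] >= 0.5 else 0
    # Unobserved sites: copy the bit of the nearest observed site
    for q in range(n):
        if full[q] is None:
            nearest = min(observed, key=lambda o: abs(o - q))
            full[q] = full[nearest]
    return full
```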
Transitions between contextures in the script are the step from one layer to the next (Contexture 0 → 1 → 2). In the tubulin model these can be interpreted as (a) temporal transitions—the system first writes the pattern (kenogrammatic), then spreads it (entanglement along the microtubule), then adds the second phase (another contexture)—or (b) structural transitions—different segments or modes of the same microtubule (or different microtubules) acting as different contextures that interact. Transitions between states (e.g. tubulin conformational change, or 0↔1 flip) are not explicitly modeled as gates in this script; the script encodes a fixed pattern and then spreads and measures it. But the phase rotations (RZ) in each contexture can be seen as setting the “state” or phase of each Kenogram; changing those phases would correspond to different patterns or different moments in time. So you can use the same circuit structure with different pattern inputs (or different RZ angles) to model transitions from one stored pattern to another—e.g. one pattern per “memory” or per time step.
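Using the circuit sketch from earlier, modeling such pattern-to-pattern transitions amounts to building one circuit per stored pattern; the seed handling below is illustrative.

```python
# Sketch: one stored pattern per "memory" or time step, reusing the
# qpc_holographic_circuit sketch above. Seed handling is illustrative.
import numpy as np

rng = np.random.default_rng(seed=7)  # illustrative seed
patterns = [rng.integers(0, 2, size=32).tolist() for _ in range(3)]
circuits = [qpc_holographic_circuit(p, k=16) for p in patterns]
```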
To use the holographic script as a computational model of tubulin/microtubule holographic storage:
- --qubits 32 (or 64, 128) as a proxy for the number of sites.
- --observe 16 for half, or fewer/more to study how reconstruction degrades with less partial readout (see the sweep sketch below).

Running ./run_holographic.sh --qubits 32 --observe 16 --compare-simulator gives you (1) ideal reconstruction accuracy (noiseless) and (2) hardware accuracy (noise-limited). The gap illustrates that the logic of holographic storage and reconstruction works; the limit is the “environment” (noise). In the tubulin model, this corresponds to the idea that the architecture can in principle support full-pattern recovery from partial observation, while real biological noise and decoherence limit the fidelity.
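For example, a sweep over the number of observed sites could look like the following; it uses only the flags documented above, while the output filenames are illustrative.

```bash
# Sweep the number of observed qubits to study how reconstruction
# degrades with less partial readout. Flags as documented above;
# output filenames are illustrative.
for k in 4 8 12 16 24; do
  ./run_holographic.sh --qubits 32 --observe "$k" --compare-simulator \
    -o "qhm_obs${k}.json"
done
```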
| Script / QPC | Tubulin / microtubule model |
|---|---|
| N qubits | N tubulin sites (Kenograms) |
| Pattern (N bits) | Initial tubulin state configuration |
| Contexture 0 (H + RZ) | Kenogrammatic layer: superposition + pattern as phase |
| Contexture 1 (chain CX + RZ) | Spreading: entanglement along microtubule; information distributed |
| Contexture 2 (RZ) | Second contexture: polycontextural interference |
| Entanglement (CX chain) | Quantum correlation between neighboring tubulins |
| Partial measurement (k of N) | Only k tubulins “read” / coupled to environment |
| Marginals + decode | Holographic reconstruction: full pattern from partial readout |
| Transition contexture 0→1→2 | Temporal or structural transition between layers |
| Different patterns (seed) | Different stored states or time steps; transition between memories |
So you can use this script and computation as a concrete implementation of a tubulin/microtubule hologram model where Kenograms are the tubulin sites, Contextures are the three encoding layers (with entanglement and transitions between them), and partial observation leads to full-pattern reconstruction—with the caveat that this is a computational and theoretical analog, not a literal claim about biophysical implementation.
The same pattern and seed are run on the noiseless simulator (ideal) and on IBM hardware (Fez, Torino, Pittsburgh); comparing the accuracies separates the method from device noise.
./run_holographic.sh --qubits 32 --observe 16 --compare-simulator -o qhm_compare.json or python3 qpc_holographic_memory.py --qubits 32 --observe 16 --compare-simulator -o qhm_compare.json
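A short sketch for reading the saved comparison file and printing the gap in percentage points; note that the key names ideal_accuracy and hardware_accuracy are hypothetical placeholders, since the JSON schema is not documented here.

```python
# Sketch: read the comparison JSON and print the ideal-vs-hardware gap.
# CAUTION: 'ideal_accuracy' and 'hardware_accuracy' are hypothetical
# key names, not documented fields of qhm_compare.json.
import json

with open("qhm_compare.json") as f:
    result = json.load(f)

ideal = result["ideal_accuracy"]        # hypothetical key
hardware = result["hardware_accuracy"]  # hypothetical key
print(f"ideal:    {ideal:.2%}")
print(f"hardware: {hardware:.2%}")
print(f"gap:      {(ideal - hardware) * 100:+.2f} pp")
```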
Interactive 3D illustration (drag to rotate). From left to right: pattern grid (N bits) → polycontextural encoding volume (3 layers) → partial readout (k qubits) → full pattern reconstructed.
32 qubits, 16 observed (partial readout), 4096 shots. Ideal (simulator) vs hardware on three IBM backends.
| Backend | Ideal | Hardware | Gap (ideal − hardware) |
|---|---|---|---|
| ibm_fez | 37.50% | 43.75% | −6.25 pp |
| ibm_torino | 46.88% | 43.75% | +3.13 pp |
| ibm_pittsburgh | 56.25% | 43.75% | +12.50 pp |
The ideal value varies because each run used a different random pattern; the hardware result was 43.75% on all three backends.
| Metric | Value |
|---|---|
| Ideal (simulator) full accuracy | 37–56% (run-dependent, random pattern) |
| Hardware full accuracy | 43.75% (Fez, Torino, Pittsburgh) |
| Gap (ideal − hardware) | Up to 12.5 pp when ideal > hardware (noise) |
| Observed qubits | 16 of 32 (real partial measurement) |
| Backends | ibm_fez, ibm_torino, ibm_pittsburgh |
| Shots | 4096 |
To list available IBM backends with their noise characteristics: python3 list_ibm_backends_noise.py. To run on a specific backend: ./run_holographic.sh --backend ibm_pittsburgh --compare-simulator -o qhm_pittsburgh.json (or ibm_fez, ibm_torino, etc.).
Applies across QPC tests on this site, not only holographic memory.
Plain language. The main practical limit on our hardware demonstrations today is universal NISQ noise and circuit depth—imperfect gates, decoherence, and readout—not a flaw that is unique to QPC. IBM and other providers face the same physics; QPC circuits do not receive a special “extra noise penalty,” though deep or highly entangled encodings accumulate more error on any current machine.
We therefore show, where possible, both the intended behavior (e.g. ideal simulator or shallow benchmarks) and realistic processor results. When hardware outcomes fall short of the ideal QPC behavior, that gap is expected on today's chips; it is not proof that the quantum-native task definition is wrong.
Forward path. Fuller expression of QPC's capabilities on device tracks better hardware, shallower or tailored circuits, error mitigation, and in the longer term quantum error correction—the same roadmap as the rest of the industry.
How the holographic memory test fits with PQST-64, Crash Detection, PRCBS, and RICT encode–decode.
PQST-64 shows supremacy-style output. QPC Crash Detection shows cascade detection. PRCBS and RICT encode–decode show relational encoding and graph reconstruction from full measurement. All run on real hardware.
This test adds a distinct capability: partial measurement with full-pattern reconstruction. It does not replace the others; it demonstrates a different architectural layer (distributed, holographic-style storage and recovery) that aligns with optical holography, associative memory, distributed representation, and quantum-inspired cognition research. The ideal vs hardware comparison (Fez, Torino, Pittsburgh) shows the task works in principle, with outcomes on current IBM hardware limited by device noise.
Together, PQST-64, Crash Detection, PRCBS, RICT encode–decode, and QPC Holographic Memory give a consistent story: supremacy-style output, application (crash), relational decode (full measurement), and holographic reconstruction from partial measurement as a distinct capability layer.