Customer Task Catalog

A buyer-facing map from real-world problem shapes to QPC run families: what stays the same (engineering stack), what changes per domain (encoding only), what you supply, what you receive, and what stays outside the public boundary so core IP remains protected.

Evaluation & diligence · Same stack, many domains · No core exposure

What “the same stack” means

Not one formula for every industry — one repeatable quantum–classical pattern you operationalise across tasks.

Domain data (entities, tiers, weights, scenarios) → Structured graph / roles → Polycontextural pre-layers (context → gates on qubit groups) → Variational or sampling core (e.g. QAOA-style cost on an Ising / QUBO-shaped Hamiltonian) → Hardware shots → Classical metrics (thresholds, indices, rankings, JSON)

Per domain you change: node semantics, coupling rules, context labels, and how metrics are explained — not the cloud SDK pattern (Qiskit, tokens, job IDs, reproducible artifacts).
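The "encoding only" step above can be sketched in a few lines. This is a minimal, hypothetical illustration of mapping domain data (biases and couplings) onto an Ising-shaped cost and checking it classically at toy scale; the function and variable names are illustrative, not the shipped QPC encoders, and the brute-force search stands in for what the variational/sampling core approximates on hardware.

```python
from itertools import product

def ising_cost(h, J, spins):
    """Energy of a spin assignment under an Ising Hamiltonian
    H(s) = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j, with s_i in {-1, +1}."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# Domain data for three entities: stress biases (h) and exposure couplings (J).
# Per domain, only these numbers and their semantics change; the cost shape stays.
h = [0.5, -0.2, 0.1]
J = {(0, 1): 1.0, (1, 2): -0.5}

# Classical brute-force check over all 2^3 spin assignments (toy scale only).
best = min(product([-1, 1], repeat=3), key=lambda s: ising_cost(h, J, s))
print(best, ising_cost(h, J, best))  # lowest-energy configuration
```

At scale, the same `h` / `J` structure is what gets compiled into the QAOA-style cost layer, while the decode step maps low-energy configurations back to domain labels.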

Security boundary (what buyers never need)

The catalog is designed so procurement and integration can proceed without access to proprietary synthesis.

Task catalog (repository references)

Each row links to concrete artifacts in this release tree (reports, scripts). Commands assume your venv, tokens, and plan match each provider’s docs. Weather / forecast rows are framed as scenario and combinatorial analytics, not a replacement for numerical weather prediction.

| Domain | Problem shape | QPC run family | Reference (script / report) | Typical inputs → outputs |
| --- | --- | --- | --- | --- |
| Finance / systemic risk | Network of institutions; cascade or phase-transition narrative | Multi-context + Ising / QAOA stress on graph | qpc_crash_detection_ibm.py, qpc_crash_detection_128q.py · Crash report | Graph / weights → stress indices, ranked nodes, phase-style readouts |
| Portfolio / insurance | Many assets, constraints, multi-objective framing | Multi-context optimisation encoding | Harel summary · IBM / IonQ case lines on home | Asset set + rules → allocation-style quantum workflow outputs (per the published case) |
| HPC / cyber resilience | Infrastructure nodes, tiers, lateral movement & storage contexts | QPC-HTD-20Q (full-chip Garnet class) | qpc_htd_iqm_garnet.py · IQM HTD page | Fixed 20-node map → θ, P(H≥θ), R, top dangerous nodes, JSON + job id |
| Climate / policy (CO₂) | Multi-regime, multi-stakeholder constraints; QUBO / QAOA narrative | Parallel / multi-contexture CO₂ architecture runs | qpc_co2_qubo_qaoa_qpc_architecture.py, qpc_co2_2context_parallel_execution.py, qpc_co2_3contexture_parallel_execution.py · CO₂ results | Scenario tags + architecture params → comparative quantum metrics across context counts |
| Supply chain / operations | Discrete choices, couplings, capacity-style constraints | QAOA on structured operational graph (IBM-scale demo) | qpc_supply_chain_optimization_65q.py | Chain encoding → sampled configurations and scored objectives (per script design) |
| Geopolitical / crisis scenarios | Scenario lists, ranked outcomes after quantum pass | Data-driven scenario quantum ranking | qpc_crisis_fez.py · Crisis final report | Scenario payloads → ordered results, executive report linkage |
| Strategy / multi-option | Discrete strategic options; synergies and conflicts as edges | Same family as supply / portfolio: decision qubits + interaction Hamiltonian | Extend patterns from qpc_supply_chain_optimization_65q.py / crash encodings; custom briefings under NDA | Option set + pairwise J → high-coherence combinations and indices (template) |
| Weather & forecast analytics | Discrete regime bins, ensemble branches, or risk couplings (not replacing NWP PDEs) | Small Ising / sampling on user-defined discrete states | No single named script yet (roadmap row); encode regimes like CO₂ / strategy rows | Regime graph + weights → joint high-risk patterns in label space (supporting analytics only) |
| Relational structure in → out | Graph → circuit → decode to relation | RICT encode–decode | qpc_rict_decode_test.py · RICT report | Graph spec → measurements → reconstructed structure (F1-style metrics in report) |
| Multi-context depth (IBM) | Many formal contexts in one hardware run | PFQM family | qpc_pfqm_v3.py, Heron paths · PFQM | Context count + params → breadth demonstration on named backends |
| Platform compatibility | Same brickwork / job shape on vendor cloud | Hardware smoke & full-width checks | qpc_iqm_benchmark.py · Pasqal, Origin, Azure, IQM pages | Token + backend → JSON metrics, job ids, uniqueness / depth stats |
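To make output columns like "θ, P(H≥θ), R, top dangerous nodes, JSON" concrete, here is a hedged sketch of decoding hardware shot counts into threshold-style metrics. The metric definitions and the `decode_shots` name are illustrative stand-ins, not the proprietary QPC formulas: a '1' bit marks a stressed node, P(H≥θ) is taken as the fraction of shots with at least θ stressed nodes, and nodes are ranked by marginal stress probability.

```python
import json

def decode_shots(counts, theta):
    """counts: bitstring -> shot count; bit '1' marks a stressed node."""
    total = sum(counts.values())
    n = len(next(iter(counts)))
    # Illustrative P(H >= theta): fraction of shots whose stressed-node
    # count meets the threshold.
    p_high = sum(c for b, c in counts.items() if b.count("1") >= theta) / total
    # Per-node marginal stress probability, used to rank dangerous nodes.
    node_p = [sum(c for b, c in counts.items() if b[i] == "1") / total
              for i in range(n)]
    ranked = sorted(range(n), key=lambda i: -node_p[i])
    return {"theta": theta, "p_high": p_high,
            "top_nodes": ranked[:3], "node_p": node_p}

# Toy shot histogram for a 4-node map (1000 shots total).
counts = {"1100": 400, "1000": 300, "0010": 200, "1111": 100}
report = decode_shots(counts, theta=2)
print(json.dumps(report))  # JSON payload alongside the provider job id
```

The real run families attach the provider job id and backend name to the same JSON envelope, which is what the table's "JSON + job id" columns refer to.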

Script file names refer to the site_release tree; paths may differ on your clone. For interactive submission templates, use the customer console when connected to your QPC / IBM API.

How buyers use this for evaluation

  1. Pick the row closest to their problem shape (or combine two rows, e.g. strategy + risk).
  2. Ask for a sanitised sample JSON and metric definitions for that family only.
  3. Run or witness a job on their contract (IBM, IQM, Azure, …) using the listed entry point or a partner bundle.
  4. Map outputs to internal dashboards; keep circuit synthesis out of scope for the first diligence phase if desired.
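Steps 2 and 4 can be sketched together: a sanitised sample payload and a mapping onto internal dashboard keys. Every field name below is hypothetical (there is no published QPC schema in this catalog); the point is only that diligence can proceed on metric definitions without touching circuit synthesis.

```python
import json

# Hypothetical sanitised sample for one run family (step 2).
# Field names are illustrative, not a QPC schema.
sample = {
    "family": "QPC-HTD-20Q",
    "job_id": "REDACTED",
    "backend": "iqm_garnet",
    "metrics": {"theta": 3, "p_high": 0.18, "resilience_R": 0.82},
    "top_nodes": ["node_07", "node_12", "node_03"],
}

# Step 4: map vendor-neutral metrics onto internal dashboard keys,
# keeping circuit-level detail out of scope.
dashboard_row = {
    "risk_score": sample["metrics"]["p_high"],
    "resilience": sample["metrics"]["resilience_R"],
    "watchlist": sample["top_nodes"],
}
print(json.dumps(dashboard_row))
```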

Honest limits (builds trust)