Every field that produces computational claims faces the same gap: a number is reported, but no one can verify it came from the reported computation. ML benchmarks nobody could audit. Simulations nobody could trace. Regulatory submissions built on trust. MetaGenesis Core is the notary for computations: any result is packaged into a tamper-evident bundle and verified offline with one command, in 60 seconds. For physics and simulation domains, the chain is anchored to physical reality itself, not to an invented threshold.
ML benchmarks. Simulation outputs. Regulatory submissions. Financial model results. None of them produce a verifiable artifact — a third party cannot confirm the claimed number without re-running everything from scratch. $28 billion wasted annually on irreproducible research. 294 ML papers with inflated results. Carbon markets approaching $2 trillion backed by unverifiable models. There has never been a standard for computational proof. Until now.
When any computational result is produced — an ML accuracy score, a FEM displacement, a VaR estimate, an ADMET prediction — no verifiable artifact is generated. The result exists as a number in a PDF, a log file, or a dashboard. A reviewer who wants to verify it faces one choice: rebuild the entire environment, data, and compute from scratch. That is not verification. That is reproduction — and most reviewers never attempt it.
Every computation produces a tamper-evident evidence bundle: SHA-256 integrity, semantic invariant verification, and a Step Chain execution trace across 4 cryptographic steps. Any third party verifies offline with one command — no model access, no environment, no trust. This is not logging. This is machine-verifiable proof.
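To make the integrity layer concrete, here is a minimal sketch of what offline bundle verification can look like. The layout (`manifest.json` with a `files` map of relative paths to digests) and function names are illustrative assumptions, not the actual MVP bundle format; the point is only that a third party can check every artifact against committed SHA-256 digests with no network and no trust in the producer.

```python
import hashlib
import json
import pathlib
import tempfile

def sha256_file(path: pathlib.Path) -> str:
    """Hex SHA-256 digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_bundle(bundle_dir: pathlib.Path) -> list[str]:
    """Return a list of failure reasons; an empty list means PASS.
    Assumes a hypothetical manifest.json mapping relative paths
    to their expected SHA-256 digests."""
    failures = []
    manifest = json.loads((bundle_dir / "manifest.json").read_text())
    for rel_path, expected in manifest["files"].items():
        target = bundle_dir / rel_path
        if not target.exists():
            failures.append(f"missing artifact: {rel_path}")
        elif sha256_file(target) != expected:
            failures.append(f"digest mismatch: {rel_path}")
    return failures

# Demo: build a toy bundle, verify it, then tamper with it.
with tempfile.TemporaryDirectory() as tmp:
    bundle = pathlib.Path(tmp)
    (bundle / "result.json").write_text('{"accuracy": 0.943}')
    manifest = {"files": {"result.json": sha256_file(bundle / "result.json")}}
    (bundle / "manifest.json").write_text(json.dumps(manifest))

    print(verify_bundle(bundle))  # [] -> PASS
    (bundle / "result.json").write_text('{"accuracy": 0.999}')
    print(verify_bundle(bundle))  # ['digest mismatch: result.json'] -> FAIL
```

Note that every FAIL carries a specific reason, which is the property the evidence layers above describe: a deviation never fails silently.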
Independent researchers, regulators, and clients can no longer accept computational results on trust alone. The cost of unverified claims is rising.
“I couldn’t prove my simulation result was real. Not to anyone. Not even to myself. So I built the thing that could. That’s the whole story.”
Yehor Bazhynov — Inventor, USPTO #63/996,819
One inventor. One problem. One protocol. Built after hours, without a team or funding. The goal was never a company — it was to solve a specific problem: results nobody could verify.
The problem had a name: the verification gap. ML benchmarks nobody could audit. Simulations nobody could trace. Regulatory submissions built on PDF trust. The entire field was running on unverifiable numbers.
Once the protocol finally worked: USPTO provisional patent filed, this site live, 2407 passing tests. Not because I planned it that way — because once the right abstraction clicked, everything else followed. Like DOI became the standard for publications. Like git became the standard for code. MetaGenesis Core is built to become the standard for computational results.
This is not a story about being special.
It’s a story about staying with a problem long enough to find the right question.
mg.py verify → PASS

MetaGenesis Core started as a materials simulation verifier. Then the protocol worked for ML accuracy. Then data pipelines. Then risk models. The same 4-step structure handles every computational claim, because the problem is always the same: proof, not trust.
Young's modulus. Thermal conductivity. Multilayer contact. Three claims, one physicist, one problem: simulation results that couldn't be independently verified by any external party.
The same 4-step pipeline — run, index, pack, verify — works for ML benchmarks, system identification, data pipelines, drift monitoring. Any computational claim. Any domain. One command. PASS or FAIL.
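The 4-step structure can be sketched as a hash chain: each step's digest commits to the previous step's digest, its own name, and its payload, so reordering, omitting, or editing any step changes every downstream digest. This is a minimal illustration of the chaining idea, with invented step payloads; it is not the Step Chain implementation itself.

```python
import hashlib
import json

def step_digest(prev_digest: str, step_name: str, payload: dict) -> str:
    """Chain link: commits to the previous digest, the step name,
    and the step's payload (canonicalized JSON)."""
    record = json.dumps({"prev": prev_digest, "step": step_name,
                         "payload": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

def build_chain(steps):
    digest = "0" * 64  # genesis value
    chain = []
    for name, payload in steps:
        digest = step_digest(digest, name, payload)
        chain.append({"step": name, "digest": digest})
    return chain

def verify_chain(steps, chain) -> bool:
    """Recompute the chain from the claimed steps and compare."""
    return build_chain(steps) == chain

steps = [("run",    {"cmd": "calibrate"}),
         ("index",  {"artifacts": 3}),
         ("pack",   {"bundle": "bundle.zip"}),
         ("verify", {"status": "PASS"})]
chain = build_chain(steps)
print(verify_chain(steps, chain))                  # True
print(verify_chain(list(reversed(steps)), chain))  # False: order tampered
```

Because each link depends on all earlier links, an execution-order tamper cannot be hidden by fixing up a single digest.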
Think of it like a certificate of conformity for computation.
When a product gets certified, the certifier doesn’t rebuild it from scratch — they verify it meets the documented spec. That certificate is then trusted by anyone, anywhere, without repeating the test.
MetaGenesis Core does the same for computational results. The bundle is the proof.
And where a physical constant exists (E = 70 GPa for aluminum per NIST, ~1% uncertainty; kB = 1.380649×10⁻²³ J/K, exact by the SI 2019 definition, zero uncertainty), the verification chain is anchored to physical reality itself. NIST-measured material constants. SI fundamental constants. This is traceability, not compliance.
Physical constants (E=70 GPa, kB, NA from NIST/SI 2019) anchor materials and physics claims. ML, Finance, and Pharma claims use domain-specific verification thresholds — a different but documented level of traceability. Scope documented in reports/known_faults.yaml.
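The anchoring idea reduces to one check: the fitted quantity must sit within a documented tolerance of the physical reference value. Below is a minimal sketch using the aluminum anchor (E = 70 GPa) and the demo's documented threshold (rel_err ≤ 0.01); the function and variable names are hypothetical.

```python
E_NIST_AL = 70.0e9  # Pa, NIST reference for aluminum (~1% uncertainty)
MTR1_TOL = 0.01     # relative-error threshold from the demo's V&V spec

def anchored_check(e_fitted_pa: float, anchor_pa: float = E_NIST_AL,
                   tol: float = MTR1_TOL) -> tuple[bool, float]:
    """PASS iff the fitted modulus is within tol of the physical anchor."""
    rel_err = abs(e_fitted_pa - anchor_pa) / anchor_pa
    return rel_err <= tol, rel_err

# A modulus fitted from stress/strain data, checked against the anchor.
ok, err = anchored_check(69.6e9)
print(ok, round(err, 4))  # True 0.0057
```

The anchor is measured reality, not a threshold someone invented, which is the distinction the traceability claim rests on.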
Every run produces an evidence bundle with five independent verification layers. Any deviation surfaces as FAIL with a specific reason.
NIST Randomness Beacon timestamp: catches backdated bundles.

Clone the repo. Run one command. Your result is now cryptographically anchored, semantically verified, and independently auditable by anyone, offline, without trusting you.
Every system using only file hashes is vulnerable. An adversary removes content, recomputes all hashes, and passes integrity checks silently.
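The attack is worth seeing concretely. If the per-file digests are the only commitment, an adversary can drop an artifact and recompute a self-consistent manifest; the defense is to commit to the whole manifest with a single root digest that is fixed outside the bundle (for example, by an external timestamp at pack time). This sketch uses placeholder digests and an invented `manifest_root` helper to show the mechanism, not the protocol's actual construction.

```python
import hashlib
import json

def manifest_root(manifest: dict) -> str:
    """One digest committing to every (path, digest) pair at once."""
    return hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()).hexdigest()

# Original manifest with two artifacts (placeholder digests).
manifest = {"result.json": "a1" * 32, "job_snapshot.json": "b2" * 32}
anchored_root = manifest_root(manifest)  # fixed externally at pack time

# Adversary strips the core evidence and recomputes a consistent manifest.
stripped = {"result.json": "a1" * 32}
# Every file listed in `stripped` still matches its digest, so a naive
# per-file check passes. The externally anchored root does not:
print(manifest_root(stripped) == anchored_root)  # False
```

Once the root is anchored outside the adversary's reach, "recompute all hashes" stops being a viable move.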
Attack: strip job_snapshot (the core evidence) from run_artifact.json. Check: job_snapshot present, payload.kind matches, canary_mode correct. Failure surfaced: job_snapshot missing. Proven by: tests/steward/test_cert02_pack_includes_evidence_and_semantic_verify.py.

The open demo isn't synthetic. It runs a real physics calibration against a real dataset, then packages everything into a verifiable bundle. Here's exactly what happens, step by step.
backend/progress/mtr1_calibration.py · Dataset: demos/.../strain_stress_open.csv · V&V threshold: MTR-1 rel_err ≤ 0.01

Every claim has an implementation, runner dispatch, threshold, and tests. Enforced on every PR, not by human review.
Bidirectional claim coverage checked on every pull request. A claim without implementation blocks merge. An implementation without claim blocks merge.
MetaGenesis Verification Protocol (MVP) v1.0 — open spec for packaging computational claims into independently verifiable evidence bundles.
A minimal, concrete spec for what “independently verifiable” means for any computational result.
pack_manifest + evidence_index + per-claim artifacts

Any ML accuracy claim packaged into a tamper-evident bundle. Reviewers verify offline: no model access, no environment, no GPU.
ADMET predictions, PK/PD simulation outputs — packaged with full provenance. FDA 21 CFR Part 11 compatible audit trail.
Carbon sequestration and deforestation models become independently auditable. Corporate buyers verify without proprietary model access.
VaR, credit scoring, stress test outputs packaged for Basel III/IV model risk management.
Young’s modulus and thermal conductivity — verified against NIST-measured physical constants: Al E=70 GPa, Ti E=114 GPa, SS316L E=193 GPa, Cu k=401 W/(m·K). Machine-verifiable proof for 4 materials. Drift detection included.
ANSYS, FEniCS, OpenFOAM outputs verified against a physically measured anchor — E = 70 GPa for aluminium, measured in thousands of labs worldwide. Not threshold compliance. Traceability to physical reality. Machine-readable proof for engineering certification.
Existing tools solve parts of the problem. MetaGenesis Core is the only open protocol that combines governance enforcement, semantic integrity, and offline third-party verification.
MLflow and DVC excel at experiment tracking and pipeline reproducibility. MetaGenesis Core adds the missing verification layer on top of them: a complement to experiment infrastructure, not a replacement.
Three major frameworks — EU AI Act, FDA 21 CFR Part 11, Basel III/IV — all require the same thing: independently auditable computational evidence. MetaGenesis bundles satisfy that requirement with a single offline-verifiable artifact.
Article 12 mandates logging of AI system operations enabling post-market monitoring. Annex IV requires technical documentation proving the system functions as intended. MetaGenesis bundles provide immutable, offline-verifiable evidence records without exposing proprietary model internals to regulators.
FDA draft guidance (Jan 2025) establishes a 7-step credibility framework for AI in drug development. Computational claims in IND filings require documentation that a regulator can verify without re-running the model. MetaGenesis provides exactly that: a self-contained bundle, verifiable offline in 60 seconds.
SR 11-7 requires independent validation of risk model outputs by a party that did not build the model. Today that means handing over model code or paying for a full re-run. MetaGenesis bundles give validators verifiable evidence of exactly what the model produced — without touching proprietary code or data.
MetaGenesis Core does not constitute legal or regulatory compliance advice. It provides technical infrastructure that supports compliance documentation workflows.
MetaGenesis Core is open source. The protocol is free to use. We offer a free pilot for your specific use case before any commercial conversation.
The questions we'd ask if we were you.
Disclosure policy and known limitations live in SECURITY.md and reports/known_faults.yaml.

Evidence stripping is caught by the semantic layer, proven by test_cert02. The Step Chain layer catches execution-order tampering, proven by test_cert03. Known limitations are in reports/known_faults.yaml.

The verifier (mg.py) is MIT-licensed open source: any third party can read, audit, or independently re-implement it. The trust model requires trusting open-source code, not any single party.

What you trust is the verifier (mg.py) and the protocol specification. Both are open source and auditable. The protocol is designed so any third party can re-implement the verifier independently from the spec and get the same result.

One protocol. Five industries. Find your use case.
Your model achieves 94.3% accuracy. Your client wants proof — not a screenshot, not a PDF, not a Docker container they have to run. They want an answer they can verify themselves in 60 seconds.
mg.py verify --pack bundle.zip on their laptop

Your computational claim is a number in a PDF. A regulator can't verify it without recreating everything from scratch. MetaGenesis packages each claim into an artifact any auditor verifies in 60 seconds. Not trust. Proof.
SR 11-7 and Basel III/IV require independent validation of risk model outputs. Your VaR models, credit scoring pipelines, and stress test results need a verifiable evidence trail — without handing over your proprietary model to the validator.
Your paper's reviewer cannot reproduce your simulation results without your exact environment, data, and compute. Nature estimates 70%+ of computational results are never independently verified. MetaGenesis makes yours the exception.
Your ANSYS, FEniCS, or OpenFOAM simulation produces displacement results. Your engineering client needs machine-verifiable proof that the output matches physical reference data — not a PDF report, not a screenshot. A certificate any auditor can check offline.
mg.py verify --pack bundle.zip on their machine

This is the exact logic the protocol runs. No backend. No network. The same verification that ships with the codebase.
Share a computational result — any domain. We build a verification bundle for it at no charge. You see PASS or FAIL before any commercial conversation. No strings attached. Response within 48 hours.
Any computational output — ML accuracy, calibration data, pipeline certificate, simulation output.
We implement the verification claim for your domain and generate a tamper-evident evidence bundle.
Run mg.py verify --pack bundle.zip on your machine. See PASS or FAIL. No trust required.
If it solves your problem, we discuss next steps. If it doesn't fit, we tell you honestly.
Clone. Run the demo. See PASS.
No account. No API key. No GPU. No network.