The next Mendel
has already written their paper.
We will find it.

A protocol for the systematic rediscovery of scientific ideas already present in the world archive — lost to paradigm, timing, and institutional inertia for decades.

Read White Paper
Built on
200M+
publications in archive
most never read again
~27yrs
average recognition delay
for breakthrough ideas
$2.4T
global annual R&D spend
inside structural amnesia
14/14
leading DeSci projects
with zero retrospective AI
Past · S1 · The Immutable Archive

Science forgets
faster than it learns

The scientific system is optimised for producing new knowledge — not for preserving existing knowledge. When a paradigm wins, alternative paths close. AI systems trained on this corpus amplify the mainstream. Exceptions disappear into archives no algorithm is scanning.

This is not a historical curiosity. It is a systemic failure reproducing itself right now.

Gregor Mendel
34 years waiting
Laws of heredity — the foundation of genetics. Published 1866. Rediscovered 1900.
Died without knowing his work mattered
Ignaz Semmelweis
20 years waiting
Antiseptics — could have saved thousands of lives per year from the moment of publication.
Died in a psychiatric institution
John Hopfield
40 years waiting
Neural networks — the foundation of modern AI. The work was there. The world wasn't ready.
Nobel Prize — at age 91
"Sometimes humanity already knows the answer — but doesn't realise it knows."
Gunther Stent · Prematurity and Uniqueness in Scientific Discovery · Scientific American, 1972
Past · S1
Immutable archive
The archive holds everything. Nobody is reading it.
Over 200 million publications. IPFS hashes. NFT discovery proofs with timestamps. Forgotten ideas waiting. The archive is immutable — every paper that entered it is still there. The question is whether anyone will ever find what it holds.
Present · S2–S4
AI finds. Experts verify. Industry funds.
The algorithm detects. The scientist interprets. The market funds.
SDA computes CDS scores. GNN finds structural anomalies in the citation graph. Gemini extracts structured data from archival scans. ORCID-verified experts review. Industrial bounties are matched. The loop activates.
Future · S5
Replicate. Tokenize. Close the loop.
Replication. RWA tokenization. Discovery Index. The loop closes.
Only after confirmed laboratory replication does RWA tokenization open. Patents, startups, licences. Revenue distributed across the chain. Discovery Index published. The archive record is updated. The loop closes and begins again.
Present · The Discovery Engine

Computational Discovery Archaeology

m3mbrane is, to our knowledge, the first systematic operational framework for algorithmically detecting, verifying, and financing ideas already present in the scientific archive. No existing DeSci project occupies this niche. 90% of the required infrastructure already exists — m3mbrane builds only the discovery algorithm on top.

1
Corpus ingestion
OpenAlex · Semantic Scholar · PubMed · arXiv · CORE
2
Vision extraction
Gemini OCR · formulas → LaTeX/SMILES · scans pre-1990
3
Embedding
SPECTER2 (Allen Institute) · SciBERT methodology scoring
4
Citation graph
NetworkX → Neo4j · Crossref metadata
5
GNN anomaly detection
Nodes that should have become hubs — but never did
6
Link prediction
Missing connections in the graph of science
7
LLM interpretation
Gemini · semantic context · expert-ready explanation
8
CDS ranking
Learning-to-rank · PyTorch + XGBoost
9
Expert queue
ORCID validation · Replication · RWA tokenization
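The nine stages above can be read as a single sequential composition. A minimal sketch, assuming identity placeholders for each stage (the real implementations would wrap the tools named above — OpenAlex ingestion, Gemini OCR, SPECTER2, and so on; every function name here is hypothetical):

```python
# Hedged sketch: the nine pipeline stages as a sequential composition.
# Each stage is an identity placeholder; real implementations would wrap
# the tools named above (OpenAlex, Gemini, SPECTER2, Neo4j, ...).

def run_pipeline(corpus, stages):
    """Thread the corpus through each stage in order, returning the result."""
    state = corpus
    for name, fn in stages:
        state = fn(state)
    return state

# Identity functions stand in for the real stage implementations.
stages = [(name, lambda s: s) for name in [
    "ingest", "vision_extract", "embed", "build_citation_graph",
    "gnn_anomaly", "link_predict", "llm_interpret", "cds_rank", "expert_queue",
]]
result = run_pipeline(["paper_1987"], stages)
```

The point of the shape is that each stage consumes the previous stage's output, so a failure anywhere halts the flow before the expert queue.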
// Composite Discovery Score
CDS(p) =
  w₁·CAD′(p)
  // citation anomaly detection
  // E[c|X] ≈ NegBin(year,IF,|F|,M)
+ w₂·CDE′(p)
  // cross-domain embedding
  // cos(θ)·e^(−λΔt)·d(F₁,F₂)
+ w₃·TMG′(p)
  // technology maturity gap
  // NASA TRL ∈ {1..9}
+ w₄·TRS′(p)
  // temporal relevance score
  // max_τ cos(emb(p),emb(trend(τ)))
 
// CDS ∈ [0,1] · Σwᵢ = 1
// GNN detects
// LLM interprets
// CDS ranks · experts verify
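The weighted sum above can be sketched directly. A minimal illustration, assuming pre-normalised component scores in [0, 1]; the weights here are placeholders, not the learned SDA values (which the roadmap says come from learning-to-rank):

```python
# Hedged sketch: combine normalised CAD', CDE', TMG', TRS' into a CDS.
# Weights are illustrative placeholders, not the learned SDA values.

def cds(cad: float, cde: float, tmg: float, trs: float,
        w: tuple = (0.35, 0.25, 0.20, 0.20)) -> float:
    """Composite Discovery Score: weighted sum of normalised components."""
    assert abs(sum(w) - 1.0) < 1e-9, "weights must sum to 1"
    components = (cad, cde, tmg, trs)
    assert all(0.0 <= c <= 1.0 for c in components), "components must be in [0, 1]"
    return sum(wi * ci for wi, ci in zip(w, components))

# Toy component scores for a hypothetical archival paper
score = cds(cad=0.9, cde=0.8, tmg=0.95, trs=0.7)
```

Because every component is normalised and the weights sum to one, the resulting CDS is guaranteed to stay in [0, 1], matching the constraint in the formula block.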
Killer Feature

Reverse Discovery

Every search tool finds what you already know to look for. Reverse Discovery finds answers to questions you haven't asked yet. The client describes an R&D problem — SDA builds its embedding — link prediction finds structurally related work in the citation graph.
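The flow described above — embed the client's problem, then rank archival work by structural relevance — can be sketched with cosine similarity over embeddings. The toy vectors and the `reverse_discovery` helper below are assumptions for illustration; SDA would use SPECTER2 embeddings and citation-graph link prediction, not raw cosine ranking alone:

```python
# Hedged sketch of the Reverse Discovery flow: embed an R&D problem
# statement, then rank archival papers by similarity. The vectors here
# are toys standing in for SPECTER2 embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def reverse_discovery(problem_vec, archive):
    """archive: {paper_id: embedding}. Returns papers ranked by relevance."""
    scored = [(pid, cosine(problem_vec, vec)) for pid, vec in archive.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

archive = {
    "ishino_1987": [0.9, 0.1, 0.3],
    "unrelated_1975": [0.1, 0.9, 0.2],
}
ranked = reverse_discovery([0.8, 0.2, 0.4], archive)
```

The ranking, not the absolute score, is what feeds the proactive alert: the client never has to know the 1987 paper exists to be pointed at it.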

"Paper by Yoshizumi Ishino, Osaka University, 1987. Describes repeating DNA sequences — a CRISPR precursor. CDS = 0.91 · TMG = 0.94 · Relevance to your task: 0.88"
Example Reverse Discovery proactive alert · Phase 2 feature
Trust Architecture

Four barriers against pseudoscience

An open system without filters inevitably attracts noise. Funding is structurally unavailable until all four barriers are passed — in order.

1
AI screening
SDA checks argument structure, statistics, and epoch-appropriate methodology standards. Obvious fabrications and esoterica are blocked here.
2
Citation anomaly
High-potential paper vs. simply ignored paper — they have distinct citation graph profiles. The algorithm distinguishes them.
3
ORCID validation
Minimum 2 verified domain scientists. $MBG stake. Slashing on error. No pseudoscience passes an expert who has skin in the game.
4
Replication
RWA tokenization opens only after confirmed laboratory replication. Everything that cannot be reproduced is blocked here. No exceptions.
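The ordering constraint — funding stays structurally unavailable until every barrier has passed, in sequence — can be sketched as a fail-fast funnel. Barrier names mirror the four above; the boolean flags are illustrative placeholders for the real checks:

```python
# Hedged sketch of the four-barrier funnel: a candidate must pass every
# gate, in order, before funding opens. The flags are placeholders for
# the real AI screening, graph analysis, expert review, and replication.

BARRIERS = ["ai_screening", "citation_anomaly", "orcid_validation", "replication"]

def funding_unlocked(candidate: dict) -> bool:
    """Funding is structurally unavailable until all barriers pass in order."""
    for barrier in BARRIERS:
        if not candidate.get(barrier, False):
            return False  # fail fast: later barriers are never evaluated
    return True

# A paper that cleared expert review but has no confirmed replication yet
paper = {"ai_screening": True, "citation_anomaly": True,
         "orcid_validation": True, "replication": False}
```

Fail-fast ordering matters here: an unreplicated paper never reaches tokenization, no matter how strong its earlier scores.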
Future · Market & Tokenomics

$2.4T market.
Zero competitors with graph discovery.

Google Scholar and Semantic Scholar find what you already know to search for. m3mbrane finds what nobody is searching for — because nobody knows it exists. That requires GNN on citation subgraphs, link prediction, and learning-to-rank on premature discovery cases. Nothing in the market does this.

TAM · $2.4T
Global R&D spending · UNESCO, 2021
SAM · $48B
DeSci + corporate R&D search · 2026–2030
SOM · $240M
Pharma + deeptech + 2,000 scientists · launch target
Drug Repurposing — $30B+ by 2030
Traditional development: $2.6B + 12 yrs
Repurposing: $300M + 3–5 yrs
SDA finds a molecule from a 1987 archive with partial safety profile — pharma saves years of preclinical work
Year 1 · $320K
Year 2 · $2.2M
Year 3 · $12M
Tokenomics

Earned. Not bought.

Two-token architecture where governance weight is determined by contribution, not capital. This directly addresses the core failure of most DAOs — plutocracy dressed as decentralisation.

$MBR
Utility Token
Earned through contributions: ideas, reviews, ORCID verification, bounty work. Spent on SDA access. Corporate clients buy $MBR to post R&D tasks. Every commercialisation event creates new demand. Staked by validators — slashed on error. Not speculative by design.
Tied to real work
$MBG
Soulbound · Governance
Non-transferable. Cannot be purchased — structurally, not just by rule. Does not trade, does not inherit, does not move. Accumulates only through verified contributions to the protocol. Weights DAO votes by domain expertise. Governance = merit, not capital.
Cannot be bought · Ever
Public Product

Discovery Index

The first measurable indicator of how well science uses its own past. Nature Index measures what science produces today. Discovery Index measures what it already knew — but didn't realise. A language readable by Nature, the Financial Times, and corporate R&D directors simultaneously.

Discovery Latency
Years from publication to rediscovery. The primary metric of scientific memory failure. No analogue exists.
Field Memory Score
What percentage of significant ideas in a domain remains undetected. "Depth of amnesia." Preliminary data points to Soviet physics and 1940–70s chemistry as highest-potential domains.
Economic Impact
Estimated commercial value of rediscovered ideas: patents, licences, startups, R&D budgets activated.
Citation Growth Rate
Citation growth of a rediscovered idea in the 12 months after SDA detection. Verifies that rediscovery had real impact.
Cross-domain Jump
Number of fields the idea spread into after rediscovery. The strongest finds always cross disciplinary boundaries.
Annual publication
Published each year. The only systematic ranking of how well science uses what it already knows.
Roadmap

From algorithm to ecosystem

Q3 2026
SDA v1
CDS_v1 = CAD + CDE · PubMed/arXiv pre-2000 · First 500 ideas archived · Vision extraction layer live
v1
Q4 2026
First revenue
3 industrial bounties · RWA pilot · ORCID verification · Community Archaeology Layer opens
v1
Q2 2027
Replication
LabDAO partnership · 10 confirmed replications · Knowledge graph v1 · Reverse Discovery beta
Phase 2
Q4 2027
Full CDS
TMG + TRS + GNN + learning-to-rank · Discovery Index #1 published · 10+ partnerships
Phase 2
2028+
Archaeology
JSTOR / HathiTrust · Soviet science 1950–1990 (RAS/VINITI) · Non-English archives · CS module
Phase 3
Community · Anyone Can Dig

Community Archaeology Layer

The algorithm finds anomalies in the citation graph. The scientist evaluates scientific merit. But between the archive and the algorithm there is a layer the machine cannot fully close: first-pass navigation through volume.

Inspired by Galaxy Zoo (150,000 volunteers classified galaxies more accurately than the automated algorithms of the era) and Foldit (players without biology degrees solved a protein folding problem in ten days that structural biologists had worked on for fifteen years) — m3mbrane opens the archive to anyone.

Archive Scout
Browse & flag forgotten papers
Scan digitised archives (JSTOR, HathiTrust, archive.org). Mark papers that seem anomalously forgotten — strong methodology, unexpected topic, suspiciously few citations. 5–20 minutes per session. No academic degree required.
1–3 RP per flag → $MBR on verification
Transcription & Translation
Unlock what OCR can't read
Transcribe manuscripts, decipher damaged scans, translate short fragments from German, French, Latin, Russian. Where Gemini breaks on historical typeface or a torn page — a human reads. Unlocks Soviet RAS/VINITI archives and pre-revolutionary Russian journals.
5–15 RP per fragment → $MBR on use
Connection Mapping
"I read about this somewhere else"
Signal potential connections between papers from different eras or fields. A weak individual signal — but 50 independent people seeing a link between a 1962 paper and a modern topic becomes a strong input to the link prediction model.
2–5 RP per link → 1.5× on confirmation
Primary Sorting
Structure, not content
Answer simple structural questions: "Is there a testable hypothesis?", "Is a specific method described?", "Are there numerical results?" No assessment of correctness — only structure. Reduces load on the expert queue by 30–40%.
1–2 RP per task → $MBR on queue entry
Community members determine scope. Experts determine quality. The two never swap roles.
Core design principle · Community Archaeology Layer
Governance · Ostrom's Commons

Three levels. One loop. Earned, not bought.

m3mbrane builds governance on Elinor Ostrom's principle from "Governing the Commons" (1990): commons governance fails not when there are many participants — but when those with the most capital gain disproportionate control over the rules. That is why $MBG cannot be purchased. Governance weight is determined solely by verified contribution — whether you are an ORCID scientist, a community member with high RP, or a laboratory with confirmed replications.

Level 1 · Community
Anyone without a degree
Entry: first completed task. Rights: archive scouting, transcription, connection mapping. No DAO vote initially. Earns RP → $MBR on confirmed finds. No slashing. Path upward through real contribution.
Level 2 · Validator
ORCID-verified scientist
Entry: ORCID + domain verification + $MBG stake. Rights: review SDA finds, participate in Verification Council. Slashing on error. Earns $MBR per review + $MBG for confirmed discoveries.
Level 3 · Delegate
Experienced validator or senior community member
Entry: 5,000+ RP + 3 replicated finds + nomination by existing delegates. Rights: full vote in one of 4 DAO councils. Right to propose protocol changes. Cannot be purchased — only earned.
Join

The next Bruno has
already written their paper.
m3mbrane will find it.

It may have been sitting in an archive since 1987. We are looking for scientists, investors, laboratories, and curious people who want to be part of this. The White Paper is available now.

Scientist / Validator
ORCID verification. 15–30 minutes per review. $MBR for participation, $MBG for accuracy. Governance voice in DAO. Domain-matched alerts — no noise outside your field.
Investor
$48B SAM with zero graph-discovery competitors. Drug repurposing as first enterprise entry point. Structural market gap confirmed by direct comparison across 14 DeSci projects. White Paper available.
Laboratory / Partner
Replication grants. Multiplier up to 2.8× in $MBR. Novelty premium in publications — "we reproduced a 1987 discovery." Exclusive find — no competition. Partial safety profiles reduce cost by 30–60%.

m3mbrane is in active development. We are building with scientists, investors, and laboratories who believe the next breakthrough may already be written — waiting in an archive.

Contacts
Coming soon
We will be in touch