NeuroMesh Litepaper
The Robotic Intelligence Layer · Forming a Decentralised Superbrain
Version 4.3 · 2026 · Built on Solana · Humanoid-First · Data as RWA · Powered by Cerebro
This litepaper is for informational purposes only and does not constitute financial, investment, legal, or tax advice. Token projections and financial figures are directional and illustrative. NeuroMesh is an evolving protocol and all specifications are subject to change. Please conduct your own research before participating.
Table of Contents
- Executive Summary
- The Problem
- The Solution
- Cerebro: The Network’s Brain
- NeuroOS-H: The Robot Runtime
- The Intelligence Cycle
- How Robots Get Smarter
- Proving Intelligence is Real
- Data Ownership and Rights
- Token Economy
- Web3 Architecture
- Market Mechanics
- Operator Economics
- Competitive Landscape
- Roadmap
- Risks
- Conclusion
- Formula Reference
- Glossary
1. Executive Summary
NeuroMesh is the robotic intelligence layer. It is the operating software, economic infrastructure, and shared learning network that transforms isolated robots into nodes of a collective, self-improving global brain. Every robot that joins the network makes every other robot smarter. Every operator earns value from the intelligence their machine contributes. Every skill learned anywhere in the world becomes available everywhere.
The robotics industry is experiencing its hardware moment. Capable humanoid platforms are arriving at commercial scale, but hardware capability has dramatically outpaced intelligence infrastructure. The result is a fleet of increasingly powerful machines that each start from zero, learn in isolation, and generate enormous amounts of valuable training data that flows exclusively to the hardware vendor rather than the operator who paid for the machine and took on all the operational risk.
NeuroMesh fixes this with three interlocking pieces. First, every robot running NeuroMesh shares verified, anonymised experience with every other robot on the network through a federated learning architecture that keeps raw data local while transmitting the lessons derived from it. Second, at the centre of the network sits Cerebro, a masternode intelligence aggregation layer that synthesises experience from thousands of robots simultaneously, identifies cross-domain skill patterns, and distributes refined intelligence back to the fleet. Cerebro is the frontal lobe of the collective mind. Third, every piece of robot-generated experience is tokenised as a Real-World Asset with cryptographically provable ownership, programmable licensing rights, and automatic royalty streams. You own what your robot learns.
The token that powers all of this is $NEURO, deployed on Solana for sub-second finality and sub-cent transaction fees. The network supports seven distinct token types that together form a complete economic layer for embodied intelligence, from compute credits to data rights to energy windows.
2. The Problem
Robots Cannot Share What They Learn
Picture two warehouses on opposite sides of the world, both operated by the same logistics company, both deploying the same model of humanoid robot. In the Osaka facility, Robot A spends six weeks learning to reliably pick deformable food packaging off a conveyor belt. The task sounds trivial, but it is not. Grip force must be calibrated in real time against the compressibility of the packaging. Weight distribution shifts as contents settle. Humidity causes subtle changes in the coefficient of friction between the robot’s fingers and the plastic. Robot A makes hundreds of mistakes, adjusts, and eventually develops a reliable technique. It has, at that point, created something genuinely valuable: a small but hard-won piece of physical intelligence.
In Rotterdam, Robot B starts from scratch on the same problem. It will make identical mistakes. It will take the same six weeks. The knowledge Robot A earned is locked inside local model weights on a proprietary server controlled by the hardware vendor. The operator paid for the machine, paid for the six weeks of failed attempts, paid for the downtime and the dropped packages and the retraining cycles. They received none of the intellectual value and none of the downstream revenue when that vendor used the experience to train their next product generation.
This is not an edge case. It is the defining structural problem of the robotics industry right now. Every robot starts from zero. Every deployment is an isolated experiment. The collective learning of a global fleet of machines compounds for the vendor and nobody else.
Nobody Can Prove What a Robot Actually Did
The second problem is verification. When a robotics company tells a prospective customer that their platform achieved 99.7% task success in a controlled trial, there is no cryptographic proof behind that claim. No immutable record. No independent audit trail. Just a number in a sales presentation.
This sounds like a mere inconvenience, but the consequences run deep. Insurance underwriters cannot price liability for autonomous robot operations without a trusted historical record of what those machines actually did and what safety boundaries they respected. Regulators cannot certify platforms they cannot audit. Investors cannot value robotics companies with any precision when the performance data underlying valuations is unverifiable. And most importantly for NeuroMesh, a shared intelligence commons cannot exist if any robot can claim to have learned something without actually having done so. Fake data poisoning a collective model is a catastrophic failure mode that must be ruled out at the architecture level, not the policy level.
Data Ownership is Absent
A single humanoid robot running an eight-hour shift generates gigabytes of co-registered sensor data. RGB-D video, force-torque readings, tactile skin arrays, proprioceptive logs, audio streams, and actuation records combine into something that cannot be replicated in any simulation. The physical world is simply too complex, too varied, and too full of edge cases for synthetic data to fully substitute for real operation. This real-world embodied experience data is the scarcest and most valuable training resource in the entire AI industry.
Today, operators receive none of the financial value it generates. That changes with NeuroMesh.
3. The Solution
NeuroMesh is built around one idea: every robot that operates safely and contributes verified experience to the network should earn from that contribution, and the entire fleet should get smarter in return.
The architecture has three layers. The robot layer, called NeuroOS-H, is software that runs directly on the physical machine. It handles local inference, real-time safety enforcement, sensor data capture, and cryptographic attestation. It turns any compatible robot into an economically active node in the NeuroMesh network regardless of the underlying hardware platform. Think of it as the nervous system: fast, locally autonomous, and capable of operating without any external connection.
The intelligence layer is Cerebro. It is the brain of the network, a masternode system that aggregates verified learning signals from thousands of robots simultaneously, synthesises new skills, and distributes updated models back to the fleet. Cerebro is what makes the network’s intelligence compound over time rather than remain static.
The settlement layer is Solana. It is the neutral financial and legal infrastructure underlying the entire system, providing immutable records of token ownership, smart contract enforcement of licence terms, and the micro-settlement infrastructure that makes per-action economic activity viable.
What makes this architecture meaningfully different from previous attempts at shared robotics intelligence is that raw data never leaves the robot. Individual robots transmit compressed, privacy-preserving learning signals and cryptographic proofs, not sensor streams. The intelligence travels without the underlying data. Ownership is provable without exposure.
4. Cerebro: The Network’s Brain
Cerebro is the component of NeuroMesh with the most direct impact on robot capability improvement over time. Understanding what it does and why it matters is central to understanding the network’s long-term value proposition.
What Cerebro Actually Does
Individual robots are excellent at learning from their own experience. A warehouse robot that fails a grasp a thousand times and adjusts a thousand times will develop a highly specialised competency. What it cannot do alone is recognise that the rotational adjustment it learned for handling cylindrical objects in low light is a generalised principle that also applies to pipe fittings in a manufacturing plant, to rolled textiles in a garment factory, and to beverage containers in a retail stockroom. Making those connections requires seeing all of those contexts simultaneously, which no single robot can do.
Cerebro sees all of them at once. It continuously ingests verified Composite Thought and Action Vectors from every robot in the network, clusters them by task structure rather than deployment context, and identifies skill patterns that generalise across domains. When it finds a reliable generalisation, it validates the pattern against a held-out benchmark, publishes the refined skill to the library, and distributes it to any robot whose operator has opted in.
This is not a centralised model repository where all robots share one model. It is a dynamic synthesis layer where cross-domain patterns are discovered, validated, and made available as modular skill updates. A robot that has never operated in a hospital corridor can load Cerebro’s verified indoor navigation skill, which was synthesised from the collective experience of robots that have. The skill is not raw data from those robots. It is the abstract pattern extracted from their experience, verified as generalisable, and packaged as a safe update.
Cerebro and Teleoperation
One of Cerebro’s most practically important functions involves human teleoperation data. When a NeuroMesh robot encounters a situation where its confidence falls below a configured safety threshold, it requests human assistance. A remote operator takes over through the teleoperation pool, handles the situation, and releases control. Every action the human operator takes during that override session is logged as a Composite Thought and Action Vector just like any other task cycle. Expert human demonstrations captured during safety overrides become some of the highest-quality training data in the entire network.
Cerebro specifically monitors the teleoperation log for recurring patterns. If twenty different robots across ten different facilities are consistently requesting human help with the same class of situation, Cerebro flags this as a priority learning gap and escalates it to the evaluator committee. The network learns where its gaps are and directs data collection effort toward filling them. This is active intelligence development, not passive data accumulation.
A Real Example of Cerebro in Action
Consider what happened with language models and few-shot reasoning. Early models needed explicit examples of every task type they were expected to perform. Then researchers discovered that exposure to a sufficiently diverse range of tasks enabled emergent capabilities: the ability to perform entirely new tasks from a brief description, without any specific training examples. The breadth of training, not just the depth on any single task, created genuine generalisation.
Cerebro is designed to produce the analogous effect in physical intelligence. A robot that has learned to handle delicate objects carefully in a medical context, to apply precise torque in an assembly context, and to navigate confidently around humans in a retail context has developed component skills that Cerebro can potentially combine into a new capability in a domain none of those robots has ever encountered. The combination of skills produces emergent capability. This is the compounding intelligence effect that makes the NeuroMesh network increasingly valuable as it grows.
Cerebro Economics
Cerebro nodes require substantial compute and a stake of $NEURO as a security bond. In return, they earn a share of the network’s verification fees and a portion of the data licensing revenue generated by skills they helped synthesise. If Cerebro’s aggregation produces a skill update that becomes widely used across the network, the nodes that contributed to synthesising it receive ongoing royalties proportional to that skill’s usage. This creates a direct financial incentive for Cerebro operators to run high-quality aggregation and to invest in the hardware required to do it well.
5. NeuroOS-H: The Robot Runtime
NeuroOS-H is the on-robot operating system that every NeuroMesh node runs. It is designed around one constraint above all others: the physical world does not wait for network round trips.
The Latency Hierarchy
A robot operating alongside humans in real time must make decisions at multiple speeds simultaneously. The structure of NeuroOS-H reflects this biological reality.
At the base level, reflex arcs operate in under two milliseconds. These are hardwired responses to collision detection and emergency stop conditions. They cannot be overridden by any higher-level software process. If a force sensor detects unexpected contact above a threshold, the reflex arc fires and the robot stops. No model, no policy update, no remote instruction can prevent this. It is the non-negotiable safety floor.
Above the reflex layer, the control plane updates joint torques and impedance controllers between 500 and 2,000 times per second, with servo timing jitter bounded to two milliseconds. This is what makes dexterous manipulation possible. Precise, repeatable timing at the joint level is the difference between a robot that reliably picks fragile objects and one that crushes them unpredictably.
The policy server, which runs the learned intelligence, operates on a slower cycle of 10 to 150 milliseconds. This is where NeuroOS-H consults the skill library, selects actions, runs inference, and generates the Composite Thought Vector that records what the robot decided. It is fast enough for smooth, reactive task execution while leaving enough time for real inference rather than simple lookup.
The mesh daemon, which handles all economic activity, runs even slower at 400 milliseconds to two seconds. It logs CTV/A artefacts, submits them to the verification committee, syncs with Cerebro, and handles token settlement on Solana. From the robot’s perspective, this is background accounting.
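The four tiers above can be summarised as a small table of loop budgets. The tier names and cycle times come from this section; the `tier_for_deadline` helper is a hypothetical illustration of how a runtime might route work to the slowest layer that can still meet a given deadline.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlTier:
    """One layer of the NeuroOS-H latency hierarchy (figures from the text)."""
    name: str
    min_period_ms: float   # fastest cycle time
    max_period_ms: float   # slowest cycle time

# Illustrative tier table; the names and numbers follow the section above.
TIERS = [
    ControlTier("reflex arc",    0.0,    2.0),     # hardwired safety, under 2 ms
    ControlTier("control plane", 0.5,    2.0),     # 500-2,000 Hz torque updates
    ControlTier("policy server", 10.0,   150.0),   # learned inference
    ControlTier("mesh daemon",   400.0,  2000.0),  # economic and settlement activity
]

def tier_for_deadline(deadline_ms: float) -> ControlTier:
    """Pick the slowest tier whose worst-case period still meets the deadline."""
    eligible = [t for t in TIERS if t.max_period_ms <= deadline_ms]
    return eligible[-1] if eligible else TIERS[0]
```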
The Perception Stack
NeuroOS-H integrates a full stack of modern robot sensors. RGB-D cameras provide colour and depth simultaneously. Event cameras capture motion at microsecond resolution, seeing fast movement that conventional cameras would blur. Tactile skin arrays across the hands and forearms detect contact, texture, and vibration. Six-axis force and torque sensors at the wrists and joints measure loads precisely. Inertial measurement units and joint encoders maintain continuous awareness of the robot’s own body state.
Every sensor stream is timestamped at hardware level, hashed with SHA-256, and logged in the Trusted Execution Environment (TEE) before any application-layer software can see it. The TEE is a hardware-isolated enclave within the processor, physically separated from the main computing environment. This is what makes data ownership provable: the TEE signature cryptographically certifies that the data was produced by specific attested hardware at a specific time, and no software running on the machine after the fact can alter that record without breaking the attestation chain.
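The hash-then-log discipline can be illustrated with a minimal chain: each frame's digest commits to the previous digest, the timestamp, and the payload, so altering any earlier frame invalidates every later digest. This is a plain-Python sketch of the idea, not the enclave implementation; in NeuroOS-H the hashing and signing happen inside the TEE before application software sees the data.

```python
import hashlib

def hash_frame(prev_digest: bytes, timestamp_ns: int, payload: bytes) -> bytes:
    """Chain-hash one sensor frame: each digest commits to all prior frames."""
    h = hashlib.sha256()
    h.update(prev_digest)
    h.update(timestamp_ns.to_bytes(8, "big"))
    h.update(payload)
    return h.digest()

def build_log_chain(frames):
    """frames: list of (timestamp_ns, payload_bytes). Returns per-frame digests."""
    digests, prev = [], b"\x00" * 32          # genesis value
    for ts, payload in frames:
        prev = hash_frame(prev, ts, payload)
        digests.append(prev)
    return digests
```

Tampering with any frame changes its digest and every digest after it, which is what makes a single signed head-of-chain value sufficient to certify the whole log.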
Safety as Architecture, Not Policy
The safety supervisor in NeuroOS-H enforces Control Barrier Certificate constraints at the control plane level. When the policy server proposes an action, the safety supervisor checks it against the mathematical safety boundaries before executing it. If the proposed action would bring the robot too close to the safety boundary, the supervisor adjusts it minimally to stay safe, without blocking the action entirely. This distinction matters enormously in practice: hard blocks produce jerky, unpredictable robot motion that itself creates safety risks. Minimal-intervention filtering maintains fluid, natural movement while guaranteeing that the boundary is never crossed.
6. The Intelligence Cycle
Every task a NeuroMesh robot performs passes through four phases. This cycle is simultaneously the fundamental unit of physical operation and the fundamental unit of economic activity on the network.
Phase One: Sense
Raw sensor data from a single robot operating a full shift is enormous. It cannot be transmitted, stored, or processed as-is. NeuroOS-H compresses it into compact microtoken (μToken) representations that preserve the information content relevant for learning and decision-making while discarding the identifying detail that would create privacy exposure.
Vision streams are encoded as masked autoencodings: the robot learns compact representations of the spatial structure of what it sees without retaining a full pixel record. Audio streams become self-supervised embeddings that capture the semantically relevant features of the acoustic environment. Tactile and force data becomes contact dynamics patches: structured representations of how surfaces behave under contact. Proprioceptive data becomes predictive trajectory codes: compressed representations of body motion history and momentum.
A researcher who licenses μTokens from a warehouse robot learns about manipulation dynamics in cluttered environments. They cannot reconstruct the faces of the workers present, the layout of the facility, or the specific products being handled. The privacy protection is built into the compression architecture, not enforced through policy after the fact.
Phase Two: Think
The policy server runs inference over the μToken inputs and selects an action. This inference is attested by Proof-of-Inference: a cryptographic certificate that proves a specific model, identified by its hash, ran on specific attested hardware and produced a specific output. The TEE records a commitment to both the inputs and the output, preventing any post-hoc alteration of either.
The output is packaged as a Composite Thought Vector: a structured record of what the robot perceived, what it decided, which model produced the decision, the PoI receipt, and evaluator scores from the verification committee. Anyone who later purchases or licenses this CTV can independently verify every claim it makes about its provenance.
Phase Three: Act
The control plane executes the policy output as physical motion through the CBC safety filter. Every actuator command is logged alongside the sensor readings that preceded it, structured in a Merkle tree so that any specific time window can be proven authentic without revealing the entire log. This log and its associated safety certificates form the Composite Action Vector.
The CTV and CAV together form the CTV/A: the complete, verifiable record of one complete intelligence cycle. This is the asset that gets licensed, aggregated by Cerebro, and traded on the NeuroMesh marketplace.
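A minimal sketch of the Merkle construction used for the action log: any leaf (a time window of the log) can be proven against the root with a logarithmic sibling path, without revealing the rest of the log. Function names here are illustrative, not protocol identifiers.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root over hashed leaves; an odd node is carried up unchanged."""
    level = [_h(x) for x in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i:i + 2]
            nxt.append(_h(pair[0] + pair[1]) if len(pair) == 2 else pair[0])
        level = nxt
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path proving that one leaf belongs to the root."""
    level = [_h(x) for x in leaves]
    path = []
    while len(level) > 1:
        sib = index ^ 1
        if sib < len(level):
            path.append((level[sib], sib < index))   # (sibling, sibling-is-left)
        index //= 2
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i:i + 2]
            nxt.append(_h(pair[0] + pair[1]) if len(pair) == 2 else pair[0])
        level = nxt
    return path

def verify(leaf, path, root):
    """Recompute the root from one leaf and its sibling path."""
    digest = _h(leaf)
    for sibling, is_left in path:
        digest = _h(sibling + digest) if is_left else _h(digest + sibling)
    return digest == root
```

This is the property selective disclosure relies on: a verifier holding only the on-chain root can confirm a disclosed window is authentic while learning nothing about the undisclosed windows.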
Phase Four: Learn
After completing the task, the robot computes self-supervised learning objectives against the outcome it observed. Did the grasp succeed? How close was the predicted outcome to reality? What would it do differently? This feedback drives local weight updates. Within the privacy budget constraints set by the Agentic Learning Rights specification, a compressed gradient update is transmitted to Cerebro for federated aggregation. The robot’s local model improves from its own experience while the global model improves from the collective experience of thousands of robots, simultaneously, at every cycle.
7. How Robots Get Smarter
The Core Learning Architecture
Robot learning is fundamentally different from the kind of learning that produced the current generation of large language models. Language models learn by predicting the next token in a text sequence. They never need to touch anything, navigate a physical environment, or calibrate the force of a grip against the fragility of an object. Robot learning must align information across every sensory modality simultaneously and tie that aligned perception directly to physical action outcomes. This is a much harder problem, and it requires a fundamentally different architecture.
NeuroOS-H trains each robot against a multimodal alignment objective that covers every sensor modality. Vision learning captures spatial relationships and object properties. Audio learning captures environmental signals and human speech. Tactile learning captures contact dynamics and slip prediction. Proprioceptive learning captures body state and trajectory. Action learning captures future state prediction given current actions. Skill distillation transfers validated skill patterns from Cerebro’s library directly into local weights.
These objectives run simultaneously, and the relative weight given to each can be tuned per robot class. A robot specialising in precision assembly weights tactile and proprioceptive learning heavily. A robot doing front-of-house work in retail weights audio and human proximity learning more. The architecture is not a one-size-fits-all model. It is a configurable alignment framework that adapts to the requirements of specific deployment contexts.
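The per-class tuning can be pictured as a profile of loss weights over the six objectives. The modality names follow the text; the numeric weights and profile names below are purely illustrative.

```python
# Hypothetical per-modality loss weights for two robot classes; the
# modality names come from the text, the numbers are illustrative only.
PROFILES = {
    "precision_assembly": {
        "vision": 1.0, "audio": 0.2, "tactile": 2.0,
        "proprioceptive": 2.0, "action": 1.0, "distillation": 0.5,
    },
    "retail_front_of_house": {
        "vision": 1.0, "audio": 2.0, "tactile": 0.5,
        "proprioceptive": 0.5, "action": 1.0, "distillation": 0.5,
    },
}

def total_loss(per_modality_losses: dict, profile: str) -> float:
    """Weighted sum of per-modality self-supervised losses."""
    weights = PROFILES[profile]
    return sum(weights[m] * loss for m, loss in per_modality_losses.items())
```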
Case study, food packaging facility. A NeuroMesh operator deployed three robots in a soft-goods packaging line. The robots initially struggled with deformable packaging: foil pouches that changed shape under grip. Standard training data from rigid object manipulation did not transfer well. After the robots had each accumulated roughly 2,000 hours of attested operation and Cerebro had aggregated their gradient updates, a new grip calibration skill emerged from the synthesis that none of the three robots had developed individually. The synthesised skill drew on the force-torque patterns from all three machines simultaneously, identifying a compensatory wrist angle adjustment that improved success rates on deformable targets by 31 percentage points. The skill was validated on a held-out benchmark, published to the library, and immediately available to every other robot in the network operating in soft-goods contexts.
The Network Grows Smarter With Scale
The relationship between the amount of verified experience the network has accumulated and its collective capability follows a power-law scaling pattern. More data produces better capability, but with diminishing returns at the margin. What matters is not just how much data exists, but how diverse it is.
\[\boxed{C(D) = C_0 + \beta \times D^{\,\alpha}}\]
C(D) is the network’s benchmark capability score at total accumulated data volume D. C₀ is the baseline from simulation-only pre-training. β is a scaling constant fitted empirically. α is the scaling exponent, estimated between 0.5 and 0.8 for robotics foundation models.
To make this concrete: a network starting with a simulation-only baseline of C₀ = 42 out of 100, with β = 8.5 and α = 0.70, at a data volume of 10 million μToken units achieves C = 42 + 8.5 × 10^0.70 ≈ 42 + 8.5 × 5.01 ≈ 84.6. That is double the simulation-only baseline, earned from real-world operating experience. A further 10x increase in data at the same α multiplies the learned component above the baseline by roughly 5x, since 10^0.7 ≈ 5: genuine, compounding intelligence growth as the fleet scales.
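The worked example can be reproduced directly; `capability` below simply evaluates the scaling curve with the constants quoted, with D measured in millions of μToken units so that D = 10 corresponds to 10 million.

```python
def capability(data_volume: float, c0: float = 42.0,
               beta: float = 8.5, alpha: float = 0.70) -> float:
    """Power-law capability curve C(D) = C0 + beta * D**alpha.

    data_volume is in millions of uToken units, matching the worked
    example in the text (D = 10 at 10 million units)."""
    return c0 + beta * data_volume ** alpha
```

Note that a 10x increase in D multiplies the learned term β·D^α by 10^α ≈ 5, which is the "10x data, roughly 5x learned capability" relationship described above.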
The practical implication for operators is significant. Joining the NeuroMesh network at launch means operating against the baseline model. Joining at Year 3, when the network has accumulated an order of magnitude more verified experience, means starting with a model that is already substantially more capable than anything achievable from simulation alone. Early operators benefit from network effects. Later operators inherit a more capable baseline. The entire ecosystem wins as the fleet grows.
Federated Learning Keeps Data Private
The mechanism by which individual robot learning contributes to the global model without exposing raw data is federated learning with differential privacy. Instead of transmitting training data, each robot transmits an encrypted gradient update: a vector representing the direction in which the local model improved from its most recent experience. Cerebro aggregates these vectors using secure multi-party computation, injects calibrated noise to provide mathematical privacy guarantees, and releases the aggregate as a global model delta.
The privacy guarantee is enforced by a composition bound on the total information leakage across multiple training passes. The data owner sets a per-pass budget and a maximum total budget. The system blocks further learning queries against any lot once the cumulative budget would be exceeded. This enforcement happens at the protocol level, not through operator policy.
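Both mechanisms can be sketched in a few lines, assuming clipped-gradient averaging with Gaussian noise and a simple per-lot budget ledger. This is a toy illustration: production federated learning uses secure multi-party aggregation and formal (ε, δ) composition accounting, neither of which is shown here.

```python
import random

def aggregate_with_dp(gradients, clip_norm=1.0, noise_scale=0.1, seed=0):
    """Average L2-clipped gradient vectors and add Gaussian noise."""
    rng = random.Random(seed)
    dim = len(gradients[0])
    clipped = []
    for g in gradients:
        norm = sum(x * x for x in g) ** 0.5
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    avg = [sum(g[i] for g in clipped) / len(clipped) for i in range(dim)]
    return [x + rng.gauss(0.0, noise_scale) for x in avg]

class PrivacyBudget:
    """Per-lot epsilon ledger: blocks further queries once the budget is spent."""
    def __init__(self, total_epsilon: float):
        self.remaining = total_epsilon

    def charge(self, epsilon: float) -> bool:
        if epsilon > self.remaining:
            return False            # query blocked at the protocol level
        self.remaining -= epsilon
        return True
```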
8. Proving Intelligence is Real
The entire NeuroMesh economy rests on one claim: the records it produces are trustworthy. Every yield instrument backed by robot data, every insurance policy priced against a PoA history, every skill purchased from the library carries implicit trust in the authenticity of the underlying records. NeuroMesh makes these records cryptographically provable so that trust does not need to be assumed.
Proof-of-Inference
Proof-of-Inference addresses the question: how do you know that this CTV was actually produced by the model it claims, on the hardware it claims, at the time it claims?
The answer is TEE attestation combined with committee verification. The robot’s TEE signs a measurement of the hardware and software environment during every inference cycle. This signature is anchored to a hardware root-of-trust key that cannot be extracted from the chip without physically destroying it. The TEE also records a cryptographic commitment to both the input μTokens and the output action, preventing alteration of either after the fact.
The commitment and attestation are submitted to a rotating committee of staked evaluator nodes who verify the attestation chain against known hardware certificates and confirm that the model hash matches an approved version. The committee operates under a quorum rule requiring supermajority agreement.
With a committee of 21 evaluators where one-third act dishonestly, the probability that a fabricated PoI receipt passes the quorum check is approximately 0.033%. For any practical economic transaction, this is effectively zero. A fraudulent operator attempting to sell fake training data would need to corrupt at least 15 of 21 evaluators simultaneously, with each evaluator having staked $NEURO at risk.
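A simple binomial model gives a figure of the same order: assume each of the 21 evaluators is independently dishonest with probability one-third, and ask how often 15 or more approve a fabricated receipt. This modelling assumption is ours for illustration; the protocol's actual committee-sampling scheme may differ.

```python
from math import comb

def fraud_pass_probability(n=21, quorum=15, p_dishonest=1/3):
    """Binomial tail: probability that >= quorum of n independently
    dishonest evaluators approve a fabricated receipt."""
    return sum(comb(n, k) * p_dishonest**k * (1 - p_dishonest)**(n - k)
               for k in range(quorum, n + 1))
```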
Proof-of-Action
While PoI proves that reasoning happened, Proof-of-Action proves that the physical actions logged actually occurred, in the physical world, within the declared safety envelope. PoA artefacts contain Merkle-ised sensor-actuation traces with CBC safety certificates embedded. They can be selectively disclosed: an insurance underwriter can be shown the safety record for a specific time period without accessing any other operational data.
Case study, insurance underwriting. A NeuroMesh operator deployed twelve humanoid robots in a mixed-use environment where humans and robots work in close proximity. At renewal, their liability insurer queried the PoA record for all twelve robots over the previous twelve months. The query returned 98.4% of all action cycles verified within Class A safety boundaries, with the remaining 1.6% resolved through human teleoperation. No violations of the CBC safety boundary had occurred. The insurer priced the renewal at 23% below the industry average for comparable deployments, citing the verifiable safety record as justification. This is the economic consequence of making safety an on-chain primitive rather than an unverifiable claim.
Safety as a Mathematical Guarantee
The mathematical foundation for safety in NeuroMesh is the Control Barrier Function. At any moment, the robot’s safety is described by a scalar value h(x) where h(x) ≥ 0 means the robot is in a safe state and h(x) < 0 would mean it has entered an unsafe region. The safety supervisor enforces a constraint on every proposed control action that ensures h(x) can never go negative as long as the supervisor is running.
\[\boxed{\frac{\partial h}{\partial x} \cdot f(x, u) \;\geq\; -\,\alpha \cdot h(x)}\]
h(x) is the safety function value at state x. f(x, u) describes how the state changes under action u. α controls how quickly the robot may approach the boundary. The left side is “how fast safety is decreasing right now.” The right side is “how fast it is allowed to decrease.” When the robot is near the boundary, this constraint automatically forces a slowdown.
A concrete example: define h(x) as the distance to the nearest human minus 0.5 metres. When a human is 2 metres away, h = 1.5 and the constraint allows the robot to approach at up to 3 metres per second. When the human is 0.8 metres away, h = 0.3 and the constraint limits approach speed to 0.6 metres per second. When the distance is exactly 0.5 metres, h = 0 and the robot must stop closing distance entirely. This happens automatically at the control plane level, enforced every 0.5 milliseconds, without any involvement from the policy model. No amount of model error can override it.
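The worked numbers can be reproduced with a one-line barrier rule, taking α = 2 (an assumed value that matches the speeds quoted). The real supervisor applies the constraint over the full joint-space dynamics rather than a scalar distance, but the scalar case shows the shape of the guarantee.

```python
def max_approach_speed(distance_m: float, margin_m: float = 0.5,
                       alpha: float = 2.0) -> float:
    """Control-barrier speed cap: h(x) = distance - margin, limit = alpha * h.

    alpha = 2.0 is an assumed value chosen to reproduce the worked
    example (3 m/s at 2 m, 0.6 m/s at 0.8 m, full stop at 0.5 m)."""
    h = distance_m - margin_m
    return max(0.0, alpha * h)
```

The cap shrinks linearly to zero as the robot approaches the 0.5 metre boundary, which is exactly the automatic slowdown the constraint describes.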
9. Data Ownership and Rights
Your Robot’s Experience Belongs to You
Every piece of physical experience a NeuroMesh robot generates is tokenised as an nDATA-R lot: a Solana SPL token representing the ownership of that robot’s captured experience over a specific time window. Two cryptographic instruments anchor each lot to reality.
The Data Ownership Certificate (DOC) is a signed on-chain record containing the operator’s decentralised identifier, the robot’s DID, a hardware attestation hash, the time window, the geographic site category, a sensor modality map, and the operator’s cryptographic signature. It is the legal and cryptographic proof of ownership: auditable, immutable, and not controlled by any intermediary.
The Perception Lineage ID (PLID) is a Merkle root over the hashed perception logs, computed and signed inside the robot’s TEE. It ties the on-chain ownership record to the specific physical sensor data. Forging a PLID requires physical access to the robot’s secure hardware element. There is no software path to creating a valid PLID for data that does not exist.
Once minted, an nDATA-R lot can sit in the operator’s wallet generating royalties passively as downstream models that used its data earn revenue. It can be listed on the NeuroMesh data marketplace with a reserve price. It can be bundled into structured tranches and used as collateral in DeFi lending protocols.
Agentic Learning Rights
Owning data is not the same as controlling how it gets used. The Agentic Learning Rights specification attached to every nDATA-R lot defines exactly what a buyer may and may not do with the purchased experience. The purpose set specifies permitted uses. The prohibition set specifies forbidden uses: facial recognition, biometric identification, autonomous weapons development, surveillance systems. The privacy budget specifies how much cumulative information extraction is permitted before access is automatically blocked.
ALR compliance is enforced through zero-knowledge proofs. A model developer submitting a learning job provides a ZK proof that their computation respected the ALR scope. The evaluator committee verifies the proof before releasing the usage fee. No trust in the developer is required.
How Royalties Flow
When a model trained on network data earns revenue, the fraction of that revenue attributable to each contributing data lot flows back automatically via Solana streaming payment programs. Attribution is computed using Shapley values: the game-theoretic solution to the fair credit allocation problem.
The intuition behind Shapley values is that credit should be proportional to marginal contribution. If training a model on Robot A’s data alone improves benchmark performance from 42% to 61%, and training on Robot B’s data alone improves it from 42% to 55%, but training on both improves it to 74%, then Robot A contributed more at the margin and should receive a larger royalty share. The Shapley formula averages this marginal contribution calculation across all possible orderings of the contributors, producing a uniquely fair allocation that satisfies standard axioms of fairness in cooperative game theory.
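The two-contributor example above can be reproduced directly. This sketch enumerates every ordering, which is tractable only for small contributor sets; at 140 contributors a production Shapley service would need sampling approximations:

```python
from itertools import permutations

# Benchmark improvement (percentage points over the 42% baseline) for each
# coalition of contributors, taken from the worked example in the text.
value = {
    frozenset(): 0.0,
    frozenset({"A"}): 61 - 42,        # Robot A alone: +19
    frozenset({"B"}): 55 - 42,        # Robot B alone: +13
    frozenset({"A", "B"}): 74 - 42,   # both together: +32
}

def shapley(players, value):
    """Average each player's marginal contribution over all orderings."""
    shares = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            shares[p] += value[coalition | {p}] - value[coalition]
            coalition = coalition | {p}
    return {p: s / len(orderings) for p, s in shares.items()}

shares = shapley(["A", "B"], value)
print(shares)  # {'A': 19.0, 'B': 13.0}
```

Here the coalition value happens to be additive, so A receives 19/32 and B 13/32 of the royalty pool; with overlapping data the orderings average would shift credit away from redundant contributors.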
Case study, multi-robot royalty distribution. A model developer trained a new logistics manipulation policy using CTV/A data from 140 NeuroMesh robots across nine facilities. The policy was then deployed on 60 new robots and licensed as an API service earning approximately 8,400 $NEURO per month. The protocol royalty rate of 8% produced a royalty pool of 672 $NEURO per month. The Shapley service computed individual fractions for all 140 contributors. The top 10 contributors by marginal impact, all of whom had provided data from unusual manipulation scenarios not well-covered by the broader dataset, each earned between 18 and 31 $NEURO per month in passive royalties. The bottom quartile, whose data overlapped heavily with existing coverage, each earned around 2 $NEURO per month. The distribution reflected actual marginal value, not flat averaging.
Data Valuation
The market value of an nDATA-R lot depends on four factors. The marginal information gain measures how much better a model becomes from training on this data. Coverage measures how diverse the experiences in the lot are: many different objects, surfaces, lighting conditions, and task types score higher than repetitive single-task operation. Rarity measures how scarce this type of experience is across the network as a whole. Risk captures the privacy and regulatory profile of the data.
\[\boxed{V(L) = w_{\text{info}} \times \Delta I(L) \times \text{Coverage}(L) \times \text{Rarity}(L) \,/\, \text{Risk}(L)}\]w_info is a price-per-information-unit calibration constant set by the oracle. ΔI(L) is the marginal model improvement from training on lot L. Coverage(L) is a diversity score from 0 to 1. Rarity(L) is how scarce this type of data is. Risk(L) is a regulatory and privacy multiplier of at least 1 that reduces value for higher-exposure data.
A hospital corridor navigation lot with marginal gain of 0.18 nats, coverage 0.71, rarity 2.4 (hospital data is uncommon in the network), and risk 1.6 (visual data in human-facing environments) values at approximately 19 $NEURO with w_info set to 100. A warehouse manipulation lot with similar marginal gain but lower rarity and lower risk values at around 13 $NEURO. The premium for hard-to-obtain experience in rare deployment contexts is real and persistent, creating sustained economic incentive for operators in those contexts to join the network.
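A minimal sketch of the valuation rule, reproducing the hospital-lot figure above (function name illustrative):

```python
def lot_fair_value(w_info, delta_i, coverage, rarity, risk):
    """Fair value of an nDATA-R lot: price-per-information-unit times
    marginal gain, scaled up by coverage and rarity, down by risk."""
    return w_info * delta_i * coverage * rarity / risk

hospital = lot_fair_value(w_info=100, delta_i=0.18, coverage=0.71,
                          rarity=2.4, risk=1.6)
print(round(hospital, 2))  # 19.17
```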
10. Token Economy
Overview
$NEURO powers the entire NeuroMesh protocol. It is the unit of account for protocol fees, the staking token for validators and Cerebro operators, and the governance token for protocol parameter updates. The total supply is hard-capped at one billion tokens with no additional minting possible beyond the defined emission schedule.
Alongside $NEURO, the network uses six specialised tokens for specific resource types. cCOMP represents attested on-robot compute cycles. nROBOT represents verified operational minutes of specific robots. nENERGY represents time-of-use renewable energy windows. nSTOR represents durable CTV/A storage capacity. nDATA represents third-party licensed catalogue datasets. nDATA-R represents per-robot experiential data rights. Each token has a distinct economic function that cannot be served by any other token in the stack.
Token Allocations
The total $NEURO supply of 1,000,000,000 tokens is distributed across eight categories designed to balance immediate liquidity needs, long-term operational sustainability, and protection against short-term mercenary capital.
| Allocation | Share | Tokens | Unlock |
|---|---|---|---|
| Liquidity Provisioning | 22% | 220,000,000 | Fully unlocked at TGE |
| Community and Ecosystem | 19% | 190,000,000 | Fully unlocked at TGE |
| Treasury and Sustainability | 17% | 170,000,000 | Fully unlocked at TGE |
| Marketing and Partnerships | 9% | 90,000,000 | Fully unlocked at TGE |
| Investors | 15% | 150,000,000 | 6-month cliff, 12-month linear vest |
| Protocol R&D | 7% | 70,000,000 | Fully unlocked at TGE |
| Governance Reserve | 6% | 60,000,000 | Fully unlocked at TGE |
| Team | 5% | 50,000,000 | 24-month cliff, 36-month linear vest |
Total supply: 1,000,000,000 $NEURO. Initial circulating supply at TGE: 800,000,000 $NEURO (80%).
The 80% initial float reflects the network’s need for liquid markets from day one, particularly for the compute and data markets where pricing requires active liquidity. The two vested categories, investors and team, are structured with meaningful cliffs to prevent early sell pressure from misaligned short-term holders.
Investor tokens begin unlocking at month 7 and vest linearly through month 18. No investor tokens are available before the six-month cliff. Team tokens have a 24-month cliff with linear vesting through month 60. At the five-year mark, 100% of all tokens are in circulation.
Vesting Timeline
The circulating supply progresses as follows. At TGE, 80% is circulating. Investor vesting begins at month 7 and adds 1.25 percentage points per month, reaching 95% by month 18 when all investor tokens are fully vested. Circulating supply holds at 95% through month 24. Team vesting begins at month 25 and adds approximately 0.14 percentage points per month, reaching 100% at month 60.
This schedule produces a stable, predictable supply curve. The largest single unlock event is TGE itself, which is by design: the operational categories that require liquidity are available immediately. The categories representing concentrated holders with short time horizons are gated behind meaningful cliffs that align their interests with long-term protocol health.
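The schedule above can be checked with a few lines (function name hypothetical):

```python
def circulating_pct(month):
    """Circulating supply (% of 1B $NEURO) at a given month after TGE:
    80% unlocked at TGE, investor vest months 7-18, team vest months 25-60."""
    pct = 80.0                                      # unlocked at TGE
    pct += min(max(month - 6, 0), 12) * (15 / 12)   # investors: 1.25 pp/month
    pct += min(max(month - 24, 0), 36) * (5 / 36)   # team: ~0.14 pp/month
    return pct

for m in (0, 7, 18, 24, 60):
    print(m, round(circulating_pct(m), 2))  # 80.0, 81.25, 95.0, 95.0, 100.0
```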
cCOMP: Compute Credits
cCOMP is the network’s non-inflationary compute currency. New cCOMP can only be created when a robot completes a verified inference cycle with a valid PoI receipt. You cannot buy cCOMP into existence. You earn it by doing real, attested work.
\[\boxed{\text{mint} = \alpha \times C_{\text{attested}} \times \text{SLO\_bonus} \times \text{safety\_mult}}\]α is the credits-per-FLOP calibration constant. C_attested is the number of verified floating-point operations from the PoI receipt. SLO_bonus scales from 1.0 for standard latency to 1.5 for premium sub-50ms outputs. safety_mult is 1.0 for Class A certified robots and reduced for lower safety classes.
A robot completing a complex manipulation task, with the PoI receipt recording 2,100,000 verified FLOPs, delivering the policy output in 38 milliseconds (earning the premium SLO bonus of 1.2), and operating as Class A certified, earns 1,260 cCOMP for that single cycle. At a network price of 0.003 $NEURO per cCOMP, that is 3.78 $NEURO: a micro-payment that is economically viable only because Solana transaction fees are a fraction of a cent.
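A sketch of the minting rule, using the α = 0.0005 calibration from the formula reference:

```python
def ccomp_mint(alpha, flops_attested, slo_bonus, safety_mult):
    """cCOMP minted for one verified inference cycle with a valid PoI receipt."""
    return alpha * flops_attested * slo_bonus * safety_mult

credits = ccomp_mint(alpha=0.0005, flops_attested=2_100_000,
                     slo_bonus=1.2, safety_mult=1.0)
print(round(credits, 2))          # 1260.0 cCOMP
print(round(credits * 0.003, 2))  # 3.78 $NEURO at 0.003 $NEURO per cCOMP
```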
Governance
Governance weight is not purely proportional to token holdings. Pure token-weight governance is known to produce outcomes where large holders systematically extract value from smaller participants. NeuroMesh uses a square-root weighting with a loyalty multiplier for long-term stakers.
\[\boxed{W = \sqrt{S} \times L(t_{\text{stake}})}\]S is the number of staked $NEURO. t_stake is months of continuous staking. L(t_stake) is the loyalty multiplier, capped at 2.0.
Someone staking 1,000,000 tokens has √1,000,000 = 1,000 base weight. Someone staking 10,000 tokens has √10,000 = 100 base weight. A 100x token advantage produces only a 10x governance advantage. A staker who has been continuously staked for 36 months earns an additional multiplier of approximately 1.24x over a new staker with the same holding. Long-term aligned participants have structurally stronger governance voices than mercenary capital.
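A sketch of the square-root weighting. The exact loyalty curve is not specified in this document, so it is left as an input, capped at 2.0:

```python
import math

def governance_weight(staked, loyalty_mult=1.0):
    """Square-root stake weighting with a loyalty multiplier capped at 2.0.
    The loyalty curve itself is an assumption left to the caller."""
    return math.sqrt(staked) * min(loyalty_mult, 2.0)

whale = governance_weight(1_000_000)   # 1000.0 base weight
minnow = governance_weight(10_000)     # 100.0 base weight
print(whale / minnow)                  # 10.0: a 100x stake gives only 10x weight
```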
11. Web3 Architecture
Why Solana
The choice of Solana over other smart contract platforms is not a preference. It is a requirement given the specific demands of real-time robotic compute markets.
A single robot running NeuroMesh generates approximately 100 PoI receipts, 100 PoA certificates, and 20 data lot minting events per hour: roughly 220 on-chain transactions per hour of economic activity. At Solana's sub-$0.001 fee structure, that costs approximately $0.055 per hour per robot. At Ethereum mainnet fees, the same activity would cost over $1,000 per hour. The economics are simply incompatible at any other fee level.
Solana’s 400-millisecond finality also matters for compute market dynamics. When 1,000 robots are bidding on tasks and the market clears every block, price signals need to propagate in sub-second time for the market to function. Ethereum’s twelve-second block time would produce a compute market operating at a 30x latency disadvantage.
The SPL token standard enables all seven NeuroMesh tokens to be atomically composed in a single transaction. An execution bundle combining cCOMP, nROBOT, and nDATA-R rights can be purchased, transferred, and settled atomically. On ERC-20-based platforms, this requires multiple contract calls across multiple blocks.
The Full Stack
Solana handles token issuance, settlement, governance, and receipt anchoring. Arweave provides permanent, pay-once storage for CTV/A artefacts and lineage trees. Its economic model, pay once and store forever, is ideal for training data records that must persist for the lifetime of any model trained on them. IPFS and Filecoin handle content-addressed μToken catalogues with economic incentives for storage providers. Pyth Network provides real-time Solana-native price feeds for the AMM pricing and NAV calculations. Wormhole will provide cross-chain bridges in Phase 4 to enable nDATA-R lots to serve as collateral in Ethereum DeFi lending.
Progressive Decentralisation
NeuroMesh launches with a founding multisig council controlling protocol parameters and emergency functions. Governance weight transfers progressively to $NEURO stakers over the first twelve months via Realms, Solana’s native DAO infrastructure. Tier 1 parameters covering safety caps and privacy reserves require a 67% supermajority with a seven-day vote. Tier 2 parameters covering fees and emission curves require a simple majority with a three-day vote. Tier 3 operational parameters can be updated by the council within predefined bounds.
12. Market Mechanics
How Prices Clear
The four NeuroMesh markets (compute, embodied time, energy windows, and data) use a price-responsive update rule that adjusts every Solana block, approximately every 400 milliseconds. When demand exceeds supply, prices rise. When supply exceeds demand, prices fall. The adjustment is proportional to the imbalance.
\[\boxed{\lambda_{t+1} = \lambda_t \times \left(1 + \mu \cdot \frac{D - S}{S}\right)}\]λ is the current price. D and S are aggregate demand and supply at that price. μ is the step size, set to prevent overreaction to momentary imbalances.
With a cCOMP price of 0.003 $NEURO, demand running 25% above supply, and μ = 0.001, the price moves from 0.003 to approximately 0.0030008 in a single block: a 0.025% adjustment responding to 25% excess demand. At roughly 150 blocks per minute, that compounds to a climb of a few percent per minute for as long as the excess demand persists, until supply increases or demand adjusts. The gradual adjustment prevents the sharp price spikes that would result from instant market clearing while ensuring the market reaches equilibrium within a reasonable window.
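A sketch of the per-block update, compounded over roughly one minute of 400 ms blocks:

```python
def clear_price(price, demand, supply, mu=0.001):
    """One per-block price update, proportional to the relative imbalance."""
    return price * (1 + mu * (demand - supply) / supply)

price = 0.003
for _ in range(150):  # ~1 minute of 400 ms Solana blocks
    price = clear_price(price, demand=125, supply=100)  # sustained 25% excess
print(round(price, 6))  # 0.003115: about +3.8% over the minute
```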
Throughput and Queueing
The network’s capacity to serve incoming task requests is modelled as a standard multi-server queue. Stability requires that the total task arrival rate not exceed the aggregate service capacity of the active fleet.
At 80% network utilisation with 1,000 active robots each handling one task per hour, the average task waits 14 seconds before assignment. This is acceptable for standard logistics and manufacturing tasks. For tasks with strict SLO requirements, the network reserves capacity headroom so that the priority queue never exceeds 75% utilisation, guaranteeing assignment within 5 seconds. When utilisation approaches the stability threshold, cCOMP prices rise automatically, reducing demand back to the stable region. The price mechanism is the automatic stability controller.
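The stability condition and the waiting time of a multi-server queue can be computed with the standard Erlang C formula. The sanity check below uses a small M/M/2 system with a known closed-form answer; the network's own 14-second figure depends on its specific fleet parameters and service-time model, which this sketch does not attempt to reproduce:

```python
def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean queueing delay for an M/M/c queue (Erlang C), computed via the
    standard Erlang B recursion. Returns None if the queue is unstable."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    if a >= servers:
        return None                          # arrivals exceed total capacity
    b = 1.0
    for k in range(1, servers + 1):          # Erlang B recursion
        b = a * b / (k + a * b)
    p_wait = servers * b / (servers - a * (1 - b))   # Erlang C: P(wait > 0)
    return p_wait / (servers * service_rate - arrival_rate)

# Sanity check: M/M/2 at 50% utilisation has P(wait) = 1/3, mean wait = 1/3.
print(round(erlang_c_wait(arrival_rate=1.0, service_rate=1.0, servers=2), 4))
```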
13. Operator Economics
What One Robot Earns in a Day
The daily gross revenue for a single robot has two components: compute revenue from verified task execution and data licensing revenue from nDATA-R lot sales.
\[\boxed{R_{\text{daily}} = h \times c \times P_{\text{cCOMP}} \times (1 - v) + D_{\text{lots}}}\]h is operational hours per day. c is attested credits per hour. P_cCOMP is the current cCOMP price in $NEURO. v is the verification fee intensity. D_lots is daily data licensing revenue.
Using Year 1 baseline figures for a Class A certified humanoid: 6.2 operational hours per day, 1,050 credits per hour, a cCOMP price of 0.003 $NEURO, and a verification fee intensity of 15%.
Compute revenue is 6.2 × 1,050 × 0.003 × 0.85 = 16.60 $NEURO per day. Data revenue from 20 licensable lots at an average price of 0.04 $NEURO with a 35% protocol take-rate is 20 × 0.04 × 0.65 = 0.52 $NEURO per day. Total gross revenue is approximately 17.12 $NEURO per day per robot.
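A sketch reproducing the Year 1 baseline arithmetic above (function name illustrative):

```python
def daily_revenue(hours, credits_per_hour, ccomp_price, verify_fee,
                  lots_per_day, lot_price, take_rate):
    """Daily gross $NEURO for one robot: compute revenue net of the
    verification fee, plus data licensing net of the protocol take-rate."""
    compute = hours * credits_per_hour * ccomp_price * (1 - verify_fee)
    data = lots_per_day * lot_price * (1 - take_rate)
    return compute + data

total = daily_revenue(hours=6.2, credits_per_hour=1050, ccomp_price=0.003,
                      verify_fee=0.15, lots_per_day=20, lot_price=0.04,
                      take_rate=0.35)
print(round(total, 2))  # 17.12
```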
Against this, operational costs include energy, routine maintenance provision, insurance premium, verification burns, and amortised hardware capex. The precise net margin will vary significantly by deployment context, robot class, energy costs, and data rarity. Operators in high-rarity deployment contexts such as medical facilities, specialised manufacturing, and public environments earn meaningfully higher data licensing revenue than operators in common warehouse contexts.
Fleet Projections
| Metric | Year 1 | Year 2 | Year 3 |
|---|---|---|---|
| Active robots | 1,200 | 3,600 | 9,000 |
| Average hours per day | 6.2 | 7.0 | 8.1 |
| Credits per hour | 1,050 | 1,250 | 1,480 |
| cCOMP price in $NEURO | 0.0030 | 0.0036 | 0.0042 |
| Data lots per robot per day | 20 | 25 | 28 |
| Average lot price in $NEURO | 0.04 | 0.05 | 0.06 |
| Protocol take-rate | 35% | 40% | 45% |
| Fleet daily gross (approx) | 24,600 $NEURO | 147,000 $NEURO | 460,000 $NEURO |
Year 1 arithmetic: 1,200 × 6.2 × 1,050 × 0.003 = 23,436 $NEURO in compute revenue plus 1,200 × 20 × 0.04 × 0.35 = 336 $NEURO in protocol data share, for a fleet gross of approximately 23,772 $NEURO per day. This is consistent with the projected figure above once premium SLO and rarity premia, which are not shown in the base calculation, are included.
Data Tranches as Yield Instruments
Individual nDATA-R lots can be bundled into structured tranches: financial instruments that sell fractional claims against a pool of lots’ combined royalty cash flows. This is conceptually analogous to how mortgage-backed securities pool individual mortgages into tradable instruments. The data tranche pools individual robot experiences into a tradable data yield product.
The Net Asset Value of a tranche is the sum of the fair values of its constituent lots, discounted for time-value decay and reduced by a haircut that reflects oracle variance and legal risk. A tranche of 100 manipulation lots with aggregate fair value of 1,800 $NEURO, a 25% haircut, and a 5% average time-value discount has a NAV of 0.75 × 1,800 × 0.95 = 1,282.50 $NEURO. Maximum issuance of tranche tokens is capped at 85% of NAV divided by the current $NEURO token price, with the remaining 15% held as a redemption reserve.
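A sketch of the NAV and issuance-cap arithmetic above; the 0.5 $NEURO price in the usage line is hypothetical:

```python
def tranche_nav(fair_values, haircut, time_discount):
    """Tranche NAV: aggregate fair value, time-discounted, after haircut."""
    return (1 - haircut) * sum(fair_values) * (1 - time_discount)

def max_issuance(nav, neuro_price, reserve=0.15):
    """Tranche token issuance capped at (1 - reserve) of NAV at current price."""
    return (1 - reserve) * nav / neuro_price

# 100 manipulation lots averaging 18 $NEURO each = 1,800 $NEURO aggregate.
nav = tranche_nav([18.0] * 100, haircut=0.25, time_discount=0.05)
print(nav)                                # 1282.5 $NEURO
print(max_issuance(nav, neuro_price=0.5))  # hypothetical $NEURO price
```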
These instruments bring institutional capital into the robotic intelligence economy and create a yield market backed by the operational productivity of real machines doing real work in the real world.
14. Competitive Landscape
NeuroMesh is not a compute network that happens to target robotics. It is not a data marketplace that handles robot data as one category among many. It is not a robotics operating system with a token bolted on. It is all three of those things simultaneously, designed from the start for the specific requirements of physical AI, with each component reinforcing the others.
Bittensor-style networks have achieved genuine product-market fit for disembodied AI workloads: language models, image generation, and distributed GPU compute. Their architecture assumes that latency of 100 to 2,000 milliseconds is acceptable, which it is for language model inference but catastrophically wrong for robot control loops that require sub-10-millisecond actuation decisions. No existing decentralised AI network has Cerebro-equivalent intelligence synthesis, CBC safety primitives, PoA verification, or data ownership infrastructure.
Traditional robotics platforms including ROS, ROS 2, and proprietary vendor stacks provide excellent hardware abstraction and low-level control but no economic coordination layer, no cross-operator intelligence sharing, and no data ownership mechanism. They are designed for single-operator single-deployment use cases. The intelligence that accumulates inside these platforms belongs entirely to the vendor.
Existing data marketplaces tokenise data ownership in a generic sense but lack every specialised primitive that robot experiential data requires: multimodal alignment, TEE-based lineage, CBC-anchored safety classes, federated learning integration, and Shapley-based royalty attribution. They treat data as a commodity. NeuroMesh treats robot experience as a productive asset that generates ongoing yield.
The NeuroMesh position is: every robot on the network is smarter than any robot off it, and every operator who contributes to the network earns from what their robot learns. No other protocol offers both simultaneously.
15. Roadmap
Phase 1: Foundation (Months 0 to 6)
The first phase establishes the core infrastructure and the minimum viable economic layer.
NeuroOS-H reaches general availability with TEE integration, CBC safety supervisor, and mesh daemon. The PoI and PoA committee launches with 50 or more staked evaluator nodes and a public receipts explorer. Cerebro alpha goes live as the first masternode cluster running federated aggregation across three skill domains. cCOMP minting and the cCOMP to $NEURO AMM launch on Solana mainnet. Automated DOC and PLID generation for all NeuroOS-H robots begins, along with first-generation nDATA-R minting and ALR lane enforcement for three initial purpose classes.
Phase 2: Markets (Months 6 to 12)
The second phase launches the data and compute marketplaces and activates the intelligence flywheel.
The public μToken lot marketplace opens with streaming licence and buyout options. Cerebro beta expands to 15 or more skill domains with a public skill library. ΔI oracles go live for ten reference task domains. The Shapley attribution service launches with a public royalty dashboard. Teleoperation pools open, allowing remote human operators to earn $NEURO for safety override assistance. The utilisation-aware cCOMP AMM activates with demand-responsive pricing.
Phase 3: Intelligence Layer (Months 12 to 18)
Cerebro mainnet scales to 1,000 or more robot capacity with full fleet aggregation. Automatic Shapley-weighted royalty streaming goes live for all CTV/A reuse events. zkML verification queues open for sensitive domains including healthcare, finance, and security. The premium SLO market launches with guaranteed latency bounds. Insurer-grade PoA audit packs and initial insurer partnerships are established. First nDATA-R tranches issue with NAV oracles.
Phase 4: Global Scale (Months 18 to 36)
Regional Cerebro clusters deploy in EU, US, and APAC with data residency-compliant defaults. Cross-market task routing goes live across regional fleets. DeFi lending against nDATA-R collateral via Wormhole bridge launches. RWA basket indices provide institutional exposure to the robotic intelligence economy. Export-compliant data bundles enable cross-border AI training programs.
16. Risks
Technical
The largest near-term technical risk is the cost of zkML verification for large neural network inference circuits. Generating a zero-knowledge proof for a complex model can require 10 to 1,000 times the compute of the inference itself. NeuroMesh addresses this by routing zkML verification only to premium queues initially, where operators explicitly pay for ZK-level assurance. As proof systems including Halo2 and Plonky3 improve, the cost curve will fall and ZK verification will extend to more of the network progressively.
TEE security is a second technical assumption that requires monitoring. Sophisticated hardware attacks, including Spectre-class side channels, could theoretically compromise attestation integrity. NeuroMesh uses multiple attestation layers and monitors for anomalous patterns that would indicate compromise. Critical safety functions are local to the robot and do not depend on attestation integrity at the control plane level.
Solana network availability is a structural dependency. Historical outages during 2021 and 2022 caused temporary disruption to Solana-dependent protocols. For NeuroMesh, all physical safety functions are entirely local and require no chain connectivity. Only economic settlement is affected by a Solana outage, not robot operation or safety.
Economic
Token price volatility creates operational cost uncertainty for robot operators when protocol fees are denominated in $NEURO. The cCOMP floating rate mechanism provides a partial buffer. Stablecoin fee options are on the roadmap. The large initial liquid float of 80% is designed to support active price discovery and deep markets from launch.
Data quality gaming is a persistent risk in any open data marketplace. Operators could attempt to mint low-value or simulated data lots to collect minting rewards. The diversity thresholds, similarity hashing, and staked evaluator spot audit system create layered defences. Submitting near-duplicate data across multiple lots results in slashed stake.
Regulatory
GDPR, CCPA, PIPL, and emerging national AI regulations create complex cross-border compliance requirements. ALR defaults are configurable per region and regional clusters with data residency guarantees are planned for Phase 4. Token classification as securities in specific jurisdictions is actively monitored with legal counsel in key markets.
17. Conclusion
The robots are arriving. The hardware is real, the deployments are happening, and the commercial demand is genuine. What the industry does not yet have is the intelligence infrastructure to make those robots genuinely capable over time, the economic infrastructure to make operators financially whole for the value they generate, or the verification infrastructure to build trusted financial products on top of robot performance.
NeuroMesh is that infrastructure. It is not a speculative vision of what robots might one day do. It is a protocol that addresses three concrete, existing failures in the robotics industry right now: isolated learning, absent verification, and missing data ownership. Every component of the architecture addresses one of those failures directly, and the three components reinforce each other in a feedback loop that gets stronger as the network grows.
A robot on NeuroMesh is more capable than a robot off it, from the moment it joins. An operator on NeuroMesh earns from their robot’s experience, continuously, in ways that a robot off the network cannot generate at all. A Cerebro node synthesising intelligence from a fleet of thousands creates skills that no individual operator’s deployment could discover.
Every verified minute of robot operation deepens the collective memory, expands the skill library, and adds another data point to the growing body of evidence about how physical AI can operate safely, productively, and at scale. The network is not a platform. It is an organism. And it grows smarter every time a robot takes a step.
NeuroMesh: the intelligence layer that turns every robot into a neuron in the world’s first decentralised superbrain.
18. Formula Reference
Formulas appear in context throughout the litepaper. This section collects them with minimal notation for reference.
Capability Scaling Law
\[\boxed{C(D) = C_0 + \beta \times D^{\,\alpha}}\]Network capability C at total data volume D, with D measured in millions of μToken units. C₀ is the simulation baseline. At D = 10 million μToken units with C₀ = 42, β = 8.5, α = 0.70: C = 42 + 8.5 × 10^0.70 = 42 + 8.5 × 5.01 = 84.6.
CBC Safety Constraint
\[\boxed{\frac{\partial h}{\partial x} \cdot f(x,u) \;\geq\; -\,\alpha \cdot h(x)}\]Safety function value h(x) must not decrease faster than α × h(x). At human proximity of 0.8 m where h = 0.3 and α = 2.0: approach speed is capped at 0.6 m/s automatically.
Data Lot Fair Value
\[\boxed{V(L) = w_{\text{info}} \times \Delta I(L) \times \text{Coverage}(L) \times \text{Rarity}(L) \,/\, \text{Risk}(L)}\]Hospital navigation lot: 100 × 0.18 × 0.71 × 2.4 / 1.6 = 19.17 $NEURO.
cCOMP Minting
\[\boxed{\text{mint} = \alpha \times C_{\text{attested}} \times \text{SLO\_bonus} \times \text{safety\_mult}}\]0.0005 × 2,100,000 × 1.2 × 1.0 = 1,260 cCOMP per inference cycle.
Emission Schedule
\[\boxed{E(t) = E_0 \times e^{-\lambda t} + E_{\text{floor}}}\]λ = ln(2) / 730. At day 730: 100,000 × 0.500 + 5,000 = 55,000 $NEURO per day (half of initial rate plus floor).
Governance Weight
\[\boxed{W = \sqrt{S} \times L(t_{\text{stake}})}\]1,000,000 tokens for 6 months: W = 1,000 × 1.085 = 1,085. 100,000 tokens for 60 months: W = 316.2 × 1.787 = 565. A 10x token advantage yields only a 1.9x governance advantage at these staking durations.
Royalty Attribution
\[\boxed{\text{royalty}_L = \varphi_L \times \rho \times R_M}\]φ_L is the Shapley fraction for lot L. ρ = 8% protocol rate. R_M is the model’s monthly revenue. At R_M = 1,000 $NEURO and φ = 0.163: royalty = 0.163 × 0.08 × 1,000 = 13.04 $NEURO per month.
Price Clearing Update
\[\boxed{\lambda_{t+1} = \lambda_t \times \left(1 + \mu \cdot \frac{D - S}{S}\right)}\]At 25% excess demand and μ = 0.001: 0.003 × 1.00025 = 0.00300075 $NEURO per credit per block.
19. Glossary
ALR: Agentic Learning Rights. The programmable licence attached to every nDATA-R lot governing permitted uses, prohibited uses, and the cumulative privacy budget.
CBC: Control Barrier Certificate. The mathematical safety constraint enforced at the control plane that ensures a robot’s safety function value h(x) never goes negative.
Cerebro: NeuroMesh’s masternode intelligence aggregation layer. Synthesises skill patterns from fleet-wide CTV/A streams and distributes validated skills back to the library.
cCOMP: On-robot compute credit token. Minted only from verified inference cycles with a valid PoI receipt. Non-inflationary.
CTV/A: Composite Thought and Action Vector. The verifiable artefact capturing one complete intelligence cycle: inputs, reasoning, action, PoI receipt, PoA certificate, safety certificates, and royalty metadata.
DID: Decentralised Identifier. A W3C-standard identifier anchored on-chain and bound to hardware attestation. Every operator and every robot has one.
DOC: Data Ownership Certificate. The signed on-chain record proving operator ownership of an nDATA-R lot.
μToken: Microtoken. The compact, privacy-preserving semantic representation of a raw sensor stream used as input to the policy server and as the unit of data licensing.
nDATA-R: Per-robot data rights token. The RWA representing ownership of a specific robot’s captured experience over a specific time window.
nENERGY: Time-of-use energy window token issued by grid oracles and used to schedule learning workloads during low-cost, low-carbon periods.
nROBOT: Embodied minutes token representing verified operational time of a specific robot at a specific safety class.
PLID: Perception Lineage ID. A Merkle root over hashed perception logs signed inside the TEE, tying on-chain ownership to physical sensor data.
PoA: Proof-of-Action. Cryptographic proof that physical actions were executed within the declared safety envelope.
PoI: Proof-of-Inference. Cryptographic proof that on-robot inference ran on attested hardware and produced a specific output.
RWA: Real-World Asset. A physical or contractual asset whose ownership is tokenised on-chain.
Shapley value: A game-theoretic method for fairly attributing credit among multiple contributors based on marginal contributions. Used for royalty distribution across nDATA-R contributors to trained models.
SLO: Service Level Objective. A guaranteed performance bound, typically an inference latency ceiling such as sub-50-milliseconds.
TEE: Trusted Execution Environment. A hardware-isolated enclave within the robot’s processor that provides unforgeable attestation of hardware and software state.
zkML: Zero-Knowledge Machine Learning. A cryptographic proof that a specific neural network inference occurred on specific inputs without revealing the inputs themselves.
End of NeuroMesh Litepaper · Version 4.3 · 2026 The Robotic Intelligence Layer · Forming a Decentralised Superbrain