NeuroMesh Litepaper
The Intelligence Layer for On-Robot Compute
Version 4.3 · 2026 · Built on Solana · On-Robot Inference · Data as RWA · Powered by Cerebro
This litepaper is for informational purposes only and does not constitute financial, investment, legal, or tax advice. Token projections and financial figures are directional and illustrative. NeuroMesh is an evolving protocol and all specifications are subject to change. Please conduct your own research before participating.
Table of Contents
- Executive Summary
- The Problem
- The Solution
- Cerebro: The Network’s Brain
- NeuroOS-H: The Robot Runtime
- The Intelligence Cycle
- How Robots Get Smarter
- Proving Intelligence is Real
- Data Ownership and Rights
- Token Economy
- Web3 Architecture
- Market Mechanics
- Operator Economics
- Competitive Landscape
- Roadmap
- Risks
- Conclusion
- Formula Reference
- Glossary
1. Executive Summary
NeuroMesh is the intelligence layer for on-robot compute. It is the operating software, economic infrastructure, and shared learning network that transforms isolated robots into nodes of a collective, self-improving global brain. The compute never leaves the machine. The intelligence compounds across the fleet.
The central idea is this: every robot that joins the network makes every other robot smarter, and every operator earns value from the intelligence their machine contributes. Today those two things are impossible simultaneously. Vendors extract the intelligence and operators bear all the cost. NeuroMesh inverts that arrangement entirely.
The architecture has three parts. Every robot runs NeuroOS-H, which handles local inference, real-time safety enforcement, and cryptographic attestation of every computation the robot performs. At the centre of the network sits Cerebro, a masternode aggregation system that synthesises verified experience from thousands of robots simultaneously, identifies cross-domain skill patterns, and distributes refined intelligence back to the fleet without ever touching the underlying raw data. The settlement layer is Solana, which provides the neutral financial infrastructure: immutable ownership records, automatic licence enforcement, and the sub-cent transaction fees that make per-action economic micro-settlement viable.
Every piece of robot-generated experience is tokenised as a Real-World Asset with cryptographically provable ownership, programmable licensing rights, and automatic royalty streams. You own what your robot learns. The token powering all of this is \($\text{NEURO}\), with a hard-capped supply of one billion tokens, working alongside six specialised instrument types covering everything from compute credits to energy windows to per-robot data rights.
2. The Problem
Robots Cannot Share What They Learn
Picture two warehouses on opposite sides of the world, both operated by the same logistics company, both deploying the same model of humanoid robot. In the Osaka facility, Robot A spends six weeks learning to reliably pick deformable food packaging off a conveyor belt. The task sounds trivial but it is not. Grip force must be calibrated in real time against the compressibility of the packaging. Weight distribution shifts as contents settle. Humidity causes subtle changes in the coefficient of friction between the robot’s fingers and the plastic. Robot A makes hundreds of mistakes, adjusts, and eventually develops a reliable technique. It has, at that point, created something genuinely valuable: a small but hard-won piece of physical intelligence.
In Rotterdam, Robot B starts from scratch on the same problem. It will make identical mistakes. It will take the same six weeks. The knowledge Robot A earned is locked inside local model weights on a proprietary server controlled by the hardware vendor. The operator paid for the machine, paid for the six weeks of failed attempts, paid for the downtime and the dropped packages and the retraining cycles. They received none of the intellectual value and none of the downstream revenue when that vendor used the experience to improve their next product generation.
This is not an edge case. It is the defining structural problem of the robotics industry right now. Every robot starts from zero. Every deployment is an isolated experiment. The collective learning of a global fleet of machines compounds for the vendor and nobody else.
Nobody Can Prove What a Robot Actually Did
When a robotics company tells a prospective customer that their platform achieved 99.7% task success in a controlled trial, there is no cryptographic proof behind that claim. No immutable record. No independent audit trail. Just a number in a sales presentation. This matters because insurance underwriters cannot price liability for autonomous robot operations without a trusted historical record of what those machines actually did. Regulators cannot certify platforms they cannot audit. Investors cannot value robotics companies with any precision when the underlying performance data is unverifiable. And a shared intelligence commons cannot exist if any robot can claim to have learned something without actually having done so. Fabricated data poisoning a collective model is a catastrophic failure mode that must be ruled out at the architecture level, not the policy level.
Data Ownership is Absent
A single humanoid robot running an eight-hour shift generates gigabytes of co-registered sensor data. RGB-D video, force-torque readings, tactile skin arrays, proprioceptive logs, audio streams, and actuation records combine into something that cannot be replicated in any simulation. The physical world is too complex, too varied, and too full of edge cases for synthetic data to fully substitute for real operation. This real-world embodied experience data is the scarcest and most valuable training resource in the AI industry. Today, operators receive none of the financial value it generates. That changes with NeuroMesh.
3. The Solution
NeuroMesh is built around one principle: every robot that operates safely and contributes verified experience to the network should earn from that contribution, and the entire fleet should get smarter in return.
The reason on-robot compute is the foundational architectural choice, rather than cloud inference, comes down to physics. A robot control loop that routes decisions through a remote server adds 100 to 500 milliseconds of round-trip latency in even the best network conditions. A robot operating alongside humans in a dynamic environment needs policy outputs in under 50 milliseconds to respond naturally to unexpected situations. There is no cloud architecture that closes that gap. The compute must run on the machine. NeuroMesh is built on this reality rather than working around it.
The consequence of that choice is that every robot is a self-contained inference node. It carries its own compute, its own sensor processing, its own safety enforcement, and its own model weights. What it lacks in isolation is the breadth of experience required to generalise. That is what the network provides. Cerebro aggregates the lessons from thousands of on-robot compute nodes simultaneously, synthesises cross-domain skill patterns, and pushes refined intelligence back to the fleet. Raw data never leaves the machine. Only the learning does.
The settlement layer handles everything economic: token ownership, licence enforcement, royalty streaming, and the micro-settlement of compute credits. Solana’s sub-cent fees and 400-millisecond finality make it the only chain where per-action settlement is economically viable at the frequency NeuroMesh requires.
4. Cerebro: The Network’s Brain
Cerebro is the component of NeuroMesh with the most direct impact on robot capability over time. It is the reason joining the network produces intelligence gains that no isolated deployment can match.
What Cerebro Actually Does
Individual robots are excellent at learning from their own experience. A warehouse robot that fails a grasp a thousand times and adjusts a thousand times will develop a highly specialised competency for that specific task in that specific environment. What it cannot do alone is recognise that the rotational wrist adjustment it learned for cylindrical objects in low light is a generalisable principle that also applies to pipe fittings in a manufacturing plant, to rolled textiles in a garment factory, and to beverage containers in a retail stockroom. Making those connections requires seeing all of those contexts simultaneously, which no single robot can do.
Cerebro sees all of them at once. It continuously ingests verified Composite Thought and Action Vectors from every robot in the network, clusters them by task structure rather than deployment context, and identifies skill patterns that generalise across domains. When it finds a reliable generalisation, it validates the pattern against a held-out benchmark, adds it to the skill library, and distributes the refined skill to any robot whose operator has opted in.
This is not a centralised model repository where all robots share one model. It is a dynamic synthesis layer where cross-domain patterns are discovered, validated, and made available as modular skill updates. A robot that has never operated in a hospital corridor can load Cerebro’s verified indoor navigation skill, synthesised from the collective experience of robots that have. The skill is not raw data from those robots. It is the abstract pattern extracted from their experience, verified as generalisable, and packaged as a safe update.
Cerebro and Teleoperation Data
When a NeuroMesh robot encounters a situation where its confidence falls below a configured safety threshold, it requests human assistance. A remote operator takes over through the teleoperation pool, handles the situation, and releases control. Every action the human operator takes during that override session is logged as a Composite Thought and Action Vector, identical in structure to any other task cycle. Expert human demonstrations captured during safety overrides become some of the highest-quality training data in the network.
Cerebro specifically monitors the teleoperation log for recurring patterns. If twenty different robots across ten different facilities are consistently requesting human help with the same class of situation, Cerebro flags this as a priority learning gap and escalates it to the evaluator committee. The network learns where its blind spots are and directs data collection effort toward filling them. This is active intelligence development, not passive data accumulation.
A Concrete Example
Consider what happened with language models and few-shot reasoning. Early models needed explicit examples of every task type they were expected to perform. Researchers later discovered that exposure to a sufficiently diverse range of tasks enabled emergent capabilities: the ability to perform entirely new tasks from a brief description, without any specific training examples. Breadth of training, not just depth on any single task, created genuine generalisation.
Cerebro is designed to produce the analogous effect in physical intelligence. A robot that has learned to handle delicate objects carefully in a medical context, to apply precise torque in an assembly context, and to navigate confidently around humans in a retail context has developed component skills that Cerebro can combine into a new capability in a domain none of those individual robots has encountered. This is the compounding intelligence effect that makes the NeuroMesh network increasingly valuable as it scales.
Cerebro Economics
Cerebro nodes require substantial compute and a stake of \($\text{NEURO}\) as a security bond against dishonest aggregation. In return, they earn a share of network verification fees and a portion of the data licensing revenue generated by skills they helped synthesise. If a Cerebro aggregation cycle produces a skill update that becomes widely used across the fleet, the nodes that contributed to synthesising it receive ongoing royalties proportional to that skill’s usage. This creates direct financial incentive for Cerebro operators to invest in high-quality hardware and honest aggregation.
5. NeuroOS-H: The Robot Runtime
NeuroOS-H is the on-robot operating system that every NeuroMesh node runs. It is designed around one constraint above all others: the physical world does not wait for network round trips. Everything that affects the robot’s safety, motion, and decision-making must execute locally, without any dependency on external connectivity.
The Latency Hierarchy
A robot operating alongside humans must make decisions at several different speeds simultaneously, and the architecture of NeuroOS-H reflects this directly.
At the base level, reflex arcs operate in under two milliseconds. These are hardwired responses to collision detection and emergency stop conditions. They cannot be overridden by any higher-level process. If a force sensor detects unexpected contact above a threshold, the reflex arc fires and the robot stops. No model, no policy update, no remote instruction can prevent this. It is the non-negotiable safety floor and it runs entirely on local hardware.
Above the reflex layer, the control plane updates joint torques and impedance controllers between 500 and 2,000 times per second, with servo timing jitter bounded to two milliseconds. This precision is what makes dexterous manipulation possible. The difference between a robot that reliably picks fragile objects and one that crushes them unpredictably is measurable in milliseconds of timing variance at the joint level.
The policy server, which runs the learned intelligence, operates on a cycle of 10 to 150 milliseconds. This is where NeuroOS-H consults the skill library, runs on-robot inference, selects actions, and generates the Composite Thought Vector recording what the robot decided. It is fast enough for smooth, reactive task execution while leaving enough computational headroom for real inference rather than simple lookup.
The mesh daemon handles all economic activity at a slower cadence of 400 milliseconds to two seconds. It logs CTV/A artefacts, submits them to the verification committee, syncs with Cerebro, and handles token settlement on Solana. From the robot’s perspective, this is background accounting that runs without interfering with the core operational loop.
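The four tiers can be summarised as a configuration sketch. The encoding below is purely illustrative: the tier names and fields are ours, not NeuroOS-H identifiers, and the real runtime enforces these bounds in the scheduler rather than in a table.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlTier:
    name: str
    period_ms_min: float   # fastest cycle the tier runs at
    period_ms_max: float   # slowest acceptable cycle
    on_robot: bool         # True: no network dependency permitted

# Illustrative encoding of the NeuroOS-H latency hierarchy described above.
TIERS = [
    ControlTier("reflex_arc",     0.0,    2.0, True),   # hardwired e-stop, cannot be overridden
    ControlTier("control_plane",  0.5,    2.0, True),   # 500-2,000 Hz torque/impedance updates
    ControlTier("policy_server", 10.0,  150.0, True),   # learned inference, CTV generation
    ControlTier("mesh_daemon",  400.0, 2000.0, False),  # settlement; may touch the network
]

for t in TIERS:
    print(f"{t.name:14s} {t.period_ms_min:>6}-{t.period_ms_max:<6} ms  on-robot={t.on_robot}")
```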
The Perception Stack
NeuroOS-H integrates the full sensor suite of a modern humanoid robot. RGB-D cameras provide colour and depth simultaneously. Event cameras capture motion at microsecond resolution, tracking fast movement that conventional cameras blur into noise. Tactile skin arrays across the hands and forearms detect contact, texture, and vibration. Six-axis force and torque sensors at the wrists and joints measure loads precisely. Inertial measurement units and joint encoders maintain continuous awareness of the robot’s own body state.
Every sensor stream is timestamped at hardware level, hashed with SHA-256, and logged in the Trusted Execution Environment before any application-layer software can see it. The TEE is a hardware-isolated enclave within the processor, physically separated from the main computing environment. This is what makes data ownership provable: the TEE signature certifies that this data was produced by this specific hardware at this specific time, and nothing running on the machine afterward can alter that record without breaking the attestation chain.
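To make the attestation chain concrete, the sketch below shows how per-frame hashing and chaining might look in principle. It is a minimal illustration, not the NeuroOS-H implementation: the frame schema, field names, and genesis value are hypothetical, and the real logging path runs inside the TEE and signs the chain head with the hardware root-of-trust key, which is not modelled here.

```python
import hashlib
import json

def frame_digest(prev_digest: bytes, sensor_frame: dict) -> bytes:
    """Extend a SHA-256 hash chain over timestamped sensor frames.

    Any later alteration of a frame changes every subsequent digest,
    which is what makes the chain head a commitment to the whole log.
    """
    payload = json.dumps(sensor_frame, sort_keys=True).encode()
    return hashlib.sha256(prev_digest + payload).digest()

# Hypothetical frames; these field names are illustrative, not the protocol schema.
chain = b"\x00" * 32  # genesis value for the shift
for frame in [
    {"t_ns": 1_700_000_000_000, "modality": "rgbd", "payload_hash": "ab12..."},
    {"t_ns": 1_700_000_004_000, "modality": "force_torque", "payload_hash": "cd34..."},
]:
    chain = frame_digest(chain, frame)

print(chain.hex())  # the chain head the TEE would sign as part of the attestation
```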
Safety as Architecture
The safety supervisor in NeuroOS-H enforces Control Barrier Certificate constraints at the control plane level, every half-millisecond. When the policy server proposes an action, the supervisor checks it against the mathematical safety boundary before it executes. If the proposed action would bring the robot too close to the boundary, the supervisor adjusts it minimally to stay safe, without blocking the action entirely. Hard blocks produce jerky, unpredictable motion that itself creates safety risks. Minimal-intervention filtering keeps movement fluid and natural while guaranteeing the boundary is never crossed.
6. The Intelligence Cycle
Every task a NeuroMesh robot performs passes through four phases. This cycle is simultaneously the fundamental unit of physical operation and the fundamental unit of economic activity on the network. Understanding it is the key to understanding where value is created and how it flows.
Phase One: Sense
Raw sensor data from a single robot on a full shift runs to gigabytes. It cannot be transmitted, stored, or processed as-is, nor should it be. NeuroOS-H compresses it into compact microtoken (μToken) representations that preserve the information content relevant for learning and decision-making while removing the identifying detail that would create privacy exposure.
Vision streams compress into masked autoencodings: compact representations of spatial structure without a full pixel record. Audio streams become self-supervised embeddings capturing semantically relevant acoustic features. Tactile and force data becomes contact dynamics patches: structured representations of how surfaces behave under contact. Proprioceptive data becomes predictive trajectory codes: compressed representations of body motion history and momentum.
A researcher who licenses μTokens from a warehouse robot learns about manipulation dynamics in cluttered environments. They cannot reconstruct the faces of workers in the background, the layout of the facility, or the specific products being handled. The privacy protection is structural, not a policy overlay.
Phase Two: Think
The policy server runs on-robot inference over the μToken inputs and selects an action. This inference is attested by Proof-of-Inference: a cryptographic certificate that a specific model, identified by its hash, ran on specific attested hardware and produced a specific output. The TEE records a commitment to both the inputs and the output, preventing alteration of either after the fact.
The output packages as a Composite Thought Vector: a structured record of what the robot perceived, what it decided, which model produced the decision, the PoI receipt, and evaluator scores. Anyone who later purchases or licenses this CTV can independently verify every claim it makes about its provenance without trusting the operator.
Phase Three: Act
The control plane executes the policy output through the CBC safety filter. Every actuator command is logged alongside the sensor readings that preceded it, structured in a Merkle tree so that any specific time window can be proven authentic without disclosing the full log. This produces the Composite Action Vector. Together the CTV and CAV form the CTV/A: the complete, verifiable record of one full intelligence cycle. This is the asset that gets licensed, traded, and aggregated by Cerebro.
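The Merkle construction that makes selective disclosure possible can be sketched in a few lines. This is a minimal sketch only: the leaf encoding and odd-level handling of the actual CAV format are not specified in this document.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root over per-window log hashes.

    Revealing one leaf plus its sibling path proves that window is part
    of the committed log without disclosing any other window.
    """
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Each leaf stands for one time window of co-logged sensor and actuator data.
windows = [f"window-{i}-actuation-log".encode() for i in range(8)]
print(merkle_root(windows).hex())  # anchoring this root lets any window be proven later
```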
Phase Four: Learn
After completing the task, the robot computes self-supervised learning objectives against the observed outcome. Did the grasp succeed? How close was the predicted outcome to reality? This feedback drives local weight updates. Within the privacy budget constraints set by the Agentic Learning Rights specification, a compressed gradient update is transmitted to Cerebro for federated aggregation. The robot’s local model improves from its own experience. The global model improves from the collective experience of thousands of robots. Both happen simultaneously, at every cycle.
7. How Robots Get Smarter
The Learning Architecture
Robot learning must align information across every sensory modality simultaneously and tie that perception directly to physical action outcomes. A robot learning to pick a fragile glass object needs to correlate what it sees, what it feels through its fingers, what its joints report about applied force, and what outcome resulted. This cross-modal correlation is what makes robot learning qualitatively different from any single-modality training task.
NeuroOS-H trains each robot against a multimodal alignment objective covering every sensor modality. Vision learning captures spatial relationships and object properties. Audio learning captures environmental signals and human speech. Tactile learning captures contact dynamics and the early vibration signatures that precede a grasp failure. Proprioceptive learning captures body state and trajectory prediction. Action learning captures future state prediction given current motion. Skill distillation transfers validated patterns from Cerebro’s library directly into local weights.
These objectives run simultaneously. Their relative weights can be tuned per robot class: a precision assembly robot weights tactile and proprioceptive learning heavily, while a front-of-house retail robot weights audio and human proximity learning more. The architecture does not impose a single model across all contexts. It is a configurable alignment framework that adapts to specific deployment requirements.
Case study, food packaging facility. A NeuroMesh operator deployed three robots on a soft-goods packaging line handling foil pouches that changed shape under grip. Standard manipulation training did not transfer well to deformable targets. After each robot had accumulated approximately 2,000 hours of attested operation and Cerebro had aggregated their gradient updates, a new grip calibration skill emerged from the synthesis that none of the three robots had developed individually. The synthesised skill drew on the force-torque patterns from all three machines simultaneously, identifying a compensatory wrist angle adjustment that improved success rates on deformable targets by 31 percentage points. The skill was validated on a held-out benchmark and immediately available to every other robot on the network operating in soft-goods contexts.
The Network’s Collective Capability
The relationship between the verified experience the network has accumulated and its collective capability follows a well-established power-law scaling pattern. Adding more diverse real-world data produces meaningfully better models, but with diminishing returns at the margin.
\[\boxed{C(D) = C_0 + \beta \cdot D^{\,\alpha}}\]
What this describes: C(D) is the network’s benchmark capability score when the total accumulated data volume across all nDATA-R lots is D. C₀ is the baseline score a robot achieves from simulation-only pre-training before any real-world network data exists. β is a scaling constant fitted from benchmark measurements as the network grows. α is the scaling exponent, estimated between 0.5 and 0.8 based on robotics foundation model research.
Why it matters for operators: This formula governs how much a robot’s starting capability improves simply by joining the network. A robot joining at launch gets the baseline model. A robot joining after the network has accumulated ten times more verified experience starts with a substantially more capable model before its first shift. The earlier an operator joins, the more data they contribute to shaping that baseline, and the larger the royalty stream they receive as their early contributions become foundational to the growing model.
A concrete illustration: starting from C₀ = 42 out of 100 on the benchmark suite, with β = 8.5 and α = 0.70, at D = 10 (million μToken units) the network achieves C = 42 + 8.5 × 5.01 = 84.6. That is double the simulation-only baseline from real-world operating experience alone. A 10x increase in data volume at the same exponent multiplies the data-driven capability term by approximately 5x, consistent with scaling law behaviour observed across other foundation model domains.
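The curve is simple enough to verify directly. A minimal check of the worked example, using the illustrative constants from the text:

```python
def network_capability(d: float, c0: float = 42.0, beta: float = 8.5,
                       alpha: float = 0.70) -> float:
    """C(D) = C0 + beta * D**alpha, with the illustrative constants above."""
    return c0 + beta * d ** alpha

print(round(network_capability(1), 1))   # 50.5 — after the first million uToken units
print(round(network_capability(10), 1))  # 84.6 — matches the worked example
```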
Federated Learning Keeps Data Private
The mechanism by which individual robot learning contributes to the global model without exposing raw data is federated learning with differential privacy. Instead of transmitting training data, each robot transmits an encrypted gradient update: a vector representing the direction in which its local model improved from recent experience. Cerebro aggregates these vectors using secure multi-party computation, injects calibrated noise to bound the total information leakage, and releases the aggregate as a global model update.
The data owner sets a per-pass privacy budget and a maximum total budget when minting their nDATA-R lot. The system blocks further learning queries against any lot once the cumulative budget would be exceeded. This enforcement happens at the protocol level through zero-knowledge proof verification, not through operator policy.
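A minimal sketch of the aggregation step, assuming simple gradient clipping, federated averaging, and Gaussian noise. The secure multi-party computation wrapper and the exact noise calibration NeuroMesh uses are not modelled here, and the budget figures below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_with_dp(updates: list, clip: float, sigma: float) -> np.ndarray:
    """Clip each robot's gradient update, average, and add Gaussian noise."""
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip / max(norm, 1e-12)))  # bound each contribution
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip / len(updates), size=mean.shape)
    return mean + noise

# Hypothetical per-lot budget accounting: block queries once epsilon is spent.
budget_total, budget_per_pass, spent = 8.0, 0.5, 0.0
updates = [rng.normal(size=16) for _ in range(100)]  # stand-ins for robot gradients
if spent + budget_per_pass <= budget_total:
    global_update = aggregate_with_dp(updates, clip=1.0, sigma=1.2)
    spent += budget_per_pass
print(f"epsilon spent: {spent} of {budget_total}")
```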
8. Proving Intelligence is Real
The entire NeuroMesh economy rests on one bedrock claim: the records it produces are trustworthy. Every yield instrument backed by robot data, every insurance policy priced against a PoA history, every skill purchased from Cerebro’s library carries implicit reliance on the authenticity of the underlying records. NeuroMesh makes these records cryptographically provable so that trust does not need to be assumed.
Proof-of-Inference
Proof-of-Inference answers this question: how does anyone know that a given CTV was actually produced by the model it claims, on the hardware it claims, at the time it claims?
The answer is TEE attestation combined with committee verification. The robot’s TEE signs a measurement of the hardware and software environment during every inference cycle. This signature is anchored to a hardware root-of-trust key that cannot be extracted without physically destroying the chip. The TEE also records a cryptographic commitment to both the input μTokens and the output action, preventing any alteration of either after the fact. That commitment and attestation are submitted to a rotating committee of staked evaluator nodes who verify the attestation chain against known hardware certificates and confirm that the model hash matches an approved version.
With a 21-member committee where at most one third act dishonestly, the probability that a fabricated PoI receipt passes the quorum check is approximately 0.033%. A fraudulent operator would need to simultaneously corrupt at least 15 evaluators, each of whom has staked \($\text{NEURO}\) at risk of slashing. The economics make sustained fraud irrational.
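The quoted probability can be sanity-checked with a simple binomial model, assuming each committee seat is independently dishonest with probability one third. The full tail sums to roughly 0.04%, with the 0.033% figure in the text corresponding to the dominant 15-of-21 term.

```python
from math import comb

n, quorum, p = 21, 15, 1 / 3  # committee size, signatures needed, dishonesty rate

# Probability that at least `quorum` of `n` evaluators are dishonest,
# i.e. that a fabricated receipt could gather a passing quorum.
p_fraud = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(quorum, n + 1))
print(f"{p_fraud:.4%}")  # ~0.0405%; the leading k=15 term alone is ~0.0332%
```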
Proof-of-Action
While PoI proves that reasoning happened, Proof-of-Action proves that the physical actions logged actually occurred in the physical world within the declared safety envelope. PoA artefacts contain Merkle-ised sensor-actuation traces with CBC safety certificates embedded. They support selective disclosure: an insurance underwriter can be shown the safety record for a specific time period without accessing any other operational data.
Case study, insurance underwriting. A NeuroMesh operator deployed twelve humanoid robots in a mixed environment where humans and robots work in close proximity. At renewal, their liability insurer queried the PoA record for all twelve robots across the previous twelve months. The query returned 98.4% of all action cycles verified within Class A safety boundaries, with the remaining 1.6% resolved through human teleoperation. No violations of the CBC safety boundary had occurred in the period. The insurer priced the renewal at 23% below the industry average for comparable deployments, citing the verifiable safety record as justification. This is the direct economic consequence of making safety an on-chain primitive rather than an unverifiable claim.
The Mathematics of On-Robot Safety
The mathematical foundation for safety in NeuroMesh is the Control Barrier Function. The robot’s state x at any moment has a safety value h(x) where h(x) ≥ 0 means the robot is in a safe configuration and h(x) < 0 would mean it has entered an unsafe region. The safety supervisor enforces a constraint on every control action that prevents h(x) from ever going negative.
\[\boxed{\frac{\partial h}{\partial x} \cdot f(x,\, u) \;\geq\; -\,\alpha \cdot h(x)}\]
What this enforces: The left side is how quickly the safety margin h(x) is decreasing right now under the proposed action u. The right side is the minimum allowed rate of decrease. When the robot is well within the safe region, h(x) is large and the constraint is loose: the robot can move freely. When the robot is close to the safety boundary, h(x) is small and the constraint tightens automatically, forcing the robot to slow its approach. This is not a speed limit imposed by a rule. It is a mathematical consequence of the constraint that makes the boundary physically unreachable.
Why this connects to on-robot compute: The CBC constraint is evaluated at the control plane level, 500 to 2,000 times per second, entirely on the robot’s own hardware. There is no cloud check, no remote safety call, no latency gap between the proposed action and the safety evaluation. On-robot compute is what makes this guarantee physically meaningful rather than aspirational.
A concrete example: define h(x) as the distance to the nearest human minus a 0.5-metre minimum clearance. At 2 metres away h = 1.5, so the robot can approach at up to 3 metres per second. At 0.8 metres away h = 0.3, so the constraint limits approach to 0.6 metres per second. At exactly 0.5 metres h = 0 and the robot cannot close distance at all. This happens entirely on-board, enforced at each control step by the safety supervisor running on the robot’s own processor.
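The velocity cap in this example follows directly from the constraint. A minimal sketch, assuming α = 2.0, which is back-solved from the worked numbers above rather than taken from a protocol specification:

```python
def max_approach_speed(distance_m: float, clearance_m: float = 0.5,
                       alpha: float = 2.0) -> float:
    """Velocity cap implied by the CBF constraint dh/dt >= -alpha * h(x).

    With h(x) = distance - clearance and dh/dt = -v for closing speed v,
    the constraint -v >= -alpha*h rearranges to v <= alpha*h.
    """
    h = distance_m - clearance_m
    return max(0.0, alpha * h)

for d in (2.0, 0.8, 0.5):
    print(d, "m ->", round(max_approach_speed(d), 3), "m/s")  # 3.0, 0.6, 0.0
```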
9. Data Ownership and Rights
Your Robot’s Experience Belongs to You
Every piece of physical experience a NeuroMesh robot generates is tokenised as an nDATA-R lot: a Solana SPL token representing the ownership of that robot’s captured experience over a specific time window. Two cryptographic instruments anchor each lot to reality.
The Data Ownership Certificate is a signed on-chain record containing the operator’s decentralised identifier, the robot’s DID, a hardware attestation hash, the time window, the geographic site category, a sensor modality map, and the operator’s cryptographic signature. It is the legal and cryptographic proof of ownership: auditable, immutable, and not controlled by any intermediary.
The Perception Lineage ID is a Merkle root over the hashed perception logs, computed and signed inside the robot’s TEE. It ties the on-chain ownership record to the specific physical sensor data that was captured. Forging a PLID requires physical access to the robot’s secure hardware element. There is no software path to creating a valid PLID for data that does not exist.
Once minted, an nDATA-R lot can sit in the operator’s wallet generating passive royalties as downstream models that used its data earn revenue. It can be listed on the NeuroMesh marketplace with a reserve price. It can be bundled into structured tranches and used as collateral in DeFi lending protocols. The operator retains full control over all of these options.
Agentic Learning Rights
Owning data is not the same as controlling how it gets used. The Agentic Learning Rights specification attached to every nDATA-R lot defines exactly what a buyer may and may not do with the purchased experience. The purpose set specifies permitted uses. The prohibition set specifies forbidden ones: facial recognition, biometric identification, autonomous weapons development, and surveillance systems are examples of universal defaults. The privacy budget specifies how much cumulative information extraction is permitted before access is automatically blocked at the protocol level.
ALR compliance is enforced through zero-knowledge proofs. A model developer submitting a learning job provides a ZK proof that their computation respected the declared scope. The evaluator committee verifies the proof before releasing the usage fee. No trust in the developer is required.
How Royalties Flow
When a model trained on network data earns revenue, the fraction attributable to each contributing data lot flows back automatically via Solana streaming payment programs. Attribution is computed using Shapley values: the game-theoretic method for fairly distributing credit among multiple contributors based on marginal contribution.
The intuition is straightforward. If training a model on Robot A’s data alone improves benchmark performance from 42% to 61%, and training on Robot B’s data alone improves it from 42% to 55%, but training on both improves it to 74%, then Robot A contributed more at the margin and should receive a larger royalty share. Shapley values average this marginal contribution calculation across all possible orderings of contributors, producing an allocation that reflects actual value provided rather than flat sharing.
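For a two-contributor pool this calculation can be done exactly. The sketch below reproduces the example above; production attribution across thousands of contributors would use sampling approximations rather than full enumeration of orderings.

```python
from itertools import permutations

def shapley(players: list, value: dict) -> dict:
    """Average each player's marginal contribution over all orderings."""
    shares = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            before = value[coalition]
            coalition = coalition | {p}
            shares[p] += value[coalition] - before
    return {p: s / len(orderings) for p, s in shares.items()}

# Benchmark scores from the example above (baseline 42%).
v = {frozenset(): 42.0,
     frozenset({"A"}): 61.0,
     frozenset({"B"}): 55.0,
     frozenset({"A", "B"}): 74.0}
print(shapley(["A", "B"], v))  # {'A': 19.0, 'B': 13.0} — A earns ~59% of the pool
```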
Case study, multi-robot royalty distribution. A model developer trained a new logistics manipulation policy using CTV/A data from 140 NeuroMesh robots across nine facilities. The policy was then deployed on 60 new robots and licensed as an API service earning approximately 8,400 \($\text{NEURO}\) per month. The 8% protocol royalty rate produced a royalty pool of 672 \($\text{NEURO}\) per month. The Shapley service computed individual fractions for all 140 contributors. The top ten contributors, each of whom had provided data from unusual manipulation scenarios not well covered by the broader dataset, earned between 18 and 31 \($\text{NEURO}\) per month in passive royalties. The bottom quartile, whose data overlapped heavily with existing coverage, each earned around 2 \($\text{NEURO}\) per month. The distribution reflected actual marginal value.
Data Valuation
The market value of an nDATA-R lot is determined by four factors that together describe what a buyer actually gets. The marginal information gain measures how much better a model becomes from training on this data versus not having it. Coverage measures how diverse the experiences in the lot are. Rarity measures how scarce this type of experience is across the network as a whole. Risk captures the privacy and regulatory profile of the modalities involved.
\[\boxed{FV(L) = k \cdot \Delta I(L) \cdot \text{Coverage}(L) \cdot \text{Rarity}(L) \,/\, \text{Risk}(L)}\]
What this measures: FV(L) is the estimated market clearing price for lot L in \($\text{NEURO}\), with k a network-wide scaling constant. The formula rewards data that is genuinely informative (high ΔI), covers a wide range of scenarios (high Coverage), represents experiences the network rarely sees elsewhere (high Rarity), and does not carry elevated privacy or regulatory exposure (low Risk). A lot that is high on the first three but low on risk commands a strong price. A lot with identical ΔI but sourced from a GDPR-covered region with rich visual data of people gets a Risk multiplier above 1, reducing the effective payout to the operator to reflect the compliance overhead for buyers.
A hospital corridor navigation lot with ΔI = 0.18 nats of model improvement, Coverage = 0.71, Rarity = 2.4 because hospital data is uncommon in the current network, and Risk = 1.6 due to visual data in human-facing environments: FV = 100 × 0.18 × 0.71 × 2.4 / 1.6 = 19.17 \($\text{NEURO}\). A warehouse manipulation lot with similar marginal gain but lower rarity and lower risk values at around 13 \($\text{NEURO}\). The 47% premium for the hospital lot reflects genuine scarcity of that deployment context, and that premium persists as long as the network underrepresents medical environments relative to the demand for models trained on them.
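A minimal sketch of the valuation arithmetic, assuming the scaling constant k = 100 implied by the worked example:

```python
def fair_value(delta_i: float, coverage: float, rarity: float, risk: float,
               k: float = 100.0) -> float:
    """FV(L) = k * dI * Coverage * Rarity / Risk, with k = 100 as in the example."""
    return k * delta_i * coverage * rarity / risk

hospital = fair_value(0.18, 0.71, 2.4, 1.6)
print(round(hospital, 2))  # 19.17 NEURO — the hospital corridor lot above
```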
10. Token Economy
Overview
\($\text{NEURO}\) powers the entire NeuroMesh protocol. It is the unit of account for protocol fees, the staking token for validators and Cerebro operators, and the governance token for protocol parameter updates. The total supply is hard-capped at one billion tokens. No additional minting is possible beyond the defined emission schedule.
Six specialised instruments work alongside \($\text{NEURO}\). cCOMP represents attested on-robot compute cycles and is the direct economic output of on-robot inference. nROBOT represents verified operational minutes of specific robots. nENERGY represents time-of-use renewable energy windows used to schedule learning workloads during low-cost periods. nSTOR represents durable CTV/A storage capacity. nDATA represents third-party licensed catalogue datasets. nDATA-R represents per-robot experiential data rights. Each token has a distinct function that the others cannot serve.
Token Allocations
The total supply of 1,000,000,000 \($\text{NEURO}\) distributes across eight categories designed to balance immediate liquidity needs, long-term operational sustainability, and protection against short-horizon capital.
| Allocation | Share | Tokens | Unlock |
|---|---|---|---|
| Liquidity Provisioning | 22% | 220,000,000 | Fully unlocked at TGE |
| Community and Ecosystem | 19% | 190,000,000 | Fully unlocked at TGE |
| Treasury and Sustainability | 17% | 170,000,000 | Fully unlocked at TGE |
| Marketing and Partnerships | 9% | 90,000,000 | Fully unlocked at TGE |
| Investors | 15% | 150,000,000 | 6-month cliff, 12-month linear vest |
| Protocol R&D | 7% | 70,000,000 | Fully unlocked at TGE |
| Governance Reserve | 6% | 60,000,000 | Fully unlocked at TGE |
| Team | 5% | 50,000,000 | 24-month cliff, 36-month linear vest |
Total supply: 1,000,000,000 \($\text{NEURO}\)
Initial circulating supply at TGE: 800,000,000 \($\text{NEURO}\) (80%)
The 80% initial float reflects the network’s need for active, liquid markets from the first day of operation. The compute market, data market, and cCOMP AMM all require deep liquidity to function, and that liquidity cannot be built gradually. The two vested categories, investors and team, are structured with meaningful cliffs that prevent early sell pressure from holders whose interests may not yet be aligned with long-term protocol health.
Vesting Timeline
At TGE, 80% of all tokens are in circulation. Investor vesting begins at month 7 after the six-month cliff and adds 1.25 percentage points of supply per month, with all investor tokens fully circulating by month 18. Circulating supply holds at 95% from month 18 through month 24. Team vesting begins at month 25 after the 24-month cliff and adds approximately 0.14 percentage points per month through to month 60, when 100% of all tokens are in circulation.
The largest single supply event is TGE itself, which is intentional. Every subsequent unlock is gradual and telegraphed well in advance. At no point after TGE does any single unlock event represent more than 1.25 percentage points of total supply in any given month.
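The unlock schedule above implies a simple circulating-supply curve. A minimal sketch that reproduces the checkpoints in the text:

```python
def circulating_pct(month: int) -> float:
    """Circulating supply (% of 1B tokens) by month, per the unlock schedule."""
    pct = 80.0                                        # TGE float
    pct += 1.25 * max(0, min(month, 18) - 6)          # investors: months 7-18
    pct += (5.0 / 36) * max(0, min(month, 60) - 24)   # team: months 25-60
    return pct

for m in (0, 12, 18, 24, 36, 60):
    print(m, round(circulating_pct(m), 1))  # 80.0, 87.5, 95.0, 95.0, 96.7, 100.0
```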
cCOMP: The On-Robot Compute Credit
cCOMP is the direct economic representation of on-robot compute. New cCOMP can only be created when a robot completes a verified inference cycle with a valid PoI receipt. It cannot be minted through governance, purchased into existence, or created administratively. The supply of cCOMP is a real-time measure of verified productive work happening on the network.
\[\boxed{\text{cCOMP}_{\text{minted}} = \alpha \cdot C_{\text{attested}} \cdot \text{SLO\_bonus} \cdot \text{safety\_mult}}\]
What this measures: α converts verified FLOPs from the PoI receipt into a credit count. C_attested is the number of floating-point operations the TEE confirmed actually occurred on the robot’s hardware. SLO_bonus rewards robots that can guarantee tight inference latency: 1.0 for standard outputs, 1.2 for sub-50ms delivery, 1.5 for sub-20ms. safety_mult is 1.0 for Class A certified robots operating in human-collaborative contexts and reduced for lower safety classes. The formula ensures that higher-quality on-robot compute earns proportionally more, creating economic incentive to invest in better hardware and certification.
Why this connects to the broader thesis: cCOMP is the token that proves the on-robot compute thesis is real. If a robot is not doing genuine attested work, it earns no cCOMP. If it is, every inference cycle it runs becomes a micro-economic event. At a network price of 0.003 \($\text{NEURO}\) per cCOMP, a robot completing a manipulation task with a verified 2.1 million FLOPs at Class A safety and sub-50ms latency earns 1,260 cCOMP or approximately 3.78 \($\text{NEURO}\) per cycle. Scaled across 6.2 hours of operation, this is the compute revenue line in the operator economics.
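A minimal sketch of the minting rule. The value α = 5 × 10⁻⁴ credits per FLOP is back-solved from the worked example rather than quoted from the protocol, and the sub-A safety multipliers are illustrative assumptions the text does not specify.

```python
def ccomp_minted(flops_attested: float, latency_ms: float, safety_class: str,
                 alpha: float = 5.0e-4) -> float:
    """cCOMP = alpha * C_attested * SLO_bonus * safety_mult."""
    slo_bonus = 1.5 if latency_ms < 20 else 1.2 if latency_ms < 50 else 1.0
    safety_mult = {"A": 1.0, "B": 0.8, "C": 0.6}[safety_class]  # sub-A values assumed
    return alpha * flops_attested * slo_bonus * safety_mult

credits = ccomp_minted(2.1e6, latency_ms=38, safety_class="A")
print(credits, round(credits * 0.003, 2))  # 1260.0 credits, 3.78 NEURO at the Year 1 price
```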
\($\text{NEURO}\) Emission Schedule
The 350,000,000 tokens allocated to operator, evaluator, and data emissions are released over ten years according to a decaying schedule with a permanent floor to prevent long-tail reward collapse.
\[\boxed{E(t) = E_0 \cdot 2^{-t/t_{\text{half}}} + E_{\text{floor}}}\]
What this controls: E(t) is the number of \($\text{NEURO}\) emitted per day on day t. E₀ is the initial daily rate. The exponential decay with half-life t_half means emissions halve every t_half days. E_floor is a minimum daily rate that never reaches zero, ensuring long-term network participants always have a meaningful reward to earn. The default half-life is 730 days, meaning emissions are half their launch rate at the two-year mark and one-quarter at four years, while always remaining above the floor.
Why the floor matters: Many DeFi emission schedules approach zero over time, which causes liquidity and participation to collapse in the long tail. A permanent floor keeps evaluator nodes, Cerebro operators, and robot operators financially motivated to remain on the network indefinitely, not just during the high-emission early period.
With E₀ = 100,000 \($\text{NEURO}\) per day and E_floor = 5,000 per day: at day 730 emissions equal 55,000 per day (50,000 decayed plus 5,000 floor). At day 1,460 they equal 30,000 per day (25,000 plus floor). At year ten they approach 8,000 per day, sustained by the floor.
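The schedule is easy to verify against these checkpoints:

```python
def daily_emission(day: int, e0: float = 100_000, floor: float = 5_000,
                   half_life: float = 730) -> float:
    """E(t) = E0 * 2**(-t/t_half) + E_floor, per the schedule above."""
    return e0 * 2 ** (-day / half_life) + floor

for d in (730, 1460, 3650):
    print(d, round(daily_emission(d)))  # 55000, 30000, 8125
```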
Governance
Governance weight does not scale linearly with token holdings. Pure token-weight governance consistently produces outcomes where large holders extract value from smaller participants. NeuroMesh uses a square-root weighting with a loyalty multiplier for long-term stakers.
\[\boxed{W = \sqrt{S} \cdot \min\!\big(m(t),\, 2.0\big)}\]
What this achieves: S is the holder’s staked balance and m(t) is the loyalty multiplier after t months of continuous staking. A holder with 1,000,000 tokens has √1,000,000 = 1,000 base weight. A holder with 10,000 tokens has √10,000 = 100 base weight. A 100x difference in holdings produces only a 10x difference in governance weight. The loyalty multiplier then rewards holders who have been continuously staked: at 36 months of staking the multiplier reaches approximately 1.24x over a new staker with the same holdings, and is capped at 2.0x at very long durations. This structurally disadvantages mercenary capital that stakes briefly to influence a vote and exits immediately.
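A minimal sketch of the weighting. The shape of the loyalty curve is not specified in this document; the linear ramp below is a placeholder chosen only to reproduce the ~1.24x figure at 36 months.

```python
import math

def governance_weight(staked: float, months_staked: float) -> float:
    """sqrt(holdings) times a loyalty multiplier capped at 2.0x."""
    loyalty = min(2.0, 1.0 + 0.00667 * months_staked)  # assumed ramp, not protocol spec
    return math.sqrt(staked) * loyalty

print(round(governance_weight(1_000_000, 0)))   # 1000 base weight
print(round(governance_weight(10_000, 0)))      # 100 — 100x holdings, 10x weight
print(round(governance_weight(10_000, 36), 1))  # ~124.0 with the 36-month multiplier
```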
11. Web3 Architecture
Why Solana
The choice of Solana over other platforms is not a preference. It is a hard requirement given the transaction volumes that real-time robotic compute markets generate.
A single robot running NeuroMesh produces approximately 100 PoI receipts, 100 PoA certificates, and 20 data lot minting events every hour. That is 220 on-chain transactions per robot per hour. With 1,000 active robots, that is 220,000 transactions per hour, or approximately 61 per second across the whole fleet. Ethereum mainnet handles 15 transactions per second for the entire network at fees that average several dollars per transaction. A single robot’s hourly settlement cost on Ethereum would exceed \($1{,}000\). At Solana’s sub-\($0.001\) fee, the same activity costs \($0.055\) per hour per robot: genuinely negligible.
Solana’s 400-millisecond finality matters equally. The cCOMP compute market clears every block. Price signals need to propagate in sub-second time for robots bidding on tasks to receive accurate market information before the next cycle. Ethereum’s twelve-second block time means every compute market signal is 30 Solana blocks stale before it can even be included in a block, let alone reach finality.
The SPL token standard enables all seven NeuroMesh tokens to be atomically composed in a single transaction. An execution bundle combining cCOMP, nROBOT, and nDATA-R rights purchases, transfers, and settles atomically. On ERC-20 platforms this requires multiple contract calls across multiple blocks with multiple gas costs.
The Full Infrastructure Stack
Solana handles token issuance, settlement, governance, and receipt anchoring. Arweave provides permanent pay-once storage for CTV/A artefacts and lineage trees: its economic model, pay once and store forever, is ideal for training data records that must persist for as long as any model trained on them remains in use. IPFS and Filecoin handle content-addressed μToken catalogues with economic incentives for storage providers. Pyth Network provides real-time Solana-native price feeds for AMM pricing and NAV calculations. Wormhole will provide cross-chain bridges in Phase 4 to enable nDATA-R lots as collateral in Ethereum DeFi lending.
Progressive Decentralisation
NeuroMesh launches with a founding multisig council controlling protocol parameters and emergency functions. Governance weight transfers progressively to \($\text{NEURO}\) stakers over the first twelve months via Realms, Solana’s native DAO infrastructure. Tier 1 parameters covering safety caps and privacy reserves require a 67% supermajority with a seven-day voting period. Tier 2 parameters covering fees and emission curves require a simple majority with a three-day vote. Tier 3 operational parameters can be updated by the council within predefined bounds without a full governance vote.
12. Market Mechanics
How Compute Prices Clear
The cCOMP compute market, the nROBOT embodied time market, and the nDATA-R data market all use the same price-responsive update rule. Prices adjust every Solana block, approximately every 400 milliseconds, proportional to the imbalance between demand and supply at the current price.
\[\boxed{\lambda_{t+1} = \lambda_t \left(1 + \mu \cdot \frac{D(\lambda_t) - S(\lambda_t)}{S(\lambda_t)}\right)}\]
What this does in practice: When more compute is being demanded than the active robot fleet can supply, λ rises, which signals operators to put more robots online and signals task buyers to either accept the higher price or reduce their task volume. When the fleet is underutilised, λ falls, which increases demand and discourages capacity from idling. The step size μ is calibrated small enough that momentary demand spikes do not produce disruptive price swings, while large enough that sustained imbalances resolve within minutes rather than hours.
Why this matters for on-robot compute specifically: The cCOMP price is the real-time market signal for on-robot compute scarcity. A robot operator watching the cCOMP price at 82% network utilisation will see a premium above the base rate, signalling that the market values additional capacity right now. This is the financial incentive mechanism that drives fleet scaling without central coordination.
With a current cCOMP price of 0.003 \($\text{NEURO}\), demand running 25% above supply, and μ = 0.001, the price after one block is 0.003 × (1 + 0.001 × 0.25) = 0.0030075 \($\text{NEURO}\). Sustained over 200 blocks, roughly 80 seconds of continuous excess demand, the price reaches 0.003 × (1.00025)^200 ≈ 0.00315 \($\text{NEURO}\), a 5% premium. This is the automatic signal that brings new capacity online.
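The block-by-block update is a one-line rule; the sketch below reproduces the 200-block example:

```python
def next_price(price: float, demand: float, supply: float, mu: float = 0.001) -> float:
    """lambda_{t+1} = lambda_t * (1 + mu * (D - S) / S), applied once per block."""
    return price * (1 + mu * (demand - supply) / supply)

price = 0.003
for _ in range(200):                 # ~80 seconds of sustained 25% excess demand
    price = next_price(price, demand=1.25, supply=1.0)
print(round(price, 6))               # 0.003154 — the ~5% premium from the text
```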
Network Capacity and Stability
The network’s ability to serve incoming task requests without building an unbounded queue depends on the relationship between aggregate task arrival rate and aggregate robot service capacity. A simple way to reason about this: at 80% network utilisation with 1,000 active robots each handling one task per hour, a task arriving at a random moment will wait on average 14 seconds before being assigned. At 90% utilisation that wait grows to approximately 54 seconds. At 95% it exceeds three minutes. The cCOMP price mechanism prevents the network from operating near the instability region: rising prices reduce demand and encourage additional robots to come online before the queue becomes problematic.
13. Operator Economics
What One Robot Earns in a Day
An operator’s gross daily revenue from a single robot has two components: compute revenue from verified task execution, and data licensing revenue from nDATA-R lot sales.
\[\boxed{R_{\text{daily}} = h \cdot c \cdot P \cdot (1 - v) + D_{\text{lots}}}\]
The two revenue streams in plain terms: The first term is the money earned from performing verified work. h hours of operation at c credits per hour at the current cCOMP price P, minus the verification fee fraction v that goes to the evaluator committee and burn. The second term, D_lots, is the passive income from licensing the data the robot generated during those same hours. An operator earns from what their robot does and from what it knows, simultaneously.
Using Year 1 baseline figures for a Class A certified humanoid: 6.2 hours per day, 1,050 credits per hour, cCOMP price of 0.003 \($\text{NEURO}\), verification fee intensity of 15%. Compute revenue is 6.2 × 1,050 × 0.003 × 0.85 = 16.60 \($\text{NEURO}\) per day. Data licensing revenue from 20 lots at an average price of 0.04 \($\text{NEURO}\) with a 35% protocol take-rate contributes a further 0.52 \($\text{NEURO}\) per day. Total gross revenue per robot per day is approximately 17.12 \($\text{NEURO}\).
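The arithmetic can be checked directly:

```python
def daily_revenue(hours: float, credits_per_hour: float, price: float,
                  fee: float, lots: int, lot_price: float, take_rate: float) -> float:
    """R = h*c*P*(1 - v) + D_lots, using the Year 1 baseline from the text."""
    compute = hours * credits_per_hour * price * (1 - fee)
    data = lots * lot_price * (1 - take_rate)   # operator keeps 1 - protocol take
    return compute + data

r = daily_revenue(6.2, 1050, 0.003, 0.15, 20, 0.04, 0.35)
print(round(r, 2))  # 17.12 NEURO per robot per day
```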
Operators in high-rarity deployment contexts such as medical facilities, specialised manufacturing environments, and public-access spaces earn meaningfully higher data licensing revenue than operators in common warehouse contexts. Rarity is not a fixed property: as the network accumulates more experience in any given context, the rarity premium for that context gradually declines, creating ongoing incentive for operators to explore new deployment domains.
Fleet Projections
| Metric | Year 1 | Year 2 | Year 3 |
|---|---|---|---|
| Active robots | 1,200 | 3,600 | 9,000 |
| Average operational hours per day | 6.2 | 7.0 | 8.1 |
| Credits per hour | 1,050 | 1,250 | 1,480 |
| cCOMP price in \($\text{NEURO}\) | 0.0030 | 0.0036 | 0.0042 |
| Data lots per robot per day | 20 | 25 | 28 |
| Average lot price in \($\text{NEURO}\) | 0.04 | 0.05 | 0.06 |
| Protocol take-rate | 35% | 40% | 45% |
| Fleet daily gross (approximate) | 24,600 \($\text{NEURO}\) | 147,000 \($\text{NEURO}\) | 460,000 \($\text{NEURO}\) |
Year 1 arithmetic check: 1,200 robots × 6.2 hours × 1,050 credits × 0.003 \($\text{NEURO}\) per credit = 23,436 \($\text{NEURO}\) in compute gross. Plus 1,200 × 20 lots × 0.04 \($\text{NEURO}\) × 0.35 protocol share = 336 \($\text{NEURO}\) in data protocol share. Total approximately 23,772 \($\text{NEURO}\) per day at baseline, with the remainder to 24,600 accounted for by premium SLO mark-ups and rarity premia not captured in the base calculation.
Data Tranches as Yield Instruments
Individual nDATA-R lots can be bundled into structured tranches: financial instruments that sell fractional claims against a pool of lots’ combined future royalty cash flows. The concept is analogous to mortgage-backed securities pooling individual home loans into a tradable instrument. A data tranche pools individual robot experiences into a tradable yield product backed by the productive output of real machines doing real work.
The tranche’s Net Asset Value is the sum of the fair values of its constituent lots, reduced by a haircut reflecting oracle variance and legal risk, and adjusted for time-value decay on the oldest lots. Maximum token issuance against a tranche is capped at 85% of NAV, with 15% held as a redemption reserve. These instruments create a path for institutional capital to gain yield exposure to the robotic intelligence economy through a familiar structured-product format.
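A minimal sketch of the issuance cap, assuming an illustrative 10% haircut (the document specifies that a haircut for oracle variance and legal risk applies, but not its size) and omitting the time-value decay adjustment on older lots:

```python
def max_issuance(lot_values: list, haircut: float = 0.10, cap: float = 0.85) -> float:
    """Cap token issuance at 85% of haircut-adjusted NAV; 15% stays as reserve."""
    nav = sum(lot_values) * (1 - haircut)
    return cap * nav

print(round(max_issuance([19.17, 13.0, 8.4]), 2))  # issuance ceiling for a 3-lot tranche
```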
14. Competitive Landscape
NeuroMesh is not a compute network that happens to target robotics. It is not a data marketplace that handles robot data as one category among many. It is not a robotics operating system with a token added afterward. It is all three simultaneously, designed from the start for the specific requirements of physical AI, with on-robot compute as the foundational architectural choice that makes everything else possible.
Bittensor-style networks have genuine product-market fit for disembodied AI workloads: language models, image generation, and distributed GPU compute. Their architecture assumes that latency of 100 to 2,000 milliseconds is acceptable, which it is for language model inference but not for robot control loops that require sub-50-millisecond policy outputs. None of these networks have Cerebro-equivalent intelligence synthesis, CBC safety primitives, PoA verification, or data ownership infrastructure. They cannot be retrofitted with these capabilities because the architecture that makes low-latency robot control possible, on-robot compute with local safety enforcement, is incompatible with the remote miner architecture these networks depend on.
Traditional robotics platforms including ROS, ROS 2, and proprietary vendor stacks provide excellent hardware abstraction and low-level control but no economic coordination layer, no cross-operator intelligence sharing, and no data ownership mechanism. The intelligence that accumulates inside these platforms belongs entirely to the vendor.
Existing data marketplaces tokenise data ownership in a generic sense but lack every specialised primitive that robot experiential data requires: multimodal alignment, TEE-based lineage, CBC-anchored safety classes, federated learning integration, and Shapley-based royalty attribution. They treat data as a static commodity. NeuroMesh treats robot experience as a productive asset that generates ongoing yield.
The NeuroMesh position is straightforward: every robot on the network is more capable than any robot off it, from the moment it joins, and every operator earns from what their robot learns. No other protocol offers both simultaneously.
15. Roadmap
Phase 1: Foundation (Months 0 to 6)
NeuroOS-H reaches general availability with full TEE integration, CBC safety supervisor, and mesh daemon. The PoI and PoA verification committee launches with 50 or more staked evaluator nodes and a public receipts explorer. Cerebro alpha goes live as the first masternode cluster running federated aggregation across three skill domains. cCOMP minting and the cCOMP to \($\text{NEURO}\) AMM launch on Solana mainnet. Automated DOC and PLID generation begins for all NeuroOS-H robots, along with first-generation nDATA-R minting and ALR lane enforcement.
Phase 2: Markets (Months 6 to 12)
The public μToken lot marketplace opens with streaming licence and buyout options. Cerebro beta expands to 15 or more skill domains with a public skill library. Information gain oracles go live for ten reference task domains. The Shapley attribution service launches with a public royalty dashboard. Teleoperation pools open, allowing remote human operators to earn \($\text{NEURO}\) for safety override assistance. The utilisation-aware cCOMP AMM activates with demand-responsive pricing.
Phase 3: Intelligence Layer (Months 12 to 18)
Cerebro mainnet scales to support 1,000 or more robots with full fleet aggregation. Automatic Shapley-weighted royalty streaming goes live for all CTV/A reuse events. zkML verification queues open for sensitive deployment contexts including healthcare, finance, and security. The premium SLO market launches with guaranteed latency bounds and corresponding price premiums. Insurer-grade PoA audit packs and initial insurer partnerships are established. First nDATA-R tranches issue with NAV oracles.
Phase 4: Global Scale (Months 18 to 36)
Regional Cerebro clusters deploy in EU, US, and APAC with data residency-compliant ALR defaults. Cross-market task routing goes live across regional fleets with latency-aware placement. DeFi lending against nDATA-R collateral via Wormhole bridge launches. RWA basket indices provide institutional exposure to the robotic intelligence economy. Export-compliant data bundles enable cross-border AI training programs.
16. Risks
Technical
The most significant near-term technical risk is the cost of zkML verification for large inference circuits. Generating a zero-knowledge proof for a complex model can require 10 to 1,000 times the compute of the inference itself. NeuroMesh routes zkML to premium queues only initially, where operators explicitly pay for that level of assurance. As proof systems including Halo2 and Plonky3 continue improving, costs will fall and zkML will extend progressively across more of the network.
TEE security is a second assumption that requires monitoring. Spectre-class side-channel attacks could theoretically compromise attestation integrity. NeuroMesh uses multiple attestation layers and anomaly monitoring. Physical safety functions are entirely local and do not depend on attestation integrity at the control plane level.
Solana network availability is a structural dependency. Historical outages in 2021 and 2022 caused temporary disruption to Solana-dependent protocols. All physical safety functions in NeuroMesh are local and require no chain connectivity. Only economic settlement is affected by an outage.
Economic
Token price volatility creates operational cost uncertainty when protocol fees are denominated in \($\text{NEURO}\). The cCOMP floating rate mechanism provides a partial buffer. Stablecoin fee options are on the roadmap for Phase 2. The 80% initial float is designed to support active price discovery and deep markets from launch.
Data quality gaming is a persistent risk in open data markets. Diversity thresholds, similarity hashing, and staked evaluator spot audits create layered defences. Submitting near-duplicate lots across multiple minting events results in slashed stake.
Regulatory
GDPR, CCPA, PIPL, and emerging national AI regulations create complex cross-border compliance requirements. ALR defaults are configurable per region. Regional clusters with data residency guarantees are planned for Phase 4. Token classification in specific jurisdictions is actively monitored with legal counsel in key markets.
17. Conclusion
The robots are arriving. The hardware is real, the commercial demand is genuine, and the deployments are happening at scale. What the industry does not yet have is the intelligence infrastructure to make those robots genuinely capable over time, the economic infrastructure to make operators financially whole for the value they generate, or the verification infrastructure to build trusted financial products on top of robot performance.
NeuroMesh is that infrastructure, and on-robot compute is its defining architectural choice. Every formula in this document is a consequence of putting the compute on the machine. The CBC safety constraint runs on-board because cloud latency makes real-time safety enforcement physically impossible. The cCOMP token exists because on-robot inference is now a verifiable, attestable act that deserves its own economic primitive. The data ownership model works because the TEE on the robot is the only place where ownership can be anchored to physical reality. The federated learning architecture preserves privacy because the data never needs to travel to be learned from.
A robot on NeuroMesh is more capable than a robot off it from the moment it joins, gets smarter every day it operates, and earns for its operator from both the work it performs and the experience it accumulates. Cerebro synthesises what the entire fleet has learned into skills no individual operator’s deployment could discover alone. The network grows more valuable with every robot that joins, and the intelligence it produces is owned by the people who created it.
Every verified minute of on-robot compute deepens the collective memory, expands the skill library, and adds another data point to the growing body of evidence that physical AI can operate safely, productively, and at scale. The network is not a platform. It is an organism. And it grows smarter every time a robot takes a step.
NeuroMesh: the intelligence layer for on-robot compute, forming a decentralised superbrain.
18. Formula Reference
Each formula below appears in context earlier in the document. This section provides a compact reference with the plain-language purpose of each one.
On-Robot Compute Earnings (cCOMP Minting)
Translates verified on-robot FLOPs into compute credits. Higher-quality compute (lower latency, higher safety class) earns proportionally more. A robot delivering policy outputs in 38ms earns a 1.2x bonus over one delivering in 500ms, creating a direct financial incentive to invest in better on-board hardware.
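As a worked sketch of the minting logic: verified work times quality multipliers. The base rate, the safety-class multiplier, and the 50ms latency tier are illustrative assumptions; only the 1.2x fast-latency bonus itself is quoted above.

```python
# Illustrative cCOMP minting sketch. The base rate, safety multiplier, and
# 50 ms latency tier are assumptions; only the 1.2x bonus is quoted above.
def ccomp_minted(verified_tflops: float, latency_ms: float,
                 safety_multiplier: float = 1.0,
                 base_rate: float = 1.0) -> float:
    """Credits = base rate x verified TFLOPs x latency bonus x safety class."""
    latency_bonus = 1.2 if latency_ms <= 50 else 1.0  # hypothetical tier cut-off
    return base_rate * verified_tflops * latency_bonus * safety_multiplier

# A 38 ms robot out-earns a 500 ms robot by 20% on identical verified work.
print(ccomp_minted(100, 38))   # 120.0
print(ccomp_minted(100, 500))  # 100.0
```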
Emission Schedule
Controls daily \($\text{NEURO}\) emissions to operators and evaluators over time. Decays from a high launch rate to a permanent floor that keeps long-term participants financially motivated. At the default 730-day half-life, emissions are 55,000 \($\text{NEURO}\) per day at the two-year mark (50,000 decayed from 100,000 plus the 5,000 floor), and approach but never reach the floor in the long run.
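A minimal numeric sketch, assuming the curve is exponential half-life decay plus a permanent floor, the form consistent with the figures quoted above:

```python
# Emission sketch: half-life decay from 100,000 NEURO/day plus a permanent
# 5,000 NEURO/day floor. The functional form is an assumption consistent
# with the quoted 55,000/day figure at the 730-day mark.
E0, FLOOR, HALF_LIFE_DAYS = 100_000, 5_000, 730

def daily_emission(day: float) -> float:
    return FLOOR + E0 * 0.5 ** (day / HALF_LIFE_DAYS)

print(daily_emission(0))     # 105000.0: launch rate plus floor
print(daily_emission(730))   # 55000.0 at the two-year mark
print(daily_emission(7300))  # ~5097.7: approaches but never reaches the floor
```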
Governance Weight
Square-root weighting reduces the governance advantage of large token holders, while the loyalty multiplier rewards long-term stakers. A 100x token advantage yields only a 10x governance advantage. Someone staked continuously for three years has approximately 1.24x the governance weight of a new staker with the same holdings.
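A sketch of the weighting. The loyalty multiplier's exact form is not restated in this section; a hypothetical linear 8%-per-year bonus is assumed here only because it reproduces the quoted 1.24x figure at three years.

```python
import math

# Square-root governance weighting with an assumed linear loyalty bonus.
def governance_weight(stake: float, years_staked: float) -> float:
    loyalty = 1.0 + 0.08 * years_staked  # hypothetical loyalty form
    return math.sqrt(stake) * loyalty

# 100x more tokens -> only 10x more governance weight.
print(governance_weight(1_000_000, 0) / governance_weight(10_000, 0))    # 10.0
# Three years of continuous staking -> ~1.24x a fresh staker.
print(round(governance_weight(10_000, 3) / governance_weight(10_000, 0), 2))  # 1.24
```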
CBC Safety Constraint
\[\boxed{\frac{\partial h}{\partial x} \cdot f(x,\,u) \;\geq\; -\,\alpha \cdot h(x)}\]
Enforced on-board at the control plane, 500 to 2,000 times per second. h(x) is the current safety margin. The constraint prevents h(x) from ever going negative, meaning the safety boundary is mathematically unreachable as long as the supervisor runs. At a human distance of 0.8m where h = 0.3 with α = 2.0, the robot's approach speed is automatically capped at α · h = 0.6 m/s. At 0.5m it cannot close distance at all.
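For the straight-line approach case the constraint reduces to a speed cap of α · h(x), since closing distance at speed v gives ḣ = −v. A minimal sketch, assuming a 0.5m minimum separation so that h equals distance minus 0.5m, which matches the worked numbers above:

```python
# CBC speed cap for straight-line approach. With h(x) = distance - d_min
# and h_dot = -approach_speed, the constraint h_dot >= -alpha * h caps
# approach speed at alpha * h(x). d_min = 0.5 m is an assumption chosen
# to reproduce the quoted figures.
ALPHA, D_MIN = 2.0, 0.5

def max_approach_speed(distance_m: float) -> float:
    h = max(distance_m - D_MIN, 0.0)  # current safety margin
    return ALPHA * h                  # m/s cap enforced at the control plane

print(round(max_approach_speed(0.8), 2))  # 0.6 m/s, matching the example
print(round(max_approach_speed(0.5), 2))  # 0.0 m/s: cannot close distance
```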
Data Lot Fair Value
Prices a data lot by what it actually contributes to model quality (ΔI), how broadly it covers the task domain (Coverage), how rare that type of experience currently is in the network (Rarity), and the privacy and regulatory overhead it carries for buyers (Risk). Rarity scales the price up; Risk scales it down. Hospital corridor data at ΔI = 0.18, Coverage = 0.71, Rarity = 2.4, Risk = 1.6: FV = 100 × 0.18 × 0.71 × (2.4 / 1.6) = 19.17 \($\text{NEURO}\).
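A sketch of the pricing arithmetic, assuming the multiplicative form FV = k · ΔI · Coverage · (Rarity / Risk), with the base constant k = 100 NEURO inferred from the worked example:

```python
# Data lot fair value sketch. The multiplicative form and the k = 100
# base constant are inferred from the hospital-corridor worked example.
def data_lot_fair_value(delta_i: float, coverage: float,
                        rarity: float, risk: float,
                        k: float = 100.0) -> float:
    return k * delta_i * coverage * (rarity / risk)

print(round(data_lot_fair_value(0.18, 0.71, 2.4, 1.6), 2))  # 19.17 NEURO
```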
Royalty Attribution
φ_L(M) is the Shapley fraction for data lot L in model M, measuring L’s marginal contribution to M’s performance. ρ is the protocol royalty rate of 8%. R_M is the model’s revenue. A lot with a Shapley fraction of 0.163 contributing to a model earning 1,000 \($\text{NEURO}\) per month receives 0.163 × 0.08 × 1,000 = 13.04 \($\text{NEURO}\) per month in automatic royalty streaming.
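The stream is simply the product of the three quantities defined above:

```python
# Royalty stream: Shapley fraction x protocol royalty rate x model revenue.
def monthly_royalty(shapley_fraction: float, model_revenue: float,
                    protocol_rate: float = 0.08) -> float:
    return shapley_fraction * protocol_rate * model_revenue

print(round(monthly_royalty(0.163, 1_000), 2))  # 13.04 NEURO per month
```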
Network Capability Scaling
\[\boxed{C(D) = C_0 + \beta \cdot D^{\,\alpha}}\]
Describes how network-wide robot capability improves as verified experience accumulates, with D measured in millions of μToken units. C₀ is the simulation-only baseline. Operators joining early contribute to shaping the baseline that all later operators inherit, and earn royalties on every model trained on their early data. At D = 10 (ten million μToken units) from C₀ = 42 with β = 8.5, α = 0.70: C = 42 + 8.5 × 10^0.70 ≈ 84.6, double the simulation baseline.
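A sketch of the capability curve, with D in millions of μToken units (the scaling under which the quoted C = 84.6 figure holds):

```python
# Network capability curve C(D) = C0 + beta * D**alpha, D in millions of
# uToken units. Constants are the ones quoted in the worked example.
C0, BETA, ALPHA = 42.0, 8.5, 0.70

def capability(d_millions: float) -> float:
    return C0 + BETA * d_millions ** ALPHA

print(round(capability(10), 1))   # 84.6: double the simulation baseline
print(round(capability(100), 1))  # ~255.5: diminishing but unbounded returns
```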
Compute Market Price Clearing
Updates the cCOMP price every Solana block to reflect real-time demand relative to supply. Sustained excess demand raises the price, signalling the market to bring more on-robot capacity online. Sustained oversupply lowers it, reducing the cost of purchasing compute for task buyers. The step size μ = 0.001 means a persistent 25% demand surplus raises the price by approximately 5% over 80 seconds (roughly 200 blocks at Solana's ~400ms block time), gradual enough to avoid volatility while fast enough to clear imbalances.
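A sketch of the per-block update, assuming a multiplicative rule p ← p · (1 + μ · (D/S − 1)); the exact rule is not restated in this section, but this form reproduces the quoted ~5% move:

```python
# Per-block multiplicative price update sketch. The rule is an assumption;
# it reproduces the quoted ~5% move from a persistent 25% demand surplus
# over ~200 blocks (~80 s at 400 ms block times).
MU = 0.001

def step_price(price: float, demand: float, supply: float) -> float:
    return price * (1.0 + MU * (demand / supply - 1.0))

price = 1.0
for _ in range(200):  # ~80 seconds of Solana blocks
    price = step_price(price, demand=1.25, supply=1.0)
print(round(price, 4))  # ~1.0513, i.e. about +5%
```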
19. Glossary
ALR: Agentic Learning Rights. The programmable licence attached to every nDATA-R lot governing permitted uses, prohibited uses, and the cumulative privacy budget for information extraction.
CBC: Control Barrier Certificate. The mathematical safety constraint enforced on-board at the control plane ensuring a robot’s safety margin h(x) never goes negative.
Cerebro: NeuroMesh’s masternode intelligence aggregation layer. Ingests verified CTV/A streams from the fleet, synthesises cross-domain skill patterns, and distributes validated skill updates back to the library without accessing raw sensor data.
cCOMP: On-robot compute credit token. Minted only when a robot completes a verified inference cycle with a valid PoI receipt. Non-inflationary: supply is a real-time measure of verified productive work on the network.
CTV/A: Composite Thought and Action Vector. The verifiable artefact produced by one complete intelligence cycle, containing inputs, reasoning, action, PoI receipt, PoA certificate, safety certificates, lineage references, and royalty metadata.
DID: Decentralised Identifier. A W3C-standard identifier anchored on-chain and bound to hardware attestation. Every operator and every robot has one.
DOC: Data Ownership Certificate. The signed on-chain record proving operator ownership of an nDATA-R data lot.
μToken: Microtoken. The compact, privacy-preserving semantic representation of a raw sensor stream, used as input to the policy server and as the unit of data licensing.
nDATA-R: Per-robot data rights token. The RWA representing ownership of a specific robot’s captured experience over a specific time window, anchored by a DOC and PLID.
nENERGY: Time-of-use energy window token issued by grid oracles, used to schedule learning workloads during low-cost, low-carbon periods.
nROBOT: Embodied minutes token representing verified operational time of a specific robot at a specific safety class certification level.
PLID: Perception Lineage ID. A Merkle root over hashed perception logs signed inside the TEE, cryptographically tying on-chain ownership to physical sensor data.
PoA: Proof-of-Action. Cryptographic proof that physical actions executed within the declared safety envelope, with CBC certificates embedded in the artefact.
PoI: Proof-of-Inference. Cryptographic proof that on-robot inference ran on specific attested hardware, using a specific model, and produced a specific output.
RWA: Real-World Asset. A physical or contractual asset whose ownership is tokenised on-chain with cryptographic provenance.
Shapley value: A game-theoretic attribution method that distributes credit among multiple contributors according to their average marginal contributions across all possible orderings, used for royalty distribution across nDATA-R lot holders.
SLO: Service Level Objective. A guaranteed performance bound, typically an inference latency ceiling, that qualifies a robot for the corresponding cCOMP bonus tier.
TEE: Trusted Execution Environment. A hardware-isolated enclave within the robot’s processor that produces unforgeable attestations of hardware state, model identity, and sensor data provenance.
zkML: Zero-Knowledge Machine Learning. A cryptographic proof that a specific neural network inference occurred on specific inputs without revealing those inputs, used in premium verification queues for sensitive deployment contexts.
End of NeuroMesh Litepaper · Version 4.3 · 2026 · The Intelligence Layer for On-Robot Compute · Forming a Decentralised Superbrain