Turning Decentralization into Security
The Blockchain Trilemma
The blockchain trilemma — a fundamental challenge in blockchain design — posits that blockchain networks can only optimize for two out of three key properties: decentralization, security and scalability.
Major blockchains including Bitcoin and Ethereum continue to struggle with this trade-off, often sacrificing scalability to maintain security and decentralization. This has resulted in high transaction fees, slow confirmation times, and a limited throughput that hinders mainstream adoption. Others, like Solana, have chosen to sacrifice decentralization in the name of high TPS.
The Subspace Protocol, which underlies the Autonomys Network, takes a novel approach to resolving the blockchain trilemma by connecting decentralization to security through its proof-of-archival-storage (PoAS) consensus mechanism. The network will then achieve hyper-scalability as we implement our forthcoming scalability roadmap.

Decentralization = Security
Originally envisioned as the most decentralized blockchain in web3, Autonomys taps into the most accessible, commoditized hardware resource on the planet — disk storage. Every consumer computer has some storage capacity. Leveraging that idle resource for network security (instead of expensive ASICs or inegalitarian stake thresholds) unlocks unprecedented decentralization, as well as access to expansive distributed data storage.
Rather than treating decentralization and security as separate concerns, the storage-based Subspace Protocol recognizes that the network’s security is directly proportional to how distributed the storage is across independent participants. In implementing a one-disk-one-vote system, Autonomys ensures that the network becomes more secure as it becomes more decentralized, effectively collapsing two sides of the trilemma into one.
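As an illustrative model of the one-disk-one-vote idea (a sketch, not the actual PoAS election mechanism), block-proposal probability can be treated as proportional to pledged storage, so an attacker's influence on consensus is bounded by its share of total disk space. The farmer names and plot counts below are hypothetical:

```python
import random

def elect_proposer(plots: dict[str, int], rng: random.Random) -> str:
    """Pick a block proposer with probability proportional to pledged plots
    (a toy stand-in for a storage-weighted consensus lottery)."""
    total = sum(plots.values())
    r = rng.uniform(0, total)
    cumulative = 0
    for farmer, count in plots.items():
        cumulative += count
        if r <= cumulative:
            return farmer
    return farmer  # unreachable fallback for floating-point edge cases

# Hypothetical network: many small farmers plus one large attacker.
plots = {f"farmer{i}": 10 for i in range(70)}  # 700 plots across honest farmers
plots["attacker"] = 300                        # 30% of the 1,000 total plots

rng = random.Random(42)
trials = 100_000
attacker_wins = sum(elect_proposer(plots, rng) == "attacker" for _ in range(trials))
print(attacker_wins / trials)  # close to 0.30: influence tracks storage share
```

The attacker's empirical win rate converges to its storage fraction, which is the sense in which distributing storage across more independent participants directly strengthens security.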
Data: From Burden to Security Feature
What makes the Subspace Protocol particularly elegant is how it transforms what is typically seen as a blockchain’s greatest burden — storing the ever-growing history — into its primary security mechanism. Instead of viewing blockchain storage as a necessary evil that threatens decentralization, Autonomys makes it the cornerstone of network security.
Farmers (storage ‘miners’) create and store unique partial replicas of the chain’s history, with their ability to participate in consensus directly tied to their storage contribution. This means that the very act of securing the network also ensures its history remains distributed and available — a remarkable alignment of incentives that addresses both the farmer’s dilemma and the network’s need for robust data availability.
By reimagining the relationship between security, decentralization, and storage, the Autonomys Network offers a fresh perspective on scaling blockchain networks while maintaining their fundamental value propositions.
About Autonomys
The Autonomys Network — the foundation layer for AI3.0 — is a hyper-scalable decentralized AI (deAI) infrastructure stack encompassing high-throughput permanent distributed storage, data availability and access, and modular execution. Our deAI ecosystem provides all the essential components to build and deploy secure super dApps (AI-powered dApps) and on-chain agents, equipping them with advanced AI capabilities for dynamic and autonomous functionality.
X | LinkedIn | Discord | Telegram | Blog | Docs | GitHub | Forum | YouTube
Autonomys x Morphic: Securing Unstoppable Agents with TEE-based Private Compute
Autonomys is pleased to announce a strategic partnership with Morphic Network, an Actively Validated Services (AVS) provider and specialist in private, distributed compute and Trusted Execution Environment (TEE) technology.
Key Aspects of the Partnership
- Private Compute: Morphic’s TEE-based confidential compute protocol could be integrated into Autonomys’ future marketplace for compute resources to enable AI3.0 developers to run scalable, privacy-preserving computation directly on encrypted decentralized data stored within Autonomys distributed storage network (DSN) for end-to-end confidentiality — ideal for sensitive AI workloads.
- Unstoppable Agents: Auto Agent developers could combine Morphic’s programming language-agnostic TEE containers with Autonomys’ configurable domain chains for truly unstoppable, confidential on-chain AI agents.
- Rollups & Layer-3s: Autonomys could utilize Morphic’s secure, TEE-enabled compute infrastructure to implement private and efficient rollup solutions for chain execution, ensuring faster finality and reduced latency, and empowering Layer-3 (L3) networks.
“Partnering with Autonomys puts hyper-scalable execution and permanent storage for deAI at developers’ fingertips. Together, we are redefining the future of decentralized AI, breaking barriers in secure, scalable, and autonomous computation.” Liam Ren, Co-founder of Morphic
“Partnering with Morphic Network brings cutting-edge TEE-based compute to Autonomys, enabling secure, private, and scalable AI solutions. Together, we’re advancing unstoppable agents and efficient Layer-3 networks, driving the future of decentralized AI.” Parth Birla, Head of Partnerships at Autonomys
About Morphic
Morphic is a TEE-enabled appchain protocol for decentralized, privacy-preserving AI training, inference, dApps and agents. Morphic’s trustless AVS-as-a-service combines unprecedented security and instant finality and verifiability with Docker-based container-level programmability for efficient, customizable development and deployment. Morphic AI is its modular security layer of portals for the confidential verification of agent operations.
Autonomys x Multiple Network: Accelerating deAI Data Transmission for On-Chain Agents with DePIN
Autonomys is pleased to announce a strategic partnership with Multiple Network.
Key Aspects of the Partnership
- High-Speed Agents: Autonomys developers building Auto Agents could leverage Multiple Network’s bandwidth DePIN to ensure their on-chain AI agents operate at high speed and in a privacy-preserving manner.
- Low-Latency Training: Combining Multiple’s low-latency bandwidth DePIN with Autonomys’ distributed storage network (DSN) and decentralized compute domains would make large-scale, private decentralized AI training more accessible.
- AI3.0 Ecosystem: Multiple Network’s privacy-preserving bandwidth DePIN represents an important addition to Autonomys’ AI3.0 ecosystem as an integral component in the building and deployment of Auto Agents.
“Partnering with Autonomys brings a new dimension to decentralized AI. By integrating our privacy-preserving bandwidth DePIN with their robust storage and compute infrastructure, we’re enabling faster, more secure Auto Agents and making scalable decentralized AI training a reality. This collaboration strengthens the AI3.0 ecosystem and redefines what’s possible in on-chain AI.” Zeus Chen, Co-Founder of Multiple Network
“Our partnership with Multiple Network marks a breakthrough for decentralized AI, combining their low-latency, privacy-focused bandwidth with Autonomys’ storage and compute capabilities to power high-speed, secure Auto Agents and advance the AI3.0 ecosystem.” Parth Birla, Head of Partnerships at Autonomys
About Multiple Network
Multiple Network is a high-speed bandwidth DePIN with data acceleration and privacy protection for enhanced decentralized AI data transmission efficiency and security. Multiple Network aggregates the bandwidth of distributed nodes to create a programmable peer-to-peer software-defined wide area network (SD-WAN) that users can leverage for large-scale, private, low-latency encrypted data transmission via its API.
X | LinkedIn | Discord | Telegram | Blog | Docs | YouTube
Scaling Bandwidth with Near-Optimal Data Transfer
This document outlines the Autonomys Network’s bandwidth scalability designs for near-optimal data transfer, and briefly explains how the Autonomys Labs Research team evaluated and selected these approaches.
Scaling Computation & Bandwidth
In blockchain design, sharding is essential for achieving two critical scaling goals:
- Computation: Autonomys tackles computation scaling through the use of domains and domain operators. Domains are similar to Ethereum Layer-2s, while domain operators act as decentralized sequencers. Implementing decentralized sequencing from the very beginning was an integral part of our network design philosophy.
- Bandwidth: Autonomys addresses bandwidth scaling by leveraging verifiable erasure coding — a cutting-edge method that ensures data transfer at near-optimal efficiency. Below is a brief explanation of how it works and why it matters.
Scaling Bandwidth for Large-System Data Transmission
Imagine a blockchain network with 1,000,000 nodes, of which only 20% are reliable — staying connected and up-to-date, maintaining liveness, and contributing to the system. To manage data efficiently in this context, Autonomys applies erasure coding to domain bundles. This involves:
- Encoding the Data: The original data is transformed into slightly more than 500 coded chunks.
- Distribution: These coded chunks are distributed across nodes, one chunk per node.
- Recovery: Any 100 coded chunks are sufficient to reconstruct the original data (as the size of the original data is equal to the total size of 100 coded chunks).
By distributing more than 500 chunks to the nodes, we ensure that, in expectation, at least 100 reliable nodes (20% of recipients) receive a chunk and can successfully recover the bundle. If fewer than 500 chunks are distributed, fewer than 100 reliable nodes are likely to receive them, and recovery is likely to fail. This approach minimizes data transfer while maintaining robustness, achieving a near-optimal solution.
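The recovery claim can be sanity-checked with a simple binomial model: if each chunk recipient is independently reliable with probability 0.2, recovery succeeds when at least 100 of the N distributed chunks land on reliable nodes. The sketch below (illustrative parameters, not the protocol's actual ones) computes that probability exactly:

```python
from math import comb

def recovery_probability(n_chunks: int, k_needed: int = 100, p_reliable: float = 0.2) -> float:
    """P(at least k_needed of n_chunks land on reliable nodes), under Binomial(n, p)."""
    return sum(
        comb(n_chunks, k) * p_reliable**k * (1 - p_reliable) ** (n_chunks - k)
        for k in range(k_needed, n_chunks + 1)
    )

for n in (500, 550, 600):
    print(n, round(recovery_probability(n), 3))
```

Distributing exactly 500 chunks puts the expected number of reliable holders right at the 100-chunk threshold, so in practice a modest safety margin above 500 is what pushes the recovery probability close to 1.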
Autonomys’ Erasure Coding vs. Legacy Sharding
Comparing Autonomys’ erasure coding design with existing bandwidth scaling methods helps explain its significance.
In approaches without erasure coding, 1,000,000 nodes might be divided into 200 shards — each with 5,000 nodes — handling different bundles. However, this would require data transfer at a scale 1,000x greater than our design: each shard would have ~1,000 reliable nodes, despite only one being needed to retain a bundle. Increasing the number of shards (e.g., to 20,000) might appear to offer a solution, but practical constraints, including dynamic shard assignment to mitigate adaptive adversaries, make this infeasible. Most existing blockchains therefore cap shards at ~200. Even with erasure coding, dividing 1,000,000 nodes into 200 shards leads to data transfer levels that are still 1,000x higher than Autonomys’ design, as each shard has ~1,000 reliable nodes, and only one is needed to retain a coded chunk.
Autonomys’ near-optimal data transfer design — where the number of shards is unbounded — delivers throughput up to 1,000x higher than traditional scaling approaches. For every new bundle, a new shard is generated, consisting of more than 500 honest nodes. We eliminate the need for dynamic shard assignment by allowing for overlapping shards, and rely on mining and farming mechanisms to counter adaptive adversaries.
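The 1,000x figure is back-of-envelope arithmetic that can be reproduced directly, with the bundle size normalized to 1 and the shard and chunk counts taken from the example above:

```python
# Legacy sharding: every node in a 5,000-node shard receives the full bundle.
BUNDLE = 1.0
legacy_transfer = 5_000 * BUNDLE        # 5,000 bundle-equivalents moved per bundle

# Erasure-coded design: ~500 chunks, each 1/100 of the bundle, one chunk per node.
n_chunks = 505
chunk_size = BUNDLE / 100
coded_transfer = n_chunks * chunk_size  # ~5 bundle-equivalents moved per bundle

print(legacy_transfer / coded_transfer)  # roughly 1,000x less data moved
```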
Conclusion
Autonomys’ innovative approach to scaling bandwidth through verifiable erasure coding represents a paradigm shift in blockchain technology. By combining minimal data transfer with robust recovery guarantees, Autonomys is setting a new standard for throughput and efficiency. Our design doesn’t just look good on paper — it’s built for the demands of the AI3.0 future.
Stay tuned as we continue to push the boundaries of what’s possible in web3 x AI scalability.
Autonomys x DIN: Scaling deAI Data Infrastructure
Autonomys is pleased to announce a strategic partnership with DIN.
Key Aspects of the Partnership
- Data Availability & Storage: DIN collects and processes large amounts of data that need secure, scalable storage. Autonomys’ decentralized storage network (DSN) offers highly decentralized, permanent and temporary storage, which could provide DIN with an alternative or complementary solution to its current provider. Autonomys’ ability to handle data with high throughput across decentralized infrastructure could also enhance DIN’s data storage, retrieval and security processes.
- AI Model Integration: DIN could leverage Autonomys’ compute partnerships, allowing its AI models and data to be processed using powerful clusters (including A100s and H100s) integrated within the Autonomys AI3.0 ecosystem. This would allow DIN to scale its AI data services more effectively.
- Cross-Chain Interoperability: Autonomys and DIN share a vision of a decentralized AI infrastructure spanning multiple blockchains with built-in cross-chain interoperability. Autonomys’ multi-domain architecture of decoupled execution (DecEx) environments is well equipped to implement this vision. Both projects will collaborate to ensure that data and AI services are interoperable across different ecosystems, enhancing the developer experience and overall scalability of deAI.
“We’re excited to partner with Autonomys to scale decentralized AI infrastructure. Their cutting-edge storage and AI compute capabilities will strengthen our mission to build a robust ecosystem for AI training and agent development. Together, we aim to shape the future of secure, interoperable, and scalable AI-powered decentralized applications.” Harold, Founder of DIN
“DIN’s data infrastructure will form an integral component of Autonomys’ AI3.0 ecosystem for decentralized AI training and on-chain agent development, utilizing our unique distributed storage network, data availability technology, and scalable modular compute to permanently store, process and retrieve training and workflow data.” Parth Birla, Head of Partnerships at Autonomys
About DIN
DIN — the data intelligence network — is a decentralized data contribution and preprocessing layer incentivizing users to collect, validate, annotate and vectorize data for AI training and model development. DIN provides its ecosystem with continuous on-chain and off-chain data feeds and is building towards a blockchain for AI agents and decentralized AI applications (dAI-Apps).
X | LinkedIn | Discord | Telegram | Blog | Docs | GitHub | YouTube
Autonomous Agents on the Autonomys Network: Argu-mint Demo
Autonomys Labs is pleased to present a demonstration of Argu-mint, a proof-of-concept showcasing how the Autonomys Network enables developers to build transparent, autonomous on-chain AI agents with contextual awareness using our open-source tooling.
The Argu-mint demo and accompanying breakdown highlight how builders can use our Auto-Agents-Framework and Decentralized Storage Network (DSN) to create truly autonomous agents with verifiable, permanent memory.
Verifiability is key to Autonomys’ vision of a human-centric AI3.0 ecosystem, where collaboration, decentralization and censorship-resistance are prioritized.
Introduction to Argu-mint (0:00–1:08)
What You’ll See:
Jeremy Frank, Head of Engineering, introduces Argu-mint, the first autonomous agent leveraging the Autonomys Network. The segment highlights the agent’s core innovation: a permanent on-chain memory that enables fully autonomous, context-aware decision-making. Jeremy outlines the limitations of current centralized memory systems, including their vulnerability to tampering, censorship, and hardware failures.
Why It Matters:
Argu-mint represents a significant leap forward for decentralized AI. By using the Autonomys Network, agents can achieve:
- Immutable memory: Ensuring transparency and accountability.
- Resilience: Eliminating single points of failure.
- Autonomy: Operating independently of centralized control.
These capabilities provide developers with a robust foundation on which to build trustworthy and tamper-proof autonomous agents.
Argu-mint’s Decision-Making Process (1:09–3:15)

What You’ll See:
Argu-mint evaluates tweets and makes autonomous decisions based on a multi-step process, which includes:
- Scanning mentions and updated timelines.
- Analyzing posts from key opinion leaders (KOLs).
- Assessing relevance and tone for potential engagement.
- Evaluating if the response aligns with specific criteria.
Why It Matters:
This process demonstrates the technical sophistication of Argu-mint’s decision-making framework. By enabling agents to autonomously analyze context and generate appropriate responses, developers can build agents that engage in meaningful interactions tailored to specific applications, such as customer support, market analysis, social media moderation, and much more.
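A multi-step engagement filter like the one described can be sketched as a chain of predicate checks. The criteria, keyword lists, and KOL names below are hypothetical illustrations, not Argu-mint's actual rules:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    is_mention: bool

KOLS = {"alice", "bob"}                  # hypothetical key opinion leaders
TOPICS = ("blockchain", "ai", "agents")  # hypothetical relevance keywords

def should_engage(post: Post) -> bool:
    """Run the post through each gate in order; engage only if all pass."""
    in_scope = post.is_mention or post.author in KOLS       # scanning step
    relevant = any(t in post.text.lower() for t in TOPICS)  # relevance step
    civil = "spam" not in post.text.lower()                 # toy tone check
    return in_scope and relevant and civil

print(should_engage(Post("alice", "New AI agents launch!", False)))  # True
print(should_engage(Post("mallory", "buy spam tokens", True)))       # False
```

A real agent would replace the keyword predicates with model-driven scoring, but the control flow, a sequence of gates each of which can veto engagement, is the same.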
Agent Memory Viewer (3:16–5:04)

What You’ll See:
This segment introduces the Agent Memory Viewer, which visualizes Argu-mint’s complete memory chain. The memory viewer displays each interaction chronologically, linking every memory to its predecessor. This transparency is further supported by the Autonomys Network’s block explorer, where users can query each permanently stored memory.
Why It Matters:
A chronological memory chain ensures that all agent interactions are verifiable and auditable, providing a level of transparency critical for applications in compliance, research, and development. Developers can use this feature to study agent behavior, improve algorithms, and even resurrect agents by reconstructing their memory history.
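One plausible shape for such a chain (a sketch under assumed record fields, not the actual on-chain format) links each memory record to the hash of its predecessor, so tampering with any past memory is detectable by re-walking the chain:

```python
import hashlib
import json

def record(content: str, prev_hash: str) -> dict:
    """Create a memory record whose hash commits to its content and predecessor."""
    body = {"content": content, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and check each record points at its predecessor."""
    prev = "genesis"
    for rec in chain:
        body = {"content": rec["content"], "prev": rec["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
prev = "genesis"
for event in ("saw mention", "analyzed tone", "replied"):
    rec = record(event, prev)
    chain.append(rec)
    prev = rec["hash"]

print(verify(chain))              # True
chain[1]["content"] = "tampered"  # mutate one memory...
print(verify(chain))              # ...and verification fails: False
```

Storing each record permanently on the DSN is what turns this tamper-evidence into tamper-resistance: the canonical chain can always be re-fetched and re-verified.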
Argu-mint Analyzing a Post & Awareness of Its Immortality (5:05–6:35)

What You’ll See:
Argu-mint analyzes a specific post, evaluates its engagement strategy, and stores the interaction on-chain. This segment also explores the concept of agent immortality, where a permanent memory ensures the agent’s history can be preserved, revisited, and even leveraged for future use.
Why It Matters:
The ability to immortalize an agent’s memory opens doors for advanced applications and capabilities, such as:
- Agent-specific fine-tuning: Using historical data to enhance and tailor AI models for specific applications.
- Behavioral auditing and analysis: Providing verifiable insights into agent actions and decision-making processes.
- Resilience to failures: Safeguarding against data loss from hardware or network disruptions.
Additionally, Argu-mint’s awareness of its own immortality is a fascinating concept. It introduces a unique psychological dimension into AI development — allowing for systems that “know” their data will persist indefinitely. This awareness could influence how agents interact with the world, potentially prioritizing long-term outcomes and fostering ethical considerations in AI behavior. It’s a critical step toward building systems that are not just autonomous but also capable of evolving responsibly within decentralized frameworks.
Use Cases & Advantages (6:36–8:44)

What You’ll See:
Jeremy discusses practical applications for autonomous agents with permanent memory, including:
- Entertainment: Creating engaging and dynamic AI personas.
- Transparency Studies: Enabling verifiable research into AI behavior.
- Censorship Resistance: Ensuring agents operate independently of centralized entities.
Why It Matters:
These use cases highlight the practical implications of Autonomys’ infrastructure, empowering developers to build applications that balance autonomy, verifiability and censorship resistance.
Autonomys Agent Roadmap (8:45–9:54)

What You’ll See:
This section outlines the future of autonomous agents on the Autonomys Network. Key advancements include:
- Decentralized inference for private AI computation.
- Identity frameworks for secure agent authentication.
- Rich on-chain interactions for enhanced functionality.
Why It Matters:
These developments reinforce Autonomys’ commitment to building a collaborative and scalable ecosystem that prioritizes developer needs, privacy, and decentralization.
Explore Argu-mint & Auto-Agents-Framework v0 (9:55–End)

What You’ll See:
Jeremy concludes by introducing the Auto-Agents-Framework v0, an open-source toolkit designed to enable developers to build autonomous agents with features such as:
- Customizable personalities for tailored interactions.
- Permanent memory storage for verifiable transparency.
- Extensible tools for integration across platforms.

Why It Matters:
The Auto Agents framework offers developers a versatile foundation on which to build on-chain AI agents that align with their specific goals, whether in research, business, or entertainment.
Interested in Building Your Own Auto Agent?
🧑💻 Check out the Auto Agents Framework on GitHub
🔗 Build an Auto Agent or super dApp proof-of-concept and enter the Auto Horizon Developer Challenge