The Integration of AI and DePIN: The Rise of Decentralized GPU Networks Leading a New Revolution in Computing Resources
Since 2023, AI and DePIN have been hot topics in the Web3 space, with the AI sector's market capitalization reaching $30 billion and DePIN's standing at $23 billion. Each category spans a wide range of protocols serving different fields and needs. This article focuses on their intersection and examines how protocols in this area are developing.
Within the AI tech stack, DePIN networks support AI by supplying computing resources. Heavy GPU demand from major tech companies has created supply shortages, making it difficult for other developers to obtain enough GPUs for AI model training. Developers are often forced to turn to centralized cloud providers, where inflexible long-term contracts for high-performance hardware reduce efficiency.
DePIN offers a more flexible and cost-effective alternative. It uses token rewards to incentivize resource contributions that align with network objectives. In the AI domain, DePIN aggregates GPU resources from individual owners into a shared pool, providing a unified supply for users who need hardware. These networks not only offer developers customizable, on-demand computing power but also create an additional income stream for GPU owners.
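To make the incentive flow concrete, here is a purely illustrative Python sketch. The provider names, prices, reputation scores, and emission bonus are made up and are not taken from any specific protocol; the point is only to show how a DePIN marketplace might route an on-demand job to the cheapest reliable GPU and pay the provider in fees plus token rewards.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    gpu: str
    hourly_price: float   # provider's asking price, in USD-equivalent tokens
    reputation: float     # 0..1, e.g. uptime history

def match_job(providers, min_reputation=0.9):
    """Pick the cheapest provider that meets the reliability bar,
    mirroring how a DePIN marketplace might route on-demand compute."""
    eligible = [p for p in providers if p.reputation >= min_reputation]
    return min(eligible, key=lambda p: p.hourly_price) if eligible else None

def token_reward(provider, hours, emission_bonus=0.10):
    """Provider earns the job fee plus a protocol token emission bonus
    (rate is a hypothetical placeholder) for contributing useful resources."""
    fee = provider.hourly_price * hours
    return fee, fee * emission_bonus

providers = [
    Provider("home-rig-4090", "RTX 4090", 0.40, 0.95),
    Provider("dc-a100", "A100 80GB", 1.40, 0.99),
    Provider("idle-3080", "RTX 3080", 0.20, 0.70),   # cheap but unreliable
]
winner = match_job(providers)
fee, bonus = token_reward(winner, hours=8)
print(f"{winner.name} earns ${fee:.2f} in fees plus {bonus:.2f} in token incentives")
```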
A number of AI DePIN networks are on the market, each with its own characteristics. The following sections introduce the features and development status of the major projects.
AI DePIN Network Overview
Render
Render is a pioneer in P2P GPU computing networks. It initially focused on graphics rendering for content creation and later expanded into AI computing tasks by integrating tools such as Stable Diffusion.
Akash
Akash is positioned as a "super cloud" platform that supports storage, GPU, and CPU computing, serving as an alternative to traditional cloud service providers. By utilizing a container platform and Kubernetes-managed computing nodes, it enables seamless deployment of various cloud-native applications across environments.
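As a rough illustration of the kind of containerized workload such a "super cloud" schedules, the Python sketch below assembles a hypothetical deployment spec and prints it as YAML. The field names are illustrative only and do not follow Akash's actual SDL/manifest schema; the image name and pricing values are placeholders.

```python
import yaml  # pip install pyyaml

# Hypothetical deployment request: one GPU-backed container exposed publicly,
# with a maximum price the tenant is willing to pay. Real Akash SDL differs.
deployment = {
    "services": {
        "inference-api": {
            "image": "ghcr.io/example/llm-server:latest",   # hypothetical image
            "expose": [{"port": 8000, "as": 80, "to": [{"global": True}]}],
        }
    },
    "resources": {
        "inference-api": {"cpu": 8, "memory": "32Gi", "gpu": {"units": 1, "model": "a100"}},
    },
    "pricing": {"inference-api": {"denom": "uakt", "amount": 10000}},  # max bid (placeholder)
}
print(yaml.safe_dump(deployment, sort_keys=False))
```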
io.net
io.net provides access to distributed GPU cloud clusters designed specifically for AI and ML use cases. It aggregates GPU resources from data centers, crypto miners, and other decentralized networks.
Gensyn
Gensyn focuses on a GPU network for machine learning and deep learning computation. It employs an innovative verification mechanism that combines proof of learning, a graph-based pinpoint protocol, and staking-and-slashing incentive games.
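The sketch below is a heavily simplified, hypothetical illustration of the staking-and-slashing idea: a solver publishes training checkpoints as evidence of work, and a verifier re-executes one randomly chosen step, slashing the stake on a mismatch. Gensyn's actual protocol (probabilistic proof of learning plus the graph-based pinpoint game) is far more sophisticated than this toy.

```python
import random

def train_step(weights, data_batch, lr=0.1):
    # Deterministic toy "training step" so a verifier can reproduce it exactly.
    grad = sum(data_batch) / len(data_batch)
    return [w - lr * grad for w in weights]

def solver_run(weights, batches):
    """Solver trains and publishes intermediate checkpoints as a proof of learning."""
    checkpoints = []
    for batch in batches:
        weights = train_step(weights, batch)
        checkpoints.append(list(weights))
    return checkpoints

def verify(initial_weights, batches, checkpoints, stake=100):
    """Verifier re-executes one randomly chosen step; a mismatch slashes the stake."""
    i = random.randrange(len(batches))
    prev = initial_weights if i == 0 else checkpoints[i - 1]
    recomputed = train_step(prev, batches[i])
    return stake if recomputed == checkpoints[i] else 0   # remaining stake after slashing

w0 = [0.5, -0.2]
batches = [[1.0, 3.0], [2.0, 2.0], [0.0, 4.0]]
honest = solver_run(w0, batches)
print("stake after honest check:", verify(w0, batches, honest))

lazy = [list(w0)] * len(batches)   # a lazy solver that never actually trained
print("stake after lazy check:", verify(w0, batches, lazy))
```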
Aethir
Aethir specializes in deploying enterprise-grade GPUs and focuses on computation-intensive fields such as AI, machine learning, and cloud gaming. Containers in its network act as virtual endpoints for running cloud applications, shifting workloads from local devices into the containers to deliver a low-latency experience.
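As a purely illustrative sketch (endpoint names and the latency probe are made up, and this is not Aethir's API), the snippet below shows a thin client choosing the lowest-latency container endpoint and offloading work to it.

```python
import random

# Hypothetical container endpoints; in practice these would come from a discovery service.
ENDPOINTS = ["container-sgp-01", "container-fra-02", "container-nyc-03"]

def measure_latency_ms(endpoint: str) -> float:
    return random.uniform(5, 80)    # stand-in for a real ping/health probe

def offload(payload: bytes) -> str:
    best = min(ENDPOINTS, key=measure_latency_ms)
    # A real client would make an RPC to the chosen container; here we just report it.
    return f"sent {len(payload)} bytes to {best}"

print(offload(b"frame-0001"))
```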
Phala Network
Phala Network, as the execution layer of Web3 AI solutions, provides a trustless cloud computing solution. Its blockchain utilizes a trusted execution environment (TEE) to address privacy issues, allowing AI agents to be controlled by on-chain smart contracts.
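To illustrate the control flow only, the hypothetical sketch below has an off-chain "TEE worker" return an attested result that an on-chain contract stub verifies before accepting it. Real TEE remote attestation (hardware quotes, key provisioning) is far more involved, and none of the names or checks here reflect Phala's actual API.

```python
import hashlib, hmac, json

# In a real TEE the signing key is provisioned inside the enclave and never leaves it.
WORKER_SECRET = b"provisioned-inside-the-enclave"

def tee_execute(task: dict) -> dict:
    output = {"task_id": task["id"], "answer": sum(task["numbers"])}   # the "AI agent" work
    payload = json.dumps(output, sort_keys=True).encode()
    attestation = hmac.new(WORKER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"output": output, "attestation": attestation}

def onchain_contract_accepts(result: dict) -> bool:
    # Stand-in for a smart contract that only acts on properly attested results.
    payload = json.dumps(result["output"], sort_keys=True).encode()
    expected = hmac.new(WORKER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, result["attestation"])

result = tee_execute({"id": 1, "numbers": [3, 4, 35]})
print("contract accepts result:", onchain_contract_accepts(result), result["output"])
```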
Project Comparison
| Project | Hardware | Business Focus | AI Task Type | Job Pricing | Blockchain | Data Privacy | Job Cost | Security | Completion Proof | Quality Assurance | GPU Cluster |
|---------|----------|----------------|--------------|-------------|------------|--------------|----------|----------|------------------|-------------------|-------------|
| Render | GPU & CPU | Graphics rendering and AI | Inference | Performance-based pricing | Solana | Encryption & hashing | 0.5-5% per job | Render proof | - | Dispute resolution | No |
| Akash | GPU & CPU | Cloud computing, rendering, and AI | Both | Reverse auction | Cosmos | mTLS authentication | 20% USDC, 4% AKT | Proof of Stake | - | - | Yes |
| io.net | GPU & CPU | AI | Both | Market pricing | Solana | Data encryption | 2% USDC, 0.25% reserve fee | Proof of Computation | Proof of Time-Lock | - | Yes |
| Gensyn | GPU | AI | Training | Market pricing | Gensyn | Secure mapping | Low cost | Proof of Stake | Proof of Learning | Verifiers and whistleblowers | Yes |
| Aethir | GPU | AI, cloud gaming, and telecommunications | Training | Bidding system | Arbitrum | Encryption | 20% per session | Proof of rendering capacity | Proof of rendering work | Checker nodes | Yes |
| Phala | CPU | On-chain AI execution | Execution | Stake-based calculation | Polkadot | TEE | Proportional to staked amount | Inherited from relay chain | TEE proof | Remote attestation | No |
The Importance of Clusters and Parallel Computing
Training complex AI models requires substantial computing power, which typically relies on distributed computing. By implementing GPU clusters within a distributed computing framework, these networks deliver more efficient training and better scalability, and most projects have integrated clusters to support parallel computing and meet market demand.
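As a concrete, framework-level example of GPU-cluster parallelism (independent of any particular DePIN network), the minimal PyTorch DistributedDataParallel sketch below shards data across GPUs and averages gradients on every step. The model, data, and launch parameters are placeholders.

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Launch with: torchrun --nproc_per_node=<num_gpus> ddp_sketch.py  (requires CUDA GPUs)
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).cuda()
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for step in range(10):
        # Each rank processes its own shard of data; DDP averages gradients
        # across all GPUs during backward(), keeping the replicas in sync.
        x = torch.randn(32, 1024, device=local_rank)
        loss = ddp_model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if dist.get_rank() == 0:
        print(f"ran {step + 1} synchronized steps on {dist.get_world_size()} GPUs")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```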
Data Privacy Protection
AI model training requires large datasets, which may contain sensitive information. To this end, various projects adopt different data privacy protection methods. Most projects use data encryption, while io.net has also introduced fully homomorphic encryption (FHE), and Phala Network employs a trusted execution environment (TEE). These measures aim to protect data privacy while allowing the data to be used for training purposes.
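For intuition on how encrypted data can still be computed on, here is a toy Paillier-style additively homomorphic sketch in Python. It is not io.net's or Phala's implementation, it supports only addition (unlike full FHE), and the key sizes are deliberately tiny and insecure; it exists purely to show that ciphertexts can be combined without revealing the underlying values.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=293, q=433):
    # Toy primes; real deployments use 2048-bit+ keys.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                          # standard simplification for Paillier
    n2 = n * n
    x = pow(g, lam, n2)
    mu = pow((x - 1) // n, -1, n)      # mu = (L(g^lam mod n^2))^-1 mod n, L(x) = (x-1)//n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)      # homomorphic addition: Enc(17) * Enc(25) = Enc(42)
assert decrypt(pub, priv, c_sum) == 42
print("decrypted sum:", decrypt(pub, priv, c_sum))
```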
Completion Proof and Quality Inspection
To ensure service quality, multiple projects have introduced proof of computation completion and quality inspection mechanisms. Gensyn and Aethir generate proof of work completion, while io.net certifies that GPU performance is fully utilized. Gensyn and Aethir also have quality inspection mechanisms, and Render employs a dispute resolution process. These measures help guarantee the quality and reliability of computational services.
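A minimal, hypothetical sketch of the underlying commit-and-spot-check pattern: the worker commits to a digest of its output, independent checker nodes recompute the job and vote, and a mismatch triggers a dispute. None of the projects above necessarily implements it exactly this way; the job and node names are placeholders.

```python
import hashlib
from collections import Counter

def output_digest(result: bytes) -> str:
    """Completion proof: a digest of the job output that the worker commits to."""
    return hashlib.sha256(result).hexdigest()

def spot_check(worker_digest: str, checkers: list) -> str:
    """Quality check: independent checker nodes recompute the job and vote;
    a mismatch with the majority raises a dispute (simplified)."""
    votes = Counter(output_digest(run()) for run in checkers)
    majority_digest, _ = votes.most_common(1)[0]
    return "accepted" if worker_digest == majority_digest else "disputed"

correct_job = lambda: b"render-frame-0042"            # deterministic toy workload
honest_digest = output_digest(correct_job())
checkers = [correct_job, correct_job, correct_job]
print(spot_check(honest_digest, checkers))            # accepted

cheating_digest = output_digest(b"low-effort-output") # dishonest worker
print(spot_check(cheating_digest, checkers))          # disputed
```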
Hardware Statistics
| Project | Number of GPUs | Number of CPUs | Number of H100/A100 | H100 Cost/Hour | A100 Cost/Hour |
|---------|----------------|----------------|---------------------|----------------|----------------|
| Render | 5600 | 114 | - | - | - |
| Akash | 384 | 14672 | 157 | $1.46 | $1.37 |
| io.net | 38177 | 5433 | 2330 | $1.19 | $1.50 |
| Gensyn | - | - | - | - | $0.55 (est.) |
| Aethir | 40000+ | - | 2000+ | - | $0.33 (est.) |
| Phala | - | 30000+ | - | - | - |
Demand for High-Performance GPUs
AI model training requires the highest-performance GPUs, such as NVIDIA's A100 and H100. These high-end GPUs deliver the best training quality and speed, but they are expensive. Decentralized GPU marketplace providers need to strike a balance between offering a sufficient number of high-performance GPUs and keeping prices competitive.
Currently, projects such as io.net and Aethir have each acquired over 2,000 H100 and A100 units, making them better suited to large-model computation. The cost of these decentralized GPU services is already lower than that of centralized GPU services, though it will take time to validate them at scale.
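A back-of-the-envelope comparison using the H100 rate from the table above; the centralized on-demand rate is an assumed placeholder for illustration, not a quoted price.

```python
# Compare the cost of 1,000 hours on an 8x H100 node at decentralized vs. assumed centralized rates.
gpu_hours = 8 * 1000
decentralized_h100 = 1.19            # io.net H100 $/hour (from the table above)
centralized_h100_assumed = 4.00      # assumed centralized on-demand $/hour (placeholder)

decentralized_cost = gpu_hours * decentralized_h100
centralized_cost = gpu_hours * centralized_h100_assumed
print(f"decentralized: ${decentralized_cost:,.0f}")
print(f"centralized (assumed rate): ${centralized_cost:,.0f}")
print(f"savings: {1 - decentralized_cost / centralized_cost:.0%}")
```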
The Role of Consumer-Grade GPUs and CPUs
Although high-end GPUs dominate demand, consumer-grade GPUs and CPUs also play an important role in AI model development. They can handle data preprocessing and memory-resource management, and they can fine-tune or train small-scale models on top of pre-trained ones. Projects such as Render, Akash, and io.net also serve this market segment, giving developers more options.
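For example, fine-tuning a small pretrained model fits comfortably on a consumer GPU. The PyTorch sketch below freezes a ResNet-18 backbone and trains only a new classification head; the class count and dummy batch are placeholders standing in for a real dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Small pretrained backbone (weights downloaded on first run).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers; only the new head is trained.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)   # e.g. a 10-class downstream task
model = model.to(device)

optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for a real DataLoader.
images = torch.randn(8, 3, 224, 224, device=device)
labels = torch.randint(0, 10, (8,), device=device)

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning step loss: {loss.item():.4f}")
```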
Conclusion
The AI DePIN field, although still in its early stages of development, has already shown great potential. These decentralized GPU networks are effectively addressing the supply-demand imbalance of AI computing resources. With the rapid growth of the AI market, these networks will play a key role in providing developers with cost-effective computing alternatives, making significant contributions to the future landscape of AI and computing infrastructure.