FAR Labs has opened node registrations for FAR AI, its decentralized inference network, a project that intends to tap into an estimated 3 billion idle GPUs worldwide and take some of the load off centralized data centers. By allowing individual consumer and enterprise GPU owners to lease their spare compute capacity, the project aims to create a more accessible and distributed infrastructure for artificial intelligence development.
FAR's platform functions by intelligently routing AI inference requests, that is, the process in which a trained model makes predictions or generates content, to the most suitable nodes within its global network. For larger, memory-intensive jobs, FAR AI's orchestration system can group multiple compatible nodes to execute workloads in parallel.
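FAR AI has not published its scheduling algorithm, but the routing described above can be sketched in a few lines: prefer the single lowest-latency node whose GPU memory fits the model, and fall back to grouping several low-latency nodes for a parallel run. The `Node` fields, the `route` function, and the greedy grouping strategy below are all illustrative assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    vram_gb: int       # GPU memory available on the node (assumed metric)
    latency_ms: float  # measured round-trip latency to the node (assumed metric)

def route(nodes: list[Node], model_vram_gb: int) -> list[Node]:
    """Pick the lowest-latency node that fits the model, or greedily
    group nodes until their combined VRAM is enough for a parallel run."""
    candidates = sorted(nodes, key=lambda n: n.latency_ms)
    # Fast path: a single node with sufficient memory.
    for n in candidates:
        if n.vram_gb >= model_vram_gb:
            return [n]
    # Fallback: accumulate low-latency nodes for a sharded workload.
    group, total = [], 0
    for n in candidates:
        group.append(n)
        total += n.vram_gb
        if total >= model_vram_gb:
            return group
    raise RuntimeError("not enough aggregate VRAM in the network")
```

A real scheduler would also weigh bandwidth between grouped nodes and reliability history, but the two-tier decision (one node if possible, a group otherwise) is the core idea.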
Echoing distributed-computing projects like Folding@Home and SETI@Home, this compute-sharing model effectively turns high-end gaming rigs and underutilized office workstations into active contributors to the AI economy, while offering owners, on paper at least, a steady stream of passive income in exchange for their hardware's power. How much each contributor gets paid depends on variables such as GPU type, hours of availability, and local electricity costs.
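The variables listed above imply a simple earnings calculation: gross revenue from the hourly GPU rate, minus the electricity consumed while leasing. FAR AI has not published rates, so every number in this sketch (the hourly rate, power draw, and electricity price) is a hypothetical placeholder.

```python
def estimate_payout(gpu_rate_per_hour: float, hours_available: float,
                    power_kw: float, electricity_cost_per_kwh: float) -> float:
    """Net earnings for one availability window:
    (rate * hours) minus (power draw * hours * electricity price).
    All inputs are illustrative, not published FAR AI prices."""
    gross = gpu_rate_per_hour * hours_available
    power_cost = power_kw * hours_available * electricity_cost_per_kwh
    return gross - power_cost

# Hypothetical example: a high-end card leased 8 h/day at $0.30/h,
# drawing 0.35 kW with electricity at $0.15/kWh:
net = estimate_payout(0.30, 8, 0.35, 0.15)  # roughly $1.98/day net
```

The same arithmetic explains why payouts vary so much by region: at high electricity prices the power term can eat most of the gross rate.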
What about security and verification, then? FAR intends to solve that through isolated execution environments and encrypted communications. To ensure accountability across its thousands of anonymous nodes, the network uses proof-of-compute, whereby the system verifies that workloads are actually processed and that the results returned to developers are accurate and secure. For developers, the complexity of this distributed backend is masked by a simple API, allowing them to integrate AI features into products or build entire startups without the prohibitive overhead of traditional cloud services.
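FAR has not detailed how its proof-of-compute works, but one common way to verify results from anonymous nodes is redundant execution: send the same job to several randomly chosen nodes, hash each result, and accept the majority answer. The sketch below assumes that approach; `run_on_node`, the replica count, and the majority rule are all illustrative, not FAR AI's documented protocol.

```python
import hashlib
import random
from collections import Counter

def result_digest(payload: bytes) -> str:
    # Hash the inference output so replica results can be compared cheaply.
    return hashlib.sha256(payload).hexdigest()

def verify_by_replication(run_on_node, job: bytes, node_ids: list[str],
                          replicas: int = 3) -> bytes:
    """Dispatch the same job to several randomly chosen nodes and accept
    the majority result; disagreeing nodes could then be flagged or penalized."""
    chosen = random.sample(node_ids, replicas)
    outputs = {nid: run_on_node(nid, job) for nid in chosen}
    digests = {nid: result_digest(out) for nid, out in outputs.items()}
    winner, count = Counter(digests.values()).most_common(1)[0]
    if count <= replicas // 2:
        raise RuntimeError("no majority agreement among replicas")
    return next(out for nid, out in outputs.items() if digests[nid] == winner)
```

Replication trades extra compute for trust; schemes like sampled spot-checks or cryptographic attestation reduce that overhead, but the hash-and-vote pattern shows the basic accountability mechanism.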
Ilman Shazhaev, founder and CEO of Dizzaract (the technology powering the system), emphasizes that the goal is to make "AI infrastructure more open and more practical." He notes that a vast amount of useful compute already exists outside traditional data centers, and that FAR AI is the bridge that brings that capacity online for real-world use. Currently, the system is in closed testing with a select group of partners to refine live performance and developer workflows.
Early registrants for node operations are being given priority status as the network prepares for a wider rollout. As demand for AI compute continues to outpace the supply of dedicated server-grade chips, the ability to tap into billions of existing consumer GPUs could alter how the next generation of AI applications is powered. Developers are expected to gain full API access to the distributed inference network starting in the second quarter of 2026.