OpenAI has partnered with AMD, Broadcom, Intel, Microsoft, and NVIDIA to develop MRC, an open source networking protocol designed to eliminate a critical bottleneck in AI supercomputers. The technology sends data across hundreds of simultaneous paths between GPUs instead of routing traffic through traditional network architectures.
The improvement is substantial. Current supercomputers require three or four switch layers to connect GPUs. MRC accomplishes the same task with just two layers while handling over 100,000 GPUs. This architectural simplification reduces both power consumption and operational costs.
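To see why fewer tiers demands more from each switch, here is a toy folded-Clos calculation (not MRC's actual design, whose parameters aren't public): a two-tier leaf-spine fabric of radix-k switches attaches about k²/2 hosts, while a three-tier fat tree attaches about k³/4.

```python
import math

def min_radix(num_gpus: int, tiers: int) -> int:
    """Smallest switch radix k whose folded-Clos fabric with the given
    number of tiers can attach num_gpus hosts.
    Toy model: two tiers (leaf-spine) host k**2 / 2 endpoints;
    three tiers (fat tree) host k**3 / 4. Illustrative only."""
    if tiers == 2:
        return math.ceil(math.sqrt(2 * num_gpus))
    if tiers == 3:
        return math.ceil((4 * num_gpus) ** (1 / 3))
    raise ValueError("toy model only covers 2 or 3 tiers")

print(min_radix(100_000, 2))  # → 448: a two-tier fabric needs high-radix switches
print(min_radix(100_000, 3))  # → 74: three tiers get by with far smaller radix
```

The trade-off the sketch illustrates: collapsing a tier removes a whole layer of switches (and the power they burn), but only works once switch radix is high enough to absorb the lost stage.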
MRC is already deployed on OpenAI's Stargate supercomputer, which means the protocol has moved beyond theory into production use. The move to open source signals OpenAI's confidence in the design and its willingness to let competitors and other institutions adopt the standard.
This addresses a real problem in large-scale AI infrastructure. GPU-to-GPU communication becomes a severe bottleneck when training models at massive scales. Network latency and congestion directly impact training speed and efficiency. By enabling multipath routing, MRC distributes traffic more efficiently across the infrastructure.
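The congestion benefit of multipath is easy to simulate. The sketch below (a generic packet-spraying model, not MRC's actual algorithm; the path count is an illustrative figure) compares pinning a flow to one path against spraying its packets across many:

```python
import random
from collections import Counter

NUM_PATHS = 256      # parallel paths between two GPUs (illustrative figure)
NUM_PACKETS = 10_000

random.seed(0)

# Single-path routing: every packet of the flow lands on one path,
# so that link carries the entire burst.
single = Counter({0: NUM_PACKETS})

# Multipath spraying: each packet independently picks a path, modeling
# the load-spreading idea behind multipath protocols like MRC.
sprayed = Counter(random.randrange(NUM_PATHS) for _ in range(NUM_PACKETS))

print(max(single.values()))   # hottest link under single-path: all 10,000 packets
print(max(sprayed.values()))  # hottest link under spraying: ~1/256 of the traffic
```

In the sprayed case the busiest path carries only a few dozen packets instead of all ten thousand, which is the whole point: no single link becomes the bottleneck that stalls a training step.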
The vendor consortium behind MRC matters. These companies control different pieces of the AI hardware stack. NVIDIA dominates GPU manufacturing. Intel and AMD provide CPUs and some accelerators. Broadcom builds networking chips. Microsoft operates one of the largest AI cloud platforms. Getting all five to agree on a standard is unusual and suggests the problem was serious enough to warrant collaboration.
The open source approach could accelerate adoption beyond these five companies. Other chip makers and supercomputer operators can implement MRC without licensing negotiations. This is particularly valuable for organizations building competing AI infrastructure that still benefit from sharing a common networking standard.
THE TAKEAWAY: OpenAI's MRC protocol cuts the networking complexity needed to connect massive GPU clusters in half, reducing power and costs while opening the standard for the rest of the industry to adopt.
