Nvidia bid to ‘open source’ 6G may rattle Ericsson and Nokia

Nvidia unveils a commitment to an ‘open’ and ‘AI-native’ radio access network as the US Department of War points open source directly at 6G.

It’s a random drizzly day in DC and officials at various US government agencies are feeling suitably despondent about the state of 5G. Their mood is conceivably shared by executives in the private sector, including some at Nvidia, the giant US chipmaker. For all the initial enthusiasm, 5G has not brought much improvement on 4G apart from adding capacity. It was never built for AI, the technology that took flight nearly five years after it was first standardized. Worse, the gatekeepers outside China are still Ericsson and Nokia, European companies that keep a tight grip on the keys to the network.

A movement called open RAN had hoped to change all this by offering industry-standard interfaces as substitutes for the proprietary ones found in traditional networks. It has largely failed to influence market dynamics, and attention at this year’s Mobile World Congress (MWC) tradeshow in Barcelona is shifting to a different safecracker: open source, rather than open RAN. The Department of War has joined forces with the Linux Foundation, perhaps the world’s best-known open-source group, on an initiative called OCUDU that aims to inject open-source code into the heart of the 6G network. A signed-up member of that effort, Nvidia is trumpeting “open” and “open source” as part of a separate, albeit related, 6G project that includes some very big names.

It is not yet another club or alliance in a sector already rife with them, says Ronnie Vasishta, who heads up telecom activities for Nvidia. Instead, he frames it as a “commitment” to ensure 6G is designed to be “AI-native” and “open” from the outset. The signatories, on the telco side, include BT, Deutsche Telekom, SK Telecom, SoftBank and T-Mobile. The names of Ericsson and Nokia are perhaps a more surprising feature of that list. Both Nordic vendors are also part of OCUDU.

Open sores

Open is a much-abused term in telecom, and it rarely means open source, itself often misunderstood as “free” software. The 3GPP, the umbrella group for the evolving cellular standard, would also claim to be fully open. The standard-setting process is supposedly a collaborative effort that pools contributions and makes them available to others on fair, reasonable and non-discriminatory (FRAND) terms.

But Ericsson and Nokia generate a substantial share of their profits from licensing their technologies – an approach deemed incompatible with open-source tenets – and the systems that power today’s 5G services are proprietary ones in the main. That has blocked smaller companies from entering the market and innovating on top of those network platforms, according to Tom Rondeau, who runs the Department of War’s FutureG network project. “Even with open RAN, it was still too closed,” he told Light Reading. “You had to be in the ecosystem to do any development.”

Vasishta appears to sympathize with that take. “If you think about the stack, 5G Advanced and 6G open sourcing enables code to be available to developers across the entire stack,” he said during an interview on the eve of MWC. “And I think that has been quite challenging for some companies, smaller companies, to have that degree of flexibility.” If open source were widely adopted for 6G, a developer with a new algorithm for beamforming – a 5G-era technology – would conceivably be able to integrate that into the bigger platform, he says.
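Beamforming, the 5G-era technology Vasishta mentions, steers an antenna array’s sensitivity toward a user by phase-shifting each element’s signal. As a purely illustrative sketch – this is generic textbook math, not Nvidia’s Aerial API – a conventional delay-and-sum beamformer for a uniform linear array looks like this:

```python
import numpy as np

def steering_weights(n_antennas, angle_deg, spacing=0.5):
    """Delay-and-sum weights for a uniform linear array.

    spacing is the element spacing in wavelengths (0.5 = half-wavelength).
    """
    theta = np.deg2rad(angle_deg)
    n = np.arange(n_antennas)
    # Progressive phase shift that aligns signals arriving from `theta`.
    return np.exp(-2j * np.pi * spacing * n * np.sin(theta)) / n_antennas

def array_gain(weights, angle_deg, spacing=0.5):
    """Response of the weighted array to a plane wave from `angle_deg`."""
    n = np.arange(len(weights))
    steering = np.exp(2j * np.pi * spacing * n * np.sin(np.deg2rad(angle_deg)))
    return abs(np.dot(weights, steering))

w = steering_weights(8, 30.0)
# Gain is maximal (1.0) toward the steered direction and falls off elsewhere.
```

A developer with a smarter algorithm – one that learns the weights from channel data instead of computing them geometrically – would, in Vasishta’s scenario, swap in their own version of a function like this and leave the rest of the stack untouched.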

Nvidia already boasts an open-source reference platform for the radio access network (RAN) dubbed Aerial. That has allowed developers such as DeepSig to do exactly what Vasishta describes, inserting an AI-native waveform into the Aerial stack. DeepSig, intriguingly, is one of two small companies that built the reference platform for OCUDU, the other being Ireland-headquartered SRS. Part of Nvidia’s AI-RAN pitch is about taking the RAN algorithms that humans have refined over decades and replacing them with AI to boost spectral efficiency.

Aerial, though, still requires developers to work largely within the boundaries of CUDA, the overarching Nvidia software platform that is often thought of as the company’s defensive moat. Within Layer 1, the most hardware-dependent slice of RAN software, it is not deployable on a general-purpose central processing unit (CPU) from Intel or AMD, with their x86 architecture, or a CPU based on the rival Arm system.

In this area, Vasishta doubts open source can entirely address that issue of hardware dependency, and he is relatively disparaging about CPUs as a Layer 1 option. “I don’t think anyone’s successfully done that purely in a software-defined manner,” he said. “So, I don’t know if open source is really trying to address that.”

That sounds like an indirect criticism of Intel, which has been a champion of virtual or cloud RAN for years. Ericsson, its biggest RAN partner, can apparently run all the Layer 1 functions on an Intel CPU apart from forward error correction (FEC), a resource-hungry task that requires a discrete hardware accelerator. In its latest products, Intel combines that with the CPU.
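FEC works by adding redundant bits so the receiver can detect and repair corrupted ones. A toy example – a classic Hamming(7,4) code, far simpler than the LDPC and polar codes 5G NR actually uses – shows the principle and hints at why the real thing is resource-hungry enough to demand a dedicated accelerator:

```python
import numpy as np

# Toy Hamming(7,4) code: corrects any single flipped bit per 7-bit block.
# 5G NR uses LDPC (data) and polar (control) codes, which involve iterative
# decoding over much larger blocks -- hence the hardware accelerators.
G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix [I | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix [P^T | I]
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    """Turn 4 data bits into a 7-bit codeword with 3 parity bits."""
    return (np.array(bits4) @ G) % 2

def decode(word7):
    """Correct up to one bit error, then return the 4 data bits."""
    word = np.array(word7).copy()
    syndrome = (H @ word) % 2
    if syndrome.any():
        # A single-bit error produces a syndrome equal to the matching
        # column of H, which pinpoints the flipped position.
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                word[i] ^= 1
                break
    return word[:4]
```

Even this miniature decoder does a matrix multiply and a column search per block; scale the block length into the thousands of bits with iterative belief-propagation decoding, and the appeal of offloading FEC to dedicated silicon or a GPU becomes clear.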

CPUs, GPUs or something else?

Vasishta clearly prefers an approach that relies on one of Nvidia’s graphics processing units (GPUs) for all Layer 1 functions, FEC included, with the company’s Grace-branded and Arm-based CPU – or an x86 chip – retained for higher-layer software. This is what Nokia has been doing in a trial deployment with T-Mobile US since last year, when it formed an especially close partnership with Nvidia, which simultaneously made a $1 billion investment in the Finnish company.

“I would say that if you have a GPU in the system that is capable of running some advanced parallel compute functions, you should use it,” said Vasishta. “And many of the Layer 1 functions are functions a GPU is very adept at.” The attractions of offloading a single function such as FEC are questionable due partly to “inherent latency challenges,” he says. “We think there are inherent benefits of running more of the Layer 1 in a GPU.”

Nevertheless, a recent blog by John Saw, the chief technology officer of T-Mobile US, indicated that Ericsson has taken a very different approach from Nokia. The Swedish company is instead adapting the software first used with Intel so that it can be deployed on Nvidia’s Grace CPU. That was seemingly facilitated by Arm’s earlier adoption of vector processing – a technology critical in Layer 1 – via an instruction set called SVE2, the equivalent of Intel’s AVX-512. Only FEC is offloaded to the Hopper-branded Nvidia GPU.
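Vector processing matters here because Layer 1 kernels apply the same arithmetic to thousands of samples at once, and units like SVE2 and AVX-512 execute such loops many lanes per instruction. A NumPy sketch – illustrating the data-parallel pattern, not Ericsson’s actual code – contrasts the scalar and vector shapes of a simple channel-equalization step (the symbol and channel names below are invented for the example):

```python
import numpy as np

def equalize_scalar(received, channel):
    """One sample at a time -- the shape a scalar CPU loop would take."""
    out = np.empty_like(received)
    for i in range(len(received)):
        out[i] = received[i] / channel[i]
    return out

def equalize_vector(received, channel):
    """Whole-array divide: one expression over all lanes, the form that
    maps naturally onto SIMD instruction sets like SVE2 or AVX-512."""
    return received / channel

rng = np.random.default_rng(0)
h = rng.normal(size=1024) + 1j * rng.normal(size=1024)  # channel estimates
x = rng.normal(size=1024) + 1j * rng.normal(size=1024)  # transmitted symbols
y = x * h                                               # received samples
# Both paths recover the transmitted symbols; only the vector form
# lets the compiler or hardware process many subcarriers per cycle.
```

Porting Layer 1 from x86 to Arm, as Ericsson is doing, largely means retargeting kernels of this kind from AVX-512 intrinsics to their SVE2 equivalents while leaving the algorithm untouched.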

While Nokia appears to have gone for a “native implementation,” potentially creating an Nvidia lock-in, Ericsson seems eager to remain as agnostic as possible and keep its silicon options open. Whether it would be able to build a single software stack deployable on x86, Arm, CUDA or something else is doubtful.

Vasishta, meanwhile, insists there are downsides to full hardware independence. “There are inherent trade-offs one makes to do that,” he said. “Of course, there are benefits to keeping one software stack across multiple hardware platforms, but there are inherent trade-offs that you have to make when you do that.”

Questions about the pros and cons of the Nokia versus Ericsson AI-RAN approach in a telco network are best addressed to the vendors, or better still the telco, he says. And T-Mobile did not answer questions emailed to it by Light Reading on this subject. Telcos’ main concern, however, seems to be that integrating GPUs into a RAN would be expensive given their notorious reputation for energy consumption. “The performance per watt capability of GPUs everywhere is improving dramatically,” said Vasishta. “It is almost inevitable that the RAN stack is going to be software-defined on a highly performant compute platform.”

Linux-led disruption

The involvement of Ericsson and Nokia in OCUDU and Nvidia’s mission for more “open” 6G remains curious. For Rondeau, the incentive for the Nordic vendors is being able to work with the US government and military at a time when telco spending on civilian networks is at a low point. The US is also the place where Ericsson and Nokia generate a big chunk of their profits and find much of their technology expertise.

But an open-source 6G could threaten their business models and potentially create challengers that did not previously exist. Without their participation, it would also struggle to lift off, and there is doubt inside the Linux Foundation that Ericsson and Nokia will immediately embrace it.

“What I expect is that Nokia and Ericsson will not throw away all their existing code on day one but slowly and gradually move toward more compatibility,” said Ranny Haiby, the Linux Foundation’s chief technology officer for networking. “Over time, I expect they will see the value in maybe replacing some of their homegrown stuff with OCUDU, but it’s not an overnight thing.”

All this would also seem to have implications for the standardization process and its FRAND system. “They know the 3GPP and other standards bodies are going to really define the standards of 6G, so it’s not a standards body,” emphasized Vasishta in describing the 6G commitment and its signatories. But AI itself is something the 3GPP and groups like the European Telecommunications Standards Institute (ETSI) did not have to deal with when defining 3G, 4G and 5G, and it is rapidly morphing.

“A big question is how much is AI going to influence 6G,” said Ultan Mulligan, ETSI’s chief services officer. “That’s a moving target because AI is evolving so quickly, and if you lock down 6G too soon you may end up locking out some capabilities you want to use.” For the traditional purveyors of mobile network technologies, these are turbulent days.

https://www.lightreading.com/6g/nvidia-bid-to-open-source-6g-may-rattle-ericsson-and-nokia