NVIDIA GTC was a hotbed of innovation and energy around all things AI. Mark Mahle, CEO of NetActuate and Coherently, was on the ground at GTC with several team members, meeting with the community and popping into sessions that dove deep into AI transformation. Read Mark’s post for a first-hand account of insights from the show.

Jensen’s keynote covered a wide range of topics, from AI Factories and DGX Cloud to Digital Twins and Omniverse, to AI in telecom, and to software and ecosystem growth. Interestingly, Jensen highlighted AI-enabled services riding on high-performance, low-latency infrastructure as the natural evolution of AI. Cloud Service Providers (CSPs) and telcos are increasingly evolving into distributed AI platforms by placing compute at the edge.

“AI will go everywhere, and we’re going to talk about AI in a lot of different ways, and the cloud service providers, of course, they like our leading edge technology.” –Jensen Huang, CEO, NVIDIA

Massive Data Centers & Clouds

The major CSPs were early AI adopters, and the pace at which they are both scaling up and scaling out their networks to meet AI-driven demand is astounding. They’ve learned a few things along the way, and several industry luminaries shared those lessons during the session Wired for AI: Lessons from Networking 100K+ GPU AI Data Centers and Clouds.

Like Jensen, CSPs see AI moving to the edge and are preparing quickly for the new frontier.

“When you think about the networking aspect of it, the more you can bring jobs closer to the user, the more you can bring them together physically speaking, the better it is we can do a job.” —Pradeep Vincent, Oracle Cloud Infrastructure

Telcos as AI Factories

AI also represents a sea change for the telco industry. Once slow-moving, telcos and the surrounding ecosystem are now embracing AI at a remarkable pace, as reflected in the many telco-focused sessions at GTC throughout the week.

The overarching message of these sessions comes through loud and clear: telcos are being reimagined as AI factories. Just like the CSPs, their AI-based edge infrastructure will be the engine that drives tomorrow’s AI economy. With AI models and services demanding faster processing, more bandwidth, and local data access, the shift toward the edge is not theoretical; it’s rolling out in real time around the world.

But for telcos and other service providers looking to capitalize on this opportunity, the AI road ahead can feel like the wild west: full of promise, but also full of uncertainty. That’s where NetActuate’s team of experts comes into play, ready to “ride shotgun” with you through uncharted territory.

A Global Network

With a footprint spanning over 40 global locations, a latency-optimized Anycast network, and carrier-neutral colocation facilities, NetActuate offers the infrastructure needed to turn AI ambitions into reality. Our edge platform delivers the performance, proximity, and resilience AI workloads demand, whether you’re deploying inference engines, GPU-powered services, or multi-access edge computing (MEC) locations. We’re already helping customers deploy compute nodes at the network edge where they need them most.
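To make the latency-optimization idea concrete, here is a minimal sketch of the placement decision an edge platform makes: given measured round-trip times from a user to several candidate locations, run the workload at the nearest one. The location names, latency figures, and `choose_edge_location` helper are illustrative assumptions, not part of any NetActuate API.

```python
# Hypothetical sketch: pick the edge location with the lowest measured
# round-trip latency to the end user, mirroring the "bring jobs closer
# to the user" principle. Names and latency figures are illustrative.

EDGE_LOCATIONS = {
    "us-east": 12.4,   # measured RTT from the user, in ms
    "us-west": 68.1,
    "eu-west": 95.7,
}

def choose_edge_location(latencies_ms: dict[str, float]) -> str:
    """Return the candidate location with the lowest round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

if __name__ == "__main__":
    print(choose_edge_location(EDGE_LOCATIONS))  # prints "us-east"
```

An Anycast network performs a similar selection implicitly: the same address is announced from many points of presence, and routing delivers each user to the topologically nearest one.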

NetActuate: Your Team of Experts for The Edge

More than just infrastructure, we bring decades of networking expertise to the table. Our team is ready to partner with you to validate architectures and stand up Proofs of Concept (PoCs): early experiments that test your hypotheses live on our network in real-world deployments. We understand that every deployment is different, and we pride ourselves on meeting each customer exactly where they are in their journey.

NetActuate’s Open Network Edge (ONE) concept works alongside your existing environment with API-ready services that make it easy for DevOps teams to spin up workloads, while our hybrid options give you the flexibility to connect your edge to hyperscalers, private cloud, or on-prem deployments. ONE is agnostic with regard to underlying hardware, CPU architectures, and operating systems. It’s declarative and programmable, in line with CI/CD practices and open source tooling like Ansible and Terraform. Migrating to or away from the network edge is only as hard as your applications are to port.
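As a hypothetical illustration of what “API-ready” and declarative can mean in practice, the sketch below assembles the provisioning request a CI/CD step might send to spin up an edge workload. The endpoint URL, payload fields, and `build_deploy_request` helper are all invented for illustration; they are not NetActuate’s actual API.

```python
import json

# Hypothetical sketch only: the endpoint and payload shape below are
# illustrative assumptions, not NetActuate's actual API.
API_BASE = "https://api.example-edge-provider.net/v1"  # placeholder URL

def build_deploy_request(location: str, image: str,
                         cpu_arch: str = "x86_64") -> dict:
    """Assemble a declarative provisioning request for an edge workload.

    Because the platform is hardware- and architecture-agnostic, the
    CPU architecture is just another field in the declaration.
    """
    return {
        "url": f"{API_BASE}/workloads",
        "method": "POST",
        "body": json.dumps({
            "location": location,   # e.g. one of the global edge PoPs
            "image": image,         # container or VM image to run
            "cpu_arch": cpu_arch,
        }),
    }

# A CI/CD step (a plain script, or an Ansible/Terraform-managed
# resource) would send this request after tests pass, and tear the
# workload down the same declarative way.
req = build_deploy_request("us-east", "inference-engine:latest")
print(req["method"], req["url"])
```

The same declaration could equally be expressed as a Terraform resource or an Ansible task; the point is that the workload is described as data, so standard CI/CD tooling can create, update, and destroy it.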

Get Started

“You should essentially experiment with multiple cloud providers and different hardware technologies and see what works for you.” —Pradeep Vincent, Oracle Cloud Infrastructure

If you’re a service provider building edge applications and workloads, or a network operator exploring AI enablement, we’d love to hear what you’re working on—and how we can help.

Together, we can test your hypotheses, validate your vision, and build what’s next.