




It’s been an incredibly energizing week at NVIDIA GTC, where the innovations in AI and the connections with partners reminded me of running ISPs back in the ’90s. That same pioneering spirit brought to mind NetActuate co-founder Mark Price’s “The Serial Port,” a showcase of early networking creativity. Today we’re moving at an even more intense pace, with AI standing front and center.
Coherently is a new startup backed by NetActuate. While we’re new in name, our core team brings 50 years of combined expertise in multi-tenant cloud infrastructure. We’re working with NVIDIA Cloud Partners (NCPs), cloud service providers (CSPs), and other partners to build the stack that enables “AI factories,” and we’re hiring aggressively to keep up with demand.
If you’re deploying NVIDIA GPUs—on-prem, in the cloud, or air-gapped—and are looking for an AWS-style “cloud in a box” solution, reach out.
NetActuate itself has decades of experience managing some of the world’s most demanding distributed workloads—often billions of transactions per second. In my recent Forbes article, “How Increasing Global Demand And Competition Are Driving The Edge Revolution,” I explained how skyrocketing demand, fierce hardware competition, and strategic global partnerships are fueling a new era for AI and edge computing.
As more businesses and governments realize that staying competitive requires significant investments in advanced infrastructure—ranging from nuclear power initiatives to nuanced international collaborations—the winners will be those who embrace innovation head-on. That mindset is exactly what NetActuate and Coherently bring to the table: proven expertise and a willingness to engage globally so that AI can flourish, even under the most intensive performance and regulatory requirements.






Throughout the week, we joined multiple investor gatherings, including one at the impressive Miro Towers (built by Bayview’s Yiwen Li) hosted by Fusion Fund (Lu Zhang) and BDK Capital, along with other VCs and angels. Altogether, Coherently came away with more than 90 meetings.
One highlight was a private developer/partner telco event with Jensen Huang, who addressed a select group of startups and partners. He reminded us: “I can tell you for a fact that the people that I know who are successful in this journey, people with speed and agility, they get shit done fast. Instead of taking three years to build a data center or two years to build a data center and stand things up, they stand things up in months, not years, months.”
Telco Day Recap
The Telco AI Renaissance Is Here [S72984]

AI is reshaping telecom’s core operations, pushing data centers to act as “factories” for compute-heavy workloads. Monetizing hardware resources and advanced AI solutions is now a strategic focal point for many operators. GPU-driven architectures promise new revenue streams, not just cost savings. Collaboration and dynamic frameworks emerged as central themes.
- Ronnie Vasishta (NVIDIA): Illustrated how telcos can pivot into AI production hubs, stressing that GPU-accelerated workloads and software-driven frameworks are crucial. Ronnie argued that agile, scalable methods can radically transform data center economics.
Delivering Real Business Outcomes With AI in Telecom [S73438]

Real-world AI deployments—especially generative models—are driving measurable ROI. Speakers emphasized secure, cloud-native strategies to scale without jeopardizing data security. Ultimately, adopting AI as an enterprise-wide function rather than a pilot project accelerates innovation and profitability.
- Chris Penrose (NVIDIA): Painted a bold vision where data centers evolve beyond cost centers into “factories” that spin up new revenue streams at scale. By deploying multi-tenant GPU clusters to handle real-time AI workloads, operators can unlock entirely new services and markets—transforming the network edge into a dynamic profit engine. Chris emphasized that once infrastructure starts directly generating returns, the future of AI-driven telecom is unlimited, fueling both top-line growth and rapid product innovation.
- Andy Markus (AT&T): Shared figures on enterprise ROI, explaining how targeted AI use cases cut ops costs. He noted that broader AI adoption across the organization yields faster returns.
- Kaniz Mahdi (AWS): Underlined secure, cloud-native frameworks as the backbone for big AI rollouts. AWS tooling, from container orchestration to identity management, helps operators deploy rapidly.
- Anil Kumar (Verizon): Urged a federated approach that weaves AI throughout. Verizon’s center of excellence unites best practices internally, creating cross-team synergy.
- Hans Bendik Jahren (Telenor): Showed how AI analytics improve voice quality and network stability. Spotting potential problems early keeps revenue consistent by preserving customer satisfaction.
AI-RAN in Action [S72987]

Moving away from manual “best effort” operation, AI-driven RAN uses machine learning for real-time orchestration. Scheduling, resource distribution, and performance optimization become adaptive and intent-based (see the simplified scheduling sketch after the speaker notes below). This shift fosters both speed and reliability, particularly in advanced 5G deployments.
- Soma Velayutham (NVIDIA): Demonstrated how GPUs handle large data streams, enabling real-time RAN coordination. Integrating AI from hardware to software ensures the best outcomes.
- Aji Ed (Nokia): Showed Nokia’s push to refactor RAN for machine learning pipelines, automating tasks once manually handled. Reduced overhead and swift reactions to traffic spikes are key outcomes.
- Ryuji Wakikawa (SoftBank): Emphasized building “AI-ready” infrastructure from the start for performance leaps and new revenue. Cultivating a data-driven culture is vital.
- Freddie Södergren (Ericsson): Promoted intent-based orchestration, letting AI manage user needs dynamically. Ericsson’s examples showed bandwidth allocated as usage patterns shift.
- Karri Kuoppamaki (T-Mobile): Highlighted iteration speed alongside raw performance, ensuring T-Mobile adapts fast in a changing market.
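To make “adaptive and intent-based” a little more concrete, here is a minimal sketch of the idea (ours, not any vendor’s implementation): each network slice states an intent as a guaranteed floor of physical resource blocks (PRBs), and leftover capacity is shared according to a simple demand forecast. The slice names, PRB counts, and the moving-average forecaster are illustrative placeholders; a production AI-RAN stack would use learned models and run on millisecond scheduling intervals.

```python
# Illustrative sketch only: a toy "intent-based" scheduler that splits a cell's
# physical resource blocks (PRBs) across network slices. Names and numbers are
# hypothetical; real AI-RAN systems replace the naive forecaster with learned
# models and operate on millisecond scheduling intervals.
from dataclasses import dataclass

TOTAL_PRBS = 273  # roughly a 100 MHz 5G NR carrier at 30 kHz subcarrier spacing


@dataclass
class SliceIntent:
    name: str
    min_prbs: int                # intent: guaranteed floor (e.g., a latency-critical slice)
    recent_demand: list[float]   # observed PRB usage over recent windows


def forecast_demand(history: list[float]) -> float:
    """Stand-in for an ML forecaster: exponentially weighted moving average."""
    alpha, estimate = 0.5, history[0]
    for sample in history[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate


def allocate_prbs(slices: list[SliceIntent]) -> dict[str, int]:
    """Honor each slice's guaranteed floor, then share the remaining PRBs
    in proportion to forecast demand."""
    allocation = {s.name: s.min_prbs for s in slices}
    remaining = TOTAL_PRBS - sum(allocation.values())
    forecasts = {s.name: forecast_demand(s.recent_demand) for s in slices}
    total_forecast = sum(forecasts.values()) or 1.0
    for s in slices:
        allocation[s.name] += int(remaining * forecasts[s.name] / total_forecast)
    return allocation


if __name__ == "__main__":
    slices = [
        SliceIntent("urllc", min_prbs=40, recent_demand=[30, 35, 38]),
        SliceIntent("embb", min_prbs=20, recent_demand=[150, 170, 160]),
        SliceIntent("iot", min_prbs=10, recent_demand=[15, 12, 18]),
    ]
    print(allocate_prbs(slices))  # {'urllc': 73, 'embb': 173, 'iot': 25}
```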
How Indonesia Delivered a Telco-Led Sovereign AI Platform [S73440]

Serving 270M people under stringent data regulations, Indonesia proved large-scale AI can remain sovereign. Working with NVIDIA, local telcos, and healthcare innovators, they tackled language and policy hurdles effectively. The initiative reveals significant potential for AI in emerging markets. It’s a remarkable, forward-thinking approach that will help Indonesia’s young population step forward as global innovators.
- Lilach Ilan & Anissh Pandey (NVIDIA): Detailed NVIDIA’s localized AI approach that meets compliance without sacrificing performance. Custom GPU clusters overcame bandwidth and latency limits.
- Vikram Sinha (Indosat): Demonstrated how “AI for all” reaches remote and urban areas by merging local infrastructure with robust AI training. He believes it’s a template for countries of any size. Vikram shared that his team decided to skip traditional PoC exercises and focus exclusively on “proof of value” to demonstrate the tangible impact of Indosat’s AI strategy.
- Munjal Shah (Hippocratic AI): Focused on healthcare breakthroughs enabled by compliance-friendly AI. Remote diagnostics can radically boost care in underserved regions.
- Senthil Ramani (Accenture): Stressed that synergy among hardware, software, and policy layers is critical for large-scale AI. Accenture’s role is ensuring each piece integrates smoothly.
Accelerating Sovereign AI Factories [S73439]

AI factories, when aligned with local infrastructure and regulations, can unlock lucrative revenue streams. Edge-scale solutions and greener data centers also address national goals—seen in Norway, Canada, and more. These examples suggest “homegrown” AI can thrive alongside global collaborations.
- Joao Kluck Gomes (NVIDIA): Positioned AI factories as telcos’ next big move, not just a side project. He cited data centers fine-tuned for multiple vertical demands.
- Kaaren Hilsen (Telenor): Pointed to Norway’s eco-friendly data centers that unify sustainability with AI. Telenor invests in partnerships to share costs and rewards.
- Ryuji Wakikawa (SoftBank): Emphasized how edge-scale AI links national infra with top-tier compute, optimizing latency. He sees micro data centers near users as crucial for ROI.
- Chris Madan (Telus): Unveiled North America’s first telco-based AI factory, aligning with Canada’s AI roadmap. Partnerships enable HPC-level performance that a single telco alone couldn’t achieve.
AI Agents & Digital Humans [S72988]
Generative AI avatars can handle user interactions 24/7, enhancing customer experiences, but brand trust demands accuracy. This session showcased how digital humans automate front-line support. Missteps can quickly undermine credibility, so rigorous testing is essential.
- Lilach Ilan (NVIDIA): Showed how real-time processing drives engaging digital humans. Brand coherence remains key—an off-brand AI erodes user confidence.
- Alan Dennis (Indiana University): Covered the psychology of “human-like” AI, from uneasy realism to emotional design. Balancing subtlety keeps users comfortable.
- Anthony Goonetilleke (Amdocs): Shared metrics on how 24/7 agents alleviate staff workloads, raising satisfaction scores. Transparency about AI fosters acceptance.
- Mark Austin (AT&T): Warned that a single AI error can go viral. AT&T invests in robust QA and frequent model updates. Accuracy is linked directly to brand reputation.
- Romit Ghose (ServiceNow): Focused on automation from calls to ticket closure, harnessing AI at each stage. End-to-end integration frees human agents for complex tasks.
Defining AI-Native RAN for 6G [S72985]




Software-based 6G will rely on AI for waveforms, HPC synergy, and precise positioning. Intelligence, not just more bandwidth, defines next-level performance. Speakers agreed that building AI in from the ground up sets 6G networks on a faster, more flexible upgrade path.
- Ardavan Tehrani (Samsung): Advocated near-total software definition, letting AI adapt networks on the fly. Hardware accelerators still matter, but agile software is key.
- Chris Dick (NVIDIA): Showed HPC synergy—merging GPU compute with advanced waveforms to handle next-gen channels. Believes HPC-level AI is essential to meet explosive data growth.
- Moe Win (MIT): Zeroed in on sub-meter positioning and “situational awareness.” He maintained that synergy between AI and advanced waveforms unlocks new performance frontiers.
- Jim Shea (DeepSig): Demonstrated AI-based modulation that arranges constellation points unconventionally for efficiency. Field tests revealed more stable links under varying conditions.



Enable AI-Native Networking [S72993]
Offloading heavy network and security tasks to DPUs and orchestrating them with Kubernetes lets operators handle large AI workloads while preserving security and service chaining (a minimal deployment sketch follows the speaker notes below). AI-native networking is the logical next evolution in advanced telecom, enabling automation and agility at scale.
- Elad Blatt (NVIDIA): Showed how Data Processing Units (DPUs) offload intensive network and security workloads from CPUs, freeing up precious HPC resources to focus on AI. By deploying solutions like BlueField, operators can streamline edge security and traffic inspection, shifting routine packet-handling tasks onto specialized hardware. Elad stressed that this offloading architecture not only boosts raw performance but also simplifies operations—allowing AI models to run unimpeded while network security processes hum along in the background. The result is a more agile, scalable infrastructure suited to the high-demand future of AI-driven services.
- Erwan Galan (Red Hat): Discussed containerized network functions managed by Kubernetes and Red Hat’s ecosystem, reducing deployment friction. He noted that open-source communities keep pushing networking forward.
- Ahmed Kuntari (F5): Highlighted security and service chaining, describing how AI-driven policies handle threats dynamically. He believes AI-based encryption and routing reduce manual overhead.
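To ground the DPU-plus-Kubernetes picture above, here is a minimal sketch (ours, not NVIDIA’s or Red Hat’s reference design) that uses the official Kubernetes Python client to deploy a containerized network function while requesting a DPU-backed interface as an extended resource. The container image and the resource name “example.com/dpu-vf” are placeholders; what a real cluster exposes depends on the device plugin installed for its SmartNICs or DPUs (BlueField or otherwise).

```python
# Minimal sketch, not a vendor reference architecture: deploy a containerized
# network function (CNF) and request a DPU-backed interface as a Kubernetes
# extended resource. The image and the resource name "example.com/dpu-vf" are
# placeholders for whatever the cluster's DPU device plugin actually advertises.
from kubernetes import client, config


def build_cnf_deployment() -> client.V1Deployment:
    container = client.V1Container(
        name="firewall-cnf",
        image="registry.example.com/cnf/firewall:1.0",  # placeholder image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "2", "memory": "4Gi", "example.com/dpu-vf": "1"},
            limits={"cpu": "4", "memory": "8Gi", "example.com/dpu-vf": "1"},
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "firewall-cnf"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="firewall-cnf"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "firewall-cnf"}),
            template=template,
        ),
    )


if __name__ == "__main__":
    config.load_kube_config()  # assumes a local kubeconfig with cluster access
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=build_cnf_deployment()
    )
    print("Submitted firewall-cnf deployment")
```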
Closing
Telcos are no longer just carriers—they’re shifting into full-fledged AI powerhouses, defining the future of connectivity. As Jensen Huang reflected, decades of computing and data centers were built around a legacy “serial” model, where old architectures turned data centers into cost centers rather than revenue generators. In the AI era, that paradigm is flipped: data centers become “factories of the future,” enabling telcos, enterprises, and cloud providers to monetize advanced compute resources and move beyond traditional network delivery.
Coherently, spun out from NetActuate’s decades of large-scale computing experience, is here to help operators and enterprises build these next-generation AI factories quickly and effectively.
As Jensen said, computers were once locked into an outdated design, but we’re entering a new era where agility and forward-thinking infrastructure drive real value.
If you’re looking to push AI boundaries in your own operations—transitioning from old serial models to modern, AI-native platforms—let’s team up. The opportunity is massive, and it’s accelerating by the day.
— The Coherently Team