NetActuate and NETINT Deliver Global VPU-Accelerated Infrastructure

Two days before NAB opened, we announced our partnership with NETINT Technologies to deliver global, on-demand VPU-accelerated video infrastructure across NetActuate's network. By Tuesday morning of the show, the conversation on the NETINT booth floor was already about what to build next.
For anyone who missed the press release, the short version: hardware-accelerated video encoding and decoding, powered by NETINT's Smart VPUs, is now available as a service on NetActuate's distributed infrastructure in every major media market. Customers can deploy on pre-installed VPU-enabled VMs or request custom builds. Pricing starts at $0.237 per hour for a Quadra T1A-enabled VM, the lowest sustained price we are aware of in the industry. We are also running a matching $500 infrastructure credit for new users who want to spin up a test deployment in the next 30 days. Get started here.
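To put the $0.237/hour rate in concrete terms, here is a back-of-envelope cost check. This is pure arithmetic on the published rate; no API calls are involved, and the event sizes are made-up examples.

```python
# Back-of-envelope cost math for the $0.237/hr Quadra T1A-enabled VM rate.

HOURLY_RATE = 0.237  # USD per hour, Quadra T1A-enabled VM


def vm_cost(hours: float, vms: int = 1) -> float:
    """Total spend in USD for `vms` instances running `hours` each."""
    return round(HOURLY_RATE * hours * vms, 2)


# A 4-hour live event transcoded on 3 VMs:
event = vm_cost(hours=4, vms=3)      # 0.237 * 4 * 3 = 2.84 (rounded)

# One VM left running for a full 30-day month:
monthly = vm_cost(hours=24 * 30)     # 0.237 * 720 = 170.64

print(f"event: ${event}, monthly: ${monthly}")
```

The point of the hourly model is visible in the numbers: a one-off live event costs a few dollars, and even always-on capacity stays well under typical GPU-instance pricing.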
NETINT founded the VPU (Video Processing Unit) category. They won the 2024 Tech Emmy Award for Design & Deployment of Efficient Hardware Video Accelerators for Cloud on the back of real production workloads and real streaming services. The result is purpose-built silicon for modern video at scale, without the power, density, or economic drawbacks of GPU-based encoding.
What we bring is the other half. Our network is the fourth largest in the world by peer count, built over the last decade in the carrier hotels and submarine cable landing stations where the ISPs and backbones actually meet. When you deploy a VPU on NetActuate, you are deploying it close to your audience, not on the other side of a transit provider. That is the piece of the stack that makes hardware acceleration economic at global scale.
Practically, this means our customers can treat VPU capacity like any other Anycast resource. Spin it up in any of our 45+ POPs. Run live transcodes, capped-CRF ladders, dynamic ad insertion, or video security workloads. Pay hourly. Tear it down when the event is over. That is the elasticity the video infrastructure market has been quietly asking for.
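The spin-up, run, tear-down lifecycle described above maps naturally onto a context-manager pattern, so capacity is always released (and billing stops) when the job ends, even if it fails partway. A minimal sketch; the `VpuClient` class and its method names are hypothetical stand-ins, not NetActuate's actual API.

```python
from contextlib import contextmanager


class VpuClient:
    """Stand-in for a provisioning API client. Method names are
    illustrative only; the real NetActuate API will differ."""

    def provision(self, pop: str, flavor: str) -> str:
        print(f"provisioning {flavor} in {pop}")
        return f"vm-{pop}-001"  # fake instance id

    def teardown(self, vm_id: str) -> None:
        print(f"tearing down {vm_id}")


@contextmanager
def ephemeral_vpu(client: VpuClient, pop: str, flavor: str = "quadra-t1a"):
    """Provision hourly VPU capacity for the duration of one job,
    then release it so billing stops when the event is over."""
    vm_id = client.provision(pop, flavor)
    try:
        yield vm_id
    finally:
        client.teardown(vm_id)  # always runs, even if the job raises


# Run a transcode for the length of one event, then release the capacity:
with ephemeral_vpu(VpuClient(), pop="ams1") as vm:
    print(f"running live transcode on {vm}")
```

The same shape works for any of the workloads mentioned above: the body of the `with` block is the event, and teardown is guaranteed rather than left to an operator's memory.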

One thing that stood out at the NETINT booth through the week was how naturally AI-driven workflows are starting to land on this stack. NETINT's Bitstreams control software, paired with our public APIs, makes the full end-to-end deployment (provision a server, attach a VPU, ingest an SRT stream, output LL-DASH, monitor) clean enough that an AI agent can plan and execute the workflow in a single pass, using nothing but natural language instructions. We demoed this for interested parties from a single laptop on the show floor; deployments landed in minutes.
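The five-step flow above (provision, attach, ingest, output, monitor) is easy to picture as an ordered plan that an agent, or an ordinary ops script, executes step by step. Everything in this sketch is an assumption for illustration: the step names, parameters, and the stub `Deployment` class are not the actual NetActuate or Bitstreams API.

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    name: str
    params: dict


@dataclass
class Deployment:
    log: list = field(default_factory=list)

    def run(self, step: Step) -> None:
        # A real implementation would call the provisioning and
        # Bitstreams APIs here; this stub just records each action.
        self.log.append(step.name)


# The five-step live-transcode workflow from the demo, expressed as data
# an agent can plan once and then execute in a single pass:
PLAN = [
    Step("provision_server", {"pop": "nyc1", "flavor": "vpu.medium"}),
    Step("attach_vpu",       {"model": "quadra-t1a"}),
    Step("ingest_srt",       {"listen": "srt://0.0.0.0:9000"}),
    Step("output_ll_dash",   {"segment_ms": 500}),
    Step("monitor",          {"metrics": ["fps", "bitrate", "dropped"]}),
]

deploy = Deployment()
for step in PLAN:
    deploy.run(step)
print(deploy.log)
```

Representing the workflow as plain data is exactly what makes it tractable for an agent: the model only has to emit a valid plan, and the executor handles the API calls in order.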
Beyond the novelty, what that really tells us is something about developer experience. If the APIs are clean enough for a model to orchestrate the workflow in one shot, they are clean enough for any customer's ops team to automate. That matters for the customers NETINT and NetActuate are building for: platforms running hundreds of simultaneous encode jobs across multiple regions, where hand-tuning every deployment is no longer an option. AI-native is going to be the default shape of video infrastructure over the next few years, and having it work cleanly on day one of this partnership is a meaningful signal.

What made NETINT's presence a standout at NAB was not just the booth itself but the Ecosystem Pavilion. NETINT has built a partner model that is genuinely unique in this market. Silicon vendor, system builders, consulting firms, codec companies, transport specialists, broadcast monitoring, and infrastructure partners, all lined up around a shared architecture. We are proud to be in that group. A few of the partners we spent meaningful time with:
Advantech (TWSE: 2395) announced their own partnership with NETINT on April 17, built around their VEGA series video edge servers. The Quadra Mini Server sitting on the VEGA-6321 platform fits up to 20 live streams in a 1U half-rack form factor, which is the kind of thing that changes what is possible in mobile broadcast and on-site media production. A second, larger VEGA system based on AMD EPYC 9005 and up to twelve NETINT Quadra T1U modules takes the same architecture to 384 simultaneous 1080p30 streams in a single 1RU chassis. Forty years of industrial design expertise is behind these platforms, and it shows.
Arcadian was at the Ecosystem Pavilion on Monday, and it was useful to finally put faces to the company that has been NETINT's primary VPU deployment partner for Hollywood studios and large vertically integrated media companies. They handle the heavy deployment engineering, QA, subtitling, and custom app work that makes an ASIC-based transcode fleet actually land cleanly inside a production media workflow. Their self-service Bitstreams payment portal integration is one of the cleaner commercial layers we have seen on a silicon product. Genuinely impressive team.
CIRES21 had news of their own at NAB. In partnership with VisualOn, they published a joint whitepaper at the show demonstrating up to 19-point VMAF gains on H.264, HEVC, and AV1 when the VisualOn Optimizer is integrated into CIRES21's live transcoding pipeline. The benchmarks cover software, NVIDIA NVENC, NETINT, and Intel QuickSync, and the low-bitrate rescue story (15 to 19 VMAF point gains at 360p to 576p) is genuinely important for mobile audiences. Some of the most rigorous quality-measurement work in the VPU ecosystem.
V-Nova was at booths W1646 and W1647 showing MPEG-5 LCEVC, VC-6, Vision AI, and PresenZ XR. LCEVC as an enhancement layer on top of existing codecs (H.264, HEVC, AV1) integrated directly into NETINT VPUs is one of the cleaner "better bitrate without waiting for a new silicon generation" stories at the show. Brazil's TV 3.0 rollout using LCEVC for 4K HDR broadcast on existing spectrum is one of the more interesting live deployments anyone is running. Over 1,500 patents and years of codec standards work behind this one.
Misao Network came in from Japan (Saitama, specifically) and was one of the more unexpectedly fun conversations we had all week. Real-time WebRTC transport paired with NETINT VPU encoding, with an 8K-capable AV1 real-time encoder and 12G-SDI support for professional broadcast workflows. The technical story alone is compelling, but the meeting took an even better turn when we got to the subject of cats, specifically the universal "psst psst psst" call that apparently works the same on Japanese cats as it does on American ones. They also brought some of the best swag of the show: cool cat stickers that are now decorating several of our laptops.
Scalstrm had their own booth on the show floor and was also colocated in NETINT's Ecosystem Pavilion. Stockholm-based, cloud-native, standards-based live OTT streaming. They have been quietly building a technically strong video platform for a while now, with an architecture designed for efficient scale and clean handoffs between workflow stages. They are one of the more interesting teams in the European streaming stack, and exactly the sort of company the ecosystem needs.

The broader VPU ecosystem is real, it is growing, and it is one of the things making the "open market case" against "hyperscaler-only" video architectures practically defensible in 2026. Nine architectural perspectives, one silicon category, a lot of shipping products.
There is a long list of things we want to do between now and IBC.
Genuinely a pleasure to be working with this crew. Special thanks to Randal Horne and Mark Donnigan for setting the tone on this partnership from day one. Randal and Mark are the reason this partnership got off the ground as quickly and cleanly as it did, and the collaboration has been first-rate. Great to meet Joshua, NETINT's CEO and co-founder, during the show. Thanks also to Kenneth Robinson, Anita Flejter, Julieta Alatorre, Patty, and the rest of the Bitstreams, product, marketing, and booth crew who made this week so productive. The four-then-five Marks running joke at the booth ("if your name is Mark, you're hired") is going to outlive this blog post.
See you all at IBC 2026 in Amsterdam. We expect to have a number of new announcements by then.
If you want to run VPU-accelerated workloads on our global network, or talk through a deployment, the contact form on our website goes to me, Mark de Jong, Mark Price, and the engineering team directly.
Reach out to learn how our global platform can power your next deployment. Fast, secure, and built for scale.