
Alex Cattle
on 6 February 2020


Deploying AI/ML solutions in latency-sensitive use cases requires many businesses to rethink their solution architecture.

Fast computational units (i.e. GPUs) and low-latency connections (i.e. 5G) allow AI/ML models to be executed outside the sensors and actuators themselves (e.g. cameras and robotic arms). This reduces costs through lower hardware complexity per device and through compute resource sharing across the IoT fleet.

Strict AI responsiveness requirements that previously demanded embedding the model on the IoT device itself can now be met with GPUs co-located with the sensors and actuators (e.g. in the same factory building). An example is the robot ‘dummification’ trend currently observed in factory robotics, which aims to reduce robot unit costs and simplify fleet management.
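The trade-off the paragraphs above describe, a control-loop latency budget versus per-unit hardware cost, can be sketched numerically. All figures and function names below are illustrative assumptions for the sake of the example, not measurements from any real deployment:

```python
# Back-of-the-envelope sketch: when does moving inference off the device
# onto a shared, co-located GPU make sense? (Illustrative numbers only.)

def offload_latency_ms(network_rtt_ms: float, gpu_inference_ms: float) -> float:
    """Round-trip latency seen by the actuator when inference runs off-device."""
    return network_rtt_ms + gpu_inference_ms

def per_unit_hw_cost(shared_gpu_cost: float, fleet_size: int,
                     base_unit_cost: float) -> float:
    """Per-robot hardware cost when one GPU is amortised across the fleet."""
    return base_unit_cost + shared_gpu_cost / fleet_size

# Assumed figures: a ~10 ms 5G round trip plus ~5 ms of GPU inference
# fits a hypothetical 30 ms control-loop budget.
latency = offload_latency_ms(network_rtt_ms=10.0, gpu_inference_ms=5.0)
meets_budget = latency <= 30.0

# A $10,000 GPU shared by 20 'dumb' robots adds $500 per unit on top of a
# $200 base build, versus e.g. a $2,000 embedded accelerator in every robot.
shared_cost = per_unit_hw_cost(shared_gpu_cost=10_000, fleet_size=20,
                               base_unit_cost=200.0)
```

The point of the sketch is that both constraints must hold at once: the offloaded round trip has to stay within the control-loop budget, and only then does amortising one GPU across the fleet beat embedding an accelerator in every unit.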

In this webinar we will explore real-life scenarios in which GPUs and low-latency connectivity unlock solutions that were previously prohibitively expensive, enabling businesses to put them in place and lead the fourth industrial revolution.

Watch the webinar
