Why low – and ultra-low – latency matters for today's and tomorrow's digital experiences

GUEST OPINION: New and emerging digital experiences underscore the importance of understanding and optimizing how traffic travels to and from an end user.

A lot has happened in the last few years that has put latency – and particularly low and ultra-low latency – on the map.

Latency – the delay between content leaving the streaming source and arriving at a device – shows up in many ways, most commonly as delays, dropped frames, buffering and thus reduced content quality.

To be clear, latency was already a topic of interest to some cohorts of users.


On the consumer side, this included gamers, viewers of ultra-high definition video-on-demand such as live sports, and regional and remote satellite-connected users.

On the enterprise side, latency could be an issue for real-time use cases like high-frequency or algorithmic trading, or for data collection at remote “edge” locations like mines and gas platforms.

However, latency was really pushed into the mainstream by the work-from-home revolution. This led to new metrics for understanding the intricate, interconnected networks that carry user traffic – and specifically how latency varies at different times of day.
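As a rough illustration of that kind of time-of-day measurement, the sketch below samples TCP handshake round trips and reports the median. It is a minimal sketch only: example.com, port 443 and the five-sample count are placeholders, not any regulator's methodology.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Measure one TCP handshake round trip to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; the handshake time is what we wanted
    return (time.perf_counter() - start) * 1000

# Take a few samples and report the median, which is less sensitive
# to one-off spikes than the mean. Run this at different times of day
# to see how congestion shifts the number.
samples = sorted(tcp_rtt_ms("example.com") for _ in range(5))
print(f"median RTT: {samples[len(samples) // 2]:.1f} ms")
```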

In fact, latency and its impact on web applications are regularly monitored by the Australian Competition and Consumer Commission.

When it comes to latency, it is important to understand that it matters not only for the performance of today's applications, but also for tomorrow's.

In particular, it is a goal of network engineers and end-users alike to have low or – eventually – ultra-low latency connections to the Internet. As PwC Australia notes, this is already happening to some extent with the growing presence of 5G networks.

The truth is that ultra-low latency connections will need to become far more widespread for the Metaverse era. High-performing connections are likely to be critical to broad participation in environments that rely heavily on real-time interactions.

Any lag in performance between users would not only be very noticeable, but would also undermine the promise of the experience.

When latency impacts those experiences, customers turn to other providers, watch other events, or connect with other people and businesses. So the real benefit of ultra-low latency is that content, user experience and data are delivered in near real-time, laying the foundation for better user experiences and enabling new business models.

What latency looks like

For the purposes of this article, I will explain latency, its challenges, and its opportunities largely in consumer terms.

Moving content from creation, through transmission, to final reception on the consumer device takes processing power and bandwidth – and, above all, time.

It can often take up to 10 seconds for a live event to appear on a consumer device.

While the average HD cable broadcaster experiences 4 to 5 seconds of latency, about a quarter of content networks face 10 to 45 seconds of latency.

In the absence of a formal standard, low-latency delivery typically means video reaches a consumer's screen less than 4-5 seconds after the live action, while ultra-low latency is anything below that.

So-called “glass-to-glass” latency – from the camera lens to the viewer’s screen – is often around 20 seconds, while high-definition cable TV, at around five seconds of delay, serves as the benchmark for low latency.
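To make those tiers concrete, here is a minimal sketch that buckets a glass-to-glass delay using the figures above. The sub-one-second cutoff for "ultra-low" is an assumption of this sketch; the article only establishes that it sits below the ~4-5 second low-latency threshold.

```python
def classify_latency(glass_to_glass_s: float) -> str:
    """Rough tiering of glass-to-glass delay, per the figures above."""
    if glass_to_glass_s < 1:
        return "ultra-low latency (assumed <1 s threshold)"
    if glass_to_glass_s < 5:
        return "low latency (under the ~4-5 s HD cable benchmark)"
    if glass_to_glass_s <= 10:
        return "typical live-stream delay"
    return "high latency (common 20 s+ glass-to-glass territory)"

for delay in (0.5, 3, 8, 20):
    print(f"{delay:>4} s -> {classify_latency(delay)}")
```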

Reducing latency

There are many causes of latency in broadcast and delivery networks. The mere act of encoding a live video stream into packets for transmission over a network introduces delay. Add delivery across a variety of third-party networks to the end user’s device, and the latency grows. Furthermore, different protocols have different strengths and weaknesses, and reducing latency is not always their primary design goal.
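A simple way to reason about these causes is as a latency budget summed across stages. The stage names and millisecond figures below are illustrative assumptions for a sketch, not measurements from any particular network.

```python
# A back-of-the-envelope latency budget for the stages named above.
# Every figure here is an illustrative assumption.
budget_ms = {
    "capture + encode": 1500,          # packaging the live stream into packets
    "first-mile ingest": 200,
    "third-party transit networks": 800,
    "last-mile delivery": 300,
    "player buffer + decode": 2500,
}

total = sum(budget_ms.values())
print(f"end-to-end: {total / 1000:.1f} s")
for stage, ms in budget_ms.items():
    print(f"  {stage:<30} {ms:>5} ms  ({ms / total:.0%})")
```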

Latency can be reduced by optimizing the encoding workflow for faster processing. However, this creates inefficiencies elsewhere – and higher costs. Smaller network packets and video segments reduce latency but add overhead and waste bandwidth, while larger segments improve overall bandwidth efficiency at the expense of a real-time experience, as the sketch below illustrates.
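The segment-size tradeoff can be sketched with a common player heuristic: many players buffer roughly three whole segments before starting playback (an assumption borrowed from classic HLS guidance), so live-edge latency scales with segment duration, while request overhead scales the other way.

```python
def live_edge_latency_s(segment_s: float, buffered_segments: int = 3) -> float:
    """Approximate live-edge delay: players commonly hold a few whole
    segments before playback begins, so latency scales with segment
    duration. The 3-segment default is an assumption, not a standard."""
    return segment_s * buffered_segments

for seg in (6, 2, 0.5):
    reqs_per_min = 60 / seg  # smaller segments mean more requests (overhead)
    print(f"{seg:>4} s segments -> ~{live_edge_latency_s(seg):.1f} s latency, "
          f"{reqs_per_min:.0f} requests/min")
```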

The media ingest and encode workflow is a good place to look for ways to reduce latency. A well-tuned workflow can deliver encoded video segments quickly, but focusing on minimizing processing time is not the only goal. Spending more time processing can often produce more compact data streams, reducing overall network latency. Therefore, there is a difference between processing efficiency and network transport efficiency, and content publishers need to strike the right balance.
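That processing-versus-transport balance can be illustrated with toy numbers: a slower encoder preset spends more time per segment but emits fewer bits, so which preset "wins" depends on the delivery link. All figures below are assumptions chosen for illustration.

```python
presets = {
    # name: (encode seconds per segment, output megabits per segment)
    "fast preset": (0.5, 30.0),
    "slow preset": (1.5, 14.0),
}

# On a constrained link, the slower preset's smaller output wins overall;
# on a fast link, encode time dominates and the fast preset wins.
for link_mbps in (10.0, 100.0):
    print(f"link at {link_mbps:.0f} Mbps:")
    for name, (encode_s, megabits) in presets.items():
        total_s = encode_s + megabits / link_mbps
        print(f"  {name}: {total_s:.2f} s encode-to-delivered")
```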

While building an efficient way to record, encode, and initially transmit content can help eliminate inefficiencies and latency upfront, much of the actual latency occurs during delivery. Reducing delivery latency requires planning and optimization, as well as accepting tradeoffs between latency and cost.

Content companies need to find solutions for both the system front-end and the network delivery components to achieve the lowest possible latencies.