Artificial intelligence and 5G are better at the edge

Our world is full of exciting technologies that promise to open up new possibilities for businesses. In some cases, the coming together of two new technologies amplifies the benefits of both, and AI and 5G are perfect examples of such complementary technologies.

Each has tremendous potential, but together they’re even better.

Why location matters to AI

The AI workflow involves ingesting large amounts of data from multiple sources to train models, then using those models to produce automated, data-driven results. As more data is generated at the edge, the individual steps (workloads) in the AI workflow are executed in different places based on performance, privacy, flexibility and cost requirements, an approach referred to as distributed AI. Many companies now work with distributed AI orchestrators to offload their AI training and inference workloads to the most appropriate locations.

Model training and model inference have very different requirements. Training is more resource-intensive, so it is typically performed in a large data center or on the public cloud. Inference, in contrast, is more latency-sensitive and typically runs at the digital edge, close to the data sources.
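
As a rough illustration of that placement logic, here is a minimal sketch of how an orchestrator might route workloads; the names, thresholds and location classes are hypothetical, not a description of any specific product.

```python
# Hypothetical sketch of distributed-AI workload placement.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str              # "training" or "inference"
    max_latency_ms: float  # end-to-end latency budget
    data_sensitive: bool   # data must stay close to its source for privacy

def place(workload: Workload) -> str:
    """Pick a coarse location class for an AI workload."""
    if workload.kind == "training":
        # Training is resource-intensive but latency-tolerant:
        # favor large pools of compute in a big data center or public cloud.
        return "public-cloud-region"
    # Inference: keep it at the edge when latency or privacy demands it.
    if workload.data_sensitive or workload.max_latency_ms < 20:
        return "metro-edge-data-center"
    return "public-cloud-region"

print(place(Workload("fraud-scoring", "inference", 10, True)))          # metro-edge-data-center
print(place(Workload("weekly-retrain", "training", 3_600_000, False)))  # public-cloud-region
```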

Why location matters for 5G

5G’s promise is to enable enterprise-class wireless services, and its success depends on having the right infrastructure in the right places. One of the most important components of the 5G network infrastructure for delivering these services is the User Plane Function (UPF). It is responsible for de-encapsulating 5G user traffic so that it can be routed out of the wireless network to external networks such as the internet or cloud ecosystems.
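
To make the de-encapsulation step concrete, the sketch below strips the GTPv1-U tunnel header that wraps user packets inside the mobile network; it is a simplified illustration (no extension-header parsing or error recovery), not UPF production code.

```python
# Simplified illustration of GTP-U de-encapsulation as performed by a UPF.
import struct

GTPU_GPDU = 0xFF  # message type for an encapsulated user packet (G-PDU)

def decapsulate_gtpu(payload: bytes) -> tuple[int, bytes]:
    """Strip the GTP-U header and return (tunnel endpoint ID, inner IP packet)."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", payload[:8])
    if (flags >> 5) != 1 or msg_type != GTPU_GPDU:
        raise ValueError("not a GTPv1-U G-PDU")
    header_len = 8
    if flags & 0x07:      # E, S or PN bit set -> 4 optional octets follow
        header_len += 4
    # The inner packet can now be routed to external networks (internet, clouds).
    return teid, payload[header_len:8 + length]
```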

Because the applications that 5G users want to access are hosted on these external networks, it’s important to have reliable, low-latency connectivity where the UPF is located. For this reason, moving UPFs from their core networks to the digital edge is one of the most important steps telecom operators can take to unlock the full value of their 5G infrastructure.

5G helps AI move beyond the device and local infrastructure

Many AI use cases have strict performance requirements. One way to meet them is to run inference on the device itself or on a local server located very close to the device. Such servers are often found in the closets of stadiums, retail stores, airports and anywhere else AI data needs to be processed quickly. This approach has its limitations: performing complex AI inference on the device can quickly drain the battery, and the AI hardware on the device is often not powerful enough for the required processing.

Additionally, many AI use cases require aggregated data from multiple sources, and all too often the device lacks the memory or storage to host the various datasets. Likewise, performing AI inference in a local closet raises issues of physical security, physical space constraints, insufficient performance and higher operational costs to maintain the hardware. Because 5G networks offer high-bandwidth connectivity, it is now possible to host AI inference infrastructure, and cache the required datasets, on the 5G infrastructure close to where the data is generated. AI inference tasks can thus be offloaded from devices and local sites to a 5G multi-access edge computing (MEC) site on the infrastructure of a nearby network service provider (NSP) in the same metro area.
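
A simplified sketch of that offload decision, with hypothetical inputs and thresholds, might look like this:

```python
# Hypothetical sketch: decide whether inference runs on the device or is
# offloaded to a 5G MEC site hosted by a nearby NSP in the same metro.
def choose_inference_site(battery_pct: float,
                          fits_on_device: bool,
                          needs_aggregated_data: bool,
                          latency_budget_ms: float,
                          mec_round_trip_ms: float = 15.0) -> str:
    if needs_aggregated_data or not fits_on_device:
        # The device lacks the storage or compute to host the model and datasets.
        return "5g-mec"
    if battery_pct < 20 and mec_round_trip_ms <= latency_budget_ms:
        # Heavy on-device inference would drain the battery; the MEC site
        # is close enough to stay within the latency budget.
        return "5g-mec"
    return "on-device"
```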

By co-locating with the 5G network, organizations can meet the application’s latency and bandwidth requirements while offloading their AI infrastructure from the on-premises device or closet. Depending on the carrier’s 5G deployment architecture and the application’s latency requirements, the 5G MEC infrastructure could be located in a micro data center (e.g. at a cell tower), a cloud 5G zone (e.g. AWS Wavelength) or a macro data center such as an Equinix IBX.
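
As an illustration only (the thresholds are placeholders that depend on the carrier’s deployment), a latency budget could be mapped to a MEC tier like this:

```python
# Illustrative mapping from an application's latency budget to a 5G MEC tier.
def mec_tier(latency_budget_ms: float) -> str:
    if latency_budget_ms < 5:
        return "micro-data-center"  # e.g. at a cell tower
    if latency_budget_ms < 20:
        return "cloud-5g-zone"      # e.g. AWS Wavelength
    return "macro-data-center"      # e.g. an Equinix IBX
```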

AI enables better slicing and maintenance for 5G networks

One of the most powerful aspects of 5G is that it enables NSPs to perform network slicing, essentially offering different classes of network service to different classes of users and applications. Today, NSPs can apply predictive analytics powered by AI models to enable smarter network slicing. To do this, they can collect metadata about various applications, including how those applications perform under specific network conditions. When both the 5G infrastructure and the AI models are at the edge, it is easier to gain predictive insights into the quality of service different applications require and to classify them into the appropriate network slices.
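
In practice a trained model would produce the per-application predictions; the rule-based sketch below (with made-up thresholds) only shows the final mapping from predicted requirements to standard 5G slice classes.

```python
# Hypothetical mapping from predicted application requirements to a 5G slice class.
def assign_slice(predicted_latency_ms: float,
                 predicted_throughput_mbps: float,
                 device_count: int) -> str:
    if predicted_latency_ms <= 10:
        return "URLLC"  # ultra-reliable low-latency communications
    if device_count >= 10_000 and predicted_throughput_mbps < 1:
        return "mMTC"   # massive machine-type communications (many low-rate devices)
    return "eMBB"       # enhanced mobile broadband (default high-throughput class)
```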

In addition, NSPs can collect log and usage data from the network and use it to train AI models that support proactive maintenance and management. These models can help identify conditions that indicate a possible service outage or a surge in user traffic, and the network can then respond automatically to prevent the outage or provision additional capacity. Again, having both 5G and AI infrastructure at the digital edge is important to make the most of this capability.
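
As a toy example of the kind of signal such a model might key on, the sketch below flags unusual spikes in link utilization with an exponentially weighted moving average; a real system would learn far richer patterns from logs and telemetry.

```python
# Toy anomaly detector: flag samples that spike well above a running EWMA baseline.
def detect_anomalies(samples, alpha=0.1, threshold=2.0):
    """Yield indices where a sample exceeds `threshold` times the running average."""
    ewma = samples[0]
    for i, value in enumerate(samples[1:], start=1):
        if ewma > 0 and value > threshold * ewma:
            yield i  # candidate overload / pre-outage condition
        ewma = alpha * value + (1 - alpha) * ewma

# Traffic jumps from a ~100 Mbps baseline to 400 Mbps -> index 4 is flagged.
print(list(detect_anomalies([100, 105, 98, 110, 400, 120])))
```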

Deployment at the edge

With data centers in more than 70 metropolitan areas across six continents, Platform Equinix makes it easy for enterprises to deploy AI inference and 5G infrastructure in the edge locations that deliver the best outcomes for their 5G and AI workloads. In addition, digital infrastructure delivered as a service, such as bare metal, fabric and network edge offerings, can help simplify and accelerate adoption.

To maximize the value of 5G and AI implementations, we maintain a global partner ecosystem of more than 2,100 NSPs. We know how to help them modernize their networks for the 5G era, and how to pass the power of those networks on to our customers. Finally, to enable better outcomes for distributed AI, we offer cloud-adjacent data centers in close proximity to all the major cloud hyperscalers. Customers can run their AI training in the cloud, move their models to our metro data centers, and move data between the clouds and edge locations over a private, secure Equinix Fabric connection.