Qualcomm demonstrates the fastest local AI imaging with stable diffusion on mobile devices

Qualcomm is showing off its AI capabilities on mobile, demonstrating what it claims is the fastest deployment of the AI image generator Stable Diffusion on a smartphone.

In a demo video, Qualcomm shows version 1.5 of Stable Diffusion generating a 512 x 512 pixel image in less than 15 seconds. While Qualcomm doesn’t say which phone it is, it does say it’s powered by its flagship Snapdragon 8 Gen 2 chipset (which launched last November and features an AI-focused Hexagon processor). The company’s engineers have also made all sorts of custom tweaks on the software side to get Stable Diffusion running at its best.

Some context: it takes a lot of processing power to run a program like Stable Diffusion (a staple of AI imaging), and most apps that offer such services on mobile do all their processing in the cloud rather than on your smartphone or tablet. Even generating an image locally on a decent laptop can take minutes, so getting a 512 x 512 image from a phone in a matter of seconds is impressive.

Some more sample images from Qualcomm generated on the test device with the prompt “super cute fluffy cat warrior in armor, photorealistic, 4K, ultra detailed, vray rendering, unreal engine”. No, they don’t really show anything other than cute fluffy cat warriors, stop complaining. Image: Qualcomm

Qualcomm claims this is a speed record and we have no reason to doubt it, although the company also says it’s the first time Stable Diffusion has ever run locally on Android, which doesn’t seem to be true. After some searching we found this blog post by developer Ivon Huang showing how they got Stable Diffusion working on a Sony Xperia 5 II with a Qualcomm Snapdragon 865 and 8GB RAM. However, as Huang also notes in a tweet, creating a 512 x 512 image with this setup took an hour, so Qualcomm certainly wins points for speed, if not for achieving a technical “first.”


Generating an image with similar settings on iOS took about 60 seconds

Another useful comparison is with iOS. Back in December, Apple released the optimizations needed to get Stable Diffusion running locally via its Core ML machine learning framework. To test the system today, we ran Stable Diffusion 1.5 through the Draw Things app with Core ML acceleration on an iPhone 13. With this setup, it took about a minute to render a 512 x 512 image, so Qualcomm comes out well ahead here too, though the obvious caveats apply: Qualcomm is using newer hardware and a custom optimization stack that isn’t publicly available, while our iOS test ran on a 2021 phone using a third-party app.
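For scale, the three local runs mentioned in this story can be lined up directly. A quick sketch (the times are the approximate figures quoted above; the device labels are our shorthand, not official benchmark names):

```python
# Rough speed comparison of the three local Stable Diffusion runs
# reported above (512 x 512 image; all times are approximate).
runs = {
    "Snapdragon 8 Gen 2 (Qualcomm demo)": 15,    # "less than 15 seconds"
    "iPhone 13 via Draw Things + Core ML": 60,   # "about a minute"
    "Snapdragon 865 (Ivon Huang's port)": 3600,  # "took an hour"
}

baseline = runs["Snapdragon 8 Gen 2 (Qualcomm demo)"]
for device, seconds in runs.items():
    # Express each run as a multiple of the Qualcomm demo's time
    print(f"{device}: {seconds} s (~{seconds / baseline:.0f}x the demo)")
```

By that arithmetic, the Qualcomm demo is roughly 4x faster than our iPhone 13 test and around 240x faster than Huang's unoptimized Snapdragon 865 port.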

All those qualifications aside, this is still impressive from Qualcomm, even if it’s just a demo. Getting large AI models to run locally on mobile devices has all sorts of advantages over depending on cloud computing. There’s convenience (you don’t need a cellular connection), cost (developers won’t have to charge users to cover server bills), and privacy (running locally means you don’t send your data to someone else’s computer).

It’s the productization of AI, and it’s happening fast.