Google introduces tools for developers to integrate machine learning and AI into their products

Google LLC today introduced a suite of tools to help developers integrate machine learning and artificial intelligence into their applications using powerful AI models and solutions.

A number of new tools for TensorFlow were announced today at Google I/O, the company’s annual developer conference. TensorFlow is a free, open-source machine learning and AI software library focused on the training and inference of neural networks, and it runs on a wide range of hardware, from servers to mobile devices.

Google is providing developers with enhanced support for AI models, including generative AI and image diffusion models, through TensorFlow so they can more easily integrate them into their applications using the library. Generative AI has surged in popularity recently with the launch of OpenAI LP’s ChatGPT, a chatbot capable of humanlike conversation, and Stability AI Ltd.’s Stable Diffusion, an image generator capable of producing striking and surreal works of art.

Keras, a high-level Python library for interacting with TensorFlow, is gaining two additions designed to make it easier for developers to add AI capabilities to their apps with just a few lines of code: KerasCV for computer vision and KerasNLP for natural language processing.

Whether a developer wants text-generating AI or image-generating AI, they can pass a prompt to KerasNLP or KerasCV and get output right in their app with just a few lines of code. And because these new additions are part of Keras, they have full access to the TensorFlow ecosystem.
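
As a rough illustration of what those few lines look like, the sketch below uses the publicly documented KerasNLP and KerasCV generation interfaces. The preset name, prompts and image size are illustrative choices, and the available models vary by library version.

```python
# A minimal sketch of prompt-in, output-out generation with KerasNLP and
# KerasCV. Preset names, prompts and sizes here are illustrative examples.
import keras_nlp
import keras_cv

# Text generation: load a pretrained causal language model and prompt it.
text_model = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")
print(text_model.generate("The future of on-device ML is", max_length=64))

# Image generation: Stable Diffusion returns images as NumPy arrays.
image_model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)
images = image_model.text_to_image("a surreal watercolor cityscape", batch_size=1)
```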

Google has also updated DTensor, a dedicated tool that enables parallel training of AI models at scale. As AI models grow larger, they can no longer be trained on a single device, so developers have traditionally had to split the model or its workload across multiple processors, be they graphics processing units or tensor processing units.

With this update, DTensor enables richer and more powerful training and fine-tuning, with performance on par with industry benchmarks for training on large datasets. As a result, developers can train their AI models faster and more efficiently.
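
At its core, DTensor uses a mesh-and-layout model to describe how tensors are split across devices. Here is a minimal sketch, assuming eight local GPUs; the device list and mesh shape would change with the actual hardware.

```python
# A minimal sketch of data-parallel sharding with DTensor. The eight-GPU
# mesh is an assumption; adjust the device list for the actual hardware.
import tensorflow as tf
from tensorflow.experimental import dtensor

# A one-dimensional mesh with a "batch" axis spanning eight devices.
mesh = dtensor.create_mesh([("batch", 8)], devices=[f"GPU:{i}" for i in range(8)])

# Shard the first (batch) axis across the mesh and replicate the second.
layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)
sharded = dtensor.call_with_layout(tf.zeros, layout, shape=(64, 128))
```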

Since much machine learning work starts in research, Google has also made it easy for researchers to bring their models to TensorFlow through an application programming interface called JAX2TF, which converts models written in JAX, a powerful framework for transforming numerical functions. Researchers developing entirely new models can continue to do so in JAX, and when they’re ready for production, they can push the models through the API and they’re good to go.
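
The published pattern for that bridge looks roughly like the sketch below; the one-layer model here is a made-up placeholder, not anything Google ships.

```python
# A minimal sketch of converting a JAX function to TensorFlow with jax2tf.
# The tiny linear "model" is a placeholder for a real research model.
import jax.numpy as jnp
import tensorflow as tf
from jax.experimental import jax2tf

def predict(params, x):
    # A single dense layer written in JAX.
    return jnp.dot(x, params["w"]) + params["b"]

params = {"w": jnp.ones((4, 2)), "b": jnp.zeros(2)}

# Wrap the converted function so it can be traced and exported as a SavedModel.
tf_predict = tf.function(jax2tf.convert(predict), autograph=False)
print(tf_predict(params, tf.ones((1, 4))))
```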

Google is also introducing ML Hub, a space for machine learning and AI solution development. In the hub, developers and engineers can define what they want to do and which use cases they want to pursue, and Google then provides them with training, templates, modules and tools to create customized AI solutions from the Google ecosystem.

Google has many different tools for integrating machine learning and AI into developer apps. However, they are complex and scattered, which can make it difficult for a developer to figure out which one to use to achieve a specific result.

MediaPipe makes it easy to deploy machine learning to mobile devices

Not all AI runs on huge server farms. Some models are small enough to run on far more limited computing devices such as cell phones. To make this easier, Google has updated MediaPipe.

MediaPipe makes it easy to build, customize and deploy on-device machine learning solutions for portable, edge-based computing, such as those that run on a mobile device, a desktop or the web. By leveraging on-device capabilities, machine learning models can recognize gestures, such as hand and face movements, enabling powerful features for devices. MediaPipe can also be used for many other functions, such as automatic translation, background blurring and numerous other purposes.
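
MediaPipe’s Python tasks API gives a sense of the on-device workflow. In the sketch below, the .task model bundle and image path are placeholders; the gesture recognizer bundle has to be downloaded from the MediaPipe documentation first.

```python
# A minimal sketch of on-device gesture recognition with MediaPipe Tasks.
# "gesture_recognizer.task" and "hand.jpg" are placeholder file paths.
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

options = vision.GestureRecognizerOptions(
    base_options=python.BaseOptions(model_asset_path="gesture_recognizer.task")
)
recognizer = vision.GestureRecognizer.create_from_options(options)

image = mp.Image.create_from_file("hand.jpg")
result = recognizer.recognize(image)
if result.gestures:
    top = result.gestures[0][0]  # top-ranked gesture for the first hand
    print(top.category_name, top.score)
```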

One notable use case for MediaPipe and smaller AI models is accessibility, particularly for people who are unable to use their limbs to operate devices. To that end, Google developed “Project Gameface,” a computer control interface that uses facial expressions to drive mouse movements in video games to help disabled gamers.

Google teamed up with Lance Carr, a gamer with a rare form of muscular dystrophy whose house burned down, destroying the equipment he used to play games such as World of Warcraft. Engineers at Google set out to use MediaPipe and a webcam to let Carr control his gaming experience with facial expressions: raising an eyebrow to click and drag, for example, or opening his mouth or twitching his lip sideways to move the cursor.
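
MediaPipe’s face landmarker can report expression “blendshape” scores, and something in the spirit of Gameface can be sketched by mapping those scores to actions. The thresholds, blendshape choices and action names below are hypothetical illustrations, not Google’s actual implementation.

```python
# A hypothetical sketch of the Gameface idea: map MediaPipe FaceLandmarker
# blendshape scores to mouse actions. Thresholds and mappings are made up.
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # expression scores such as "jawOpen"
)
landmarker = vision.FaceLandmarker.create_from_options(options)

def expression_to_action(result):
    """Translate one frame's blendshape scores into a (made-up) action name."""
    if not result.face_blendshapes:
        return None
    scores = {b.category_name: b.score for b in result.face_blendshapes[0]}
    if scores.get("browOuterUpLeft", 0.0) > 0.5:
        return "click_and_drag"  # raised eyebrow
    if scores.get("jawOpen", 0.0) > 0.5:
        return "move_cursor"     # open mouth
    return None
```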

All of this can be done on a single machine without any especially powerful hardware, and it has restored Carr’s ability to play and fly around Azeroth again.

Project Gameface represents just one of the many possibilities of on-device AI, but it is a very powerful one. “Control my computer with funny faces? It’s pretty awesome,” Carr said.
