Stable Diffusion is now available as a Mac app, no command line needed

Incredibly realistic and creative AI-generated art has been popping up more and more in recent months, but it was once accessible only to a select few. Now anyone can run a full graphical version of the text-to-image AI Stable Diffusion on any Apple Silicon Mac – without any technical knowledge. You just drag an app into your Applications folder, double-click it, type your request, and the magic happens. It’s called Diffusion Bee, and you can download it here.

Until now, if you wanted to be part of the magnificent wonder of text-to-image AI, you had only two options: pay to use a service like Midjourney or DALL-E (if you were lucky enough to get an invite), or open the terminal to install Stable Diffusion, the free, open-source tool. While there are step-by-step instructions for installing all the necessary components, they often overlook the little things that make it a frustrating exercise for us mere mortals.

Nobody is closing this Pandora’s box now

That’s all over thanks to Divam Gupta, an AI research engineer at Meta. Now anyone with an Apple Silicon computer can harness the power of Stable Diffusion more easily than they could install Adobe Photoshop: a simple drag and drop is all that’s required, with no need to sign up with Adobe or any other service. There are no fees, and it doesn’t even require an internet connection beyond the initial download of the models needed to make it work. No information is uploaded to the cloud. It runs on any Mac with an M1 or M2 processor and takes advantage of the massive processing power of these nimble chips. The developer recommends 16GB of RAM, as machine learning is very memory intensive, but I ran it on a MacBook Air M1 with 8GB of RAM without a single error. Each image took about five to six minutes to generate (one tip: close all your other apps to give Diffusion Bee as much memory as possible).

It’s a remarkable, albeit anticipated, development that’s sure to spark an explosion of new imagery and creativity with consequences we can’t realistically predict just yet.

While many illustrators have criticized these text-to-image tools, mostly out of fear of losing their jobs, there’s another side to this debate: the growing sense that this type of software will be just another tool in the arsenal of creatives worldwide.

A few weeks ago, video artist and director Paul Trillo told me he believes this tool “will not take jobs from visual effects artists.” If anything, he expects, “it will make the work they’re already doing more efficient. It will open the door to entirely new types of techniques and bring photorealistic VFX even to projects on a smaller budget.”

A renaissance

UK-based art director and AR/XR/3D artist Josephine Miller echoed those sentiments, telling me that the technology lets her do more. “Sometimes I feed my designs to DALL-E, which produces variations of them,” she explains, “and then I discover something unexpected that I hadn’t thought of, which takes me in a new creative direction.” Miller – who, with a team of artists and developers, has been working on an AR filter that lets Instagram users view augmented paintings – also says she uses it to show clients variations of her work. “I tell them this is my design, but these others were made by the AI for you to see,” says Miller. “Sometimes they find something they like in a variation that gets integrated into the final design.”

Manuel “Manu.Vision” Sainsily – an artist and XR design manager at Unity – also believes these tools are extremely powerful for creative people. They’re also inevitable, he tells me, and they open a path for imaginative people with no visual execution skills to create something visual. “It can empower people who don’t have power,” he says. Miller agrees, pointing to one particular case in which disabled children were suddenly able to create pictures with DALL-E using just their words – something they couldn’t do before because they couldn’t draw. “It was pretty magical,” she says.

Sainsily believes this technology will lead to a renaissance similar to what we have seen before with other technological revolutions such as digital video editing, desktop publishing or photography. We’ve been remixing for centuries. This AI technology only makes the process faster. And yes, this will greatly change the industry, but like other revolutions, it also offers incredible opportunities.

Whether or not a tool like Stable Diffusion is available as a one-click app, the fact is that this creative moment is inevitable. While there will be laws and lawsuits trying to restrict the use of sampling – as was the case with music – it will become increasingly difficult to enforce control. Even more so when you consider that visual artists already process pieces of other people’s work to create everything from storyboards to full artworks, using Photoshop and other tools, without crediting the originals. It’s hard to blame AI for reusing the work of others when the practice is already common across many visual industries. “People are already remixing regularly to create new things. AI just makes it faster,” emphasizes Sainsily.

In the end, it feels like any regulation is doomed to fail given current industry practices and the nature of AI tools, which will only grow more sophisticated over time, eventually becoming truly synthetic and erasing every trace of the original work. We might as well download this early version of the technology and learn to work with it to our advantage.