Those amazing photos you posted of yourself can now be manipulated to destroy your life

Millions – maybe billions – of social media users routinely post pictures of themselves online without giving it much thought, trusting that it must be “okay” since so many others have done exactly the same thing. Since the advent of ubiquitous digital technology in the late 1990s, much of humanity has embraced the odd notion of storing personal information (like photos) in the vast maw of the Internet – a global communication tool and database accessible to almost everyone – seemingly without personal consequence. Posting such images has become so second nature over the past two decades that most of us now carry a device in our purses and pockets designed specifically for it.

With rapidly evolving advances in artificial intelligence (AI) technology making it possible to manipulate and alter such online images, it may be time to reconsider what we are doing when we take that selfie or other picture and post it to Instagram, Facebook, or TikTok. As Benj Edwards explains, writing for Ars Technica: “The new AI imaging technology allows anyone to save a handful of photos (or video frames) of you and then train the AI to create realistic fake photos showing you doing embarrassing or illegal things.”

We’re not talking Photoshop-style pranks, either. As Edwards’ article explains, the AI technology currently available to the general public can recreate or alter photographic images to the point where they are virtually indistinguishable from reality. Consequently, anyone whose personal or professional life could be damaged by such malicious “deepfakes” should consider themselves a potential target. To be clear, this includes (but is not limited to) anyone who has ever done anything to irritate, offend, or perhaps elicit feelings of envy or jealousy in another person: a former spouse, lover, friend, partner, colleague, business competitor, or really anyone whose interests might be served by the creation of such images.


As Edwards demonstrates through several visual examples at Ars Technica, the AI image-generation models now publicly available can “learn” a target’s likeness from as few as five images and then construct an entirely illusory narrative about that person. Such images can be taken from a social media account or captured as individual frames from videos posted somewhere online. As long as the image source is accessible, whether through those dubious “privacy” settings or otherwise, the AI model can work on it any way the user pleases. For example, as Edwards explains, images can be fabricated that depict realistic criminal “mug shots” or illegal and indecent activity, then easily and anonymously sent to an employer, a news outlet, or an elementary school chat room on TikTok. Edwards’ team used the open-source AI tools Stable Diffusion and Dreambooth to “reconstruct” photos of an artificial test subject (named “John”), including images showing “John” half-naked at a children’s playground, “John” romping around a local bar dressed as a clown, and “John” standing naked in his empty classroom just before his students enter.
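To make the mechanics concrete, here is a minimal sketch (an illustration under stated assumptions, not Edwards’ team’s actual code) of the second half of that workflow, using Hugging Face’s diffusers library. It assumes a Stable Diffusion checkpoint has already been Dreambooth-fine-tuned on a handful of photos of the subject, bound to a made-up placeholder token (“sks person”); the local model path is hypothetical.

```python
# Sketch: generating fabricated scenes of a Dreambooth-trained subject.
# Assumes a Stable Diffusion checkpoint fine-tuned on ~5 photos of the
# subject, with the placeholder token "sks person" referring to them.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to the fine-tuned checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-finetuned-model", torch_dtype=torch.float16
).to("cuda")

# Any prompt that mentions the learned token now renders the subject
# into whatever fabricated scene the prompt describes.
image = pipe("a police mug shot photo of sks person").images[0]
image.save("fabricated_scene.png")
```

The point is not the specific library but how little an attacker needs: a few scraped photos, consumer hardware, and a one-line prompt.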

As Edwards reports:

Thanks to AI, we can make it appear as if John committed illegal or immoral acts, such as breaking into a house, using illegal drugs, or showering naked with a student. With additional AI models optimized for pornography, John can be a porn star, and that capability can even veer into CSAM territory.

We can also generate images of John doing seemingly harmless things that could still be personally devastating to him – like drinking at a bar when he’s committed to sobriety, or spending time somewhere he shouldn’t be.

Significantly, Edwards’ team resorted to an artificial construct only because a real volunteer was ultimately unwilling, for understandable privacy reasons, to allow their own manipulated images to be published.


AI modeling technology is developing to such an extent that it is becoming practically impossible to distinguish such images from real ones. Safeguards such as legally requiring an invisible digital “watermark” or other covert marker in these kinds of synthetic images are among the proposals Edwards cites to curb misuse of the technology. But, as Edwards explains, even if such “counterfeits” are ultimately exposed, there remains the possibility of irreversible damage to an individual’s personal or professional reputation. In other words, once a school-age child has been slandered in this way, it is little comfort that the so-called “photos” later turn out to be fake.
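To illustrate the watermarking idea, here is a minimal sketch (a toy example, not any scheme Edwards or lawmakers have actually proposed) of hiding a short identifier in the least-significant bits of an image’s pixels using Python’s Pillow library. Production systems use far more robust techniques designed to survive cropping and re-encoding; this one does not.

```python
# Toy invisible watermark: store a short tag in the red channel's
# least-significant bits. Requires a lossless format (PNG) to survive.
from PIL import Image

def embed_tag(path_in: str, path_out: str, tag: str = "AI") -> None:
    """Write the bits of `tag` into the red-channel LSBs, one per pixel."""
    img = Image.open(path_in).convert("RGB")
    px, w = img.load(), img.size[0]
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        # Clear the lowest red bit, then set it to the tag bit.
        px[x, y] = ((r & ~1) | int(bit), g, b)
    img.save(path_out, "PNG")

def read_tag(path: str, n_chars: int = 2) -> str:
    """Recover an n_chars-long tag from the red-channel LSBs."""
    img = Image.open(path).convert("RGB")
    px, w = img.load(), img.size[0]
    bits = "".join(str(px[i % w, i // w][0] & 1) for i in range(n_chars * 8))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()
```

The image is unchanged to the human eye, but a verifier that knows where to look can recover the tag – which is exactly why such marks only constrain honest tooling; a malicious user can simply strip them or never embed them at all.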

Large tech companies responsible for developing AI modeling software have been criticized for failing to reckon with the potential human costs of spreading this technology, particularly its reliance on training datasets containing racist and sexist stereotypes and depictions. And, as Edwards notes, commercially available AI deep-learning models have already caused dismay among professional graphic designers whose own copyrighted work has been scraped by AI to create images for commercial use.

Regarding the potential human consequences of malicious misuse of AI, Edwards believes women are particularly vulnerable:

Once a woman’s face or body is trained into the image set, her identity can be trivially inserted into pornographic imagery. This is due to the large quantity of sexualized images found in commonly used AI training datasets (in other words, the AI knows how to generate those very well). Our cultural biases toward the sexualized depiction of women on the internet have taught these AI image generators to frequently sexualize their output by default.

Faced with such a paradigm shift and potentially devastating invasion of their privacy, people will no doubt reason that this is unlikely to happen to them. That may very well be true for most, but as these tools become more available and easier for non-technical users to operate, it’s hard not to imagine the social disruption they could bring. For those who think they’re at particular risk, one precaution Edwards suggests “would be a good idea” is deleting all of your photos online. Of course, as he concedes, not only is that an unappealing option for most people (given our attachment to social media), but for many it’s practically impossible. Politicians and celebrities, for example, whose photos have been posted all over the internet for decades – and whose visibility makes them natural targets for such “deepfakes” – will likely be the first forced to grapple with the problem as the technology becomes more widespread.


Of course, there is always the possibility that we will eventually become so accustomed to such fakery that it loses its effectiveness. As Edwards suggests:

Another possible antidote is time. As awareness grows, our culture may eventually absorb and mitigate these issues. We may come to accept this kind of manipulation as a new form of media reality that everyone must be aware of. The provenance of each photo we see will become that much more important; much like today, we will need to completely trust who is sharing the photos to believe them…[.]

Unfortunately, “trust” is a very scarce commodity, especially in the politically and socially polarized environment we currently inhabit, where people tend to believe whatever suits their preconceptions. It seems fitting that the mere existence of social media, and the carefully filtered “bubble” mentality it fosters, is probably the biggest enabler of this kind of unwanted invasion – just the latest example of the privacy we all sacrificed from the start in order to “be logged in.”