
Inauthentic AI Photos Taking Over Your Social Feed


Social media platforms like Twitter, TikTok, and Facebook are being overwhelmed by an onslaught of AI-generated images designed to trick users.
On Tuesday, an influential Twitter account released three photographs that appeared to show French President Emmanuel Macron dodging riot police and demonstrators amid clouds of smoke in Paris. The photos were viewed by more than three million people.
To anyone following the development of AI-driven image generators, the pictures were obviously fake. But that wasn't so clear to everyone else. The account's name, "No Context French," was apt: the images carried no description or label indicating they were artificial, and more than a few people bought into their authenticity.
At least two of my London-based professional colleagues came across the photos and assumed they were from this week's sometimes-violent pension reform protests. One of them shared the image in a group chat before realizing it was fake.
For years, social media platforms have been preparing for this moment. They have repeatedly warned about the dangers of deepfake videos and are well aware that anyone with photo editing software can create highly problematic fake images of politicians. Yet social media giants like Twitter, Facebook, and TikTok are entering uncharted waters with the advent of image-generating tools backed by so-called generative AI models.
Tools like Midjourney (free for the first 25 images) and Stable Diffusion (completely free) can do in a few minutes what would have taken 30 minutes or an hour with Photoshop-style software. Neither program places meaningful limits on creating likenesses of celebrities.
Stable Diffusion allowed me to create some questionable "pictures" of Donald Trump and Kim Jong Un playing golf together last year. But image generators have made great strides in the past six months. The current iteration of Midjourney's program can generate images that are nearly indistinguishable from the real thing.
The person who goes by "No Context French" told me they made the Macron photos with Midjourney. When I asked why they didn't label the images as fake, they said anyone could "zoom in and read the comments to determine that these photographs are not real."
They didn't back down when I pointed out that some people had been fooled by the pictures. "We know that these photographs are not real because of all these imperfections," they said, sending me enlarged screenshots of the digital flaws. They didn't answer my question about the people who never bother to inspect images closely, especially on a phone screen.
On Monday, Eliot Higgins, co-founder of the investigative journalism group Bellingcat, played off widespread speculation about Trump's arrest by tweeting phony photographs he'd made of Trump being arrested. The photos, which carried no label identifying them as fake, were viewed by more than 5 million people. Higgins later said he had been banned from Midjourney.

Twitter detectives may be able to spot the distorted hands and warped faces of AI-generated photos, but many regular users are still fooled. Last October, WhatsApp users in Brazil were inundated with false information questioning the legitimacy of their presidential election; many later rioted in support of ex-president Jair Bolsonaro, who lost that election.

When an image is shared on a small screen at the height of a news cycle by someone you trust, it is much harder to spot flaws and fakery. As an end-to-end encrypted messaging app, WhatsApp has limited ability to prevent the spread of false information, such as photos forwarded endlessly among friends, family, and groups.

Although Higgins and "No Context French" were only trying to pull a prank, the extent to which their posts were believed to be genuine highlights a serious threat to the future of social media and, by extension, to society at large.

On Tuesday, TikTok revised its terms of service to prohibit misleading AI-generated content. The most recent version of Twitter's policy on synthetic media states that users should not share deceptive images and that the company "may label tweets containing misleading media." Twitter's new automatic reply to press inquiries is a poop emoji, which is what I received when I asked the Elon Musk-led firm why it hadn't labeled the bogus Trump and Macron images as they went viral.

Tweets promoting the Trump photos as authentic with clickbait hashtags like "BREAKING" have been flagged through Twitter's Community Notes, a feature that allows users to add context to specific tweets. But with Musk at the helm, Twitter has become increasingly permissive about content, which means false photos may do particularly well there.

Although Meta Platforms Inc. promised in 2020 that it would remove AI-generated media designed to deceive users, as of Wednesday at least one "Trump arrest" image shared by a Facebook user as real news had not been taken down. Meta did not respond to a request for comment.

With the proliferation of generative AI technologies like Midjourney and ChatGPT, it will become increasingly difficult for the general public to tell fiction from fact. Last year, the founder of one of these AI tools told me the solution to this problem was straightforward: we just need to make some changes.

Already, I find myself second-guessing even genuine photographs of politicians I see on social media. AI tools will turn many of us into skeptics, even as they lead the charge in spreading false information among the more impressionable members of society.

