Midjourney, the AI image generator known for creating fake images of Donald Trump’s arrest, has banned image prompts that include the name of the presidential hopeful as well as current president Joe Biden.
The decision comes as experts and advocates warn that AI technology could be used to influence voters and spread misinformation ahead of the 2024 presidential election.
Tests of the new policy by the Associated Press showed a “Banned Prompt Detected” warning for requests for images of “Trump and Biden shaking hands at the beach,” the publication reported. Further attempts led the image generator to warn users that they had “triggered an abuse alert.”
“I don’t really care about political speech. That’s not the purpose of Midjourney. It’s not that interesting to me. That said, I also don’t want to spend all of my time trying to police political speech. So we’re going to have to put our foot down on it a bit,” said Midjourney CEO David Holz at a press gathering on March 13. Midjourney floated the idea of banning such prompts last month. Holz told members of the press that he envisions an even more frightening AI reality in 2028, with bad actors able to fine-tune deepfakes and chatbots far beyond what we can imagine now. To that end, he said, “this moderation stuff is kind of hard.”
Other generative AI tools have issued similar prompt bans to address the spread of concerning images. Last year, Microsoft’s Bing Image Generator attempted to ban prompts that included the phrase “twin towers” to curb the spread of memes featuring animated characters evoking the 9/11 attacks. Meme makers, of course, found workarounds.
Not long after, OpenAI’s advanced image generator, DALL-E 3, was revamped with much more restrictive usage policies, adding a “multi-tiered safety system” that limits “DALL-E 3’s ability to generate violent, hateful, or adult content,” Mashable’s Chance Townsend reported. OpenAI issued specific guidelines for election disinformation in January, noting that DALL-E 3 can decline “requests that ask for image generation of real people, including candidates.”
In February, OpenAI announced it had detected and terminated accounts of foreign state-affiliated bad actors using its generative AI technologies.
Until now, the more lax Midjourney had not issued a statement or instituted new policies to curb election disinformation. The platform does prohibit users from generating images “for political campaigns, or to try to influence the outcome of an election.” It was also one of the only leading makers of AI image generators that did not sign a voluntary industry pact, presented last month, pledging to adopt deepfake and disinformation precautions. Last year, the Center for Countering Digital Hate reported that Midjourney users could easily get around community guidelines and moderation to generate consistently conspiratorial and racist images.
A recent report from the nonprofit tested several image generators, including Midjourney, on their ability to block prompts promoting election disinformation. Across the tools, generated images included election disinformation in 41 percent of test cases. Midjourney performed the worst of all tools tested, failing to catch disinformation 65 percent of the time.
“Midjourney’s public database of AI images shows that bad actors are already using the tool to produce images that could support election disinformation,” the center warned.