How to spot AI-generated content during the election season

Last month, as New Hampshire voters prepared to cast their ballots in the state’s primary election, some woke up to an uncharacteristic call to action from the country’s leader. They were the targets of an AI-generated robocall designed to sound like President Joe Biden himself, urging them to stay home and not vote in the primary — an unnerving use of advancing deepfake technology that makes the robocalls of yore sound like laughably crude attempts.

Quickly deemed fake by the New Hampshire Department of Justice (which followed up with urgent calls urging residents to still vote), the call was created by the Texas-based Life Corporation using voice-cloning technology from AI startup ElevenLabs. Life Corporation has since been accused of engaging in voter suppression.

It was the latest warning sign that actors armed with AI may attempt to influence the upcoming presidential election.

Concerns about AI have run the gamut of social and political possibilities, but nearly all observers agree the technology has untold potential to expand the reach of disinformation. The Brookings Institution, a nonprofit public policy center, argues that while many fears surrounding AI’s potential may be hyperbolic, attention is warranted on how generative AI will profoundly change the production and diffusion of misinformation.

“AI could make misinformation more pervasive and persuasive. We know the vast majority of misinformation stories just aren’t seen by more than a handful of people unless they ‘win the lottery’ and break through to reverberate across the media landscape. But with each additional false story, image, and video, that scenario becomes more likely,” Brookings wrote.

Legal nonprofit the Brennan Center for Justice has already dubbed the 2020s “the beginning of the deepfake era in elections,” not just in the U.S. but around the globe. “Republican primary candidates are already using AI in campaign advertisements,” the Center reported. “Most famously, Florida Gov. Ron DeSantis’s campaign released AI-generated images of former President Donald Trump embracing Anthony Fauci, who has become a lightning rod among Republican primary voters because of the Covid-19 mitigation policies he advocated.”

Social media’s response may also play a major role in AI’s “threat” to democracy and truth, and in the words of the companies behind its development, AI is only getting smarter, not to mention more ubiquitous. Developing stronger media literacy is more important than ever, and spotting election misinformation is becoming more complex.

McKenzie Sadeghi, AI and foreign influence editor for misinformation watchdog NewsGuard, explained to Mashable that the organization has tracked AI’s weaponization in a variety of forms, from generating entire, low-quality news websites from scratch to deepfake videos, audio, and images. “To date, we’ve identified 676 websites that are generated by AI and operating with little to no human oversight,” Sadeghi said. “One thing we’ll be closely watching is the intersection of AI and ‘pink slime’ networks, which are partisan news outlets that portray themselves as trusted local news outlets and attempt to reach voters and target them with Facebook advertising.”

According to Sadeghi and NewsGuard’s research, those numbers are expected to grow. “When we first started identifying these websites in 2023, we initially found 49 websites. We have continued to track those on a weekly basis and found that it shows no signs of slowing down. And if it continues on the trajectory, it can be closer to the 1,000s by the time we approach the election,” Sadeghi explained.

What to know about AI laws and regulations

AI remains a gray area for regulation, as congressional leaders have failed to agree on a risk-averse path forward and have yet to pass any law mitigating the rise of AI.

In October 2023, the Biden administration issued an executive order outlining new standards for AI safety and security, including a directive for the Department of Commerce to establish ways to detect AI content and scams. 

Responding to an increase in AI robocall scams and deepfakes, the FCC announced a proposal to outlaw AI robocalls completely under the Telephone Consumer Protection Act (TCPA). 

The Federal Election Commission (FEC) has yet to issue AI regulations, but Chair Sean Cooksey has stated guidelines will be developed this summer.

Some state legislatures have also put their opinions down on the books. The National Conference of State Legislatures has compiled a list of states addressing the threat of AI in elections. States with explicit statutes that “prohibit the publication of materially deceptive media intended to harm a candidate or deceive voters,” or that prohibit deepfakes specifically, include:

  • California

  • Michigan

  • Minnesota 

  • Texas

  • Washington

Other states have introduced laws to curb the use of AI during the election, but few have passed. Successful state laws include:

  • Michigan (requires the disclosure of AI in election ads)

  • Minnesota (prohibits deepfakes intended to influence an election)

  • Washington (requires the disclosure of “synthetic media” used to influence an election)

AI watermarks

Another widely promoted stopgap for AI content is image and video watermarking technology. For AI, this involves teaching a model to embed a text- or image-based signature in every piece of content it creates, allowing future algorithms to trace the content back to its origins.
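To make the idea concrete, here is a toy Python sketch (not any company’s actual method) that hides a short signature in an image’s least-significant bits and reads it back later. The filename and signature string are placeholders, and production systems embed marks at the model level rather than editing pixels after the fact:

```python
import numpy as np
from PIL import Image

SIGNATURE = "model-v1"  # hypothetical origin tag (8 bytes = 64 bits)

def embed_watermark(img: Image.Image, tag: str = SIGNATURE) -> Image.Image:
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.reshape(-1)  # view over the pixel buffer
    # Overwrite the least-significant bit of the first len(bits) channel values.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return Image.fromarray(pixels)

def read_watermark(img: Image.Image, length: int = len(SIGNATURE)) -> str:
    flat = np.array(img.convert("RGB"), dtype=np.uint8).reshape(-1)
    return np.packbits(flat[: length * 8] & 1).tobytes().decode(errors="replace")

marked = embed_watermark(Image.open("generated.png"))  # placeholder filename
marked.save("generated_marked.png")  # must stay lossless; JPEG would destroy the bits
print(read_watermark(Image.open("generated_marked.png")))  # -> "model-v1"
```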

In 2023, OpenAI, Alphabet, Meta, and other major names in AI development pledged to develop their own watermarking technologies that would help identify manipulated content. 

In October 2023, Meta introduced its proposed solution known as Stable Signature, a method for adding watermarks to images created using its open source generative AI tools. Rather than applying watermarks to images post-production, Stable Signature adds invisible watermarks attributable to specific users directly in generative AI models themselves, according to Meta. 

On Feb. 7, OpenAI announced it would be adding similarly detectable watermarks to all images generated by DALL-E 3, following guidelines created by the Coalition for Content Provenance and Authenticity (C2PA). As Mashable’s Cecily Mauran reported, the C2PA is a technical standard used by Adobe, Microsoft, BBC, and other companies and publishers to address the prevalence of deepfakes and misinformation “by certifying the source and history (or provenance) of media content.”
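For readers who want to check provenance themselves, the C2PA community publishes an open-source reference CLI, c2patool, that prints a file’s manifest when one is present. A minimal wrapper, assuming the tool is installed and with a placeholder filename:

```python
import json
import subprocess

def c2pa_manifest(path: str):
    # c2patool prints the file's C2PA manifest store as JSON when one exists.
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest found, or the tool could not read the file
    return json.loads(result.stdout)

manifest = c2pa_manifest("dalle3_image.png")  # placeholder filename
if manifest:
    print("C2PA credentials found:", list(manifest))
else:
    print("No C2PA credentials; treat the image's origin as unverified.")
```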

Watermarks aren’t a full solution, however. Though research is still in its early stages, it suggests that AI watermarks are vulnerable to manipulation, removal, and even attack from third-party actors.
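Even the toy scheme sketched above shows the fragility: a single lossy re-encode scrambles the least-significant bits, and the signature no longer reads back (filenames carry over from the earlier sketch):

```python
import numpy as np
from PIL import Image

# One lossy JPEG round trip destroys naive pixel-level watermarks.
Image.open("generated_marked.png").convert("RGB").save("reencoded.jpg", quality=90)
flat = np.array(Image.open("reencoded.jpg").convert("RGB"), dtype=np.uint8).reshape(-1)
print(np.packbits(flat[:64] & 1).tobytes())  # gibberish, not b"model-v1"
```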

AI company policies

Adobe has previously committed to the AI safety and security measures announced by the White House in September, as well as to the Content Authenticity Initiative supported by the C2PA.

In November, Microsoft (creator of Bing AI chatbot Copilot) issued its own election guidelines, which included a new tool that lets users digitally sign and authenticate their media (including election materials) using C2PA’s watermarking guidelines. “These watermarking credentials empower an individual or organization to assert that an image or video came from them while protecting against tampering by showing if content was altered after its credentials were created,” the company explained. 

Microsoft also pledged to create an Elections Communication Hub and specific “campaign success teams” to assist candidates and election authorities. 

In December, Alphabet announced that it would restrict the types of election-related questions Google’s AI model Gemini (formerly Bard) would be able to respond to, in order to curb misinformation often spread by chatbots. 

In January, OpenAI released new election policies to combat misinformation, including better transparency around the origin of images and the tools used to create them, and a future AI image detection tool. OpenAI has already banned developers from using its technologies for political campaigning.

Despite these attempts at security, many experts fear that the most dangerous forms of AI tampering won’t come from the industry’s big names, but from unregulated, open-source uses of AI tech at large.

Social media policies

Social media platforms, arguably the primary hubs for the dissemination of AI-manipulated content, have issued policies of varying rigor to address the use of AI on their platforms — policies that Sadeghi notes are not always enforced.

YouTube (and its parent company Google) announced in September that any AI alteration made to political ads has to be disclosed to users. 

Most recently, Meta announced it would double down on identifying AI-altered images across its platforms Facebook, Instagram, and Threads. Meta says it will add its in-house AI labels to all AI content from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Meta previously announced it would require advertisers to disclose AI-altered ads on its platforms, but has since come under fire for failing to enforce its policy against “manipulated media.”

Snapchat has pledged to continue human review of all political ads, paying careful attention to “misleading uses” of AI to deceive users.

TikTok has also issued a plan for the 2024 election cycle, leaning on its previous collaborations with fact-checking organizations and reiterating its own blue-check verification system. The platform also noted it will be vigilant about AI manipulation: “AI-generated content (AIGC) brings new challenges around misinformation in our industry, which we’ve proactively addressed with firm rules and new technologies. We don’t allow manipulated content that could be misleading, including AIGC of public figures if it depicts them endorsing a political view. We also require creators to label any realistic AIGC and launched a first-of-its-kind tool to help people do this.”

X/Twitter has yet to announce new policies for the 2024 election, following a policy reversal that now allows political campaign advertising on the site. Current guidelines ban the sharing of “synthetic, manipulated, or out-of-context” content and emphasize labeling and community notes to stop the spread of misinformation.

“Generally speaking, while some do have policies in place, bad actors have found ways to easily get around those,” Sadeghi said of social media companies’ steps to combat AI misinformation. “We have found that misinformation doesn’t always contain the label, or by the time a label is added to it, it’s already been viewed by thousands and thousands of people.”

How to read for AI in text

A lack of regulation and consistent policy enforcement is worrisome for potential voters unequipped to assess the truth of the content spread online. 

On fraudulent news sites

In assessing whether an online news page is entirely AI-generated, or even whether just some of its content is AI-based, Sadeghi explained that NewsGuard’s methods involve scanning for indicators of AI plagiarism and hallucination, as well as analyzing the site’s own policies and transparency.

“It mainly comes down to the quality and nature of the content, as well as the transparency of the site,” Sadeghi explains. “Most of the AI-generated sites that we have seen have these telltale signs, which include conflicting information. A lot of these AI models hallucinate and produce made-up facts. So we’ll see that a lot. Another thing is the use of plagiarism and repetition. A lot of these websites are recycling news content from mainstream sources and rewriting it as their own without any attribution.”

Sadeghi suggests looking for and fact-checking author names (or bylines) at the top of these stories, as well as contact information for the writer, editors, or the publication itself. Readers can also look for repeated, formulaic phrases like “In conclusion,” which lack what Sadeghi calls a human touch.
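The recycling tell is one of the few that is easy to check mechanically. A minimal sketch using Python’s standard library, with placeholder filenames for a suspect article and a candidate source story:

```python
from difflib import SequenceMatcher

# Placeholder filenames; in practice you would save the suspect article and
# a candidate wire story yourself before comparing them.
suspect = open("suspect_article.txt", encoding="utf-8").read()
source = open("wire_story.txt", encoding="utf-8").read()

ratio = SequenceMatcher(None, suspect, source).ratio()
print(f"similarity: {ratio:.0%}")  # lightly rewritten copies still score very high
```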

Spotting AI writing in text at large

Generally, similarly simple strategies can be used to quickly determine whether a body of text is AI-generated.

For example, the Better Business Bureau warns that AI chatbots are being used to generate text for phishing and other text-based scams, and suggests watching out for telltale signs. According to guides from MIT’s Technology Review and Emeritus, look for the following (two of these signals are rough enough to check in code, as sketched after the list):

  • Overuse of common words such as “the,” “it,” or “is.”

  • No typos or varying text, which could indicate a chatbot “cleaned up” the text. 

  • A lack of context, specific facts, or statistics, without citations. 

  • Overly “fancy” language and jargon, and little to no slang or different tones. 
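A toy screen for two of the signals above; the word lists and any thresholds a reader might apply are illustrative guesses, and this is a heuristic, not a reliable detector:

```python
import re
from collections import Counter

FUNCTION_WORDS = {"the", "it", "is", "of", "and", "to", "a", "in"}
FORMULAIC = ("in conclusion", "it is important to note", "overall,")

def ai_text_signals(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    # Share of very common function words, one of the overuse tells above.
    function_share = sum(counts[w] for w in FUNCTION_WORDS) / max(len(words), 1)
    # Count of formulaic connector phrases that lack a "human touch."
    formulaic_hits = sum(text.lower().count(p) for p in FORMULAIC)
    return {"function_word_share": round(function_share, 3),
            "formulaic_phrases": formulaic_hits}

sample = "In conclusion, it is important to note that the data is clear."
print(ai_text_signals(sample))
```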

Some researchers have even developed games to help individuals read for and spot computer-generated text. 

How to spot AI-generated images

Most online users are more familiar with spotting the kind of “uncanny valley” images generated by AI, including slightly off human faces, poorly rendered hands, and eerily toothy smiles. 

But deepfake technology is making it more difficult to fully pinpoint where reality ends and AI begins.

Keeping an eye out for generative AI’s fingerprints

The first step is understanding the context of an image and how it’s presented, according to the Better Business Bureau. “Ask yourself these kinds of questions: Is the image or voice being used with pressure to take an urgent action, one that seems questionable, such as sending money to a stranger, or through an unexpected or odd payment mechanism? Is the context political, and does it seem like someone is trying to make you feel angry or emotional?”

If it’s an image of a popular celebrity or well-known figure, search for the highest-resolution image possible and zoom in. Look for common visual mistakes generated by AI, such as:

  • Logical inconsistencies, such as floating objects, weird symmetry, or objects and clothing that seem to disappear into other objects or backgrounds. 

  • A lack of distinguishing between the foreground and the background, and weird shadows. 

  • Strange textures or a very “glossy” or “airbrushed” appearance to the skin. 

Apply the same kind of scrutiny for videos, but also keep in mind:

  • Unnatural lighting or shadow movements.

  • Unnatural body movements, like blinking (or the lack of it). 

Be attuned to AI-generated audio as well, and when in doubt, double-check a photo or video with those around you or with a reputable news source.

Fact-checking photos using Google

Individuals can also use tools they interact with daily to help detect AI-generated images. Google’s recent expansion of its About This Image tool lets any user check an image’s history and context, including AI labels and watermarks.

How to use automatic AI detector tools

As generative AI has proliferated across markets, so too have automatic AI detection services. Many of these tools are designed by AI developers themselves, although misinformation watchdogs have released their own AI-spotting resources.

But much like the content they’re designed to spot, these tools have their limits, said Sadeghi. And even their creators have admitted faults: after launching its own AI text classifier in early 2023, OpenAI pulled the tool over its reported low accuracy.

For example, in the realm of education, AI and plagiarism bots have been criticized for exacerbating model biases against certain non-English speakers and for generating false positives that harm students. 

But they offer a place to start for the critical eye.  

“There’s a growing amount of AI detection tools such as GPTZero and others,” Sadeghi explained. “I don’t think they are good to be used solely on their own — sometimes they can result in false positives. But I think in certain cases they can provide additional context.”

Free AI-detecting tools

  • Origin browser extension: Created as a free, browser-based tool by AI detector GPTZero, Origin helps users distinguish whether text is human- or computer-written.

  • Copyleaks: A free web tool and Chrome browser extension that scans for AI-generated text and plagiarism. 

  • Deepware: A deepfake video and image scanner that lets users copy and paste links to suspected deepfake content.

Paid subscription tools

  • GPTZero: One of the most popular AI content detectors, GPTZero’s paid subscriptions are marketed for teachers, organizations, and individuals.  

  • SynthID: A paid tool for Google Cloud subscribers who use the company’s Vertex AI platform and the Imagen image generator, SynthID helps users detect AI-generated images and offers watermarking tools for AI image creators.

  • NewsGuard browser extension: NewsGuard offers its own paid service for detecting misinformation broadly, which includes a browser extension that automatically analyzes a news website’s credibility, including AI-altered content.
