As the 2024 presidential election approaches, OpenAI knows its tech is in the hot seat. The Washington Post dubbed this year’s race the “AI Election,” and the World Economic Forum’s recent “Global Risks Report 2024” ranked AI-derived misinformation and disinformation alongside challenges like climate change.
In a blog post shared today, OpenAI outlined the ways it intends to protect the integrity of elections and handle election interference, including “misleading ‘deepfakes’, scaled influence operations, or chatbots impersonating candidates” on its platforms.
DALL-E Images
In its post, OpenAI notes that “tools to improve factual accuracy, reduce bias, and decline certain requests” already exist. DALL-E, for example, can decline “requests that ask for image generation of real people, including candidates,” though the blog post doesn’t specify if or when DALL-E makes that decision.
OpenAI also promises better transparency around the origin of images and the tools used to create them. Early this year, the company says, it plans to implement the Coalition for Content Provenance and Authenticity's (C2PA) digital credentials, an encoding approach that will store details of a DALL-E 3 image's provenance in the file itself.
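OpenAI hasn't published implementation details, but C2PA credentials in a JPEG travel as JUMBF boxes inside APP11 segments, so their presence is straightforward to spot. The Python sketch below is a rough illustration of that check, not OpenAI's code; it only detects the container and does not verify the manifest's cryptographic signature, which real provenance tooling would.

```python
# Heuristic check for C2PA Content Credentials in a JPEG. C2PA manifests
# travel as JUMBF boxes inside APP11 (0xFFEB) segments, so an APP11
# segment mentioning "c2pa" is a decent presence test. This does NOT
# verify the signed manifest -- it only spots the container.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":       # SOI: every JPEG starts here
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF or marker[1] == 0xDA:
                return False               # malformed, EOF, or start of scan
            (length,) = struct.unpack(">H", f.read(2))
            payload = f.read(length - 2)
            if marker[1] == 0xEB and b"c2pa" in payload:
                return True                # APP11 segment carrying C2PA data

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```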
OpenAI is also testing a new tool that can detect whether or not an image was generated by DALL-E, even when the image has been “subject to common types of modifications.”
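That detector isn't public, but the claim about surviving "common types of modifications" is easy to make concrete. The sketch below generates the kinds of variants such a tool would need to handle: lossy re-compression, downscaling, and cropping. The detect_probability stub is a hypothetical stand-in for the unreleased classifier, not its actual interface.

```python
# Sketch of a robustness check for a provenance classifier. OpenAI's
# DALL-E detector isn't public, so detect_probability is a placeholder;
# the point is the set of "common modifications" an image might undergo.
import io
from PIL import Image

def detect_probability(img: Image.Image) -> float:
    """Hypothetical detector stub; swap in a real classifier."""
    return 0.5  # placeholder score

def common_modifications(img: Image.Image):
    yield "original", img
    # Lossy re-compression at a lower JPEG quality
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=60)
    yield "jpeg_q60", Image.open(io.BytesIO(buf.getvalue()))
    # Downscaling to half resolution
    w, h = img.size
    yield "half_size", img.resize((max(1, w // 2), max(1, h // 2)))
    # Center crop keeping the middle 80% of each dimension
    dw, dh = int(w * 0.1), int(h * 0.1)
    yield "center_crop", img.crop((dw, dh, w - dw, h - dh))

if __name__ == "__main__":
    image = Image.open("sample.png")
    for name, variant in common_modifications(image):
        print(f"{name}: p(generated) = {detect_probability(variant):.2f}")
```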
ChatGPT Content
OpenAI doesn’t announce anything too new when it comes to ChatGPT and instead points to its existing usage policies for the platform and its API.
Those policies do not currently allow people to build applications for political campaigning and lobbying, for example, or to create chatbots that pretend to be real people (such as candidates) or real institutions (such as governments). It is also against ChatGPT’s usage policies to create applications “that deter people from participation in democratic processes” or that discourage voting.
The blog post does promise that ChatGPT will soon offer users greater levels of transparency by providing access to real-time global news reporting that includes attributions and links. The platform is also improving access to authoritative voting information by teaming up with the National Association of Secretaries of State (NASS) and directing users to CanIVote.org when asked “certain procedural election-related questions” like “where to vote?”
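OpenAI hasn't said how ChatGPT decides that a question is "procedural," so the snippet below is only a toy illustration of the routing idea, with invented patterns: match a handful of voting-logistics questions and answer with a pointer to CanIVote.org rather than a generated response.

```python
# Toy illustration of the routing behavior described above. The patterns
# here are invented examples, not OpenAI's actual logic.
import re

PROCEDURAL_PATTERNS = [
    r"\bwhere (do|can) i vote\b",
    r"\bpolling (place|location|station)\b",
    r"\bregister(ing)? to vote\b",
    r"\bvoter registration deadline\b",
    r"\babsentee ballot\b",
]

REFERRAL = ("For up-to-date information on voting in your state, "
            "see CanIVote.org, a resource from the National Association "
            "of Secretaries of State.")

def route(question: str) -> str | None:
    """Return a CanIVote.org referral for procedural voting questions."""
    q = question.lower()
    if any(re.search(p, q) for p in PROCEDURAL_PATTERNS):
        return REFERRAL
    return None  # fall through to normal handling

print(route("Where do I vote in Ohio?"))
```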
Users will also be able to report potential violations while using the platform, an option available on the company’s “new GPTs.”
Be safe out there and use your best judgment as the election approaches. If you’d like to learn more about how to spot election misinformation, check out ProPublica’s guide.