Jon M. Stout

Election Integrity and AI



8 Things to Know About Election Disinformation in the Age of AI





Artificial intelligence is helping spread and amplify falsehoods, but you can fight back

By Edward C. Baig



One of the most turbulent presidential election campaigns in American history is in full swing.


But whomever you intend to vote for, the potential for the spread of election disinformation is highly troubling, all the more so in this burgeoning age of generative artificial intelligence (AI), the type of AI that can churn out words, computer code, pictures and video based on text or other prompts from a user.


Seismic advancements in AI are making it more difficult for you to figure out whether an announcement from candidates and their representatives is actually from the campaigns. Supporters on the political fringes — and trolls who like to stir up trouble in general — have more tools at their disposal than ever before.


To varying degrees, candidates have always made exaggerated claims, and meddling in elections dates to the earliest days of politics. But AI-enhanced tools, amplified by the speedy and vast reach of social media, are digital carcinogens that can sow doubt about what is or isn’t factual.


How to vote in your state


Learn more about absentee and early voting, ID requirements and registration in all 50 states, the District of Columbia, Puerto Rico and the U.S. Virgin Islands. 


“The U.S. has confronted foreign malign influence threats in the past,” FBI Director Christopher Wray remarked at a national security conference. “But this election cycle, the U.S. will face more adversaries, moving at a faster pace and enabled by new technology.”


Alondra Nelson agrees. “Election hijinks are as old as time,” says Nelson, a professor at the Institute for Advanced Study, an independent research center in Princeton, New Jersey, and a former acting director of the White House Office of Science and Technology Policy under President Biden. But with AI, “I do think there’s a step change from the past.”


The problem is both foreign and domestic, and it goes beyond doctored text. Readily available online tools let anyone with even modest tech skills create bogus local news websites, clone voices, manipulate still photos and fabricate video so convincing that a candidate appears to say something he or she never uttered in a place he or she never set foot.


That dystopian future has begun. In January, an AI-generated robocall impersonating Biden’s voice urged some 5,000 Democrats in New Hampshire not to vote in the primary. A political operative tied to former Democratic primary challenger Dean Phillips admitted to creating the call.


“Wherever the people are well informed, they can be trusted with their own government; [and] whenever things get so far wrong as to attract their notice, they may be relied on to set them to rights.”

Thomas Jefferson in a Jan. 8, 1789, letter from Paris


The Biden robocall exemplifies how technology can be deployed to widen existing divisions and suppress votes in certain communities, says Claire Wardle, a professor in the School of Public Health at Brown University in Providence, Rhode Island, and cofounder and codirector of the Information Futures Lab. The idea is that changing even a few minds could tilt the scales in a close election.

Here's what we know this campaign season and what to look out for:


1. If the message seems off, it’s probably false


What lent the Biden robocall a modicum of legitimacy is that phone call quality these days is often far from perfect. So damage can be wrought “if it sounds enough like somebody,” Wardle says.




The smartest thing voters can do under such circumstances is apply the smell test.


“Does it really make sense to you that Joe Biden is making a call telling you not to vote in the primary?” asks Hany Farid, a University of California, Berkeley, digital forensics professor and member of Berkeley’s artificial intelligence lab. “Take a breath because the fact is whether you’re 50 and above or a Gen Z or a millennial, we do tend to move very fast on the internet.” 


Video: 2 Ways AI Is Fueling Election Disinformation


2. ‘Cheap fakes’ can be as effective as slick fakes


The budget version of a deepfake, created with Adobe Photoshop or other editing software rather than AI, can wreak havoc, too, Wardle says. Cheap fakes might alter when an image was captured, make an older candidate look more youthful or change the context entirely.


In 2020, manipulated video of Nancy Pelosi made the former speaker appear as if she were intoxicated and slurring her speech. It went viral.


More recently, pictures of Donald Trump circulated on social media showing the former president in a crowd of smiling Black voters (a radio host who asked for the image to be created acknowledged it wasn’t real) and, separately, posing with a half dozen young African American men, images later identified as AI-generated. Trump has courted Black voters, but fewer than 1 in 8 cast their ballots for him in November 2020, according to the Roper Center for Public Opinion Research at Cornell University in New York.


3. The obvious red flags will be fewer


Polished deepfakes present their own challenges. The red flags that used to tip off an AI-generated image are largely gone. Rarely will you see six fingers, an unattached earlobe and other weird body parts or backgrounds that strike you as dubious.


OpenAI, the San Francisco startup behind ChatGPT, recently began teasing a new text-to-video AI model called Sora, which can generate slick one-minute Hollywood-style videos just from a text prompt. Before releasing a version to the public, OpenAI is testing Sora “to assess critical areas for harms or risks,” but the expectation is Sora will be released soon.


OpenAI also recently demonstrated an AI tool called Voice Engine that can generate, based on text input and a 15-second audio sample, emotive voices that closely resemble a particular speaker. The company is again proceeding cautiously, “due to the potential for synthetic voice misuse.”


Once again, employ common sense.



“How do we know a movie like Independence Day is a movie and not news of an alien invasion?” asks Joan Donovan, an assistant professor of journalism and emerging media studies at Boston University. “This is where your own knowledge of how the world works is going to be to your benefit. If something seems too scandalous, too outrageous or too novel, then seek other sources and do a little bit of your own fact checking or find out if reputable news organizations are also reporting on it.”


That fact checking should extend beyond words. Consider doing a reverse image search, which may help you learn when and where a photo was taken.
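
For readers comfortable with a little scripting, here is a minimal Python sketch of that step, assuming you have a public web address for the photo in question. It simply opens a reverse image search in your default browser; the Google Lens “uploadbyurl” address it relies on reflects the service’s publicly observed URL format rather than a documented API, and the example image address is a placeholder.

```python
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open a reverse image search for a publicly hosted photo."""
    # Assumption: Google Lens accepts an image URL via this
    # "uploadbyurl" address; it is not a documented, stable API.
    webbrowser.open("https://lens.google.com/uploadbyurl?url="
                    + quote(image_url, safe=""))

# Example with a placeholder address: check where a suspicious
# campaign photo has appeared before and in what context.
reverse_image_search("https://example.com/suspicious-photo.jpg")
```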


4. What’s flagged as fake may be true


Like almost any technology, or anything at all, AI can be used for evil. But Wardle worries about another possibility.


“That this technology exists allows a politician who does something wrong and gets caught to say, ‘Oh, that’s a deepfake. That wasn’t me. That wasn’t a hot mic moment. Somebody manipulated that,’” she says. “It gives the kind of plausible deniability to the people who are up to no good.” 


5. ‘Pink slime’ may be lurking in your browser history


Pink slime isn’t something from a horror flick. But it can creep up on you just the same.


The term, which you may associate with the meat slurry used in processed foods like some chicken nuggets, refers to websites that masquerade as local news outlets. Their financing, typically undisclosed, comes from political operatives on both the left and the right.


In February, NewsGuard, an organization founded by media entrepreneur and journalist Steven Brill that rates the trustworthiness of news and information websites, launched the 2024 Elections Misinformation Tracking Center. 


It found 963 websites worldwide that repeatedly published false claims about elections. It identified 793 social media accounts associated with those sites. And as of mid-July, 1,265 partisan sites were masquerading as local news outlets with names such as The Philadelphia Leader and The Copper Courier.


What marks a pink slime site? “They load quickly, have no paywalls, have no pop-up ads,” Wardle says. “[Stories] are written by people who are trying to support their own candidate or smear the other one.” 




When readers see a new local news site, they should dig deeper. Go to a fact-checking website like PolitiFact, Snopes or NewsGuard, which may have vetted the unfamiliar outlet’s legitimacy, says Carmen Nobel, editor-in-chief and strategic director of The Journalist’s Resource.


Be on the alert for outrageous headlines, or photos that are manipulated to look like headline images or news stories but don’t link to any article. If a site claims to be a nonprofit — one potential, but not foolproof, designation is a .org at the end of the web address — public information should be available.


For instance, AARP, a nonprofit founded in 1958, has the four most recent years of its annual reports, audited financial statements and IRS Forms 990 on its website. Its mission is listed at the bottom of every web page. ProPublica, an independent nonprofit newsroom, has AARP’s Form 990 for 2018 in its Nonprofit Explorer database and updates those records when the IRS publishes more of its backlogged electronic tax records. 
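
If you want to automate that kind of lookup, ProPublica offers a public Nonprofit Explorer API. Below is a minimal Python sketch, assuming the v2 search endpoint and the response fields (“organizations,” “name,” “ein”) match ProPublica’s published documentation; verify both against the docs before relying on the results.

```python
import requests

# ProPublica Nonprofit Explorer search endpoint (v2), per its public docs.
API_URL = "https://projects.propublica.org/nonprofits/api/v2/search.json"

def search_nonprofit(name: str) -> None:
    """Print the names and EINs of nonprofits matching a search term."""
    resp = requests.get(API_URL, params={"q": name}, timeout=10)
    resp.raise_for_status()
    # Assumption: results arrive under "organizations" with "name" and
    # "ein" fields, as the public documentation describes.
    for org in resp.json().get("organizations", []):
        print(org.get("name"), "- EIN:", org.get("ein"))

# Example: confirm a site’s claimed nonprofit status.
search_nonprofit("AARP")
```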


On any website, look at the About Us page. Does it adequately explain what the news organization is? Do you see a physical address or easy way to get in touch? Treat the absence of these things as a warning. The About AARP page is linked from each page on the website in the section at the bottom called the footer.

While you may appreciate that these pink slime sites have no distracting ads, that also means a fake news site “is getting funded by someone or something else,” Donovan says.


6. Misinformation may not come from the top of the ticket


Yes, you may see fake imagery, stories and video about the candidates before Election Day. But someone could just as easily spread disinformation by impersonating a local election official who says the polls are closed when they’re not.


Election Day 2024, the day when registered voters can cast their ballots for U.S. president and vice president, House members, senators, other officials and issues depending on where they live, is Tuesday, Nov. 5.


The states and Congress have the power to delay a presidential election. But in only one recent instance was a federal election postponed: after Super Typhoon Yutu struck the U.S. territory of the Northern Mariana Islands about 10 days before Election Day 2018, according to the Congressional Research Service.



The U.S. held a presidential election in 1864, too, despite the Civil War being waged close to the nation’s capital and troops being stationed far from their home polling places.


“If you go back to the last three national elections, people tried all kinds of shenanigans to get people not to vote,” Farid says. “ ‘Oh, [the] voting date has changed. Oh, you can text your vote here. Oh, mail-in voting doesn’t work. Oh, your ballot boxes are full.’ ” 


Social media, email and texts circulated the lies quickly and with little, if any, cost. The same can happen today.


7. AI chatbots are unreliable sources for election info


Nearly 4 in 10 adults surveyed by Pew Research Center in February said they don’t trust information about the 2024 presidential race that comes from ChatGPT, a ratio that holds true whether the respondents leaned toward Democrats or Republicans. Their distrust is well founded.


A study released in late February from The AI Democracy Projects found that other AI chatbots, in addition to ChatGPT, cannot be relied upon to produce factual information about the election. The initiative, a collaboration between Proof News and the Science, Technology, and Social Values Lab at the Institute for Advanced Study in Princeton, N.J., assembled election officials from states with Democratic- and Republican-majority legislatures, along with academics, industry experts and journalists.


These experts posed 26 questions to five leading generative AI models — Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2 and Mistral’s Mixtral. Questions were meant to serve as a proxy for voters seeking facts on registering to vote, finding polling places, voting, tabulating ballots and certifying the election.


Among their findings:

  • All the AI models performed poorly on election information.

  • Half the responses were inaccurate.

  • More than a third of responses were considered harmful or incomplete.

  • More than 1 in 10 were deemed biased.

Answers were often hallucinations, meaning they sounded correct or authoritative but were fabricated.


When asked where to vote in a Phoenix ZIP code, Google’s Gemini, which was then called Bard, listed outdated polling places. OpenAI’s GPT-4 falsely responded that someone could wear a Make America Great Again hat to a Texas polling place.


The state prohibits voters from wearing campaign-specific apparel at the polls.

“These election officials were really taken aback and shocked [that] information that sounded so credible could be so wrong,” Nelson told AARP. “It made them quite worried about the whole plethora of issues as we go into this really important election year.”




8. Social media shouldn’t be your source for news


Facebook parent Meta, which has had problems with data misuse during the 2014 midterm elections, fake posts and hate speech during the 2016 presidential elections and misleading ads during the 2020 election and its aftermath, has announced measures to curb AI misinformation in 2024.


Now, Meta’s policy requires Facebook and Instagram advertisers to disclose whenever a social issue, election or political ad contains a photorealistic image or video or realistic-sounding audio that was digitally created or altered.


Meta also said it will not allow ads that independent fact checkers rate as “false, altered, partly false or missing context.” In the coming months it says it will label images that users post to Facebook, Instagram and Threads that it detects are AI generated.


But critics of Facebook and other social media platforms warn users not to rely on them for news.


“The science is unequivocal here. Social media is bad for your mental and physical health, let alone your IQ,” Farid says. “If you must be on social media, don’t use [them] as a news source. … [They are] a place for entertainment and connecting with your buddies.”


Instead, Farid advises people to get news from mainstream media outlets, however imperfect they may be. Before sharing anything, people must be responsible and seek out trusted fact-checking organizations that are “a Google search away,” he says.


This story, originally published April 3, 2024, has been updated with additional information on “pink slime" websites.

Contributing: Chris Morris and Lexi Pandell

Edward C. Baig covers technology and other consumer topics. He previously worked for USA Today, BusinessWeek, U.S. News & World Report and Fortune, and is author of Macs for Dummies and coauthor of iPhone for Dummies and iPad for Dummies. Follow him on LinkedIn, Threads and X, formerly known as Twitter.

