Generative AI fake news a looming threat, but unlikely to disrupt 2024 elections in the United States

By Sujeet Rajan

NEW YORK, April 18, 2024 – It is an era in which disinformation, blatant propaganda with scant respect for truth, deepfakes created by generative artificial intelligence (AI), trolls and bots populate social media. All of this has become part and parcel of how we engage with the news – and a vexing process for many readers trying to discern the real from the unreal.

However, when it comes to likely election manipulation in the Presidential and local elections this November, there is growing evidence that checks and balances are being put in place to counter generative AI and bot-generated fake news on social media, and to quash its spread.

Big Tech and activists are gearing up to call out fake news through the very social media channels it may emanate from, and concerted government action is in the pipeline, from the federal to the state level, to penalize the perpetrators of generative AI fakery.

A big factor, however, is the sheer unpredictability of manipulative media in this year's elections: while there is undoubtedly a huge buzz around distortion through AI, the technology is still evolving, and is perhaps not yet nuanced enough, or free of barriers, to dupe the masses and create upsets at the polls.

Think of the insidious AI pieces lurking in the wings before elections as race-horses idling in a paddock before they are led to the starting stalls.

Nobody can predict with certainty which form of fake AI will hurtle out of the starting gate fastest, lead the race, or indeed whether any of them will ultimately emerge a 'winner' on the track, with enough reason-bending power to sway voters' choices and create a genuine upset in an election.

Imagine a race where the fake AI horses keep galloping harder and harder, but the race never ends for them. Or worse, for campaigns that resort to such content, it becomes a boomerang, turning on them and sinking their credibility.

This year, around half of the world's population has already voted or will vote in an election at home. A total of 83 elections, the largest concentration for at least the next 24 years, are on the agenda, according to the consulting firm Anchor Change. These include the general elections in India and the Presidential elections in the United States.

The spate of elections gives rogue actors in the AI arena ample opportunity to hone their skills. It also gives those trying to diminish AI's impact time to sharpen their defenses before the critical Presidential elections in November. It is a Goliath vs. Goliath battle, along the lines of the world's best security experts staying on their toes to thwart the nefarious plans of malicious hackers.

Scientific American reported earlier this year that disinformation campaigns, online trolls and other “bad actors” are set to increasingly use generative AI to fuel election falsehoods, citing a study published by PNAS Nexus. Researchers project that AI will help spread toxic content across social media platforms on a near-daily basis in 2024, across the globe.

“Social media lowered the cost for disseminating misinformation or information. AI is lowering the cost for producing it,” Zeve Sanderson, executive director of New York University’s Center for Social Media and Politics, was quoted as saying by Scientific American. “Now, whether you’re a foreign malign actor or a part of a smaller domestic campaign, you’re able to use these technologies to produce multimedia content that’s going to be somewhat compelling.”

Sanderson also argued against overstating potential harms, saying the actual effects of increased AI content and bot activity on human behaviors – including polarization, vote choice and cohesion – still need more research.

“The fear I have is that we’re going to spend so much time trying to identify that something is happening and assume that we know the effect,” Sanderson told Scientific American. “It could be the case that the effect isn’t that large, and the largest effect is the fear of it, so we end up just eroding trust in the information ecosystem.”

There are plenty of recent examples of distortion in elections. Nature reported that in Indonesia's presidential election this year, the winning candidate, Prabowo Subianto, relied heavily on generative AI, creating and promoting cartoonish avatars to rebrand himself as gemoy, which means 'cute and cuddly'. Not only did it appeal to younger voters, it also cast him in a new light, displacing allegations linking him to human-rights abuses during his stint as a high-ranking army officer.

Nature reported that candidates in Bangladesh and Pakistan used generative AI in their campaigns, including AI-written articles penned under the candidate's name, and that several elections in Asia have been flooded with deepfake videos of candidates speaking in numerous languages, singing nostalgic songs and more — humanizing them in a way that the candidates themselves couldn't in reality.

In the US, one of the most infamous cases involved text-to-speech and voice-emulation software from Eleven Labs, an AI company based in New York City, which was deployed to generate robocalls that tried to dissuade voters from voting for President Joe Biden in the New Hampshire primary elections in January. As a counter, the Federal Communications Commission has since banned the use of AI-generated voices in phone calls. Then there's a widely circulated fake audio clip of British Labour Party leader Keir Starmer admonishing his staff, made in an effort to cast him in a negative light.

In New York City, another fake audio clip generated plenty of controversy. A 10-second audio recording that surfaced in January of this year was jarring, featuring what sounded like Manhattan Democratic party boss and former state Assemblyman Keith Wright disparaging Harlem Assemblywoman Inez Dickens, reported NY1.com.

“Yeah, she’s not running. She’s done. I dug her grave, and she rolled into it,” the recording said. “Lazy, incompetent. If it wasn’t for her, I’d be in Congress.”

Local legislators are concerned and have been busy crafting legislation to counter this growing technological menace, which threatens to derail democratic norms of ethical poll battles.

Brooklyn Rep. Yvette Clarke has been trying to put guardrails on artificial intelligence for years, including measures targeting revenge porn, which especially impacts women of color, and requiring full disclosure on political ads, reported NY1.com.

“This is identity theft on steroids,” Queens Assemblyman Clyde Vanel was quoted as saying by NY1.com, in a separate story on the fake Wright audio clip. Vanel has been pushing a variety of AI-related legislation. In March, he introduced a bill to create a legal structure for artificial intelligence, robotics and automation, with criminal and civil liability for misuse of generative AI.

Earlier this year, another City Council bill, by Manhattan Councilwoman Julie Menin, aimed to criminalize the misuse of artificial intelligence and other technology to sway local elections, the New York Post reported.

Videos, mailers, audio recordings and other campaign material “intentionally manipulated to depict speech or conduct of a candidate” that never occurred would be prohibited under Menin’s bill. The ban would apply to any NYC election for mayor, public advocate, comptroller, borough president and the City Council.

It covers tech misuse during the final 60 days leading up to each election, which is when most ads are run and voters pay closer attention to races, the Post reported.

Although critics agree that plenty of good can also come out of generative AI, especially for underfunded campaigns, there is also a huge amount of skepticism and doubt about every move that is made.

Last year, reports said Mayor Eric Adams revealed the city had made AI-generated robocalls featuring his cloned voice speaking in foreign languages, including South Asian languages, with the intention of reaching underserved New Yorkers. The initiative was later scrapped after doubts were raised about the outsized influence it could have on voters.

Despite attempts to stem the flow of fake content at the federal and state level, a significant gap remains: no laws currently exist to punish offenders.

In an interview with India Overseas Report, Darrell West, Senior Fellow in Governance Studies at the Brookings Institution's Center for Technology Innovation and the Douglas Dillon Chair in Governance Studies, said that “disinformation is a threat to all marginalized communities. It is a vehicle to marshal negative narratives against any type of ‘out’ groups and reinforce unfavorable impressions about them.”

Dr. Darrell M. West, Senior Fellow – Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies, Brookings. Photo: Brookings

According to West, digital literacy programs are critical to teaching readers and voters how to discern the fake from the real, in the battle against racially motivated negative propaganda disseminated against South Asian-origin political candidates, among others, to try to disparage them.

“The best counter is digital literacy programs that train people on how to evaluate online information sources. We live in an era where lots of people and organizations are pushing fake material and consumers have to be able to distinguish more from less authoritative outlets,” West said.

Asked whether, by the time countermeasures take effect to tell the factual truth, the damage is already done and voter perceptions distorted – as social media platforms like Telegram, BitChute, Disqus and Truth Social disseminate preposterous propaganda with impunity and more and more Americans fall prey to it – West responded that “despite the explosion of online sources, people still have the means to assess information and push back against blatant propaganda.”

He added: “There are a number of examples of countries where foreign entities sought to manipulate elections but failed. Propaganda does not always work.”

Asked how AI is most effective in reaching out to voters, West said: “AI brings powerful tools to all organizations regardless of their funding levels. So, it can level the playing field and help all groups communicate with voters. AI can generate audio or video, which used to require lots of money.”

Sarah Shah, Senior Director of Strategic Communications and Partnerships at Indian American Impact (IAI) – a national advocacy organization that helps recruit and elect Indian American and South Asian-origin candidates to political positions across the US – said in an interview with India Overseas Report that misinformation and disinformation spread on social media is “absolutely a real threat for South Asian-origin candidates.”

Sarah Shah, Senior Director of Strategic Communications and Partnerships, Indian American Impact

“The high use of WhatsApp in our communities exacerbates the problem because mis/disinformation spreads largely unchecked on the platform,” Shah said.

“In the long term and on a macro level, we firmly believe that enhancing the representation of our community across various industries, sectors, and within the halls of power will significantly mitigate ignorance and combat racially motivated propaganda,” Shah said. “We advocate for legislation mandating Asian American and Pacific Islander (AAPI) curriculum instruction in our schools. Such education not only imparts a deeper understanding of our collective American history and culture but also diminishes the divisive ‘othering’ mindset.”

According to Shah, when Indian American and South Asian-origin candidates face false information or racially charged propaganda, IAI adopts a strategic approach that addresses the underlying intentions rather than solely reacting to the surface claims.

“For instance, when opponents label our candidates as ‘too radical,’ what they often mean is “they don’t care about you.” In response, we emphasize our candidates’ dedication to uplifting all communities and proven track record,” Shah said.

“Furthermore, we confront any racist or xenophobic undertones head-on, exposing the motivations and tactics of those perpetuating such divisive narratives to our audiences,” she added.

Shah explained that IAI has also launched an initiative called DesiFacts, through which the organization proactively releases culturally tailored explainers and toolkits in anticipation of major policy changes or events.

Four of the five Indian American members of the US Congress – Pramila Jayapal, Raja Krishnamoorthi (second from left), and Ro Khanna (second from right) with Ami Bera. Photo courtesy of Indian American Impact.

In 2021, Impact launched a robust, six-figure program to combat mis- and disinformation in South Asian digital spaces and launched DesiFacts, the first-of-its-kind fact-checking website for South Asian Americans, which contains explainers and accuracy ratings for viral claims and houses digital literacy toolkits and resources, all translated into multiple South Asian languages.

DesiFacts has also been connected to several other initiatives, including a WhatsApp chatbot and tipline. In partnership with Meedan, IAI launched a tipline on WhatsApp through which community members can forward something they see on WhatsApp and immediately get verification of its accuracy, receiving content from either desifacts.org or partners at factcheck.org or factly.in.

IAI also launched the WhatsTrue Crew, volunteer communities on WhatsApp that both share the misinformation they are seeing and receive culturally tailored content from the IAI team, which members can pass along as trusted messengers in their own communities.

The organization has also launched digital literacy trainings, an approach West is an ardent believer in. It has hosted numerous digital literacy trainings and policy briefings with partners across the country to empower the Indian American community to fact-check on its own.

“Our strategy involves preemptively addressing the narratives and misinformation likely to confront our community, empowering them to discern truth from falsehoods,” Shah said. “As the saying goes, ‘a lie can travel halfway around the world while the truth is still putting on its shoes.’”

“People remember what they hear first and most often, so we prioritize prebunking to thwart the spread of false narratives,” Shah added.

Vice President Kamala Harris (in center), with Congresswoman Pramila Jayapal (left). Photo courtesy of Indian American Impact

Asked to cite some instances in the recent past when Indian American and South Asian-origin candidates in general were affected by false social media narrative and AI generated propaganda, Shah said: “The war in Israel and Palestine has unleashed a wave of rampant Islamophobia, affecting many of our South Asian candidates and elected officials who have been unfairly labeled and targeted as Hamas-sympathizers. We’re also seeing this play out at the federal level where Judge Adeel Abdullah Mangi, a Pakistani American nominated by President Biden to serve on the U.S. Court of Appeals for the Third Circuit, has been subjected to an Islamophobic smear campaign and malicious line of questioning.”

Asked if new South Asian voters distinguish between right and left leaning media or social media posts influencing voters, Shah said: “Advances in deepfake and AI technology make it harder to detect what’s real and threaten to destabilize the 2024 elections. For South Asian voters, this problem is compounded by language barriers and the global reach of mis- and disinformation.”

Shah added: “Recently, the World Economic Forum released its Global Risks Report and said that India was ranked first as the country facing the highest risk of mis- and disinformation in the coming decade. This is important because much of the information that spreads over WhatsApp crosses borders and has global reach as individuals in the diaspora here use the platform as the primary means to engage overseas family members. As India also hosts its elections this spring, we’ve already seen how both parties have officially and unofficially created deepfakes to both tear down opponents and bolster their own candidates, and some of these videos have infiltrated WhatsApp groups of communities here.”

On the hot-button issue of immigration, which could sway the upcoming Presidential and other local elections in key states, and on the numerous aspersions of being outsiders cast at immigrant candidates or candidates with immigrant roots, Shah said: “This divisive ‘othering’ tactic poses a significant challenge for our candidates. In our strategic approach, we prioritize breaking down the binary that immigrants are inherently different or have different values. Instead, we highlight all the ways in which our candidates are ingrained in their communities and their tangible contributions to lift all people in their communities.

“For the candidates themselves, for whom these aspersions can be emotionally taxing, our most effective support tool is to offer a robust network of peers and fellow candidates. This network serves as a valuable resource for troubleshooting challenges and validating experiences, providing much-needed solidarity in the face of adversity.”

According to West, immigrants bring lots of strengths to the United States and people need to make those arguments.

“Half of Silicon Valley companies had an immigrant founder or co-founder. The story of American tech innovation is closely intertwined with immigration,” he opined.

When it comes to immigrant candidates, name distortion has always been a common tactic to disrespect Indian American candidates and try to 'alienate' them as 'foreigners', despite their stellar credentials and work in the community. This has happened to several candidates, including Niraj Antani and, most recently, Nikki Haley.

Shah said that fear often stems from unfamiliarity.

“When people encounter names or languages they haven’t previously encountered, there’s a tendency to retreat to ‘othering’ or associate them negatively. When someone has met a Kamala before, they are less likely to respond to name distortion and its effects because they have a picture of a Kamala they know in their heads. This understanding underscores the importance of increasing representation and exposure to South Asian culture within communities—a cornerstone of Impact’s founding mission,” Shah said.

“To directly counter, we find two strategies to be most effective. First is to understand what the person employing name distortion really means rather than what they’re saying. Recognizing that the aim is to create distance between the candidate and the voters they would represent, we focus on highlighting the candidate’s accomplishments and shared values,” Shah added.

“Within our own community, we’ve found direct and assertive responses particularly impactful. By openly calling out attempts to alienate our candidates as xenophobic or racist, we not only confront such behavior but also galvanize support. As an example, in 2021 during the Senate runoff in Georgia, David Perdue made fun of the Vice President-elect’s name and how to pronounce it during a speech to supporters. We created an ad calling him out that became a rallying cry among our community and other communities of color.”

Asked about the use of name distortion to try to 'distance' minority candidates from mainstream voters, West said: “candidates should directly confront name distortion as racism and push back hard on grounds that America is an immigrant nation and prejudice is unAmerican.”

IAI has collaborated closely with South Asian state legislative officials to advocate for the recognition of Diwali and Eid as state holidays nationwide, to make South Asian candidates more familiar to voters.

“Last year, our elected officials achieved significant milestones in New York and Texas, and this year, we continue our efforts in other states where more work remains to be done,” Shah said.

IAI will also host its annual Summit and Gala, ‘Desis Decide’, scheduled for May 15 and 16 in Washington, DC. The summit will feature informative sessions, including guidance on knowing one’s rights at the ballot box and strategies for safeguarding voting rights—crucial topics, particularly in districts with high immigrant populations where misinformation around elections is prevalent.

The event will also facilitate networking opportunities for state and local South Asian elected officials from across the country, allowing them to share experiences and best practices, she said.

“Building a supportive community among candidates and elected officials is integral to combating misinformation effectively. By fostering peer networks, our elected leaders and candidates will have a reliable support system to turn to when facing instances of misinformation, thereby empowering them to address such challenges with confidence and resilience,” Shah said.

IAI, however, does not have an active plan to use generative AI this election cycle to promote its activities or bolster the candidates it endorses, Shah said.

Chanel Martinez, Treasurer of New Yorkers for Shekar – the campaign arm of Shekar Krishnan, the NYC Council Member for District 25, which encompasses Jackson Heights, Elmhurst and Woodside in Queens, three of the most diverse immigrant communities in the world – said in an interview with India Overseas Report that the campaign has never been the target of any form of fake generative AI attack.

“We don’t engage with negative campaigning by an opponent,” Martinez said when asked how the campaign deals with negative and discriminatory campaigning by an opponent in polls.

(Sujeet Rajan is the Editor-in-Chief of India Overseas Report. This story was produced as part of the 2024 Elections Reporting Mentorship, organized by the Center for Community Media and funded by the NYC Mayor’s Office of Media and Entertainment.)

