In March 2023, images of police officers chasing and handcuffing former President Donald Trump were disseminated on social media. [1] Of course, Trump was not arrested in such a violent manner, and the questionable images were rife with blurry and featureless faces, arms and legs inexplicably emerging from bodies, and hands with more than five fingers. The media came to the somewhat obvious conclusion that the images were generated by artificial intelligence (AI), and the stunt duped very few netizens.
This incident, however, was one of the first in what is likely to become an increasingly difficult problem with AI disinformation. Despite the amusing nature of the photos, a handful of tech reform advocates and AI developers sounded the alarm about the potential dangers of AI-generated content. Some hypothesized AI-generated content would become indistinguishable from human-made content if AI development were left unfettered. Speaking to the BBC, Mounir Ibrahim of Truepic, a digital content analysis company, commented that “synthetic content is evolving at a rapid rate and the gap between authentic and fake content is becoming more difficult to decipher.” [2]
In January 2024, Ibrahim’s prediction manifested. Ahead of the now-terminated Biden-Harris ticket for the 2024 Democratic primary in New Hampshire, an AI-generated robocall of President Biden was delivered to New Hampshire Democrats urging them not to vote. [3] “Your vote makes a difference in November, not this Tuesday,” Biden’s AI voice said. The voice is a near-perfect imitation, with the only thing distinguishing it from Joe Biden himself being the nonsensicality of the robocall. Similarly, an AI-generated audio deepfake of Kamala Harris was spread online in May 2023, featuring a seemingly inebriated Harris slurring an incoherent speech. [4] “Today is today, and yesterday was today yesterday. Tomorrow will be today, tomorrow,” the AI-generated Harris declared.
These examples of malicious AI-generated content demonstrate that the United States holds a stake in the ethical development of its AI, and that the lack of reliable AI detection threatens the national security, politics, and liberal-democratic values of the United States. This is only exacerbated in the face of a particularly contentious 2024 election. What is needed now is the creation of a new federal executive department to ensure the control and regulation of AI.
Critics would note that the expansion of the federal government contravenes the classic American creed of small government. However, this expansion would be made to protect the sanctity of our elections and bolster American national security. According to Brookings, AI can easily generate fake news and hamper information ecosystems; for example, in February 2023, an AI-generated audio deepfake of Elizabeth Warren showed the senator purportedly saying Republicans should not be allowed to vote in the 2024 elections. [5] [6] Also in February 2023, an AI-generated audio of Chicago mayoral candidate Paul Vallas was spread online, featuring the tough-on-crime candidate wildly claiming that “back in my day, cops would kill, say, 17 or 18 civilians in their career, and nobody would bat an eye.” [7]
The use of AI will only facilitate the creation of misinformation and disinformation, especially due to its low cost and ability to work with minimal human input. The Brookings article also found that AI could easily let autocratic regimes and malicious non-state actors generate fake news ahead of elections. [8]
Unfortunately, such a claim is no longer hypothetical. Actors from adversarial states have created and distributed AI-generated content, mainly using American AI technology to disseminate propaganda. A report from OpenAI on May 30, 2024 found actors originating from Russia, China, and Iran to be using OpenAI’s platforms to create AI-generated content aimed at influencing public opinion online. [9] Topics included American and European politics, the 2024 Indian general election, disparagement of those who criticized the Chinese government, the Israel-Hamas war, and the Russo-Ukrainian War. Some AI models were used to praise Putin’s war in Ukraine, while others used fictional personas to post on social media, for example, pseudonymously posting as a “57-year-old Jew named Ethan Bernstein.” [10] A Federal Bureau of Investigation (FBI) analysis found one AI operation praising the Chinese government to purportedly be linked to China’s Ministry of Public Security. [11]
Other stripes of AI technology have been used, such as a deepfake video of President Zelenskyy ordering Ukrainian soldiers to cease fighting. [12] Similarly, a deepfake video purportedly from Russian propaganda sources shows President Zelenskyy’s wife buying a €4.5 million Bugatti sports car. [13] This kind of content – a dangerous and unprecedented blend of cyberwarfare, information warfare, and psychological warfare – obviously threatens American interests. Geoffrey Hinton, a former Google executive sometimes dubbed “the Godfather of AI”, noted that autocratic regimes could easily exploit AI chatbots: “it’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.” [14]
Similarly, Ruhr University professor Thorsten Holz found that most internet users could no longer distinguish AI-generated images from human-made images in a study from the CISPA Helmholtz Center for Information Security last May. [15] Holz, the lead researcher on the subject, noted that “we have important elections coming up this year, such as the elections to the EU Parliament or the presidential election in the USA. AI-generated media can be used very easily to influence political opinion. I see this as a major threat to our democracy.” [16]
What is perhaps most worrying about geopolitically-contentious AI content is that the identity and motives of its creators often cannot be ascertained definitively. While OpenAI’s report found content that originated from Iran, Russia, and China, the exact creators cannot be determined for certain. A government, non-state actor, online activist, or simple internet troll could have generated the content. This has enormous implications for the international system, as any individual now has the power to influence public opinion to a degree unprecedented in human history. This also creates a dilemma for diplomats: disinformation and misinformation that originates from an adversarial country cannot be definitively pinned to the government of said country, thus undermining the receiving country’s ability to properly respond.
AI companies cannot be trusted to regulate themselves or ensure their content will not be used for malicious purposes. For example, in January 2024, OpenAI quietly edited its old usage policies by replacing the phrases forbidding “weapons development” and “military and warfare” with a blanket statement forbidding the use of OpenAI’s “to harm yourself or others.” [17] [18] Current attempts to monitor, control, and otherwise track AI have proved decentralized and disorganized, scattered across different Congressional committees, interest groups, and politically impotent offices in the Executive branch.
A change must be made to handle such an unprecedented issue. To do this, a new federal Department of Technology should be created. A Department of Technology would facilitate red-teaming, the act of purposefully trying to break, manipulate, and poke holes into the security of an organization. Red-teaming has already exposed errors in chatbots at hacker conventions like DEF CON, such as providing factually incorrect information under certain circumstances. [19]
Congressional testimony facilitated by information sharing with the Department of Technology would both allow greater transparency between AI developers and consumers and restore public trust in Congress’s ability to regulate and properly understand technology and digital media. The Legislative branch has historically struggled with maintaining public confidence in this field, due to well-publicized gaffes like Senator Orrin Hatch’s baffling question to Mark Zuckerberg: “How do you sustain a business model in which users don’t pay for your service?” [20]
The Department of Technology would centralize attempts to monitor and flag malicious AI content across the Departments of Justice, Defense, and Homeland Security, reducing bureaucratic lag and centralizing different attempts to flag malicious AI. To increase public trust between the Department of Technology and the general populace, a strong media presence would prove important as well, through press statements similar ones released by the Department of Defense, the Department of Justice, and the Department of Homeland Security. AI development must only continue under strict precautions and transparent policies, and the ethical development of AI – ensuring consumer privacy and security of user data, eliminating bias in chatbots, and reducing the risk that AI generates malicious results in text or image – will become paramount to U.S. national security.
Zachary Kwon ’27 is a rising sophomore at Northeastern University studying Political Science and Economics.
__
NOTES:
[1] Kayleen Devlin and Joshua Cheetham, “Fake Trump Arrest Photos: How to Spot an AI-Generated Image,” BBC News, March 24, 2023, https://www.bbc.com/news/world-us-canada-65069316
[2] Devlin and Cheetham, “Fake Trump Arrest Photos: How to Spot an AI-Generated Image.”
[3] Jeongyoon Han, “New Hampshire is investigating a robocall that was made to sound like Biden,” NPR, January 22, 2024, https://www.npr.org/2024/01/22/1226129926/nh-primary-biden-ai-robocall
[4] Gabrielle Settles, “Video shows Kamala Harris talking nonsensically about today, tomorrow and yesterday,” Politifact, May 5, 2023, https://www.politifact.com/factchecks/2023/may/05/facebook-posts/kamala-harris-wasnt-slurring-about-today-yesterday/
[5] Norman Eisen, et al., “AI Can Strengthen U.S. Democracy—and Weaken It,” Brookings, November 21, 2023, https://www.brookings.edu/articles/ai-can-strengthen-u-s-democracy-and-weaken-it/.
[6] Aleks Phillips, “Deepfake Video Shows Elizabeth Warren Saying Republicans Shouldn’t Vote,” Newsweek, February 27, 2023, https://www.newsweek.com/elizabeth-warren-msnbc-republicans-vote-deep-fake-video-1784117
[7] CNN, “This deepfake surfaced in a tight mayoral race. It’s just the beginning,” CNN, https://www.cnn.com/videos/politics/2024/02/07/deepfake-artificial-intelligence-elections-chicago-paul-vallas-orig.cnn.
[8] Eisen, et al., “AI Can Strengthen U.S. Democracy—and Weaken It.”
[9] OpenAI, “May 2024 AI and Covert Influence Operations: Latest Trends,” OpenAI, https://downloads.ctfassets.net/kftzwdyauwt9/5IMxzTmUclSOAcWUXbkVrK/3cfab518e6b10789ab8843bcca18b633/Threat_Intel_Report.pdf.
[10] OpenAI, “May 2024 AI and Covert Influence Operations: Latest Trends.”
[11] U.S. Department of Justice, “COMPLAINT AND AFFIDAVIT IN SUPPORT OF APPLICATION FOR ARREST WARRANTS,” U.S. Department of Justice, April 6, 2023, https://www.justice.gov/d9/2023-04/squad_912_-_23-mj-0334_redacted_complaint_signed.pdf.
[12] Atlantic Council, “Russian War Report: Hacked news program and deepfake video spread false Zelenskyy claims,” Atlantic Council, March 16, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-hacked-news-program-and-deepfake-video-spread-false-zelenskyy-claims/.
[13] Arpan Rai, “Olena Zelenska falls victim to deepfake video claiming she bought a Bugatti,” The Independent, July 3, 2024, https://www.independent.co.uk/news/world/europe/olena-zelenska-ukraine-deepfake-video-bugatti-b2573101.html.
[14] Josh Taylor and Alex Hern, “‘Godfather of AI’ Geoffrey Hinton Quits Google and Warns over Dangers of Misinformation,” The Guardian, May 2, 2023, https://www.theguardian.com/technology/2023/may/02/ geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning.
[15] Holz, et al., “A Representative Study on Human Detection of Artificially Generated Media across Countries,” https://arxiv.org/pdf/2312.05976.
[16] Felix Koltermann, “New Results in AI Research: Humans Barely Able to Recognize AI-Generated Media,” CISPA Helmholtz Center for Information Security. May 21, 2024. https://cispa.de/en/holz-ai-generated-media.
[17] OpenAI, “Usage policies,” OpenAI, March 23, 2023, https://web.archive.org/web/20240109122522/ https:/openai.com/policies/usage-policies.
[18] OpenAI, “Usage policies,” OpenAI, January 10, 2024, https://openai.com/policies/usage-policies/.
[19] Rishi Iyengar, “What an Effort to Hack Chatbots Says About AI Safety,” Foreign Policy, April 3, 2024, https://foreignpolicy.com/2024/04/03/def-con-31-ai-safety-red-teaming-hack-chatbot-safety/.
[20] Emily Stewart, “Lawmakers seem confused about what Facebook does — and how to fix it,” Vox, April 10, 2018, https://www.vox.com/policy-and-politics/2018/4/10/17222062/mark-zuckerberg-testimony-graham-facebook-regulations.