NYC, LA, and DC are at high risk for GenAI-fueled civil unrest. Here’s what’s being done about it

Every technology introduces its fair share of rewards—and risks. Credit cards exacerbated financial fraud, cloud computing scaled up data security concerns, and biometrics intensified fears of surveillance.

Generative AI is no exception. The transformative technology opens the door to new and potentially more insidious threats. Chief among these are the proliferation of misinformation, the creation of convincing deepfakes, and “hallucinations”—content that appears realistic yet is entirely fabricated—all of which have the potential to fuel civil unrest and societal discord.

The stakes of GenAI’s threats seem particularly high in an election year and a climate of worsening political polarization. And no one is immune from the repercussions. While many public and private entities are working to combat AI-generated misinformation, business leaders need to stay vigilant and up to date on evolving risks. Only then will they be poised to harness GenAI’s transformative benefits while safeguarding themselves, their workforces, and, perhaps, the rest of society.

The biggest GenAI risk we have yet to face 

In the dystopian film Blade Runner, the Tyrell Corporation’s androids are marketed with the slogan “more human than human.” This sentiment eerily echoes the capabilities of GenAI, which can generate synthetic images, videos, voice recordings, and other content that is often indistinguishable from the real thing. While this capability is already being used for good, helping creatives, educators, and other professionals do their jobs more effectively, it also poses real concern.

The dissemination of targeted misinformation to incite violent unrest has been a growing problem for years, with bad actors already employing far cruder technologies than AI. The accessibility of sophisticated tools like ChatGPT, often at little or no cost, raises the specter of such campaigns operating at far greater scale and with far greater realism.

Fraudulent schemes are easier to pull off and more realistic than ever. Consider the Federal Trade Commission (FTC) warning from March 2023 about scammers cloning people’s voices: all a scammer needs is a three-second audio clip of someone speaking to produce a convincing clone.

Moreover, we find ourselves amid escalating civil unrest. According to Verisk Maplecroft’s global Strikes, Riots, Civil Commotion (SRCC) model, major U.S. cities such as New York, Los Angeles, and Washington, D.C., are at high risk of unrest this year. These cities aren’t only wellsprings for social movements—they are also some of the biggest business, financial, and political hubs in the world.

The impact of civil unrest or other public panic can be severe, taking the form of looting, smear campaigns against businesses, boycotts, or even a stock market collapse—all of which carry steep economic costs. For example, the looting and rioting that followed George Floyd’s death resulted in hundreds of millions of dollars in property damage for business owners. Or consider the 48-hour collapse of Silicon Valley Bank, fueled by fleeing depositors consuming misinformation spread via social media.

Though GenAI wasn’t responsible for these situations, it’s not hard to see how it could amplify or incite similar incidents. One can easily imagine fake content designed to incite outrage, pushed through targeted social media campaigns, fueling distrust in elected officials, notable figures, business leaders, and enterprises. The ultimate result could be collective public action that causes reputational, operational, or financial damage.

Everyone can combat misinformation 

Addressing the risks posed by AI-generated misinformation is complex. Even the Department of Homeland Security has conceded that “there is no single or universal solution” to the problem of deepfakes, emphasizing that a combination of technological innovation, education, and regulation is essential for effective detection and mitigation.

Even before GenAI’s emergence, companies big and small were developing tools to help detect and even remove misinformation, often leveraging AI. However, the effectiveness and scope of these interventions remain limited.

More needs to be done to examine how the algorithms that power social media platforms perpetuate false narratives by systematically resurfacing viral posts, even when they contain misinformation or other extremist content like hate speech. And as companies like Meta integrate new GenAI content-generation capabilities into their platforms, slowing the spread of misinformation will only become more challenging.

In response, both the public and private sectors, including developers at OpenAI, are exploring how to guard against AI’s risks. This includes academic and tech-industry initiatives focused on educating students and workforces to identify and address misinformation. Additionally, legislative actions, such as a White House executive order and the U.K.’s vision for ethical AI, are laying the groundwork for increased regulation.

As efforts to safeguard the public against AI continue, businesses can’t just sit back and wait for further instructions. They must take immediate action to help protect themselves and their workforces. Even small steps, like educating employees on GenAI’s risks or encouraging them to take a zero-trust approach to communications, can help reduce the technology’s potential effects on businesses and society. 

Approach with a healthy dose of skepticism 

Despite the risks, GenAI holds a wealth of potential. It’s a powerful tool that can drive business growth and innovation and improve our work and personal lives. However, given the gravity of its potential threats—including the potential to fuel civil unrest—we must proceed with caution.

As businesses plan for the future, it’s important to stay abreast of AI-related developments—from emerging scams to legislative rulings to positive innovations. By doing so, we can all work to create a more informed public consciousness and a world where AI does more good than harm.  