
Sounding the Alarm: Generative AI, A Weapon of Influence

Image created by Microsoft Copilot


By Lieutenant Colonel Daniel Botero


Generative AI is quickly transforming society, and just as quickly creating problems. Much attention has been given to the trouble Generative AI is causing in civilian society through its ability to create new content (e.g., text, video, audio) from existing data, content that can infringe on copyright protections or even mimic real footage to spread disinformation. Unfortunately, our military preparation is lagging, and Civil Affairs (CA) is no exception. Quantitative data is not yet available, because rapid innovation in microchip technology has only recently driven an explosion in the practicality and accessibility of Generative AI. In my own interactions with leaders across the Civil Affairs enterprise, though, there appears to be little understanding of how this new technology affects our support to lethality on the battlefield.[1] Generative AI enables our adversaries to weaponize disinformation like never before. Moreover, responsibility for the problem does not rest solely with psychological operations (PSYOP), information operations (IO), or any one job field: it requires a unified approach. CA must be ready for the new digital landscape in order to counter adversarial disinformation and maintain influence over the civil domain. Having identified the AI issue now, CA must treat it as an opportunity and incorporate it into training, or risk becoming the weak link on an ever-changing battlefield that is already integrating AI into the fight.

 

Introduction

CA forces operate during all phases of military activities, whether conducting Large Scale Combat Operations (LSCO), deterring competitors, or shaping an environment in support of strategic objectives. The cultural and demographic expertise of CA allows commanders to make informed decisions and drives relationship-building with civilian stakeholders, which in turn enables influence over the civil domain. Civil Network Development and Engagement (CNDE) enables commanders to implement actions to “consolidate gains and create multiple dilemmas for an enemy force attempting to act and maneuver through that area.”[2] CA forces engage civil networks by targeting centers of gravity (COG) through civil engagements and civil reconnaissance.

      Changes in the information landscape, driven in large part by developments in AI, have rendered in-person engagements inadequate on their own. CA, as a branch, must integrate the tactics, techniques, and procedures (TTPs) that will enable it to reach the civil populace faster and more effectively than our adversaries in this new environment. Digital disinformation has already become a powerful weapon. The proliferation of AI tools, along with the development of new, inexpensive satellite internet services (e.g., Starlink, Eutelsat OneWeb, and China’s Qianfan, Guo Wang, and Honghu-3[3]), is amplifying these weapons at a dizzying pace, widening the scope of the problem. Understanding and addressing this deficiency will ensure the success and continued effectiveness of CA in stability operations and LSCO. Digital networks transcend social, economic, and cultural boundaries, and CA operations (CAO) must recognize that COGs are increasingly defined less by the physical actors we traditionally train to engage and more by these digital forces.

       Consider an obvious example: during LSCO, CA is tasked to influence a civil population to shelter in place to maximize the commander’s freedom of maneuver. The CA forces leverage relationships with trusted local power brokers to support the campaign. The enemy, however, observes friendly efforts and circumvents them with lightning speed. Using Generative AI, it instantly produces false information assets branded as coming from the US, with a message persuading civilians to evacuate into the US avenues of approach. It delivers these assets to the populace through social media and group messaging applications, using accounts previously created and maintained with AI tools. Our influence and lethality are degraded: our local allies are helpless to stop disinformation spawned at a pace they have never seen, and some are even taken in by the ruse themselves. The information battle is over, and US units are stymied by fleeing civilians, causing the maneuver commander to lose the initiative. Despite careful planning, the CA mission has failed due to unpreparedness for the enemy’s swift employment of AI-generated content.


The Russia – Ukraine Conflict: A Testing Ground for Generative AI in Warfare

       The problems we face are not speculative; they are a reality in current conflicts. A decade ago, Russia used information warfare to help undermine Ukrainian sovereignty and rapidly annex Crimea. Russia’s attacks on digital information infrastructure confused and disoriented the Ukrainian population and provided a telling example of the power of disinformation.[4] In the aftermath, Ukraine undertook a scientific review of Russia’s information strategy.[5] It recognized that disinformation moves too quickly across networks to be countered centrally and requires distributed, coordinated responses. Ukraine teamed with NATO and implemented a national strategic communications framework that coordinated efforts between government agencies and non-state actors to create a culture in which information communications across the civil domain supported the fight against Russian disinformation.

       While these efforts initially yielded positive results, they are now being tested by attacks more numerous and effective than those of a decade ago. Russia’s information warfare is now being enhanced with Generative AI to disrupt support for the defense of Ukraine.[6] A torrent of instantly generated articles and social media posts, complete with corresponding imagery, has undermined Ukraine’s external partnerships and threatened to create disunity within the country itself.[7]

       Russia is not the only major power to recognize Generative AI as a weapon of instability. China has released multiple versions of its own tools[8] and has been accused of using them in its “grey zone” tactics against Taiwan.[9] In fact, the number of tools available for free or a nominal fee has effectively made AI a weapon open to any potential adversary.[10] The impact of deepfake videos and images in ordinary civic environments around the world is already undeniable,[11][12] having led to vigilantism and murder.[13] In a chaotic environment, such as an area in conflict, they can increase instability even further. Generative AI can disrupt global markets and sway national elections. This demands a whole-of-government approach to AI deterrence; CA must not be left behind.

“Propaganda, deception, disinformation, misinformation, and the ability of individuals and groups to influence populations through technologies reflect the increasing speed of interaction.”[14] The techniques have always existed; now, through Generative AI, the weapon is accessible at scale to any adversary. Today, a single person can mass-produce “fake news,” complete with corresponding fake media assets. Credibility and legitimacy cannot be maintained by ignoring the problem. Nor is this only a messaging challenge for PSYOP or IO to manage; it affects all information-related capabilities (IRC). Much as a ground combat unit requesting close air support (CAS) requires prior coordination and training, any response to disinformation must be planned to ensure cross-functional synchronization and timeliness. And just as CAS requires proficient contacts on the ground, the close relationships that CA forces cultivate with the local government and population place them in a position to initiate counter-responses to disinformation in coordination with other IRCs.

 

Impact of the Information Age on Civil Affairs Operations

To counter new threats, we first must understand them. Generative AI is “a particular approach to AI that uses large amounts of data to make predictions, typically about what things humans will do in some context, like what words someone might type at the end of the sentence, given the first several words.”[15] Large Language Models (LLMs) are the most familiar type of Generative AI, represented by popular products such as Microsoft Copilot, Google Gemini, and ChatGPT. These tools can mimic human behavior in a way that is largely indistinguishable from human actors. Their power, ease of use, low cost, and wide availability make them incredibly effective tools for disinformation, and while they have shortcomings, they are quickly becoming more potent. There is a need to counter the ill effects of Generative AI, but many experts focus on legislative or regulatory solutions that are inapplicable to the military environment.[16] It is precisely these kinds of rapid changes that the US military has been slow to react to in the past.
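The "prediction" at the heart of this definition can be made concrete with a toy sketch. What follows is an illustrative, greatly simplified next-word predictor built from raw word-pair counts; production LLMs use neural networks trained on vastly larger corpora, but the underlying task, guessing the next word from the preceding ones, is the same. The tiny corpus and function name here are hypothetical examples for illustration only.

```python
from collections import Counter, defaultdict

# A toy training corpus; real models train on trillions of words.
corpus = ("the enemy moved north . the enemy moved south . "
          "the population moved north").split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("enemy"))  # "moved" follows "enemy" in every example
print(predict_next("moved"))  # "north" (seen twice) beats "south" (once)
```

The point of the sketch is the mechanism: the model has no understanding of truth, only of statistical patterns in its training data, which is precisely why such systems can generate fluent, plausible falsehoods at scale.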

Successful CA operations require credibility and legitimacy. These hard-won resources are highly fragile in the face of disinformation that can cause local populations to question US actions.[17] Disinformation attacks need to be “right” only once to cause a disastrous effect. Like a drone or missile attack, one successful hit against a robust defense can be catastrophic. The asymmetry is, in fact, even more pronounced in information operations: it takes only one viral deepfake among thousands of attempts to cause indelible harm. The US has traditionally cultivated credibility by controlling information, but this approach by itself is now too slow. IRCs must have TTPs that establish trust with their commanders and allow them to act at Internet speed and scale. A successful in-person engagement can typically influence one network; digital engagements can affect entire networks in a fraction of the time. Furthermore, access to information through the Internet has fueled a general mistrust of traditional sources of information, requiring CA to leverage a wider variety of methods to inform and influence the civil population.[18][19] CA capabilities in CNDE and relationship-building are force multipliers in the digital landscape. Specifically, engaging digital influencers enables access to networks or COGs more quickly than ever before. While digital engagements may lack the depth of legacy methods, their speed and reach can compensate in an ever-changing environment.

The Internet has disrupted social hierarchies, and influence is now largely cultivated through a digital presence rather than solely through positional authority. The algorithmic nature of social media, video sharing services (e.g., YouTube), and other information sharing tools reduces the effectiveness of centralized information and increases the need to rely on networks of influential accounts to rapidly inform and influence the civilian populace. Whereas a PSYOP digital presence is focused on messaging, a CA digital presence can mean building relationships with online influencers who bring their preexisting networks and credibility. CA's cultural and social expertise, combined with its relationships with NGOs, government agencies, and foreign military allies, provides opportunities to assist other IRCs in establishing digital personas and proactively building their own networks of influence among the civil populace, which can then be used to counter adversarial disinformation.

 

Recommendation

The most important first step is for our CA forces to learn how to access and employ readily available resources. Information operations are “the integrated employment, during military operations, of information-related capabilities in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision-making of adversaries and potential adversaries while protecting our own.”[20] IO synchronizes our IRCs through an IO Working Group (IOWG). Unfortunately, these IOWGs are not convened frequently enough in the Army. The CA community must deliberately train in conjunction with each of the IRCs (e.g., PSYOP, IO, Public Affairs) to share tools that enhance presence in, and monitoring of, the digital world and to develop ways to maximize their effects. The Army’s recent creation of Talent Acquisition Specialists is another potential resource. These Soldiers are taught “to leverage technology, social media, artificial intelligence, and other tools to connect with potential recruits.”[21] This expertise can help CA forces navigate and incorporate their own activity in the digital world and better understand how to reach any civilian population in future operations. Other specialists, such as cognitive and social psychologists, can provide quantitative data on how social groups react to disinformation to support the development of coordinated responses.

In the context of the Army Transformation Initiative and the impending establishment of an Information Warfare (IWAR) specialty, the time to act is now.[22] Because it oversees multiple IRCs, I propose that the US Army Special Operations Center of Excellence establish a cross-functional operational planning team (OPT) focused on future adversarial use of Generative AI in disinformation and lead this initiative for the US Army. The OPT must include representation from public affairs, the staff judge advocate, and other specialized cognitive capabilities to develop a better understanding of the problems Generative AI poses in the information environment and to identify new TTPs to revise doctrine before adversaries can use the technology to supplant our influence. Experimentation in collective exercises alongside IWAR unit development provides an opportunity to drive this initiative. Moreover, I would encourage liaising with others in the US government focused on this issue.[23] With time, a framework can be developed to incorporate this problem set into training and create a shared foundational understanding that ensures CA officers are preparing for these challenges.

The race to employ Generative AI on the battlefield is underway, and these steps are only the beginning. It will take the whole CA community to find innovative ways to counter our adversaries’ use of AI and maintain influence in the face of disinformation. Refusing to acknowledge the impact of these changes in the digital world will reduce the effectiveness of CA forces, impede our ability to support the lethality of US forces, and leave the CA community irrelevant in future operations.

 

About the Author:

 

Lieutenant Colonel Dan Botero is a Civil Affairs officer currently assigned to the United States Army Reserve Command. Dan was commissioned as an Infantry officer on Active Duty in 2009, where he served with 2nd Battalion, 27th Infantry Regiment and deployed once to Afghanistan. In 2014, he transitioned to the Army Reserve and became a Civil Affairs officer, and he has served in the Active Guard Reserve (AGR) program since 2016. Dan lives in Cameron, NC, with his wife and family. He spends his free time corralling his young children, along with his three dogs and three cats. 


The views and opinions expressed in this article are those of the author and do not reflect any official policy or position of the U.S. Army, the Department of Defense, or any other U.S. government agency.


Endnotes

[1] Slijkerman, Jan Frederik. 2024. “AI Revolution Driven by New Supercomputers.” ING Think. April 24, 2024. https://think.ing.com/articles/ai-a-revolution-driven-by-new-supercomputers/.

[2] United States. 2021. Field Manual FM 3-57, Civil Affairs Operations, July 2021, 2-10.

[3] Petrova, Magdalena. 2024. “How China’s Satellite Megaprojects Are Challenging Elon Musk’s Starlink.” CNBC. December 15, 2024. https://www.cnbc.com/2024/12/15/chinas-satellite-megaprojects-are-challenging-elon-musks-starlink.html.

[4] Schrijver, Peter. 2023. “Ukraine’s Fight on the Front Lines of the Information Environment.” Modern War Institute. September 12, 2023. https://mwi.westpoint.edu/ukraines-fight-on-the-front-lines-of-the-information-environment/.

[5] Grisé, Michelle, Alyssa Demus, Yuliya Shokh, Marta Kepe, Jonathan W. Welburn, and Khrystyna Holynska. 2022. Rivalry in the Information Sphere: Russian Conceptions of Information Confrontation. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA198-8.html.

[6] Reuters. 2024. “Russia Using Generative AI to Ramp up Disinformation, Says Ukraine Minister.” Reuters. October 16, 2024. https://www.reuters.com/technology/artificial-intelligence/russia-using-generative-ai-ramp-up-disinformation-says-ukraine-minister-2024-10-16/.

[7] EUvsDisinfo. 2025. “How Russia Uses AI to Dehumanise Ukrainians - EUvsDisinfo.” EUvsDisinfo. February 7, 2025. https://euvsdisinfo.eu/how-russia-uses-ai-to-dehumanise-ukrainians/.

[8] Ng, Kelly, Brandon Drenon, Tom Gerken, and Marc Cieslak. 2025. “What Is DeepSeek - and Why Is Everyone Talking about It?” BBC, January 27, 2025. https://www.bbc.com/news/articles/c5yv5976z9po.

[9] Reuters Staff. 2025. “Taiwan Says China Using Generative AI to Ramp up Disinformation and ‘Divide’ the Island.” Reuters, April 8, 2025. https://www.reuters.com/world/asia-pacific/taiwan-says-china-using-generative-ai-ramp-up-disinformation-divide-island-2025-04-08/.

[10] Ryan-Mosley, Tate. 2023. “How Generative AI Is Boosting the Spread of Disinformation and Propaganda.” MIT Technology Review. October 4, 2023. https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/.

[11] Funk, Allie, Adrian Shahbaz, and Kian Vesteinsson. 2023. “The Repressive Power of Artificial Intelligence.” Freedom House. 2023. https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence.

[12] Reuters Fact Check. 2023. “Video Does Not Show Joe Biden Making Transphobic Remarks.” Reuters, February 10, 2023, sec. Fact Check. https://www.reuters.com/article/factcheck-biden-transphobic-remarks/fact-check-video-does-not-show-joe-biden-making-transphobic-remarks-idUSL1N34Q1IW/.

[13] Hatmaker, Taylor. 2018. “WhatsApp Now Marks Forwarded Messages to Curb the Spread of Deadly Misinformation | TechCrunch.” TechCrunch. July 10, 2018. https://techcrunch.com/2018/07/10/whatsapp-forwarded-messages-india/.

[14] United States. 2021. Field Manual FM 3-57, Civil Affairs Operations, July 2021, V.

[15] Marcus, Gary F. 2024. Taming Silicon Valley. MIT Press.

[16] Ibid.

[17] Note the motto of Russia’s state-controlled news network, RT, which has a documented history of supporting external influence operations: “Question More.”

[18] Brenan, Megan. 2024. “Americans’ Trust in Media Remains at Trend Low.” Gallup. October 14, 2024. https://news.gallup.com/poll/651977/americans-trust-media-remains-trend-low.aspx.

[19] Brenan, Megan. 2024. “U.S. Confidence in Institutions Mostly Flat, but Police Up.” Gallup. July 15, 2024. https://news.gallup.com/poll/647303/confidence-institutions-mostly-flat-police.aspx.

[20] United States. 2018. Army Techniques Publication 3-13.1, The Conduct of Information Operations, October 2018.

[21] “Army Launches New Training Program for Talent Acquisition Technicians.” 2024. Www.army.mil. May 21, 2024. https://www.army.mil/article/276468/army_launches_new_training_program_for_talent_acquisition_technicians.

[22] Bryant, W. 2025. “Transforming and Modernizing Army Information Forces: Creating the Information Warfare Branch.” Small Wars Journal. December 20, 2025. https://smallwarsjournal.com/2025/12/16/transforming-and-modernizing/.

[23] Marcellino, W., J. Welch, B. Clayton, S. Webber, and T. Goode. 2025. Acquiring Generative Artificial Intelligence for U.S. Department of Defense Influence Activities. RAND Corporation. July 24, 2025. https://www.rand.org/pubs/research_briefs/RBA3157-1.html.
