Public relations, propaganda, and psychological operations conducted through AI-powered methods are now defined as modern warfare

Warfare has expanded onto social media platforms, where AI technology powers information operations. Gersman, a leading scientist at NATO’s Strategic Communications Centre of Excellence, addressed this at a recent conference, emphasizing that AI-based information warfare and cognitive manipulation have generated new threats. In recent years, public relations, propaganda, and psychological operations conducted through AI-powered methods have come to be defined as modern warfare.

Unlike past conflicts fought on physical battlefields, warfare has progressively shifted toward weaponized AI that clouds public perception, spreads propaganda, and shapes an entire society’s psychological state. While Western nations focus on the ethics and policy of AI, hostile states operate without such constraints, making them formidable competitors in the information battle. AI technologies have amplified some people’s voices while burying others.

These AI-driven communication operations fall squarely within the category of information warfare, and they ultimately boost the volume of disinformation scattered across the web. Generative AI tools are now freely available online: all that is needed is a prompt, and misleading content can be built in the blink of an eye.

AI warfare promises better decisions, higher accuracy, fewer human casualties, and lower costs through automated processes

AI is about to upend modern warfare, changing military strategies, operations, and global defence as we know it. AI-based autonomous weapons systems such as drones and robotic vehicles can identify and attack targets without direct human input, improving accuracy while simultaneously creating ethical dilemmas. To improve intelligence, surveillance, and reconnaissance (ISR), AI analyzes thousands of data feeds from satellites, drones, and social media in real time, enabling agile decision-making in the military sector.

In cyber warfare, AI improves both offensive and defensive capabilities: it can detect cyber-attacks on one side and enable hacking or data manipulation on the other. AI also optimizes military logistics, automating inventory control and streamlining resource allocation for maximum efficiency, and it enhances soldier training through highly realistic simulated environments. AI warfare promises better decisions, higher accuracy, fewer human casualties, and lower costs through automated processes. Unfortunately, this is where ethical and legal debates come into play, especially regarding who is accountable for the outcome of AI-based decisions, the extent to which international law should be adhered to, and the risk that an AI arms race will intensify global conflicts.
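As one concrete illustration of the defensive side, the sketch below shows the kind of statistical anomaly flagging that underpins many AI-assisted intrusion-detection systems. The host names, baseline rates, and threshold are hypothetical, and production systems use far richer models than a simple z-score.

```python
# A minimal sketch of anomaly-based attack detection: flag hosts whose
# current event rate deviates sharply from their historical baseline.
# All data and the threshold below are illustrative assumptions.
import statistics

history = {  # hypothetical events-per-minute baselines per host
    "host_a": [12, 15, 11, 14, 13, 12, 16],
    "host_b": [40, 38, 42, 41, 39, 40, 43],
}
current = {"host_a": 13, "host_b": 310}  # host_b is spiking

Z_THRESHOLD = 4.0  # assumed sensitivity cutoff

for host, baseline in history.items():
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # avoid division by zero
    z = (current[host] - mean) / stdev
    if abs(z) > Z_THRESHOLD:
        print(f"ALERT: {host} event rate {current[host]} looks anomalous (z={z:.1f})")
```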

With advancements in AI technology, fully autonomous combat systems combined with AI-assisted strategic tools could further change military operations. The difficulty is balancing innovation and regulation so that AI is used responsibly rather than recklessly weaponized in war. Countries need to cooperate on common standards, norms, and safeguards to avoid an AI arms race that drags the world into instability, while still leveraging the power of AI to improve security.

Social media manipulation peaked in Ukraine, where bot campaigns led people to assume that new laws would be enforced more strictly

An example is the exploitation of bots and trolls: fake users that fabricate and post thousands of comments, likes, and shares to trick social media algorithms. These automated accounts create the illusion that a particular point of view has many supporters, shaping public opinion. The platforms have clearly become battlegrounds in a competition for influence. Research by NATO’s Strategic Communications Centre shows how effortless it is to bypass the platforms’ verification of followers and likes. These elaborate networks project an inflated picture of support for a piece of propaganda while its actual effects go largely unexamined.
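To make the pattern concrete, the sketch below shows one simple heuristic for surfacing this kind of coordinated amplification: flagging groups of accounts that post near-identical text within seconds of one another. The sample data, time window, and cluster size are illustrative assumptions, not a description of any platform’s actual detector.

```python
# A minimal sketch of one coordinated-amplification heuristic: cluster posts
# by identical text and flag clusters spanning many accounts in a short window.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    # (account, text, timestamp) -- hypothetical sample data
    ("user_a", "Candidate X is surging in the polls!", datetime(2024, 5, 1, 12, 0, 5)),
    ("user_b", "Candidate X is surging in the polls!", datetime(2024, 5, 1, 12, 0, 9)),
    ("user_c", "Candidate X is surging in the polls!", datetime(2024, 5, 1, 12, 0, 14)),
    ("user_d", "Lovely weather in Riga today.",        datetime(2024, 5, 1, 12, 3, 0)),
]

WINDOW = timedelta(seconds=30)   # assumed coordination window
MIN_ACCOUNTS = 3                 # assumed minimum cluster size

def suspicious_clusters(posts):
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((account, ts))
    flagged = []
    for text, items in by_text.items():
        items.sort(key=lambda x: x[1])
        accounts = {a for a, _ in items}
        # Many distinct accounts posting the same text almost simultaneously
        # is a classic signature of bot-driven amplification.
        if len(accounts) >= MIN_ACCOUNTS and items[-1][1] - items[0][1] <= WINDOW:
            flagged.append((text, sorted(accounts)))
    return flagged

for text, accounts in suspicious_clusters(posts):
    print(f"possible bot cluster ({len(accounts)} accounts): {text!r}")
```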

Social media manipulation reached a peak in Ukraine, where people were led to assume that new laws would be enforced more strictly. The episode shows how far-reaching bot accounts have become; their power to generate false impressions on social media is both frightening and remarkable. Because misinformation distorts and limits genuine social discourse, such automated agents are deeply harmful to society.

Under Elon Musk, content moderation on Twitter (now X) has changed further, and these changes have allowed networks of false information to flourish. New research shows that accounts spreading false claims about the world’s superpowers received more than 60% additional engagement after these changes.

One of the most complex parts of developing AI-detection tools is differentiating between real and fake content

The ability to verify accounts has also been misused. Artificial intelligence (AI) content-generation services, marketed through platforms such as Telegram and Twitter, have arisen to sway public opinion worldwide. Fake voices are now a weapon too: a political leader in Slovakia was discredited after his voice was cloned and edited into a fabricated conversation. The situation was very similar in the UK, where members of the opposition were attacked with artificially generated videos that appeared to show them making insulting comments.

These manipulations exploit people’s cognitive biases, such as the tendency to believe first impressions. One of the most complex parts of developing AI-detection tools is differentiating between real and fake content. Although certain systems can detect AI-generated text and audio, simple manipulations such as stretching a recording or adding random noise can cause these detectors to fail. This means adversaries can continually refine their techniques to outsmart the countermeasures put in place.
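To see why detection is so brittle, consider a minimal sketch of one widely used heuristic: scoring text by its perplexity under a language model, on the theory that machine-generated text is unusually predictable. The model choice and threshold below are assumptions, and, as noted above, small perturbations can shift the score enough to defeat such a check.

```python
# A minimal sketch of a perplexity-based AI-text detector (an illustrative
# heuristic, not a production system). Assumes the `transformers` and `torch`
# packages are installed; the model and threshold are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity under GPT-2; low values suggest machine text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return float(torch.exp(loss))

THRESHOLD = 40.0  # hypothetical cutoff; real systems calibrate per domain

def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD

# Tiny perturbations (homoglyphs, inserted zero-width characters, random
# typos) raise the measured perplexity and push machine text past the
# threshold, which is exactly the evasion problem described above.
sample = "The quick brown fox jumps over the lazy dog."
print(perplexity(sample), looks_machine_generated(sample))
```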

Social media platforms must adopt stricter engagement-verification policies so their algorithms cannot be maliciously exploited by ill-intentioned users

The issue is made even more complex by open-source AI models. While top tech companies keep an eye on their own AI systems, open-source alternatives can be used to generate and share content without restriction. Because of this control gap, bad actors can launch worldwide disinformation campaigns relatively quickly, and without counter-misinformation strategies there is far more room for harm. Gersman pointed out several countermeasures, beginning with media literacy and critical thinking.

Education that helps the general population identify false information is imperative, as is an understanding of how AI-generated narratives operate. A second countermeasure is tougher regulation of social media: platforms must adopt stricter engagement-verification policies so their algorithms cannot be maliciously exploited by ill-intentioned users, forcing disinformation campaigns to spend far more to conceal their intent.

A third countermeasure is cybersecurity: reducing the probability of unauthorized access and data exposure makes it harder for AI-powered actors to misinform the public, and software security further limits the ability to weaponise AI disinformation. Finally, government bodies and institutional blocs need to support the development of systems capable of detecting AI-generated narratives before they go viral.
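One way such early-detection systems could work, sketched below under illustrative assumptions, is by clustering near-duplicate posts so that a coordinated narrative surfaces before any single post gains traction; the word-shingle similarity used here is a simple stand-in for production-grade methods.

```python
# A minimal sketch of early narrative detection: group low-reach posts by
# word-shingle overlap so a repeated narrative can be flagged pre-virality.
# The threshold and similarity measure are illustrative assumptions.
def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_narratives(posts, threshold=0.5):
    """Greedy single-pass clustering of posts into candidate narratives."""
    clusters = []  # each cluster: (representative shingle set, member posts)
    for post in posts:
        sig = shingles(post)
        for rep, members in clusters:
            if jaccard(sig, rep) >= threshold:
                members.append(post)
                break
        else:
            clusters.append((sig, [post]))
    return [members for _, members in clusters]

posts = [
    "New law will ban cash payments starting next month",
    "Starting next month a new law will ban cash payments",  # paraphrase
    "cats are great pets honestly",
]
for group in cluster_narratives(posts):
    if len(group) > 1:  # repeated phrasing across posts = candidate narrative
        print("candidate coordinated narrative:", group)
```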

Warfare through information technologies was once a concept left to science fiction; today it exists. AI and social media have supercharged the way wars are fought, enabling manipulation that reaches as far as influencing elections. As Gersman pointed out, responding to such threats demands technological solutions, media education, and alertness from society. Many challenges have surfaced, but proactive steps can help democratic states guard their information sphere against AI-fed disinformation. Indeed, future wars will be fought not only with guns and missiles but also with data, code, and the art of storytelling.

Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of Stratheia.
