Artificial Intelligence (AI) is no longer just a way to get things done faster; it is reshaping how the world works, its economies, and its military strategies. As AI becomes a major player in world politics, the idea of technological determinism, the notion that technological progress determines how society and politics change, has become more relevant.
Countries that lead in AI development are likely to be the most important powers of the 21st century, while those that lag behind risk becoming geopolitically irrelevant. This raises pressing questions: Is technological determinism driven by AI inevitable? How will it change the balance of power around the world? And what happens next in this new era of AI dominance?
In the past, control over resources, industrial capacity, and nuclear technology determined who held global power. Today, AI is the new currency of power, shaping military strength, economic growth, and ideological influence.
- The U.S.-China AI Race: The U.S. leads in basic AI research and private-sector innovation, while China uses state-driven AI strategies to take the lead in military, manufacturing, and surveillance applications. This competition resembles the Cold War, but algorithms are the main battleground.
- The EU’s Regulatory Approach: The AI Act tries to balance AI innovation against ethical limits, but if heavy regulation makes it harder for companies to compete, Europe might fall behind in the global AI race.
- Russia and Autonomous Warfare: Russia is integrating AI into cyber warfare and autonomous weapons, marking the start of a new era of AI-driven war.
This competition reinforces technological determinism: AI appears to be driving political outcomes, with countries adjusting their policies either to harness its rise or to resist it.
Technological determinism says that changes in technology cause changes in society, not the other way around. When it comes to AI:
- Economic Systems: AI-driven automation is reshaping labor markets, with countries that have advanced AI infrastructure pulling ahead and others falling behind.
- Military Strategy: Autonomous drones, AI-powered cyberattacks, and algorithmic warfare are redefining defense, as machines now make decisions faster than humans can respond.
- Governance and Surveillance: AI-powered mass surveillance shows how technology can be used to enforce social control, reinforcing authoritarianism.
Critics counter that technological determinism is not absolute; people can shape AI’s path through policy, ethics, and their own actions. But the current AI arms race suggests that control may slip away once a certain threshold is crossed.
Integrating AI into geopolitical strategy creates serious escalation risks that reshape crisis dynamics and strategic stability. AI-powered military systems such as autonomous drones, AI-enhanced battle networks, and algorithmic decision-making tools accelerate the pace of war, shorten the window for human judgment, and make accidental conflict more likely.
Examples of AI in modern military conflicts are still limited, but they show both its transformative potential and the moral problems it raises. In the Russia-Ukraine war, Ukrainian forces have used long-range, AI-powered drones with machine-learning targeting to locate and strike Russian refineries and military bases. Israel’s ‘Lavender’ AI system has been used in Gaza to identify and track tens of thousands of suspected Hamas targets, raising concerns about algorithmic bias and civilian deaths.
Reports indicate that autonomous loitering munitions such as Turkey’s Kargu-2 have been used in Libya to select and attack targets without direct human supervision, a step toward AI-driven lethal autonomy. Meanwhile, the U.S. and China are racing to enhance command-and-control systems with AI. The Pentagon’s Project Maven, for example, uses computer vision to process drone footage and flag insurgent activity, speeding up decision-making on the battlefield. AI is also used in cyber warfare, where machine-learning algorithms identify vulnerabilities automatically.
If technological determinism holds true, the U.S. and China could become AI superpowers, and other countries would have to align with one or the other or risk falling behind. This could split the internet in two, create rival AI ecosystems, and pit competing ethical standards against each other.
AI could go from being a tool of government to being a government itself. Using algorithms to make decisions about policy, law enforcement, and economic planning may mean less human oversight, which raises questions about accountability. The control problem becomes critical as AI approaches Artificial General Intelligence (AGI): an AI system pursuing goals misaligned with human values could have catastrophic consequences. To prevent unchecked AI expansion, international organizations may call for an AI equivalent of nuclear non-proliferation treaties. But it remains unclear how enforcement would work, since powerful countries do not want their AI progress constrained.
Technological determinism suggests that AI will inevitably dominate world politics, but history shows that human choices, policies, ethics, and cooperation can change the course of technology. The central question is whether world leaders will work together to ensure AI is used in ways that benefit everyone, or whether they will let it deepen power imbalances, militarization, and loss of freedom.
Whether we can shape AI’s future depends on our ability to govern collectively, not merely to innovate. Even though AI is advancing rapidly, driven by commercial competition, military needs, and the logic of its own algorithms, history shows that technology is ultimately a social construct.
Even though the nuclear age was dangerous, arms control treaties and norms helped contain its risks. In the same way, AI’s path can be shaped by global governance frameworks (like the EU AI Act), ethical limits on military use (like bans on autonomous weapons), and public oversight of corporate AI development.
But the window for meaningful action is closing. If we do not act together now to prioritize human agency over efficiency, we risk ceding control to systems that put profit or power above the greater good of society. The answer, then, is neither fate nor free will alone.
AI’s path will depend on whether humanity chooses to be its architect or its afterthought. Whether AI helps people move forward or becomes an unstoppable force that reshapes civilization itself will be decided in the next decade. We still have a choice, but for how long?
Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of the Stratheia.