A quiz in The New York Times has brought the reality of deepfakes into sharp focus. What started as a fun experiment to see whether readers could spot the difference between real videos and those altered digitally via Artificial Intelligence (AI) tools, known as deepfakes, became a real challenge. The spread of such media could be detrimental not only to individuals but also to states and their policies.

The emergence of deepfakes as tools for misinformation poses a greater threat to democratic processes than previous forms of election interference.

For example, during the 2024 U.S. presidential election, AI-powered misinformation built on Big Data, i.e. deepfakes, posed a greater danger than the interference seen in the 2016 U.S. presidential election. Currently, the nexus between Big Data and AI is growing stronger. This Oppenheimer-like situation demands necessary regulations and controls by states. If we do not prepare for a future where truth is hard to identify, states and their societies will be left vulnerable to manipulation.

Synthetic media tools, once cutting-edge technologies for movies and games, are now weapons on the disinformation battleground. Large and complex datasets, such as those drawn from social media, sensors, and financial transactions, are known as Big Data. While this data provides valuable information, it also poses challenges because of its 3Vs: volume, variety, and velocity. Volume refers to the vast amount of data; its sheer size makes it difficult to manage.

Regarding variety, the data comes in many forms, from text to images and videos, each requiring a different handling approach. Velocity concerns the speed at which new data is generated; this rapid pace makes it hard for organizations to keep up. Together, the 3Vs represent the scale, diverse formats, and rapid generation of information, which is why traditional methods struggle to manage and analyze Big Data in real time.
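The 3Vs can be made concrete with a minimal sketch. The event records and handler names below are hypothetical, chosen only to illustrate how a stream of mixed-format data (variety) arriving continuously (velocity) and at scale (volume) forces each format down its own processing path:

```python
from collections import Counter

# Hypothetical mixed-format event stream illustrating the 3Vs:
# volume (many records), variety (text, image, video), and velocity
# (records arriving faster than a batch process can absorb them).
events = [
    {"type": "text", "payload": "election rally clip"},
    {"type": "image", "payload": b"\x89PNG..."},
    {"type": "video", "payload": "s3://bucket/clip.mp4"},
    {"type": "text", "payload": "breaking news post"},
]

def route(event):
    """Each format needs its own handling path (variety)."""
    handlers = {
        "text": lambda p: f"indexed text ({len(p)} chars)",
        "image": lambda p: f"stored image ({len(p)} bytes)",
        "video": lambda p: f"queued video at {p}",
    }
    return handlers[event["type"]](event["payload"])

results = [route(e) for e in events]
counts = Counter(e["type"] for e in events)
print(counts)  # Counter({'text': 2, 'image': 1, 'video': 1})
```

In a real system each handler would hide substantial infrastructure (search indexes, object storage, transcoding queues), which is precisely the cost that the 3Vs impose on traditional tooling.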

On the other hand, extracting meaningful insights from Big Data requires powerful tools, which we have in the form of AI. AI is trained on data to identify patterns and predict outcomes. There are, however, limitations. Firstly, AI's effectiveness depends on the quality and quantity of available data. Secondly, AI models can be so complex that it is difficult to understand how they reach their conclusions.
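The idea of learning a pattern from data and predicting an outcome can be sketched in a few lines. This is not how modern AI systems work internally; it is a deliberately tiny stand-in (ordinary least squares on invented toy numbers) that shows the same training-then-prediction shape:

```python
# Toy "training data": inputs xs and observed outcomes ys (roughly y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# "Training": fit a line by ordinary least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# "Prediction": apply the learned pattern to an unseen input.
def predict(x):
    return slope * x + intercept

print(predict(5.0))  # an outcome the model never saw during training
```

The two limitations in the text map directly onto this sketch: with too little or noisy data the fitted slope is unreliable, and once the model is a deep network instead of one line, the reasoning behind a prediction is no longer readable off the parameters.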

The combination of Big Data and AI creates both opportunities and risks, necessitating robust regulatory frameworks to safeguard against misuse.

Lastly, if biased data is used to train an AI model, it can lead to unfair or discriminatory outcomes. Combined with the opacity of complex models, this makes it difficult to tell whether a given decision was reached fairly. Hence, this underscores the state's responsibility for AI development and its use.
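How biased training data produces discriminatory outcomes can be shown with a deliberately crude sketch. The loan-approval history below is hypothetical, and the "model" simply learns each group's historical approval rate, yet it already reproduces the skew in its training data:

```python
from collections import defaultdict

# Hypothetical approval history skewed against group "B".
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A: 75% approved
    ("B", 0), ("B", 0), ("B", 0), ("B", 1),   # group B: 25% approved
]

# "Training": record each group's outcomes.
rates = defaultdict(list)
for group, approved in history:
    rates[group].append(approved)

def predict(group):
    """Approve when the group's historical approval rate exceeds 50%."""
    samples = rates[group]
    return sum(samples) / len(samples) > 0.5

# Two otherwise identical applicants get different decisions.
print(predict("A"), predict("B"))  # True False
```

Real models are far more sophisticated, but the failure mode is the same: a system trained on a biased record will faithfully extend that bias, which is why oversight of training data matters.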

With the dawn of the digital age, the cyber realm stores an unprecedented amount of data, which paves the way for Big Data and AI. As shown in the diagram below, the duo of Big Data and AI has transformed the way we interact, live, and work. Each click, swipe, and search creates data that contributes to the global phenomenon of Big Data. This vast trove of data contains deep patterns and insights that adversaries and non-state actors can potentially misuse to strategically disrupt the operations of states.

Meanwhile, the nexus of Big Data and AI also offers opportunities in cyberspace; from the perspective of deepfakes, this combination is a complete toolkit for good or ill. What is alarming is that the 79th session of the General Assembly's First Committee, on October 25, 2024, identified the cyber domain's perilous side: the use of deepfakes as a tool for propaganda, espionage, and disinformation. "The cyber domain in particular is being instrumentalized to undermine human rights, the rule of law and democracy — it has even become a war-fighting domain in its own right," the Irish delegation stated in the Disarmament and International Security Committee.

However, governments face a number of obstacles in the real-world application of Big Data and AI. Data privacy, security, infrastructure, and ethical concerns are a few among the many challenges of merging the two technologies, and integrating them into existing systems can be costly. States may have to weigh these challenges through cost-benefit analysis.

Ethical considerations in AI development are crucial to prevent biased outcomes that could exacerbate existing societal inequalities.

Furthermore, Big Data and AI require ambitious technology deployments, such as fast internet, data centers, skilled labor, software, and hardware, which can be especially challenging for developing states. On the ethical side, state institutions can ensure that the data used for AI is unbiased and non-discriminatory. This supervision is all the more necessary because such data also powers autonomous weapon systems.

Lethal autonomous weapons offer military advantages over adversaries because they can process data and make AI-based decisions at speeds far exceeding human potential. States are thus investing tidy sums in their research and development. This has raised global concerns, and an international regulatory framework is essential to ensure that their development and use are morally, ethically, and legally justified.

To address these challenges, states can establish regulations that protect citizens’ data while enabling the productive use of AI for both commercial and national interests. Concurrently, it is crucial to invest in technology, cloud computing, and workforce development through targeted training programs, thereby building a robust infrastructure for AI deployment. Clear guidelines must also be formulated to ensure the ethical and responsible use of AI, reducing biases and safeguarding individual rights.

Collaboration between governments, private sectors, and international organizations is essential for addressing the challenges posed by AI and Big Data.

Furthermore, collaboration between governments, private enterprises, and international organizations is equally essential. As demonstrated by the United States under the Biden administration in 2023, such partnerships can facilitate the sharing of risks, expertise, and resources critical to AI innovation and growth.

In conclusion, the integration of Big Data and AI holds vast potential for states, offering enhanced decision-making, operational efficiency, economic progress, and innovation. However, unlocking these benefits requires governments to address pressing concerns related to privacy, infrastructure, and ethics proactively. By embracing a strategic and responsible approach, states can harness the transformative capabilities of Big Data and AI to improve governance and public services.

Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of the Stratheia.