The revolution in Artificial Intelligence (AI) confronts our generation with its greatest challenge yet. The convergence of biotechnology and information technology threatens to make humans obsolete. Though emerging technologies hold great promise, they also carry inevitable threats and dangers.
The emergence of Artificial Intelligence (AI) is reshaping sectors at an expeditious pace, from sustainable development, equality, and inclusivity to productivity and environmental change. The concept of AI is often labelled fuzzy and difficult to define because of its ongoing transformations. To understand its nuances, however, one must first grasp a working definition: a “system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”.
Various viewpoints are offered on AI’s future role in enhancing human security, but far less is said about the implications it will carry in the near future.
It is not easy to grasp all the angles of how AI will impact human security. Artificial intelligence is, without a doubt, transforming the world and will continue to do so. However, despite AI’s potential to bring positive change, there is still a chance it will have detrimental effects on society.
What does the future hold for us?
Consumers tend to concentrate on privacy and anti-discrimination because potential problems in these areas are easier to anticipate. The deeper issue, however, is how big corporations handle data and use algorithms to manipulate human thought. Freedom of expression is an ideal humans have pursued for ages, yet big data algorithms curtail human agency and undermine the very idea of individual freedom.
Like the democratic system itself, the political process revolves not around what we think but around what we feel. In the name of safety, states have exploited personal data gathered by elite corporations for political ends. As Yuval Noah Harari writes in his book, “For once somebody gains the technological ability to hack and manipulate the human heart, democratic politics will mutate into an emotional puppet show”.
States may not yet have reached into human hearts, but they have succeeded in tracking human activity. Emerging algorithms are being used to silence voices raised against state policies. A striking infringement of privacy occurred when Social Sentinel supplied colleges with sophisticated social media monitoring technology for student safety, but Kennesaw State University authorities used it to track down students involved in demonstrations. Similarly, China used AI to censor speech against lockdowns during the COVID-19 pandemic.
Privacy is not the only concern humans face as technology advances. AI is often praised for its contribution to the Sustainable Development Goals, i.e. AI4Good. Yet in the flurry to develop cutting-edge technologies, the environmental effects of growing technological use are disregarded. One study estimated that training a single deep learning natural language processing (NLP) model on a GPU could emit roughly 600,000 lb of carbon dioxide because of its substantial energy consumption.
That is roughly the carbon dioxide emitted over the lifetimes of five cars. Similarly, Google DeepMind’s AlphaGo Zero produced 96 tonnes of CO2 over 40 days of training. At a crucial time when the global community is calling for cuts in carbon emissions to mitigate environmental hazards, one can question the carbon footprint left by algorithms that are just playing games.
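The arithmetic behind these comparisons can be sanity-checked with a short calculation. The per-car lifetime figure used below (about 57 tonnes of CO2, including fuel) is an illustrative assumption commonly used in such comparisons, not a number taken from the text:

```python
# Back-of-envelope check of the emissions figures quoted above.
LB_TO_KG = 0.45359237  # pounds to kilograms

nlp_training_lb = 600_000                           # quoted estimate for one NLP model
nlp_training_t = nlp_training_lb * LB_TO_KG / 1000  # convert lb -> tonnes

# Assumed average lifetime emissions of one car, including fuel (~57 t CO2);
# this value is an assumption for illustration, not from the article.
car_lifetime_t = 57

print(f"One NLP training run: ~{nlp_training_t:.0f} t CO2")
print(f"Equivalent to ~{nlp_training_t / car_lifetime_t:.1f} car lifetimes")
```

Under these assumptions the training run comes to roughly 272 tonnes, which is indeed in the neighbourhood of five car lifetimes.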
AI is not restricted to manufacturing; it is expected to become as ubiquitous as cell phones and the internet.
A Way Forward: Sustainable AI
At present, there is no centralized approach to measuring the impact of AI and machine learning on human security. Rapid technological change makes it difficult to regulate and monitor the ethics of these systems. Sustainable AI is an initiative to promote ecological integrity and social justice throughout the whole lifecycle of AI products, from concept generation and training to implementation and governance. It centres on socio-technical systems rather than on AI applications alone.
Data privacy and ethics should be monitored under the Universal Declaration of Human Rights, and a promising approach to understanding impact is the human rights impact assessment (HRIA). An HRIA examines the effects of a specific project while its developers and deployers still have a chance to modify or abandon it, helping to mitigate potential threats to human security.
The carbon footprint left by AI/ML is not inevitable and need not be the price of progress. AI should be seen as a societal experiment being conducted on people: we still have much to learn about this technology. Precisely because AI remains experimental, it is crucial to put ethical safeguards in place to keep both people and the environment safe.
State-led AI projects should be regulated under a “proportionality framework” to assess the carbon footprint left while tuning and training models. Additionally, carbon trackers should not only monitor the footprint of training a particular model but also project it forward and halt training if the anticipated environmental cost would be exceeded.
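Such a halt-on-budget tracker could be sketched as follows. The class, the per-step emissions estimate, and all the numbers here are hypothetical illustrations of the idea, not an existing tool or API:

```python
# Hypothetical sketch of a carbon-budget guard for a training loop.
# Names and numbers are illustrative assumptions, not a real library.

class CarbonBudget:
    def __init__(self, budget_kg, kg_per_step):
        self.budget_kg = budget_kg      # maximum CO2 allowed for this run
        self.kg_per_step = kg_per_step  # estimated emissions per training step
        self.emitted_kg = 0.0

    def step(self):
        """Record one training step; refuse if it would exceed the budget."""
        if self.emitted_kg + self.kg_per_step > self.budget_kg:
            return False  # halt: the next step would bust the budget
        self.emitted_kg += self.kg_per_step
        return True

budget = CarbonBudget(budget_kg=10.0, kg_per_step=0.5)
steps = 0
while budget.step():
    steps += 1  # the actual train_one_step() call would go here

print(f"Halted after {steps} steps, {budget.emitted_kg:.1f} kg CO2 emitted")
```

With a 10 kg budget and 0.5 kg per step, the loop stops itself after 20 steps, which is the enforcement behaviour the proposal above calls for.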
In the race to achieve the most advanced technology in every walk of life, we certainly cannot ignore its cost. Humans must be protected through technological revolutions if we are to make sense of this world.
Musfirah Rashid is a Quaid-e-Azam University graduate who has been associated with the Islamabad Policy Research Institute (IPRI) and works extensively on Human Security and Geopolitics. She can be reached at musfirahrashid3@gmail.com
Twitter handle: @musfirah_rashid