Artificial Intelligence (AI) stands out as a revolutionary force in today’s rapidly evolving landscape of technological progress and innovation. It can learn, adapt, and transform virtually every aspect of human life.

Used daily, AI-powered algorithms can significantly shape how and what people think. They can also be written by actors with a specific agenda in order to spread disinformation.

An algorithm maker’s biases toward a particular issue can distort the information and data provided by AI software, search engines, and social media platforms.

Filter bubbles and echo chambers can also be engineered to create environments, ripe for misinformation and manipulation, in which users encounter only information that reflects and reinforces the views of one side.

This can limit exposure to other perspectives and fuel polarization. Similarly, AI can inadvertently reproduce biases present in the data on which it was trained. That is why algorithmic literacy is needed among users, particularly the youth.

AI is not immune to manipulation and poses human rights vulnerabilities. For Pakistan, it nonetheless offers opportunities for the growth and development of its economy and way of life. It is therefore imperative that the development and proliferation of AI in Pakistan be outcome-based and risk-weighted.

AI in Pakistan

Under the Digital Pakistan banner, the ‘Draft National AI Policy’ was launched in May 2023. Although a welcome step, Pakistan’s Draft AI Policy requires a strategic overhaul to ensure responsible, transparent, and accountable use of AI, safeguarding human rights and mitigating the risks of content distortion, misinformation, and disinformation while harnessing the technology’s potential.

Pakistan’s Draft AI Policy is divided into four pillars. The first, AI Market Enablement, aims to prepare society for AI adoption by promoting awareness, improving the quality of research and development, and strengthening the workforce through targeted interventions. The second, Enabling AI through Awareness & Readiness, aims to establish an AI ecosystem that addresses societal challenges such as awareness, data standardization, accessibility, and computational needs.

The third pillar, Building a Progressive & Trusted Environment, aims to ensure that Pakistan educates its population about AI and its advantages, and that its workers acquire the skills they need to join the AI industry. The fourth pillar, Transformation & Evolution, aims to ensure that sectors and industries are transformed to use AI effectively with the help of national IT boards, which will raise awareness, provide training programs, and promote collaboration among different sectors.

The Draft Policy incorporates a wide range of national objectives across these four areas to attain AI dominance. The goal of the national AI Policy is to position Pakistan competitively and facilitate the integration of AI while taking global trends into account.

Problems with the Draft AI Policy of Pakistan

As stated earlier, the draft policy rests on four pillars. Yet the policy as a whole says nothing about how to cope with the dilemmas discussed above relating to information manipulation and distortion.

The policy’s Sub-Section 3.2.4 under Section 8.3 is headed ‘Ethical Challenges,’ but it is brief and offers negligible guidance on identifying and resolving the ethical dilemmas of AI. Datasets, moreover, are a crucial part of AI, allowing these technologies to be trained for practical uses.

This raises two crucial questions about the data used to build, train, and enable AI to function: the privacy and security of the datasets used to train AI, and whether those datasets represent Pakistan’s perspective and context.

The Draft AI Policy should also address human rights concerns related to the use of AI-powered automated decision-making in government services, education, criminal justice, finance, and other fields under the pretext of ‘public interest,’ as it allows for discrimination, exclusion, and profiling of certain people. Regulations like the EU AI Act categorize such uses as “high risk” and subject them to strict obligations, banning some practices outright. Both the Draft Policy and the Personal Data Protection Bill (PDPB) should reflect these realities.

The Digital Pakistan Vision underpinning Pakistan’s AI Policy serves as a fundamental basis for integrating modern technology into the country. Its objective is to position Pakistan competitively in the fourth industrial revolution and in data-driven, digitized governance. Nevertheless, the policy fails to express its aims and quantifiable targets clearly, lacking precision and focus.

Although it highlights the need to use evidence to develop policies and designs, including a detailed plan, readily available databases, and a regulatory framework, it does not clearly state specific and quantifiable goals. The difference becomes apparent when the National Draft AI Policy is compared with other AI strategies. The Brazilian Artificial Intelligence Strategy, for example, prioritizes promoting the use of AI and its applications in education, the economy, workforce training, AI governance, and entrepreneurship, yet ethical use and public security come first.

The Brazilian AI strategy, or EBIA (Estratégia Brasileira de Inteligência Artificial), states that it draws on leading global institutions such as the OECD, which issues policy recommendations and standards on what an AI policy should look like while also promoting economic growth. The Brazilian strategy addresses several OECD AI Principles, which promote the ethical use of AI and the protection of human rights.

These include human-centered values, transparency, and international cooperation for trustworthy AI applications. Pakistan’s Draft AI Policy, by contrast, does not draw on these international standards from the OECD or other institutions. Its objectives promise forward-looking guidelines but make no mention of a human rights-based approach.

Pakistan’s Draft AI Policy also does not explain how the “AI regulatory authority” it envisions would ensure ethical use. Nor does it identify or define the high-risk AI systems that could threaten Pakistan’s national security.

The implementation criteria are well defined, including the formation of a National AI Coordination Council under Section 2.2, “The State of AI in Pakistan,” together with a comprehensive regulatory framework, research and development efforts, and AI innovation clusters. However, meeting these requirements depends on sufficient resources and infrastructure, which may prove difficult to secure.

The policy delineates a monitoring mechanism encompassing a specialized unit, periodic reporting, skills and knowledge enhancement, collaboration with the commercial sector, and assessment procedures.

The Draft AI Policy also fails to address the need for dependable, high-quality data and systems for training a skilled workforce. While the policy provides a basis for advancing artificial intelligence in Pakistan, its deficiencies and weaknesses must be resolved for it to be executed effectively and achieve its desired outcomes.

These loopholes present several legal and human rights challenges. The first is algorithmic bias and discrimination, which can perpetuate existing biases in the datasets supplied to AI. The second is privacy and data protection: no legislation in Pakistan specifically addresses either, and the country lacks comprehensive regulations governing how organizations acquire, handle, and use personal information.

The third challenge concerns freedom of expression and censorship, as AI is increasingly being used to censor and control the flow of information to the public. Deepfakes are also being generated in Pakistan at an unprecedented scale, in violation of human rights norms. Yet the Draft AI Policy does not mention ways of mitigating the manufacture of deepfakes even once, underscoring that it is not human rights-centric.

The policy contains no mechanisms to identify or take down harmful AI-generated content that violates the dignity of Pakistan’s citizens. The Draft AI Policy therefore needs to be reworked so that human rights are addressed in detail and measures for mitigating ethical challenges are incorporated, ensuring that people’s right to information is not hindered.

Conclusion

In conclusion, Artificial Intelligence is a revolutionary technology whose capacity to learn and adapt makes it a powerful tool. Yet malicious actors can sway and manipulate AI by corrupting the data from which it learns. Pakistan’s Draft AI Policy overlooks these vulnerabilities and the accompanying human rights concerns, and it lacks detailed guidelines for mitigating these shortcomings.

Although no international law yet regulates AI, the regional and national policies of other states should be used to align and standardize Pakistan’s use of AI.

Pakistan needs the right AI policy to regulate the technology and benefit from it. Though a courageous move, the current draft is neither outcome-based nor risk-weighted. Pakistan’s AI policy should therefore be designed and implemented responsibly, with adequate oversight, transparency, and accountability for its impacts, so as to protect both human rights and Pakistan’s interests.
