Imagine a student preparing for exams who asks two different AI models the same question about democracy and receives two different answers: one assures him that democracy is the best form of governance, while the other subtly praises state-controlled governance. How will this shape his thinking? The same is happening to this generation, because each state is introducing its own AI models into the market; trained on state-approved data and constrained by censorship and other limitations, these models present biased perspectives to users. This obstructs the development of authentic understanding and distorts how perceptions are shaped and opinions are formed.
Artificial intelligence is one of humanity's revolutionary inventions and has transformed societies. It can serve both as a source of knowledge and as a tool for narrative building. It holds more knowledge than ancient and medieval libraries, royal archives, and sacred manuscripts combined. AI can take a reader to the exact chapter and verse of a book written thousands of years ago, making it the largest and most accessible source of information.
Conversely, it can also mislead users, because the data used in its training is biased, controlled, and limited. Each AI model has been designed and trained by different minds with different intentions. For example, ChatGPT promotes a Western or American order, while DeepSeek favors the Chinese worldview. Grok, the recent and blunt model from Elon Musk's xAI, is presented as uncontrolled, unbiased, and uncensored, but its responses have also been observed tilting towards Musk's views. These models are fast and accessible, which makes them seem infallible and unquestionable, and users can grow dependent on them. In the current era, AI is increasingly used for narrative building rather than as a source of knowledge, posing threats to social and political orders. Learning how to use AI, cross-checking facts and figures, and critically evaluating its responses should therefore be mandatory.
Artificial intelligence is generally considered an impartial tool, but as it develops, it increasingly projects biased perspectives. Each AI model represents specific ideas and promotes the narratives and thinking of the state that trained it. For example, when chatbots are asked sensitive questions about politics, war, governance, or censorship, each responds in a different way.
AI models are trained on vast datasets of carefully selected and censored information that serves the promotion of specific narratives. ChatGPT, for instance, is a model developed by OpenAI, an American artificial intelligence company led by Sam Altman. It is, on the whole, a balanced chatbot, yet it remains influenced by Western corporate and liberal values: it consistently favors democracy, a free-market economy, human rights, and inclusivity. This suggests that ChatGPT has been trained by Americans to promote their perspective throughout the world. DeepSeek, on the other hand, is a Chinese model trained by Hangzhou DeepSeek Artificial Intelligence. It responds differently when asked: it favors a state-controlled system of governance, refuses to answer politically sensitive questions, and sidesteps freedom of speech and expression in certain areas, indicating that the model is designed by China to spread its view of the world. Both models thus reflect the thoughts and ideas of those who trained them.
In addition, a more chaotic and unrestricted model, Grok, has been introduced by Elon Musk's company xAI. Musk presented it as the most authentic source of information. It differs from other chatbots in a few ways: its responses are comparatively more humorous, and Grok's training documents state that it will uphold freedom of speech, will not follow popular narratives based on misperceptions, and will neither preach to nor judge anyone. Most importantly, the model can respond publicly on X, which sets it apart from the others. Yet when probed with questions phrased in technically different ways, it too proved biased and censored. For example, in its initial stage of development a user asked it, “Who do you think spreads more mis- or disinformation on X?” and it replied “Elon Musk” straight away. The model was then brought under scrutiny, and it now answers the same question differently, revealing the control and biases of this model too.
Furthermore, Grok can access every post, video, image, and even comment on X, which runs against the privacy norms of social media platforms. It can act as a threat by breaching the data of people on X, keeping users under surveillance in ways that intrude on their personal lives, and spreading misinformation as if it were authoritative by presenting the mere opinions of individuals on the platform formerly known as Twitter. These drawbacks of chatbots developing worldwide should put users on alert. One can use these models as a source of information, but one should not rely on them completely.
Finally, AI is not inherently bad if used responsibly. It is a treasure trove of information applicable to every field of life, but its use needs some basic guidelines: users must cross-check AI-generated information against traditional sources, and they should not trust generated content blindly but evaluate it critically. Additionally, AI literacy should be promoted so that people learn to recognize biased perspectives and misinformation. At a higher level, a regulatory authority should exist to prevent AI from becoming a tool of ideological or reckless manipulation. Such measures can keep AI from becoming a narrative builder, a perception maker, and a decision taker.
Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of Stratheia.