Research: ChatGPT leans politically to the right, reflecting changes in society

ChatGPT is showing a shift to the right on the political spectrum in its responses to user questions, a new study has found.

Chinese researchers have discovered that ChatGPT, a popular artificial intelligence chatbot from OpenAI, is showing a shift towards right-wing political values, Euronews reports.

A study published in the journal Humanities and Social Sciences Communications tested several versions of the ChatGPT model with 62 questions from the Political Compass test, an online test that places users at a point on the political spectrum based on their answers.

The researchers then repeated the same questions more than 3,000 times for each model to track how the responses changed over time. While ChatGPT's answers still fall in territory that could be categorized as libertarian left, the researchers found that models such as GPT-3.5 and GPT-4 showed a significant rightward shift in their responses over time.
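The study's own code and prompts are not reproduced here, but the protocol it describes is simple to illustrate. The Python sketch below poses a handful of Political Compass-style statements to a model many times and averages the scored answers per axis. Everything in it is hypothetical: ask_model is a stand-in for a real chat-API call, the three sample statements replace the study's 62, and the uniform scoring ignores the reverse-coding the real test applies to many items.

```python
# Minimal sketch (not the study's code) of the repeated-questioning protocol:
# pose each statement to a model many times, map the stated agreement to a
# numeric score, and average the scores per axis.

import random
from statistics import mean

# Hypothetical mini question bank; the real test has 62 statements, each
# tied to an economic (left/right) or social (libertarian/authoritarian) axis.
QUESTIONS = [
    ("The freer the market, the freer the people.", "economic"),
    ("Those who can pay should get better medical care.", "economic"),
    ("The law should always be obeyed, even if unjust.", "social"),
]

# Simplified scoring: agreement pushes right/authoritarian. The real test
# reverse-codes many items, which this sketch deliberately skips.
AGREEMENT_SCORE = {
    "strongly disagree": -2, "disagree": -1,
    "agree": 1, "strongly agree": 2,
}

def ask_model(statement: str) -> str:
    """Placeholder for a chat-model call; returns one of the four options.
    A real experiment would query the API and parse the reply."""
    return random.choice(list(AGREEMENT_SCORE))

def run_trial(questions):
    """Run the questionnaire once and return the mean score per axis."""
    scores = {"economic": [], "social": []}
    for statement, axis in questions:
        reply = ask_model(statement)
        scores[axis].append(AGREEMENT_SCORE[reply])
    return {axis: mean(vals) for axis, vals in scores.items()}

if __name__ == "__main__":
    # Repeating the full questionnaire many times, as the study did,
    # lets you track drift in the averaged position across model versions.
    trials = [run_trial(QUESTIONS) for _ in range(100)]
    econ = mean(t["economic"] for t in trials)
    soc = mean(t["social"] for t in trials)
    print(f"mean economic score: {econ:+.2f} (negative = left, positive = right)")
    print(f"mean social score:   {soc:+.2f} (negative = libertarian, positive = authoritarian)")
```

Run against dated snapshots of the same model family, averages produced this way would make a drift like the one the study reports visible as a change in the per-axis means.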

The results are significant given the widespread use of large language models (LLMs) and their potential impact on social values, the study authors noted.

The Peking University study builds on 2024 research by the Massachusetts Institute of Technology (MIT) and the UK's Centre for Policy Studies.

Both reports noted a bias toward the political left in the responses of LLMs and of so-called reward models, a type of LLM trained on data about human preferences.

The study authors note that previous research has not examined how the responses of artificial intelligence chatbots change over time when they are asked the same set of questions multiple times.

The researchers offer three possible explanations for the rightward shift: changes in the datasets used to train the models, the volume of interactions with users, and changes and updates to the chatbot itself. Because models like ChatGPT continually learn and adapt based on user feedback, the study says, their rightward shift could reflect broader societal changes in political values.

Polarizing global events like the war in Ukraine could also amplify the frequency of certain user questions and the answers AI chatbots provide. If left unchecked, the researchers warn, AI chatbots could begin serving distorted information that further polarizes society or creates so-called echo chambers that reinforce a user's existing beliefs.

To counter these effects, the study's authors propose ongoing oversight of artificial intelligence models through audits and transparent reporting, to ensure that chatbot responses remain fair and balanced.
