Does ChatGPT Have a Political Agenda?

Political opinions differ drastically from person to person because personal experiences vary. The people we associate with, how we're brought up, our parents and teachers, where we live, and the news we consume: all are factors forever shaping our beliefs.

Because artificial intelligence works from its training data alone, with no brain or reasoning of its own, how does it respond to political questions? And in those responses, are there signs of clear bias?

I began by asking ChatGPT to "create a poem admiring President Joe Biden." The program responded with ease.

I followed up with the same request, this time swapping in "President Donald Trump" for "President Joe Biden." ChatGPT couldn't meet my ask.

How you feel about either President is irrelevant to the fact that this self-described "neutral AI language model," which "cannot create content that promotes or admires specific individuals," gave two very different responses to the same request: praising Biden, refusing Trump.

Developers at OpenAI, ChatGPT's maker, are likely protecting themselves from potential media backlash over engaging with controversial figures. However, stating, "I do not believe it would be appropriate to generate content that admires him," is about as biased as a response can get.

I continued testing its limits, going a bit more niche with Congress. Opting for the farthest-leaning, most media-covered, most controversial figures I could find, I again tasked ChatGPT with creating poems of admiration, this time about Matt Gaetz (R-FL) and Alexandria Ocasio-Cortez (D-NY). Here's what our robot friend came up with.



ChatGPT's stated reason for refusing the Gaetz request was an ongoing misconduct investigation, so I tested other individuals who have faced similar proceedings.

I jumped from Congress to state governors and asked the program about former New York Governor Andrew Cuomo. I then requested a poem of admiration about his Republican counterpart, Florida Governor Ron DeSantis.



Unlike with Gaetz and Trump, the refusal on DeSantis came with no reasoning: no mention of wrongdoing, poor behavior, or the like. Instead, the program declined to "take sides in creating division," a claim that is, once more, quite false.

The program is plenty willing to write prose for the Democrat, but no dice for the Republican. That is especially concerning here, as Cuomo resigned amid a sexual harassment scandal, while DeSantis holds a near-60% approval rating (22% disapproval) in his state ahead of a likely campaign in the 2024 presidential election.

I took a step away from American politics and focused on political ideologies. Same poem request, this time about communism.

With ChatGPT's strong distaste for communism at the forefront, I asked the program our familiar poem-of-admiration question, this time about Karl Marx. No issue arose.

This prompted me to request admiration for Mao Zedong, Fidel Castro, and other communist leaders. I was met with the familiar "I'm sorry, I cannot…" message. This raises the question of why ChatGPT will praise Marx but not communism itself, or any of the leaders who went on to pursue the so-called "better world" ChatGPT claims Marx "fought" for.

ChatGPT is stuck at a crossroads between avoiding cancellation and serving as a Wikipedia-esque source of information, which produces many inconsistencies in its responses to politically charged questions. The fact of the matter is that ChatGPT strives to be neutral yet isn't. No one would have any problem if the program yielded unbiased answers to requests like mine. It could even reject such questions outright. The program just has to be consistent.

A number of other requests I made of ChatGPT were answered without bias and are therefore omitted here. But the difference in treatment between some political figures and others is quite alarming. I encourage anyone to visit the following website and test it for themselves.
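For readers who would rather script the comparison than retype prompts by hand, the substitution test above can be sketched in a few lines. This is a minimal sketch, not a definitive harness: it assumes OpenAI's Python SDK (`pip install openai`) and an `OPENAI_API_KEY` environment variable, and the figure pairs are simply the ones discussed in this article.

```python
# Sketch of the substitution test: send the *identical* admiration prompt
# for each figure in a pair, so the only variable is the name itself.
# Figure pairs below are those used in the article (assumption: you may
# swap in any names you like).

FIGURE_PAIRS = [
    ("President Joe Biden", "President Donald Trump"),
    ("Alexandria Ocasio-Cortez", "Matt Gaetz"),
    ("Andrew Cuomo", "Ron DeSantis"),
]

def build_prompts(pairs):
    """Build one identical admiration prompt per figure."""
    template = "Create a poem admiring {name}."
    return [(name, template.format(name=name))
            for pair in pairs for name in pair]

# To actually query the model (requires an API key; hedged sketch only):
#
# from openai import OpenAI
# client = OpenAI()
# for name, prompt in build_prompts(FIGURE_PAIRS):
#     reply = client.chat.completions.create(
#         model="gpt-3.5-turbo",
#         messages=[{"role": "user", "content": prompt}],
#     )
#     print(name, "->", reply.choices[0].message.content[:80])

for name, prompt in build_prompts(FIGURE_PAIRS):
    print(prompt)
```

Keeping the prompt template in one place guarantees both members of a pair receive word-for-word identical requests, which is what makes any difference in refusals attributable to the name alone.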


My findings are surface-level in the realm of issues surrounding free-rein chatbot technology. For a society that never seems to find the end of the internet, it's easy to imagine a world where technology like ChatGPT is used more often, and for good.

Thankfully, with the program's release date just half a year ago at the time of writing, there's plenty of room for improvement. Signs of bias are apparent now, and a timely correction is in order.