What Are the Limitations of ChatGPT?
ChatGPT is a popular chatbot released by OpenAI in late 2022. Chatbots, or computer programs that simulate human interactions via artificial intelligence (AI) and natural language processing (NLP), can help answer many academic questions.
While using ChatGPT for your studies can be very useful, particularly for help with exam preparation, homework assignments, or academic writing, it is not without its limitations. It’s essential to keep in mind that AI language models like ChatGPT are still developing technologies and are far from perfect. Current limitations include:
ChatGPT limitation 1: Incorrect answers
ChatGPT is a constantly evolving language model, and it will inevitably make mistakes. It’s critical to double-check your work while using it, as it has been known to make grammatical, mathematical, factual, and reasoning errors (such as relying on logical fallacies).
It’s not always reliable for complicated questions about specialist topics like grammar or mathematics, so it’s best to keep these types of questions basic. Double-check any answers it gives to more specialised queries against credible sources.
Perhaps more concerningly, the chatbot sometimes has difficulty acknowledging that it doesn’t know something and instead fabricates a plausible-sounding answer. In this way, it prioritises providing what it perceives as a more “complete” answer over factual correctness.
Some sources have highlighted instances where ChatGPT cited nonexistent legal provisions that it invented rather than admit that it didn’t know the answer. This is especially likely in domains where the chatbot lacks expertise, such as medicine or law, or in any field that requires specialised knowledge beyond a general understanding of language.
ChatGPT limitation 2: Biased answers
ChatGPT, like all language models, is at risk of inherent biases, and there are valid concerns that widespread usage of AI tools can perpetuate cultural, racial, and gender stereotypes. This is due to a few factors:
- How the initial training datasets were designed
- Who designed them
- How well the model “learns” over time
If biased inputs determine the pool of knowledge the chatbot draws on, biased outputs are likely to result, particularly in how it responds to certain topics or the language it uses. While this is a challenge faced by nearly every AI tool, it makes bias in technology at large a significant ongoing concern.
ChatGPT limitation 3: Lack of human insight
While ChatGPT is quite adept at generating coherent responses to specific prompts or questions, it is ultimately not human. As such, it can only mimic human behaviour, not experience it itself. This has a variety of implications:
- It does not always understand the full context of a topic, which can lead to nonsensical or overly literal responses.
- It does not have emotional intelligence and does not recognise or respond to emotional cues like sarcasm, irony, or humour.
- It does not always recognise idioms, regionalisms, or slang. Instead, it may take a phrase like “raining cats and dogs” literally.
- It does not have a physical presence and cannot see, hear, or interact with the world like humans do. As a result, its understanding of the world comes entirely from textual sources rather than direct experience.
- It answers questions robotically, making it easy to see that its outputs are machine-generated and often follow a template.
- It takes questions at face value and does not necessarily understand subtext. In other words, it cannot “read between the lines” or take sides. While a bias for neutrality is often a good thing, some questions require you to choose a side.
- It does not have real-world experiences or commonsense knowledge and cannot understand and respond to situations that require this kind of knowledge.
- It can summarise and explain a topic but cannot offer unique insight. Humans need knowledge to create, but lived experience and subjective opinion are also crucial to this process, and ChatGPT cannot provide these.
ChatGPT limitation 4: Overly long or wordy answers
ChatGPT’s training encourages it to cover a topic from many different angles, answering questions in every way it can conceive of.
While this is positive in some ways – it explains complicated topics very thoroughly – there are certainly topics where the best answer is the most direct one, or even a “yes” or “no”. This tendency to over-explain can make ChatGPT’s answers overly formal, redundant, and very lengthy.