Despite ChatGPT's many impressive capabilities, it has one big Achilles' heel: a lack of knowledge about current events. However, it seems OpenAI might be quietly working on a solution.
When OpenAI first unveiled ChatGPT nearly a year ago, the AI chatbot only had knowledge of events that occurred before September 2021, since its training data extended only to that date.
Recently, however, some users have been taking to X (formerly Twitter) to share that they noticed an expansion in the time frame of knowledge that the chatbot possesses.
Other ChatGPT Plus users in the thread got the same response from the chatbot about its scope of knowledge. However, when asked about events that occurred within the period it claimed to cover, ChatGPT didn't seem to have the answers.
ZDNET decided to put it to the test and asked both ChatGPT Plus with GPT-4 and standard ChatGPT with GPT-3.5 what their knowledge cutoffs were. In both cases, ChatGPT answered January 2022.
Although that's not as recent as the dates other users reported, a working knowledge cutoff of January 2022 would still be a significant advancement for the chatbot over its initial cutoff of September 2021.
Similar to the experience the X users described, despite claiming to have knowledge of information through January 2022, ChatGPT wasn't able to answer questions about events that happened in December 2021.
When asked when the first Omicron death occurred and who won the 70th Miss Universe pageant, ChatGPT said it didn't have access to that information because the events happened after its January 2022 cutoff date (which is false, since both took place in December 2021), and suggested checking a website for the most recent information.
OpenAI's ChatGPT FAQ page, updated just last week, states that ChatGPT "has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content."
It isn't clear why ChatGPT is claiming knowledge it doesn't actually have. ZDNET reached out to OpenAI for comment.
However, the discrepancy does highlight the need to verify any information you get from ChatGPT since, like any other generative AI model, it is prone to hallucinations.