Ever since the release of OpenAI’s humanlike chatbot ChatGPT in late 2022, users have been warned about the tool’s potential for hallucinations, meaning the bot can spit out inaccurate or fictitious information in a very confident manner.

But more recently, these hallucinations have become one of the grounds for challenging the chatbot’s legal standing under the European Union’s General Data Protection Regulation.