Large language models can be trained to write code, predict the ending of a sentence, summarize complex concepts and even solve math problems. But have they been taught how to forget?

For now, there seem to be few avenues for developers of foundational large language models (LLMs) to comply with the right to erasure, also known as the “right to be forgotten,” under the General Data Protection Regulation (GDPR) without deleting their models entirely.