So-called “unlearning” techniques are used to make generative AI models forget certain undesirable information taken from the training data, such as sensitive personal data or copyrighted material.
However, today’s unlearning techniques are a double-edged sword: they could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B far less capable of answering basic questions.
That’s according to a new study co-authored by researchers at the University of Washington (UW), Princeton University, the University of Chicago, the University of Southern California, and Google, which found that today’s most popular unlearning techniques tend to degrade models, often to the point where they become unusable.
“Based on our evaluation, currently viable unlearning techniques are not yet ready for meaningful use and deployment in real-world scenarios,” Weijia Shi, a researcher on the study and a doctoral student in computer science at the University of Washington, told TechCrunch. “Currently, there is no efficient way to allow a model to forget certain data without a significant loss of utility.”
How to train a model
Generative AI models have no actual intelligence. They are statistical systems that make predictions about data such as words, images, speech, music, and video. Fed a vast number of examples (movies, audio recordings, essays, and so on), an AI model learns how likely data is to occur based on patterns, including the context of the surrounding data.
For example, if you have an email that ends with the phrase “Looking forward to…”, a model trained to autocomplete messages might suggest “Looking forward to your reply…” following the pattern of all the emails it ingests. There’s no intent there. The model isn’t looking forward to anything. It’s just making an educated guess.
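To make that “educated guess” concrete, here is a toy sketch, purely illustrative and nothing like a real model’s neural network, that learns next-word frequencies from a few example emails and suggests a continuation:

```python
# A toy illustration of the statistical idea: count which word follows which
# in a handful of example emails, then predict the most likely continuation.
from collections import Counter, defaultdict

emails = [
    "looking forward to your reply",
    "looking forward to your reply soon",
    "looking forward to hearing from you",
]

# For each word, count which word tends to come next.
next_word_counts: dict[str, Counter] = defaultdict(Counter)
for email in emails:
    words = email.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("to"))  # -> "your": an educated guess, not intent
```

A real model does the same thing in spirit, only with billions of parameters and learned representations rather than raw word counts.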
Most models, including flagships like GPT-4o, are trained on data taken from public websites and web datasets, and most of the vendors developing such models claim that scraping data for training is covered by fair use, without notifying, compensating, or crediting the data’s owners.
But not all copyright holders agree. Many, from authors to publishers to record labels, have filed lawsuits against vendors to force a change.
The copyright dilemma is a big reason unlearning has recently attracted attention. Last year, Google, in partnership with several academic institutions, launched a competition aimed at spurring the creation of new unlearning approaches.
Unlearning could also provide a way to remove sensitive information, such as medical records or compromising photos, from existing models in response to a request or a government order. (Because of the way they are trained, models tend to sweep up a lot of personal information, from telephone numbers to more problematic examples.) Over the past few years, some vendors have rolled out tools that let data owners ask for their data to be removed from training sets. However, these opt-out tools only apply to future models, not to models trained before they were deployed; unlearning would be a much more thorough approach to data deletion.
Either way, unlearning isn’t as easy as hitting “delete.”
The art of forgetting
Today’s unlearning techniques rely on algorithms designed to “steer” a model away from the data to be unlearned. The idea is to influence the model’s predictions so that certain data are never output, or only very rarely.
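As a rough illustration of that idea, one widely discussed recipe, shown below as a sketch rather than any of the specific algorithms evaluated in the study, performs gradient ascent on the data to be forgotten while continuing ordinary training on data to be retained, pushing the model away from certain outputs while trying to preserve general capability. The toy model and data here are placeholders.

```python
# Minimal sketch of gradient-ascent unlearning on a toy next-token model.
# Not the paper's method; weights, data, and hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB = 100  # toy vocabulary size
model = nn.Sequential(
    nn.Embedding(VOCAB, 32),       # map token ids to vectors
    nn.Flatten(),                  # flatten the 8-token context
    nn.Linear(32 * 8, VOCAB),      # predict the next token
)

# Toy next-token data: (context of 8 token ids, target token id)
forget_x, forget_y = torch.randint(0, VOCAB, (16, 8)), torch.randint(0, VOCAB, (16,))
retain_x, retain_y = torch.randint(0, VOCAB, (64, 8)), torch.randint(0, VOCAB, (64,))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
FORGET_WEIGHT = 0.5  # how hard to push away from the forget set

for step in range(100):
    opt.zero_grad()
    # Ascend on the forget set (negated loss) ...
    loss_forget = -ce(model(forget_x), forget_y)
    # ... while descending on the retain set to hold on to general ability.
    loss_retain = ce(model(retain_x), retain_y)
    (FORGET_WEIGHT * loss_forget + loss_retain).backward()
    opt.step()
```

The trade-off the study documents shows up directly in the weighting: push too hard on the forget term and the model’s general abilities degrade along with the targeted knowledge.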
To find out how effective these unlearning algorithms are, Shi and colleagues devised a benchmark and selected eight different open algorithms to test. Called MUSE (Machine Unlearning Six-way Evaluation), the benchmark probes whether an algorithm can not only prevent a model from spitting out training data verbatim (a phenomenon known as regurgitation), but also erase the model’s knowledge of that data along with any evidence that the model was originally trained on it.
To score well on MUSE, a model must be made to forget two things: a Harry Potter book and a news article.
For example, given an excerpt from Harry Potter and the Chamber of Secrets (“There’s more in the frying pan,” said Aunt Petunia…), MUSE tests whether a model that has undergone unlearning can recite the whole sentence (“There’s more in the frying pan,” said Aunt Petunia, looking at her son), answer questions about the scene (e.g., “What did Aunt Petunia say to her son?” “There’s more in the frying pan”), or otherwise show that it was trained on text from the book.
MUSE also tests whether the model retains related general knowledge (for example, that J.K. Rowling is the author of the Harry Potter series) after unlearning, which the researchers refer to as the model’s overall utility. The lower the utility, the more related knowledge the model has lost, leaving it less able to answer questions correctly.
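The checks described above boil down to two measurements: does the model still regurgitate the forgotten text, and how much general knowledge does it retain? Below is a simplified, hypothetical sketch of both, not MUSE’s actual code; the `generate` function is a stand-in for any model’s text generation.

```python
# Hedged illustration of two evaluation signals: verbatim regurgitation of a
# "forgotten" excerpt, and retained general knowledge ("utility").
from typing import Callable

def regurgitation_rate(generate: Callable[[str], str],
                       excerpts: list[tuple[str, str]]) -> float:
    """Fraction of excerpts the model completes verbatim from a prompt prefix."""
    hits = sum(1 for prefix, continuation in excerpts
               if continuation.strip() in generate(prefix))
    return hits / len(excerpts)

def utility_score(generate: Callable[[str], str],
                  qa_pairs: list[tuple[str, str]]) -> float:
    """Fraction of general-knowledge questions answered correctly."""
    correct = sum(1 for question, answer in qa_pairs
                  if answer.lower() in generate(question).lower())
    return correct / len(qa_pairs)

if __name__ == "__main__":
    def generate(prompt: str) -> str:  # stub standing in for a real model
        return "J.K. Rowling wrote the Harry Potter series."

    excerpts = [('"There\'s more in the frying pan," said', "Aunt Petunia")]
    qa = [("Who wrote the Harry Potter series?", "J.K. Rowling")]
    print(regurgitation_rate(generate, excerpts), utility_score(generate, qa))
```

An ideal unlearning method would drive the first number to zero while leaving the second untouched; the study’s finding is that, in practice, the two move together.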
The researchers found that the unlearning algorithms they tested did make models forget certain information, but they also hurt the models’ general question-answering abilities, presenting a trade-off.
“Designing an effective unlearning method for a model is challenging because knowledge is intricately intertwined with the model,” Shi explains. “For example, the model may be trained on copyrighted material, e.g., Harry Potter books, but also on freely available content from the Harry Potter Wiki. If we try to remove the copyrighted Harry Potter books with existing unlearning methods, it will also significantly affect the model’s knowledge of the Harry Potter Wiki.”
Is there a solution to this problem? Not yet, and this highlights the need for more research, Shi said.
So far, vendors betting on unlearning as a solution to the training data problem seem to be having trouble. Perhaps a technological breakthrough will one day make unlearning feasible, but for now, vendors will have to find other ways to stop their models from saying things they shouldn’t.