Recent work has demonstrated that finetuning is a promising approach to 'unlearn' concepts from large language models. However, finetuning can be expensive, as it requires both …
With advancements in self-supervised learning, the availability of trillions of tokens in a pre-training corpus, instruction fine-tuning, and the development of large Transformers with …
Large language models (LLMs) have become the state of the art in natural language processing. The massive adoption of generative LLMs and the capabilities they have shown …
Large Language Models (LLMs) frequently memorize long sequences verbatim, often with serious legal and privacy implications. Much prior work has studied such verbatim …
Generative AI technologies have been deployed in many places, such as (multimodal) large language models and vision generative models. Their remarkable performance should be …
Language models (LMs) derive their capabilities from extensive training on diverse data, including potentially copyrighted material. These models can memorize and generate …
Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data during pre-training. LLM unlearning aims to eliminate the influence of undesirable data from …
The rapid advancement of Large Language Models (LLMs) has demonstrated their vast potential across various domains, attributed to their extensive pretraining knowledge and …
Conclusion: In this letter, we propose E2URec, an efficient and effective unlearning method for LLMRec. Our method enables LLMRec to efficiently forget specific data by only …