
Making AI models ‘forget’ undesirable data hurts their performance


So-called “unlearning” techniques are used to make a generative AI model forget specific, undesirable information it picked up from its training data, such as sensitive private data or copyrighted material.
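To make the idea concrete, here is a minimal sketch of one common unlearning approach: gradient *ascent* on the examples to be forgotten, which deliberately increases the model's loss on that data. The toy logistic-regression model, the dataset, and the hyperparameters below are invented for illustration and are not from any particular unlearning paper or library.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, b, x, y):
    # Gradient of the log-loss for a single (x, y) example.
    p = sigmoid(w * x + b)
    return (p - y) * x, (p - y)

def loss(w, b, x, y):
    p = sigmoid(w * x + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Toy retained data, plus one example we later want the model to forget.
retain = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
forget = (3.0, 1)

# Ordinary training (gradient descent) on everything, forget example included.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    for x, y in retain + [forget]:
        gw, gb = grad(w, b, x, y)
        w -= lr * gw
        b -= lr * gb

before = loss(w, b, *forget)

# "Unlearning": gradient ascent on the forget example, pushing its loss UP
# so the model no longer fits that data point.
for _ in range(10):
    gw, gb = grad(w, b, *forget)
    w += lr * gw  # note the sign flip versus training
    b += lr * gb

after = loss(w, b, *forget)
print(f"loss on forgotten example: {before:.4f} -> {after:.4f}")
```

In practice, naive ascent like this is exactly where the performance cost comes from: raising the loss on the forget set also shifts the weights that serve the retained data, which is why real unlearning methods interleave ascent on the forget set with descent on retained data (or constrain the update) to limit collateral damage.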


