A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data
  • You seem to have the assumption that they’re not. And that “helping society” is anything more than a happy accident that results from “making big profits”.

    It's not an assumption. There are academic researchers at universities working on developing these kinds of models as we speak.

    Are you asking me whether it’s a good idea to give up the concept of “Privacy” in return for an image classifier that detects how much film grain there is in a given image?

    I'm not wasting time responding to straw men.

  • It’s a statistical model. Given a sequence of words, there’s a set of probabilities for what the next word will be.

    That is a gross oversimplification. LLMs operate on much more than just statistical probabilities. It's true that they predict the next word based on probabilities learned from training datasets, but they also have layers of transformers to process the context provided from a prompt to tease out meaningful relationships between words and phrases.

    For example: Imagine you give an LLM the prompt, "Dumbledore went to the store to get ice cream and passed his friend Sam along the way. At the store, he got chocolate ice cream." Now, if you ask the model, "who got chocolate ice cream from the store?" it doesn't just blindly rely on statistical likelihood. There's no way you could argue that "Dumbledore" is a statistically likely word to follow the text "who got chocolate ice cream from the store?" Instead, it uses its understanding of the specific context to determine that "Dumbledore" is the one who got chocolate ice cream from the store.

    So, it's not just statistical probabilities; the models have the ability to comprehend context and generate meaningful responses based on that context.
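    A toy sketch of the attention idea described above: the model scores each token in the prompt context against the question and weights its answer toward the best match. The vectors and token names here are entirely invented for illustration; real models learn high-dimensional embeddings from data rather than using hand-made ones.

    ```python
    import math

    def softmax(xs):
        """Turn raw similarity scores into a probability distribution."""
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def dot(a, b):
        """Similarity between two toy embedding vectors."""
        return sum(x * y for x, y in zip(a, b))

    # Hand-made "embeddings" for tokens in the prompt context
    # (purely illustrative -- real embeddings are learned).
    context = {
        "Dumbledore": [0.9, 0.1, 0.0],
        "store":      [0.1, 0.8, 0.1],
        "Sam":        [0.3, 0.2, 0.5],
    }

    # A toy vector standing in for "who got chocolate ice cream?"
    query = [0.8, 0.2, 0.0]

    # Attention: score every context token against the query,
    # normalize with softmax, and pick the most-attended token.
    tokens = list(context)
    weights = softmax([dot(query, context[t]) for t in tokens])
    answer = tokens[weights.index(max(weights))]
    print(answer)  # the context token the query attends to most
    ```

    The point of the sketch: the winning token depends on what is in the prompt, not on which word is most frequent overall, which is why the model can answer "Dumbledore" even though that name is statistically rare.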

  • If enforcement means big tech companies have to throw out models because they used personal information without knowledge or consent, boo fucking hoo

    A) this article isn't about a big tech company, it's about an academic researcher. B) he had consent to use the data when he trained the model. The participants later revoked their consent to have their data used.

  • How is “don’t rely on content you have no right to use” literally impossible?

    At the time they used the data, they had a right to use it. The participants later revoked their consent for their data to be used, after the model was already trained at an enormous cost.

  • ok i guess you don’t get to use private data in your models too bad so sad

    You seem to have an assumption that all AI models are intended for the sole benefit of corporations. What about medical models that can predict disease more accurately and more quickly than human doctors? Something like that could be hugely beneficial for society as a whole. Do you think we should just not do it because someone doesn't like that their data was used to train the model?

  • There’s nothing that says AI has to exist in a form created from harvesting massive user data in a way that can’t be reversed or retracted. It’s not technically impossible to do that at all, we just haven’t done it because it’s inconvenient and more work.

    What if you want to create a model that predicts, say, diseases or medical conditions? You have to train that on medical data or you can't train it at all. There's simply no way that such a model could be created without using private data. Are you suggesting that we simply not build models like that? What if they can save lives and massively reduce medical costs? Should we scrap a massively expensive and successful medical AI model just because one person whose data was used in training wants their data removed?

  • Remember me comrades!
  • That's not how the world works in the year 2023. Isolationism just isn't a conceivable possibility. All countries are interconnected, and what's happening in one country influences what's happening in other countries in major ways.

  • I don't think you said anything meaningfully different from what I already said.

    You do not consider the abhorrent unethical nature of certain actions as being a valid argument against taking those actions in the pursuit of establishing a communist society. The only criticism you'll entertain is that certain actions may be ineffective or inefficient at accomplishing that goal.

  • Nah, I fully know you as a person, including everything you've ever done and everything you ever will do, from just a couple of internet comments, and I judge you useless. So give up. Stop being a socialist. I, an internet stranger, know you are not contributing anything of value, so why bother?

  • Well your political activism starts and ends with posting Lenin quotes in online discussion boards, so I don’t know that you are in any position to be calling other people useless.

  • Yeah, you should read Marx if you want to understand the historical development of socialist ideas, but if that’s where your reading ends, then your ideas are stuck in the past.

    Socialism isn’t a religious dogma that is inflexible and unchanging. It’s an intellectual idea that grows and becomes more refined over time.

    LittleLordLimerick @lemm.ee