OpenAI, Google, Anthropic admit they can’t scale up their chatbots any further

Once you’ve trained your large language model on the entire written output of humanity, where do you go?  Here’s Ilya Sutskever, ex-OpenAI, admitting to Reuters that they’ve plateaued: [Reuters] Th…

  • LLMs are quite impressive as chatbots all things considered. The conversations with them are way more realistic and almost as funny as the ones with the IRC markov chain my friend made as a freshman CS student.

    Of course, our bot's training data only included a few years of the IRC channel's logs and the Finnish Bible we later threw in for shits and giggles. A training set of approximately zero terabytes in total.

    LLMs are less a marvel of machine learning algorithms (though I admit those play a part) and more one of data scraping. By their own claims, they have already dug through the vast majority of the publicly accessible web, so where do you go from there? Sure, there are plenty of books that aren't on the web, but feeding them into the machine is about as hard as getting them onto the web in the first place.
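    For anyone curious, a freshman-grade Markov chain bot like the one described really is just a few lines. Here's a minimal sketch (word-level bigrams; all names are illustrative, not the friend's actual code):

    ```python
    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Map each word n-gram to the list of words observed right after it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
        return chain

    def generate(chain, max_words=20):
        """Random-walk the chain from a random starting n-gram."""
        key = random.choice(list(chain))
        out = list(key)
        for _ in range(max_words):
            followers = chain.get(tuple(out[-len(key):]))
            if not followers:
                break  # dead end: this n-gram only appeared at the end of the corpus
            out.append(random.choice(followers))
        return " ".join(out)
    ```

    Feed it a few years of IRC logs (plus optional Bible) as one big string and `generate()` will happily produce the channel's greatest hits, remixed.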
