222 comments
  • A few years ago I remember people being amazed that prompts like "Markiplier drinking a glass of milk" could occasionally produce blobs that looked vaguely like what was asked for. Now there is near-photorealistic video output. It's the same kind of deal with the ability to write correct computer code and answer questions. Most of the concrete predictions/bets people made along the lines of "AI will never be able to do ______" have been lost.

    What reason is there to think it's not taking off, aside from bias or dislike of what's happening? There are still flaws and limitations in what it can do, but I feel like you have to have your head in the sand not to acknowledge the crazy level of progress.

    • It could do that 3 years ago.

    • It's absolutely taking off in some areas. But there's also an unsustainable bubble because AI of the large language model variety is being hyped like crazy for absolutely everything when there are plenty of things it's not only not ready for yet, but that it fundamentally cannot do.

      You don't have to dig very deeply to find reports of companies that tried to replace significant chunks of their workforces with AI, only to find out middle managers giving ChatGPT vague commands weren't capable of replicating the work of someone who actually knows what they're doing.

      That's been particularly common with technology companies that moved very quickly to replace developers, and then ended up hiring them back because developers can think about the entire project and how it fits together, while AI can't - and never will as long as the AI everyone's using is built around large language models.

      Inevitably, being able to work with and use AI is going to be a job requirement in a lot of industries going forward. Software development is already changing to include a lot of work with Copilot. But any actual developer knows that you don't just deploy whatever Copilot comes up with, because - let's be blunt - it's going to be very bad code. It won't be DRY, it will be bloated, it will implement things in nonsensical ways, it will hallucinate... You use it as a starting point, and then sculpt it into shape.
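
      To make "sculpt it into shape" concrete, here is a minimal, purely hypothetical sketch (the names and checks are invented for illustration): the kind of repetitive, non-DRY code an assistant often produces, followed by the parameterized version you might refactor it into.

        # Hypothetical assistant output: near-identical functions repeated per field (not DRY).
        def validate_username(value):
            if not isinstance(value, str) or not value.strip():
                raise ValueError("username is required")
            return value.strip()

        def validate_email(value):
            if not isinstance(value, str) or not value.strip():
                raise ValueError("email is required")
            return value.strip()

        # After "sculpting": one parameterized helper replaces the copies.
        def validate_required_text(value, field):
            if not isinstance(value, str) or not value.strip():
                raise ValueError(f"{field} is required")
            return value.strip()

      That kind of cleanup is routine with assistant output.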

      It will make you faster, especially as you get good at the emerging software development technique of "programming" the AI assistant via carefully structured commands.

      And there's no doubt that this speed will result in some permanent job losses eventually. But AI is still leagues away from being able to perform the joined-up thinking that allows actual human developers to come up with those structured commands in the first place, as a lot of companies that tried to do away with humans have discovered.

      Every few years, something comes along that non-developers declare will replace developers. AI is the closest yet, but until it can do joined-up thinking, it's still just a pipe-dream for MBAs.

      • But any actual developer knows that you don’t just deploy whatever Copilot comes up with, because - let’s be blunt - it’s going to be very bad code. It won’t be DRY, it will be bloated, it will implement things in nonsensical ways, it will hallucinate… You use it as a starting point, and then sculpt it into shape.

        Yeah, but I don't know where you're getting the "never will" or "fundamentally cannot do" from. LLMs used to be useful for coding only if you asked for simple, self-contained functions in the most popular languages, and now we're here: for most small-scope requests I get a result that's better written than what I could have produced myself in far more time, and it makes way fewer mistakes than before and can often correct them. That's using only local models, which became actually viable for me less than a year ago. So why won't it keep going?

        From what I can tell, not much actually stands in the way of sensible, holistic consideration of a larger problem or codebase here: mostly context-size limits and the growing tendency to forget things the longer the context window gets, which afaik are problems being actively worked on, and there's no reason they're guaranteed to remain unsolved. This also seems to be what's holding back agentic AI from being actually useful. If that stuff gets cracked, I think things will start changing even faster.

    • Agreed. LLM AI has gotten insanely good insanely fast, and an LLM of course isn’t going to magically turn into an AGI. That’s a whole different ball game.

    • Yes, the goal posts keep moving, but they do so for a rather solid reason: We humans are famously bad at understanding intelligence and at understanding the differences between human and computer intelligence.

      100 years ago, doing complex calculations was seen as something only reasonably smart humans could do. Computers easily outcompeted humans, because calculation is inherently easy for computers while very difficult for humans.

      30 years ago we thought that high-level chess was reserved for only the smartest of humans, and that it was a decent benchmark for intelligence. Turns out, playing chess benefits greatly from large memory and fast computation, so again, it was easy for computers while really hard for humans.

      Nowadays AI can do a lot of things we thought would be really hard, but that computers turn out to be able to do after all. Still, there's hardly any task performed by LLMs where they're actually better than a moderately proficient human being. (Apart from tasks like "Do homework task X", where again LLMs benefit from large memory, since they can just regurgitate stuff from the training set.)

    • Linear growth can be faster than exponential growth. Exponential implies that tomorrow we will see it advance faster than it did the day before, so every day we would see even crazier shit.
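
      A minimal sketch of why that can hold, with made-up numbers purely for illustration: a steep linear curve stays ahead of a slowly compounding exponential curve for a very long time.

        # Hypothetical numbers: 100 units of progress per year (linear)
        # vs. 5% compounding growth from a base of 100 (exponential).
        for year in range(10, 51, 10):
            linear = 100 * year
            exponential = 100 * 1.05 ** year
            print(year, linear, round(exponential, 1))
        # year 10: 1000 vs ~162.9; year 50: 5000 vs ~1146.7 -- the linear curve
        # is still far ahead, even though the exponential curve wins eventually.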

  • We humans always underestimate the time it actually takes for a tech to change the world. By now we should be getting around in self-flying cars and on hoverboards, but we're not.

    The disseminators of so-called AI have a vested interest in making it seem like the magical solution to all our problems. The tech press seems to have had a good swig of the Kool-Aid as well, overall. We have such a warped perception of new tech; we always see it as magic beans. The internet will democratize the world - hasn't happened; I think we've actually regressed as a planet. Fully self-driving cars will happen by 2020 - looks at calendar. Blockchain will revolutionize everything - it really only provided a way for fraudsters, ransomware dicks, and drug dealers to get paid. Now it's so-called AI.

    I think the history books will at some point summarize the introduction of so-called AI as OpenAI taking a gamble with half-baked tech and provoking its panicked competitors into a half-baked game of one-upmanship. We arrived at the plateau of the hockey-stick graph in record time, burning an incredible amount of resources, both fiscal and earthly. Despite massive effects on the labor market and the creative industries, it turned out to be a fart in the wind, because Skynet happened 100 years later. I'm guessing 100, so it's probably much later.
