
Is "AI and robots taking over" an actual possible outcome of the current race to produce "smart" LLMs ?

I can't see it happening tbh, but the US government has discussed putting restrictions on AI development, and I think OpenAI or some other companies actually asked them to do so!? There were also shorts/reels of high-profile developers playing up the fact that "we don't know what we're doing," and one of them quit his job. So why all the hype? Is the "Matrix" route actually a possible future?

26 comments
  • One thing that bothers me about high-level devs leaving because they realized what they created is that their leaving means one more possible roadblock is gone. They will just be replaced with people who are more fresh-faced and on the hype train of going harder and harder. Lots of folks I know who are finishing college are leaning more and more into using all of these AIs to solve problems instead of learning to code (or just write things) themselves. Some are still trying, and I support them in my little ways, but I can see how, much like a drug, things start small and can turn into using it all the time. Comp Sci majors were already getting worse at actually understanding how things work before LLMs (just look at all the software that will never be optimized and just relies on higher-spec PCs).

  • Just as an aside and in addition to the other comments here:

    There is a phenomenon called regulatory capture. It can take many different forms, but the short version is that agencies and policies get perverted to benefit only one group, when the intention should be to benefit society at large.

    There is a pattern where the big players, say OpenAI, call for regulation of their industry, not because they feel it needs regulating but because the regulatory hurdles will keep competitors at bay. Meta pulled a stunt like that with social networks as well. So a big hype-driven company calling for regulation in its own field is a red flag, accompanied by a loud alarm bell.

  • No. AI and robots don't care about anything. They don't care about taking over. Whoever controls them, though, now we're talking. And that's much worse.

  • I recently read a neat little book called "Rethinking Consciousness" by SA Graziano. It has nothing to do with AI, but is an attempt to describe the way our myriad neural systems come together to produce our experience, how that might differ between animals with various types of brains, and how our experience might change if some systems aren't present. It sounds obvious, but the simpler the brain, the simpler the experience. For example, organisms like frogs probably don't experience fear. Both frogs and humans have a set of survival instincts that help us detect movement, classify it as either threat or food or whatever, and immediately respond, but the emotional part of your brain that makes your stomach plummet just doesn't exist in them.

    Humans automatically respond to a perceived threat in the same way a frog does--in fact, according to the book, the structures in our brains that dictate our initial actions in those instinctive moments are remarkably similar. You know how your eyes will automatically shift to follow a movement you see in the corner of your vision? A frog responds in much the same way. It's not something you have to think about--often your eye will have darted over to the point of interest even before you realize you've noticed something. But your experience of that reaction is also much richer than it is possible for a frog's to be, because we have far more layers of systems that all interact to produce what we call consciousness. We have a much deeper level of thought that goes into deciding whether that movement was actually important to us.

    It's possible for us to continue to live even if we lose some parts of the brain--our personalities will change, our memory may get worse, or we may even lose things like our internal monologue, but we still manage to persist as conscious beings until our brains lose a large number of the overlying systems, or some very critical systems. Like the one that regulates breathing--though even that single function is somewhat shared between multiple systems, allowing you to breathe manually (have fun with that).

    All that to say the things we're currently calling AI just don't have that complexity. At best, these generative models could fill out a fraction of the layers that would be useful for a conscious mind. We have developed very powerful language processing systems, at least in terms of averaging out a vast quantity of data. Very powerful image processing. Audio processing. What we don't have--what, near as I can tell, we haven't made any meaningful progress on at all--is a system to coalesce all these processing systems into a whole. These systems always rely on a human to tell them what to process, for how long, and ultimately to check whether the result of a process is reasonable. Being able to process all of those types of input simultaneously, choosing which ones to focus on in the moment, and continuously choosing an appropriate response? Barely even a pipe dream. And even all of that would be distinct from a system to form anything like conscious thought.

    Right now, when marketing departments say "AI," what they're describing is like that automatic response to movement. Movement detected, eye focuses. Input goes in, output comes out. It's one small piece of the whole that's required when science fiction writers say "AI."

    TL;DR no, the current generative model race is just tech stock market hype. The absolute best it can hope for is to reproduce a small piece of the conscious mind. It might be able to approximate the processing we're capable of more quickly, but at a massively inflated energy expenditure, not to mention the research costs. And in the end it still needs a human double checking its work. We will need to develop a vast number of other increasingly complex systems before we even begin to approach a true AI.
