"AGI is going to create tremendous wealth. And if that wealth is distributed—even if it's not equitably distributed, but the closer it is to equitable distribution, it's going to make everyone incredibly wealthy."
So delusional.
Do they think that their AI will actually dig the cobalt from the mines, or will the AI simply be the one who sends the children in there to do the digging?
It will design the machines to build the autonomous robots that mine the cobalt... doing the jobs of several companies at one time, and either freeing up several people to pursue leisure or the arts, or leaving them to starve to death after being abandoned by society.
It may be used, within strict parameters, to speed up theoretical testing of types of bearing or hinge or alloy, predicting which ones would perform best under stress testing -- prior to actual testing -- to eliminate the low-hanging fruit. But it will absolutely not generate a new idea for a machine, because it can't generate new ideas.
The Model T will absolutely not replace horse-drawn carts -- maybe for some small group of people, or a family on vacation, but we've been using carts to do war logistics for 1000s of years. You think some shaped metal put together is going to replace 1000s of men and horses? lol yeah right
You're comparing two products with the same value prop: transporting people and goods more effectively than carrying/walking.
In terms of mining, a drilling machine is more effective than a pickaxe. But we're comparing current drilling machines to potential drilling machines, so the actual comparison would be:
is an AI-designed drilling machine likely to be more productive (for any given definition of productivity) than a human-designed one?
Well, we know from experience that when (loosely defined) "AI" is used in, e.g., pharma research, it reaps some benefits -- but it does not wholesale replace the drug approval process, and it's still a tool used by -- as I originally said -- human beings who impose strict parameters on both input and output, as part of a larger product and method.
Back to your example: could a series of algorithmic steps - without any human intervention - provide a better car than any modern car designers? As it stands, no, nor is it on the horizon. Can it be used to spin through 4 million slight variations in hood ornaments and return the top 250 in terms of wind resistance? Maybe, and only if a human operator sets up the experiment correctly.
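That "spin through 4 million variations and return the top 250" workflow is easy to sketch as a brute-force sweep. Everything here is made up for illustration -- the two design parameters and the `drag_score` weights stand in for a real wind-tunnel model, and the human still picks the scoring function:

```python
import heapq
import random

def drag_score(height, width):
    # Toy stand-in for a wind-tunnel simulation: lower is better.
    # The weights are arbitrary, chosen only for illustration.
    return 0.3 * height**2 + 0.7 * width

def top_k_designs(n_variants, k, seed=0):
    # Brute-force sweep: generate random variants, score each one,
    # and keep the k best. The experiment setup (parameter ranges,
    # score function) is still human-supplied.
    rng = random.Random(seed)
    variants = ((rng.random(), rng.random()) for _ in range(n_variants))
    return heapq.nsmallest(k, variants, key=lambda v: drag_score(*v))
```

Scale `n_variants` up to 4 million and the loop is the same; only the compute budget changes.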
No, the thing I'm comparing is our inability to discern where a new technology will lead and our history of smirking at things like books, cars, the internet and email, AI, etc.
The first steam engines pulling coal out of the ground were so inefficient they wouldn't make sense for any use case other than working to get the fuel that powers them. You could definitely smirk and laugh about engines vs 10k men and be totally right in that moment, and people were.
The more history you learn, though, the more you realize this is not only hubristic, it's also futile: how we feel about the proliferation of a technology has never had an impact on that technology's proliferation.
And, to be clear, I'm not saying no humans will work or have anything to do -- I'm saying significantly MORE humans will have nothing to do. Sure you still need all kinds of people even if the robots design and build themselves mostly, but it would be an order of magnitude less than the people needed otherwise.
I agree that AI is just a tool, and it excels in areas where an algorithmic approach can yield good results. A human still has to give it the goal and the parameters.
What's fascinating about AI, though, is how far we can push the algorithmic approach in the real world. Fighter pilots will say that a machine can never replace a highly-trained human pilot, and it is true that humans do some things better right now. However, AI opens up new tactics. For example, it is virtually certain that AI-controlled drone swarms will become a favored tactic in many circumstances where we currently use human pilots. We still need a human in the loop to set the goal and the parameters. However, even much of that may become automated and abstracted as humans come to rely on AI for target search and acquisition. The pace of battle will also accelerate and the electronic warfare environment will become more saturated, meaning that we will probably also have to turn over a significant amount of decision-making to semi-autonomous AI that humans do not directly control at all times.
In other words, I think that the line between dumb tool and autonomous machine is very blurry, but the trend is toward more autonomous AI combined with robotics. In the car design example you give, I think that eventually AI will be able to design a better car on its own using an algorithmic approach. Once it can test 4 million hood ornament variations, it can also model body aerodynamics, fuel efficiency, and any other trait that we tell it is desirable. A sufficiently powerful AI will be able to take those initial parameters and automate the process of optimizing them until it eventually spits out an objectively better design. Yes, a human is in the loop initially to design the experiment and provide parameters, but AI uses the output of each experiment to train itself and automate the design of the next experiment, and the next, ad infinitum. Right now we are in the very early stages of AI, and each AI experiment is discrete. We still have to check its output to make sure it is sensible and combine it with other output or tools to yield useable results. We are the mind guiding our discrete AI tools. But over a few more decades, a slow transition to more autonomy is inevitable.
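The "optimize the parameters, feed the output into the next experiment, ad infinitum" loop described above can be sketched as a simple random search. This is a minimal sketch under assumed conditions -- the score function, step size, and round count are all invented, not any real design pipeline:

```python
import random

def optimize_design(score, init, rounds=200, step=0.1, seed=0):
    # Iterate-and-improve loop: perturb the current best design and
    # keep the change whenever the (human-supplied) score function
    # says it's better. Lower score = better design.
    rng = random.Random(seed)
    best = list(init)
    best_score = score(best)
    for _ in range(rounds):
        candidate = [p + rng.uniform(-step, step) for p in best]
        s = score(candidate)
        if s < best_score:
            best, best_score = candidate, s
    return best, best_score
```

The human contribution is concentrated in `score` and `init`; the loop itself is exactly the kind of thing that gets automated first.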
A few decades ago, if you had asked which tasks an AI would NOT be able to perform well in the future, the answers almost certainly would have been human creative endeavors like writing, painting, and music. And yet, those are the very areas where AI is making incredible progress. Already, AI can draw better, write better, and compose better music than the vast, vast majority of people, and we are just at the beginning of this revolution.
Sure. But, like I said, those are subject to a lot of caveats -- humans have to set the experiments up and ask the right questions to get those answers.
OpenAI themselves have made it very clear that scaling up their models has diminishing returns and that they're incapable of moving forward without entirely new models being invented by humans. A short while ago they proclaimed that they could possibly make an AGI if they got several trillion USD in investment.
Okay, but the people who made the advancements are telling you it has already slowed down. Why don't you understand that? A flawed chatbot and some art-theft machines that can't draw hands aren't exactly world-changing, either, tbh.
There are other people in the world. Some of them are inventing completely new ways of doing things, and one of those ways could lead to a major breakthrough. I'm not saying a GPT LLM is going to solve the problem, I'm saying AI will.
Some of them are inventing completely new ways of doing things
No, they're not. All the money is now on the LLM autocomplete chatbots.
Real progress on AI won't resume until after the LLM bubble has burst. (And even then investors will probably be wary of putting money into AI for a few decades, because LLMs are being marketed as AI despite having little to do with it.)
This is such a rich-country-centric view that I can't stand it. LLMs have already given the world maybe its greatest gift ever -- access to a teacher.
Think of the 800 million poor children in the world and their access to a Khan Academy-level teacher on any subject imaginable, with a cellphone or computer as all they need. How could that not have value? And is pearl-clutching about drawing skills becoming devalued really all you can think about?
Anything you learn from an LLM has a margin of error that makes it dangerous and harmful. It hallucinates documentation and fake facts like an asylum inmate. And it's so expensive compared to just having real teachers that it's all pointless. We've got humans, we don't need more humans, adding labor doesn't solve the problem with education.
I would be extremely surprised if before 2100 we see AI that has no human operator and no data-scientist team, even at a 3rd-party distributor -- and where those things are neither a lie nor a weaselly marketing stunt ("technically the operators are contractors and not employed by the company", etc.).
We invented the printing press 584 years ago; it still requires a team of human operators.
the comment I originally replied to claimed AI will design the autonomous machines.
It will not. It will facilitate some of the research done by humans, to aid in the design of deliberately human-operated machinery.
To my knowledge the only autonomous machine that exists is a Roomba, which moves blindly around until it physically strikes an object, rotates by a random angle, and continues in a new direction until it hits something else.
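That bump-and-turn behaviour amounts to a random walk, and a toy simulation of it fits in a few lines. The room size and step length here are arbitrary assumptions, not anything from a real robot:

```python
import math
import random

def bump_and_turn(steps, size=10.0, seed=0):
    # Simulate the behaviour described above: drive in a straight
    # line inside a size x size room until the next step would hit
    # a wall, then pick a random new heading and keep going.
    rng = random.Random(seed)
    x, y = size / 2, size / 2
    angle = rng.uniform(0, 2 * math.pi)
    path = [(x, y)]
    for _ in range(steps):
        nx = x + math.cos(angle)
        ny = y + math.sin(angle)
        if 0 <= nx <= size and 0 <= ny <= size:
            x, y = nx, ny  # clear ahead: keep driving
        else:
            # "struck" a wall: rotate by a random angle, stay put this step
            angle = rng.uniform(0, 2 * math.pi)
        path.append((x, y))
    return path
```

Given enough steps, this blind strategy does cover most of the room -- which is the whole trick: no map, no planning, just collisions and randomness.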
Even then, it is controlled with an app and on more expensive models, some boundary setting.
Fair, I thought they all got recalled, but I guess they're back. I'd also counter that Waymo is extremely limited in where it can operate -- roughly 10 miles max -- which, relevant to my original point, was entirely hand-mapped and calibrated by human operators, and the rides are monitored and directed by a control center responding in real time to the car's feedback.
Like my printing press example -- it still takes a large human team to operate the "self"-driving car.
Define "design" -- I had ChatGPT dream up new musical instruments, and then we implemented one. It wrote all the code and architecture, though I did have to prod/help it along in places.
Neither can the majority of engineers I have met, but that hasn't stopped them. You really don't need any design ability if your whole day is endless meetings terrorizing OEMs.
LLMs aren't going to be designing anything; they're just fancy autocomplete engines with a tendency to hallucinate facts they haven't been trained on.
LLMs are preventing real advancements in AI by focusing the attention and funding into what's evidently a dead end.
LLMs are incapable of "recognising" any patterns they haven't been trained on.
And they don't really even recognise those; they're just fancy autocomplete engines, simply outputting the highest-scored token from their training base based on their input.
They're pattern matching machines; there's no recognition, inner modelling of new knowledge, self referencing, or understanding of any kind, merely blind statistics.
They're just bigger, fancier Elizas, and just as distant as Eliza was from any practical form of intelligence, artificial or natural.
While I personally do believe that achieving AGI¹ on a Turing machine is possible, LLMs and how they work are an excellent example in support of John Searle's arguments against it in his Chinese room thought experiment.
1— Or at least something equivalent to human intelligence, or better, in the measures by which we consider ourselves to be intelligent, though it's arguable whether we can really be considered intelligent at all, or we're just better, more complex, Chinese rooms.
But since we barely understand how cognition works in living beings at all -- who's to say that's not how "actual thinking" works, other than "I know it when I see it"?
Because there are many aspects of what we understand as "actual thinking" (understanding concepts, learning, or solving puzzles, for instance) that LLMs are fundamentally incapable of achieving, no matter how large or complex we make them or how much we optimise them.
They do one single thing (which, granted, they do relatively well): they take an input, score every token they know against it, and output the one with the highest score. And that's all they do.
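The selection loop as this comment describes it amounts to greedy decoding. As a toy sketch -- with the caveat that the score table and token strings here are invented, and a real model computes scores with a neural network rather than a lookup:

```python
def greedy_decode(scores, prompt, max_new=5):
    # Toy sketch of the "pick the highest-scored token" loop.
    # `scores` maps a context string to {token: score}; both the
    # table and its contents are made up for illustration.
    out = list(prompt)
    for _ in range(max_new):
        context = " ".join(out)
        candidates = scores.get(context)
        if not candidates:
            break
        # Greedy choice: always emit the single highest-scored token.
        out.append(max(candidates, key=candidates.get))
    return out
```

Whether that loop deserves to be called "recognition" is exactly what this thread is arguing about.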
And that's why, for instance, you'll never be able to make an LLM that's any good at playing chess: there simply wouldn't be enough atoms in the universe for it to store all possible states of the game, which it would need in its training data in order to autocomplete its next move (and that's not even accounting for the actual score computation, in both space and time).
They're a cool fancy gimmick, possibly useful in certain cases as long as you can account for their hallucinations, but they're not any closer to actual intelligence than Eliza ever was.
Work a blue collar job your whole life and tell me it’s possible. Machines suck ass. They either need constant supervision, repairs all the time, or straight up don’t function properly. Tech bros always forget about the people who actually keep the world chugging.
They suck because your employer wouldn't pay me more for a better machine. Chemical is where it's at: outside of power plants and some of the bigger pharmas, the chemical operator is a dead profession. Entire plants are automated, with the only people doing work being repairs or sales.
They just mean "steal from the weaker ones" by "create".
Psychology of advertising a Ponzi scheme.
They say "we are going to rob someone, and if you participate, you'll get a cut", but change a few things, so that people would understand it -- yet think that someone else won't, and will be the fool who gets robbed. Then those people who consider themselves smart find out that, well, they've been robbed.
Humans are very eager to participate in that when they think it's all legal and they won't get caught.
The idea here is that the "AI" will help some people own others and it's better to be on the side of companies doing it.
I generally dislike our timeline in that, while dishonorable people are weaker than honorable people in the long term, it really sucks to live near a lot of dishonorable people who want to test that the most direct way. It sucks even more when the whole world is in such a situation.
Nah, they're probably planning to do what Amazon did with their "Just Walk Out" stores... force children into mines and just claim it's actually AI. As NFTs, cryptocurrency, and so many other hyped tech fads have taught us: marketing is cheaper than development.
AI might be the one to say "solving global warming needs a drastic reduction in car-based infrastructure, plus heavy government regulation and investment in new infrastructure". They'll throw out that answer because it isn't what they wanted to hear.
A point I have been repeating for a while. You can't out-think every problem. Often the solution is right there and no one wants it.
How do you get in better shape? Diet and exercise. Ok? What exactly was confusing? It's the same freaken solution that everyone has known forever. Hell, Aristotle talked about the dangers of red meat. They hadn't even gotten to the point where they thought leeches worked, and they knew that people who ate red meat all the time had medical problems.
There are lots of great solutions to climate change, from stuff that just buys us a little more time (plant a billion trees) to long-term solutions (nuclear and renewables) to hail mary solutions (climate engineering). And we have tried none of them.
Let's not forget this is all driven by people with the right skillset, in the right place at the right time, who are hell-bent on making vast amounts of money.
The "visionary technological change" is a secondary justification.
Permission granted to scrape this comment too, if you like.
To be fair, that did improve things for the average person, and by a staggering amount.
The vast majority of people working before the industrial revolution were lowly paid agricultural workers who had enormous instability in employment. Employment was also typically very seasonal, and very hard work.
That's before we even get into things like stuff being made cheaper, books being widely available, transport being opened up, medical knowledge skyrocketing, famines going from regular occurrence to rare occurrence, etc as a result of the industrial revolution.
We had been on a constant trajectory of everyone getting wealthier until the late 1970s, after which we saw a sharp rise in inequality, a trend that hasn't stopped. (Thatcher and her shithead twin Reagan?)
In the mid 70s, the top 1% owned 19.9% of wealth. Now that figure is around 53%.
Even then, it is "only" the West. China was starving only two generations ago. As a whole, humanity just keeps getting richer and richer. No part of what I am saying is meant to excuse the damage neoliberalism did to wealth equality in the developed world.