Sam Altman’s ChatGPT promises to transform the global economy. But it also poses an enormous threat. Here, a scientist who appeared with Altman before the US Senate on AI safety flags up the danger in AI – and in Altman himself
The political right actually has one good point that we on the left don't always appreciate: taxes on middle class people should be lower.
Specifically, very liberal tax exemptions on things like 401Ks, including the ability to transfer wealth across generations.
Combine that with higher taxes on the wealthy, and it will be possible to shift power to the middle class.
Consider the total market cap of the S&P 500: rounded up, it's about $50 trillion. Divide that among 130 million households and each household would own about $400K in stock on average.
Full equality is neither achievable nor desired by most people, so a good scheme would be to let every household hold up to $1M in wealth, tax exempt.
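A quick sanity check on the arithmetic above (both inputs are the rounded figures from the comment, not exact data):

```python
# Back-of-the-envelope check of the per-household figure above.
# Both numbers are the rounded values from the comment, not exact data.
market_cap = 50e12   # S&P 500 total market cap, ~$50 trillion
households = 130e6   # ~130 million US households

per_household = market_cap / households
print(f"${per_household:,.0f} per household")  # → $384,615 per household
```

Which is in the ballpark of the "about $400K" cited above.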
The problem is, people usually overestimate their position in society by a wide margin.
If you say "tax the rich" a whole lot of people feel like it's about them even though they barely count as middle class.
Here in Germany I've had countless debates about inheritance tax. If your parents die, you only pay taxes (10%) on anything over 400k, and that's per child. That means most people will never pay a cent of inheritance tax, yet they're horrified by the idea of it, because they firmly believe their parents' shitty house in a village somewhere will bankrupt them and their two siblings.
People fundamentally don't understand their own wealth, or how tiny it is compared to the billionaire class.
I'm going to attract downvotes, but this article doesn't convince me that he's becoming powerful and that we should be very afraid. He's a grifter, he's sleazy, and he's making a shit ton of money.
Anyone who has used these tools knows they are useful, but they aren't the great investment the investors claim they are.
Being able to fool a lot of people into believing it's intelligent doesn't make it good. When it can fool experts in a field, actively learn, or solve problems without having been trained on the issue, that will be impressive.
Generative AI is just a new method of signal processing. The input signal, the text prompt, is passed through a function (the model) to produce another signal (the response). The model is produced by a lot of input text, which can largely be noise.
To get AGI it needs to be able to process a lot of noise, and many different signals. "Reading text" can be one "signal" on a "communication" channel - you can have vision and sound on it too - body language, speech. But a neural network with human-level ability would require all five senses, and reflexes to them - fear, guilt, trust, comfort, etc. We are nowhere near that.
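To illustrate the "model as a signal-processing function" framing in the comment above - training text produces a function that maps an input signal (the prompt) to an output signal (a continuation). A deliberately toy sketch, nothing like how LLMs actually work internally; a bigram counter is about the simplest possible version:

```python
from collections import defaultdict, Counter

# Toy "signal through a function" sketch: the training text builds the
# function (bigram counts); the prompt is the input signal; the
# continuation is the output signal.

def train(text):
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, prompt, n=5):
    out = prompt.split()
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:        # no bigram seen from this word: stop
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran"
m = train(corpus)
print(generate(m, "the cat", n=3))  # → the cat sat on the
```

The model here is literally just statistics over the training signal, which is the commenter's point: scale that up by many orders of magnitude and you get coherent sentences, not senses or reflexes.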
Strong agree here. You hit on a lot of the core issues with LLMs, so I'll share my opinions on the economic aspects.
It's been more than a year since ChatGPT's release set off this plague of "slap AI on the product and consumers will put their children down as collateral to buy it!" - which, imo, we haven't seen happen whatsoever. Investors still have a hard-on for the term AI that goes into the stratosphere, but even that is starting to change a little.
Consumers' level of distrust in AI has risen considerably, and they have seen past the hype. Wrapping this back around to the CEO's level of power, I just don't think LLMs have enough marketability with general consumers to turn these companies into juggernaut corpos.
LLMs absolutely have use cases but they don't fit into most consumer products. No one wants AI washers or rice cookers or friggin AI spoons and shoehorning them in decreases interest in the product.
That's also how I feel about "smart" devices in general. I don't want a smart refrigerator, I just want it to work. The same goes for other appliances, like my laundry machine, dishwasher, and rice cooker. The one area I kind of want it, TVs, has been ruined by stupid tracking and ads.
What's going to kill AI isn't AI itself, it's AI being forced into products where it doesn't make sense, and then ads being thrown in on top to try to make some sort of profit from it.
The article seems to be based on a number of flawed premises.
Firstly, that ChatGPT is the only LLM. It's not, and better, stronger, cheaper alternatives are likely to emerge.
Secondly, that LLMs are a step on the way to AGI - like any minute now they're going to evolve. They're not; they're a one-trick pony, and the trick is making coherent sentences. That's it.
The same AI imaging systems that we used to get a 'picture' of a black hole a few years ago can be trained on the EM signatures of HDMI cables, meaning they now have a TEMPEST-like system that can reliably read and decode any monitor within a thousand feet.
That same system can be trained on social media temperature and be used to identify all sorts of metrics such as degree of depression, gender, career, degree of sociopathy, and a ton of other things like whether a person is pregnant even before they know it.
LLMs are a toy, crosslinking AIs are a menace.
But hardly anyone is looking at the serious problem.
For now it's all butthurt artists angry that people are making porn with their publicly available art without paying them.
The real issue is our right to privacy and how we will be targeted once that is irrelevant.
Imagine if Drumpf gets into office again and one of his junior suck-ups says, "Hey, we can identify every social media account that spoke badly about you, and have a good chance of connecting them with a real-world address and identity." With SCOTUS having granted presidential immunity, what do you think he will do with that knowledge?
Exactly. And that's why we're in a bubble. Once the execs are finally convinced by their tech people that LLMs aren't some kind of magic bullet, we'll see a pretty big correction. As an investor, I'm not exactly looking forward to that, but as someone who works in tech, I'm honestly not worried about my job.
The one comment I have here is that you may be overlooking the impact LLMs will have on the tech sector.
Basically Homeless just created a wasp-shooting, real-world first-person-shooter machine (high-speed, high-accuracy, high-strength motors, controllers, etc., controlled via Python) using Claude, with little knowledge of how to do the hardware or software.
The productivity gains, especially among those who go through the education system from this day forward, will change things forever. There are already plenty of developers who wouldn't give up what they now have access to. Despite being the black hole of money it is now, power and wealth will come over time.
Homeless just created a wasp-shooting real-world first-person shooter machine with high speed, accuracy, and strength motors, controllers, etc, controlled via Python, using Claude with little knowledge of how to do the hardware or software.
... Is homeless a company? Are we talking about a video game.. a robot... White Anglo Saxon Protestants... What?
I agree overall, but fooling experts isn’t what would make AI valuable. Being able to do valuable tasks would make it valuable. And it’s just not good enough at valuable tasks to be valuable.
And silicon's nowhere near as energy-efficient as biological neurons. There would need to be a massive energy breakthrough, like fusion or actual biological processors becoming a thing, to see any significant improvements.
The regime needs these young-rich-buck stories to keep the normies in line. There is always some young bro or chick "breaking" something while a generation of slaves produces the actual value.
The dude has been paying for these articles for a while now. These "tech entrepreneurs" are getting weirder and weirder. Musk, Zuck, and now this guy - it does make you wonder where the fuck we're going here.
Wrt things like this - and climate change, the rise of fascism globally, the outsourcing of jobs, etc. - people should be much more afraid than they currently are.
I think people used to be that afraid during the Cold War era, yet WWIII never materialized, and people are just burnt out from being told they need to be afraid all the time - e.g. those school drills where kids knelt down and covered the backs of their necks with their hands, like that would somehow stop an atomic bomb?
Now, "nothing bad can ever happen", even as we see heat spikes of 100 degrees in the Arctic and the 4 globally hottest days ever recorded were all last week, on top of 14 straight months of record-setting temps too. The octogenarian leaders whose fingers can't even type on a mobile phone somehow "lead" our nation even in areas like technology. Note: I am not getting on their case for being chronologically old or even not knowing things (ignorance is easily cured), I am condemning them for choosing to remain in their ignorance (obstinacy), even while retaining their positions of power & authority (corruption) rather than cede to those who actually know stuff.
I have no problems with someone choosing not to learn about difficult matters - e.g. vaccinations - but in that case, don't vote. Your choices to be lazy & willfully uninformed should not dictate mine to remain alive.
Yeah, I went off on a tangent here, b/c Altman is simply one more example of all that has come before - Huffman and Musk did it before him, and Bezos before them, and so on, and it will fucking never end. Protect yourself as best you can... somehow. E.g. coming to the Fediverse (which soon, with Sublinks and Piefed, will offer alternatives beyond just Lemmy) seems a great first step to me :-).