It's crazy how much FUD is flying around, and it legitimately buries good open research. It's also crazy what these giant corporations are explicitly saying they're going to do, and that anyone buys it. TSMC allegedly calling Sam Altman a 'podcast bro' is spot on, and I'd add "manipulative vampire" to that.
Talk to any long-time resident of localllama and similar "local" AI communities who actually digs into this stuff, and you'll find immense skepticism, not the crypto-like AI bros who blot everything out on LinkedIn, Twitter and the like.
I had a professor in college who said that when an AI problem is solved, it is no longer AI.
Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they're just tools we use without thinking about them.
I'm sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting which words I want to type based on the statistical likelihood of what comes next, chosen from the set of words my gesture could plausibly be. This would have been the realm of AI once, but now it's just the keyboard app on my phone.
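In rough terms, the keyboard is doing something like this. A minimal sketch with made-up candidate words and scores; real keyboards model the swipe path and language context far more carefully:

```python
# Hypothetical example: pick the word whose swipe-shape fit and
# language-context likelihood are jointly highest.

# How well the gesture's path over the keys matches each candidate (made up).
gesture_fit = {"hello": 0.9, "hells": 0.7, "jello": 0.4}

# How likely each word is given the sentence so far (made up;
# real systems use large n-gram or neural language models).
context_likelihood = {"hello": 0.60, "hells": 0.05, "jello": 0.10}

def predict(candidates):
    # The product rewards words that both fit the swipe and fit the context.
    return max(candidates, key=lambda w: gesture_fit[w] * context_likelihood[w])

print(predict(gesture_fit))  # -> "hello"
```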
I don't know why. The people marketing it have absolutely no understanding of what they're selling.
Best part is that I get paid if it works as they expect it to, and I get paid if I have to decommission or replace it. I'm not the one developing the AI that they're wasting money on; they just demanded I use it.
That's true software engineering, folks. Decoupling doesn't just make code easier to write and reuse; it saves your job when you need to retire something later, too.
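The pattern is simple enough to sketch. Here's a minimal Python version with hypothetical names; the point is the narrow interface, not any particular backend:

```python
from typing import Protocol

class Summarizer(Protocol):
    """The only thing callers are allowed to depend on."""
    def summarize(self, text: str) -> str: ...

class VendorAISummarizer:
    """Wraps whatever AI service management demanded we use (stubbed here)."""
    def summarize(self, text: str) -> str:
        raise NotImplementedError("call the vendor's API here")

class FirstSentenceSummarizer:
    """Boring fallback for the day the AI gets decommissioned."""
    def summarize(self, text: str) -> str:
        return text.split(".")[0].strip() + "."

def build_report(doc: str, summarizer: Summarizer) -> str:
    # The caller never knows which backend is plugged in, so swapping
    # or retiring one is a one-line change at the composition root.
    return "Summary: " + summarizer.summarize(doc)

print(build_report("AI is 90% marketing. And 10% reality.", FirstSentenceSummarizer()))
```

Retiring the AI later is just a matter of changing which implementation gets passed in; nothing downstream has to know.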
I make DNNs (deep neural networks), the current trend in artificial intelligence modeling, for a living.
Much of my ancillary work consists of deflating and tempering the C-suite's hype and expectations of what "AI" solutions can solve or completely automate.
DNN algorithms can be powerful tools and muses in scientific endeavors, engineering, creativity and innovation. They aren't full replacements for the power of the human mind.
I can safely say that many, if not most, of my peers in DNN programming and data science are humble in our approach to developing these systems for deployment.
If anything, studying this field has given me an even more profound respect for the billions of years of evolution required to produce the power and subtlety of intelligence as we narrowly understand it through anthropological, neuroscientific, and historical frameworks.
That's about right. I've been using LLMs daily to automate a lot of the cruft work in my dev job; it's like having a knowledgeable intern who sometimes impresses you but needs a lot of guidance.
The only places I've seen AI work well are things like game development, mainly upscaling the textures of older games and interpolating missing frames so they can run at higher frame rates without being choppy. It might even have applications for getting more voice acting done, if SAG and Silicon Valley can find an arrangement that works out well for both parties.
If not for that, I'd say 10% reality is being... incredibly favorable to the tech bros.
Like with any new technology. Remember the blockchain hype a few years back? Give it a few years and we will have a handful of areas where it makes sense and the rest of the hype will die off.
Everyone sane probably realizes this. No one knows for sure exactly where it will succeed, so a lot of money and time is being spent on a 10% chance of a huge payout in case they guessed right.
And then people will complain about that, saying it’s almost all hype and no substance.
Then that one tech bro will keep insisting that lemmy is being unfair to AI and there are so many good use cases.
No one is denying the 10% of use cases; we just don’t think it’s special or needs extra attention, since those use cases already had other possible algorithmic solutions.
Tech bros need to realize that even if there are some use cases for AI, there has been no revolution. Stop trying to make it happen, and enjoy your new, slightly better tool in silence.
Mr. Torvalds is truly a generous man; crediting the current AI market with 10% usefulness is probably a decimal place or two more than will end up panning out once the hype bubble pops.
There was a great article in the Journal of Irreproducible Results years ago about the development of Artificial Stupidity (AS). I always do a mental translation to AS whenever I see AI.
AI is nothing more than a way for big businesses to automate more work and fire more people.
and do that at the expense of 30+ years of power-reduction and efficiency gains, to the point that private companies are literally buying, building, or restarting old power plants just to cover the insane power demand, because operating a power plant themselves is cheaper than paying for the energy.
For the common everyday person it's 3D TV and every other bullshit fad that burned brilliantly for all of 3 seconds before snuffing itself out, leaving people having paid for overpriced garbage that's no longer useful.
I admit I understand nothing about AI and haven't used it in any way, nor do I plan to. It feels wrong to me, and I believe it might fuck us harder than social media ever could.
But the pictures it creates, the stories and conversations, don't seem like hot air. And I guess, compared to the internet, we're at the stage where the modem is still singing the songs of its people. There is more to come.
I heard it can code at a level where entry-level positions might be in danger of being swapped for AI. It detects cancer visually, and in China it recognizes people by the way they walk.
Also, I fear that vulnerable people might fall for those conversation bots in a world where there is less and less personal contact.
Gotta admit I'm a little afraid it will make most of us useless in the future.
In a way he’s right, but it depends! If you take even a common example like ChatGPT or the native object detection used in iPhone cameras, you’d see that there’s a lot of cool stuff already enabled by our current way of building these tools. The limitation right now, I think, is reacting to new information or scenarios a model wasn’t trained on, which is where all the current systems break down. Humans do well in new scenarios thanks to their cognitive flexibility, and I, at least, am unaware of a good framework for instilling cognitive flexibility in machines.
I know tons of full-stack developers who use AI to GREATLY speed up their workflow. I've used AI image generators to get something I wanted to the concept stage before paying an artist to do the real work, with the revisions I wanted that I couldn't get the AI to produce properly.
And first and foremost, they're great at surfacing information that has been discussed and is available, but is buried with no SEO behind it to surface it. They're terrible at deducing things themselves, because they can't 'think', and at coming up with solutions that others haven't already, but as long as people are aware of those limitations, they're a pretty good tool to have.
It's a reactionary opinion when people jump straight to 'but they're stealing art!' Isn't your brain also stealing art when it's inspired by others' art? Artists don't just POOF into existence with the capability to be artists. They learn slowly over time, using others' work as inspiration or as training to improve. That's all stable diffusion models do, just a lot faster.