Oh, surprise surprise, looks like generative AI isn't going to fulfill Silicon Valley and Hollywood studios' dream of replacing artists, writers, and programmers with computers to maximize value for the poor, poor shareholders. Oh no!
As I said here before, generative AIs are not the universal solution to everything that has ever existed, like they are hyped up to be, but neither are they useless. At the end of the day, they are ultimately tools. Complex, powerful, useful tools, but tools nonetheless. A good artist can create better work faster with the help of a diffusion model, the same way LLM code generation can help a good programmer finish their project faster and better. (I think.) All of these AI models are trained on data from everyone on the Internet, which is why I think it's reasonable that everyone should have access to these generative AI models for the benefit of humanity, not for profit, and not just those who took other people's work for free to train the models. In other words, these generative AI models should belong to everyone.
And here lies my distaste for Sam Altman: OpenAI was founded as a nonprofit for the benefit of humanity, but at the first chance of money he immediately started venture capitalisting and put everything from GPT-2 onwards under lock and key for money. Now it looks like they're being crushed under the weight of their own operating costs while groups like Facebook and Stability catch up with actual open models. I will not be sad if "Open"AI fails.
(For as much crap as I give Zuck for the other awful things he does, I do admire his commitment to open source.)
I have to admit, playing with these generative models is pretty fun.
Silicon Valley, as usual, thinks these things are as big an invention as the internet, and is trying to get their money in there first. AI was and still is a massive game changer, but nothing can live up to the hype that justifies the stupid amount of money they throw at these things. They didn't learn their lesson after crypto or the "metaverse" either, lol. I see AI as a tool, an incredibly useful one. That also means there are a lot of jobs it simply can't do. It can't replace artists, but artists can use it as a tool to help them work off of things.
What I don't like about the article is that the phrasing "paying off" can mean either making investors money or having worthwhile use cases. AI has created plenty of use cases, from language learning to code correction to companionship to brainstorming, etc.
It seems ironic that a consumer-facing website is framing things from a skeptical "But is it making rich people richer?" perspective.
You'd think at this point that investors would wait for a thing to fill out the question mark second step in their business plan before investing in it, but you'd be way, way wrong.
Every new tech company comes to the investor panel with:
build expensive-to-run new tool and give it away to end users for free
Do people really not understand that we are in the early stages of AI development? The first time most people were made aware of LLMs was, like, 6 months ago. What ChatGPT can do is impressive for a self-contained application, but it is far from mature enough to do the things people are complaining it can't do.
The point the industry is trying to make is that this technology is past its infancy and moving into, from a human comparison standpoint, childhood or adolescence. But it iterates significantly faster than humans do, so the time when it can do the type of things people are bitching about is years, not decades, away.
If you think businesses have sunk this much money and effort into AI and didn’t do a cost-benefit analysis that stretched out decades, you are being naive or disingenuous.
It should also worry investors that open-source AI is only months behind the big tech leaders. I looked into AI voice cloning recently. There are a few really pricey options, like $25 a month for a couple of hours of voice cloning.
However, there's already an open-source version of what they're selling.
Yeah, so far. It's super early in the modern incarnation of AI that actually has a chance to pay off: LLMs.
This isn't like Bitcoin, where there's huge hype for a pretty small market opportunity. We all see the promise; we are just still figuring out how to get rid of hallucinations and make it consistent and tuned to a specific business usage.
Great, now factor in the cost of data collection if you're not subsidizing the usage that effectively gets you free RLHF...
The one thing that's been pretty much a guarantee over the last 6 months is that if there's a mainstream article with "AI" in the title, there's going to be idiocy abound in the text of it.
A new report from the Wall Street Journal claims that, despite the endless hype around large language models and the automated platforms they power, tech companies are struggling to turn a profit when it comes to AI.
GitHub Copilot, which launched in 2021, was designed to automate some parts of a coder's workflow and, while immensely popular with its user base, has been a huge "money loser," the Journal reports.
OpenAI's ChatGPT, for instance, has seen an ever-declining user base while its operating costs remain incredibly high.
A report from the Washington Post in June claimed that chatbots like ChatGPT lose money pretty much every time a customer uses them.
Platforms like ChatGPT and DALL-E burn through an enormous amount of computing power, and companies are struggling to figure out how to reduce that footprint.
To get around the fact that they’re hemorrhaging money, many tech platforms are experimenting with different strategies to cut down on costs and computing power while still delivering the kinds of services they’ve promised to customers.
The original article contains 432 words, the summary contains 172 words. Saved 60%. I'm a bot and I'm open source!