It also reminds me of crypto. Lots of people made money from it, but the reason why the technology persists has more to do with the perceived potential of it rather than its actual usefulness today.
There are a lot of challenges with AI (or, more accurately, LLMs) that may or may not be inherent to the technology. And if issues cannot be solved, we may end up with a flawed technology that, we are told, is just about to finally mature enough for mainstream use. Just like crypto.
To be fair, though, AI already has some very clear use cases, while crypto is still mostly looking for a problem to fix.
No, this isn't crypto. Crypto and NFTs offered worse solutions to problems that already had solutions, and hidden in the messaging was that rich people wanted poor people to freely gamble away their money in an unregulated market.
AI has real, tangible benefits that are already being realized by people who aren't part of the emotion-driven ragebait engine. Stock images are going to become extinct within several years. People can make at least a baseline image of what they want, regardless of artistic ability. Musicians are starting to use AI tools. ChatGPT makes it easy to generate low-effort but time-consuming text like item descriptions, HR responses, and other common drafts. Code AI engines let programmers produce reviewable solutions in real time, or at least something to generate and tweak. None of this is perfect, but it's good enough for 80% of the work, which can then be refined after the initial pass.
Things like chess AI have existed for decades, and LLMs are just extensions of existing generative AI technology. I dare you to tell Chess.com that "AI is a money pit that isn't paying off"; they would laugh their fucking asses off, as they are actively pouring even more money and resources into Torch.
The author here is a fucking idiot. And he didn't even bother to change the HTML title ("Microsoft's Github Copilot is Losing Huge Amounts of Money") from its original focus of just Github Copilot. Clickbait bullshit.
I totally agree. However, I do feel like the market around AI is inflated like NFTs and Crypto. AI isn't a bust, there will be steady progress at universities, research labs, and companies. There is too much hype right now, slapping AI on random products and over promising the current state of technology.
I love how suddenly companies started advertising things as AI that would have been called a chatbot a year ago. I saw a news article headline the other day saying that judges were going to significantly improve the time they take to render judgments by using AI.
Reading the content of the article, they went on to explain that they would use it to draft the documents. It's like they've never heard of templates.
I'm still trying to transfer $100 from Kazakhstan to me here. By far the lowest fee option is actually crypto since the biggest difference is the currency conversion. If you have to convert anyway, might as well only pay 0.30% on both ends
Look into DJED on Cardano. It’s WAY cheaper than ETH (but perhaps not cheaper than some others). A friend of mine sent $10,000 to Thailand for less than a dollar in transaction fees. To 1bluepixel: Sounds like a use-case to me!
You still have to deal with ETH fees just to get the funds into the rollup. I admit that ETH was revolutionary when it was invented, but the insane fee market makes it a non-starter, and the accounts model is a preposterously bad (and, I'd argue, irreparably broken) design decision for a decentralized network: it makes Ethereum nearly impossible to parallelize, since the main chain is required for state and the contracts that run on it are non-deterministic.
I think that because it's true. Smart contracts on Ethereum can fail and still charge the wallet. Because of the open-ended nature of Ethereum's design, a wallet can be empty by the time the contract finally executes, causing a failure. This doesn't happen in Bitcoin and other UTXO chains like Ergo and Cardano, where all transactions must have both inputs and outputs FULLY accounted for in order to execute. UTXO boasts determinism, while the accounts model can fail due to an empty wallet. Determinism makes concurrency harder, for sure… but at least your entire chain isn't one gigantic unsafe state machine. Ethereum is, quite literally by design, non-deterministic.
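The contrast described above can be sketched in a few lines. This is a toy model, not any real chain's API: `validate_utxo_tx` and `execute_account_tx` are hypothetical names, and real validation involves signatures, scripts, and gas accounting omitted here.

```python
def validate_utxo_tx(inputs, outputs):
    """UTXO style: a transaction names its exact inputs and outputs up front,
    so validity is decided before execution (deterministic)."""
    return sum(inputs) >= sum(outputs)  # inputs must fully cover outputs

def execute_account_tx(balances, sender, amount, fee):
    """Account style: the balance is read at execution time. The fee is
    charged even if the transfer then fails, which is the 'fail and still
    charge the wallet' behavior described above."""
    balances[sender] -= fee               # fee charged regardless of outcome
    if balances[sender] >= amount:
        balances[sender] -= amount
        return True
    return False                          # transfer failed, fee still gone

# UTXO: outcome known from the transaction alone
assert validate_utxo_tx(inputs=[50, 60], outputs=[100, 9]) is True

# Accounts: wallet too empty by execution time -> fee charged, transfer fails
balances = {"alice": 5}
ok = execute_account_tx(balances, "alice", amount=100, fee=2)
assert ok is False and balances["alice"] == 3
```

The point of the sketch: the UTXO check depends only on the transaction's own contents, while the account-model outcome depends on mutable global state at execution time.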
Crypto found a problem to fix. The reason the problem remains: everything is run by that very problem, so crypto was astroturfed to death by the parties that run the current financial system and by the enemy of their enemy (who's a friend): opportunistic scammers like SBF and Do Kwon.
Most social media uses it. Video and music streaming services. SatNav. Speech recognition. OCR. Grammar checks. Translations. Banks. Hospitals. Large chunks of internet infrastructure.
Automated mail sorting has been using AI to read post codes from envelopes for decades; only back then, pre-hype, it was just called Neural Networks.
At the time I learned this at Uni (back in the early 90s) it was already NNs, not algorithms.
(This was maybe a decade before OCR became widespread)
In fact, a coursework project I did there was recognition of handwritten numbers with a neural network. The thing was amazingly good: our implementation actually had a bug, and it still managed to be almost 90% correct on a test data set, so it somehow mostly worked its way around the bug. And it was a small NN with no need for massive training sets (which is the main difference between Large Language Models and more run-of-the-mill neural networks), at a time when algorithmic number and character recognition were considered a very difficult problem.
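A toy version of that kind of project fits in a screenful of code. The sketch below is illustrative only: the 3x3 "digit" patterns, network size, learning rate, and epoch count are all made up, and it distinguishes just two crude glyphs rather than real handwriting.

```python
# Tiny one-hidden-layer network trained by plain backprop to tell apart
# two hand-drawn 3x3 "digits". Pure stdlib, deterministic seed.
import math
import random

random.seed(0)

# 3x3 pixel patterns: a crude "0" (ring) and a crude "1" (center column)
ZERO = [1, 1, 1, 1, 0, 1, 1, 1, 1]
ONE = [0, 1, 0, 0, 1, 0, 0, 1, 0]
data = [(ZERO, 0.0), (ONE, 1.0)]

N_IN, N_HID = 9, 4
w1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w2 = [random.uniform(-1, 1) for _ in range(N_HID)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)))
    return h, y

def train(epochs=1000, lr=1.0):
    for _ in range(epochs):
        for x, target in data:
            h, y = forward(x)
            # output-layer error term (squared error, sigmoid derivative)
            dy = (y - target) * y * (1 - y)
            for j in range(N_HID):
                dh = dy * w2[j] * h[j] * (1 - h[j])  # backprop to hidden unit j
                w2[j] -= lr * dy * h[j]
                for i in range(N_IN):
                    w1[j][i] -= lr * dh * x[i]

train()
_, y_zero = forward(ZERO)
_, y_one = forward(ONE)
assert y_zero < 0.5 < y_one  # the net separates the two patterns
```

Even a network this small learns the separation reliably, which is roughly the point of the anecdote: classic NN tasks did not need massive training sets.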
Back then Neural Networks (and other stuff like Genetic Algorithms) were all pretty new and using it in automated mail sorting was recent and not yet widespread.
Nowadays you have it doing stuff like face recognition, built-in on phones for phone unlocking...
The key fact here is that it's not "AI" as conventionally thought of in all the scifi media we've consumed over our lifetimes, but AI in the form of a product that tech companies of the day are marketing. It's really just a complicated algorithm built on an expansive dataset, rather than something that "thinks". It can't come up with new solutions, only reuse previous ones; it wouldn't be able to take a solution for one thing and apply it to a different problem. It still needs people to steer it in the right direction and to verify that its results are even accurate. However, AI is now probably better than people at identifying previously seen problems and remembering the solution.
So, while you could say that lots of things are "powered by AI", you can just as easily say that we don't have any real form of AI just yet.
Perhaps, but at best it's still a very basic form of AI, and maybe shouldn't even be called AI. Before things like ChatGPT, the term "AI" meant a full-blown intelligence that could pass a Turing test, and a Turing test is meant to prove actual artificial thought akin to human thought: something beyond following mere pre-programmed instructions. Machine learning doesn't really learn anything; it's just an algorithm that repeatedly measures and then iterates to reach an ideal set of values for the desired variables. It's very clever, but it doesn't really think.
I have to disagree with you on the machine learning definition. Sure, the machine doesn't think in those circumstances, but it's definitely learning, if we go by your own description of what it does.
Learning is a broad concept, sure. But say a kid is learning to draw apples, and later succeeds in drawing apples without help; we could say that the kid achieved "that ideal set of values."
Machine learning is a simpler type of AI than an LLM like ChatGPT or an AI image generator. LLMs incorporate machine learning.
In terms of learning to draw something: after a child learns to draw an apple, they will reliably draw an apple every time. If AI "learns" to draw an apple, it tends to come up with something subtly unrealistic, e.g. the apple might have multiple stalks. It fits the parameters it has learned about apples, parameters prescribed by its programming, but it hasn't truly understood what an apple is. Furthermore, if you applied the parameters it learned about apples to something else, it might fail to understand it altogether.
A human being can think and interconnect their thoughts much more intricately; we go beyond our basic programming and often apply knowledge learned in one area to something completely different. Our understanding of things is much more expansive than AI's. AI currently has the basic building blocks of understanding, in that it can record and recall knowledge, but it lacks the full web of interconnections between different pieces and types of knowledge that human beings develop.
Thanks. I understood all that. But my point is that machine learning is still learning, just like machine walking is still walking. Can a human being be much better at walking than a machine? Sure. But that doesn't mean that the machine isn't walking.
Regardless, I appreciate your comment. Interesting discussion.