With all the recent hype around AI, I feel a lot of people don't understand how it works or where it's useful. AI is good at solving certain types of problems that are really hard with traditional programming, like finding patterns that aren't obvious to us.
For example, object recognition is about finding patterns in images. Our brains are great at this, but writing a computer program capable of taking pixels and figuring out if the pattern is there is very hard.
Even if AI sometimes misclassifies objects, it can still be useful. For example, in a factory you can use AI to find defects on the production line. Even if it isn't perfect, going from 100 defects per million products to 10 per million is a huge difference and saves the factory a lot of money.
Agreed, but the joke to me is business folks thinking AI is a miracle they can shove everywhere to print money. Whereas we devs know what it can actually do and would like to add it where it makes sense. Business thinks it's ready to replace us.
The key to "AI" is having a human there to take algorithms and apply them to the right problems.
This is what most people don't understand, because many of the demos are quite impressive and narrowly tailored so that this isn't obvious unless you know what you're looking for.
The most useful application so far seems to have been predicting protein folding. I have to read up on that; it should help cure all sorts of bad things.
I don't think that not educating people is an option. Even in the highly unlikely case that every job is hypothetically taken over by "AI": humans like to learn and hone their skills.
LLM costs $20 a month and needed only 60 hours of training, junior dev has been at it for years, costs as much for a half hour, and still needed me to repeatedly explain what a rectangle is
One key point here: while you actually can replace a bunch of junior developers with AI in some places, a replaced junior developer will never become a senior developer who cannot be replaced by the AI, because a senior is basically experience on two legs.
So, corporations, don't complain about the lack of experienced senior personnel when YOU have been the main reason they don't exist.
To all the decaf haters: If you drink decaf, you actually like the taste of coffee without needing the caffeine. That's someone with taste, in my book.
Yeah, well for many of us it's decaf or no coffee due to health issues. You acting like it's a foolish, childish thing is just tribalism/elitism.
And for what it's worth, I'd put my decaf up against your coffee in a heartbeat. A good roaster with quality beans makes great coffee, decaf or not. Just like Hoffman said.
It's saying that instead of spending all the resources needed to gather all of the training data for the LLM, just give a junior dev some coffee as the input instead.
The direct comparison is input and output. Coffee/training data is the input and the code is the output.
Yeah, I don't really get this. You can reach deterministic mathematical conclusions with ML, it just requires structuring the problem differently. While the area of a rectangle may not need optimization, there are many places that do, like file compression, which requires perfectly accurate results.
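To illustrate the "perfectly accurate results" point: lossless compression has to be exactly invertible, not just approximately right. A minimal sketch with Python's standard `zlib` module (the sample data is made up for illustration):

```python
import zlib

# Lossless compression must round-trip byte-for-byte; an answer that is
# merely "close" would corrupt the file.
data = b"the quick brown fox jumps over the lazy dog" * 100

compressed = zlib.compress(data)
restored = zlib.decompress(compressed)

assert restored == data  # exactly identical, every time
print(len(data), "->", len(compressed), "bytes")
```

This is the contrast with an ML classifier: a 99% accurate decompressor would be worthless.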
You could say the same for a finite element model. A junior engineer with just 4 years of training can solve, explicitly, the deflection at the center of a slender, simply supported beam of prismatic section and produce an exact (if slightly incorrect) answer. Building a FEM of the same beam also solves the problem, takes longer (to build the model), and gives similar accuracy; both are good enough for design work.
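The closed-form solution the junior engineer would use is a one-liner: for a central point load P on a simply supported span, the mid-span deflection is P·L³/(48·E·I). A quick sketch, with illustrative numbers I've assumed (they're not from the comment):

```python
# Mid-span deflection of a simply supported prismatic beam under a
# central point load: delta = P * L**3 / (48 * E * I).
# All values below are assumed for illustration.
P = 10_000.0   # N, point load at mid-span
L = 4.0        # m, span
E = 200e9      # Pa, Young's modulus (steel)
I = 8.0e-6     # m^4, second moment of area

delta = P * L**3 / (48 * E * I)
print(f"mid-span deflection: {delta * 1000:.2f} mm")
```

No mesh, no solver; just the formula and a calculator.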
Only a fool wouldn't have a FEM around, though, as it can solve problems that would take a human centuries. They may as well make a cartoon with a child digging a 3" hole in beach sand and then a backhoe making a jagged-edged hole of the same size.
Part of the reason this is a great example is that you can easily calculate the maximum stress of an I-beam IFF you know where to find the simple formula. Even a dense FEA mesh will always give an answer like 3×4 = 11.9974; it's worse than the exact formula. The education is how you know which formula to use.
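The "simple formula" in question is flexure: peak moment M = P·L/4 for a central point load, then σ = M·c/I for the extreme-fiber stress. A sketch with illustrative numbers (assumed, not from the comment):

```python
# Maximum bending stress in a simply supported beam under a central
# point load, via the closed-form flexure formula sigma = M * c / I.
# Section properties below are assumed for illustration.
P = 5_000.0   # N, point load at mid-span
L = 3.0       # m, span
c = 0.1       # m, neutral axis to extreme fiber
I = 2.0e-5    # m^4, second moment of area

M = P * L / 4        # peak bending moment at mid-span
sigma = M * c / I    # exact answer, no mesh convergence study needed
print(f"max bending stress: {sigma / 1e6:.2f} MPa")
```

The FEA mesh would converge toward this number from below; the formula just gives it to you.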
The training data here is actually exaggerated. This task should take kilobytes at most and finish in a fraction of a second. Also, no self-respecting ML engineer would put together an ML system without accounting for every data type.
But a floating-point issue is exactly the type of mistake an LLM would make (it doesn't understand what a floating-point number is or why you should treat them differently). To be fair, a junior developer would make the same type of mistake.
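The classic instance of that mistake, in Python:

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their sum is
# not exactly 0.3 and a naive equality check silently fails.
a = 0.1 + 0.2
print(a == 0.3)  # False
print(a)         # 0.30000000000000004

# The tolerance-based comparison both a junior dev and an LLM tend to skip:
print(math.isclose(a, 0.3))  # True
```

This is the kind of bug that compiles, runs, and passes a casual glance, which is why it needs a careful reviewer.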
A junior developer is, hopefully, being mentored by more senior coworkers who are extra careful with code reviews and would spot the bug for the dev. Machine generated code needs an even higher level of scrutiny.
It is relatively easy to teach a junior developer to write code that is easy to read and conforms to the team's style guide.