I mean, if you're going to think of it that way, any Turing-complete language fits the bill. What I mean by universal is a language you would reach for to solve any problem you have, and it would be better than any other language for it. It's not a computer science problem, it's a software engineering problem.
There can be a universal language in theory, but it's borderline impossible to achieve. Every domain has a different set of problems it needs to solve, and language design involves tradeoffs that may make sense for one domain but not another. That's why I think language wars are silly: without context it's impossible to say which language is "better", because the answer depends on what you're trying to do.
In the end you shouldn't be too concerned with it. There are lots of languages, but they all fall under two or three paradigms, and once you learn one language from a paradigm, your skills are mostly transferable to the rest of it.
That was my mistake: I said that we definitely know they don't, when I should have said there is no evidence showing that they do. There aren't many studies to back this up either way. The whole point of the talk is that software engineering as a discipline is really poorly studied and we tend to make assertions like this without actually validating them.
If I were betting money on this (i.e. deciding where to focus my investment), the quality of the type system would only matter if it caught real problems that I face in my day-to-day work. For a web app, for instance, it makes no sense to use Rust over a GC'd language, because the kinds of bugs you face in web apps aren't really the kinds of issues a borrow checker will help you with. The whole point of Rust being difficult is that it saves you time down the line; if it's difficult and it doesn't, then that tradeoff doesn't make sense.
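A contrived sketch of what I mean (Python rather than Rust, just to keep it short; the names are made up): this is fully type-correct, but the business logic is wrong, and no type system or borrow checker is going to flag it.

```python
from dataclasses import dataclass

@dataclass
class Order:
    subtotal_cents: int
    is_member: bool

def total_with_discount(order: Order) -> int:
    # Members are supposed to get 10% off.
    # Bug: the discount is applied to everyone, not just members.
    # Any type checker is perfectly happy with this.
    discount = 0.10
    return round(order.subtotal_cents * (1 - discount))

# Prints 900 -- but a non-member should have paid the full 1000.
print(total_with_discount(Order(subtotal_cents=1000, is_member=False)))
```

That's the category of bug I actually spend my time on in a web app, and it's orthogonal to what the borrow checker checks.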
Hillel teaches formal verification for a living; he very much sees the value of automatically proving properties about your program, as do I. But the reality is that the type system doesn't necessarily help as much as we think it does.
DMD is the reference implementation as far as I know, so I don't think they have the same issue C and C++ have of needing a standard that pleases everyone. I agree that D has a problem positioning itself relative to other languages, but to me it's the good kind of boring. It has most of what you need, very little in it is surprising, and if you find yourself needing to do something, D probably has an easy-ish way of doing it.
There's a difference between tests and assertions. Students do test their code; they just don't write assertions, because, as I said, you want the cognitive load to be as low as possible so they can master the basics. I'm fine with tests being provided to them, but at the start they should be focusing on learning the constructs.
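To make the distinction concrete (a minimal Python sketch, names made up): an assertion lives inside the program and checks an assumption while it runs; a test is separate code that runs the function on known inputs and checks the result.

```python
def average(values: list[float]) -> float:
    # Assertion: a check on an assumption, inside the code itself.
    assert len(values) > 0, "average() needs at least one value"
    return sum(values) / len(values)

# Test: separate code that exercises the function and checks its output.
def test_average():
    assert average([2.0, 4.0]) == 3.0

test_average()
```

Students in an intro course are already doing the second thing informally (run it, look at the output); it's the first habit I don't expect from them yet.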
In any field, the real-life practice of a profession is something you learn working for an actual company, whether it's through an internship or an entry-level job. Ideally there should be unions or syndicates setting these standards so that they're consistent across the field, just like with other knowledge-based professions.
Universities are not corporate training programs, and they aren't supposed to be.
By the way, what you claimed “research shows” is so ridiculous that it’s hilarious you wrote it with a straight face.
There is still no research that definitively shows that static types reduce defects more than dynamic types; this is a fact. It turns out we are incredibly bad at studying this, so I don't know how you can say definitively that it is the case when even the people who study it for a living are not able to make that case.
The thing is, the way they motivate new students to learn programming is by having them write programs that do something. Making a test go green isn't as motivating as visually seeing the output of your work, and test fixtures can be complex to set up depending on the language. I mean, students don't learn how to factor their code into methods until later in such a course; they're learning if statements, for loops, and basic programming constructs. Don't you think having to explain setting up test fixtures and dependency inversion is a bit too much for people at that level?
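For a sense of scale, here's roughly what a trivial beginner exercise turns into once you make it "properly testable" with a fixture and an injected dependency (pytest-style, hypothetical names):

```python
# Beginner version: two lines, and you see the result immediately.
#   name = input("What's your name? ")
#   print(f"Hello, {name}!")

# "Testable" version: the input source is injected (dependency inversion)
# so the test can substitute a fake one, built by a pytest fixture.
from typing import Callable

import pytest

def greet(read_name: Callable[[], str]) -> str:
    return f"Hello, {read_name()}!"

@pytest.fixture
def fake_input() -> Callable[[], str]:
    return lambda: "Ada"

def test_greet(fake_input):
    assert greet(fake_input) == "Hello, Ada!"
```

Explaining callables, fixtures, and injection to someone who met a for loop last week is a hard sell.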
It's not that there is evidence it doesn't matter; it's that there is no evidence showing it does.
> A good language matters. A good type system matters. A good use of a good language with its type system, patterns, abstractions, ecosystem, and all it got to offer matters.
Eh, research shows otherwise. Rust eliminates defects for a very particular set of problems, but when it comes to logical correctness it isn't better or worse than other languages. If those problems are prominent in your domain (e.g. you have to write a ton of concurrent code), Rust makes sense. Otherwise, being well rested will have a bigger impact on the quality of your code than the best type system in the world.
In terms of dev practices, the only practice demonstrated to have a consistently positive impact on code quality is code review. Testing helps as well, but whether it's TDD or some other kind of testing doesn't really matter.
If you wanted to introduce every industry best practice in an intro course you'd never get to the actual programming.
It would be good to have a 1-credit course (one hour a week) where you learn industry best practices like version control, testing, and so on. But it definitely shouldn't be at the start.
When a single entity reaps all of the rewards of that cooperation, people are much less motivated to do that.
Some people are politically motivated; there are tons of reasons. But it's a two-way interaction in all of these cases.
With the FOSS model you at least get credited, so you're getting something out of it even if it's not monetary. With ChatGPT you don't even get that. You're feeding an AI that's being monetized by someone else; what possible incentive could people have to contribute anymore?
Yeah, but will people still care about contributing that information if they're not going to be compensated for it in any way? People get something out of contributing to Stack Overflow, even if it's just recognition. That's gone with ChatGPT.
I can think of four aspects needed to emulate human response: basic knowledge on various topics, logical reasoning, contextual memory, and ability to communicate; and ChatGPT seems to possess all four to a certain degree.
LLMs cannot reason, nor can they communicate. They can give the illusion of doing so, and that's only if they have enough data in the domain you're prompting them about. Try going into topics that aren't as popular on the internet and the illusion breaks down pretty quickly. This isn't "we're not there yet"; it's a fundamental limitation of the technology. LLMs are designed to mimic the style of a human response; they don't have any logical capabilities.
Regardless of what you think is or isn’t intelligent, for programming help you just need something to go through tons of text and present the information most likely to help you, maybe modifying it a little to fit your context. That doesn’t sound too far-fetched considering what we have today and how much information is available on the internet.
You're the one who brought up general intelligence, not me, but to respond to your point: the problem is that people had an incentive to contribute that text, and it wasn't necessarily monetary. Whether it was for internet points or just building a reputation, people got something in return for their time. With LLMs, that incentive is gone, because no matter what they contribute, it's going to be fed to a model that won't attribute those contributions back to them.
Today LLMs are impressive because they use information that was contributed by millions of people. The more people rely on ChatGPT, the less information will be available to train it on, and the less impressive these models are going to be over time.
Hey, if people are going to go back to reading manuals like it's the 1980s again, is it such a bad thing? /s
It's insane how a single tool managed to completely destroy the value people collectively created over more than a decade.
We're not able to properly define general intelligence, let alone build something that qualifies as intelligent.
Also, being able to prove the relationships between different parts of the code enables a lot of productivity tooling, like IDEs. Simple things like renaming a class or a struct are trivial in a statically typed language, whereas in dynamic languages refactorings like that are chores at best, with an element of risk.
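A small Python illustration of where that risk comes from (names are made up): rename the class with a plain search-and-replace, or with a tool that can't see through strings, and nothing complains until the lookup actually runs.

```python
# In a dynamic language, a reference to a name can hide inside a string,
# so a rename can silently miss it.
class Invoice:
    def total(self) -> int:
        return 42

HANDLERS = {"invoice": "Invoice"}  # class resolved by name at runtime

def build(kind: str):
    cls = globals()[HANDLERS[kind]]  # KeyError at runtime if Invoice is
    return cls()                     # renamed but this string isn't updated

print(build("invoice").total())
```

In a statically typed language the compiler knows every reference to the type, so the IDE can rewrite all of them in one go.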
It's useful for audit trails and the like; OS audit logs generally only tell you who accessed the machine, not what they did on the production database. Things like that. Databases like Postgres come with admin tooling in general that SQLite isn't really meant for. As you said, backups are a problem as well.
The database ends up in a state where it's violating some assumption I'm making and I need to manually intervene without taking down my application, for example. I need an audit trail of the changes being made to the database and who made them. I need to create replicas to implement failover. I need to replicate my application across multiple machines, and all the replicas need to have the same view of the data. I need to mitigate the possibility of data leaks if I have multiple tenants sharing a database.
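To make the audit-trail point concrete, here's a rough sketch (assuming Postgres with psycopg; the `accounts` table and connection string are made up) of recording every change to a table along with the database role that made it. The `current_user` part is exactly what SQLite can't give you, since it has no notion of roles.

```python
import psycopg  # psycopg 3

CREATE_AUDIT_TABLE = """
CREATE TABLE IF NOT EXISTS audit_log (
    id          bigserial PRIMARY KEY,
    table_name  text        NOT NULL,
    operation   text        NOT NULL,
    changed_by  text        NOT NULL DEFAULT current_user,
    changed_at  timestamptz NOT NULL DEFAULT now(),
    old_row     jsonb,
    new_row     jsonb
)
"""

CREATE_TRIGGER_FN = """
CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, operation, old_row, new_row)
    VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql
"""

CREATE_TRIGGER = """
CREATE TRIGGER audit_accounts
AFTER INSERT OR UPDATE OR DELETE ON accounts
FOR EACH ROW EXECUTE FUNCTION log_change()
"""

with psycopg.connect("dbname=app") as conn:
    for stmt in (CREATE_AUDIT_TABLE, CREATE_TRIGGER_FN, CREATE_TRIGGER):
        conn.execute(stmt)
```

That's the kind of thing I mean by admin tooling: roles, triggers, replication and so on are built into the server, rather than something you bolt onto a file.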
I'm not saying that you're wrong for using it. I'm just saying that it doesn't work for everything.
Sorry about that, it seems I unintentionally created a bit of controversy and am being a bit defensive.