Aren't people getting tired of all this Stack Overflow praise, when that website is complete garbage that marks every question as a duplicate of something asked 11 years ago?
It's been a while since I picked up a new language, but I remember the last time I did, the hardest part was just figuring out how to do trivial things. It's crazy how long it took me to figure out how to do something as simple as splitting a string into fields by a delimiter. Now I can literally paste a line of code into ChatGPT and say "convert this Python line into JavaScript" and it'll just do it. Fantastic.
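For what it's worth, the "trivial thing" in question really is a one-liner in Python (the example string here is made up):

```python
# Split a delimited line into fields with str.split
line = "name,age,city"
fields = line.split(",")
print(fields)  # ['name', 'age', 'city']

# With no argument, split() handles runs of whitespace
print("foo   bar baz".split())  # ['foo', 'bar', 'baz']
```

The equivalent in JavaScript is `line.split(",")` as well, which is exactly the kind of near-identical-but-not-quite mapping the bot is good at.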
Or, yeah, I could spend five minutes reading the man page to remind myself of strptime's date format every time I need to format a date. Orrrrr I can just ask the bot something like: How do I format this date to look like "YYYY-MM-DD HH-MM-SS" in Python? Wed Oct 25 05:37:04 PDT 2023.
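For the record, here's one way that question shakes out in Python. One wrinkle: strptime's %Z only reliably accepts a few zone names (UTC, GMT, the local zone), so this sketch strips the "PDT" rather than parsing it:

```python
from datetime import datetime

raw = "Wed Oct 25 05:37:04 PDT 2023"
# Drop the timezone abbreviation; %Z won't parse "PDT" on most systems
cleaned = raw.replace(" PDT", "")
dt = datetime.strptime(cleaned, "%a %b %d %H:%M:%S %Y")
print(dt.strftime("%Y-%m-%d %H-%M-%S"))  # 2023-10-25 05-37-04
```

Which is exactly the kind of gotcha (the %Z caveat) the bot may or may not warn you about.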
Until it completely makes up a date format that looks reasonable but won't work. I tried using ChatGPT to research, in Rust, how to require at least one feature at compilation time using only Cargo.toml options. Turns out that's not supported, but that didn't prevent ChatGPT from trying to gaslight me with hallucinated options that would supposedly do this. It's a waste of time when you can't differentiate hallucination from recollection; for an experienced dev, parsing documentation without this uncertainty should be much more efficient.
The last I saw, AI models were very good at explaining what code did at a superficial level, but not why it was doing that or why it was written that way.
I assume it's gotten better at that since then. (?)
E.g., they'd be able to write a comment for x = 0 along the lines of "set variable x to 0", but not why it's being done or even why it might be a good idea.
Deeper question: What can AIs do with obfuscated code? Can they pick that apart and explain it? What if it's regular code with misleading function names?
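To make the misleading-names question concrete, here's a tiny hypothetical (the function name is deliberately wrong): an explainer that only reads names would call this sorting, while one that reads the body would notice it just reverses the list.

```python
# Misleadingly named: despite the name, this reverses rather than sorts.
def sort_list(items):
    return items[::-1]

print(sort_list([3, 1, 2]))  # [2, 1, 3] -- not sorted!
```

Whether a model's explanation follows the name or the actual behavior seems like a decent litmus test for "superficial vs. real" understanding.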