131 comments
  • I'll use Copilot in place of most of the times I've searched on Stack Overflow, or for mundane things like generating repetitive code, but relying solely on it is the same as relying solely on Stack Overflow.

  • The interesting bit for me is that if you ask a random person some programming questions, they'll be wrong 99% of the time on average, I think.

    Stack overflow still makes more sense though.

  • I don't even bother trying with AI; it hasn't been helpful to me a single time despite multiple attempts. That's a 0% success rate for me.

  • I've used ChatGPT and Gemini to build some simple PowerShell scripts for use in Intune deployments. Very few of them have been workable solutions out of the box, and they've often been filled with hallucinated cmdlets that don't exist, or that are part of a third-party module it doesn't tell me needs to be installed. It's not useless though: because I am a lousy programmer, it's been good at giving me a skeleton from which I can build a working script and debug it myself.

    I reiterate that I am a lousy programmer, but it has sped up my deployments because I haven't had to work from scratch. Maybe five times out of ten it's saved me a half hour here and there.
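The hallucinated-cmdlet problem above has a cheap partial mitigation: before running a generated script, check that every external command it calls actually exists on the machine. A minimal sketch in Python's stdlib (the commenter works in PowerShell; `Get-IntuneApp` here is a made-up stand-in for a hallucinated cmdlet, not a real command):

```python
import shutil

def check_commands(commands):
    """Return the subset of command names that are NOT found on PATH."""
    return [cmd for cmd in commands if shutil.which(cmd) is None]

# "Get-IntuneApp" is a hypothetical hallucinated name; "ls" should exist.
missing = check_commands(["ls", "Get-IntuneApp"])
if missing:
    print("These commands don't exist here:", missing)
```

Running this kind of check first turns "the script dies halfway through deployment" into "the script refuses to start and tells you which module you still need to install."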

    • I'm a good programmer and I still find LLMs to be great for banging out Python scripts to handle one-off tasks. I usually use Copilot; it seems best for that sort of thing. Often the first version of the script will have a bug or a misunderstanding in it, but all you need to do is tell the LLM what it did wrong, or paste the text of the exception into the chat, and it'll usually fix its own mistakes quite well.

      I could write those scripts myself by hand if I wanted to, but they'd take a lot longer and I'd be spending my time on boring stuff. Why not let a machine do the boring stuff? That's why we have technology.
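The paste-the-exception loop described above works because a full traceback carries most of the context the LLM needs. A small sketch of capturing that traceback as plain text, using only the stdlib (the LLM round-trip itself stays manual, as in the comment; `buggy` is a hypothetical stand-in for a generated script):

```python
import traceback

def run_and_capture(fn):
    """Run fn; return None on success, or the full traceback text on failure."""
    try:
        fn()
        return None
    except Exception:
        return traceback.format_exc()

def buggy():
    # Stand-in for an LLM-generated script with a typical first-draft bug.
    return {}["missing_key"]

tb = run_and_capture(buggy)
if tb:
    print(tb)  # this text is what you'd paste back into the chat
```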

  • This is the best summary I could come up with:


    In recent years, computer programmers have flocked to chatbots like OpenAI's ChatGPT to help them code, dealing a blow to places like Stack Overflow, which had to lay off nearly 30 percent of its staff last year.

    The researchers found that 52 percent of ChatGPT's answers contained incorrect information, a staggeringly large proportion for a program that people are relying on to be accurate and precise, underlining what other end users like writers and teachers are experiencing: AI platforms like ChatGPT often hallucinate totally incorrect answers out of thin air.

    For the study, the researchers looked at 517 questions from Stack Overflow and analyzed ChatGPT's attempts to answer them.

    The team also performed a linguistic analysis of 2,000 randomly selected ChatGPT answers and found they were "more formal and analytical" while portraying "less negative sentiment" — the sort of bland and cheery tone AI tends to produce.

    The Purdue researchers polled 12 programmers (admittedly a small sample size) and found that they preferred ChatGPT's answers 35 percent of the time, and that they failed to catch the AI-generated mistakes 39 percent of the time.

    The study demonstrates that ChatGPT still has major flaws — but that's cold comfort to people laid off from Stack Overflow or programmers who have to fix AI-generated mistakes in code.


    The original article contains 340 words, the summary contains 199 words. Saved 41%. I'm a bot and I'm open source!

  • We need a comparison against an average coder. Some fucking baseline ffs.
