
Have you used so-called "AI" for actually productive work?

With Copilot included in Professional-grade Office 365, and some politician claiming that their government should use AI to be more efficient, I am curious whether some of you have used "AI" to get productive things done, or if it's still mostly a toy for you.

  • I used AI to generate random fake data to use in training on Excel, also to understand various concepts in my field of study and to answer my sudden random questions.

  • I’ve used it productively this week by…

    • Summarising and finding relevant parts of Microsoft Teams meetings.
    • Finding relevant parts of the labyrinthine policies I have to comply with.
    • Quickly finding out what’s going on with corporate events in the market.
    • Generating SQL code instead of starting with a blank query from scratch (I can never remember the exact way to declare various structures).
    • At home, feeding private documents into Ollama for insight and producing compliance reports (rough sketch after this list).
    • Step-by-step instructions for flashing some temperature sensors.
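
    (Not from the comment itself - just a minimal sketch of what "feeding documents into Ollama" can look like, assuming Ollama is running locally on its default port and some model has already been pulled; the file name and model name here are made up.)

      # Minimal local-LLM sketch: send a private document to a local Ollama
      # instance and ask for a summary. Assumes Ollama is listening on its
      # default port (11434) and that the named model has been pulled.
      import json
      import urllib.request

      DOC_PATH = "policy.txt"   # hypothetical document
      MODEL = "llama3"          # whichever model you actually pulled

      with open(DOC_PATH, encoding="utf-8") as f:
          document = f.read()

      payload = {
          "model": MODEL,
          "prompt": "Summarise the compliance-relevant points of this document:\n\n" + document,
          "stream": False,      # return one JSON object instead of a stream
      }

      req = urllib.request.Request(
          "http://localhost:11434/api/generate",
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])
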
  • I would say that I have used an LLM for productive tasks unrelated to work. I run a superhero RPG weekly and have been using Egyptian & North African myths as the origin for my monsters of the week. The LLM has cut my research and the monster-creation phase of my prep from multiple hours to sometimes under an hour - I do confirm everything the LLM tells me with either Wikipedia or a museum. I can also use an LLM to generate example images of the various monsters for my players, as a visual aid.

    That means I have more time to focus on the "game" elements - like specific combats, available maps, and the like. I appreciate the acceleration it provides by being a combined natural-language search engine and summary tool. Frankly, if Ask Jeeves (aka ask(dot)com) was still good at parsing questions and providing clear results, I would be just as happy using it.

  • I don't use AI for productive work, for the same reasons I don't stir my soup with a dishrag.

    Pretty good for recipes, tho'.

  • Used it as a toy for the longest time, but recently I had to do a lot of coding and I was actually able to make good use of code-completion AI.

    Saved me about a quarter of my time. Definitely worth something. (FYI I use supermaven free tier).

    Also, I'm using ChatGPT to ask dumb questions, because that way I don't have to constantly interrupt other people, and as a starting point to research something. I usually start with ChatGPT, then Google specific jargon, and depending on the depth of the topic I will read either studies, articles or forum threads afterwards.

    It did take me a long time to figure out which AI to use and when, so mandating this onto the entire government is a gong show more than anything.

    No, AI is not useless, but it's always about very specific use cases.

    If you're interested, I suggest using the free ChatGPT version alongside Google to ask dumb questions and get a feel for what you get. Then you can better decide if it's worth it for you.

    • The amount of shit we have to clean up from devs using AI generated code nowadays is insane. Some people are using it to generate the bulk of their code and the output can be trash tier.

      I was supposed to have a nice long weekend to rest and I spent most of it cleaning up after clients who pushed AI generated code into production that froze all data processing. Even after we found the problem and fixed it for them, the data didn't catch up until yesterday afternoon. The entire holiday I had to spend with a laptop a few feet away on a Teams call because a dev used AI-gen code.

      I am not saying that it isn't helpful in your situation. What I am saying is that a growing number of outfits are starting to depend on "devs" whose code is mostly LLM-generated, and they push it without understanding what it does or how it interacts with the environment they are deploying it in.

      • Yeah. I think AI literacy is a real thing and should be taken seriously. Before generating anything, everyone should internalize the boundaries and limitations of whatever model they use.

        If you have a hammer, everything's a nail. That reflex exists with AI as well, so everyone who uses it has to be careful in that regard.

    • Same.

      Gen AI is very helpful when you're working on modifying or filling holes in code.

  • Every day. The company I work for has all its code in SAS, and I use our LLM to translate it to Python. I also write my own Python scripts and ask the LLM to refactor and optimize them. Sometimes it only saves me 2 seconds, so I just use my own code, which is usually simple, but other times it saves me half an hour.
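
    (Not the commenter's actual setup - just a rough sketch of what "ask the LLM to refactor it" can look like if the internal LLM exposes an OpenAI-compatible chat endpoint; the URL, model name and key below are placeholders.)

      # Rough sketch: ask an OpenAI-compatible endpoint to refactor a Python snippet.
      # The base URL, model name and API key are placeholders for an internal deployment.
      import json
      import urllib.request

      BASE_URL = "https://llm.example.internal/v1/chat/completions"  # hypothetical
      API_KEY = "..."                                                # set from your environment
      MODEL = "internal-model"                                       # hypothetical

      snippet = """
      def total(xs):
          t = 0
          for x in xs:
              t = t + x
          return t
      """

      payload = {
          "model": MODEL,
          "messages": [
              {"role": "user",
               "content": "Refactor and optimise this Python function, keeping behaviour identical:\n" + snippet},
          ],
      }

      req = urllib.request.Request(
          BASE_URL,
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json", "Authorization": "Bearer " + API_KEY},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["choices"][0]["message"]["content"])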

  • I basically use it on the rare, rare occasion to help get me "unstuck" with creative tasks. I don't really use what it produces in the end - I wind up dismantling it entirely and rewriting it "properly" - but it has a use, you know?

  • Messed around for a while and then nothing. Not sure if I'm being AI-averse, but I really can't find a good use for it.

  • Really worth listening to this podcast as well. It's a guy teaching corporate teams to make the best use of AI. He goes over how to get really great results by using it as a discussion rather than just asking it a question and expecting an accurate answer in the first instance.

    https://youarenotsosmart.com/2024/02/19/yanss-281-how-a-pernicious-cognitive-bias-limits-our-ability-to-use-chatbots-properly-and-to-overcome-it/

    AI has been most useful for tech support for me. I wouldn't have been able to switch to Linux completely if AI hadn't instantly found solutions for me, rather than being told by the community to read tomes of documentation.

    I also use it a lot to find how to get office apps to do what I want.

    I'm famous at work for being a poet, when I actually just ask AI to write a short witty poem.

    You can use image generators to make nice personalised cards to share on special occasions.

    AI can make mind maps and things like that if you tell it what you want.

  • I've used it once to suggest a specific term that I'm going to use in my comic. I was utterly incapable of formulating a conventional search query for a search engine, so after endlessly browsing various thesauri, I ended up resorting to asking Perplexity AI. It still took a bit, and I had to fight it to get it to understand what I was asking, but I did eventually find a term that fits. Felt dirty afterwards. Does that count as "productive"?

    The only other thing was the title of a book I read 30 years ago and had only a vague memory of. So I gave it an approximate description, including a plot point I thought I remembered. The first result it gave me wasn't it, but it claimed the plot involved the thing I remembered. I then asked again and the second result actually was the correct book - it turns out I had almost completely misremembered the plot point, but it still said "yep, this happens in this book". Very weird experience.

  • I find that the very best use case for LLMs is in the name itself - language. I can check whether my text is grammatically correct, for example.

  • I started using Debian full-time a year and a half ago. It was a very frustrating experience initially and I leaned on LLMs heavily for advice. It was pretty hit or miss at first, and still occasionally gives wrong advice, but it has become much more helpful as the models have progressed. I have been able to restore a broken bup backup, learn the innards of systemd, troubleshoot scripts not launching correctly, optimize my Wayland config, correct fstab boot errors, configure my openWRT router, etc. Obviously I could just blindly copy/paste, but because I ask questions and try to tie things together, I learn along the way as well.

    Currently taking a stats course and using the paid version of Claude to check/correct my working when I do an exercise outside of the course. Also, it's great for explaining concepts in relatable terms. For example, I was having trouble understanding confidence intervals, so I told Claude to explain them using Steph Curry's 3pt shooting % as an example.
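
    (A toy illustration of the confidence-interval idea, not from the course or the comment - the made/attempted numbers below are invented.)

      # 95% confidence interval for a 3-point shooting percentage (Wald interval),
      # using made-up season totals purely for illustration.
      import math

      made, attempts = 285, 700          # hypothetical season totals
      p_hat = made / attempts            # sample proportion (observed 3PT%)
      z = 1.96                           # z-score for a 95% interval
      se = math.sqrt(p_hat * (1 - p_hat) / attempts)  # standard error of a proportion

      low, high = p_hat - z * se, p_hat + z * se
      print(f"3PT%: {p_hat:.3f}, 95% CI: ({low:.3f}, {high:.3f})")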

    There are going to be a lot of people left behind because they haven't kept up w/ the rate of progress and still see LLMs as they were when they first launched.

  • GitHub Copilot for fancy find and replace at work (rewriting a database migration from the old schema to the new schema). I pasted in the old migration, started the pattern, and the AI finished it for me.
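
    (The actual migration isn't shown in the comment; below is an invented example of the kind of repetitive pattern that code completion is good at finishing - the table and column names are made up.)

      # Invented example of the "start the pattern, let completion finish it" workflow:
      # a column-rename map for a schema migration. After the first entry or two are
      # typed, a completion model can usually fill in the rest of a mapping like this.
      OLD_TO_NEW_COLUMNS = {
          "cust_nm": "customer_name",
          "cust_addr_1": "customer_address_line_1",
          "cust_addr_2": "customer_address_line_2",
          "ord_dt": "order_date",
          "ord_amt": "order_amount",
      }

      def rename_columns(row: dict) -> dict:
          """Map a row keyed by old column names onto the new schema's names."""
          return {OLD_TO_NEW_COLUMNS.get(k, k): v for k, v in row.items()}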
