I really want to use AI like Llama, ChatGPT, Midjourney, etc. for something productive. But over the last year the only use I found for it was proposing places to visit as a family on our trip to Hokkaido, Japan. It came up with great proposals for places to go.
But perhaps you guys have some great use cases for AI in your life?
I don't. I played with it a bit, but as a capable writer and coder I don't find it fills a need; it just shifts the effort from composition (which I enjoy) to editing and review (which I don't).
I don't, and the energy consumption of public AI services is a show-stopper for "testing and playing around". I think I'll just wait until it takes over the world as advertised.
Not much. I totally agree with Linus Torvalds in that AIs are just overhyped autocorrects on steroids, and I despise that the artwork generators are all based on theft.
Pretty much all I use them for is to make my life easier at work, like turning a quick draft into a formal email.
Nope, nothing. There honestly doesn't seem to be anything I'd use it for, and even then I wouldn't want to support it as long as it uses data it's gotten by basically stealing. Maybe once that has gotten better I'll look more into it, but at the moment I just don't have the heart to support it.
Summarising articles / extracting information / transforming it according to my needs. Everyone knows LLM-based summaries are great, but not many folks utilise them to their full extent. For instance, yesterday Sony published a blog piece about a bunch of games being discounted on the PlayStation store. It was a really long list that I couldn't be bothered reading, so I asked ChatGPT to display just the genres I'm interested in and sort them by popularity. Another example is parsing changelogs for software releases. Some of them are really long (and not sorted properly, maybe just a dump of commit messages), so I'll ask it to summarise the changes, maybe only show me new feature additions, or any breaking changes, etc.
Translations. I find ChatGPT excellent at translating Asian languages, especially all the esoteric terms used in badly-translated Chinese webcomics. I feed in the pinyin word and provide context, and ChatGPT tells me what it means in that context, and also provides alternate translations. This is 100 times better than just using Google Translate or whatever dumb dictionary-based translator, because context is everything in Asian languages.
I'm not a programmer but I work with IT and I regularly need complex shell scripts. ChatGPT has enabled me to skip the largest portion of slamming my head against the wall by writing the script for me and then I can tune it if it gets something wrong. Saves me hours and hours of my life.
I use it to generate code documentation because I'm incapable of documenting things without sounding like a condescending ass. Paste in a function, tell it to produce docstrings and doctests, then edit the hell out of it to sound more human and use actual data in the tests.
It's also great for readmes. I have a template that I follow for that and only work on one section at a time.
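To show the shape I mean, here is a docstring-plus-doctest sketch of the kind of output I ask for and then rework (the function name and numbers are invented, not from any real project):

```python
def median_latency(samples):
    """Return the median of a non-empty list of latency samples (ms).

    >>> median_latency([120, 80, 100])
    100
    >>> median_latency([4, 8])
    6.0
    """
    ordered = sorted(samples)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        # Odd count: the middle element is the median.
        return ordered[mid]
    # Even count: average the two middle elements.
    return (ordered[mid - 1] + ordered[mid]) / 2
```

The doctests double as usage examples, which is exactly the part I then rewrite with real data so it doesn't read like boilerplate.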
I find them neat, but there's just too many issues I can't overlook.
The environmental impact of these technologies is immense, and growing exponentially.
A vast amount of the training data used for the big LLMs and image generators is not in the public domain, which is at best ethically grey and at worst just blatantly exploiting artists and other professionals.
If there existed some alternatives to the big names that avoided both of these issues, I'd love to use them for code autocomplete and image generation for TTRPGs, but as it stands the moral cost is too high.
Mostly for finding information that for whatever reason can be difficult to find using search engines. For example, I've used ChatGPT to ask spoiler-free questions about plot points in books I'm reading, which has worked rather well. It hasn't spoiled me yet, but rather tells me that giving more information would be a spoiler.
Last time I tried to look something up on Google, carefully, I got a massive spoiler for the end of the entire book series.
I also use it for code-related questions at times, but very rarely, and mostly when using a language I'm not used to. Such as when I wrote an expect script for the first (and perhaps only) time recently.
I find a ton of uses for quick Python scripts hammered out with Bing Chat to get random stuff done.
It's also super useful when brainstorming and fleshing out stuff for the tabletop roleplaying games I run. Just bounce ideas off it, have it write monologues, etc.
I use it all the time to write Microsoft Excel and Microsoft PowerApps formulas. I use it to draft and re-write e-mails. I use it to come up with ideas and brainstorm.
@jeena I only use DeepL to translate and that's it. I also started taking notes in .md files, so that could make for a good use case in the future if there were an AI I could use without connecting to the internet (e.g. one that could only tell me things based on the files I have). Otherwise I am pretty reticent about AI. Perhaps I watched too many fiction movies, but I am afraid it will become too sentient and somehow escape human oversight, thus creating havoc in our lives.
The only practical thing I have found I can do with AI is brainstorm ideas (or rather expand upon little ideas I have but don't know where to go after) or figure out what's wrong with a snippet of code when I can't figure it out on my own.
I use it quite a bit. I don’t trust big companies who commercialize AI so I run my AIs locally on my old retired gaming desktop that I’ve turned into a homelab/media server.
I use Kobold.AI to self-host an LLM like ChatGPT (Dolphin-Mistral7b, if you are curious). I mainly use it for low-effort knowledge searches for stuff that is easier typed out long and descriptive (since Google struggles with this). Since it's AI I have to be careful about what I search, as I've seen it make stuff up, but for the majority of what I use it for (programming syntax, Linux troubleshooting, general questions) it's pretty good.
I also have Stable Diffusion running as well using the ICBINP model (which is pretty decent for photorealistic images). I use this AI to generate social media display pictures and porn :) it’s fun because it’s a surprise at what you’re going to get but sometimes it generates absolute horrors. Anatomical horrors. Those are genuinely horrific. Other times it’s really good.
I’ve found it useful for getting approaches to programming projects. Rarely does it completely solve my problems, but it keeps me headed in the right direction.
I’m also partway through making my first ARG and it’s super useful for generating ideas, especially when I feed it my established lore because it can keep ideas within that universe.
I’ve found overall, it’s best to use it to fill in the gaps on ideas I have in general. I theoretically could make all of the content myself from scratch, but I’m honestly terrible at all the little details in many cases. It allows me to not dwell on the little stuff.
I've used it to make specific images for work proposals that stock sources may not have. Sometimes for fun, I vary it so it's in the style of a cartoon or a Japanese woodcut.
Out of, say, a year, I have used it once to help put a work quote into better formatting. The rest of the time I use it solely as a way to suggest films I would enjoy based on my previously watched list; it is actually good at that.
A lot of translation and summarisation. ChatGPT is extremely good at absorbing a whole mix of comments in different languages and summarising them in English (or whatever other language).
For programming I don't use it so much anymore because it hallucinates too much, calling APIs that don't even exist. And when I lower the temperature the output is too sparse.
I'm also trying to build an assistant that can communicate proactively (I intend to auto-prompt it when things happen and then have it evaluate whether it should send me a message). But I need to get a local LLM going for that, because running it through the ChatGPT API would be too costly.
Also, a replacement for some of my web searches. Sometimes I just want to know something and it's refreshing that it can give me an answer (even though it does need to be validated, it's much easier to do that when you know what you're looking for!)
I've found it helpful at work for things like preparing agendas for meetings, or creating an outline of a presentation or document I need to write.
I've also found it helpful when I'm trying to Google something where I need to be pretty specific and then I can't find exactly what I mean by searching.
I use it to see the answers to problems on my physics homework when I can't figure it out myself. It works far better than forums, which are mostly all paywalled these days.
I've only used DuckDuckGo's implementations of GPT and Claude. I haven't really found a use case yet. I don't trust it enough for queries related to things I don't understand (gaps in my knowledge) and would rather solve those problems or learn those skills through existing sources of information that I know have had at least some level of human refinement/vetting. Personally I enjoy the challenge of problem solving in life, particularly when the solution involves learning a new skill that I can use again in the future. I find it interesting that AI is advertised as being able to maximise our capabilities as humans, because it appears to be used for the complete opposite in most cases. People want to use their brains less and just be spoonfed the answers.
So far, there have been two interesting uses I've found for ChatGPT.
One is that I've used it to help me write regular expressions on the very rare occasions I need to for my job.
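For a flavour of what I mean, here is the kind of pattern I'd have it draft and then check by hand (the log format here is invented for illustration):

```python
import re

# Hypothetical log line: "2024-03-01 12:34:56 ERROR disk full on /var"
LOG_RE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) "   # ISO date
    r"(?P<time>\d{2}:\d{2}:\d{2}) "   # 24-hour time
    r"(?P<level>[A-Z]+) "             # severity keyword
    r"(?P<message>.*)"                # rest of the line
)

m = LOG_RE.match("2024-03-01 12:34:56 ERROR disk full on /var")
print(m.group("level"))    # ERROR
print(m.group("message"))  # disk full on /var
```

The named groups are the part I find most worth asking for, since they make the pattern readable later.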
The other is kind of cool but also kind of troubling. I've come across a couple of therapy-style chat bots that are essentially just reading off a list of "here's what to do for XYZ".
I've tested them a bit, and I've found I'm 1) concerned about who gets access to the information shared, 2) worried about if/when these kinds of bots will be used to manipulate people in a negative way, and 3) wary of the possibility of a bot replying in a bad way that could make an issue worse for someone.
Overall, I like the idea of them. I find it's hard to process information if it's coming directly from myself, or accept compassion from myself. So funny enough, these chat bots actually work really well in that respect.
In some cases, I've had better discussions than I have had with actual therapists, which is funny but also sad.
So while there's some troubling possibilities, I think there's a lot of positives that I've seen from my time with it.
General purpose LLMs are starting to replace everyday queries I used to posit to Google. Perplexity can be quite good for this.
Copilot as enhanced autocomplete when writing code. A particularly good use-case is writing tests: with a few test cases already written, a sufficiently good test name will most often generate a well-written test case.
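As an illustration, with the first two tests below already written, a descriptive name like the third is usually all the autocomplete needs to fill in a sensible body (the `slugify` function is a made-up example, not from any real codebase):

```python
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify_lowercases():
    assert slugify("Hello") == "hello"

def test_slugify_joins_words_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Given the two tests above, a name like this one is typically
# enough context for the tool to suggest a body along these lines:
def test_slugify_collapses_repeated_spaces():
    assert slugify("Hello   World") == "hello-world"
```

The test name carries the specification, so the generated body only has to fill in an input/output pair that matches it.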
LLMs for lazy generation of SQL queries can sometimes be quite nice.
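The queries I mean are small ones like this sketch (schema and data invented), quick to eyeball for correctness before running:

```python
import sqlite3

# Tiny in-memory example of the kind of query I'd ask an LLM to draft:
# "total order value per customer, highest first"
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES ('ana', 30.0), ('bo', 10.0), ('ana', 5.0);
""")

query = """
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('ana', 35.0), ('bo', 10.0)]
```

For queries this size, checking the generated SQL against a toy dataset takes seconds, which is what makes the laziness safe.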
Writing assistance for things I find myself struggling to get written by myself. A writing unblocking tool, if you will.
It's reducing the effort and time I have to put into some things, and I appreciate that. It's far from perfect, but it doesn't have to be perfect to be useful.
I've been using ChatGPT in conjunction with search engines just to find things I need. For instance, I did an April Fools presentation for a work meeting and needed humorous real-life legal stories, so the AI was able to provide suggestions.
I also use it for simple tasks, like organizing info into a table.
Mainly, though, my reason for using it is that, since I work in tech, I'm going to need to know how to use it well, and the best way to do that is being hands-on.
I'm a bit disappointed by the practical uses, but I still get some value out of AI.
I sometimes use ChatGPT to tweak existing SQL scripts at work, and as a troubleshooting assistant. I also use a tool called Ultimate Vocal Remover to extract stems from songs, mainly to make myself instrumentals to practice singing over.
Those are really the only things I do regularly, despite trying different self-hosted AI tools.
Most are cool but not very useful.
I'm using Claude (subbed) to help me do qualitative coding and summarizing within a very niche academic framework. I was encouraged to try it by an LLM researcher and frankly I'm happy with the results. I am using it as a tool to assist my work, not replace it, and I'm trying to balance the bias and insights of the tool with my own position as a researcher.
On that note, if anyone has any insights or suggestions to improve prompts, tools, or check myself while I tinker, please, tell me.
Nothing, except having it write stories (not shared or used for anything, just for fun). That, and coming up with names for things, since I struggle with that.
It's not AI; instead, think of it as search with data-mined pattern matching behind a chatbot. Marketing types are so stupidly fixated they can't figure out that overhyping something can cause disappointment and enough skepticism to kill any desire to use a product. It's a lot easier to see the usefulness of something without that kind of noise.
Almost nothing. I sometimes use it to rephrase a question or answer. I refuse to become dependent on AI or contribute to it more than I already unwittingly have.
Asking extremely niche scientific questions: I don't depend on these answers, but the answer usually contains the specific terminology I can then search to find the answers I was looking for. I have learned a lot about the properties of metals and alloys this way, and what the planet could look like with different compositions.
Re-phrasing things: At work, when I'm drained and out of patience, I can tell that what I'm writing in my emails is not really appropriate, so I have GPT re-phrase it. GPT's version is typically unusable, of course, but it kicks my brain in the direction of re-phrasing my email myself.
Brainstorming: The program has endless patience for my random story-related questions and gives me instant stupid or cliche answers. This is great for me because part of my creative process since I was a kid has been seeing in media something that was less than satisfying and my brain flying into all the ways I could have done it better. I ask the program for its opinion on my story question, say "no idiot, instead:" and what comes after is the idea I was looking for from my own mind. Sometimes by total chance it has a good suggestion, and I can work with that too.
Fun uses which are less common:
Comedy use: I once had it generating tweets from Karl Marx about smoking weed every day. The program mixed Marxist philosophy and language with contemporary party music, with endlessly amusing results. Having historical figures with plenty of reference material from their writings opine on various silly things is very funny to me, especially when the program makes obvious mistakes.
Language Manipulation: If some philosophical text which was written to be deliberately impenetrable is getting too annoying to read, the program is decent at translating it. If I plug in a block of text written by Immanuel Kant and have the program re-write it in the style of Mark Twain, the material instantly becomes significantly easier to understand. Re-writing it in the style of a stereotypical gen-z is hilarious.