I've found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.
I used to spend one month a year where all I did was write performance reports on the people I supervise. Now I put the facts in, let AI write the first draft, do some editing, and I'm done in a week.
I used it a decent amount at my last job to write test reports that had a lot of similar text with minor changes.
I also use it for dnd to help me quickly make the outlines of side characters & flesh out my world.
I abhor it and I think anybody who does actually like it is using it unethically: for art (which they intend to profit off of), for writing papers or articles, and for writing bad code.
As a college student, the best experience I've had is just generating stories that you can easily tell are AI-written by their use of specific language.
Second best was when I took pokemon from older generations and their BST, told an AI (Perplexity) that I wanted to give them gen 5 BST, provided a spreadsheet with all gen 5 pokemon with BST and each individual stat, and used whatever it gave me as a baseline for making BST edits.
Otherwise, I wouldn't say I'm a big fan of AI since I don't have many uses for it myself.
I use it for coding (rarely pure copy-paste), explaining code, usage examples, and finding tools to use.
Better translation than Google translate for Japanese.
Asking for things that search engines only give generic results for.
I built a spreadsheet for a client that sorts their email into threads and then segments various conversations into a different view based on shipment numbers mentioned in the conversations. But it's a lot of work to get something like this set up. Am thinking of going into consulting/implementation.
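The real setup is a lot more involved than this, but the core of it is just grouping messages by the shipment numbers they mention. A rough sketch of that one step (the ID pattern, field names, and emails here are invented for illustration):

```python
import re
from collections import defaultdict

# Hypothetical shipment-ID pattern; the real client format differs.
SHIPMENT_RE = re.compile(r"\bSHP-\d{5}\b")

emails = [
    {"subject": "Delay update", "body": "SHP-10231 is held at customs."},
    {"subject": "RE: Delay update", "body": "Thanks, SHP-10231 cleared today."},
    {"subject": "New booking", "body": "Please confirm SHP-10498."},
]

# Bucket each message under every shipment number it mentions.
by_shipment = defaultdict(list)
for msg in emails:
    for shipment_id in SHIPMENT_RE.findall(msg["body"]):
        by_shipment[shipment_id].append(msg["subject"])

for shipment_id, subjects in sorted(by_shipment.items()):
    print(shipment_id, "->", subjects)
```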
It's done a lot of bad/annoying things but I'd be lying if I said it hasn't enabled me to completely sidestep the enshittification of Google. You have to be smart about how you use it but at least you don't have to wade through all the SEO slop to find what you want.
And it's good for weird/niche questions. I used it the other day to find a list of meme songs that have very few/simple instruments so that I could find midi files for them that would translate well when going through Rust's in-game instruments. I seriously doubt I'd find a list like that on Google, even without the enshittification.
Demystifying obscure or non-existent documentation
Basic error checking of my configs/code: input the error, ask what the cause is, double-check its work. In hour 6 of late-night homelab fixing, this can save my life.
I use it to create concepts of art I later commission. Most recently I used it to concept an entirely new avatar and I'm having a pro make it in their style for pay
DnD/Cyberpunk character art generation; basically a "this person does not exist" website.
Duplicate checking / spot-the-differences, like pastebin's "differences" feature, because the MMO I play released prelim as well as full patch notes and I like to read the differences.
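A quick illustration of what that diffing step amounts to in plain Python: difflib is the standard-library version of pastebin's "differences" view, and the patch-note text below is made up just to show the shape of it.

```python
import difflib

# Invented placeholder patch notes, prelim vs. final.
prelim = """Fixed crash on login
Sword damage reduced to 40
New mount: river drake""".splitlines()

final = """Fixed crash on login
Sword damage reduced to 35
New mount: river drake
Added weekly vendor reset""".splitlines()

# Print only the lines that changed between the two versions.
for line in difflib.unified_diff(prelim, final, "prelim", "final", lineterm=""):
    print(line)
```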
I've been finding it useful for altering recipes to take my wife's allergies into account. I don't use it for much else. And certainly not for anything important.
My corp has been very skeptical and suspicious. So far the only allowed AI is one that summarizes Slack. For channels I want to stay in the loop on without wasting time monitoring them, it creates a nice summary of recent traffic.
I was trying to help one guy who used an online AI despite it being against policy. He was just using it as a search engine to find a code solution, and it took way too long to give him the wrong answer. A search engine would have been faster, but he'd have had to use his own judgement to identify the wrong answer. Pretty arrogant guy despite not knowing what he was doing, so I didn't fight it when he insisted he was going to follow what it told him.
It's great for parsing through the enshittified journalism. You know the classic recipe blog trope? If you ask chatgpt for a recipe, it just gives you one. Whether it's good or not is a different story, but chatgpt is leagues better at getting to the info you want than search has been for the last decade.
It helps make simple code when I'm feeling lazy at work and need to get something out the door.
In personal life, I run a local llm server with SillyTavern, and get into some kinky shit that often makes for an intense masturbation session. Sorry not sorry.
It's useful for programming from time to time, but not for asking open questions.
I've found that having to double-check everything is too unnerving, and having it just provide the links straight away is more my way of working.
Other than that it sometimes sketches things out when I have no idea what to do, so all in all it’s a glorified search engine for me.
Beyond that, I despise writing work emails and reports, and it fluffs them up.
I usually have to edit them afterwards so they don't look AI-made, but it adds some "substance".
I work on a 20+ year knowledge base for a big company that has had no real content management governance for pretty much that whole time.
We knew there was duplicate content in that database, but we're talking about thousands of articles, with several more added daily.
With such a small team, identifying duplicate/redundant content was just an ad-hoc thing that could never be tackled as a whole without a huge amount of resources.
AI was able to comb through everything and find hundreds of articles with duplicate/redundant content within a few hours. Now we have a list of articles we can work through and clean up.
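I can't speak to exactly what the tool does under the hood, but the basic idea behind this kind of duplicate detection is pairwise text similarity over the article bodies. A toy sketch, with made-up articles and a threshold that would need tuning on real data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in articles; a real run would stream thousands from the KB.
articles = {
    "KB-101": "How to reset your VPN password from the self-service portal.",
    "KB-204": "Resetting a VPN password using the self-service portal.",
    "KB-317": "Configuring email forwarding rules in the webmail client.",
}

ids = list(articles)
matrix = TfidfVectorizer(stop_words="english").fit_transform(list(articles.values()))
sims = cosine_similarity(matrix)

# Flag article pairs whose vectors are suspiciously similar.
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        if sims[i, j] > 0.5:
            print(f"Possible duplicates: {ids[i]} <-> {ids[j]} ({sims[i, j]:.2f})")
```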
It helps when writing a lot of boilerplate, or when I'm being lazy and want to solve something quickly. However, I do not need AI in everything I use. It seems everyone wants AI in their product while it's doing the same thing as everyone else's.
Generative AI has been an absolute game changer in my retouching work. Slightly worrying that it'll put me out of work sometime in the future, but for now it's saving me loads of time, handling the boring stuff so I can concentrate on the stuff it can't do.
I have horrible spelling and sometimes write in an archaic register. I also often write in a way that sounds rather aggressive, which is not my intention most of the time. AI helps me rewrite that shit and makes me more sensitive to tone in written text.
Of course, just like with a normal spell check or autocomplete feature, one still needs to read it a final time.
Some of my friends enjoy fucking around with those character AIs. I never got the appeal; even as an RP nerd, RPing is a social activity to me, and computers aren't people.
I have seen funny memes be made with Image Generators -- And tbqh as long as you're not pretending that being an AI prompter makes you an "artist", by all means go crazy with generating AI images for your furry porn/DnD campaign/whatever
https://goblin.tools/ is a cool little thing for people as intensely autistic as I am, and it runs off AI stuff.
Voice Recognition/Dictation technology powered by AI is a lot better than its pre-AI sibling. I've been giving it a shot lately. It helps my arthritis-ridden hands.
If you mean anything that utilizes machine learning ("AI" is a buzzword), then "AI" technology has been used to help scientists and doctors do their jobs better since the mid-90s.
I've enjoyed some of the absurd things it can come up with. Surreal videos and memes (every president as a bodybuilder wrestler). However, it's never been useful, and the cost isn't worth the benefit to me.
So I'm really bad about remembering to add comments to my code, but since I started using GitHub's AI code assistant thing in VS Code, it will make contextual suggestions when you comment out a line. I've even gone back to stuff I made ages ago and used it to figure out what the hell I was thinking when I wrote it back then 😆
It's actually really helpful.
I feel like once the tech adoption curve settles down, it will be most useful in cases like that: contextual analysis
I use SillyTavern for character conversations, pretty fun. I have SD Forge for Pony Diffusion, and use Suno and Udio. Almost all of that goes to DND; the rest is for personal recreation.
Google and OpenAI both fail to meet my use cases, and if I cuss they get mad, so fuck 'em.
I never use those for making money or any other personal progression, that would be wrong.
I use perplexity.ai more than Google now. I still don't love it, and it's more of a testament to how far Google has fallen than to the usefulness of AI, but I do find myself using it to get a start on basic searches. It is, dare I say, good at calorie counting and language learning. It helps calculate calorie-to-gram ratios, and the math is usually correct. It also helps me with German, since it's good at finding patterns and how German people typically say what I'm trying to say, instead of just running it through a translator which may or may not have the correct context.
I do miss the days where I could ask AI to talk like Obama while he’s taking a shit during an earthquake. ChatGPT would let you go off the rails when it first came out. That was a lot of fun and I laughed pretty hard at the stupid scenarios I could come up with. I’m probably the reason the guardrails got added.
Garbage in; garbage out. Using AI tools is a skillset. I've had great use of LLMs and generative AI both; you just have to use the tools to their strengths.
LLMs are language models. People run into issues when they try to use them for things not language related. Conversely, it's wonderful for other tasks. I use it to tone check things I'm unsure about. Or feed it ideas and let it run with them in ways I don't think to. It doesn't come up with too much groundbreaking or new on its own, but I think of it as kinda a "shuffle" button, taking what I have already largely put together, and messing around with it til it becomes something new.
Generative AI isn't going to make you the next Mona Lisa, but it can make some pretty good art. It, once again, requires a human to work with it, though. You can't just tell it to spit out an image and expect 100% quality, 100% of the time. Instead, it's useful to get a basic idea of what you want in place, then take it to a proper photo editor, or inpainting, or some other kind of post-processing to refine it. I have some degree of aphantasia: I have a hard time forming and holding detailed mental images. This kind of AI approaches art in a way that finally kinda makes sense for my brain, so it's frustrating seeing it shot down by people who don't actually understand it.
I think no one likes any new fad that's shoved down their throats. AI doesn't belong in everything. We already have a million chocolate chip cookie recipes, and chatgpt doesn't have taste buds. Stop using this stuff for tasks it wasn't meant for (unless it's a novelty "because we could" kind of way) and it becomes a lot more palatable.
ChatGPT is incredibly good at helping you with random programming questions, or just dumping a full ass error text and it telling you exactly what's wrong.
This afternoon I used ChatGPT to figure out what error was preventing me from updating my ESXi server. I just copy-pasted the entire error text, which was one entire terminal window's worth of shit, and it knew there was an issue accessing the zip. It wasn't smart enough to figure out "hey dumbass, give it a full file path, not a relative one," but eventually I got there. Earlier this morning I used it to write a CROSS APPLY instead of using multiple sub-select statements. It forgot to update the ORDER BY, but that was a simple fix. I use it for all sorts of other things we do at work too. ChatGPT won't replace any programmers, but it will help them be more productive.
I got high and put in prompts to see what insane videos it would make. That was fun. I even made some YouTube videos from it. I also saw some cool & spooky short videos that are basically "liminal" since it's such an inhuman construction.
But generally, no. It's making the internet worse. And as a customer I definitely never want to deal with an AI instead of a human.
I thought it was pretty fun to play around with making limericks and rap battles with friends, but I haven't found a particularly useful use case for LLMs.
Personally I use it when I can't easily find an answer online. I still keep some skepticism about the answers given until I find other sources to corroborate, but in a pinch it works well.
That's a bit of a loaded question. By AI I assume you're referring to GenAI/LLMs rather than AI broadly.
I use it to correct my spelling on longer posts and I find that it improves the clarity and helps my point come across better.
I use Dall-E to create pictures I never could have before, because despite my interest in drawing, I just never bothered to learn it myself. GenAI enables me to skip the learning and go straight to creating.
I like that it can simulate famous people and allows me to ask 'them' questions that I never could in real life. For example, yesterday I spent a good while chatting with 'Sam Harris' about the morality of lying and the edge cases where it might be justified. I find discussions like this genuinely enjoyable and insightful.
I also like using the voice mode where I can just talk with it. As a non-native English speaker, I find it good practice for improving my pronunciation.
AI is used extensively in science to sift through gigantic data sets. Mechanical turk programs like Galaxy Zoo are used to train the algorithm. And scientists can use it to look at everything in more detail.
Apart from that AI is just plain fun to play around with. And with the rapid advancements it will probably keep getting more fun.
Personally I hope to one day have an easy and quick way to sort all the images I have taken over the years. I probably only need a GPU in my server for that one.
When it just came out I had AI write fanfiction that no sane person would write, and other silly things. I liked that. That and trail cam photos of the Duolingo mascot.
I think my complaints are more with how capitalism treats new technology, though, and not just the lost jobs and the toll on the climate. Greed and competition are making it worse and worse as a technology; AI itself, within a year's span, has been enshittified. There are use cases where it can do a world of good, though, just like everything else bad people ruin.
It's fine if used in the specific niche use cases it's trained for, as long as it's used as a tool and not a final product. For example, using AI to generate background elements of a complete image. The AI elements aren't the focus, and should be things that shouldn't matter, but it might be better to use an AI element rather than doing a bare-minimum element by hand. This might be something like a blurred-out environment background behind a piece of hand-drawn character art; otherwise it might just be a gradient or solid colour because it isn't important, but having something low-quality is better than having effectively nothing.
In a similar case, for multidisciplinary projects where the artists can't realistically work proficiently in every field required, AI assets may be good enough to meet the minimum requirements to at least complete the project. For example, I do a lot of game modding. I'm proficient with programming, game/level design, and 3D modeling, but not good enough to make dozens of textures and sounds that are up to snuff. I might be able to dedicate time to make a couple of the most key resources myself or hire someone, but seeing as this is a non-commercial, non-monetized project, I can't buy resources regularly. AI can be a good-enough solution to get the project out the door.
In the same way, LLM tools can be good if used as a way to "extend" existing work. It's generally a bad idea to rely entirely on them, but if you use one to polish a sentence you wrote, come up with phrasing ideas, or write your long if-chain for you, then it's a way of improving or speeding up your work.
Basically, AI tools as they are should be seen as another tool by those in or adjacent to the related profession - another tool in the toolbox rather than a way to replace the human.
I think it’s a fun toy that is being misused and forced into a lot of things it isn’t ready for.
I’m doing a lot with AI but it’s pretty much slop. I use self hosted stable diffusion, Ollama, and whisper for a discord bot, code help, writing assistance, and I pay elevenlabs for TTS so I can talk to it. It’s been pretty useful. It’s all running on an old computer with a 3060. Voice chat is a little slow and has its own problems but it’s all been fun to learn.
My primary use of AI is for programming and debugging. It's a great way to get boilerplate code blocks, bootstrap scripts, one-liner shell commands, regular expressions, etc. More often than not, I've also learned new things, because it ends up using something new that I didn't know about, or approaches I didn't know were possible.
I also find it's a good tool for learning about new things or topics. It's very flexible in giving you a high-level summary and then digging deeper into the specifics of something that might interest you. Summarizing articles and long posts is also helpful.
Of course, it's not always accurate, and it doesn't always work. But for me, it works more often than not and I find that valuable.
Like every technology, it will follow the Gartner Hype Cycle. We are definitely in the times of "everything-AI" or AI for everything - but I'm sure things will calm down and people will find it valuable for a number of specific things.
Tbh it's made a pretty significant improvement in my life as a software developer. Yeah, it makes shit up and generates garbage code sometimes, but if you know how to read code, debug, and program in general, it really saves a lot of grunt work and helps with tedious language barriers. It can also be a solid rubber duck for debugging.
Basically any time I just need a little script to take x input and give me y output, or a regex, I’ll have ChatGPT write it for me.
The only things I use and I know they have AI are Spotify recommendations, live captions on videos and DLSS. I don't find generative AI to be interesting, but there's nothing wrong with machine learning itself imo if it's used for things that have purpose.
Even before AI, the corps have been following a strategy of understaffing with the idea that software will make up for it, and it hasn't. It's beyond the pale, the work I have to do now for almost anything related to the private sector (as their customer, not as an employee).
But for very specific purposes it's worth considering as an option.
Text-to-image generation has been worth it to get a jumping-off point for a sketch, or to get a rough portrait for a D&D character.
Regular old ChatGPT has been good on a couple occasions for humor (again D&D related; I asked it for a "help wanted" ad in the style of newspaper personals and the result was hilariously campy)
In terms of actual problem solving... There have been a couple of instances where, when Google or Stack Overflow haven't helped, I've asked it for troubleshooting ideas as a last resort. It did manage to pinpoint the issue once, but usually it just ends up that one of the topics or strategies it floats proves useful after further investigation. I would never trust anything factual without verifying, or copy/paste code from it directly, though.
To me AI is useless. It's not intelligent; it's just a blender that blends up tons of results into one hot steaming mug of "knowledge". If you toss a nugget of shit into a smoothie while it's being blended, it's gonna taste like shit. Considering the amount of misinformation on the internet, everything AI spits out is shit.
It is purely derivative, devoid of any true originality, with a vague facade of intelligence in an attempt to bypass existing copyright law.
I use ChatGPT and Copilot as search engines, particularly for programming concepts or technical documentation. The way I figure, since these AI companies are scraping the internet to train these models, it's incredibly likely that they've picked up some bit of information that Google and DDG won't surface because of SEO.
I went for a routine dental cleaning today and my dentist integrated a specialized AI tool to help identify cavities and estimate the progress of decay. Comparing my x-rays between the raw image and the overlay from the AI, we saw a total of 5 cavities. Without the AI, my dentist would have wanted to fill all of them. With the AI, it was narrowed down to 2 that need attention, and the others are early enough that they can be maintained.
I'm all for these types of specialized AIs, and hope to see even further advances in the future.
There's someone I sometimes encounter in a Discord I'm in who makes a hobby of doing stuff with them. From what I gather, they do more with it than just giving it a prompt and leaving it at that, at least partly because it doesn't generally give them something they're happy with initially, and they end up having to ask the thing to edit specific bits of it in different ways over and over until it does. I don't really understand what exactly this entails, as what they seem to most like making it do is code "shaders" for them that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they like and don't like about them, so I assume the guy must find something enjoyable in it all. That being said, using it as a sort of strange toy isn't really the most useful use case.
I love chatgpt, and am dumbfounded at all the AI hate on lemmy. I use it for work. It's not perfect, but helps immensely with snippets of code, as well as learning STEM concepts. Sometimes I've already written some code that I remember vaguely, but it was a long time ago and I need to do it again. The time it would take to either go find my old code, or just research it completely again, is WAY longer than just asking chatgpt. It's extremely helpful, and definitely faster for what I'd already have to do.
I guess it depends on what you use it for ¯\_(ツ)_/¯.
I hope it continues to improve. I hope we get full open source. If I could "teach" it to do certain tasks someday, that would be friggin awesome.
Only one I ever use is the meta AI built into messenger because my friends and I can have it make silly and often extremely cursed pictures that make us laugh
I usually keep abreast of the scene so I'll give a lot of stuff a try. Entertainment wise, making music and images or playing dnd with it is fun but the novelty tends to wear off. Image gen can be useful for personal projects.
Work wise, I mostly use it to do deep dives into things like datasheets and libraries, or doing the boring coding bits. I verify the info and use it in conjunction with regular research but it makes things a lot easier.
Oh, also tts is fun. The actor who played Dumbledore reads me the news and Emma Watson tells me what exercise is next during my workout, although some might frown on using their voices without consent.
I have a custom agent that I ask questions to; it goes and finds sources, then answers my question. It can do math by writing Python code and using the result. I use it almost exclusively instead of regular search. AI makes coding far quicker: giving examples, remembering shit I can't remember how to use, writing basic functions, etc.
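The math-by-code part is basically just a code-execution tool: the model writes a snippet, something runs it, and the output goes back into the answer. A stripped-down sketch of that loop (this isn't my actual agent, and a real one needs proper sandboxing):

```python
import subprocess
import sys

def run_generated_code(code: str, timeout: int = 5) -> str:
    """Run model-generated Python in a subprocess and return its output."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip() or result.stderr.strip()

# Pretend the model answered a compound-interest question with this snippet:
generated = "print(round(2300 * 1.07 ** 10, 2))"
print("Tool result:", run_generated_code(generated))
```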
Writing emails.
Making profile pictures.
I used to enjoy the tldr bot on lemmy till some fascist decided to kill it instead of just letting people block it.
For the most part it's not useful, at least not the way people use it most of the time.
It's an engine for producing text that's most like the text it's seen before, or for telling you what text it's seen before is most like the text you just gave it.
When it comes to having a conversation, it can passably engage in small talk, or present itself as having just skimmed the Wikipedia article on some topic.
This is kinda nifty and I've actually recently found it useful for giving me literally any insignificant mental stimulation to keep me awake while feeding a baby in the middle of the night.
Using it to replace thinking or interaction gives you a substandard result.
Using it as a language interface to something else can give better results.
I've seen it used as an interface to a set of data collection interfaces, where all it needed to know how to do was tell the user what things they could ask about, and then convert their responses into inputs for the API, and show them the resulting chart. Since it wasn't doing anything to actually interpret the data, it never came across as "wrong".
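A rough sketch of that pattern, with everything here (metric names, fields, API shape) made up for illustration: the model's only job is to turn the user's request into a structured query, and a deterministic layer translates that into the data API's parameters, so the model never interprets the data itself.

```python
from dataclasses import dataclass

@dataclass
class ChartQuery:
    metric: str  # e.g. "daily_signups"
    start: str   # ISO date
    end: str     # ISO date

def build_api_params(query: ChartQuery) -> dict:
    # Deterministic translation into the (hypothetical) API's parameters.
    return {"metric": query.metric, "from": query.start, "to": query.end}

# Imagine the LLM extracted this from "show me signups for March":
parsed = ChartQuery(metric="daily_signups", start="2024-03-01", end="2024-03-31")
print(build_api_params(parsed))
```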
Playing with it on my own computer, locally hosted and running offline, has been pretty cool. I find it really impressive when it's something open source and community driven. I also think there are a lot of useful applications for problems that aren't solvable with traditional programming.
However a lot of the pushed corporate AI feels not that useful, and there's something about it that really rubs me the wrong way.
I've never had AI code run straight off the bat - generally because if I've resorted to asking an AI, I've already spent an hour googling - but it often gives me a starting point to narrow my search.
There's been a couple of times it's been useful outside of coding/config - for example, finding the name of some legal concepts can be fairly hard with traditional search, if you don't know the surrounding terminology.
I have found ChatGPT to be better than Google for random questions I have and for general advice on a whole bunch of things, though I do know when to go to other sources. I also use it to extrapolate data, come up with scheduling for work (I organise some volunteer shifts), and write lots of Excel formulae.
I've used it to fill in the gaps for a DND storyline. I'll give it a prompt and a couple of story arcs, then I'll tell it to write in a certain style, say a cowardly king or a dogmatic paladin. From there it will spit out a story. If I don't like certain aspects, I'll tell it to rewrite a section with some other detail in mind. It does a fantastic job and saves me some of the guesswork.
I like messing with the locally hosted AI available. We have a locally hosted LLM trained on our command media at work that is occasionally useful. I avoid it otherwise if I didn't set it up myself or know who did.
I have had fun with ChatGPT, but in terms of integrating it into my workflow: no. It just gives me too much garbage on a regular basis for me not to have to check and recheck anything it produces, so it's more efficient to do it myself.
And as entertainment, it's more expensive than e.g. a game, over time.
duck.ai is very helpful for niche/specific questions I have but can’t find online. It’s also helpful for super quick questions that don’t really warrant a forum post. However, I always take things with a grain of salt.
It's an overly broad term, and the "hype" use-cases dominate the discussion in a way that lacks vision. I'm using machine learning to optimize hardware accelerated processing for particle physics. So, ya, it's not all slop. And what is, may very well evolve.
I’m not impressed with the LLMs. They do make great synonym generators.
Stable diffusion and other image diffusers are genuinely amazing. And I’m not talking about asking copilot to make Fortnite shrek. There are incredibly complex ways in which you can fine tune to tell it how to shape and refine the image. It has and is going to continue to revolutionize graphical art. And once the math shrinks down it’s going to be everywhere.
I use AI every day. I think it's an amazing tool. It helps me with work, with video games, with general information, with my dog, and with a whole lot of other things. Obviously verify the claims if it's an important matter, but it'll still save you a lot of time. Prompting AI with useful queries is a skill set that everyone should be developing right now. Like it or not, AI is here and it's going to impact everyone.
It stimulates my brain, and I enjoy the randomness of it all. It's like how in nature things can be perfectly imperfect - random and still beautiful - unintentional and still emotion-inducing. Sure, I see the ethical issues with how an AI is trained and how capitalism cares more about profit than people leading to job loss or exploitation; however, those are separate issues in my mind, and I can still find joy in the random output of an AI. I could easily tunnel on the bad parts of AI and what's happening as the world devours a new technology, but I still see benefits it can bring in the medical research and engineering fields.
Kitboga has used AI (STT, LLMs, and TTS) to waste the time of Scammers.
There are AI tools being used to develop new cures which will benefit everyone.
There are AI tools being used to help discover new planets.
I use DLSS for gaming.
I run a lot of my own local AI models for various reasons.
Whisper - for audio transcriptions/translations (a quick usage sketch follows this list).
Different Diffusion Models (SD or Flux) - for some quick visuals to recap a D&D session.
Tesseract OCR - to scan an image and extract any text that it can find (makes it easy to pull out text from any image and make it searchable).
Local LLMs (Llama, Mixtral) for brainstorming ideas, reformatting text, etc. It's great for getting started with certain subjects/topics, as long as I verify everything that it says.
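Here's roughly what the Whisper and Tesseract pieces look like in Python. File names and the model size are placeholders, and it assumes the openai-whisper and pytesseract packages (plus the tesseract binary) are installed:

```python
import whisper
import pytesseract
from PIL import Image

# Transcribe a session recording to text.
model = whisper.load_model("base")
result = model.transcribe("session_recording.mp3")
print(result["text"][:200])

# Pull searchable text out of an image, e.g. a photographed handout.
text = pytesseract.image_to_string(Image.open("handout.png"))
print(text)
```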
Going through data and writing letters are the only tasks I've seen AI be useful for. I still wouldn't trust it as far as I could kick its ass, and I'd check it well before submitting anything for work.
That said, I did find some use for ChatGPT last year. I had it explain to me some parts of Hawking's paper on black hole particle creation; this was only useful in this one case because Hawking had a habit of stating that something is true without explaining it, and often without providing useful references. For the record, ChatGPT was not good at this task, but with enough prodding and steering I was eventually able to get it to explain some concepts well enough for my usage. I just needed to understand a topic; I definitely wasn't asking ChatGPT to do any writing for me, since most of what it spits out is flat-out wrong.
I once spent a day trying to get it to solve a really basic QM problem, and it couldn't even keep the maths consistent from one line to another.