How many of you are using ChatGPT to help you with your work, and not telling your boss/co-workers?
Just out of curiosity. I have no moral stance on it, if a tool works for you I'm definitely not judging anyone for using it. Do whatever you can to get your work done!
High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).
The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.
Due to the risk of inaccuracies and hallucination I always 100% verify any AI generated piece that I use in class. But it’s been a game changer for me in education.
I don't have any bosses, but as a consultant, I use it a lot. Still gotta charge for the years of experience it takes to understand the output and tweak things, not the hours it takes to do the work.
A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted "I can't come in to work today because I feel unwell" but now I can't take this person quite so seriously any more.
I had a coworker come to me with an "issue" he learned about. It was wrong, it wasn't really an issue, and then it came out that he got it from ChatGPT and didn't really know what he was talking about, nor could he cite an actual source.
I've also played around with it and it's given me straight up wrong answers. I don't think it's really worth it.
I've played around with it for personal amusement, but the output is straight up garbage for my purposes. I'd never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company's information security officer, because anything you input automatically joins their training database, and you're exposing your company to liability when, not if, OpenAI suffers another data breach.
I've been using it a little to automate really stupid simple programming tasks. I've found it's really bad at producing feasible code for anything beyond the grasp of a first-year CS student, but there's an awful lot of dumb code that needs to be written and it's certainly easier than doing it by hand.
As long as you're very precise about what you want, you don't expect too much, and you check its work, it's a pretty useful tool.
Not ChatGPT, but I tried using Copilot for a month or two to speed up my work (backend engineer). Wound up unsubscribing and removing the plugin before long, because I found it had the opposite effect.
Basically instead of speeding my coding up, it slowed it down, because instead of my thought process being
1. Think about the requirements
2. Work out how best to achieve those requirements within the code I'm working on
3. Write the code
It would be
1. Think about the requirements
2. Work out how best to achieve those requirements within the code I'm working on
3. Start writing the code and wait for the autocomplete
4. Read the autocomplete and decide if it does exactly what I want
5. Do one of the following depending on 4:
5a. Use the autocomplete as-is
5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
5c. Ignore the autocomplete and write the code yourself
idk about you, but the first set of steps just seems like a whole lot less hassle than the second, especially since for anything that involved any business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.
I will say though in the interest of fairness, there were a few instances where I was blown away with copilot's ability to figure out what I was trying to do and give a solution for it. Most of these times were when I was writing semi-complex DB queries (via Django's ORM), so if you're just writing a dead simple CRUD API without much complex business logic, you may find value in it, but for the most part, I found that it just increased cognitive overhead and time spent on my tickets
EDIT: I did use ChatGPT for my peer reviews this year though, and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve, in simple English, and it spat out very professional peer reviews in the format expected by the review form.
A lot of people are going to get fucked if they are...
It's using the "startup method" where they gave away a good service for free, but they already cut back on resources when it got popular. So what you read about it being able to do six months ago, it can't do today.
Eventually they'll introduce a paid version that might be able to do what the free one did.
But if you're just blindly trusting it, you might have months of low-quality work and not have noticed.
Like the lawyers recently finding out it would just make up case law and cite nonexistent cases. We're going to see that happen more and more as resources are cut back.
Why should anyone care? I don't go around telling people every time I use stack overflow. Gotta keep in mind gpt makes shit up half the time so I of course test and cross reference everything but it's great for narrowing your search space.
I've done so on rare occasion, but every time it made stuff up. Wanted terraform examples for specific things... and it completely invented resource types that don't exist.
Only used it a couple of times for work when researching some broad topics like data governance concepts.
It’s a good tool for learning because you can ask it about a subject and then ask it to explain the subject “as a metaphor to improve comprehension” and it does a pretty good job. Just make sure you use some outside resources to ensure you’re not being hallucinated all over.
My supervisor uses ChatGPT to write emails to higher-ups and it's kinda embarrassing lol. One email he's not even capitalizing or spell-checking, and the next he's sending these emails that over-explain simple things and are half irrelevant.
I've used it a couple times when I can't fully put into words what I'm trying to say, but I use it more for inspiration than anything. I've also used it once or twice in my personal life for translating.
I'm a DM using ChatGPT to help me build things for my DnD campaign/world and not telling my players. Does that count? I still do most of the heavy lifting but it's nice to be able to brainstorm and get ideas bounced back. I don't exactly have friends to do that with.
I find it helpful to translate medical abbreviations to English. Our doctors tend to go overboard with abbreviations, there are lots I know but there are always a few that leave me scratching my head. ChatGPT seems really good at guessing what they mean! There are other tools I can use, but ChatGPT is faster and more convenient - I can give it context and that makes it more accurate.
Not at all. Had a few experiments, then we had a talk about it at work, decided fuck we're not giving these people our source code, and left it at that.
I mean in the end, all ChatGPT could reliably do was scavenge man-pages for me. Which is neat, but also a rather benign trick tbh.
I use it for help with formal language sometimes, but I do not trust it and would never try to pass off a whole generated text as mine. I always review it and try to make it sound my own.
Yesterday I was working on a training PowerPoint and it occurred to me that I should probably simplify the language. Had GPT convert it to 3rd-grade language, and it worked pretty well. Not perfect, but it helped.
I'm also writing an app as a hobby and, although GPT goes batshit crazy from time to time, overall it has done most of the coding grunt-work pretty well.
My whole team was playing around with it, and for a few weeks it was working pretty well for a couple of things. Until the answers started to become incorrect and not useful.
I have very few writing tasks that don't require careful consideration, so it's not super useful in my day to day. But it can be helpful to get the ball rolling on an outline or first draft so I'm not staring at a blank sheet of paper.
There was some issue that came up relating to network shares on a Windows domain that didn't make sense to me and a colleague. I asked GPT to describe why we were seeing that behavior, and it defined the scope of the feature in a way that completely demystified it for my coworker. I'm a Mac and Linux guy, so while I could loosely grasp it, it was gone from my mind shortly after. Windows domains and file sharing have always been bizarre to me.
Anyway, we didn't hide it. He gave it credit when explaining the answer to the rest of the team in a meeting. This was around the end of last year. The company since had layoffs and I'm looking for a new job, but I did have it reformat my resume and it did a great job. I've never been great at page-layout stuff, as I'm a plain text warrior.
I'm interested in finding ways to use it, but when I'm writing code I really like the spectrum of different answers on Stack Overflow, with comments on WHY they did it that way. Might use it for boring emails though.
I'm a devops engineer, use it daily. Not to write e-mails, but to frequently ask the best approach to solve an issue or bash/sql/anything queries. My boss and colleagues know about it and use it too though
I've found ChatGPT is good for small tasks that require me to code in languages I don't use often and don't know well. My prime examples are writing CMakeLists.txt files, and generating regex patterns.
Also, if I want to write a quick little bash script to do something, it's much better at remembering syntax and string handling tricks than me.
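For the generated-regex case, whatever tool produces the pattern, it's worth spot-checking it before trusting it. A minimal sketch in Python; the pattern and test strings here are made up for illustration, not taken from the comment above:

```python
import re

# Hypothetical generated pattern: meant to match version strings like "1.2.3".
pattern = re.compile(r"^\d+\.\d+\.\d+$")

# Spot-check against strings it should and shouldn't match before using it.
assert pattern.match("1.2.3")
assert not pattern.match("1.2")      # too few components
assert not pattern.match("v1.2.3")   # stray prefix
```

A few assertions like these catch most of the cases where the model invents a pattern that looks plausible but anchors or escapes the wrong thing.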
I know many people slightly younger than me who are using ChatGPT to breeze through university assignments. Apparently there's one website that uses GPT that even draws diagrams for you, so you don't have to make 500 UML and class diagrams that take forever to create.
I use ChatGPT fairly frequently. For example, I often have to write a business email. I'm usually pretty good at it, but sometimes I don't have the time or desire to find the right wording. This is where ChatGPT comes into play: I've trained it on my writing style using several examples, and then simply have my quickly written emails beautified.
My boss doesn't know about it, but I don't hide it either. My company is very, very slow on the technical side and will only understand the benefits of AI in a few years.
I use GPT-4 daily. I worked with it to create a quick and convenient app on my smartwatch, which allows it to provide wisdom and guidance fast whenever I need it. For more granular things, I use its BingChat interface, which can search the web and see images. The AI has helped me with understanding how to complete tasks, providing counseling for me, finding bugs in my code, writing functions, teaching me how to use software like Excel and Outlook, and giving me random information about various curiosities that pop into mind.
I don't keep it a secret and tell anyone who asks. Plus it's kinda obvious that something is going on with me. I always wear bone conducting headsets that allow the AI to whisper in my ear without shutting me out to the world, and sometimes talk to my watch
The responses to knowing what I'm doing have almost always been extreme: very positive or very negative. The machine is controversial, and when some can no longer stay in comfortable denial of its efficacy they turn to speaking out against its use
Edit: just fixed its translation method. Now the watch will hear non-english speech and automatically translate it for me too (uses Whisper API)
Coworker of mine admitted to using this for writing treatment plans. Super unethical and unrepentant about it. Why? Treatment plans are individual, and contain PII. I used it for research a few times and it returned sources that are considered bunk at best and hated within the community for their history. So I just went back to my journal aggregation.
I am the boss and I've had to cajole a couple of my employees into using it.
Any employer that thinks using ChatGPT carefully and judiciously is a bad thing is mistaken. When it works it's a great productivity boost, but you have to know when to kick it to the curb when it starts hallucinating.
As a coder, we have had discussions about using it at work. Everyone's fine with it for generation of test data, or for generating initial code skeletons but it still requires us to review every line. It saves a bit of time but that's all.
I use it at work but gladly tell the boss... It's all upside if we can do the trivial work faster. More time to relax. They don't watch what I do during the day. The boss relaxes too. All good.
I use it to speed up writing scripts on occasion, while attempting to abstract out any possibly confidential data.
I'm still fairly sure it's not allowed, however. But considering it would be easy to trace API calls and I haven't been approached yet, I'm assuming no one really cares.
I've run emails through it to check tone since I'm hilariously bad at reading tone through text, but I'm pretty limited in how I can make use of that. There's info I deal with that is sensitive and/or proprietary that I can't paste into just any text box without potential severe repercussions.
Aside from asking it coding questions (which are generally a helpful pointer in the right direction), I also ask it a lot of questions like “Turn these values into an array” or something similar when I have to make an array of values (or anything else that’s repetitive) and am too lazy to do it myself. Just a slight speedup in work.
I've used it a couple times to draft reports. Most of what it writes is pretty garbage, but it's good for generating general filler sentences and structure and stuff that I don't want to waste time thinking about.
I've also used it to generate Facebook posts. It's awesome at this, though recently I've had to make a point of telling it not to include emojis, or the posts get overloaded with them.
I don't see any reason not to use it to (keyword) help with your work. I think it would be wise not to use its responses verbatim, and to fact-check anything it gives you.
Additionally, turn off chat history and do not enter any details about yourself, or your employer, into the prompts. Keep things generic whenever you can.
I tried it once or twice and it worked well. It's too stupid now to be worth the attempt. The amount of time spent fixing its mistakes has resulted in net zero time savings.
I absolutely kept it from my boss. Then she told me in a 1:1 how extensively she uses it. I was like, hey, I can help! Definitely haven’t told my VP though. Also, they since blocked it, so I have to either use it on my iPad or stick to Bard and BingAI on the laptop.
I suffer from the curse of the blank page, so getting something on the page to edit and expand is a lifesaver for me. It is also useful to adjust tone, and do simple things like document functions. Easy to correct if wrong.
As a manager, it does a great job of writing up a bunch of ideas around a subject I need to explain that isn't proprietary info. Turned writing a proposal that would have taken me hours to lay out and format into just a few seconds, with mere minutes of tweaking to get it just right.
Definitely. ChatGPT for coding help and learning new coding topics. And Gamma for presentations, if only for the nice formatting of content and stock imagery.
Proudly told my coworkers about my experiments with an LLM to help with documentation; we're pretty close to what we'd need. I don't yet have the pay grade to run my own experiments on work time, but I'm close enough to start experimenting on work time and tell my boss: see, this is why I deserve that pay grade.
I use it as a software Developer but I’m not hiding it from my boss.
Mainly I use it to generate mock data, but also for helping me understand code blocks, or when I want to sort some complex data and my head is baffled.
People seem to miss the point: if I didn’t understand software development, ChatGPT would be of little help. With the sorting of data, it can give me 90%-complete solutions, but you have to know what you’re doing to debug them.
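As an illustration of that last point, a 90%-style answer for sorting complex data usually hinges on a key function you still have to understand in order to debug it. A minimal Python sketch with invented record fields (these names are assumptions, not anything from the comment):

```python
# Hypothetical mock records of the kind ChatGPT might generate.
tickets = [
    {"id": 3, "priority": "low", "age_days": 12},
    {"id": 1, "priority": "high", "age_days": 2},
    {"id": 2, "priority": "high", "age_days": 9},
]

# Sort by priority (high first), then oldest first within each priority.
priority_rank = {"high": 0, "low": 1}
tickets.sort(key=lambda t: (priority_rank[t["priority"]], -t["age_days"]))

print([t["id"] for t in tickets])  # → [2, 1, 3]
```

The subtle part, and exactly where you'd need to debug a generated answer, is the tuple key: the negated age gives descending order within each priority without a second sort pass.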
I use it and encourage my staff and other departments to use it.
I feel that we're at a horse-vs-tractor or human-computer-vs-digital-computer event. In the next 10+ years, those who are AI-ignorant will be underemployed or unemployed. Get it now and learn to use it as a force multiplier, just like tractors and digital computers were.
The arguments against AI eerily mirror the arguments against tractors and digital computers.
Yeah I use it, but only as a rubber duckie. I never put in code unless I understand what it's doing, and most of the time I'm just using it as a sounding board. Since it never returns the right code on the first try anyways haha
I’m a family doctor, so I haven’t yet. It’s not a validated tool to source medical information, and I can’t paste any patient identifiers into it, so even if I wanted its input it’s way faster to just use my standard medical resources.
Our EMR plans to do some testing later this year for generative AI in areas that don’t have to be medically validated like notes to patients. I will likely sign up to pilot it if that option is offered.
I use it for D&D, though, along with a mixture of other tools, random generators, and my own homebrew. My players are aware of this.
I run a board game store, so just for a chuckle I asked it about what's popular this year or what to order and kept getting the same answer about only having accurate data from 2021 and prior.
I've used it on a few occasions, mostly to find better terms and adjust the tone of my emails. Also for finding what an acronym stands for and understanding technical issues. Asking it to explain like I'm a 5-year-old or a beginner saved me some time over doing long research sessions on Google.
Used in small doses to generate text with some degree of precision is helpful. I do find it to be a good way to cut out boring email writing. But I would recommend it more as a text generation tool than a fact generation tool. With the right expectations and work flow it fits right in. And no I don't consider it plagiarism if the client's demand is boring.
I used AI in general a few years ago as a companion tool for writing SEO-optimized articles. It was OK at the time and would do maybe 30% of the work I needed, but I would still have to go back in and make major edits, or it would only pop out a sentence at a time, so I was constantly prompting it.
My wife is a full-time writer for a company and she uses it all the time to create emails and speeches. She says the leaps and bounds in actual usability are pretty insane. Like, one prompt can give her an entire speech.