On Using AI For Commercial Uses
Source (Bluesky)
I ordered some well-rated concert ear protection from the maker's website. The order sat for weeks after a shipping label was printed, apparently forgotten. When I went looking for a phone number or some way to reach a human, all they had was a self-described AI chat robot that just talked down to me. It simply would not believe my experience.
I eventually got the ear protection but I won't be buying from them again. Can't even staff some folks to check email. I eventually found their PR email address but even that was outsourced to a PR firm that never got back to me. Utter shit, AI.
I'm glad you mentioned the company directly as I also want to steer clear of companies like this.
That would've been such an easy disputed charge, and then get the plugs somewhere else. I'm not wasting a second on something like that: just tell my credit card company they didn't uphold their end of the deal, and that's that. I would lose hearing out of spite if this happened to me, because I'm an idiot.
I would lose hearing out of spite if this happened to me
Genuinely admire your self awareness
I've lost hearing for stupider reasons. Spite seems downright reasonable to me.
I'll waste a few moments. It becomes a puzzle. Assuming you manage to make it through the maze, you retrospectively analyze: where would 99% of the country have dropped out of the flow and given up?
Then it's an email to the attorney general if necessary! (That's been rare, but it's warranted when something is egregious.)
🤓
Absolutely. A credit card dispute is an under-used method of recourse.
That's really good to know about these things. They've been on sale through Woot. I guess there's a good reason for that.
Wow, that’s extremely disappointing. I had a really positive experience with them a few years ago when I wanted to exchange what I got (it was too quiet for me), and they just sent me a free pair after I talked to an actual person on their chat thing. It’s good to know that’s not how they are anymore if I ever need to replace them.
Never thought about ear protection for concerts, sounds cool. I'll have to look into other options though, if anyone has any recommendations, let me know
A number of companies make "tuned" ear plugs that let some sound through with a desired frequency curve while reducing SPL to safe levels. I've used Etymotic, which sound great, but I personally like a little more reduction; Alpine, which had enough reduction but too much coloring; and I settled on Earpeace, for about $25 online. Silicone, reusable, easy to clean, and they come with three filters to swap in or out depending on your needs/tastes.
Oh man, sad that that's the customer service, because I deeply love my loops. I was already carrying them with me everywhere I went, so I grabbed a pill keychain thing and attached them to my keys so I'd never forget to grab them.
Yeah, this happened earlier this year. I had lost a pair from a purchase years ago and replaced them. Guessing they are laying off people/support contracts like so many stupid business owners. I was sure my order would be stuck in limbo forever after the experience, but they eventually showed up. Never again.
I think this problem will get worse, because many of the websites used for "doing your own research" will lose the human traffic that watches their ads while more bots scrape their data, reducing the motivation to keep those websites running. Most people take the path of least resistance, so AI search will be the default soon, I think.
Yes, I hate this timeline
It's so annoying because you're correct. I'm finding it harder and harder to use a search engine for things. Hell, the web in general is becoming unusable. It's all shit.
I just feel so mad. I want the old web back. I just want my duckduckgo to work well.
Same here.
Eventually they will pay AI companies to integrate advertisements into the LLMs' outputs.
Omg, I can see it happening. Instead of annoying, intrusive ads, this new type will feel so natural, as if a close friend were suggesting it.
A more dystopian future. Yes, we need it. /s
Alright, I don't like the direction of AI any more than the next person, but this is a pretty fucking wild stance. There are multiple valid applications of AI that I've implemented myself: LTV estimation, document summary / search / categorization, fraud detection, clustering and scoring, video and audio recommendations... "Using AI" is not the problem; "AI charlatan-ing" is. Or in this guy's case, wholesale anti-AI stanning. Shoehorning AI into everything is admittedly a waste, but writing off the entirety of a very broad category (AI) is just silly.
I don't think AI is actually that good at summarizing. It doesn't understand the text and is prone to hallucinate. I wouldn't trust an AI summary for anything important.
Also, search just seems like overkill. If I type in "population of London", I just want to be taken to a reputable site like Wikipedia. I don't want a guessing machine to tell me.
Other use cases maybe. But there are so many poor uses of AI, it's hard to take any of it seriously.
I don’t think AI is actually that good at summarizing. It doesn’t understand the text and is prone to hallucinate. I wouldn’t trust an AI summary for anything important.
This right here. Whenever I've tried using an LLM to summarize, I spent more time fact-checking it (and finding the inevitable misunderstandings and outright hallucinations—they're always there for anything of substance!) than I'd spend writing my own damned summary.
There is, however, one use case I've found where LLMs work better than the alternatives ... provided you do due diligence. To put it bluntly, Google Translate and its ilk from Bing, Baidu, etc. suck. They are god-awful at translating anything but straightforward technical writing or the most tediously dull prose. LLMs are far better translators (and can be instructed to highlight cultural artifacts, possible transcription errors, etc.) ...
... as long as you back-translate in a separate session to check for hallucination.
Oh, and Google Translate-style translators really suck at Classical Chinese. LLMs do much better (provided you do the back-translation check for hallucination).
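For what it's worth, the check is mechanical enough to script. Here's a minimal sketch in Python, assuming a generic chat() helper as a placeholder for whatever LLM client you actually use (not any specific vendor's API):

```python
# Back-translation check: translate, then back-translate in a fresh call
# so the model never sees the original. chat() is a placeholder you wire
# to your own LLM client.
import difflib

def chat(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def translate_with_check(text: str, src: str, dst: str) -> tuple[str, float]:
    translation = chat(
        f"Translate the following {src} text into {dst}. "
        f"Output only the translation.\n\n{text}"
    )
    # Separate call with no shared context, standing in for a "separate session".
    back = chat(
        f"Translate the following {dst} text into {src}. "
        f"Output only the translation.\n\n{translation}"
    )
    # Crude similarity score; a low value flags the passage for human review.
    # Real round-trips are never identical, so don't expect 1.0.
    score = difflib.SequenceMatcher(None, text.lower(), back.lower()).ratio()
    return translation, score
```

A low score doesn't prove hallucination; it just tells you where to look.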
I guess this really depends on the solution you’re working with.
I’ve built a voting system that relays the same query to multiple online and offline LLMs and uses a consensus to complete a task. I chunk a task into smaller more manageable components, and pass those through the system. So one abstract, complex single query becomes a series of simpler asks with a higher chance of success. Is this system perfect? No, but I am not relying on a single LLM to complete it. Deficiencies in one LLM are usually made up for in at least one other LLM, so the system works pretty well. I’ve also reduced the possible kinds of queries down to a much more limited subset, so testing and evaluation of results is easier / possible. This system needs to evaluate the topic and sensitivity of millions of websites. This isn’t something I can do manually, in any reasonable amount of time. A human will be reviewing websites we flag under very specific conditions, but this cuts down on a lot of manual review work.
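The voting step itself is small. Here's a rough sketch of the idea, where the entries in models are placeholders for however you call each online/offline LLM (each takes a prompt and returns a short label):

```python
# Consensus voting across several LLMs: accept an answer only when enough
# models agree, otherwise hand the item off for human review.
from collections import Counter
from typing import Callable, Optional

def classify_by_consensus(
    prompt: str,
    models: list[Callable[[str], str]],
    min_agreement: float = 0.6,
) -> Optional[str]:
    answers = [model(prompt).strip().lower() for model in models]
    label, votes = Counter(answers).most_common(1)[0]
    # Deficiencies in one model get outvoted by the others.
    return label if votes / len(models) >= min_agreement else None
```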
When I said search, I meant offline document search. Like "find all software patents related to fly-by-wire aircraft embedded control systems” from a folder of patents. Something like elastic search would usually work well here too, but then I can dive further and get it to reason about results surfaced from the first query. I absolutely agree that AI powered search is a shitshow.
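Sketched out, that two-stage search looks something like this. The first-pass scoring here is naive keyword overlap standing in for a real retriever (Elasticsearch, embeddings), and chat() is again a placeholder for your LLM client:

```python
# Two-stage offline document search: a cheap first pass narrows the folder,
# then an LLM reasons over the shortlist.
from pathlib import Path

def chat(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def shortlist(folder: str, query: str, keep: int = 10) -> list[Path]:
    terms = set(query.lower().split())

    def score(path: Path) -> int:
        text = path.read_text(errors="ignore").lower()
        return sum(term in text for term in terms)

    return sorted(Path(folder).glob("**/*.txt"), key=score, reverse=True)[:keep]

def ask_about(folder: str, query: str) -> str:
    context = "\n\n".join(
        p.read_text(errors="ignore")[:4000] for p in shortlist(folder, query)
    )
    return chat(f"Given these documents:\n\n{context}\n\nAnswer this: {query}")
```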
I don’t think AI is actually that good at summarizing.
It really depends on the type and size of text you want it to summarize.
For instance, it'll only give you a very, very simplistic overview of a large research paper that uses technical terms, but if you want it to compress a bullet-point list, or take one paragraph and turn it into some bullet points, it'll usually do that without any issues.
Edit: I truly don't understand why I'm getting downvoted for this. LLMs are actually relatively good at summarizing small, low-context pieces of information into bullet points. They're quite literally built as code that models the likelihood of text given an input. Giving one a small amount of text to rewrite or recontextualize plays to one of its best strengths. That's why the technology was originally mostly deployed as a tool to reword small, isolated sections in articles, emails, and papers, before it was improved.
It's when they get to larger pieces of information, like meetings, books, wikipedia articles, etc, that they begin to break down, due to the nature of the technology itself. (context windows, lack of external resources that humans are able to integrate into their writing, but LLMs can't fully incorporate on the same level)
It's just a statistics game. When 99% of stuff that uses or advertises the use of "AI" is garbage, then having a mental heuristic that filters those out is very effective. Yes, you will miss the 1% of useful things, but that's not really an issue for most people. If you need it, you can still look for it.
I have ADHD, and I have to ask A LOT of questions to get my brain around concepts sometimes, often because I need to understand fringe cases before it "clicks". AI has been so fucking helpful for being able to just copy a line from a textbook and say "I'm not sure what they mean by this, can you clarify?" or "it says this, but also this, aren't these two conflicting?" and have it explain. It's been a game changer for me. I still have to keep my bullshit radar on, but that's solved by actually reading to understand and not just taking the answer as is. In fact, scrutinizing the answer against what I've learned and asking further questions has made me feel more engaged with the material.
Most issues with AI are issues with capitalism.
Congratulations to the person who downvoted this
They use a tool to improve their life?! Screw them!
Here’s hoping over the next few years we see little baby-sized language models running on laptops entirely devour the big tech AI companies, and that those models are not only open source but ethically trained. I think that will change this community here.
I get why they’re absolutist (AI sucks for many humans today) but above your post as well you see so much drive-by downvoting, which will obviously chill discussion.
But what about me and my overly simplistic world views where there is no room for nuance? Have you thought about that?
Edit for clarity: Don't hate the science behind the tech, hate the people corrupting the tech for quick profit.
I use Claude to ask it coding questions. I don't use it to generate my code; I mostly use it to do a kind of automated code review to look for obvious pitfalls. It's pretty neat for that.
I don't use any other AI-powered products. I don't let it generate emails, I don't let it analyze data. If your site comes with a built-in LLM-powered feature, I assume the worst.
AI is the new Crypto. If you are vaguely associated with it, I assume there's something criminal going on
AI is the new Crypto. If you are vaguely associated with it, I assume there’s something criminal going on
Nothing to add here. I just like this so much that I want it duplicated.
I mostly use it to do a kind of automated code review
Same here, especially when I'm working with plain JS. Just yesterday I was doing some benchmarking, and when I asked it about something else it fixed a variable reference in my code unprompted, including the small fix as a comment in its answer. I copy-pasted it and it worked perfectly. It's great for small-scope stuff like that.
But then again, I had to turn off Codeium that same day when writing documentation because it kept giving me useless and distracting, paragraph-long suggestions restating the obvious. I know it's not meant for that, but jeez, it reminded me so much of Bing's awfully distracting autocomplete.
I've never used a technology like this before where, when it works, it feels like you're gliding on ice, and when it doesn't, it feels like ice skating on a dirt road.
I use AI to script code.
For my minecraft server.
I rely on expert humans to do tech work for my team and their tools.
I am not anti-AI per se, I just know what works best and what leads to the best results.
Using AI tells people they shouldn't care about your IP, because you clearly didn't care about theirs when it passed through the AI lens.
Stop making using AI sound based
The only time I disagree with this is when the business is substituting "AI" in for "machine learning". I've personally seen that work in applications where traditional methods don't work very well (vision guided industrial robot movement in this case).
These new LLMs and vision models have their place in the software stack. They enable some solutions that were nearly impossible in the past (mandatory xkcd ref: https://xkcd.com/1425/ ; that is now a trivial task).
ML works very well on large data sets and numbers, but it is poor at handling text data. LLMs, in turn, are shit with large data and numbers, but they are good at handling small amounts of text. It is a tool, and properly used, a very powerful one. It is not a magic bullet.
One easy example from real-world requirements: you have five paragraphs of human-written text, and you need to summarize it into a header automatically. Five years ago, if a project owner had requested this feature, I would have said string.substring(100), live with it. Now it is pretty much one line of code.
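Roughly this, with chat() as a placeholder for whichever LLM client you actually use (an assumption, not a specific SDK):

```python
# The "one line", modulo a placeholder chat() helper for your LLM client.
def chat(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def headline(text: str) -> str:
    return chat(f"Summarize this text as a headline of at most 80 characters:\n\n{text}")
```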
Even though I understand your sentiment that different types of AI tools have their place, I'm going to try clarifying some points here. LLMs are machine learning models; the 'P' in 'GPT', "pretrained", refers to how it's already done some learning. Transformer models (GPTs, BERTs, etc.) are a type of deep learning, which is a branch of machine learning, which is a field of artificial intelligence. (Edit: so for a specific example of how this looks nested: AI > ML > DL > Transformer architecture > GPT > ChatGPT > ChatGPT 4.0.) The kind of "vision guided industrial robot movement" the original commenter mentions is a type of deep learning (so they're correct that it's machine learning, but incorrect that it's not AI). At this point, it's downright plausible that the tool they're describing uses a transformer model instead of traditional deep learning like a CNN or RNN.
I don't entirely understand your assertion that "LLMs are shit with large data and numbers", because LLMs work with the largest data in human history. If you mean you can't feed a large, structured dataset into ChatGPT and expect it to be able to categorize new information from that dataset, then sure, because: 1) it's pretrained, not a blank slate that specializes on the new data you give it, and 2) it's taking it in as plaintext rather than a structured format. If you took a transformer model and trained it on the "large data and numbers", it would work better than traditional ML.

Non-transformer machine learning models do work with text data; LSTMs (a type of RNN) do exactly this. The problem is that they're just way too inefficient computationally to scale well to training on gargantuan datasets (and consequently don't generate text well, if you want to use them for generation and not just categorization). In general, transformer models do literally everything better than traditional machine learning models (unless you're doing binary classification on data which is always cleanly bisected, in which case the perceptron reigns supreme /s).

Generally, though, yes: if you're using LLMs to do things like image recognition or taking in large datasets for classification, what you probably have isn't just an LLM; it's a series of transformer models working in unison, one of which will be an LLM.
Edit: When I mentioned LSTMs, I should clarify this isn't just text data: RNNs (which LSTMs are a type of) are designed to work on pieces of data which don't have a definite length, e.g. a text article, an audio clip, and so forth. The description of the transformer architecture in 2017 catalyzed generative AI so rapidly because it could train so efficiently on data not of a fixed size and then spit out data not of a fixed size. That is: like an RNN, the input data is not of a fixed size, and the transformed output data is not of a fixed size. Unlike an RNN, the data processing is vastly more efficient in a transformer because it can make great use of parallelization. RNNs were our main tool for taking in variable-length, unstructured data and categorizing it (or generating something new from it; these processes are more similar than you'd think), and since that describes most data, suddenly all data was trivially up for grabs.
Now it is pretty much one line of code.
… and 5kW of GPU time. 😆
Huh? Deep learning is a subset of machine learning is a subset of AI. This is like saying a gardening center is substituting "flowers" in for "chrysanthemums".
I use AI every day in my daily work; it writes my emails, performance reviews, project updates, etc.
.....and yeah, that checks out!
I used to work in a software architecture team that used AI to write retrospectives and upcoming project plans, and everything needed to have a positive spin that sounds good but means nothing.
Extra funny when I find out people use AI to summarize it. So the comical cycle of bullet points to text and back again is real.
I'd had enough of working at the company when my team was working on the new "fantastic" platform, cutting corners to hit the deadline on something nobody will use... and it was being built for the explicit purpose of making a better development and working environment.
LLMs != AI
LLMs strict subset of AI
Pls be a bit more specific about what you hate about the wide field of AI. Otherwise it's almost like saying you hate computers, because they can run applications that you don't like.
I use AI as a tool. AI should be a tool to help with your job, not to take jobs, same as a calculator. Yes, people will be able to code faster with AI's help, so that might mean less demand, at least in IT. But you still have to know exactly what to ask for.
Meanwhile, Terence Tao: https://youtube.com/watch?v=zZr54G7ec7A
You have standards?
It's complete clownshit to think you'll still be able to tell AI content apart within 2 years. Maybe less than that. Veo2 is insane.
People are lazy when they use calculators!
If I ran a business, I think the only thing I'd have "AI" do would be basic social media stuff, because I never want to get into that kind of thing myself. I think I could make it in Windows BASIC, though:
Reply to questions with the company phone number/email, post company pictures from a certain folder every 5 hours or so, like a post from a random local person, like a random post from a random person anywhere (extra funny if it's porn or a horrible opinion).
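Something like this; no model needed, just the standard library and hypothetical post_photo()/reply_with_contact_info() handlers standing in for whatever platform API you'd actually call:

```python
# Bare-bones sketch of the posting side of the automation described above.
# Both handlers are hypothetical stand-ins for a real social platform API.
import time

def reply_with_contact_info(question: str) -> None:
    print(f"Re: {question!r} -> call 555-0100 or email info@example.com")

def post_photo(path: str) -> None:
    print(f"Posted {path}")

while True:
    post_photo("company_photos/next.jpg")
    time.sleep(5 * 60 * 60)  # "every 5 hours or so"
```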
You're describing a predictable automation.
"AI" is not predictable.
That's why I put quotation marks around it
I have to disagree with that one, but not completely. It really depends on what type of company I'm interacting with: is it an independent small company or a big corp? Also, what type of AI (generating pictures, generating summaries, etc.), and whether the application actually fits. For example, if a small business generates a logo or a picture, is the style of the picture right or is it the same as everyone else's? Did anyone check whether the image was even correct? Etc. But big corps, yeah, they can go fuck themselves; they have the budget to pay artists.
So.... about his AI generated picture beside his name...
Just because it's generic doesn't mean it's AI generated
I've been using a cartoonish profile picture for my work emails, teams portrait and other communications for many years. There is almost no way to tell that kind of icon apart from AI generated icons at that size anyway.
And even if it was, that's not the point of the conversation. Fixating on that is such bad faith that it betrays a defensiveness about AI-generated content, so it's particularly important that someone like you gets this message. Let me reiterate clearly:
I have a role of responsibility: I hire people and use company budget to make decisions about the other companies and products we'll be paying for. When making these decisions I don't look at people's email signatures or the icons they use. I look at their presentation materials, and if that shit is AI-generated, I know immediately it's just a couple of people pretending to be an agency, or a company that doesn't quality-control its slides and presentation decks. It shows laziness. I would rather go with a company that leads with data and specs than one that leans on graphics anyway, so if those graphics are also lazy AF, that's a hard pass. Not my first rodeo; I've learned to listen to experience.
Cool. My company's AI work on medical scans has detected thousands upon thousands of tumors and respiratory diseases long before even the most well-trained doctor could have spotted them, and as a result has saved many of those people's lives. But it's good to know we're all just lazy pieces of shit because we use AI.
Assuming what you're describing works (and I have no particular reason to doubt it, beyond the generally poor reputation of AI), that's a different beast than "lol, I fired all the copywriters, artists, and support staff so I, the owner, could keep more profits for myself!" Or, "I didn't pay attention in English 101 and don't know how to write, so I'll have expensive autosuggest do it for me."
that’s a different beast
I think what was being implied though was that the original poster was saying that any use or talk of AI by a company immediately invalidates it, regardless of there being any specific traits like firing workers present. (e.g. "Using AI" was the only prerequisite they mentioned)
So it seems like, based on the original wording, if they saw a hospital going "we use top of the line AI to identify tumors and respiratory diseases early" they would just disregard that hospital entirely, without actually caring how the AI works, is implemented, or affects the employment of the other people working there, even though it's wholly beneficial.
At least, that's just my reading of it though.
Yeah, that's my point. AI has a lot of problems that need to be addressed, but people are getting so mad about AI that the conversation around it is getting more and more extreme, to the point that people are calling all AI bad.
When people talk about "AI" nowadays, they're usually talking about LLMs and other generative AI, especially if it's used to replace workers or human effort. Analytical AI is perfectly valid and is a wonderful tool!
Machine learning is not artificial intelligence.
The problem is that anything even remotely related to AI is just being called "AI," whether it's by the average person or marketing people.
So when you go to a company's website and you see "powered by AI," they could be talking about LLMs, or an ML model to detect cancer, and the average person won't know the difference between the technologies.
So if someone universally rejects anything that says it "uses AI" just because what's usually called "AI" is just badly implemented LLMs that make the experience worse, they're going to inevitably catch nearly every ML model in the crossfire too, since most companies are calling their ML use cases "AI powered," and that means rejecting companies that develop models like those that detect tumors, predict protein folding patterns, identify anomalies in other health characteristics, optimize traffic routes in cities, etc, even if those use cases aren't even related to LLMs and all the flaws they often bring.
No, they are the same thing.
The core algorithm we built upon is practically the same one used by AI image generators, the main difference is that we have deeper convolutional layers and more of them and we don't do any GAN stuff that the newer image generators use.
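Not our actual model, obviously, but here's a toy illustration of that shared-backbone idea, assuming PyTorch: a small stack of convolutional layers feeding a classification head. Depth, data, and the head are what separate a scan classifier from an image generator built on the same kinds of layers:

```python
# Toy convolutional classifier (assumes PyTorch). Real diagnostic models
# are far deeper and trained on huge labeled datasets, but the building
# blocks look like this.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One fake 64x64 grayscale "scan" in, two logits out (e.g. tumor / no tumor).
logits = TinyScanClassifier()(torch.randn(1, 1, 64, 64))
```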
Sounds like Brian can't figure out AI.
Wrong /c/ my guy
Did you forget a /s ?
Hes saying that the businesses he's interacting with can't.
I don’t care if one loves or hates AI, this shit is funny.
Can no one take a joke anymore?
Ironically, an LLM could’ve made his post grammatically correct and understandable.
If you had a hard time understanding the point being made in that post, you could probably be replaced by AI and we wouldn't notice the difference.
His post is fairly grammatically correct and quite understandable.