Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this.)
One would assume this to be a slam dunk, but then again one would assume the people who founded an entire organization about establishing ground truths via rationalist debate would actually be good at rationally debating.
Musk got banned in Path of Exile 2 for cheating. I'm not sure what angle to take here, but you gotta admit that it's a bit funny/satisfying. (how does such a busy [assume I'm making air quotes with my fingers] guy have time to play video games? why is he so obsessed with status that he'd try to cheat his way up the leaderboards, and not for the first time either?)
Occasionally, they get an entire sector wrong — see the excess of enthusiasm for cleantech in the 2000s, or the crypto blow-up of the past few years.
In aggregate, though, and on average, they’re usually right.
First off, please note that this describes two of the most recent tech bubbles and doesn't provide any recent counterexamples of a seemingly-ridiculous new gimmick that actually stuck around past the initial bubble. Effectively this says: yes, they're 0 for 2 in the last 20 years, but this time they can't all be wrong!
But more than that, I think there's an underlying error in acting like "the tech sector" is a healthy and competitive market in the first place. They may not directly coordinate or operate in absolute lockstep, but the main drivers of crypto, generative AI, metaverse, SaaS, and so much of the current enshittifying and dead-ending tech industry come back to a relatively small circle of people who all live in the same moneyed Silicon Valley cultural and informational bubble. We can even identify the ideological underpinnings of these decisions in the TESCREAL bundle, effective altruism and accelerationism, and "dark enlightenment" tech-fascism. This is not a ruthlessly competitive market that ferrets out weakness. It's more like a shared cult of personality that selects for whatever makes the guys on top feel good about themselves. The question isn't "how can all these different groups be wrong without someone undercutting them", it's "how can these few dozen guys who share an ideology and information bubble keep making the exact same mistakes as one another", and the answer should be to question why anyone expects anything else!
Brief overlapping thoughts between parenting and AI nonsense, presented without editing.
The second L in LLM remains the inescapable heart of the problem. Even if you accept that the kind of "thinking" (modeling based on input and prediction of expected next input) that AI does is closely analogous to how people think, anyone who has had a kid should be able to understand the massive volume of information they take in.
Compare the information density of English text with the available data on the world you get from sight, hearing, taste, smell, touch, proprioception, and however many other senses you want to include. Then consider that language is inherently an imperfect tool used to communicate our perceptions of reality, and doesn't actually include data on reality itself. The human child is getting a fire hose of unfiltered reality, while the in-training LLM is getting a trickle of what the writers and labellers of their training data perceive and write about. But before we get to just feeding a live camera and audio feed, haptic sensors, chemical tests, and whatever else into a machine learning model and seeing if it spits out a person, consider how ambiguous and impractical labelling all that data would be. At the very least, I imagine doing so would work out to be less efficient than raising an actual human being and training them in the desired tasks.
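The bandwidth gap can be put in very rough numbers. A back-of-envelope sketch, where both constants are assumed ballpark figures from the perceptual-psychology literature (reading throughput, estimated optic-nerve data rate), not anything measured here:

```python
# Rough estimates only -- both constants are assumed ballpark figures,
# not measurements. The point is the order of magnitude, not precision.
READING_BITS_PER_SEC = 50        # ~5 words/s at ~10 bits of information per word
RETINA_BITS_PER_SEC = 9_000_000  # commonly cited optic-nerve estimate (~9 Mbit/s)

# How much wider the visual fire hose is than the text trickle:
ratio = RETINA_BITS_PER_SEC / READING_BITS_PER_SEC
print(f"vision alone carries ~{ratio:,.0f}x the bandwidth of reading")
```

And that's one sense, before hearing, touch, proprioception, and the rest — so even on these crude assumptions, the "trickle vs. fire hose" framing holds by several orders of magnitude.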
Human children are also not immune to "hallucinations" in the form of spurious correlations. I would wager every toddler has at least a couple of attempts at cargo cult behavior or inexplicable fears as they try to reason a way to interact with the world based off of very little actual information about it. This feeds into both versions of the above problem, since the difference between reality and lies about reality cannot be meaningfully discerned from text alone and the limited amount of information being processed means any correction is inevitably going to be slower than explaining to a child that finding a "Happy Birthday" sticker doesn't immediately make it their (or anyone else's) birthday.
Human children are able to get human parents to put up with their nonsense by taking advantage of being unbearably sweet and adorable. Maybe the abundance of horny chatbots and softcore porn generators is a warped funhouse-mirror version of the same concept. I will allow you to fill in the joke about Silicon Valley libertarians yourself.
IDK. Felt thoughtful, might try to organize it on morewrite later.
I'm not too surprised by this happening (and I see the specter of the same thing approaching with Salt, bought by VMware, which was in turn bought by Broadcom...), but god am I tired of how fucking effective the method is
It’s a good trick to be instantly dismissed.
No, really, that’s the latest I had in terms of company policy. If you’re caught using AI for anything, you’re out the door. It’s a lawsuit waiting to happen (and a lawsuit we cannot defend against). Gross misconduct, not eligible for rehire, and all that. Same as intentionally misrepresenting data (because it is).
(Pharma)
You want my off-the-cuff take, this is definitely gonna fuck c.ai's image even further, and could potentially leave them wide open to a lawsuit.
On a wider front, this is likely gonna give AI another black eye, and push us one step further to the utter destruction of AI as a concept I predicted a couple months ago.
(it was always shaky, but mostly only shown by infosec folks who signed up as amazon s3, etc)
TL;DR: scammer buys .com domain for journalist’s name, registers it on bluesky, demands money to hand it over or face reputational damage, uses other fake accounts with plausible names and backgrounds to encourage the mark to pay up. Fun stuff. The best bit is when the sockpuppets got one of the real people they were pretending to be banned from bluesky.
On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free.
It had a very focused area of expertise, but for sincerity, you couldn't beat 1-900-MIX-A-LOT.
Y'all, with Proton enshittifying (scribe and wallet nonsense), I think I am never going to sign up for another all-in-one service like this. Now I gotta determine what to do about:
Proton Mail
Proton VPN
Proton Drive
Proton Calendar
and I'd be forced to reassess my password manager if I hadn't already been using BitWarden when Proton Pass came out.
Self-hosting is a non-starter (too lazy to remember a new password for my luggage). Any thoughts? Are other Proton users here jumping ship? Should I just resign myself to using Proton until they eventually force some stupid ass "Chatbot will look at the contents of your Drive and tell you which authorities to surrender yourself to" feature on me?
Interesting article about netflix. I hadn’t really thought about the scale of their shitty forgettable movie generation, but there are apparently hundreds and hundreds of these things with big names attached and no-one watches them and no-one has heard of them and apparently Netflix doesn’t care about this because they can pitch magic numbers to their shareholders and everyone is happy.
“What are these movies?” the Hollywood producer asked me. “Are they successful movies? Are they not? They have famous people in them. They get put out by major studios. And yet because we don’t have any reliable numbers from the streamers, we actually don’t know how many people have watched them. So what are they? If no one knows about them, if no one saw them, are they just something that people who are in them can talk about in meetings to get other jobs? Are we all just trying to keep the ball rolling so we’re just getting paid and having jobs, but no one’s really watching any of this stuff? When does the bubble burst? No one has any fucking clue.”
What a colossal waste of money, brains, time and talent. I can see who the market for stuff like sora is, now.
Lol lmao (For the people not into Dutch: our main alt-right politician lost a lot of money investing in the Luna cryptocurrency. Of course he is into crypto, and of course this site (a pro-crypto site, hence the pivot to his bitcoin holdings; no shock, as we know the cryptofash pay the fash in crypto) is using the 'register now and get the first 10 bucks free!' trick casinos also pull.)
Days since last comparison of Chat-GPT to shitty university student: zero
More broadly I think it makes more sense to view LLMs as an advanced rubber ducking tool - like a broadly knowledgeable undergrad you can bounce ideas off to help refine your thinking, but whom you should always fact check because they can often be confidently wrong.