Stubsack: weekly thread for sneers not worth an entire post, week ending 3rd November 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
almost every smart person I talk to in tech is in favor of mandatory eugenic polygynous marriages in order to deal with the fertility crisis. people are absolutely fed up with the lefty approach of using generational insolvency as a pretextual cudgel to install socialism.
I wonder if the OpenAI habit of naming their models after the previous ones' embarrassing failures is meant as an SEO trick. Google "chatgpt strawberry" and the top result is about o1. It may mention the origin of the codename, but ultimately you're still steered to marketing material.
Either way, I'm looking forward to their upcoming AI models Malpractice, Forgery, KiddieSmut, ClassAction, SecuritiesFraud and Lemonparty.
We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.
Firstly, if this is literally true they're completely fucking cooked.
Had a first-hand AI encounter today at the grocery store. The self-checkout now has a script that monitors an overhead video feed to make sure you're not getting tricky about what scanned and what got put into the bagging area, and if it thinks you're shady it will stop you from proceeding and summon an employee, with no notification to you that something is wrong.
The new self-checkout process is as follows:
Scan your item
Hold the item plainly before you so the overhead camera doesn't get confused, looking like a Catholic priest about to deliver communion.
Place item in bagging area. Try not to have to shift things around to find a place.
Swear as the non-muteable voice instructions tell you to bag "your... Item." Legitimately feels like they got as far as assembling the voice lines before anyone realised that having the compu-checker read every purchase out loud would lead to, at best, an unworkable cacophony if not several immediate lawsuits.
GOTO 1
Even as antisocial and impatient as I am I've found self-checkout to be a UX disaster, but somehow it keeps getting worse.
NASB, I had a jarring experience this morning watching Patrick Boyle's latest video "Big Tech is Going Nuclear!" (not gonna link it) where 5 mins in he introduces the sponsor and it's an AI presentation slide generator, which he said he used for the images in his video. This right after he mentioned the data on generating one image using the same amount of energy as charging a smartphone. The thing is, he seems careful not to mention that it's a gen-AI product (he never says AI), just a piece of software that helps make presentations.
It kinda made me panic-stop the video, like an instant "well, done with you" - not sure if he continued to make a joke of it or anything. I mean, I'm sure (I hope) he was given a lot of money for the spot, but damn! Just when I thought I had a foundational understanding of people
A woman was scheduled to give a talk at an AI conference. The organizers ran her photo through an AI image expansion program to get the aspect ratio right (how did we ever manage to show photos of speakers before AI existed?).
The AI image expansion invents a bra / undershirt which wasn't visible in the original photo.
Was browsing eBay, looking for some piece of older used consumer electronics. Found a listing where the description text was written like crappy ad copy. Cheap, over-the-top praise of the thing. But zero words about the condition of the used item, i.e. the actually important part was completely missing. And then at the end of the description it said... this description text was generated by AI.
AI slop is like mold, it really gets everywhere and ruins everything.
Over the summer, Jesse Pollak, a cryptocurrency investor and executive at Coinbase, launched Abundant Oakland, an advocacy organization that funds “moderate” candidates running in Oakland races. The organization is explicitly linked to similarly named entities in San Francisco and Santa Monica.
Abundant Oakland has a related political action committee, Vibrant Oakland, which, campaign filings show, has received donations from Pollak ($115,000), the Oakland police officers association ($50,000), cryptocurrency executive Konstantin Richter ($60,000), the northern California carpenters regional council ($150,000) and a Pac controlled by Piedmont landlord Chris Moore ($100,000).
My enshittification story*: Instagram has been suggesting people for me to follow. It markets them to me by saying “friend X follows this person!” But friend X does not follow this person. Friend X has no tenable connection to this person. Why are you bullshitting me, Zuck? Is the autoplag outflow drain hooked up to Insta?
(GitHub project supposedly for AI-assisted mass job application, including using the AI to tailor your resume to each job posting. God, I'm terrified of ever having to return to the job market; this is fucking insane.)
Bezos' open interference in the Washington Post's editorial section has pushed Walter Bright into a very funny series of public admissions that he did not have to make. See the orange site here for his ongoing libertarian meltdown.
Want to get even better results with GenAI? The new Google Prompting Essentials course will teach you 5 easy steps to write effective prompts for consistent, useful results.
Note: Got an email ad from Coursera. I had to highlight the message because the email's text was white-on-white.
How the chicken fried fuck does anyone make a course about "prompt engineering"? It's like seeing a weird sports guy systematize his pregame rituals and then sell a course on it.
Step 1: Grow a beard, preferably one like that Leonidas guy in 300.
Step 2: If your team wins, never wash those clothes, and be sure to wear those clothes every game day. That's not stank, that's the luck diffusing out into the universe.
Step 3: Use the force to make the ball go where it needs to go. Also use it to scatter and confuse the opposition.
Step 4: Ask God(s) to intervene, he/she/they love(s) your team more!
Step 5: Change allegiance to a better team if things go downhill, because that means your current team has lost the Mandate of Heaven.
On a personal note, it feels to me like any use of AI, regardless of context, is gonna be treated as a public slight against artists, if not against art as a concept, going forward. Arguably, it already has been treated that way for a while.
I specifically bring this up because Tilghman wasn't some random CEO or big-name animator - he was just some random college student making a non-profit passion project with basically zero budget or connections. It speaks volumes about how artists view AI that even someone like him got raked over the coals for using it.
TL;DR: Our main characters have bilked a very credulous US State Department. 100 million tax dollars will now be converted into entropy. There will also be committees.
"I think were going to add a whole new category of content which is AI generated or AI summarized content, or existing content pulled together by AI in some way,” the Meta CEO said. “And I think that that’s gonna be very exciting for Facebook and Instagram and maybe Threads, or other kinds of feed experiences over time."
Facebook is already one Meta platform where AI generated content, sometimes referred to as “AI slop,” is increasingly common.
In separate investigations completed by the blockchain firms Chaos Labs and Inca Digital and shared exclusively with Fortune, analysts found that Polymarket activity exhibited signs of wash trading, a form of market manipulation where shares are bought and sold, often simultaneously and repeatedly, to create a false impression of volume and activity. Chaos Labs found that wash trading constituted around one-third of trading volume on Polymarket’s presidential market, while Inca Digital found that a “significant portion of the volume” on the market could be attributed to potential wash trading, according to its report.
Adobe is going all in on generative AI models and tools, even if that means turning away creators who dislike the technology. Artists who refuse to embrace AI in their work are “not going to be successful in this new world without using it,” says Alexandru Costin, vice president of generative AI at Adobe.
Personally, I think this is gonna backfire pretty damn hard on Adobe - artists already distrust and hate them as it is, and Procreate, their chief competition, earned a lot of artists' goodwill by publicly rejecting gen-AI some time ago. All this will likely do is push artists to jump ship, viewing Adobe as actively hostile to their continued existence.
On a wider note, it seems pretty clear to me that Alexandru Costin has drunk the technological-determinist Kool-Aid and come to believe autoplag's dominance is inevitable. He's not the first person I've seen drink that particular Kool-Aid, and he's almost certainly not the last; I suspect that mass Kool-Aid drinking is what's fueling the tech industry's relentless doubling-down on gen-AI. A doubling-down I expect will bite them in the ass quite spectacularly.
In a previous post of mine, I noted how the public generally feels that the jobs people want to do (mainly creative jobs) are the ones being chiefly threatened by AI, with the dangerous, boring and generally garbage jobs being left relatively untouched.
Looking at this, I suspect the public views anyone working on or boosting AI as someone who knows full well their actions are threatening people's livelihoods and dream jobs, and who is actively, willingly and intentionally threatening them anyway, either out of envy of those who took the time to develop the skills, or out of simple capitalist greed.
Got linked to this UFO sightings timeline in Popbitch today. Thought it looked quite interesting and quite fun. Then I realized the information about individual UFO sightings was being supplied by bloody Co-pilot, and therefore was probably even less accurate than the average UFOlogy treatise.
PS: Does anyone know anything about using Arc-GIS to make maps? I have an assignment due tomorrow and I'm bricking it.
It's interesting that not even Apple, with all their marketing know-how, can come up with a convincing reason why users might need "Apple Intelligence"[1]. These new ads are not quite as terrible as that previous "Crush" AI ad, but especially the one with the birthday... I find it just alienating.
Whatever one may think about Apple and their business practices, they are typically very good at marketing. So if even Apple can't find a good consumer pitch for GenAI crap, I don't think anyone can.
I know it's Halloween, but this popped up in my feed and was too spooky even for me 😱
As a side note, what are people's feelings about Wolfram? Smart dude for sho, but some of the shit he says just comes across as straight-up pseudoscientific gobbledygook. But can he out-guru Big Yud in a 1v1 on Final Destination (fox only, no items)? 🤔
You want my take, the employee in question (who also got a GoFundMe) should sue Logan for defamation - solid case aside, I wanna see that blonde fucker get humbled for once.
To repeat a previous point of mine, it seems pretty safe to assume "luddite horror" is gonna become a bit of a trend. To make a specific (if unrelated) prediction, I imagine we're gonna see AI systems and/or their supporters become pretty popular villains in the future - the AI bubble's produced plenty of resentment towards AI specifically and tech more generally, and the public's gonna find plenty of catharsis in watching them go down.
Is there a group that more consistently makes category errors than computer scientists? Can we mandate Philosophy 101 as a pre-req to shitting out research papers?
I’m currently using Flutter. It’s good! And useful! Much better than AI. It being mostly developed by Google has been a bit of a worry since Google is known to shoot itself in the foot by killing off its own products.
So while it’s no big deal to have an open source codebase forked, just wanted to highlight this part of the article:
Carroll also claimed that Google’s focus on AI caused the Flutter team to deprioritize desktop platforms, and he stressed the difficulty of working with the current Flutter team
Described as “Flutter+” by Carroll, Flock “will remain constantly up to date with Flutter,” he said. Flock “will add important bug fixes, and popular community features, which the Flutter team either can’t, or won’t implement.”
For some reason the previous week's thread doesn't show up on the feed for me (and didn't all week)... nvm, I somehow managed to block froztbyte by accident, no idea how
a quick interest check: I kind of want to use our deployment’s spare capacity to host an invite-only WriteFreely instance where our regulars can host longer form articles
…but WriteFreely’s UI is so sub-optimal that the official instance (write.as) runs a proprietary fork with a lot of the jank removed, and I don’t really consider WF to be production-ready out of the box.
we can point the WF backend at arbitrary directories for its templates, page definitions, and static assets though, so maybe I could host those on codeberg and set up a CI job that’d pull main every time it updates, so we could collaboratively improve WF’s frontend? (a rough sketch of what that sync job might look like is at the end of this post) it’s not a job I want to take on alone (our main instance needs to take priority), but a community-run WF instance would be pretty unique
the pros of doing this are that WriteFreely seems to have very slim resource requirements and it’ll at least reliably host long-form Markdown on the web
the downsides are again, it’s janky as fuck (it only supports Mailgun of all things for email, but if you disable that the frontend will still claim it can send password reset emails… but it’ll check the config and display an error if you click the reset link??? but they could have just hidden the reset UI entirely with the same logic???? also I don’t like the editing experience), and it’s not really what I’d consider federated — it shoots an Article into ActivityPub whenever you post, but it’s one-way so replies, boosts, and favorites won’t show up from ActivityPub which makes it feel a bit pointless. there might be a frontend-only way to link a blog post to the Mastodon or Lemmy thread it’s associated with on another instance though, which would allow for a type of comment system? but I haven’t looked much into it. write.as just has a separate proprietary service for comments that nobody else can use.
this definitely won’t replace Wordpress but does it sound like an interesting project to take on?
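for the sync job mentioned above, here’s a minimal sketch of the kind of thing I’m picturing, run from cron or a systemd timer on the host. everything in it is an assumption: the checkout path, the origin/main remote, and the writefreely.service unit name are stand-ins, and it assumes WF only re-reads templates, pages, and static assets on startup (hence the restart)

```python
#!/usr/bin/env python3
"""Toy sync job: fast-forward the frontend repo and restart WriteFreely if it changed.

Hypothetical sketch only: the path, remote/branch, and service name below are
placeholders, not anything our deployment actually uses yet.
"""
import subprocess

REPO_DIR = "/srv/writefreely-frontend"  # assumed checkout of the codeberg repo
SERVICE = "writefreely.service"         # assumed systemd unit for the WF backend


def git(*args: str) -> str:
    """Run a git command inside the frontend checkout and return its stdout."""
    result = subprocess.run(
        ["git", "-C", REPO_DIR, *args],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()


def main() -> None:
    before = git("rev-parse", "HEAD")
    git("pull", "--ff-only", "origin", "main")
    after = git("rev-parse", "HEAD")

    if before != after:
        # templates/pages/static changed; restart so WF re-reads them
        # (needs to run as root or as a user allowed to restart the unit)
        subprocess.run(["systemctl", "restart", SERVICE], check=True)
        print(f"updated {before[:7]} -> {after[:7]}, restarted {SERVICE}")
    else:
        print("already up to date")


if __name__ == "__main__":
    main()
```

if it turns out WF picks up template changes without a restart, the systemctl call can just be dropped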
On a personal note, I suspect "luddite horror" (alternatively called "techno-horror") is probably gonna blow up in popularity pretty soon - between boiling resentment against tech in general, and the impending burst of the AI bubble, I suspect audiences are gonna be hungry as hell for that kinda stuff.
Additionally, I suspect AI as a whole (and likely its supporters) will find itself becoming a pop-culture punchline much the same way NFTs/crypto did. Beyond getting pushed into everyone's faces whether they liked it or not, public embarrassments like Google's glue pizza debacle and ChatGPT's fake cases have already given comedians plenty of material to use, whilst the ongoing slop-nami turned "AI" as a term into a pretty scathing pejorative within the context of creative arts.