Stubsack: weekly thread for sneers not worth an entire post, week ending 2 March 2025

awful.systems/post/3552170

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

163 comments
  • Whilst flipping through LessWrong for things to point and laugh at, I discovered that Sabine Hossenfelder is apparently talking about "AI" now.

    Sabine Hossenfelder is a theoretical physicist and science communicator who provides analysis and commentary on a variety of science and technology topics.

    She also provides transphobia using false balance rhetoric.

    x.AI released its most recent model, Grok 3, a week ago. Grok 3 outperformed on most benchmarks

    And truly, no fucks were given.

    Grok 3 still features the same problems of previous LLM models, including hallucinations

    The fundamental problem remains fundamental? You don't say.

  • New piece from Baldur Bjarnason: AI and Esoteric Fascism, which focuses heavily on our very good friends and their link to AI as a whole. Ending quote's pretty solid, so I'm dropping it here:

    I believe that the current “AI” bubble is an outright Neo-Nazi project that cannot be separated from the thugs and fascists that seem to be taking over the US and indivisible from the 21st century iteration of Esoteric Neo-Nazi mysticism that is the TESCREAL bundle of ideologies.

    If that is true, then there is simply no scope for fair or ethical use of these systems.

    Anyways, here's my personal sidenote:

As I've mentioned a bajillion times before, I've predicted this AI bubble would kill AI as a concept, as its myriad harms and failures indelibly associate AI with glue pizzas, artists getting screwed, and other such awful things. After reading through this, it's clear I've failed to take into account the political elements of this bubble, and how they'd affect things.

    My main prediction hasn't changed - I still expect AI as a concept to die once this bubble bursts - but I suspect that AI as a concept will be treated as an inherently fascist concept, and any attempts to revive it will face active ridicule, if not outright hostility.

    • Well, how do you feel about robotics?

      On one hand, I fully agree with you. AI is a rebranding of cybernetics, and both fields are fundamentally inseparable from robotics. The goal of robotics is to create artificial slaves who will labor without wages or solidarity. We're all ethically obliged to question the way that robots affect our lives.

      On the other hand, machine learning (ML) isn't going anywhere. In my oversimplification of history, ML was originally developed by Markov and Shannon to make chatbots and predict the weather; we still want to predict the weather, so even a complete death of the chatbot industry won't kill ML. Similarly, some robotics and cybernetics research is still useful even when not applied to replacing humans; robotics is where we learned to apply kinematics, and cybernetics gave us the concept of a massive system that we only partially see and interact with, leading to systems theory.

      Here's the kicker: at the end of the day, most people will straight-up refuse to grok that robotics is about slavery. They'll usually refuse to even examine the etymology, let alone the history of dozens of sci-fi authors exploring how robots are slaves or the reality today of robots serving humans in a variety of scenarios. They fundamentally don't see that humans are aggressively chauvinist and exceptionalist in their conception of work and labor. It's a painful and slow conversation just to get them to see the word robota.
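A sidenote on the Markov point above: the "chatbot" lineage really does start with something as simple as a word-level Markov chain. Here's a minimal toy sketch of the idea — not any historical system, just an illustration:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain: repeatedly pick a random follower of the last word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word was never followed by anything
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, "the", length=5))
```

Shannon played essentially the same game by hand with letter and word frequencies in his 1948 paper, and weather prediction is the same "next state from current state" move applied to atmospheric data instead of words.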

      • Good food for thought, but a lot of that rubs me the wrong way. Slaves are people, machines are not. Slaves are capable of suffering, machines are not. Slaves are robbed of agency they would have if not enslaved, machines would not have agency either way. In a science fiction world with humanlike artificial intelligence the distinction would be more muddled, but back in this reality equivocating between robotics and slavery while ignoring these very important distinctions is just sophistry. Call it chauvinism and exceptionalism all you want, but I think the rights of a farmhand are more important than the rights of a tractor.

It's not that robotics is morally uncomplicated. Luddites had a point. Many people choose to work even in dangerous, painful, degrading or otherwise harmful jobs, because the alternative is poverty. To mechanize such work would reduce immediate harm from the nature of the work itself, but cause indirect harm if the workers are left without income. Overconsumption goes hand in hand with overproduction, and automation can increase the production of things that are ultimately harmful. Mechanization has frequently led to centralization of wealth by giving one party an insurmountable competitive advantage over its competition.

        One could take the position that the desire to have work performed for the lowest cost possible is in itself immoral, but that would need some elaboration as well. It's true that automation benefits capital by removing workers' needs from the equation, but it's bad reductionism to call that its only purpose. Is the goal of PPE just to make workers complain less about injuries? I bought a dishwasher recently. Did I do it in order to not pay myself wages or have solidarity for myself when washing dishes by hand?

        The etymology part is not convincing either. Would it really make a material difference if more people called them "automata" or something? Čapek chose to name the artificial humanoid workers in his play after an archaic Czech word for serfdom and it caught on. It's interesting trivia, but it's not particularly telling specifically because most people don't know the etymology of the term. The point would be a lot stronger if we called it "slavetronics" or "indenture engineering" instead of robotics. You say cybernetics is inseparable from robotics but I don't see how steering a ship is related to feudalist mode of agricultural production.

  • so Firefox now has terms of use with this text in them:

    When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

this is bad. it feels like the driving force behind this is the legal requirements behind Mozilla’s AI features that nobody asked for, but functionally these terms give Mozilla the rights to everything you do in Firefox, effectively without limitation (because legally, the justification they give could apply to anything you do in your browser)

    I haven’t taken the rebranded forks of Firefox very seriously before, but they might be worth taking a close look at now, since apparently these terms of use only apply to the use of mainline Firefox itself and not any of the rebrands

    • The corporate dickriding over at Reddit about this is exhausting.

      When you use Firefox or really any browser, you're giving it information like website addresses, form data, or uploaded files. The browser uses this information to make it easier to interact with websites and online services. That's all it is saying.

      How on Earth did I use Firefox to interact with websites and services in the last 20+ years then without that permission?

Luckily the majority opinion even over there seems to be that this sucks bad, which might be in no small part due to a lot of Firefox's remaining userbase being privacy-conscious nerds like me. So, hey, they're pissing on the boots of even more of their users and hope no one will care. And the worst part? It will probably work, because anything Chromium-based is completely fucking useless now that they've gutted uBlock Origin (and even the projects that retain Manifest v2 support don't work as well as Firefox, especially when it comes to blocking YouTube ads), and most WebKit-based projects have either switched to Chromium or disappeared (RIP Midori).

      • tech apologists love to tell you the legal terms attached to the software you’re using don’t matter, then the instant the obvious happens, they immediately switch to telling you it’s your fault for not reading the legal terms they said weren’t a big deal. this post and its follow-up from the same poster are a particularly good take on this.

        also:

        When you use Firefox or really any browser, you’re giving it information

        nobody gives a fuck about that, we’re all technically gifted enough to realize the browser receives input on interaction. the problem is Mozilla receiving my website addresses, form data, and uploaded files (and much more), and in fact getting a no-restriction license for them and their partners to do what they please with that data. that’s new, that’s what the terms of use cover, and that’s the line they crossed. don’t let anybody get that shit twisted — including the people behind one of the supposedly privacy-focused Firefox forks

did some digging and apparently the (moz poster) is this person. check the patents.

      mega groan

    • related, but tonight I will pour one out for Conkeror

    • NGL I always wanted to use IceWeasel just to say I did, but now it might be because it's the last bastion!

Sigh. Not long ago I switched from Vivaldi back to Firefox because it has better privacy-related add-ons. A while ago, on one machine as a test, I started using LibreWolf, after I went down the rabbit hole of "how do I configure Firefox for privacy, including so that it doesn't send stuff to Mozilla" and was appalled at how difficult that is. Now with this latest bullshit from Mozilla... guess I'll switch everything over to LibreWolf now, or go back to Vivaldi...

      Really hope they'll leave Thunderbird alone with such crap...

      I often wish I could just give up on web browsers entirely, but unfortunately that's not practical.

I hate how Firefox has gotten to this point of being the best, by a smaller and smaller margin, of a fucking shit bunch

    • Yeah...that could be a real deal breaker. Doesn't this give them the right to MITM all traffic coming through the browser?

Maybe. The latter part of the sentence matters, too.

        …you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

        Good luck getting a lawyer to give a definitive answer to what exactly counts as helping you do those things.

        The whole sentence is a little ambiguous itself. Does the "as you indicate with your use of Firefox" refer to

        • A) the whole sentence (i.e. "[You using Firefox indicates that] when you upload […] you hereby grant […] to help you navigate, experience, and interact with online content.") or
        • B) only to the last part of it (i.e. "When you upload […] you hereby grant […] to help you navigate, experience, and interact with online content [in the ways that you] indicate with your use of Firefox.")

        B seems fairly innocuous and the intended effect is probably "if you send data to a website using our browser, don't sue us for sending the data you asked us to send". The mere act of uploading or inputting information through Firefox does not — in my (technical, not legal) expert opinion — indicate that Mozilla could help me navigate, experience, or interact with online content by MITMing the uploaded or input data.

        A is a lot scarier, since the interpretation of what it means to "help you navigate, experience, and interact with online content" does not depend on how you use Firefox. Anything that Mozilla can successfully argue to help you do those things is fair game, whether you ask for it or not, which seems a lot more abusable.

        Opera Mini was (is?) an embedded/mobile browser for Symbian dumbphones and other similar devices that passed all traffic through a proxy to handle rendering on server side and reduce processing effort on the (typically slow and limited) mobile devices. This could be construed as helping the user navigate, experience, and interact with online content, so there is precedent of a browser MITMing its users' data for arguably helpful purposes.

        I would never accept hijacking my web upload and input data for training an LLM or whatever mass data harvesting fad du jour happens to be in fashion at a given time and I do not consider it helpful for any purpose for a web browser to do such things. Alas, the 800-pound gorilla might have some expensive reality-bending lawyers on its side.

      • legally, it absolutely does, and it gets even worse when you dig deeper. Mozilla is really going all in on being a bunch of marketing creeps.

after Proton’s latest PR push to paint their CEO as absolutely not a fascist failed to convince much of anyone (feat. a medium article I’m not gonna read cause it’s a waste of my time getting spread around by brand new accounts who mostly only seem to post about how much they like Proton), they decided to quietly bow out of mastodon and switch to the much more private and secure platform of… fucking Reddit of all things, where Proton can moderate critical comments out of existence (unfun fact: in spite of what most redditors believe, there’s no rule against companies moderating their own subs — it’s an etiquette violation, meaning nobody gives a fuck) and accounts that only post in defense of Proton won’t stick out like a sore thumb

    • I decided to waste my fucking time and read the awful medium article that keeps getting linked and, boy fucking howdy, it’s exactly what I thought it was. let’s start with the conclusion first:

      TLDR: my conclusion is that it is far more likely that Proton and its CEO are actually liberals.

      which is just a really weird thing to treat like a revelation when we’ve very recently seen a ton of liberal CEOs implement fash policies, including one (Zuckerberg) who briefly considered running as a Democrat before he was advised that nobody found him the least bit appealing

      anyway, let’s go to the quick bullet points this piece of shit deserves:

      • it’s posted by an account that hasn’t done anything else on medium
      • the entire thing is written like stealth PR and a bunch of points are copied straight out of Proton’s marketing. in fact, the tone and structure are so off that I’m just barely not willing to accuse this article of being generated by an LLM, because it’s just barely not repetitive enough to entirely read like AI
      • they keep doing the “nobody (especially the filthy redditors) read Andy or Proton’s actual posts in full” rhetorical technique, which is very funny when people on mastodon were frantically linking archives of those posts after they got deleted, and the posts on Reddit were deleted in a way that was designed to provoke confusion and cover Proton’s tracks. I can’t blame anyone for going on word of mouth if they couldn’t find an archive link.
      • like every liberal-presenting CEO turned shithead, Andy has previously donated a lot of money to organizations associated with the Democrats
      • not a single word about how Proton’s tied up in bitcoin or boosting LLMs and where that places them politically
      • also nothing about how powerless the non-profit associated with Proton is in practice
      • Andy can’t be a shithead, he hired a small handful of feminists and occasionally tweets about how much he supports left-wing causes! you know, on the nazi site
      • e: “However, within the context of Trump’s original post that Andy is quoting, it seems more likely that “big business” = Big Tech, and “little guys” = Little Tech, but this is not obvious if you did not see the original post, and this therefore caused outrage online.” what does this mean. that’s exactly the context I read into Andy’s original post, and it’s a fucking ridiculous thing to say and a massive techfash dogwhistle loud and shrill enough that everybody heard it. it’s fucking weird to falsely claim you’re being misinterpreted and then give an explanation that’s completely in line with the damning shit you’re being accused of, then for someone else to come along and pretend that somehow absolves you

      there’s more in there but I’m tired of reading this article, the writing style really is fucking exhausting

      e: also can someone tell me how shit like this can persuade anyone? it’s one of the most obvious, least persuasive puff pieces I’ve ever read. did the people who love proton more than they love privacy need something, anything to latch onto to justify how much they like the product?

@self @BlueMonday1984 I really wish I hadn't moved to Proton - something I did partly because they had a presence here, and seemed to be a Mastodon sort of business.

      I would change again, but that is difficult.

So they had the new Claude hooked up to some tools so that it could play Pokemon Red. Somewhat impressive (at least to me!): it was able to beat Lt. Surge after several days of play. They had a stream demo'ing it on Twitch, and despite the on-paper result of getting 3 gym badges, the poor fella got stuck in Viridian Forest trying to find the exit to the maze.

    As far as finding the exit goes... I guess you could say he was stumped? (MODS PLEASE DONT BAN)

    strim if anyone is curious. Yes, i know this is clever advertising for anthropic, but i do find it cute and maybe someone else will?

    https://www.twitch.tv/claudeplayspokemon

    • It looks fun!

      My inner grouch wanted to add:

      There were a metric shit ton of hand-crafted, artisanal, exhaustive full-text walkthroughs for the OG Pokemon games even twenty years ago. They're all part of the training corpus, so all you have to do to make this work is automate prompt generation based on current state and then capture the most likely key words in the LLM's outputs for conversion to game commands. Plus, a lot of "intelligence" could be hiding in the invisible "glue" that ties the whole together, up to and including an Actual Individual.

I'd be shocked if this worked for a 2025 release.
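The harness described above (game state → prompt → keyword capture → button press) can be sketched roughly like this. To be clear, every name here is made up for illustration — Anthropic hasn't published their glue code:

```python
# Hypothetical sketch of an LLM-plays-Pokemon harness: flatten the game
# state into a prompt, then map the first recognised keyword in the
# model's reply to a button press. All names are invented for this sketch.

KEYWORD_TO_BUTTON = {
    "up": "UP", "down": "DOWN", "left": "LEFT", "right": "RIGHT",
    "talk": "A", "confirm": "A", "cancel": "B", "menu": "START",
}

def build_prompt(state):
    """Turn the current game state into text for the model."""
    return (
        f"You are playing Pokemon Red. Location: {state['location']}. "
        f"Party: {', '.join(state['party'])}. Goal: {state['goal']}. "
        "What single action do you take next?"
    )

def reply_to_command(reply):
    """Pick the first recognised keyword in the model's reply."""
    for word in reply.lower().split():
        word = word.strip(".,!?")
        if word in KEYWORD_TO_BUTTON:
            return KEYWORD_TO_BUTTON[word]
    return None  # no actionable keyword; a real harness would re-prompt

state = {"location": "Viridian Forest", "party": ["Nidoran"], "goal": "find the exit"}
prompt = build_prompt(state)
# Pretend this came back from the model:
reply = "I should head up toward the northern exit."
print(reply_to_command(reply))  # prints UP
```

Note how much "intelligence" lives outside the model in a setup like this: whoever writes the state serializer and the keyword mapper is doing a lot of the navigating.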

I had a similar discussion with one of my friends! Anthropic is bragging that the model was not trained to play Pokemon, but Pokemon Red has massive speedrunning wikis that, based on the reasoning traces, are clearly in the training data. Like, the model trace said it was "training a nidoran to level 12 b.c. at level 12 nidoran learns double kick which will help against brock's rock type pokemon", so it's not going totally blind into the game. There were also a couple of outputs when it got stuck for several hours where it started printing things like "Based on the hint..." which seemed kind of sus. I wouldn't be surprised if there is some additional hand-holding going on in the background based on the game state (i.e., go to Oak's, get a starter, go north to Viridian, etc.) that helps guide the model. In fact, I'd be surprised if this wasn't the case.

One more tidbit: I checked in and it's been stuck on the first floor of Mt Moon for 6 hours. Just out of curiosity, I asked an OAI model "what do I do if im stuck in mount moon 1F" and it spat out a step-by-step guide on how to navigate the cave, with the location of each exit and what to look for. So yeah, even without someone hardcoding hints into the model, just knowing the game state and querying what's next suffices to get the next step to progress the game.

  • I stumbled upon this poster while trying to figure out what linux distro normal people are using these days, and there’s something about their particular brand of confident incorrectness. please enjoy the posts of someone who’s either a relatively finely tuned impolite disagreement bot or a human very carefully emulating one:

    • weirdly extremely into everything red hat
    • outrageously bad takes, repeated frequently in all the Linux beginner subs, never called out because “hey fucker I know you’re bullshitting and no I don’t have to explain myself” gets punished by the mods of those subs
    • very quickly carries conversation into nested subthreads where the downvotes can’t get them
    • accuses other posters of using AI to generate the posts they disagree with
    • when called out for sounding like AI, explains that they use it “only to translate”
    • just the perfect embodiment of a fucking terrible linux guy, I swear this is where the microsoft research money goes
    • as in, distro for normal people? (for arbitrary value of normal, that is) distrowatch ranks mint #1, and i also use it because i'm lazy and while i could use something else, It Just Works™

      • that’s the one I ended up grabbing, and from the setup-only usage I’ve been giving it, it’s surprisingly good

    • there’s a post where they claim that secure boot is worthless on linux (other than fedora of course) and it’s not because secure boot itself is worthless but because someone can just put malware in your .bashrc and, like, chef’s kiss

      • They're really fond of copypasta:

        The issue with Arch isn't the installation, but rather system maintenance. Users are expected to handle system upgrades, manage the underlying software stack, configure MAC (Mandatory Access Control), write profiles for it, set up kernel module blacklists, and more. Failing to do this results in a less secure operating system.
        The Arch installation process does not automatically set up security features, and tools like Pacman lack the comprehensive system maintenance capabilities found in package managers like DNF or APT, which means you'll still need to intervene manually. Updates go beyond just stability and package version upgrades. When software that came pre-installed with the base OS reaches end-of-life (EOL) and no longer receives security fixes, Pacman can't help—you'll need to intervene manually. In contrast, DNF and APT can automatically update or replace underlying software components as needed. For example, DNF in Fedora handles transitions like moving from PulseAudio to PipeWire, which can enhance security and usability. In contrast, pacman requires users to manually implement such changes. This means you need to stay updated with the latest software developments and adjust your system as needed.

    • These are also — and I do not believe there are any use cases that justify this — not a counterbalance for the ruinous financial and environmental costs of generative AI. It is the leaded gasoline of tech, where the boost to engine performance didn’t outweigh the horrific health impacts it inflicted.

      ed reads techtakes? i wonder how far this analogy disseminated

    • Baldur's given his thoughts on Bluesky - he suspects Zitron's downplayed some of AI's risks, chiefly in coding:

      There’s even reason to believe that Ed’s downplaying some of the risks because they’re hard to quantify:

      • The only plausible growth story today for the stock market as a whole is magical “AI” productivity growth. What happens to the market when that story fails?
      • Coding isn’t the biggest “win” for LLMs but its biggest risk

      Software dev has a bad habit of skipping research and design and just shipping poorly thought-out prototypes as products. These systems get increasingly harder to update over time and bugs proliferate. LLMs for coding magnify that risk.

      We’re seeing companies ship software nobody in the company understands, with edge cases nobody is aware of, and a host of bugs. LLMs lead to code bases that are harder to understand, buggier, and much less secure.

      LLMs for coding isn’t a productivity boon but the birth of a major Y2K-style crisis. Fixing Y2K cost the world’s economy over $500 billion USD (corrected for inflation), most of it borne by US institutions and companies.

      And Y2K wasn’t promising magical growth on the order of trillions so the perceived loss of a failed AI Bubble in the eyes of the stock market would be much higher

      On a related note, I suspect programming/software engineering's public image is going to spectacularly tank in the coming years - between the impending Y2K-style crisis Baldur points out, Silicon Valley going all-in on sucking up to Trump, and the myriad ways the slop-nami has hurt artists and non-artists alike, the pieces are in place to paint an image of programmers as incompetent fools at best and unrepentant fascists at worst.

    • Bruh, Anthropic is so cooked. < 1 billion in rev, and 5 billion cash burn. No wonder Dario looks so panicked promising super intelligence + the end of disease in t minus 2 years, he needs to find the world's biggest suckers to shovel the money into the furnace.

As a side note, rumored Claude 3.7(12378752395) benchmarks are making the rounds and they are uh, not great. Still trailing o1/o3/grok except for in the "Agentic coding benchmark" (kek), so I guess they went all in on the AI swe angle. But if they aren't pushing the frontier, then there's no way for them to pull customers from Xcels or people who have never heard of Claude in the first place.

      On second thought, this is a big brain move. If no one is making API calls to Clauderino, they aren't wasting money on the compute they can't afford. The only winning move is to not play.

In today's ACX comment spotlight, Elon-anons urge each other to trust the plan:

    • Did Daniel B. Miller forget to type a whole paragraph or was completing that thought with even the tiniest bit of insight or slightly useful implications just too much thinking? Indeed, maybe people don't usually take over governments just for the sake of taking over governments. Maybe renowned shithead Elon Musk wants to use his power as an unelected head of shadow government to accomplish a goal. Nice job coming up with that one, dear Daniel B. Miller.

      What could be the true ambition behind his attempt to control the entire state apparatus of the wealthiest nation state in the world? Probably to go to a place really far away where the air is unbreathable, it's deathly cold, water is hard to get and no life is known to exist. Certainly that is his main reason to perform weird purges to rid the government of everyone who knows what a database is or leans politically to the left of Vidkun Quisling.

      On one hand I wish someone were there to "yes-and?" citizen Miller to add just one more sentence to give a semblance of a conclusion to this coathook abortion of an attempted syllogism, but on the other I would not expect a conclusion from the honored gentleperson Danny Bee of the house of Miller to be any more palatable than the inanity preceding.

      Alas, I cannot be quite as kind to comrade anomie, whose curt yet vapid reply serves only to flaunt the esteemed responder's vocabulary of rat jargon and refute the saying "brevity is the soul of wit". Leave it to old friend of Sneer Club Niklas Boström to coin a heptasyllabic latinate compound for the concept that sometimes a thing can help you do multiple different other things. A supposed example of this phenomenon is that a machine programmed to consider making paperclips important and not programmed to consider humans existing important could consider making paperclips important and not consider humans existing important. I question whether this and other thought experiments on the linked Wikipedia page — fascinating as they are in a particular sense — are necessary or even helpful to elucidate the idea that political power could potentially be useful for furthering certain goals, possibly including interplanetary travel. Right.

      • Don't forget that the soil is incredibly toxic and that what little atmosphere exists smells like getting continuously Dutch Ovened forever

    • People must believe there is a plan, as the alternative 'I was conned by some asshole' is too much to bear.

Can you blame someone for hoping that maybe Musk might plan to yeet himself to Mars? I'd be in favor, though I'd settle for cheaper ways to achieve similar results.
