"Sam Altman is one of the dullest, most incurious and least creative people to walk this earth."
  • "If you don't know the subject, you can't tell if the summary is good" is a basic lesson that so many people refuse to learn.

  • Featured
    Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd December 2024
  • From the replies:

    In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere, messes up, the company and the authorities should theoretically be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus, how often it’s confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.

    Also suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that it was not infringing on the other company’s IP, but you don’t have that here. What if someone gets hurt? Do you really want to make the case that you just gave ChatGPT a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs, are they going to include listening to ChatGPT? If you do, then you need to make sure that OpenAI holds its program to the same documentation standards and certifications that you have, and I don’t think they want to tangle with the FDA at the moment.

    There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.

    And a good sneer:

    With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.

  • Not A Sneer But: "Princ-wiki-a Mathematica: Wikipedia Editing and Mathematics" and a related blog post. Maybe of interest to those amongst us whomst like to complain.

  • the team have a bit of an elon moment

    "Oh shit, which one of them endorsed the German neo-Nazis?"

    Aaron likes a porn post

    "Whew."

  • ruh roh
  • Please don't make posts to TechTakes that are just bare images without a description. The description can be simple, like "Screenshot from YouTube saying 'Ad blockers violate YouTube's Terms of Service'". Some of our participants rely upon screenreaders. Or are crotchety old people who remember an Internet that wasn't all three websites sharing snapshots of the other two websites.

  • I don't see what useful information the motte and bailey lingo actually conveys that equivocation and deception and bait-and-switch didn't. And I distrust any turn of phrase popularized in the LessWrong-o-sphere. If they like it, what bad mental habits does it appeal to?

    The original coiner appears to be in with the brain-freezing crowd. He's written about the game theory of "braving the woke mob" for a Tory rag.

  • In the department of not smelling at all like desperation:

    On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free.

    It had a very focused area of expertise, but for sincerity, you couldn't beat 1-900-MIX-A-LOT.

  • Petition to replace "motte and bailey" per the Batman clause with "lying like a dipshit".

  • Microsoft refuses a documentation fix because AI might not understand text in tables
  • Wojciakowski took the critiques on board. “Wow, tough crowd … I’ve learned today that you are sensitive to ensuring human readability.”

    Christ, what an asshole.

  • Kevin Roose is gonna write "A Latecomer's Guide To AI" and the bubble will burst hard two months later
  • Max Kennerly's reply:

    For a client I recently reviewed a redlined contract where the counterparty used an "AI-powered contract platform." It had inserted into the contract a provision entirely contrary to their own interests.

    So I left it in there.

    Please, go ahead, use AI lawyers. It's better for my clients.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 15th December 2024
  • Adam Christopher comments on a story in Publishers Weekly.

    Says the CEO of HarperCollins on AI:

    "One idea is a “talking book,” where a book sits atop a large language model, allowing readers to converse with an AI facsimile of its author."

    Please, just make it stop, somebody.

    Robert Evans adds,

    there's a pretty good short story idea in some publisher offering an AI facsimile of Harlan Ellison that then tortures its readers to death

    Kevin Kruse observes,

    I guess this means that HarperCollins is getting out of the business of publishing actual books by actual people, because no one worth a damn is ever going to sign a contract to publish with an outfit with this much fucking contempt for its authors.

  • There's a whole lot of assuming-the-conclusion in advocacy for many-worlds interpretations — sometimes from philosophers, and all the time from Yuddites online. If you make a whole bunch of tacit assumptions, starting with those about how mathematics relates to physical reality, you end up in MWI country. And if you make sure your assumptions stay tacit, you can act like an MWI is the only answer, and everyone else is being un-mutual and irrational.

    (I use the plural interpretations here because there's not just one flavor of MWIce cream. The people who take it seriously have been arguing amongst one another about how to make it work for half a century now. What does it mean for one event to be more probable than another if all events always happen? When is one "world" distinct from another? The arguments iterate like the construction of a fractal curve.)

  • TPOT hits the big time!
  • "Ah," said Arthur, "this is obviously some strange usage of the word scientist that I wasn't previously aware of."

  • The peer reviewers didn't say anything about it because they never saw it: It's an unilluminating comparison thrown into the press release but not included in the actual paper.

  • The Professor Assigns Their Own Book — But Now With a Tech Bubble in the Middle Step

    The UCLA news office boasts, "Comparative lit class will be first in Humanities Division to use UCLA-developed AI system".

    The logic the professor gives completely baffles me:

    > "Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically."

    I'm trying to parse that. Really and truly I am. But it just sounds like this: "Normally, I would [do work]. But now, I can actually [do the same work]."

    I mean, was this person somehow teaching comparative literature in a way that didn't involve reading the primary sources and, I'unno, comparing them?

    The sales talk in the news release is really going all in on selling that undercoat.

    > Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching — and offer students a very similar experience. And with AI-generated lesson plans and writing exercises for TAs, students in each discussion section can be assured they’re receiving comparable instruction to those in other sections.

    Back in my day, we called that "having a book" and "writing a lesson plan".

    Yeah, going from lecture notes and slides to something shaped like a book is hard. I know because I've fuckin' done it. And because I put in the work, I got the benefit of improving my own understanding by refining my presentation. As the old saying goes, "Want to learn a subject? Teach it." Moreover, doing the work means that I can take a little pride in the result. Serving slop is the cafeteria's job.

    (Hat tip.)

    13
    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 6 October 2025

    Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

    Any awful.systems sub may be subsneered in this subthread, techtakes or no.

    If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

    > The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
    >
    > Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

    Last week's thread

    (Semi-obligatory thanks to @dgerard for starting this)

    317
    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 29 September 2024


    167
    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 22 September 2024


    200
    Off-Topic: Music Recommendation Thread

    So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?

    39
    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 7 July 2024


    221
    Honest Government Ad | AI

    Bumping this up from the comments.

    2
    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 16 June 2024


    14
    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 June 2024


    14
    Neil Gaiman on spicy autocomplete
    www.tumblr.com Neil Gaiman

    I apologize if you’ve been asked this question before I’m sure you have, but how do you feel about AI in writing? One of my teachers was “writing” stories using ChatGPT then was bragging about how go…

    > Many magazines have closed their submission portals because people thought they could send in AI-written stories.
    >
    > For years I would tell people who wanted to be writers that the only way to be a writer was to write your own stories because elves would not come in the night and do it for you.
    >
    > With AI, drunk plagiaristic elves who cannot actually write and would not know an idea or a sentence if it bit their little elvish arses will actually turn up and write something unpublishable for you. This is not a good thing.

    2
    Cybertruck owners allege pedal problem as Tesla suspends deliveries
    arstechnica.com

    Owners will have to wait until April 20 for deliveries to resume.

    > Tesla's troubled Cybertruck appears to have hit yet another speed bump. Over the weekend, dozens of waiting customers reported that their impending deliveries had been canceled due to "an unexpected delay regarding the preparation of your vehicle."
    >
    > Tesla has not announced an official stop sale or recall, and as of now, the reason for the suspended deliveries is unknown. But it's possible the electric pickup truck has a problem with its accelerator. [...] Yesterday, a Cybertruck owner on TikTok posted a video showing how the metal cover of his accelerator pedal allegedly worked itself partially loose and became jammed underneath part of the dash. The driver was able to stop the car with the brakes and put it in park. At the beginning of the month, another Cybertruck owner claimed to have crashed into a light pole due to an unintended acceleration problem.

    Meanwhile, layoffs!

    0
    Google Books Is Indexing AI-Generated Garbage
    www.404media.co

    Google said it will continue to evaluate its approach “as the world of book publishing evolves.”

    > Google Books is indexing low quality, AI-generated books that will turn up in search results, and could possibly impact Google Ngram viewer, an important tool used by researchers to track language use throughout history.

    0
    Elon Musk’s Tunnel Reportedly Oozing With Skin-Burning Chemical Sludge
    futurism.com

    Elon Musk's Boring Company has only built a few miles of tunnel underneath Vegas — but those tunnels have taken a toxic toll.

    [Eupalinos of Megara appears out of a time portal from ancient Ionia] Wow, you guys must be really good at digging tunnels by now, right?

    0
    New York taxpayers are paying for spicy autocomplete to tell landlords they can discriminate
    themarkup.org NYC’s AI Chatbot Tells Businesses to Break the Law – The Markup

    The Microsoft-powered bot says bosses can take workers’ tips and that landlords can discriminate based on source of income

    > In October, New York City announced a plan to harness the power of artificial intelligence to improve the business of government. The announcement included a surprising centerpiece: an AI-powered chatbot that would provide New Yorkers with information on starting and operating a business in the city.
    >
    > The problem, however, is that the city’s chatbot is telling businesses to break the law.

    0
    Chris Langan and the "Cognitive Theoretic Model of the Universe"? Oh boy!

    a lesswrong: 47-minute read extolling the ambition and insights of Christopher Langan's "CTMU"

    a science blogger back in the day: not so impressed

    > [I]t’s sort of like saying “I’m going to fix the sink in my bathroom by replacing the leaky washer with the color blue”, or “I’m going to fly to the moon by correctly spelling my left leg.”

    Langan, incidentally, is a 9/11 truther, a believer in the "white genocide" conspiracy theory and much more besides.

    0
    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 31 March 2024


    0
    Elsevier keeps publishing articles written by spicy autocomplete

    If you've been around, you may know Elsevier for surveillance publishing. Old hands will recall their running arms fairs. To this storied history we can add "automated bullshit pipeline".

    In Surfaces and Interfaces, online 17 February 2024:

    > Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2].

    In Radiology Case Reports, online 8 March 2024:

    > In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice.

    Edit to add this erratum:

    > The authors apologize for including the AI language model statement on page 4 of the above-named article, below Table 3, and for failing to include the Declaration of Generative AI and AI-assisted Technologies in Scientific Writing, as required by the journal’s policies and recommended by reviewers during revision.

    Edit again to add this article in Urban Climate:

    > The World Health Organization (WHO) defines HW as “Sustained periods of uncharacteristically high temperatures that increase morbidity and mortality”. Certainly, here are a few examples of evidence supporting the WHO definition of heatwaves as periods of uncharacteristically high temperatures that increase morbidity and mortality

    And this one in Energy:

    > Certainly, here are some potential areas for future research that could be explored.

    Can't forget this one in TrAC Trends in Analytical Chemistry:

    > Certainly, here are some key research gaps in the current field of MNPs research

    Or this one in Trends in Food Science & Technology:

    > Certainly, here are some areas for future research regarding eggplant peel anthocyanins,

    And we mustn't ignore this item in Waste Management Bulletin:

    > When all the information is combined, this report will assist us in making more informed decisions for a more sustainable and brighter future. Certainly, here are some matters of potential concern to consider.

    The authors of this article in Journal of Energy Storage seem to have used GlurgeBot as a replacement for basic formatting:

    > Certainly, here's the text without bullet points:

    0
    "Super-Recursive Algorithms" and other Wikipedian Weirdness

    So, there I was, trying to remember the title of a book I had read bits of, and I thought to check a Wikipedia article that might have referred to it. And there, in "External links", was ... "Wikiversity hosts a discussion with the Bard chatbot on Quantum mechanics".

    How much carbon did you have to burn, and how many Kenyan workers did you have to call the N-word, in order to get a garbled and confused "history" of science? (There's a lot wrong and even self-contradictory with what the stochastic parrot says, which isn't worth unweaving in detail; perhaps the worst part is that its statement of the uncertainty principle is a blurry JPEG of the average over all verbal statements of the uncertainty principle, most of which are wrong.) So, a mediocre but mostly unremarkable page gets supplemented with a "resource" that is actively harmful. Hooray.

    Meanwhile, over in this discussion thread, we've been taking a look at the Wikipedia article Super-recursive algorithm. It's rambling and unclear, throwing together all sorts of things that somebody somewhere called an exotic kind of computation, while seemingly not grasping the basics of the ordinary theory the new thing is supposedly moving beyond.

    So: What's the worst/weirdest Wikipedia article in your field of specialization?

    10
    SneerClub Classic: Big Yud's Mad Men Cosplay


    Yudkowsky writes,

    > How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

    Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

    0
    Billionaires push AI apocalypse risk through college student groups

    > Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

    See, it's like marketing the idea, in a multilevel way

    0
    blakestacey @awful.systems