Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd December 2024
  • Debating post-truth weirdos for large sums of money may seem like a good business idea at first, until you realize how insufferable the debate format is (and how no one normal would judge such a thing).

  • "Sam Altman is one of the dullest, most incurious and least creative people to walk this earth."
  • Sadly all my best text encoding stories would make me identifiable to coworkers so I can't share them here. Because there's been some funny stuff over the years. Wait where did I go wrong that I have multiple text encoding stories?

    That said I mostly just deal with normal stuff like UTF-8, UTF-16, Latin1, and ASCII.
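Since we're on text encodings anyway, here's a minimal sketch (my own illustration, not a story from the thread) of why those four encodings generate war stories: the same bytes mean different things depending on which encoding you assume.

```python
# The same byte sequence decodes to different text under different
# assumed encodings -- the root of most mojibake incidents.
data = "café".encode("utf-8")                  # b'caf\xc3\xa9'

print(data.decode("utf-8"))                    # café
print(data.decode("latin-1"))                  # cafÃ© (UTF-8 bytes misread as Latin-1)
print(data.decode("ascii", errors="replace"))  # caf�� (ASCII has no é)
print("café".encode("utf-16-le"))              # b'c\x00a\x00f\x00\xe9\x00' (2 bytes/char)
```

The Latin-1 decode never fails (every byte value is a valid Latin-1 character), which is exactly why that particular mojibake slips through silently.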

  • Senior software engineer here. I have had to tell coworkers "don't trust anything ChatGPT tells you about text encoding" after it made something up about text encoding.

  • Remember when you could read through all the search results on Google rather than being limited to the first hundred or so results like today? And boolean search operators actually worked and weren't hidden away behind a "beware of leopard" sign? Pepperidge Farm remembers.

  • Casey Newton drinks the kool-aid
  • But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

    So what harms has Mr. Yudkowsky enumerated? Off the top of my head I can remember:

    1. Diamondoid bacteria
    2. What if there's like a dangerous AI in the closet server and it tries to convince you to connect your Nintendo 3DS to it so it can wreak havoc on the internet and your only job is to ignore it and play your nintendo but it's so clever and sexy
    3. What if we're already in hell: the hell of living in a universe where people get dust in their eyes sometimes?
    4. What if we're already in purgatory? If so we might be able to talk to future robot gods using time travel; well not real time travel, more like make believe time travel. Wouldn't that be spooky?
  • Mass resignations at Intelligence as Elsevier actually bothers slightly for once
  • Ah yes, the journal of intelligence:

    First, Kanazawa’s (2008) computations of geographic distance used Pythagoras’ theorem and so the paper assumed that the earth is flat (Gelade, 2008). Second, these computations imply that ancestors of indigenous populations of, say, South America traveled direct routes across the Atlantic rather than via Eurasia and the Bering Strait.
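The flat-earth criticism is concrete: Pythagoras on raw latitude/longitude ignores the sphere entirely. A hedged sketch of the difference (the city coordinates and the ~111 km/degree scale factor are my own illustrative choices, not from the paper):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def flat_earth_km(lat1, lon1, lat2, lon2):
    """Pythagoras on raw degrees (the criticized approach), scaled by
    ~111 km per degree. Only roughly right over short distances."""
    return 111.0 * sqrt((lat1 - lat2) ** 2 + (lon1 - lon2) ** 2)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance on a spherical earth (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Oslo to Anchorage: similar latitudes, ~160 degrees of longitude apart.
print(flat_earth_km(59.91, 10.75, 61.22, -149.90))  # ~17,800 km (wildly inflated)
print(haversine_km(59.91, 10.75, 61.22, -149.90))   # ~6,400 km (over the pole)
```

At high latitudes the flat-earth version can be off by nearly a factor of three, which is the kind of error the quoted critique is pointing at.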

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 15th December 2024
  • Mirror bacteria? Boring! I want an evil twin from the negaverse who looks exactly like me except right hande-- oh heck. What if I'm the mirror twin?

  • what the heck is an eigenrobot??

    Update: It is too late, Sneerclub, I have seen everything.

  • I mean, unrestricted skepticism is the appropriate response to any press release, especially coming out of silicon valley megacorps these days.

    Indeed, I've been involved in crafting a silicon valley megacorp press release before. I've seen how the sausage is made! (Mine was more or less factual or I wouldn't have put my name on it, but dear heavens a lot of wordsmithing goes into any official communication at megacorps)

  • Maybe I'm being overzealous (I can do that sometimes).

    But I don't understand why this particular experiment suggests the multiverse. The logic appears to be something like:

    1. This algorithm would take a gazillion years on a classical computer
    2. So maybe other worlds are helping with the compute cost!

    But I don't understand this argument at all. The universe is quantum, not classical. So why do other worlds need to help with the compute? Why does this experiment suggest it in particular? Why does it make sense for computational costs to be amortized across different worlds if those worlds will then have to go on to do other different quantum calculations than ours? It feels like there's no "savings" anyway. Would a smaller quantum problem feasible to solve classically not imply a multiverse? If so, what exactly is the threshold?
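For what it's worth, the "gazillion years on a classical computer" figure comes from the cost of classically *simulating* the quantum circuit: an exact state vector for n qubits needs 2^n complex amplitudes. A back-of-the-envelope sketch (my numbers, not Google's; the chip reportedly has around 105 qubits):

```python
BYTES_PER_AMPLITUDE = 16  # one complex128 amplitude

def state_vector_bytes(n_qubits: int) -> int:
    """Memory needed for an exact classical state-vector simulation."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (30, 50, 105):
    print(f"{n} qubits -> {state_vector_bytes(n):.3e} bytes")
# 30 qubits is ~17 GB, 50 is ~18 PB, 105 is ~6e32 bytes. The exponential
# wall is bog-standard quantum mechanics; no multiverse bookkeeping required.
```

Which is the point: the blowup is a statement about classical simulation cost, not evidence about where the computation "happens".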

  • Can we all take a moment to appreciate this absolutely wild take from Google's latest quantum press release (bolding mine) https://blog.google/technology/research/google-willow-quantum-chip/

    Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10²⁵ or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

    The more I think about it the stupider it gets. I'd love if someone with an actual physics background were to comment on it. But my layman take is it reads as nonsense to the point of being irresponsible scientific misinformation whether or not you believe in the many worlds interpretation.
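Taking the press release's own numbers at face value (nothing new here, just its arithmetic against the commonly cited age of the universe):

```python
claimed_runtime_years = 10 ** 25   # "10 septillion years" from the quote
age_of_universe_years = 1.38e10    # commonly cited ~13.8 billion years

# The claimed classical runtime is on the order of 7e14 ages of the universe --
# which says something about the benchmark, not about parallel universes.
print(claimed_runtime_years / age_of_universe_years)
```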

  • Speaking of imposters, there's a screenshot of a fake manifesto substack post (since deleted) which has been linked a couple times on reddit.

    The only problem is it was first published to substack over four hours after the arrest was reported, according to the post's own JSON-LD metadata. People be trying to stir things up.

    Amateurs. Can't even publish the post early and edit it later for that extra bit of plausible deniability.
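The timestamp check described above needs nothing fancier than the page source. A sketch with a made-up stand-in page (the HTML and the timestamp below are illustrative, not the actual post's):

```python
import json
import re

# Stand-in for a saved copy of the page source (not the real post).
html = """<script type="application/ld+json">
{"@type": "NewsArticle", "datePublished": "2024-12-09T20:41:00Z"}
</script>"""

# Pull out the JSON-LD blob and read its publication timestamp.
match = re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
metadata = json.loads(match.group(1))
print(metadata["datePublished"])  # compare against when the arrest was reported
```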

  • Friends don't let friends OSINT. That said... people have found his twitter (still up), goodreads (deleted), github (still up, already full of troll github issues), linkedin (I guess deleted), an interview about his school games club (deleted), and his game development studio which published one iphone game (facebook profile deleted).

    His twitter account links to his linktree, but that only contains some inscrutable emoji rather than any links so hasn't really been reported:

    💻🤓 - 🥷🏃‍♂️🧘‍♂️🏋️ - 📚🤓 - 🦍🧠 - 🍄🧠 - 🐄👨‍⚖️ - ☯️

    (I'm sure his inevitable groupies will be puzzling over the meaning of cow judge for years to come)

    The youtube page you found is less talked about, though a reddit comment on one of the videos said "anyone else thinking burntbabylon is Luigi?".

    NYTimes also reports a steam account, facebook account, and instagram account (couldn't find any of these).

    https://www.nytimes.com/2024/12/09/nyregion/uhc-suspect-video-games.html

    Other NYTimes articles are now investigating his health issues (back surgery, etc)

  • Itch.io taken down by bogus AI-sourced phishing claim from Funko Pop, BrandShield
  • Also this wasn't necessarily a DMCA request.

    itch.io said this on the hackernews thread (bolding mine):

    The BrandShield software is probably instructed to eradicate all "unauthorized" use of their trademark, so they sent reports independently to our host and registrar claiming there was "fraud and phishing" going on, likely to cause escalation instead of doing the expected DMCA/cease-and-desist.

    And BrandShield's response / nonpology (bolding mine):

    BrandShield serves as a trusted partner to many global brands. Our AI-driven platform detects potential threats and provides analysis; then our team of Cybersecurity Threat hunters and IP lawyers decide on what actions should be taken. In this case, an abuse was identified from the itch.io subdomain. BrandShield remains committed to supporting our clients by identifying potential digital threats and infringements and we encourage platforms to implement stronger self-regulation systems that prevent such issues from occurring.

    Which translated into English is possibly* something like "We would be very happy if the general public thought this was a normal DMCA takedown. Our chatbot said the website was a phishing page. Our overworked cybersecurity expert hunter agreed after looking at it for zero milliseconds. We encourage itch.io to get wrecked."

    This difference matters because site hosts and domain registrars can be extremely proactive about any possibility of fraud / abuse / hacks, and there's less of a standard legal process for them.

    * Dear Funko please do not call my mom.

  • On the third day of OpenAI my ~~true love~~ enemy gave to me ~~three french hens~~ Sora.

    The version of Sora we are deploying has many limitations. It often generates unrealistic physics and struggles with complex actions over long durations.

    "12 days of OpenAI" lol. Such marketing.

    Big eye roll to this part too:

    We’re introducing our video generation technology now to give society time to explore its possibilities and co-develop norms and safeguards that ensure it’s used responsibly as the field advances.

  • "Defining AI"
  • if you’re benefiting from some particular way of drawing a boundary around and thinking about AI, I’d really like to hear about it.

    A bit of a different take than their post, but since they asked:

    I've noticed a lot of people use "AI" when they really mean "LLM and/or diffusion model". I can't count the number of times someone at my job has said AI when solely describing LLMs. At this point I've given up on clarifying or correcting the point.

    This isn't entirely because LLM is a mouthful to say, but also because it's convenient for tech companies if people don't look at the algorithm behind the curtain (flawed, as all algorithms are) and instead see it as magic.

    It's blindingly obvious to anyone who's looked that LLMs and generative image models cannot reason or exhibit actual creativity (cf. the post about poetry here). Throw enough training data and compute at one and it may be able to multiply better (holy smokes stop the presses a neural network being able to multiply numbers???), or produce obviously bad output x% less of the time, but by this point we've more or less reached the bounds of what the technology can do. The industry's answer is stuff like RAG or manual blacklists, which just serves to hide its capabilities behind a curtain.

    Everyone wants AI money, but classic chatbots don't make money unless they're booking vacations for customers, writing up doctor's notes, or selling you cars.

    But LLMs can't actually do this, so any tool in the space has to stay uninterrogated enough both to give customers plausible deniability and to keep the bubble going before they figure it out.

    Look at my widget! It's an ✨AI✨! A magical mystery box that makes healthcare, housing, hiring, organ donation, and grading decisions with maybe no bias at all... who can say? Look buster if you hire a human they'll definitely be biased!

    If you use "statistical language model" instead of "AI" in this sentence then people start asking uncomfortable questions about how appropriate it is to expect a mad-libs algorithm trained on 4chan to not be racist.


    … an insurance pricing formula, for example, might be considered AI if it was developed by having the computer analyze past claims data, but not if it was a direct result of an expert’s knowledge, even if the actual rule was identical in both cases. [page 13]

    This is an interesting quote indeed, as expert systems used to be at the forefront of AI; now they're apparently not considered AI at all.

    Eventually LLMs will just be considered LLMs, and image generators will just be considered image generators, and people will stop ascribing ✨magic✨ to them; they will join the rank of expert systems, tree search algorithms, logic programming, and everyone else that we just take for granted as another tool in the toolbox. The bubble people will then have to come up with some shinier newer system to attract money.

  • The Professor Assigns Their Own Book — But Now With a Tech Bubble in the Middle Step
  • Asking students to pay good money for LLM slop is bold.

    Also did the press release really need a generative image? "Ah yes, this image will make my press release look nice and professional and reassure the audience that AI is being used with due care."

    Of neriacular latin to: an evoolitun on nance langusages.

    Nanolu.age languga, Lgugar lanilan, pachnans, NlbN, Latolcean, Framen, ArpianhCATifN, Dvnutalmnk's, Sgiaiviaesgn, Italian, Ioveimneilaeiawepoew, Pfowchance -> Vullaq Luainles, Leawenowas, Laohaixisahh Aimwyvrnestrattidn, Frooidangs $Chha

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 8th December 2024
  • Not everyone can respond to consultations about wanting to die, but a robot can accept anything you say.

    I didn't really understand just how absurd this is before looking up the robot.

    It is essentially a Furby on wheels. It has extremely slick marketing, makes weird cooing sounds, has a weird camera sprouting out of its head like a fungus, has big LED eyes, scoots around randomly, stores your face in the cloud ("remembers up to 1,000 people"), and you can (as the kids say) boop the snoot. That's about it.

    I'm trying to imagine someone going "Lovot, sometimes I don't want to go on. I'm sorry, I didn't mean that. Thank you for always listening" and it being all "coo chirp gigigi tweeeee" while wiggling its stupid little Lovot arms... and I just can't.

  • Ilya Sutskever's new AI super-intelligence startup raises a billion dollars. Unclear what they actually do.

    https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

    http://web.archive.org/web/20240904174555/https://ssi.inc/

    I have nothing witty or insightful to say, but figured this probably deserved a post. I flipped a coin between sneerclub and techtakes.

    They aren't interested in anything besides "superintelligence" which strikes me as an optimistic business strategy. If you are "cracked" you can join them:

    > We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

    sailor_sega_saturn Sailor Sega Saturn @awful.systems