
SC
Posts 4 · Comments 141 · Joined 2 yr. ago

  • Yeah it's really not productive to engage directly.

    I'd almost categorize Penrose as a borderline case of Nobel disease himself for the things he's said about quantum consciousness and, relatedly, the halting problem and Gödel's incompleteness theorem. But he actually has a proposed mechanism (involving microtubules) that is testable and falsifiable, and the physics half of what he's talking about is within his domain of expertise.

  • Stephen Hawking was already promoting AI doomerism in 2014, but he wasn't a Nobel Prize winner. Yoshua Bengio is a doomer, but no Nobel Prize either, although he is pretty decorated in awards. So it looks like one winner plus a few notable doomers who aren't actually Nobel Prize winners somehow became "winners", plural, in Scott's argument from authority. Also, considering the long list of examples of Nobel disease, I really don't think Nobel Prize winner endorsement is a good way to gauge experts' attitudes or sentiment.

  • He claims he was explaining what others believe, not what he believes, but if that's so, why is he so aggressively defending the stance?

    Literally the only difference between Scott's beliefs and AI 2027 as a whole is that his prophesied date is a year or two later. (I bet he'll be playing up that difference as AI 2027 fails to happen in 2027, and then also doesn't happen in 2028.)

    Elsewhere in the thread he whines to the mods that the original poster is spamming every vaguely LessWrong- or EA-related subreddit with engagement bait. That poster is katxwoods... as in Kat Woods... as in a member of Nonlinear, the EA "organization" whose idea of philanthropic research was nonstop exotic vacations around the world. And, iirc, they are most infamous among us sneerers for "hiring" an underpaid (really underpaid, as in couldn't afford basic necessities) intern whom they also used as a 24/7 live-in errand girl, drug runner, and sexual servant.

  • Yeah I think long term Trump wrecking US soft power might be good for the world. There is going to be a lot of immediate suffering because a lot of those programs were also doing good things (in addition to strengthening US soft power or pushing a neocolonial agenda or whatever else).

  • I was just about to point out several angles this post neglects, but it looks from the edit like the post is just intended to address a narrower question. Among the angles outside that question: philanthropy by the ultra-wealthy often serves as a tool for reputation laundering and influence building. I guess the same criticism can be made of a lot of conventional philanthropy, but I don't think that should absolve EA.

    This post somewhat frames the question as a comparison between EA and conventional philanthropy and foreign aid efforts... which, okay, but that is a low bar, especially when you look at some of the stuff the US has done with its foreign aid.

  • You can make that point empirically just by looking at the scaling that's been happening with ChatGPT. The Wikipedia page for the generative pre-trained transformer has a nice table. Key takeaway: each model generation (i.e. from GPT-1 to GPT-2 to GPT-3) goes up 10x in tokens and model parameters and 100x in compute compared to the previous one, and (not shown in that table, unfortunately) training loss (the log of perplexity) improves only linearly.
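
    A minimal sketch of that takeaway (not from the comment itself: the exponent is the parameter-scaling power-law fit reported by Kaplan et al. 2020, and the constant A is made up just to produce plausible-looking numbers):

    ```python
    # Illustrative only: loss ~ A * N**(-alpha), so each 10x in parameters
    # multiplies loss by 10**(-alpha) ~ 0.84, i.e. only ~16% off per 10x.
    A, alpha = 120.0, 0.076  # alpha from Kaplan et al. (2020); A is arbitrary

    for name, n_params in [("GPT-1", 117e6), ("GPT-2", 1.5e9), ("GPT-3", 175e9)]:
        loss = A * n_params ** (-alpha)
        print(f"{name}: {n_params:.0e} params -> illustrative loss {loss:.2f}")
    ```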

  • This is especially ironic with all of Elon's claims about making Grok truth seeking. Well, "truth seeking" was probably always code for making an LLM that would parrot Elon's views.

    Elon may have failed at making Grok peddle racist conspiracy theories the way he wanted, but that shouldn't be taken as proof that LLMs can't be manipulated that way. He probably went with the laziest option possible, directly prompting it, as opposed to fine-tuning it on racist content or anything more advanced.

  • I think they also want recognition/credit for spending five minutes (or less) typing some words at an image generator, as if that were comparable to people who develop technical skills and then create effortful, meaningful work, just because the outputs are (superficially) similar.

  • The latest twist I'm seeing isn't blaming your prompting (although they're still eager to do that), it's blaming your choice of LLM.

    "Oh, you're using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren't trying the right models, so allow me to educate you with all my prompt fondling experience. You're trying to make some general point? Clearly you just need to try another model."

  • It can make funny pictures, sure. But it fails at art as an endeavor to communicate an idea, feeling, or intent of the artist: the promptfondler "artists" provide a few sentences of instruction and the GenAI follows them without any deeper feeling or understanding of context, meaning, or intent.

  • GPT-1 is 117 million parameters, GPT-2 is 1.5 billion, GPT-3 is 175 billion, and GPT-4 is undisclosed but estimated at 1.7 trillion. Tokens needed for training and training compute scale linearly (edit: actually, looking at the Wikipedia page, I was wrong, and it's even worse for your case than I was saying: training compute scales quadratically with model size, going up 2 OOM for every 10x in parameters) with model size. They are improving... but only getting a linear improvement in training loss for a geometric increase in model size and training time.

    A hypothetical GPT-5 would have 10 trillion parameters and would genuinely need to be AGI to have the remotest hope of paying off its training. And it would need more quality tokens than are left; they've already scraped the internet (including many copyrighted sources and sources that asked not to be scraped). So that's exactly why OpenAI has been screwing around with fine-tuning setups with illegible naming schemes instead of just releasing a GPT-5. But fine-tuning can only shift what you get within distribution, so it trades off into more hallucinations or overly obsequious output or whatever the latest problem they're having is.
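
    To make that compute arithmetic concrete, here's a minimal sketch using the standard FLOPs ≈ 6 × N × D approximation for dense transformers (Kaplan et al. 2020); the GPT-1 and GPT-2 token counts are my rough estimates, not official figures:

    ```python
    # Rough public figures: 10x params + 10x tokens => ~100x training compute,
    # which is the quadratic scaling mentioned above.
    models = {
        "GPT-1": (117e6, 1e9),    # ~1B training tokens (estimate)
        "GPT-2": (1.5e9, 10e9),   # ~10B training tokens (estimate)
        "GPT-3": (175e9, 300e9),  # ~300B training tokens
    }

    for name, (n_params, n_tokens) in models.items():
        flops = 6 * n_params * n_tokens
        print(f"{name}: ~{flops:.1e} training FLOPs")
    ```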

    Lower model temperature makes it pick its best guess for the next token instead of randomizing among probable guesses; it doesn't improve what that best guess is, and you can still get hallucinations even when picking the "best" next token.
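
    A minimal sketch of what temperature actually does to next-token sampling (the vocabulary and logits here are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["Paris", "London", "Berlin", "Madrid"]
    logits = np.array([3.0, 1.5, 1.0, 0.5])  # model's raw next-token scores

    def sample(logits, temperature):
        if temperature == 0:               # greedy: always the single best guess
            return int(np.argmax(logits))
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))

    # Temperature rescales the distribution but never reorders it, so if the
    # model's top token is a hallucination, temperature 0 still picks it.
    for t in (0, 0.5, 1.0):
        print(t, vocab[sample(logits, t)])
    ```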

    And lol at you trying to reverse the accusation against LLMs by accusing me of regurgitating/hallucinating.

  • Bro, sneerclub and techtakes are for sneering at bad technology and those that worship it, not for engaging in apologia for it (or, worse yet, tone policing the sneering). If you don't like it, you can ask the mods for an exit pass (if they haven't generously given you one already).

  • Joe Rogan Experience!

    ...side note my most prominent irl conversation about Joe Rogan was with a relative who was trying to convince me it was a good thing that Joe Rogan platformed a celebrity who was saying 1x1=2 (Terrence Howard). Literally beyond parody.

  • Sneerquence classics: Eliezer on GOFAI (half serious half sneering effort post) (SneerClub @awful.systems)

  • China and AGI: A New Yellow Peril and Red Scare (SneerClub @awful.systems)

  • Is Scott and others like him at fault for Trump... no it's the "elitist's" fault! (SneerClub @awful.systems)

  • In Case You Had Any Doubts About Manifest Being Full Of Racists (SneerClub @awful.systems)