Casey Newton drinks the Kool-Aid

In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending "The Curve," a recent conference in Berkeley organized and attended mostly by our very best friends. Asked about the most memorable session he attended there, Casey said:

> That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of superintelligent AI.
>
> His view is that there is almost no scenario in which we could build a superintelligence that wouldn't either enslave us or hurt us, kill all of us, right? So he's been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.
>
> People fired a bunch of questions at him. And we should say, he's a really polarizing figure, and I think is sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.
>
> And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.

[...]

> Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we're going to get into a world where these models are incredibly powerful.
>
> And all that stuff just turned out to be true. So, that's why they have credibility with me, right? Everything they believe, you know, we could hit some sort of limit that they didn't see coming.
>
> Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they've built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?

Stubsack: weekly thread for sneers not worth an entire post, week ending 15th December 2024
  • Same. I'm not being critical of lab-grown meat. I think it's a great idea.

    But the pattern of things he's got an opinion on suggests a familiarity with rationalist/EA/accelerationist/TPOT ideas.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 15th December 2024
  • Do you have a link? I'm interested. (Also, I see you posted something similar a couple hours before I did. Sorry I missed that!)

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 15th December 2024
  • So it turns out the healthcare assassin has some.... boutique... views. (Yeah, I know, shocker.) Things he seems to be into:

    • Lab-grown meat
    • Modern architecture is rotten
    • Population decline is an existential threat
    • Elon Musk and Peter Thiel

    How soon until someone finds his LessWrong profile?

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 20 October 2024
  • As anyone who's been paying attention already knows, LLMs are merely mimics that provide the "illusion of understanding".

  • Adderall in Higher Doses May Raise Psychosis Risk

    Excerpt:

    > A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.

    Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.

    That tracing woodgrains piece on David Gerard is out
  • I'm noticing that people who criticize him on that subreddit are being downvoted, while he's being upvoted.

    I wouldn't be surprised if, as part of his prodigious self-promotion of this overlong and tendentious screed, he's steered some of his more sympathetic followers to some of these forums.

    Actually, it's the Wikipedia subreddit thread I meant to refer to.

  • I'm on Paris Marx's podcast this week rambling about crypto, AI hype and social media. Oh yeah and Jack Dorsey, our supposed subject. 1 hour.
  • As a longtime listener to Tech Won't Save Us, I was pleasantly surprised by my phone's notification about this week's episode. David was charming and interesting in equal measure. I mostly knew Jack Dorsey as the absentee CEO of Twitter who let the site stagnate under his watch, but there were a lot of little details about his moderation-phobia and fash-adjacency that I wasn't aware of.

    By the way, I highly recommend the podcast to the TechTakes crowd. They cover many of the same topics from a similar perspective.

  • a16z is working hard to get stuck with last year’s Nvidia chips
  • For me it gives off huge Dr. Evil vibes.

    If you ever get tired of searching for pics, you could always go the lazy route and fall back on AI-generated images. But then you'd have to accept the reality that in a few years your posts would have the analog of a GeoCities webring stamped on them.

  • That tracing woodgrains piece on David Gerard is out
  • Trace seems a bit... emotional. You ok, Trace?

  • That tracing woodgrains piece on David Gerard is out
  • So now Steve Sailer has shown up in this essay's comments, complaining about how Wikipedia has been unfairly stifling scientific racism.

    Birds of a feather and all that, I guess.

  • That tracing woodgrains piece on David Gerard is out
  • > what is the entire point of singling out Gerard for this?

    He's playing to his audience, which includes a substantial number of people with lifetime subscriptions to the Unz Review, Taki's crapazine and Mankind Quarterly.

  • That tracing woodgrains piece on David Gerard is out
  • > why it has to be quite that long

    Welcome to the rationalist-sphere.

  • That tracing woodgrains piece on David Gerard is out
  • > Scott Alexander, by far the most popular rationalist writer besides perhaps Yudkowsky himself, had written the most comprehensive rebuttal of neoreactionary claims on the internet.

    Hey Trace, since you're undoubtedly reading this thread, I'd like to make a plea. I know Scott Alexander Siskind is one of your personal heroes, but maybe you should consider digging up some dirt in his direction too. You might learn a thing or two.

  • Nvidia is being a bubble stock again
  • Please touch grass.

  • Nvidia is being a bubble stock again
  • The next AI winter can't come too soon. They're spinning up coal-fired power plants to supply the energy required to build these LLMs.

  • TracingWoodgrains launches a defense of Manifest's controversial reputation, all without betraying a basic understanding of what the word "controversial" means.
  • Until a month ago, TW was the long-time researcher for "Blocked and Reported", the podcast hosted by Katie 'TERF' Herzog and relentless sealion Jesse Singal.

  • Effective Obfuscation
    newsletter.mollywhite.net Effective obfuscation

    Silicon Valley's "effective altruism" and "effective accelerationism" only give a thin philosophical veneer to the industry's same old impulses.

    Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she's shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that's an excellent development. Molly's great.

    IQ is largely a pseudoscientific swindle
  • Stephen Jay Gould's The Mismeasure of Man is always a good place to start.

  • IQ is largely a pseudoscientific swindle
  • This is good:

    > Take the sequence {1,2,3,4,x}. What should x be? Only someone who is clueless about induction would answer 5 as if it were the only answer (see Goodman’s problem in a philosophy textbook or ask your closest Fat Tony) [Note: We can also apply here Wittgenstein’s rule-following problem, which states that any of an infinite number of functions is compatible with any finite sequence. Source: Paul Bogossian]. Not only clueless, but obedient enough to want to think in a certain way.

    Also this:

    > If, as psychologists show, MDs and academics tend to have a higher “IQ” that is slightly informative (higher, but on a noisy average), it is largely because to get into schools you need to score on a test similar to “IQ”. The mere presence of such a filter increases the visible mean and lower the visible variance. Probability and statistics confuse fools.

    And:

    > If someone came up w/ a numerical “Well Being Quotient” WBQ or “Sleep Quotient”, SQ, trying to mimic temperature or a physical quantity, you’d find it absurd. But put enough academics w/ physics envy and race hatred on it and it will become an official measure.

  • "Yudkwosky is a genius and one of the best people in history."

    [All non-sneerclub links below are archive.today links]

    Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.

    For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:

    > Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the universe.

    As you can see, he's really into superlatives. And Jordan Peterson:

    > Jordan is an intellectual titan who explores personality development and mythology using an evolutionary and neuroscientific lenses. He sifted through all the mythical and religious narratives, as well as the continental psychoanalysis and developmental psychology so you and I don’t have to.

    At Burning Man, he dons a 7-year-old alter ego named "Evergreen". Perhaps he has an infantilization fetish like Elon Musk:

    > Evergreen exists ephemerally during Burning Man. He is 7 days old and still in a very exploratory stage of life.

    As he hinted in his tweet to Roko, he has an enlightened view about women and gender:

    > Men were once useful to protect women and children from strangers, and to bring home the bacon. Now the supermarket brings the bacon, and women can make enough money to raise kids, which again, they like more in the early years. So men have become useless.

    And:

    > That leaves us with, you guessed, a metric ton of men who are no longer in families.

    Yep, I guessed about 12 men.

    "Tech Right" scribe Richard Hanania promoted white supremacy for years under a pen name
    www.huffpost.com This Man Has The Ear Of Billionaires — And A White Supremacist Past He Kept A Secret

    Hanania is championed by tech moguls and a U.S. senator, but HuffPost found he used a pen name to become an important figure in the “alt-right.”

    Excerpt:

    > Richard Hanania, a visiting scholar at the University of Texas, used the pen name “Richard Hoste” in the early 2010s to write articles where he identified himself as a “race realist.” He expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.

    He's also a big eugenics supporter:

    > “There doesn’t seem to be a way to deal with low IQ breeding that doesn’t include coercion,” he wrote in a 2010 article for AlternativeRight .com. “Perhaps charities could be formed which paid those in the 70-85 range to be sterilized, but what to do with those below 70 who legally can’t even give consent and have a higher birthrate than the general population? In the same way we lock up criminals and the mentally ill in the interests of society at large, one could argue that we could on the exact same principle sterilize those who are bound to harm future generations through giving birth.”

    (Reminds me a lot of the things Scott Siskind has written in the past.)

    Some people who have been friendly with Hanania:

    • Marc Andreessen, Silicon Valley VC and co-founder of Andreessen Horowitz
    • Hamish McKenzie, co-founder of Substack
    • Elon Musk, Chief Enshittification Officer of Tesla and Twitter
    • Tyler Cowen, libertarian econ blogger and George Mason University prof
    • J.D. Vance, US Senator from Ohio
    • Steve Sailer, race (pseudo)science promoter and all-around bigot
    • Amy Wax, racist law professor at UPenn
    • Christopher Rufo, right-wing agitator and architect of many of Florida governor Ron DeSantis's culture war efforts
    Grimes pens new TREACLES anthem: "I wanna be software, upload my mind"
    genius.com Grimes & Illangelo – I Wanna Be Software

    [Verse 1] / I wanna be software / Upload my mind / Take all my data / What will you find? / I wanna be software / Battery heart / Infinite options / State of the art / I wanna be

    Urbit announced its summer accelerator program as "u/acc". Someone asked me "wtf is an Urbit" so I wrote this. May be useful. "what if networked Lisp machines, but for Nazis"
  • "TempleOS on the blockchain"

    Ok, that's some quality sneer. A bit obscure and esoteric, but otherwise perfect for those who know anything about TempleOS.

  • Bloomberg gives Yud a big slobbery one: "He’s worth hearing out."

    Ugh.

    >But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.
