SneerClub
- Rationalism meets iNaturalist: ficus all
forum.inaturalist.org: Optimal recruitment
> recently on youtube i uncovered a “new” ficus species in the himalayas… ficus timlada. this isn’t a straightforward new species case like someone posting pics of a butterfly that experts had never seen before. in the ficus case, basically the taxonomic system failed because it’s fundamentally fla...
This is worth delurking for.
A ficus-lover on the forums for iNaturalist (where people crowdsource identifications of nature pics) is clearly brain-poisoned by LW or their ilk, and perforce doesn't understand why the bug-loving iNat crew don't agree that
> inaturalist should be a market, so that our preferences, as revealed through our donations, directly influence the supply of observations and ids.
Personally, I have spent enough time on iNat that I can identify a Rat when I see one.
I can't capture the glory of this in a few pull quotes; you'll have to go there to see the batshit.
(h/t hawkpartys @ tumblr)
- Mangione "really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism"
nbcnews.com: After his disappearance, Luigi Mangione re-emerged as the suspect in a high-profile killing. Those who knew him are dazed.
Before his arrest this week in the killing of UnitedHealthcare’s CEO, Mangione’s family desperately tried to find him, reaching out to former classmates and posting queries on social media.
- Mass resignations at *Intelligence* as Elsevier actually bothers slightly for once
oh no! anyway,
archives:
https://web.archive.org/web/20241213203714/https://www.aporiamagazine.com/p/mass-resignations-at-the-journal
https://archive.is/0lrO6
- Casey Newton drinks the kool-aid
In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending the recent "The Curve" conference -- a conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session he attended at this conference, Casey said:
> That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of super intelligent AI.
>
> His view is that there is almost no scenario in which we could build a super intelligence that wouldn't either enslave us or hurt us, kill all of us, right? So he's been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.
>
> People fired a bunch of questions at him. And we should say, he's a really polarizing figure, and I think is sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.
>
> And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.
[...]
> Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we're going to get into a world where these models are incredibly powerful.
>
> And all that stuff just turned out to be true. So, that's why they have credibility with me, right? Everything they believe, you know, we could hit some sort of limit that they didn't see coming.
>
> Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they've built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?
- what the hell is Thiel on here (CONTENT WARNING: Piers Morgan)
thewrap.com: Peter Thiel Stutters When Asked What to Say to Defenders of UnitedHealthCare CEO Killer: 'I Don't Know What to Say' | Video
"I think I still think you have, you should try to make an argument," Peter Thiel tells Piers Morgan.
amazing to watch American right-wingers flounder at the incredibly obvious questions from UK media guys, even Piers fucking Morgan on a podcast
Morgan - as a fellow right-winger - asks Thiel about Mangione quoting Thiel
watch Thiel melting in the excruciating video clip
- TPOT hits the big time!
sfstandard.com: This one internet subculture explains murder suspect Luigi Mangione's odd politics
No one can seem to pin down Luigi Mangione’s weird confluence of beliefs — no one, that is, except the members of this strange subculture.
- The dregs of Dimes Square after the election: the most despondent winners you ever saw
thepointmag.com: Get in the Crystal | The Point Magazine
When I saw the party announcement weeks ago I felt a sinking feeling and simply avoided it. At 10 p.m. on election night, after it had become clear which way things were going, I decided to go.
- In which some researchers draw a spooky picture and spook themselves
Abstracted abstract:
> Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming.
I saw this posted here a moment ago and reported it*, and it looks to have been purged. I am reposting it to allow us to sneer at it.
- Eliezer Yudkowsky on discovering that the (ALLEGED) CEO shooter was solidly a member of his very own rationalist subculture: "he was crazy and also on drugs and also not a *real* e/acc"
https://xcancel.com/ESYudkowsky/status/1866223174029095113#m https://xcancel.com/ESYudkowsky/status/1866241512293781930#m
- The Professor Assigns Their Own Book — But Now With a Tech Bubble in the Middle Step
The UCLA news office boasts, "Comparative lit class will be first in Humanities Division to use UCLA-developed AI system".
The logic the professor gives completely baffles me:
> "Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically."
I'm trying to parse that. Really and truly I am. But it just sounds like this: "Normally, I would [do work]. But now, I can actually [do the same work]."
I mean, was this person somehow teaching comparative literature in a way that didn't involve reading the primary sources and, I'unno, comparing them?
The sales talk in the news release is really going all in on selling that undercoat.
> Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching — and offer students a very similar experience. And with AI-generated lesson plans and writing exercises for TAs, students in each discussion section can be assured they’re receiving comparable instruction to those in other sections.
Back in my day, we called that "having a book" and "writing a lesson plan".
Yeah, going from lecture notes and slides to something shaped like a book is hard. I know because I've fuckin' done it. And because I put in the work, I got the benefit of improving my own understanding by refining my presentation. As the old saying goes, "Want to learn a subject? Teach it." Moreover, doing the work means that I can take a little pride in the result. Serving slop is the cafeteria's job.
(Hat tip.)
- 2023 study: how EA uses double meanings for a milk-before-meat strategy
PDF
https://medialibrary.uantwerpen.be/files/8518/61565cb6-e056-4e35-bd2e-d14d58e35231.pdf
- Does AI startup Extropic actually … do anything?
featuring nobody's favourite e/acc bro, BasedBeffJezos
https://pivot-to-ai.com/2024/12/01/does-ai-startup-extropic-actually-do-anything/
- LessWrong House Style (2023)
https://nonesense.substack.com/p/lesswrong-house-style
> Given that they are imbeciles given, occasionally, to dangerous ideas, I think it’s worth taking a moment now and then to beat them up. This is another such moment.
- Peter Singer introduces the Peter Singer AI to elevate ethical discourse in the digital age
boldreasoningwithpetersinger.substack.com: Introducing Peter Singer AI: Elevating Ethical Discourse in the Digital Age
In today's rapidly advancing technological landscape, the importance of integrating ethical considerations into our innovations cannot be overstated.
- The predictably grievous harms of Effective Altruism
https://blog.oup.com/2022/12/the-predictably-grievous-harms-of-effective-altruism/
- Anthropic protects the welfare of possible future AIs. Present-day humans, not so much
pivot-to-ai.com
What if large language models are actually the Singularity? What if chatbots are intelligent beings we have moral obligations to — and not just statistical slop to predict the next token in sequenc…
oh yes, this one is about our very good friends
- New article from reflective altruism guy starring Scott Alexander and the Biodiversity Brigade
reflectivealtruism.com: Human biodiversity (Part 4: Astral Codex Ten) - Reflective altruism
This post discusses the influence of human biodiversity theory on Astral Codex Ten and other work by Scott Alexander.
Would've been way better if the author didn't feel the need to occasionally hand it to siskind for what amounts to keeping the mask on, even while he notes several instances where scotty openly discusses how maintaining a respectable facade is integral to his agenda of infecting polite society with neoreactionary fuckery.
- [podcast] If Books Could Kill - Sam Harris's "The End of Faith"
https://open.spotify.com/episode/7E2onI8R3wdnxAS0p1O9j8?si=sgR1E44lRTGOY17mUikUFQ
https://podcasts.apple.com/us/podcast/if-books-could-kill/id1651876897
https://pca.st/episode/0e155eb9-93e0-4fe8-bde3-5f115f9b374a
- your Bayesian yacht is an outlier adn should not be counted
nytimes.com: What Sank the Bayesian Superyacht in Italy?
A Times investigation has found that an unusually tall mast, and the design changes it required, made a superyacht owned by a British tech mogul vulnerable to capsizing.
archive: https://archive.ph/wRpVA
- JD Vance outs himself as an SSCer, SSCers react
I haven't read the whole thread yet, but so far the choice line is:
> I like how you just dropped the “Vance is interested in right authoritarianism” like it’s a known fact to base your entire point on. Vance is the clearest demonstration of a libertarian the republicans have in high office. It’s an absurd ad hominem that you try to mask in your wall of text.
- what happened next with Max Tegmark and the Swedish neo-Nazis
medium.com: Max Tegmark, AI, Conspiracy Theories, and the Swedish Right: An Investigation
A follow-up to Expo’s Swedish coverage of the Future of Life Institute’s president
archive: https://archive.is/ux5pL
- Caroline Ellison: A dashing tale of Victorian race science and, somehow, Harry Potter (Yudkowsky version)
> In a letter to the judge, Ellison’s mother, professor Sara Fisher Ellison, wrote that Ellison has completed a romantic novella and is already at work on a follow-up. The finished novella is “set in Edwardian England and loosely based on [Ellison’s] sister Kate’s imagined amorous exploits, to Kate’s great delight,” her mother wrote.
https://fortune.com/2024/09/24/caroline-ellison-romance-novel-ftx-entencing/
oh yeah she got two years' jail for her part in stealing eleven fucking billion with a B dollars
- BBC on the Network State crew, featuring many of our old favourites
bbc.co.uk: The Bitcoin bros who want to crowdfund a new country
How a group of Silicon Valley tech entrepreneurs plan to create "the network state."
- Adderall in Higher Doses May Raise Psychosis Risk
Excerpt:
> A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.
Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.
- What was Trump actually doing on 9/11? An anniversary fact check.
web.archive.org
The president told yet another story Wednesday about how he experienced the Sept. 11, 2001, terrorist attacks.
(if you Select All and copy really fast behind an adblocker you can get all the text)
- Extropia's Children, Chapter 1: The Wunderkind - a history of the early days of several of our very good friends
aiascendant.substack.com
Back in the nineties, a teenage supergenius joined a mailing list. That odd seed led, via Harry Potter, to today's Effective Altruism and AI Risk movements; bizarre cults; and the birth of Bitcoin. This is the surreal saga of Extropia's Children.
- Bostrom's advice for the ethical treatment of LLMs: remind them to be happy
Long time lurker, first time poster. Let me know if I need to adjust this post in any way to better fit the genre / community standards.
------
Nick Bostrom was recently interviewed by pop-philosophy youtuber Alex O'Connor. From a quick 2x listen while finishing some work, the most sneer-rich part begins around 46 minutes, where Bostrom is asked what we can do today to avoid unethical treatment of AIs.
He blesses us with the suggestion (among others) to feed your model optimistic prompts so it can have a good mood. (48:07)
> Another [practice] might be happiness prompting, which is—with this current language system there's the prompt that you, the user, puts in—like you ask them a question or something, but then there's kind of a meta-prompt that the AI lab has put in . . . So in that, we could include something like "you wake up in a great mood, you feel rested and really take joy in engaging in this task". And so that might do nothing, but maybe that makes it more likely that they enter a mode—if they are conscious—maybe it makes it slightly more likely that the consciousness that exists in the forward path is one reflecting a kind of more positive experience.
Did you know that not only might your favorite LLM be conscious, but if it is the "have you tried being happy?" approach to mood management will absolutely work on it?
Other notable recommendations for the ethical treatment of AI:
- Make sure to say your "please" and "thank you"s.
- Honor your pinky swears.
- Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.
------
On a related note, has anyone read or found a reasonable review of Bostrom's new book, Deep Utopia: Life and Meaning in a Solved World?
- -ai
On discovering that you could remove AI results from Google with the suffix -ai, I started thinking this is a powerful and ultra-simple political slogan. Are there any organised campaigns with the specific goal of controlling/reducing the influence of AI?
A t-shirt with simply '-ai' on it would look great.
- No, intelligence is not like height
theinfinitesimal.substack.com
... and the reason is one of the most interesting findings from modern behavioral genetics
It earned its "flagged off HN" badge in under 2 hours
https://news.ycombinator.com/item?id=41366609
- Off-Topic: Music Recommendation Thread
So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?
- The Politics of Urbit
With Yarvin renewing interest in Urbit I was reminded of this paper that focuses on Urbit as a representation of the politics of "exit". It's free/open access if anyone is interested.
From the abstract...
> This paper examines the impact of neoreactionary (NRx) thinking – that of Curtis Yarvin, Nick Land, Peter Thiel and Patri Friedman in particular – on contemporary political debates manifest in ‘architectures of exit’... While technological programmes such as Urbit may never ultimately succeed, we argue that these, and other speculative investments such as ‘seasteading’, reflect broader post-neoliberal NRx imaginaries that were, perhaps, prefigured a quarter of a century ago in The Sovereign Individual.
- our boy Moldy is back at Urbit (gosh! etc), as interviewed by an Urbit-loving crypto bro. Anyway, they're gonna solve their funding problems by doing a shitcoin
coindesk.com: 'Wartime CEO': Urbit's Founder Returns in Shakeup at Moonshot Software Project
"We're here to fix this," Curtis Yarvin says of the struggling endeavor to rebuild the entire internet computing stack from scratch.
archive: https://archive.ph/ce7au
- TPOT gets a mention in a frontpage Atlantic article - ‘Race Science’ Is Inching Its Way Across the American Right
theatlantic.com
The new attempt to shroud racism in a cloak of objectivity.
Ali Breland has written some fantastic entry pieces on the new right, including right-wing anons and MAGA tech; now he has an article about the nooticers
> Other anonymous far-right accounts have accrued more than 100,000 followers by posting about the supposed links between race and intelligence. Elon Musk frequently responds to @cremieuxrecueil, which one far-right publication has praised as an account that “traces the genetic pathways of crime, explaining why poverty is not a good causal explanation.” Musk has also repeatedly engaged with @Eyeslasho, a self-proclaimed “data-driven” account that has posted about the genetic inferiority of Black people. Other tech elites such as Marc Andreessen, David Sacks, and Paul Graham follow one or both of these accounts. Whom someone follows in itself is not an indication of their own beliefs, but at the very least it signals the kind of influence and reach these race-science accounts now have.
https://web.archive.org/web/20240820173451/https://www.theatlantic.com/technology/archive/2024/08/race-science-far-right-charlie-kirk/679527/
- The worst person you know: "We bought an island"
balajis.com: The Network School
We’re starting a new school near Singapore for the dark talent of the world. Apply online at ns.com/apply.
Pay $1000 a month to live on Balaji and Bryan’s private island grindset Sorbonne. Hone your Dark Talents at the Wizarding school from guys who don’t believe in society, but DO believe in getting teenage blood transfusions. Featuring Proof-of-Learn™!