  • Abstract: This paper presents some of the initial empirical findings from a larger forthcoming study about Effective Altruism (EA). The purpose of presenting these findings disarticulated from the main study is to address a common misunderstanding in the public and academic consciousness about EA, recently pushed to the fore with the publication of EA movement co-founder Will MacAskill’s latest book, What We Owe the Future (WWOTF). Most people in the general public, media, and academia believe EA focuses on reducing global poverty through effective giving, and are struggling to understand EA’s seemingly sudden embrace of ‘longtermism’, futurism, artificial intelligence (AI), biotechnology, and ‘x-risk’ reduction. However, this agenda has been present in EA since its inception, where it was hidden in plain sight. From the very beginning, EA discourse operated on two levels: one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI safety/x-risk, now lumped under the banner of ‘longtermism’). The article’s aim is narrowly focused on presenting rich qualitative data to make legible the distinction between public-facing EA and core EA.

    • From page 17:

      Rather than encouraging critical thinking, in core EA the injunction to take unusual ideas seriously means taking one very specific set of unusual ideas seriously, and then providing increasingly convoluted philosophical justifications for why those particular ideas matter most.

      ding ding ding

      • You must prove yourself in the Outer Circle before being granted leave to study the Inner Mysteries. Or at least attend the right parties.