Sailor Sega Saturn @sailor_sega_saturn@awful.systems · Posts: 1 · Comments: 362 · Joined 2 yr. ago

Great no problem I'll just read through the sequences and be all caught up!
Hopefully 2025 will be a nice normal year--
Cybertruck outside of Trump hotel explodes violently and no one can figure out if it was a bomb or just Cybertruck engineering
Huh. I guess it'll be another weird one.
(I know I know, low effort post, I'm sick in bed and bored)
Once a month or so Awful Systems casually mentions a racist in some sub-sub-culture who I had never heard about before and then I get to spend an hour doing background research on obscure net drama from 2013 or whatever.
Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz
"reputational cost" eh? Let's see Mr. Moskovitz's reasoning in his own words:
I guess "we're too racist and weird for even a Facebook exec" doesn't have quite the same ring to it though.
Yes but if I donate to Lightcone I can get a T-shirt for $1000! A special edition T-shirt! Whereas if I donated $1000 to Archive Of Our Own all I'd get is... a full sized cotton blanket, a mug, a tote bag and a mystery gift.
Holy smokes that's a lot of words. From their own post it sounds like they massively over-leveraged and have no more sugar daddies so now their convention center is doomed (yearly 1 million dollar interest payments!); but they can't admit that so are desperately trying to delay the inevitable.
Also don't miss this promise from the middle:
Concretely, one of the top projects I want to work on is building AI-driven tools for research and reasoning and communication, integrated into LessWrong and the AI Alignment Forum. [...] Building an LLM-based editor. [...] AI prompts and tutors as a content type on LW
It's like an anti-donation message. "Hey if you donate to me I'll fill your forum with digital noise!"
Days since last comparison of Chat-GPT to shitty university student: zero
More broadly I think it makes more sense to view LLMs as an advanced rubber ducking tool - like a broadly knowledgeable undergrad you can bounce ideas off to help refine your thinking, but whom you should always fact check because they can often be confidently wrong.
Seriously why does everyone like this analogy?
Debating post-truth weirdos for large sums of money may seem like a good business idea at first, until you realize how insufferable the debate format is (and how no one normal would judge such a thing).
Sadly all my best text encoding stories would make me identifiable to coworkers so I can't share them here. Because there's been some funny stuff over the years. Wait where did I go wrong that I have multiple text encoding stories?
That said I mostly just deal with normal stuff like UTF-8, UTF-16, Latin1, and ASCII.
Senior software engineer here. I have had to tell coworkers "don't trust anything chat-gpt tells you about text encoding" after it made something up about text encoding.
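(For the curious, the classic pitfall is that the same bytes decode to different text under different encodings, so a tool that guesses wrong silently mangles the output. A minimal Python sketch of the sort of thing I mean; the example string is just illustrative, not from any of my stories:)

```python
# "é" encoded as UTF-8 is two bytes: 0xC3 0xA9.
data = "café".encode("utf-8")

print(data.decode("utf-8"))    # café   (correct)
print(data.decode("latin-1"))  # cafÃ©  (classic mojibake: UTF-8 bytes read as Latin-1)

# ASCII can't represent it at all:
try:
    data.decode("ascii")
except UnicodeDecodeError as err:
    print("ascii decode fails:", err)
```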
Remember when you could read through all the search results on Google rather than being limited to the first hundred or so results like today? And boolean search operators actually worked and weren't hidden away behind a "beware of leopard" sign? Pepperidge Farm remembers.
But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.
So what harms has Mr. Yudkowsky enumerated? Off the top of my head I can remember:
- Diamondoid bacteria
- What if there's like a dangerous AI in the closet server and it tries to convince you to connect your Nintendo 3DS to it so it can wreak havoc on the internet and your only job is to ignore it and play your nintendo but it's so clever and sexy
- What if we're already in hell: the hell of living in a universe where people get dust in their eyes sometimes?
- What if we're already in purgatory? If so we might be able to talk to future robot gods using time travel; well not real time travel, more like make believe time travel. Wouldn't that be spooky?
Ah yes, the journal of intelligence:
First, Kanazawa’s (2008) computations of geographic distance used Pythagoras’ theorem and so the paper assumed that the earth is flat (Gelade, 2008). Second, these computations imply that ancestors of indigenous populations of, say, South America traveled direct routes across the Atlantic rather than via Eurasia and the Bering Strait.
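(To make the flat-earth complaint concrete: Pythagoras applied directly to latitude/longitude treats degrees as planar coordinates, whereas distance between points on a sphere needs something like the haversine formula. A rough Python sketch; the city pair is purely illustrative and not from the paper:)

```python
import math

def flat_earth_km(lat1, lon1, lat2, lon2):
    # Pythagoras straight on degrees, ~111 km per degree: assumes a flat earth.
    return 111.0 * math.hypot(lat2 - lat1, lon2 - lon1)

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine formula: distance along the surface of a (spherical) earth.
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative pair: Addis Ababa (9.0, 38.7) to Lima (-12.0, -77.0)
print(flat_earth_km(9.0, 38.7, -12.0, -77.0))
print(great_circle_km(9.0, 38.7, -12.0, -77.0))
```

And note that neither number reflects an actual migration route via the Bering Strait, which is the paper's second complaint.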
Mirror bacteria? Boring! I want an evil twin from the negaverse who looks exactly like me except right hande-- oh heck. What if I'm the mirror twin?
what the heck is an eigenrobot??
Update: It is too late, Sneerclub, I have seen everything.
I mean, unrestricted skepticism is the appropriate response to any press release, especially coming out of silicon valley megacorps these days.
Indeed, I've been involved in crafting a silicon valley megacorp press release before. I've seen how the sausage is made! (Mine was more or less factual or I wouldn't have put my name on it, but dear heavens a lot of wordsmithing goes into any official communication at megacorps)
Maybe I'm being overzealous (I can do that sometimes).
But I don't understand why this particular experiment suggests the multiverse. The logic appears to be something like:
- This algorithm would take a gazillion years on a classical computer
- So maybe other worlds are helping with the compute cost!
But I don't understand this argument at all. The universe is quantum, not classical. So why do other worlds need to help with the compute? Why does this experiment suggest it in particular? Why does it make sense for computational costs to be amortized across different worlds if those worlds will then have to go on to do other different quantum calculations than ours? It feels like there's no "savings" anyway. Would a smaller quantum problem feasible to solve classically not imply a multiverse? If so, what exactly is the threshold?
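(For what it's worth, the standard explanation for the "gazillion years" is simply that simulating n qubits on a classical machine means tracking on the order of 2^n complex amplitudes, which blows up exponentially with no parallel universes required. A back-of-the-envelope Python sketch; the qubit counts are just illustrative:)

```python
# Memory needed to store a full quantum state vector on a classical machine:
# 2**n amplitudes, each a complex double (16 bytes).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 53, 105):  # illustrative sizes
    tib = state_vector_bytes(n) / 2 ** 40
    print(f"{n:3d} qubits -> {tib:.3e} TiB")
```

Which is why brute-force classical simulation is hopeless, but as far as I can tell it says nothing about which interpretation of quantum mechanics is correct.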
Can we all take a moment to appreciate this absolutely wild take from Google's latest quantum press release (bolding mine) https://blog.google/technology/research/google-willow-quantum-chip/
Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. **It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.**
The more I think about it the stupider it gets. I'd love if someone with an actual physics background were to comment on it. But my layman take is it reads as nonsense to the point of being irresponsible scientific misinformation whether or not you believe in the many worlds interpretation.