scruiser @awful.systems · Posts: 4 · Comments: 177 · Joined 2 yr. ago
Sneerquence classics: Eliezer on GOFAI (half serious half sneering effort post)
Are Scott and others like him at fault for Trump... no, it's the "elitists'" fault!
Example #"I've lost count" of LLMs ignoring instructions and operating like the bullshit spewing machines they are.
Another thing that's been annoying me about responses to this paper... lots of promptfondlers are suddenly upset that we are judging LLMs by arbitrary puzzle-solving capabilities... as opposed to the arbitrary and artificial benchmarks they love to tout.
So, I've been spending too much time on subreddits with a heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I've noted is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don't involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act like of course LLMs can [insert things LLMs can't do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].
Like I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all; the LLM component replaces a set of heuristics that makes suggestions on proof approaches, while the majority of the proof work is done by a symbolic AI working within a rigid formal proof system.
I don't really have anywhere I'm going with this, just something I noted that I don't want to waste the energy repeatedly re-explaining on reddit, so I'm letting a primal scream out here to get it out of my system.
Just one more training run bro. Just gotta make the model bigger, then it can do bigger puzzles, obviously!
The promptfondlers on places like /r/singularity are trying so hard to spin this paper. "It's still doing reasoning, it just somehow mysteriously fails when its reasoning gets too long!" or "LRMs improved with an intermediate number of reasoning tokens" or some other excuse. They are missing the point that short and medium length "reasoning" traces are potentially the result of pattern memorization. If the LLMs are actually reasoning and aren't just pattern memorizing, then extending the number of reasoning tokens proportionately with the task length should let the LLMs maintain performance on the tasks instead of catastrophically failing. Because this isn't the case, Apple's paper is evidence for what big names like Gary Marcus, Yann LeCun, and many pundits and analysts have been repeatedly saying: LLMs achieve their results through memorization, not generalization, especially not out-of-distribution generalization.
A surprising number of the commenters seem to be at least considering the intended message... which makes the contrast of the number of comments failing at basic reading comprehension that much more absurd (seriously, it's absurd how many comments somehow missed that the author was living in and working from Brazil and felt it didn't reflect badly on them to say as much in the HN comments).
I struggle to think of a good reason why such prominent figures in politics and tech would associate themselves with such an event.
There is no good reason, but there is an obvious bad one: these prominent figures have racist sympathies (if they aren't "outright" racist themselves) and, between a lack of empathy and a position of privilege, don't care about the negative effects of boosting racist influencers.
I've been waiting for this. I wish it had happened sooner, before DOGE could do as much damage as it did, but better late than never. Donald Trump isn't going to screw around, and, ironically, DOGE has shown you don't need congressional approval or actual legal authority to screw over people funded by the government, so I am looking forward to Donald screwing over SpaceX or Starlink's government contracts. On the returning end... Elon doesn't have that many ways of properly screwing with Trump; even if he has stockpiled blackmail material, I don't think it will be enough to turn MAGA against Trump. Still, I'm somewhat hopeful this will lead to larger infighting between the techbro alt-righters and the Christofascist alt-righters.
- "tickled pink" is a saying for finding something humorous
- "BI" is business insider, the newspaper that has the linked article
- "chuds" is a term of online alt-right losers
- OFC: of fucking course
- "more dosh" mean more money
- "AI safety and alignment" is the standard thing we sneer at here: making sure the coming future acasual robot god is a benevolent god. Occasionally reporter misunderstand it to mean or more PR-savvy promptfarmers misrepresent it to mean stuff like stopping LLMs from saying racist shit or giving you recipes that would accidentally poison you but this isn't it's central meaning. (To give the AI safety and alignment cultists way too much charity, making LLMs not say racist shit or give harmful instructions has been something of a spin-off application of their plans and ideas to "align" AGI.)
I've seen articles and blog posts picking at bits and pieces of Google's rep (lots of articles and blogs on their role in ongoing enshittification, and I recall one article on Google rejecting someone on the basis of a coding interview despite that person being the creator and maintainer of a very useful open source library, although that article was more a criticism of coding interviews and the mystique of FAANG companies in general), but many of these criticisms portray the problems as a more recent thing, and I haven't seen as thorough a takedown as mirrorwitch's essay.
It is definitely of interest; it might be worth making it a post on its own. It's a good reminder that even before Google dropped the phrase "don't be evil", they were still a megacorporation, just with a slightly nicer veneer.
Wow, that blows past Dunning-Kruger overestimation into straight-up Time Cube tier crank.
The space of possible evolved biological minds is far smaller than the space of possible ASI minds
Achkshually, Yudkowskian Orthodoxy says any truly super-intelligent minds will converge on Expected Value Maximization, Instrumental Goals, and Timeless Decision Theory (as invented by Eliezer), so clearly the ASI mind space is actually quite narrow.
Actually, as some of the main opponents of the would-be AGI creators, us sneerers are vital to the simulation's integrity.
Also, since the simulator will probably cut us all off once they've seen the ASI get started, by delaying and slowing down rationalists' quest to create AGI and ASI, we are prolonging the survival of the human race. Thus we are the most altruistic and morally best humans in the world!
Yeah, the commitment might be only a token amount of money as a deposit, or maybe even less than that. A sufficiently reliable and cost-effective (counting fuel and maintenance costs) supersonic passenger plane doesn't seem impossible in principle? Maybe cryptocurrency, NFTs, LLMs, and other crap like Theranos have given me low standards on startups: at the very least, Boom is attempting to make something that is in principle possible (for within an OOM of their requested funding) and that wouldn't be useless or criminal in the case that it actually works, and would solve a real (if niche) need. I wouldn't be that surprised if they eventually produce a passenger plane... a decade from now, well over the originally planned budget target, that is too costly to fuel and maintain for all but the most niche clientele.
I just now heard about it here. Reading about it on Wikipedia... they had a mathematical model that said their design shouldn't generate a sonic boom audible from ground level, but it was possible their mathematical model wasn't completely correct, so building a 1/3 scale prototype (apparently) validated their model? It's possible their model won't be right about their prospective design, but if it was right about the 1/3 scale, then that is good evidence their model will be right? idk, I'm not seeing much that is sneerable here, it seems kind of neat. Surely they wouldn't spend the money on the 1/3 scale prototype unless they actually needed the data (as opposed to it being a marketing ploy or, worse yet, a ploy for more VC funds)... surely they wouldn't?
iirc about the Concorde (one of only two supersonic passenger planes), it isn't so much that supersonic passenger planes aren't technologically viable, it's more a question of economics (with some additional issues with noise pollution and other environmental issues). Limits on their flight paths because of the sonic booms were one of the problems with the Concorde, so at least they won't have that problem. And as to the other questions... Boom Supersonic's webpage directly addresses them, though not in any detail, but at least they address them...
Looking for some more skeptical sources... this website seems interesting: https://www.construction-physics.com/p/will-boom-successfully-build-a-supersonic . They point out some big problems with Boom's approach. Boom is designing both its own engine and its own plane, and the costs are likely to run into the limits of their VC funding even assuming nothing goes wrong. And even if they get a working plane and engine, the safety, cost, and reliability needed for a viable supersonic passenger plane might not be met. And... XB-1 didn't actually reach Mach 2.2 and was retired after only a few flights. Maybe it was a desperate ploy for more VC funding? Or maybe it had some unannounced issues? Okay... I'm seeing why this is potentially sneerable. There is a decent chance they entirely fail to deliver a plane with the VC funding they have, and even if they get that far, it is likely to fail as a commercially viable passenger plane. Still, there is some possibility they deliver something... so eh, wait and see?
As the other comments have pointed out, an automated search for this category of bugs (done without LLMs) would do the same job much faster, with much less computational resources, without any bullshit or hallucinations in the way. The LLM isn't actually a value add compared to existing tools.
Of course, part of that wiring will be figuring out how to deal with the signal to noise ratio of ~1:50 in this case, but that’s something we are already making progress at.
This line annoys me... LLMs excel at making signal-shaped noise, so separating out an absurd number of false positives (and investigating false negatives further) is very difficult. It probably requires that you have some sort of actually reliable verifier, and if you have that, why bother with LLMs in the first place instead of just using that verifier directly?
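To put a rough number on why a ~1:50 signal-to-noise ratio is such a problem: a quick Bayes' rule calculation (my own illustration, with made-up filter accuracies, not figures from the quoted post) shows that even a filter that's right 90% of the time still buries you in false positives at that base rate.

```python
# Back-of-the-envelope sketch (hypothetical numbers, not from the source):
# at a ~1:50 real-bug-to-junk ratio, what fraction of flagged reports
# would actually be real bugs, given a reasonably accurate filter?

def precision(base_rate: float, sensitivity: float, specificity: float) -> float:
    """Fraction of flagged items that are true positives (Bayes' rule)."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

base_rate = 1 / 51  # ~1 real bug per 50 junk reports
p = precision(base_rate, sensitivity=0.90, specificity=0.90)
print(f"{p:.0%}")   # roughly 15% of flagged reports are real
```

So even with an optimistically accurate filter, about five in six flags would still be noise, which is why you'd want an actually reliable verifier rather than another probabilistic layer.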
He hasn't missed an opportunity to ominously play up genAI capabilities (I remember him doing so as far back as AI dungeon), so it will be a real break for him to finally admit how garbage their output is.