Posts: 28 · Comments: 358 · Joined: 1 yr. ago

TechTakes @awful.systems

"Power Cut" - Ed Zitron on Microsoft's recent pullback on building servers

  • Starting things off here with a sneer thread from Baldur Bjarnason:

    Keeping up a personal schtick of mine, here's a random prediction:

    If the arts/humanities gain a significant degree of respect in the wake of the AI bubble, it will almost certainly gain that respect at the expense of STEM's public image.

    Focusing on the arts specifically, the rise of generative AI and the resultant slop-nami has likely produced an image of programmers/software engineers as inherently incapable of making or understanding art, given AI slop's soulless nature and inhumanly poor quality, if not outright hostile to art/artists thanks to gen-AI's use in killing artists' jobs and livelihoods.

  • New opinion piece from the Guardian: AI is ‘beating’ humans at empathy and creativity. But these games are rigged

    The piece is one lengthy sneer aimed at tests trying to prove humanlike qualities in AI, with a passage at the end publicly skewering techno-optimism:

    Techno-optimism is more accurately described as “human pessimism” when it assumes that the quality of our character is easily reducible to code. We can acknowledge AI as a technical achievement without mistaking its narrow abilities for the richer qualities we treasure in each other.

  • New piece from Baldur Bjarnason: AI and Esoteric Fascism, which focuses heavily on our very good friends and their link to AI as a whole. Ending quote's pretty solid, so I'm dropping it here:

    I believe that the current “AI” bubble is an outright Neo-Nazi project that cannot be separated from the thugs and fascists that seem to be taking over the US and indivisible from the 21st century iteration of Esoteric Neo-Nazi mysticism that is the TESCREAL bundle of ideologies.

    If that is true, then there is simply no scope for fair or ethical use of these systems.

    Anyways, here's my personal sidenote:

As I've mentioned a bajillion times before, I've predicted this AI bubble would kill AI as a concept, as its myriad harms and failures indelibly associate AI with glue pizzas, artists getting screwed, and other such awful things. After reading through this, it's clear I've failed to take into account the political elements of this bubble, and how they'd affect things.

    My main prediction hasn't changed - I still expect AI as a concept to die once this bubble bursts - but I suspect that AI as a concept will be treated as an inherently fascist concept, and any attempts to revive it will face active ridicule, if not outright hostility.

  • Baldur's given his thoughts on Bluesky - he suspects Zitron's downplayed some of AI's risks, chiefly in coding:

    There’s even reason to believe that Ed’s downplaying some of the risks because they’re hard to quantify:

    • The only plausible growth story today for the stock market as a whole is magical “AI” productivity growth. What happens to the market when that story fails?
    • Coding isn’t the biggest “win” for LLMs but its biggest risk

    Software dev has a bad habit of skipping research and design and just shipping poorly thought-out prototypes as products. These systems get increasingly harder to update over time and bugs proliferate. LLMs for coding magnify that risk.

    We’re seeing companies ship software nobody in the company understands, with edge cases nobody is aware of, and a host of bugs. LLMs lead to code bases that are harder to understand, buggier, and much less secure.

    LLMs for coding isn’t a productivity boon but the birth of a major Y2K-style crisis. Fixing Y2K cost the world’s economy over $500 billion USD (corrected for inflation), most of it borne by US institutions and companies.

    And Y2K wasn’t promising magical growth on the order of trillions so the perceived loss of a failed AI Bubble in the eyes of the stock market would be much higher

    On a related note, I suspect programming/software engineering's public image is going to spectacularly tank in the coming years - between the impending Y2K-style crisis Baldur points out, Silicon Valley going all-in on sucking up to Trump, and the myriad ways the slop-nami has hurt artists and non-artists alike, the pieces are in place to paint an image of programmers as incompetent fools at best and unrepentant fascists at worst.

  • Ran across a piece of AI hype titled "Is AI really thinking and reasoning — or just pretending to?".

    In lieu of sneering the thing, here's some unrelated thoughts:

The AI bubble has done plenty to broach the question of "Can machines think?" that Alan Turing first asked in 1950. The myriad failures and embarrassments it's given us offer plenty of evidence to suggest they can't - to repeat an old prediction of mine, I expect this bubble is going to kill AI as a concept, utterly discrediting it in the public eye.

    On another unrelated note, I expect we're gonna see a sharp change in how AI gets depicted in fiction.

    With AI's public image being redefined by glue pizzas and gen-AI slop on one end, and by ethical contraventions and Geneva Recommendations on another end, the bubble's already done plenty to turn AI into a pop-culture punchline, and support of AI into a digital "Kick Me" sign - a trend I expect to continue for a while after the bubble bursts.

    For an actual prediction, I predict AI is gonna pop up a lot less in science fiction going forward. Even assuming this bubble hasn't turned audiences and writers alike off of AI as a concept, the bubble's likely gonna make it a lot harder to use AI as a plot device or somesuch without shattering willing suspension of disbelief.

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 2 March 2025

  • New piece from Brian Merchant: 'AI is in its empire era'

    Recently finished it, here's a personal sidenote:

    This AI bubble's done a pretty good job of destroying the "apolitical" image that tech's done so much to build up (Silicon Valley jumping into bed with Trump definitely helped, too) - as a matter of fact, it's provided plenty of material to build an image of tech as a Nazi bar writ large (once again, SV's relationship with Trump did wonders here).

    By the time this decade ends, I anticipate tech's public image will be firmly in the toilet, viewed as an unmitigated blight on all our daily lives at best and as an unofficial arm of the Fourth Reich at worst.

As for AI itself, I expect its image will go into the shitter as well - assuming the bubble burst doesn't destroy AI as a concept like I anticipate, it'll probably be viewed as a tech with no ethical use, built first and foremost to enable and perpetrate atrocities to its wielder's heart's content.

  • In other news, Brian Merchant's going full-time on Blood in the Machine.

Did notice a passage in the announcement which caught my eye:

    Meanwhile, the Valley has doubled down on a grow-at-all-costs approach to AI, sinking hundreds of billions into a technology that will automate millions of jobs if it works, might kneecap the economy if it doesn’t, and will coat the internet in slop and misinformation either way.

I'm not sure if it's just me, but it strikes me as telling about how AI's changed the cultural zeitgeist that Merchant's happily presenting automation as a bad thing without getting backlash (at least in this context).

  • With the wide-ranging theft that AI bros have perpetrated, AI corps' abuse of the research exception, AI's ability to directly compete with original work (Exhibit A) and the myriad other things autoplag has unleashed on us, I suspect we're gonna see a significant weakening of fair use.

    Giving a concrete prediction, the research exception's gonna be at high risk of being repealed. OpenAI et al crossed the Rubicon when they abused it to launder their "research data" into making their autoplags - any future research case will have to contend with allegations of being a for-profit operation in disguise.

On a more cultural front, if Bluesky's partnering with ROOST and the shitshow it kicked off is any indication, any use (if not mention) of AI is gonna lead to immediate accusations of theft. Additionally, to pull up an old comment of mine, FOSS licenses are likely gonna dive in popularity as people come to view any form of open-source as asking for AI bros to steal your code.

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 16th February 2025

  • People have worked out how to cram DeepSeek onto a Raspberry Pi

    Anyways, here's a quasi-related sidenote:

    Part of me suspects DeepSeek is gonna quickly carve out a good chunk of the market for itself - for SaaS services looking for spicy autocomplete or a slop generator to bolt on to their products, DeepSeek's high efficiency gives them a way to do it that doesn't immediately blow a massive hole in their finances.

  • I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to try to actively feed complete garbage to scrapers instead - whether by sticking a bunch of garbage on webpages to mislead scrapers or by trying to prompt inject the shit out of the AIs themselves.

    Me, predicting how anti-scraping efforts would evolve

    (I have nothing more to add, I just find this whole development pretty vindicating)
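The "feed garbage to scrapers" idea above can be sketched in a few lines. This is a minimal illustration, not a vetted implementation: the user-agent signatures and word list are assumptions for the sake of the example, and a real deployment would need far more robust bot detection than string matching.

```python
import random

# Hypothetical substrings seen in AI-crawler user agents.
# These names are assumptions for illustration, not a vetted blocklist.
SCRAPER_SIGNATURES = ("GPTBot", "CCBot", "Bytespider", "ClaudeBot")

# Filler vocabulary for generating nonsense prose.
WORDS = ["glue", "pizza", "synergy", "quantum", "artisanal", "blockchain"]


def looks_like_scraper(user_agent: str) -> bool:
    """Naive check: does the user agent mention a known crawler signature?"""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in SCRAPER_SIGNATURES)


def garbage_page(seed: int = 0, sentences: int = 5) -> str:
    """Generate deterministic nonsense prose to poison a scraper's corpus."""
    rng = random.Random(seed)
    out = []
    for _ in range(sentences):
        words = rng.choices(WORDS, k=8)
        out.append(" ".join(words).capitalize() + ".")
    return " ".join(out)


def respond(user_agent: str, real_content: str) -> str:
    """Serve the real page to humans, garbage to suspected scrapers."""
    if looks_like_scraper(user_agent):
        # Seed from the user agent so each crawler gets a stable fake page.
        return garbage_page(seed=sum(map(ord, user_agent)) % 1000)
    return real_content
```

Prompt injection would work differently - embedding adversarial instructions in the served text rather than pure noise - but the routing logic (detect the scraper, swap the payload) would look much the same.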

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 12th January 2025

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 5th January 2025

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 29th December 2024

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd December 2024

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 15th December 2024

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 8th December 2024

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 1st December 2024

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 24th November 2024

  • TechTakes @awful.systems

    Where's Your Ed At - Lost In The Future

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 17th November 2024

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 10th November 2024

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 3rd November 2024

  • TechTakes @awful.systems

    You Can't Make Friends With The Rockstars - Ed Zitron on the tech press

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 20 October 2024

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 13 October 2024

  • TechTakes @awful.systems

    "The Subprime AI Crisis" - Ed Zitron on the bubble's impending collapse

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 September 2024