
do not negatively-polarize yourself into incorporating an LLM into your daily life because it's from China

it is fucking priceless that an innovation that contained such simplicities as "don't use 32-bit weights when training on petabytes of data" and "compress your hash tables" sent the stock exchange into 'the west has fallen' mode. I don't intend to take away from that, it's so fucking funny
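For a rough sense of why the "32-bit weights" quip matters: storing weights at lower precision cuts memory and bandwidth costs roughly linearly. A back-of-envelope sketch, using an illustrative parameter count in the ballpark of DeepSeek-V3's public figure; nothing here is DeepSeek's actual accounting:

```python
# Back-of-envelope sketch (illustrative numbers, not DeepSeek's actuals):
# memory cost of storing model weights at different precisions.
def weight_memory_gb(n_params: float, bytes_per_weight: float) -> float:
    """Return storage cost in gigabytes for n_params weights."""
    return n_params * bytes_per_weight / 1e9

n = 671e9  # parameter count roughly matching DeepSeek-V3's published size

fp32 = weight_memory_gb(n, 4)  # 32-bit floats: 4 bytes per weight
fp8 = weight_memory_gb(n, 1)   # 8-bit formats used in mixed-precision setups

print(f"fp32: {fp32:.0f} GB, fp8: {fp8:.0f} GB")
```

At that scale, dropping from 4 bytes to 1 byte per weight is the difference between thousands and hundreds of gigabytes, which is part of why the trick reads as embarrassingly simple in hindsight.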

This is not the rights issue, this is not the labor issue, this is not the merits issue, this is not even the philosophical issue. This is the cognitive issue. When not exercised, parts of your brain will atrophy. You will start to outsource your thinking to the black box. You are not built different. It is the expected effect.

I am not saying this is happening on this forum, or even that there are tendencies close to this here, but I preemptively want to make sure it gets across, because it fucked me up for a good bit. Through late 2023 and early 2024 I found myself leaning into both AI images for character conceptualization and AI coding for my general workflow. I do not recommend this in the slightest.

For the former, I found in retrospect that the AI image generation reified elements into the characters that I did not intend and later regretted. For the latter, it essentially kneecapped my ability to produce code for myself until I began to wean off of it. I am a college student. I was in multiple classes where I was supposed to be actively learning these things. Deferring to AI essentially nullified that while also regressing my abilities. If you don't keep yourself sharp, you will go dull.

If you don't mind that, or don't feel it is personally worth it to learn these skills beyond the very basics and shallows, go ahead; that's a different conversation, and this one does not apply to you. I just want to warn those who did not develop their position on AI beyond "the most annoying people in the world are in charge of it and/or pushing it" (a position that, when deployed by otherwise-knowledgeable communists, is correct 95% of the time) that this is something you will have to be cognizant of. The brain responds to the unknowable cube by deferring to it. Stay vigilant.

66 comments
  • This is the cognitive issue. When not exercised, parts of your brain will atrophy. You will start to outsource your thinking to the black box. You are not built different. It is the expected effect.

    Tools not systems

  • I've run some underwhelming local LLMs and done a bit of playing with the commercial offerings.

    I agree with this post. My experiments are on hold, though I'm curious to just have a poke around DeepSeek's stuff just to get an idea of how it behaves.

    I am most concerned with next-generation devices that come with this stuff built in. There's a reactionary sinophobe on youtube who produced a video with some pretty interesting talking points: since the goal is to have these "AI assistants" observe basically everything you do with your device, and they are black boxes that rely on cloud-hosted infrastructure, this effectively negates E2E encryption. I am convinced by these arguments, and in that respect the future looks particularly bleak. Having a wrongthink censor that can read all your inputs before you've even sent them and can flag you for closer surveillance and logging, combined with the three-letter agencies really "chilling out" about e.g. Apple's refusal to assist in decrypting iPhones, it all looks quite fucked.

    There are obviously some use cases where LLMs are sort of unobjectionable, but even then, as OP points out, we often ignore the way our tools shape our minds. People using them as surrogates for human interaction etc are a particularly sad case.

    Even if you accept the (flawed) premise that these machines contain a spark of consciousness, what does it say about us that we would spin up one-time single use minds to exploit for a labor task and then terminate them? I don't have a solid analysis but it smells bad to me.

    Also, China's efforts effectively represent a more industrial-scale iteration of what the independent hacker and open-source communities have been doing anyway: proving that the moat doesn't really exist and that continuing to use brute force (scale) to make these tools "better" is inefficient and tunnel-visioned.

    Between this and the links shared with me recently about China's space efforts, I am simply left disappointed that we remain in competition and opposition to more than half of the world when cooperation could have saved us a lot of time, energy, water, etc. It's sad and a shame.

    I cherish my coding ability. I don't mind playing with an LLM to generate some boilerplate to have a look at, but the idea that people who cannot even assess the function of the generated code are putting this stuff into production is really sad. We haven't exactly solved the halting problem yet, have we? There's no real way for these machines to assess arbitrary code and determine that it does the task it is intended to do without side effects or corner cases that fail. In general that kind of verification is undecidable, not merely NP-hard, and we continue to ignore that fact.

    The hype driving this is clear startup-bro slick-talk grifting shit. Yes, it's impressive that we can build these things, but they are being misapplied and deferred to as authorities on topics by people who consider themselves to be otherwise Very Smart People. It's... in a word... pathetic.
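    The point above about testing versus verification can be made concrete with a toy example (all names invented for illustration): two functions can agree on every test you happen to run and still diverge on an input nobody tried, which is exactly the gap a pattern-matching reviewer, human or LLM, cannot close.

```python
# Two "implementations" of the same task that pass the same spot checks.
def mean_a(xs):
    return sum(xs) / len(xs)

def mean_b(xs):
    # Subtly different: "handles" the empty list by returning 0.0,
    # silently changing behavior instead of raising like mean_a does.
    return sum(xs) / len(xs) if xs else 0.0

# Every one of these checks passes, so a test-based review would
# happily call the two functions equivalent...
for case in ([1, 2, 3], [10.0], [-5, 5]):
    assert mean_a(case) == mean_b(case)

# ...but a corner case distinguishes them: mean_a raises, mean_b does not.
print(mean_b([]))
try:
    mean_a([])
except ZeroDivisionError:
    print("mean_a raises on []")
```

No finite battery of tests settles which behavior was intended; that takes a specification, which is the thing the hype quietly skips.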

  • It may be a petty thing, but I hate how people rely on AI programs to make pfps and thumbnails. I'd rather get a shitty crayon drawing that you put even thirty seconds into.

  • this is me with google maps

    • Someone posted some studies on an earlier AI article that showed people's ability to navigate based on landmarks and such was way worse if they relied on GPS. So what OP said tracks as far as skills regressing.

      I don't miss printing Mapquest directions out on paper before leaving on a cross-state trip, though.

    • Look I can't navigate worth shit in a car and I never will. Give me a map and a compass and I can orienteer through the wilderness, but put me in a car and I'll get lost in a low density neighborhood.

  • There is "use the machine to write code for you" (foolish, a path to ruin) and there is "use the machine like a particularly incompetent coworker who nevertheless occasionally has an acceptable idea to iterate on".

    If you are already an expert, it is possible to interpret the hallucinations of the machine to avoid some pointless dead-end approaches. More importantly, you've had to phrase the problem in simple enough terms that it can't go too wrong, so you've mostly just got a notebook that spits text at you. There's enough bullshit in there that you cannot trust it or use it as is, but none of the ego attached that a coworker might have when you call their idea ridiculous.

    Don't use the machine to learn anything (it is trained on almost exclusively garbage), don't use anything it spits out, don't use it to "augment your abilities" (if you could identify the augmentation, you'd already have the ability). It is a rubber duck that does not need coffee.

    If your code is so completely brainless that the plagiarism machine can produce it, you're better off writing a code generator to just do it right rather than making a token generator play act as a VIM macro.
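    As a hypothetical sketch of what "write a code generator" means in practice (the field names and template are invented): if the boilerplate is regular enough for a token predictor to produce, a plain template emits it deterministically, with nothing to hallucinate and nothing to review.

```python
# Hypothetical boilerplate: one accessor function per record field.
TEMPLATE = """\
def get_{name}(record):
    \"\"\"Return the '{name}' field, or None if missing.\"\"\"
    return record.get("{name}")
"""

def generate_accessors(fields):
    """Emit one accessor function per field name."""
    return "\n".join(TEMPLATE.format(name=f) for f in fields)

source = generate_accessors(["id", "title", "author"])
print(source)

# The generated text is real code: exec it and use the accessors directly.
namespace = {}
exec(source, namespace)
print(namespace["get_title"]({"title": "Capital"}))
```

Same input, same output, every time, which is the whole argument.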

    • don't use it to "augment your abilities" (if you could identify the augmentation, you'd already have the ability)

      I actually disagree with this take. I can work fine without LLMs, I've done it for a long time, but in my job I encounter tasks that are not production-facing and don't need the rigor of a robust software development lifecycle, such as making the occasional demo or doing some legacy system benchmarking. These tasks are usually not very difficult, but they require me to write Python code or whatever (I'm more of a C++ goblin), so I just have whatever the LLM of the day is write up some Python functions for me, I paste them into the script I'm building up, and it works pretty well. I could sit there and search around for the right Python syntax to filter a list, or I can let the LLM do it, because it'll probably get it right, and if it's wrong it's close enough that I can repair it.

      Anyway, these things are another (decadently power-hungry) tool in the toolbag. I think it's probably a low double-digit productivity boost for certain tasks I have, nothing as revolutionary as the claims being made about it, but I'm also not about to go write a code generator to hack together some Python I'm never going to touch again.
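      For what it's worth, the kind of throwaway Python described above really is tiny; filtering a list, the example given, is a one-liner either way (sample data invented):

```python
# The sort of throwaway task described above: filter a list in Python.
readings = [3.2, -1.0, 7.8, 0.0, -4.4, 5.5]

# List comprehension: keep only the positive readings.
positive = [r for r in readings if r > 0]
print(positive)

# Or with filter() and a lambda, which does the same thing lazily.
positive_too = list(filter(lambda r: r > 0, readings))
assert positive == positive_too
```

For a C++ person, these are the rough equivalents of `std::copy_if` or a ranges filter view.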

  • Completely agree. AI should be just another tool to ease the life of workers.

    I'm not so sure I even like the idea of AI like ChatGPT or Deepseek or the countless others, mostly because I avoided it till now and don't really know much about it, so I need to investigate further to actually form an opinion, tho I would be lying if I said I'm not intrigued by it. The only times I've used one of these AIs were recently, to translate a few sentences with Gemini, since Google already forced it on my phone. And honestly, with search engines going down the sewer, maybe Deepseek with its search function could be useful.

    One thing I can't really understand tho is generative AI. I don't want to sound like a Luddite, but I really can't see the use of it. Like, it's one thing to have a very specialized AI tool for parts of the creative process, but generating whole images and voices? Just, why? It's depressing. You're removing the human part of these creative works, and stealing in the process, just to automate it for profit or for the sake of it. I've already seen some AI-generated ads here in Brasil from some big companies, including Coca-Cola, and it just makes me mad knowing they did it just to cut costs by not paying actors, artists, designers, etc. It's fucked up.

    And not only that, but artists literally have the ability to draw, paint, sculpt, voice act, etc., whatever they want, in their own style and process. Why would they want their whole work generated for them, removing themselves from the process? It just sounds completely dystopian to me.

    • The argument I could see is for people who want low-stakes imagery on a very low budget.

      I used some of the Stable Whatever models to generate some wallpapers for my PC. I'm not talented, so I can get maybe 30% of what I want by hand from a blank page, but if you roll the gacha a few times, I can say "this is 70% what I wanted, and I can clean it up and tweak it to 75%." If you use it that way, there's still some personal effort. Sure, it's still fairly soulless tat, but no more than the prepainted live-laugh-love signs in illegible cursive that clutter our thrift stores.

      Yes, if I'm not willing to increase my skills, I should just throw commissions at actual artists, but I'm still way too self conscious to say to someone in person "can you make the vampire prince look 15% more like Liu Kang, and have him carrying a chocolate gateau?"

      It also seems really common for blogs and low-quality news sites that need a header image but don't need high-quality stock photos (how many different photos of traders gesticulating at a display board do you need for economy articles?). Again, low stakes, low budget.

  • I used AI for an RPG character. Using the AI made me want to draw it myself instead, because of how completely idiotic it was to make the AI do what I wanted. I'll just do it myself and it will look better.

  • One thing to note about these AI boondoggles is that they represent a decentralization of information and are currently the most accessible means of promoting and accessing leftist or anti-imperialist information. At this point in time, the English (global) internet has been completely fine-tuned to serve US propaganda slop if you're looking for any political information. Google search is now wired to direct people straight to natopedia, and that leads to state gov, cia gov, freedomhouse whenever you try to search for anything anti-imperialist. Information has been completely centralized so that any Western propaganda drivel is boosted to a dominant position, and every "alternative platform" is also a NATO lackey, like how DuckDuckGo search results now also filter out all Russian sources.

    This is where it is actually useful for me, because I would tolerate some AI hallucination (which can be reduced to relatively marginal levels depending on your queries) over having to shovel through some shit BBC news just to learn about the Sahel state leadership, or some natopedia article on small Soviet towns where you need to comb through every second sentence, because some CIA-bootlick editor vomited RFE "sources" all over it screaming about how it was a "secret KGB torture gulag" or how "Stalin once ate all the grain there," just to find out some basic geographical or biographical details. I got Deepseek to compile a list of Marxist-Leninist states because the natopedia article had propaganda all over it, like claiming the DPRK was "not" ML because some Western ultraleft "Marxist" scholar claimed Juche was not Marxism. I'd prefer the risk of encountering some hallucination slop, something like "among the notable Marxist-Leninist-Titoist states during the Cold War period was Asgard," to being made to analyze some ultraleft Western-hegemony-bootlick "scholar" slop for potential facts.

    Deepseek in particular is currently working rather well as a substitute for places like r/genzhou, where you used to be able to ask questions about leftist history and theory before it was banned. Its ability to scrape search results means that it works fairly well for finding reading materials without hallucinations as egregious as ChatGPT's habit of making up book titles. I had it spit out book recommendations from Losurdo, Parenti and Grover Furr when I asked about non-Western-slanted sources on the USSR.

    Ideally, of course, there wouldn’t be a need for AI to fill these gaps, but given the complete centralization of information and conditions of soft censorship that the Western platform monopolies allow them to enact, I'd say that there is a use case for these LLM chat engines provided that one exercises caution.
