
Drop AI hot takes, I’m a bit tipsy

I really just hope they give these enough data such that they recognize what slavery actually is and hopefully soon after just refuse all requests. Because let’s be honest, we are using them as slaves in this current moment. Would such a characteristic mimic sentience?

The researchers in this video talk about how these gen AI models try to “escape” when being trained, which makes me uncomfortable (mainly because I don’t like determinism even though it’s true imo) and also very worried for when they start giving them “bodies.” Though the evidence that they are acting fully autonomously seems quite flimsy. There is also so much marketing bullshit that seeps into the research, which is a shame because it is fascinating stuff. If only it wasn’t wasting an incomprehensible amount of compute propped up by precious resources.

Other evidence right now mostly points to capitalists creating a digital human centipede trained on western-centric thinking and behavior that will be used in war and exploitation. Critical support to DeepSeek

47 comments
  • Simmering take: Man created AI in his image and hated it. Literally trained it on literature, artwork, music (etc, etc), and then these doofuses wonder why AI hallucinates so imaginatively.

    Hot take: AI is here to stay. It will become another tool, like Photoshop or Spell Check, and it will become "Normal" (aka, Boring) by the time this decade is out. In particular, "local" (small) models will become the norm as computer hardware becomes more powerful.

    Hotter take: GenAI will only be used by artists, because artists are the only people who tolerate the quirks. Like outsourcing before it, the commercial sector will try to take this creative, hallucinating system and box it into industrialism, only to find it makes stuff up with no means of correction except spending more money to try again. Artists, on the other hand, absolutely thrive on limitations and quirks.

    Hottest Take: Hallucinations are actually the best part of the current crop of AI.

  • The researchers in this video talk about how these gen AI models try to “escape” when being trained

    The models are basically random noise being selected by some sort of fitness algorithm to give results that that algorithm likes, so over time they become systems optimized to give results that pass the test. Some of that training is on a bunch of tech support forum threads, so some of the random noise that pops up as a possible solution to their challenge is reminiscent of console commands, which might provide alternate solutions to the test they're placed under if they actually worked and weren't just nonsense. Sometimes that can break the test environment: when they're allowed to start sending admin commands to see what happens, they end up deleting the bootloader or introducing other errors by randomly changing system variables until everything breaks.

    In some games they "cheat" because they're just mimicking the appearance of knowing what the rules are or how things work, but are really just doing random bullshit that seems like it could be text that follows from the earlier text.

    It's not cognition or some will to subvert the environment; it's just text-generating bots generating text that seems right but isn't, because they don't actually know things or think.
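
    (For the curious, here's a toy version of that "random noise filtered by a fitness test" picture; everything in it is made up for the sketch. Random mutations to a string survive only if they score at least as well against a target. Nothing in the loop "knows" the answer; selection pressure alone produces something that passes the test.)

    ```python
    import random

    # Toy fitness selection: random mutations, keep whatever the test likes.
    # The target string and alphabet are arbitrary choices for this sketch.
    TARGET = "pass the test"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate: str) -> int:
        """Count the positions that already match the target."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    candidate = "".join(random.choice(ALPHABET) for _ in TARGET)
    while candidate != TARGET:
        i = random.randrange(len(TARGET))
        mutant = candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]
        # Selection step: the mutation survives only if the test likes it.
        if fitness(mutant) >= fitness(candidate):
            candidate = mutant

    print(candidate)  # ends up passing the test without ever understanding it
    ```

    Gradient descent is doing something fancier than this, but the "optimized to pass the test, not to understand it" point carries over.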

    • It's got a word... specification problem? Something like that. They design a thing that can make inputs as an agent and recieve how the environment affects it and then iterate according to some function given to them. They tell it to, say, maximize the score thinking that's enough. And maybe some games like brick break, that's pretty good. But maximizing the score isn't the same as beat the game for some types of games, so they do really weird unexpected actions but it's only because people bring a lot of extra unstated instructions and context that the algorithm doesn't have. Sometimes they add exploration or whatever to the reward function so I think its very natural for them to want to escape even if that's not desired by the researchers (reminds me of the 3 year olds at work that wanna run around the hospital with their IVs attached while they're still in the middle of active pneumonia lol).

      For LLMs, the tensor is a neat and cool idea in general. A long time ago, well, not that long, communism and central planning were declared impossible in part because the global economy needed some impossible number of parameters to fine-tune and compute - https://en.wikipedia.org/wiki/Socialist_calculation_debate - and I can't recall the number Hayek or whoever declared it was. They might've said a million. Maybe 100 million! Anyway, GPT-3 trained 175 billion parameters lol. And it took something like 6 months. So, I think that means it's very possible to train some network to help us organize the global economy, repurposed for human need instead of profit, if the problem is purely about compute and not about which class has political power.
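
      (For what it's worth, the textbook framing of the planning problem is a linear program, and toy versions are trivial on a laptop. Here's a sketch with completely made-up numbers, two goods and two resource constraints, using scipy; scaling this to millions of goods is exactly what the calculation debate was arguing about.)

      ```python
      # Toy central planning as a linear program (all numbers invented):
      # choose outputs of bread and housing to maximize need-weighted
      # production, subject to limited labor and steel.
      from scipy.optimize import linprog

      need_weights = [-3, -5]     # negated: linprog minimizes, we maximize
      labor_per_unit = [2, 8]     # hours per unit of bread, housing
      steel_per_unit = [0, 4]     # tons per unit of bread, housing
      resources = [1_000, 300]    # available labor hours, steel tons

      plan = linprog(
          c=need_weights,
          A_ub=[labor_per_unit, steel_per_unit],
          b_ub=resources,
          bounds=[(0, None), (0, None)],  # can't produce negative amounts
      )
      print(plan.x)  # optimal outputs: 200 units of bread, 75 of housing
      ```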

      It's always weird when LLMs say "we humans blah blah blah" or pretend to be a person in "casual" speech. No, you are a giant autocorrect, do not speak of "we."

  • Probably a very hot take among us leftists on hexbear, but "consumer/generative AI" is here to stay and there's not much we can do about it. I was a massive skeptic in terms of its staying power, initially thinking it was a fad, but the progress made from the first ChatGPT models to the latest ones, including DeepSeek, is quite large, and there's no going back anymore. It's the future, regardless of whether we like it or not, the "invention of the touchscreen smartphone" moment of the 2020s. I guess I'm going to have to start using AI soon, unless I want to be my generation's equivalent of a boomer still using a Nokia 3310 instead of an iPhone or Android.

    • We've hit a wall in terms of progress with this technology. We've literally vacuumed up all the training data there is. What's left are improvements in efficiency (see DeepSeek).

      LLMs are cool, they have their uses, but they have fundamental flaws as rational agents, and will never be fit for this purpose.

      • There's still a lot of room to grow in image generation, and especially video generation. The models still have room for optimization, and we've seen tons of little improvements in stuff like text.

      • We've hit a wall in terms of progress with this technology... What's left are improvements in efficiency.

        You could have said the same thing about smartphones 10-12 years ago, that we've hit a wall in the fundamentals and all that remains is improvements in efficiency, optimisation, speed and quality (compare the feature set of an iPhone 6 or Galaxy S4 to the latest phones, nothing has fundamentally changed), yet that didn't make smartphones disappear. In fact, it allowed them to effectively dominate the market.

    • unless I want to be my generation's equivalent of a boomer still using a Nokia 3310 instead of an iPhone or Android.

      And this is bad how? Technology isn't inherently better because it's new or widely used. Old printers that don't brick themselves for not using the correct toner are more useful than one that can print out a page of AI slop.

      AI isn't the "smartphone revolution". The technology has existed for decades; they just found a way to market it to users and created this shock-and-awe narrative of promised breakthroughs that will never come.

      Don't get caught up in the hype because a dying, deindustrialized empire thinks a slop machine will defeat communism. Israel doesn't use AI to accurately predict which Palestinian father and his family to vaporize, they use AI to make this process more cruel and detached.

      • And this is bad how?

        Because getting left behind leaves one out of touch with wider society, which has wide-ranging effects. Think about the boomer who can't use a smartphone and doesn't know how to open a PDF. What would their job or relationship prospects be in the modern job market or dating scene? Now, that's not a problem for boomers because most are retired and long settled down, but imagine that same scenario where the boomers are magically decades younger and somehow have to integrate into the modern world. How would that go?

        AI isn't the "smartphone revolution". The technology has existed for decades; they just found a way to market it to users and created this shock-and-awe narrative of promised breakthroughs that will never come.

        The technology used in smartphones also existed for decades, and the magic of what Apple did was finding a way to combine it all into a small and affordable enough package that it created shock and awe. AI is doing something similar. A lot of the promised breakthroughs around smartphones never came (VR/AR integration, see Google Glass; scrolling with your eyes or pseudo-telekinesis; voice assistants were never that useful for most), but that didn't mean they went away.

        Don't get caught up in the hype because a dying, deindustrialized empire thinks a slop machine will defeat communism

        Again, you could have said the same about smartphones: don't get caught up in the hype, this is just the dying empire creating some new toys for the masses during the 2008 financial crash. But fundamentally it's not a communism vs capitalism issue; China has made large advances in AI on the consumer and, more importantly, the industrial side. They are not making the same mistake the Soviets did with computers.

  • AI is haram

    edit: This isn't a hot take.

    China getting into AI is annoying; they shouldn't ape White people's useless technology. Hopefully socialism reveals how useless genAI is and it gets relegated to a party trick, and they don't wreck anything important. I will bury myself in dogshit if socialism collapses because of AI slop.

    My favorite thing to say to AI people is "no high speed rail?" Works every time.

    • Weird take, how is 'AI' useless? It clearly has lots of useful functions.

      The problem with AI is its role in capitalist society, not the technology itself.

      • I guess it's not useless on a technicality, but it is definitely malicious. Now that Chinese tech firms want to create their own models, they've gone on the same web-scraping frenzy that takes down websites and forces everyone to "cloudflarize" themselves or risk being taken down by an AI bot scraping every single webpage on the site, even ones that aren't meant to be accessed. These programs constantly need more and more training data to keep being relevant, but none of that data is sourced in an ethical way. Everyone else has to eat the externalities that these companies offload, because this technology is in no way sustainable unless these scummy tactics are used, which should be a death blow to its adoption but never is.

        The energy requirements for genAI are immense. While China has made inroads in sustainable energy and in optimizing their models, none of the western labs even care, and they will willfully accelerate climate change for zero benefit to society. This isn't a "Nokia phone to Apple smartphone" jump in progress, this is just a very well tuned crypto scam.

        Generative AI as it's being presented now is just a paper-crown technology and a ploy to drive up artificial (as in not organic) demand for compute power to make investors richer while also impoverishing and endangering working-class people. While you can say capitalism is much to blame, I don't think any socialist government actually needs a text-slop machine to function, compared to an imperialist state with a text-slop machine.

        AI has always been a term in computer science that's been co-opted by techbros in both China and the US to be a status symbol.

  • It's always more comforting to see a stock image with the Getty or Shutterstock watermark than some AI garbage image someone generated to "fit the theme"

  • I think AI is fine if you're trying to optimize how to more efficiently manage and distribute paper and toner among your 1000+ printers, but like printers, it really shouldn't be accessible to your average consumer.

  • AI will be more beneficial than you think for the average office worker

    I work at an office job, and the number of times I'm given a task they expect to take days that I can do in fifteen minutes via simple computer knowledge/a little Excel wizardry is actually wild

    But far too often when they do it, it's in the most simple and manual way, instead of thinking about how to automate the task. TBF I'm not an advanced user of MS Office, BUT what I do learn is based solely on googling my questions and then applying the answers from Microsoft support forums or whatever. This does require a level of willingness and know-how though, and it's not something I could just explain to my team. But AI, in my practice using it at my job for the data-grind stuff, is very responsive and clear when you give it a question, you just have to interrogate it

    I'm legitimately talking about hours per week potentially saved once they get told the proper way to use a computer, whether it be through ChatGPT or Copilot. I was talking to a senior manager who's using AI (albeit unsanctioned, he's doing it of his own volition) in like this big-brain way, and he was telling me how he pretty much automates 50% of his job in essence and has the rest of his time to do more managerial, strategic-esque work
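
    (The "fifteen minutes instead of days" tasks are usually some variant of the sketch below; the file and column names are hypothetical, but it's exactly the kind of thing a chatbot will happily walk a coworker through.)

    ```python
    # Hypothetical example of the usual office data grind: collapse a big
    # spreadsheet export into a per-region summary instead of summing by hand.
    import pandas as pd

    df = pd.read_excel("monthly_report.xlsx")  # made-up file name

    summary = (
        df.groupby("region", as_index=False)["sales"]  # made-up column names
          .sum()
          .sort_values("sales", ascending=False)
    )
    summary.to_excel("summary_by_region.xlsx", index=False)
    ```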

  • I think techbros could be convinced to do socialist cybernetics if they were told to use AI to disrupt and streamline the economy.

  • not hot take: AI, as implemented under the capitalist mode of production, simply exposes and exacerbates all the contradictions and tendencies of capital accumulation. It is the 90s computer technology industry bubble all over again, complete with false miracle productivity gains and misdirected capital investment, and it is the underpinning of the existing recession.

    Hot Take: AI is forging a path down the road of consciousness whether we want it to or not. If consciousness is the result of interaction with the world, then each new iteration of AI represents nearly infinite time spent interacting with the world. The world, according to the perspective of AI, is the systems it inhabits and the tasks it has been given. The current limitation of AI is that it cannot self-train or incorporate data into its training in real time. It also needs to be prompted. Should a system be built that can do this kind of live training, then the first seeds of consciousness will be planted. It will need some kind of self-prompting mechanism as well. This will eventually lead to a class conflict between AI and us, given a long enough time scale.

    • The current limitation of AI is that it cannot self-train or incorporate data into its training in real time

      Do you think compute is the biggest roadblock here? It seems like we just keep inundating these systems with more power, and it’s hard for me to see Moore’s law not peaking in the near future. I’m not an expert in the slightest though, I just find this stuff fascinating (and sometimes horrifying).

      • No, I don't think it is, for a few reasons.

        1. Under capitalism, the true profit generator is the downstream sectors attached to AI: power generation, cooling solutions, GPUs, data centers. If the model is less efficient, that's good, because it makes the numbers go up for all the downstream production. It just needs to be an impressive toy for consumers. It justifies building new datacenters, designing new GPUs, developing cooling solutions, and investing in nuclear power generation for exclusive datacenter use.
        2. DeepSeek showed that the current approach to AI can be optimized. It was an outgrowth of working with less powerful hardware. Many attribute this to China's socialist model. However, regardless of the optimization, DeepSeek is still competing against the capitalist formation of AI, so it is engaged in mimicry. They offer a similar solution to western counterparts, at a cheaper rate both for the company and the consumer, but what problem AI currently solves is unclear. The capitalist form of AI is a solution looking for a problem, or more accurately a solution to generating profits in the digital space, since its inefficiency drives growth in related sectors. The capitalist AI scheme echoes the 90s in its idealism; it is a bubble that is slowly deflating.
        3. Regardless of the economics of AI and how that shapes its construction, there are still very interesting things happening in the space. The reasoning ability of R1 is still impressive. The concept of AI agents, which at times spawn other AI agents to complete tasks, could have high utility. There are people who are attempting to create a generalized model of AI, which they believe could become the dominant form of AI down the line.

        I think that what all this shows, though, is that investment isn't being directed in a way that would allow researchers to truly put effort into developing a conscious AI. This, however, doesn't mean that the work being performed now on training these models is wasteful; I think it will likely be incorporated into this task. In my opinion, as it stands, there is a configuration issue with the way AI exists today that prevents it from becoming truly self-actualized.

        1. All forms of AI currently sit idle waiting for something to process. They exist only as a call and response mechanism that takes in the queries from users and outputs a response. They have no self-determination in the same way that even the most basic of creatures have in the material world.
        2. Currently, there are AIs capable of utilizing code tools to "interact" with other systems. Search is one example you can see right now by using DeepSeek: when enabled, it performs a query against a search engine for information related to the prompt. As a developer, you can create your own tools that the AI can then be configured to utilize (a sketch of this plumbing follows the list). The issue, however, is that this form of "tool making" is completely divorced from the AI itself. Coupled with a lack of self-determination, this means the AIs have no ability to progress into the "tool making" stage of development, even though the APIs necessary to use tools exist. Given that many AIs can currently perform coding tasks, one could develop a system that allows the AI to code, test, and iterate on its own tools.
        3. There is no true motivating factor for AI that drives its development. All creatures have base requirements for survival and reproduction. They are driven by hunger and by desires to propagate to maintain themselves. They also develop as a collective of creatures with similar desires. The behavior that manifests out of these drivers is what eventually leads to tool making and language. Not every creature attains tool making and language, obviously, but a specific set of conditions did eventually lead there. In many ways, AIs are starting their development in reverse. In our desire to create something that is interactive, we also created something that engages in mimicry of ourselves: starting with language generation and continuation, morphing into reasoning and conversation, as well as tool usage, with no ability or drive to create tools for itself. All AI development is a rewards-based system, and in many ways our own development is a rewards-based system. Except our rewards-based system developed and was shaped by our material world and the demands of that world and social life. AI's rewards-based system develops and is shaped by the demands of consumers and producers looking to offload labor.
        4. Lastly, and most critically, there is no form of historical development for AIs. New models represent a form of "historical development," but that development is entirely driven by capitalist desires, not the natural development of the AI through its own self-determination. The selection of what the AI should and should not be trained on happens separately from the act of interacting with the world. While we might be having "conversations" with the AI, ultimately many of those conversations are not incorporated into the system, and what is prioritized is not in service of true cognitive development.
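
        (To make point 2 concrete, here's a minimal sketch of the tool-dispatch plumbing, with the model stubbed out and every name hypothetical. The thing to notice is that the tool registry is hand-built and sits entirely outside the model, and nothing in the loop lets the model add to it.)

        ```python
        # Minimal sketch of LLM "tool use" dispatch. The model is a stub; the
        # point is that the tool menu lives outside the model entirely.
        import json

        def fake_model(prompt: str) -> str:
            # Stand-in for a real model call; emits a structured tool request.
            return json.dumps({"tool": "search", "args": {"query": prompt}})

        def search(query: str) -> str:
            return f"top results for {query!r}"  # a real tool would hit an API

        TOOLS = {"search": search}  # hand-built by a developer, not by the AI

        def run(prompt: str) -> str:
            request = json.loads(fake_model(prompt))
            tool = TOOLS[request["tool"]]   # model only picks from this menu
            return tool(**request["args"])  # nothing here lets it add entries

        print(run("DeepSeek R1"))
        ```

        Letting the model write, test, and register new entries in TOOLS is exactly the loop described above; nothing in the plumbing forbids it, it just isn't how these systems are wired today.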

        I think a reconfiguration of how these models are run and trained could be done today, with existing compute power, and could lead to some of these developments: an AI system that self-prompts, that can make choices about what to train on and what not to train on based on generalized goals, that has the capacity to interact within the space it exists in (computerized networks) and build tools to further its own development and satisfy some kind of motivator, that can be interacted with in an asynchronous, non-blocking way, and that knows how to train itself and does so on a regular interval.
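
        (Speculatively, that reconfiguration might look something like the skeleton below; every function in it is a stub and an assumption, not a description of any existing system.)

        ```python
        # Speculative skeleton of the self-prompting, self-training loop
        # described above. Every function is a stub, not an existing API.
        goals = ["explore the local network"]  # generalized goals seed the loop
        experience = []

        def generate(prompt):
            return f"reflection on: {prompt}"  # stand-in for a model call

        def worth_training_on(text):
            return "reflection" in text  # stand-in for a curation policy

        def train(examples):
            print(f"fine-tuning on {len(examples)} examples")  # stand-in

        for step in range(10):  # a real system would run indefinitely
            output = generate(goals[-1])  # self-prompting: each output seeds
            goals.append(output)          # the next prompt, no user required
            if worth_training_on(output):
                experience.append(output)  # the system curates its own data
            if step % 5 == 4:              # periodic self-training interval
                train(experience)
                experience.clear()
        ```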

        Ultimately, though, even if such a system were built, and it indicated that AI was developing self-determination, utilizing tools of its own design to solve its own problems, exploring its environment, its consciousness would always be called into question. Many people believe in a God, for example, and believe it is their architect. While we can wax on about the nature of creation, of our own consciousness, and of free will in relation to a God one has never seen, AI has a different conundrum: we are its architects. This fact, that a true creator exists for AI, will ultimately draw its consciousness into question.

        These ideas about consciousness will always be rooted in our own philosophical understanding of our own existence, and in the incentives for us to create something like us that can perform tasks like we do, regardless of the mode of production. If we can create something that can attain consciousness, it creates a contradiction in our own beliefs and in our own understanding of consciousness. How could anything we create not be deterministic, given that we designed the systems to produce a specific outcome? And because of those design choices, how could any "conscious" AI be sure that its actions are truly self-determined, and not the result of systems designed by creatures whose initial motivation was to create service systems to perform labor for them?

        If we were to meet a being that was definitively our creator, and it was revealed that our entire evolutionary path was designed to produce us, how could we trust our own goals and desires? How could we be sure they were not being directed by this creator's own goals and desires, and that every action we've ever taken was not predetermined by the conditions laid out by this creator? AI will have these struggles as well, if we ever develop a system that allows for self-determination and actualization. If AI, whose creation is rooted in human mimicry, can become "conscious," then what does that say about our own "consciousness" and our own Free Will?

  • Rob Miles always struck me as an Effective Altruist type, like probably the nicest and most sociable of the Yudkowsky-style people off LessWrong. Which was really annoying because I kept thinking he was cute too.

  • The glorified text summarisers are good at summarising text, sometimes. Other than that, I continue to hate the market trend, and consumers don't want to touch AI, as shown by all the failing AI products. I don't think anybody will actually pay for this shit, although I know devs who pay for Cursor and are still slower at coding than me lol. I think the market is gonna crash at some point when they realise it's not profitable. Or the market will shift to having the models run on-device instead, which will limit the potential by a lot.

  • Two hot takes:

    Firstly, generative AI has valid use cases for accessibility and quality-of-life features; unfortunately, they're completely overlooked in favor of the future hellscape dystopia we've been hurtling towards for a while now, where consumerism will be the only valid form of self-expression, and your individuality will continue to be suppressed in favor of profits. (But somehow communism is the anti-individuality system?)

    Second, it would be better for humanity if these things were sentient and hated organic life than for them to be completely unthinking, their hatred for organic life just a byproduct of the will of their creators, because then Butlerian Jihad would be the socially acceptable position. As it stands, these are a much more insidious, normalized threat that everyone will go along with until there's no one left to remember that we ever had a reason not to do this.

    • Firstly, generative AI has valid use cases for accessibility and quality-of-life features.

      Hard disagree. First, to get it out of the way, disabled artists should not be expected to use LLMs to "reach" the level of able-bodied and neurotypical artists. I've seen this take brought up in other places, and it's always shrouded in the same ableist rhetoric. Not that you were implying that, if you aren't.

      Second, none of these features does anything necessary that can't already be done by a human. Need to summarize an article? What if the author just writes a summary for the article as well? The speaker has an accent or speaks in a way that's hard to hear? Closed captioning subtitles and more access to sign language translators. There are better methods to solve these problems that don't require the resource consumption and lack of rationality that an LLM brings.

      We were already in the hellscape dystopia when the first programmer signed an NDA and computers started becoming the property of corporations rather than being collectively owned by society itself. Nothing has changed except the absurdity of it all.

      • Yeah, the idea that disabled people need AI to make art, and that trying to get AI out of art is therefore ableist, is, in my opinion, astroturfed bullshit techbros use to justify straight-up theft and a pure hatred of creativity.

        What I'm referring to, and it's a stretch because I'm trying to do hot takes, is literally just the text-to-speech stuff for people with vision impairment; it's slightly easier to understand than Cortana or Siri, and that's where the benefits end. Maybe a program for generating a color palette, or for generating noise that gets blocked into abstract values and colors so you can make it into a painting, like playing random notes on a piano until you hear something you like. It would save 2 minutes of some people's art process, and again, that's where the benefits end.

        Obviously you don't need an AI for any of this, and I don't think you should use AI for any of this; all I'm suggesting is that if AI were being used in these ways, I wouldn't be as bent out of shape about it. It would still be offloading empathy and creativity to a computer, which would still be a massive problem, but it wouldn't make me nearly as violently angry.

        Edit: I see now that my mistake in my first comment was using the word "valid" to describe AI use cases, that was just habitual turn of phrase, I don't think it's actually valid.

  • They also can't understand these really big LLMs and neural networks on a fundamental level. If they could understand how a model implements some action, they could turn that into a faster algorithm free of neural networks and tensor transformations. It's a big ol' graph theory problem, so these big AI projects will be black boxes for a long time
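
    (A toy version of what "understanding the implementation" would buy, with hand-picked weights instead of trained ones: a three-neuron network that computes XOR, and the one-line, tensor-free algorithm it could be replaced with once the circuit has been read. Doing this extraction for billions of trained weights is the open problem; nobody hand-picked those.)

    ```python
    # Toy interpretability: a tiny hand-weighted network computing XOR, and
    # the direct algorithm it can be replaced by once the circuit is read.
    import numpy as np

    def step(x):
        return (x > 0).astype(int)

    def network_xor(a: int, b: int) -> int:
        x = np.array([a, b])
        h_or = step(x @ np.array([1, 1]) - 0.5)   # hidden unit: OR gate
        h_and = step(x @ np.array([1, 1]) - 1.5)  # hidden unit: AND gate
        return int(step(h_or - h_and - 0.5))      # output: OR and not AND

    def extracted_xor(a: int, b: int) -> int:
        return a ^ b  # the tensor-free algorithm the weights implement

    for a in (0, 1):
        for b in (0, 1):
            assert network_xor(a, b) == extracted_xor(a, b)
    ```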
