100 comments
  • This is terrible. I'm going to set aside the privacy issues, since those have already been brought up here, and highlight another major problem: it's going to get people hurt.

    A few weeks ago I wrapped up a month-long deep dive into gen AI.

    It taught me that gen AI is genuinely brilliant at certain things. One of them is learning what you want and making you believe it's giving you exactly that; in a sense it's incredibly manipulative, and it's one of the things gen AI does best. As you interact with it within the same context window, it quickly picks up on who you are, then subtly tailors its responses to you.

    I also noticed that as gen AI's context grew, it became less "objective". This makes sense since gen AI is likely tailoring the responses for me specifically. However, when this happens, the responses also end up being wrong more often. This also tracks, since correct answers are usually objective.

    If people start to use gen AI for therapy, it's very likely they will converse within one long-running context window. They will also likely ask gen AI for advice (or gen AI may even offer advice unprompted, because it loves doing that). This is where things can go really wrong.

    Gen AI cannot "think" of a solution, evaluate its downsides, and then offer it to you, because gen AI can't "think", period. What it will do is offer you things that sound like solutions and reasons. And because gen AI is so good at understanding who you are and what you want, it will frame those solutions and reasons in a way that appeals to you. On top of all this, because of the long-running context window, it's very likely the advice it gives will be bad advice. To someone in a vulnerable and emotional state, that advice may seem reasonable, good even.

    If people then act on this advice, the consequences can be disastrous. I've read enough horror stories about this.

    Anyway, I think therapy might be one of the worst uses for gen AI.

  • You must know what you're doing, and most people don't. It is a tool; it's up to you how you use it. Many people unfortunately use it as an echo chamber or a form of escapism, believing nonsense and make-believe that isn't based on any science or empirical data.

  • There are ways LLMs can be used to better one's life (apparently in some software dev circles they can be and are used to make workflows more efficient), and this can be one of them, because the part that sucks most about therapy (after the whole monetary thing) is trying to find the form of therapy that works for you, and finding a therapist you can work with. Every human is different, and that includes both the patient and the therapist, and not everyone can just start working together right off the bat. Not to mention how long it takes for a new therapist to actually get to know you well enough to improve the odds of the cooperation working.

    Obviously I'm not saying "replace all therapists with AIs controlled by racist capitalist pigs with ulterior motives", but I have witnessed people in my own life get some immediate help from a fucking chatbot, which is kinda ridiculous. So in times of distress (say a borderline having such a bad anxiety attack that they can't calm themselves because they don't know how to break the vicious cycle of thought and emotional response), and for immediate help, a well-developed, non-capitalist LLM might be invaluable, especially if an actual human can't be reached because, for example (in this case), the borderline lives in a remote area and it is the middle of the night, which from personal experience I can tell you it very often is.

    And though not every mental health emergency requires first responders on the scene or even a trip to the hospital, there is still a possibility of both being needed eventually. So a chatbot with access to necessary information in general (like techniques for self-soothing, e.g. breathing exercises and so forth), possibly even personal information (like diagnostic and medication history, though this would raise more privacy concerns to be assessed), and the capability to parse and convey it all in a non-belittling way (as some doctors and nurses can be real fucking assholes at times) could possibly save lives.

    So the problem here is capitalism, surprising no-one.

    • You're missing the most important point here; quoting:

      A human therapist might not or is less likely to share any personal details about your conversations with anyone. An AI therapist will collate, collect, catalog, store and share every single personal detail about you with the company that owns the AI and share and sell all your data to the highest bidder.

      Plus, an AI cannot really have your best interest at heart, and these sorts of things open up a whole slew of very dystopian scenarios.

      OK, you said "capitalism" but that's way too broad.

      Also I find the example of a "mental health emergency" (as in, right now, not tonight or tomorrow) in a remote area, presumably with nobody else around to help, a bit contrived. But OK, in such extremely rare cases - presuming broadband internet still works, and the person in question is savvy enough to use the chatbot - it might be better than nothing.

      • But if you are facing mental health issues and a free or inexpensive AI that is readily available and doesn't burden your friends actually helps you, do you really care about your information being collected and profited from?

        Put it this way: if Google were being super transparent with you and said, "we'll help treat you, and in exchange we'll use your info to make a few thousand dollars", would you, the individual, say, "no thanks, I'd rather pay a few hundred per therapy session instead"?

        Even if you hate it, you have to admit it's hard to say no. Especially if it works.

      • Yeah, well, that's just, like, your opinion, man. And if you remove the very concept of capital gain from your "important point", I think you'll find your point to be moot.

        I'm also going to assume you haven't been in such a situation as I described with the whole mental health emergency? Because I have. At best I went to the emergency room and calmed down before ever seeing a doctor, and at worst I was committed to inpatient care (or "the ward", as it's also known) before I calmed down, taking resources from the treatment of people who weren't as unstable as I was, a problem which could've been solved with a chatbot. And I can assure you there are people who live outside the major metropolitan areas of North America; it isn't an extremely rare case as you claim.

        Anyway, my point stands.

      • You don't actually know what you're talking about, but like many others in here you put this over-the-top anti-AI current-thing sentiment above everything, including simple awareness that you don't know anything. You clearly haven't interacted with many therapists and medical professionals in general as a non-patient if you think they're guaranteed to respect privacy. They're supposed to, but off the record and among friends plenty of them yap about everything. They're often obligated to report patients in cases of self-harm etc., which can get them involuntarily sectioned, and the patients may face repercussions from that for years: job loss, healthcare costs, homelessness, legal restrictions, stigma, etc.

        There's nothing contrived or extremely rare about mental health emergencies, and they don't need to be "emergencies" the way you understand it, because many people are undiagnosed or misdiagnosed for years, with very high symptom severity, episodes lasting for months, and chronically barely coping. Someone may be in a big city and it won't change a thing; hospitals and doctors don't have magic pills that automatically cure mental illness, and that's assuming patients have insight (not necessarily present during episodes of many disorders) or awareness that they have some mental illness and aren't just sad etc. (because mental health awareness is in the gutter; example: your pretentious incredulity here). That's also assuming they have friends available, or that they even feel comfortable enough to talk about what bothers them with people they're acquainted with.

        Some LLM may actually end up convincing them, or informing them, that they do have medical issues that need to be seen as such. Suicidal ideation may be present for years, but active suicidal intent (the state in which people actually do it) rarely lasts more than 30 minutes or a few hours at worst, and it's highly impulsive in nature. Wtf would you or "friends" do in this case? Do you know any techniques to calm people down during episodes? Even unspecialized LLMs have latent knowledge of these things, so there's a good chance they'll end up getting life-saving advice, as opposed to just doing it, or interacting with humans who default to interpreting it as "attention seeking" and becoming even more convinced that they should go ahead with it because nobody cares.

        This holier-than-thou anti-AI bs had some point when it was about VLMs training on scraped art, but some of you echo chamber critters turned it into some imaginary high moral prerogative that even turns off your empathy for anyone using AI, even in use cases where it may save lives. It's some terminally online "morality" where supposedly "there is no excuse for the sin of using AI", just echo-chamber-boosted reddit brainworms, and it's fully performative unless all of you use fully ethical cobalt-free smartphones so you're not implicitly gaining convenience from the six million victims of the Congo cobalt wars so far, never use any services on AWS, and magically avoid all megadatacenters etc. Touch grass jfc.
