Why can't people make AIs by making a neuron sim and then scaling it up with a supercomputer to the point where it has a human's number of neurons and then raise it like a human?
  • What you're alluding to is the Turing test, and it hasn't been proven that any LLM would pass it. At this moment, there are people who have failed the inverse Turing test: they were unable to ascertain whether what they were speaking to was a machine or a human. The latter can be done and has been done by things less complex than LLMs, and it isn't proof of an LLM's capabilities over more rudimentary chatbots.

    You're also suggesting that the model is holding back the complexity of its outputs. My determination is that what we're getting is the limit of what it can achieve. You'd have to prove that any allusion to higher intelligence can't be attributed to coercion by the user, or to the model hallucinating an imitation of artificial intelligence from media.

    There are elements of the model that are very fascinating, like how it organises language into these contextual buckets, but this is still a predictive model. Understanding that certain words appear near each other in certain contexts is hardly intelligence; it's a sophisticated machine learning algorithm.
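    The "words appearing near each other" idea above can be sketched with a toy co-occurrence counter. This is a minimal illustration, not how any real LLM is implemented (modern models learn dense embeddings and attention weights, not raw counts); the function name and window size are my own choices for the example.

    ```python
    from collections import defaultdict

    def cooccurrence_counts(text, window=2):
        """Count how often each ordered word pair appears within
        `window` words of each other. Pure counting, no understanding."""
        words = text.lower().split()
        counts = defaultdict(int)
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                counts[(w, words[j])] += 1
        return dict(counts)

    corpus = "the cat sat on the mat the cat ate the fish"
    counts = cooccurrence_counts(corpus)
    # ("the", "cat") shows up repeatedly, so a statistical model can learn
    # that "cat" tends to follow "the" without understanding either word.
    ```

    The point of the sketch: the statistics fall out of the text alone, which is why proximity patterns by themselves aren't evidence of comprehension.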

  • I mainly disagree with the final statement, on the basis that LLMs are more advanced predictive text algorithms. The way they've been set up, with a chatbox where you're interacting directly with something that attempts human-like responses, gives off the misconception that the thing you're talking to is more intelligent than it actually is. It gives a strong appearance of intelligence, but at the end of the day it predicts the next word in a sentence based on what was said previously, and it doesn't do a good job of comprehending what exactly it's telling you. It's very confident when it gives responses, which also means that when it's wrong, it's very confidently delivering the incorrect response.
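    The "predicts the next word based on what was said previously" claim can be sketched as a toy bigram model. This is a deliberately simplified stand-in, assuming nothing about real LLM internals (which condition on long contexts with neural networks, not word-pair counts); the names here are invented for the example.

    ```python
    from collections import Counter, defaultdict

    def train_bigrams(text):
        """Map each word to a frequency count of the words that follow it."""
        words = text.lower().split()
        model = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
        return model

    def predict_next(model, word):
        """Return the single most frequent follower, delivered with total
        'confidence' even when the training data is far too thin for it."""
        followers = model.get(word)
        return followers.most_common(1)[0][0] if followers else None

    model = train_bigrams("the cat sat on the mat and the cat slept")
    predict_next(model, "the")  # picks the most common follower of "the"
    ```

    Note the design point that mirrors the comment: `predict_next` always emits its top guess with no uncertainty attached, which is the toy version of an LLM confidently delivering a wrong answer.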

  • Nearly 75% of journalists killed in 2023 died in Israel’s war on Gaza: CPJ
  • You can see my post history if you want to satisfy yourself that I don't just copy-paste responses. I like to tailor my answer depending on how much of an asshole the person I'm replying to is.

    Someone else already summarised my point, but I'll make it clear myself. While Netanyahu believes it's favourable to classify anything Jewish as being related to Israel, the inverse is what you're seeing play out.

    Attacking Israel means you're attacking the Jewish faith; therefore, attacking members of the Jewish faith means you're attacking Israel. This isn't a position I hold; this is the situation Israel has placed Jews around the world in as a result of muddying the waters. Israel is perfectly willing to manipulate the horror of the Holocaust to get allies to support its violence against Palestinians.

  • Israel are the ones who made being Jewish synonymous with being Israeli. So now if you talk against Israel, you're being antisemitic. If you disagree with their conduct, you're being antisemitic. All they've done is muddy the waters so that criticism of their vile actions somehow means you're denying the Holocaust. How many times has Netanyahu brought up 7th Oct as justification for the actions they've committed against Palestinians? "Why should we stop bombing Gaza? Do you not remember 7th Oct?"

    Phanatik @kbin.social
    Posts 1
    Comments 399