'Every voter should be made aware': Governor sounds alarm on new Trump threat
  • Oakland is a great city, and doesn't deserve all the slander it gets. It's just the far right spreading fear and hatred by attacking anyone who doesn't share their politics or skin color.

    Reminds me of Charlie Kirk recently talking to a Berkeley grad and interrupting her to say "Berkeley is a slum, it's a hellhole" or something to that effect. It's so comically stupid, but as a Berkeley resident for 20+ years, I hope he keeps it up because it (hopefully) scares away other hateful idiots like him.

  • Biden: What's happening in Gaza is not genocide
  • In "safe" states like California, where Trump will never win, we can vote third party as a protest vote without worrying that we'll help Trump get elected.

    In states with a very thin margin ("swing" states), fewer votes for Biden could very well mean Trump winning that state.

  • Uber's new shuttle service sounds a lot like a bus route
  • If you click the article link and then use a process called "reading", you'll see:

    The company has already launched similar services abroad in Egypt, Nigeria, and India. Now it’s bringing the concept to the United States.

    Edit: I misunderstood and assumed he hadn't read the article, which is entirely too common these days.

  • We have to stop ignoring AI’s hallucination problem
  • Most human training is done through the guidance of another

    Let's take a step back and not talk about training at all, but about spontaneous learning. Babies learn about the world around them by experiencing things with their senses. They learn a language, for example, simply by hearing it and making connections - getting corrected when they're wrong, yes, but they are not trained in language until they've already learned to speak it. And once they're taught how to read, they can then explore the world through signs, books, the internet, etc. in a way that is often self-directed. More than that, humans are learning at every moment as they interact with the world around them and with the written word.

    An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.

    you can in fact teach it something and it will maintain it during the session

    It's still not learning anything. LLMs have what's known as a context window that is used to augment the model for a given session. It's still just text that is used as part of the response process.
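
    To make that concrete, here's a toy sketch of how session "memory" works - it's just accumulated text fed back in, with the model itself frozen. The `generate` function here is a hypothetical stand-in for a real model's text completion, not anyone's actual API:

    ```python
    # Toy sketch: a chat session "remembers" only because prior text is
    # re-fed as context. The model's weights never change.
    # `generate` is a hypothetical stand-in for a frozen model.

    def generate(context: str) -> str:
        # A real model predicts likely next words from the context;
        # this stub just acknowledges what it was shown.
        return f"[reply based on {len(context)} chars of context]"

    def chat_session(messages):
        context = ""  # the context window starts empty every session
        replies = []
        for msg in messages:
            context += f"User: {msg}\n"   # anything you "teach" it is just appended here
            reply = generate(context)     # no learning happens in this call
            context += f"Assistant: {reply}\n"
            replies.append(reply)
        return replies
    ```

    Start a new session and `context` is empty again - everything you "taught" it is gone, because nothing was ever written back into the model.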

    They don’t think or understand in any way, full stop.

    I just gave you an example where this appears to be untrue. There is something that looks like understanding going on.

    You seem to have ignored the preceding sentence: "LLMs are sophisticated word generators." This is the crux of the matter. They simply do not think, much less understand. They are simply taking the text of your prompts (and the text from the context window) and generating more text that is likely to be relevant. Sentences are generated word-by-word using complex math (heavy on linear algebra and probability) where the generation of each new word takes into account everything that came before it, including the previous words in the sentence it's a part of. There is no thinking or understanding whatsoever.
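
    The word-by-word generation described above can be sketched with a toy bigram table, where each step just picks the most probable continuation of the previous word. The probabilities here are invented for illustration; a real LLM computes them with a neural network over the entire preceding context, but the principle - generate the likely next word, no understanding required - is the same:

    ```python
    # Toy next-word generator. The probability tables are made up;
    # a real LLM derives them from billions of parameters, but each
    # step is still "emit the likely next word given what came before".
    probs = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "sat": {"down": 0.9, "up": 0.1},
    }

    def generate(start: str, n: int) -> list[str]:
        words = [start]
        for _ in range(n):
            nxt = probs.get(words[-1])
            if not nxt:
                break  # no known continuation
            # greedy choice: the single most likely next word
            words.append(max(nxt, key=nxt.get))
        return words

    # generate("the", 3) -> ["the", "cat", "sat", "down"]
    ```

    Note that nothing in this process checks whether the output is *true* - which is exactly why hallucination isn't a bug to be patched out, but the mechanism itself.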

    This is why Voroxpete@sh.itjust.works said in the original post to this thread, "They hallucinate all answers. Some of those answers will happen to be right." LLMs have no way of knowing if any of the text they generate is accurate for the simple fact that they don't know anything at all. They have no capacity for knowledge, understanding, thought, or reasoning. Their models are simply complex networks of words that are able to generate more words, usually in a way that is useful to us. But often, as the hallucination problem shows, in ways that are completely useless and even harmful.

  • We have to stop ignoring AI’s hallucination problem
  • the argument that they can’t learn doesn’t make sense because models have definitely become better.

    They have to be retrained with new data, or their internal structure has to be improved. Either way, it's an offline process, meaning they don't learn through the chat sessions we have with them (open a new session and it will have forgotten what you told it in the previous one), and they can't learn through any kind of self-directed research process the way a human can.

    all of your shortcomings you’ve listed humans are guilty of too.

    LLMs are sophisticated word generators. They don't think or understand in any way, full stop. This is really important to understand about them.

  • Google is redesigning its search engine — and it’s AI all the way down
  • It most likely will be better initially, if for no other reason than they need to strongly differentiate themselves from Google (and Bing and DDG). I'm just not very optimistic for the long-term outlook in these times of "profit uber alles". I'd love to be wrong.

  • Google is redesigning its search engine — and it’s AI all the way down
  • It's no surprise that "free" search funded through advertising led to this. The economic incentives were always going to lead us to the pay-to-win enshittification that we see today.

    Paid search might look better initially, but a for-profit model will eventually lead to the same results. It might manifest differently, maybe through backroom deals they never talk about, but you'd better believe there will always be more profit to be made through such deals than through subscription fees.

  • Students walk out during Jerry Seinfeld’s commencement speech at Duke
  • I'm so far left that young leftists think I'm too extreme

    Sure you are, champ.

    they call me right wing for disagreeing with every tiny thing they say

    You're so close to a moment of self-awareness, and yet so far.

  • Constant warnings for what seems like ordinary activity
  • Yes, and if a duplicate does arrive (as appears to be happening), the current code doesn't handle the corresponding database error, resulting in a scary multi-line warning for something that could be safely ignored. A new Lemmy administrator (like me) has no way of knowing that this is at best an info-level event, or even just a debug-level one, since it has no real effect on anything.
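
    The shape of the fix I have in mind is roughly this: treat the unique-constraint violation as an expected, low-severity event instead of letting it bubble up as a warning. The real code is Rust against Postgres, so this is just a sketch of the pattern using Python's sqlite3:

    ```python
    # Sketch of idempotent activity insertion: a duplicate delivery from a
    # remote instance is expected, so log it at debug level instead of
    # letting the constraint violation escape as a warning.
    import logging
    import sqlite3

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger("apub")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE received_activity (ap_id TEXT PRIMARY KEY)")

    def insert_received_activity(ap_id: str) -> bool:
        """Record an activity id; returns False for an already-seen duplicate."""
        try:
            conn.execute("INSERT INTO received_activity (ap_id) VALUES (?)", (ap_id,))
            return True
        except sqlite3.IntegrityError:
            # Duplicate delivery: harmless, not worth a WARN.
            log.debug("duplicate activity ignored: %s", ap_id)
            return False
    ```

    (In Postgres the same idea can be expressed with `ON CONFLICT DO NOTHING`, which avoids the error path entirely.)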

  • Constant warnings for what seems like ordinary activity
  • Cool, I've been meaning to check out ngrok sometime. Looks really useful.

    I don't think there's a way to filter out the problem since it appears to be an automatic warning due to an uncaught error. I have some ideas on a code fix now, and may submit a PR for it in the near future.

  • Constant warnings for what seems like ordinary activity
  • It's definitely happening when I'm getting updates from lemmy.world, and while I don't know how to get at the HTTP details you're showing in your video, I do see a lot of 400s in the nginx log from Docker:

    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
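
    For anyone wanting to quantify this, here's a quick way to tally status codes out of access-log lines like the ones above (this assumes the status code is the first field after the quoted request string, as in the snippet):

    ```python
    # Tally HTTP status codes from nginx access-log lines (combined format
    # assumed: the status code follows the quoted "METHOD /path PROTO" string).
    from collections import Counter

    def status_counts(log_lines):
        counts = Counter()
        for line in log_lines:
            try:
                after_request = line.split('" ', 1)[1]  # text after the request string
                counts[after_request.split()[0]] += 1
            except IndexError:
                continue  # not an access-log line
        return counts
    ```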
    
  • Lemmy Support @lemmy.ml insaan @leftopia.org
    Constant warnings for what seems like ordinary activity

    I set up a new instance a week or two ago and have some subscriptions to communities on lemmy.world. The logs for the Lemmy process currently show a constant flood of warnings for what looks like ordinary activity (likes, creates, undos, etc.). Here are two recent entries:

    lemmy-1 | 2024-05-13T21:41:48.243217Z WARN lemmy_server::root_span_builder: Unknown:
    lemmy-1 |    0: lemmy_apub::insert_received_activity
    lemmy-1 |         with ap_id=Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("lemmy.world")), port: None, path: "/activities/announce/like/3a96d6df-3229-4e5a-a6d1-ce2f6f3ca3d5", query: None, fragment: None }
    lemmy-1 |       at crates/apub/src/lib.rs:198
    lemmy-1 |    1: lemmy_apub::activities::community::announce::receive
    lemmy-1 |       at crates/apub/src/activities/community/announce.rs:153
    lemmy-1 |    2: lemmy_server::root_span_builder::HTTP request
    lemmy-1 |         with http.method=POST http.scheme="http" http.host=leftopia.org http.target=/inbox otel.kind="server" request_id=43212a7e-1c3c-4b6a-84d6-4cd6082af392
    lemmy-1 |       at src/root_span_builder.rs:16
    lemmy-1 | 2024-05-13T21:41:48.356594Z WARN lemmy_server::root_span_builder: Unknown:
    lemmy-1 |    0: lemmy_apub::insert_received_activity
    lemmy-1 |         with ap_id=Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("lemmy.world")), port: None, path: "/activities/announce/like/9f70cd5b-175a-4ac3-a852-322494f01981", query: None, fragment: None }
    lemmy-1 |       at crates/apub/src/lib.rs:198
    lemmy-1 |    1: lemmy_apub::activities::community::announce::receive
    lemmy-1 |       at crates/apub/src/activities/community/announce.rs:153
    lemmy-1 |    2: lemmy_server::root_span_builder::HTTP request
    lemmy-1 |         with http.method=POST http.scheme="http" http.host=leftopia.org http.target=/inbox otel.kind="server" request_id=050271b4-2081-4264-936a-329d70374239
    lemmy-1 |       at src/root_span_builder.rs:16

    Tailing the Docker logs is like watching a torrent of these warnings, and I'm worried about the stress it's adding to my fairly meager VM. Any tips on how to debug this? I'm proficient with Rust, so I'm willing and able to jump into the code if it'll help.

    Lemmy Support @lemmy.ml insaan @leftopia.org
    Federation questions on a new server

    I set up a new server a couple of days ago using the Ansible playbook. Some communities are federating successfully even though they're labeled "subscribe pending", while others are not populating even though they're labeled "joined". See attached image.

    Q1: What does "subscribe pending" mean? The Memes and Politics communities shown in the image were setup a couple of days ago, but the content is flowing despite them still saying "subscribe pending".

    Q2: Why would a community be marked "joined" without the content flowing? I just subscribed to Lemmy Support about 15 minutes ago, and content doesn't seem to be flowing yet despite the status having quickly moved to "joined".

    Q3: How can I check the status of things like this either in the database or in the logs? Looking for some general troubleshooting tips, since the logs for the lemmy Docker container are a bit spammy.

    Here's something I noticed in the logs right after I initiated the subscription to lemmy_support:

    lemmy-1 | 2024-05-06T22:28:43.257408Z WARN lemmy_utils: error in spawn: Unknown: Failed to parse object https://lemmy.ml/c/lemmy_support/outbox with content {"error":"unknown","message":"Record not found"}: missing field `type` at line 1 column 48

    I believe I saw this same issue when I first subscribed to Memes and Politics the other day. What's the best way to debug this?
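
    For what it's worth, the parse failure in that log line makes sense on its face: the outbox fetch got back an error body (`{"error":"unknown","message":"Record not found"}`) instead of an ActivityPub object, and the deserializer requires a `type` field. Here's a minimal sketch of the mismatch - `parse_ap_object` is a hypothetical stand-in, not Lemmy's actual parser:

    ```python
    # Sketch: an ActivityPub outbox should deserialize to an object with a
    # `type` field (e.g. "OrderedCollection"); an error payload has none,
    # so parsing fails exactly as the log reports.
    import json

    # What the outbox fetch actually returned, per the log line above:
    body = '{"error":"unknown","message":"Record not found"}'

    def parse_ap_object(raw: str) -> dict:
        """Hypothetical stand-in for an ActivityPub deserializer."""
        obj = json.loads(raw)
        if "type" not in obj:
            raise ValueError("missing field `type`")
        return obj
    ```

    So the real question is probably why lemmy.ml answered "Record not found" for the outbox in the first place, not the parsing itself.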

    Thanks in advance everyone, and thanks for all the amazing work.
