  • Self hosting an LLM for research
  • It's less about the calculations and more about memory bandwidth. To generate a token you need to read through all the model data, and that's usually many gigabytes. So the time it takes to stream the model through memory is usually longer than the compute time. GPUs have gigabytes of RAM that's many times faster than the CPU's RAM, which is the main reason they're faster for LLMs (rough numbers sketched below).

    Most TPUs don't have much RAM, especially the cheap ones.
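
    A rough back-of-the-envelope sketch of that bandwidth ceiling, in Python. The bandwidth and model-size numbers are assumptions for illustration, not measurements:

    ```python
    # If generation is memory-bandwidth bound, every token requires streaming
    # all model weights through memory once, so bandwidth / model size gives
    # a rough upper bound on tokens per second.

    def max_tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
        """Upper bound on generation speed, ignoring compute and overhead."""
        return bandwidth_gb_s / model_size_gb

    model_gb = 4.0  # e.g. a 7B model at ~4-bit quantization (assumed size)

    print(max_tokens_per_second(model_gb, 50.0))   # dual-channel DDR4 CPU RAM: ~12 tok/s
    print(max_tokens_per_second(model_gb, 500.0))  # midrange GPU VRAM: ~125 tok/s
    ```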

  • Self hoating an LLM for research
  • Reasonably smart... that would preferably be a 70B model, but maybe Phi-3 14B or Llama 3 8B could work. They're rather impressive for their size.

    For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B model you need roughly 40 GB.

    And then there's the context. Most models are optimized for around 4k to 8k tokens. A token is roughly 3-4 characters, so a word is usually a bit more than one token. The VRAM needed for the context varies a bit, but it's not trivial. For 4k I'd say roughly half a gig to a gig of VRAM.

    As you go to higher context sizes, the VRAM requirement for the context starts to eclipse the model's VRAM cost, and you'll need specialized models to handle a context that big without going off the rails (there's a rough VRAM estimator sketched after this comment).

    So no, you're not loading all the notes directly, and you won't have a smart model.

    For your hardware and use case... try phi3-mini with a RAG system as a start; there's a minimal sketch of both the VRAM math and the RAG setup below.
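
    To make the VRAM arithmetic above concrete, here's a rough estimator. The formula (weights ≈ params × bytes per param; KV cache ≈ 2 × layers × KV heads × head dim × context length × bytes per value) is a standard approximation, and the Llama 3 8B-style architecture numbers are assumptions for illustration:

    ```python
    def weights_gb(params_billions: float, bytes_per_param: float) -> float:
        """Approximate VRAM for the model weights alone."""
        return params_billions * bytes_per_param  # billions of params * bytes ~= GB

    def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                    context_len: int, bytes_per_value: int = 2) -> float:
        """Approximate VRAM for the context's KV cache (keys + values, hence the 2x)."""
        return 2 * layers * kv_heads * head_dim * context_len * bytes_per_value / 1e9

    # Assumed Llama 3 8B-style architecture: 32 layers, 8 KV heads (GQA),
    # head dim 128, fp16 cache values.
    print(weights_gb(8, 0.55))              # ~4.4 GB at ~4.5 bits per weight
    print(kv_cache_gb(32, 8, 128, 4096))    # ~0.5 GB at 4k context
    print(kv_cache_gb(32, 8, 128, 131072))  # ~17 GB at 128k -- eclipses the weights
    ```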
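
    And a minimal sketch of that RAG setup, assuming a local Ollama server with phi3 and an embedding model already pulled (the model names and the notes are placeholders):

    ```python
    import ollama  # assumes the Ollama Python client and a running local server

    def embed(text: str) -> list[float]:
        return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norms = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norms

    notes = ["placeholder note 1", "placeholder note 2"]  # your notes, pre-chunked
    index = [(note, embed(note)) for note in notes]

    def ask(question: str, top_k: int = 3) -> str:
        q = embed(question)
        # Retrieve only the most relevant chunks instead of loading every note
        # into the (small) context window.
        best = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)[:top_k]
        context = "\n\n".join(note for note, _ in best)
        reply = ollama.chat(model="phi3", messages=[{
            "role": "user",
            "content": f"Using these notes:\n{context}\n\nAnswer this: {question}",
        }])
        return reply["message"]["content"]

    print(ask("What did I write about memory bandwidth?"))
    ```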

  • Are right wingers creating FUD around Signal?
  • I'm not saying it's broken, but it has some design choices and features that make even WhatsApp a better choice for privacy-minded people. Like rolling their own crypto and not having E2EE as default.

  • ByteDance won't sell TikTok, would rather pull it from the US
  • > You realise there is no algorithm behind Lemmy, right?

    Of course there is. Even "sort by newest" is an algorithm, and the default view is more complicated than that.

    > You aren't being shoved controversial polarizing content subliminally here.

    Neither are you on TikTok, unless you actively go looking for it.

  • 4chan daily challenge sparked deluge of explicit AI Taylor Swift images
  • Hah, as if. In the early 00s the mods were in maybe once or twice a day, and there was tons of CP being posted.

    The worst I saw was a little girl chopped into pieces, and a many-page discussion/argument over whether it should be sorted as CP or necro porn. That was the old 4chan.

  • Discussions related to Infosec.pub
  • Having some problems with subscription to !localllama@sh.itjust.works

    I subscribed to !localllama@sh.itjust.works and while the posts show up, all of them show 0 comments.

    If I go to https://sh.itjust.works/c/localllama it shows all posts having comments.
