How can I use a local LLM on Linux to generate a long story?

I'm interested in automatically generating lengthy, coherent stories of 10,000+ words from a single prompt using an open source local large language model (LLM) on low-spec hardware: a laptop without a GPU, with an i5-8250U and 16 GB of DDR4-2400 RAM. I came across the "Awesome-Story-Generation" repository, which lists papers describing promising methods such as "Re3: Generating Longer Stories With Recursive Reprompting and Revision", announced in this Twitter thread from October 2022, and "DOC: Improving Long Story Coherence With Detailed Outline Control", announced in this Twitter thread from December 2022. However, these papers used GPT-3, and I was hoping to find similar techniques implemented with open source tools that I could run locally. If anyone has experience or knows of resources that could help me achieve long, coherent story generation with an open source LLM on low-spec hardware, I would greatly appreciate any advice or guidance.
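
To make my goal concrete, here is the rough shape of what I imagine running, based on my reading of the outline-then-expand idea from DOC and the rolling-summary reprompting from Re3. This is only a minimal sketch assuming llama-cpp-python and a locally downloaded GGUF model; the model filename, prompts, and parameters are placeholders I made up, not anything taken from the papers:

```python
# Sketch: outline-then-expand story generation with a local GGUF model via llama-cpp-python.
# The model path, prompt wording, and token limits are placeholders to adjust for your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # any local quantized GGUF model
    n_ctx=4096,   # context window; larger values need more RAM
    n_threads=8,  # i5-8250U has 4 cores / 8 threads
)

def generate(prompt, max_tokens=800):
    out = llm(prompt, max_tokens=max_tokens, temperature=0.8)
    return out["choices"][0]["text"].strip()

premise = "A lighthouse keeper discovers the light is keeping something out, not guiding ships in."

# Step 1: ask for a chapter-level outline (rough DOC-style planning).
outline = generate(
    f"Write a numbered outline of 10 chapters for a novel with this premise:\n{premise}\n\nOutline:"
)
chapters = [line for line in outline.splitlines() if line.strip()]

# Step 2: expand each outline item, re-prompting with a rolling summary
# so later chapters stay consistent with earlier ones (rough Re3-style reprompting).
story, summary = [], "The story has not started yet."
for item in chapters:
    chapter_text = generate(
        f"Premise: {premise}\n"
        f"Summary of the story so far: {summary}\n"
        f"Write the next chapter based on this outline item: {item}\n\nChapter:",
        max_tokens=1200,
    )
    story.append(chapter_text)
    # Re-summarize only the most recent text so the prompt stays inside the context window.
    summary = generate(
        f"Summarize this story so far in under 200 words:\n{' '.join(story)[-6000:]}\n\nSummary:",
        max_tokens=250,
    )

print("\n\n".join(story))
```

I assume a quantized 7B model on this CPU will be slow, so a full 10,000-word run would take a long time; I'm mainly asking whether people have found prompt structures or existing open source projects that keep the output coherent over that length.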


I just watched "Fun with AI by Sheldon Cooper" on YouTube, and I think that's just a preview of what's coming. I think it will end up much like social media, where anything you say has already been said dozens of times before, but for video: anything you want to watch will exist in many different but similar spins. I think there will be more AI-generated movies riffing on existing popular ones than there are fanfics written today. What do you think it will be like?

ChasingEnigma @lemmy.world