The ComfyUI workflow is also embedded in this picture; you will have to install several custom extensions to make it work:
Generator: ComfyUI
Positive Prompt: large format photo of (Masterpiece, best quality:1.2) A dancing ninja girl (wearing a wooden demon mask on the head:1.1) intricate ornamental carvings, medieval background, full body, cinematic lighting, Kodak Funsaver, Kodak Vision3, 50mm
Negative Prompt: bad anatomy, bad proportions, blurry, cloned face, deformed, disfigured, duplicate, extra arms, extra fingers, extra limbs, extra legs, fused fingers, gross proportions, long neck, malformed limbs, missing arms, missing legs, mutated hands, mutation, mutilated, morbid, out of frame, poorly drawn hands, poorly drawn face, too many fingers, ugly, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, out of frame, ugly, extra limbs, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck
Negative Embedding: FastNegativeV2
Sampler: Euler
Scheduler: normal
CFG: 6
Model: lyriel_v14
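If you prefer scripting over the node graph, the same settings map roughly onto a diffusers call. This is only a sketch under assumptions: the checkpoint and embedding paths are placeholders, and plain diffusers ignores the `(…:1.2)` weight syntax unless you add a prompt-weighting helper.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

# Placeholder paths: point these at your local lyriel_v14 and FastNegativeV2 files.
pipe = StableDiffusionPipeline.from_single_file(
    "lyriel_v14.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)  # Euler sampler, "normal" schedule
pipe.load_textual_inversion("FastNegativeV2.pt", token="FastNegativeV2")    # negative embedding

image = pipe(
    prompt="large format photo of (Masterpiece, best quality:1.2) A dancing ninja girl "
           "(wearing a wooden demon mask on the head:1.1) intricate ornamental carvings, "
           "medieval background, full body, cinematic lighting, Kodak Funsaver, Kodak Vision3, 50mm",
    negative_prompt="FastNegativeV2, bad anatomy, bad proportions, blurry, deformed, extra limbs",
    guidance_scale=6.0,  # CFG 6
    width=512,
    height=512,
).images[0]
image.save("test_frame.png")
```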
This is quite an interesting workflow, as you can generate relatively long animations.
You take the motions of your character from a video. For this one I googled "dancing girl" and took one of the first results I found:
You can pull single frames from the video into ComfyUI. For this one I skipped the first 500 frames and took 150 frames to generate the animation. The frames are scaled to a resolution of 512x512. This gives me an initial set of pictures to work with:
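If you want to reproduce this step outside the graph, a plain OpenCV loop does the same skip-500-take-150-resize job (the filename is just a placeholder):

```python
import cv2

def extract_frames(video_path, skip=500, count=150, size=(512, 512)):
    """Skip the first `skip` frames, then grab `count` frames resized to `size`."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while len(frames) < count:
        ok, frame = cap.read()
        if not ok:  # ran out of video
            break
        if index >= skip:
            frames.append(cv2.resize(frame, size))
        index += 1
    cap.release()
    return frames

frames = extract_frames("dancing_girl.mp4")  # hypothetical filename
print(len(frames), "frames extracted")
```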
Via the OpenPose preprocessor you can get the pose for every single image:
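Outside ComfyUI, the controlnet_aux package exposes the same OpenPose preprocessor. A rough equivalent, assuming `frames` comes from the extraction sketch above and using the annotator repo that controlnet_aux documents:

```python
import cv2
from PIL import Image
from controlnet_aux import OpenposeDetector

openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

pose_images = []
for frame in frames:  # BGR frames from the extraction step above
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    pose_images.append(openpose(Image.fromarray(rgb)))  # one skeleton image per frame
```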
This can be fed to the OpenPose ControlNet to get the correct pose for every frame of the animation. Now we have the following problem: we are all set with the poses, but we also need a set of latent images to go through the KSampler. The solution is to generate a single 512x512 latent image and blend it with every VAE-encoded frame of the video to get an empty latent image for every frame:
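A rough sketch of what that blend does, using the diffusers VAE (the VAE repo ID and the blend factor are assumptions; in ComfyUI the Latent Blend node exposes the factor as a setting):

```python
import cv2
import numpy as np
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")

# frames: list of 512x512 BGR uint8 images from the extraction step above
# (encode in smaller batches if this runs out of VRAM)
rgb = np.stack([cv2.cvtColor(f, cv2.COLOR_BGR2RGB) for f in frames])
pixels = torch.from_numpy(rgb).permute(0, 3, 1, 2).half().to("cuda") / 127.5 - 1.0

with torch.no_grad():
    encoded = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor  # (N, 4, 64, 64)

base = torch.zeros_like(encoded[:1])              # the single "empty" 512x512 latent
blend = 0.8                                       # blend factor is an assumption
latents = blend * base + (1.0 - blend) * encoded  # one latent per frame for the sampler
```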
We get a nice set of empty latents for the sampler:
Then we let the KSampler, together with the AnimateDiff nodes and the ControlNet, do its magic, and we get a set of images for our animation. (The number of possible images seems to be limited by your system memory: I had no problem with 100, 150, 200, or 250 images and have not tested higher numbers yet. I could not load the full video.)
The last step is to put everything together with the Video Combine node. You can set the frame rate here; 30 FPS seems to produce acceptable results:
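The ComfyUI graph handles this with the AnimateDiff and ControlNet nodes. For reference, the plain diffusers AnimateDiff pipeline looks roughly like this; it does not include the per-frame ControlNet conditioning, so it is only a loose stand-in, and the model IDs are assumptions (lyriel_v14 would need to be in diffusers format):

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, EulerDiscreteScheduler

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # stand-in base model, not lyriel_v14
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

result = pipe(
    prompt="a dancing ninja girl wearing a wooden demon mask, medieval background, cinematic lighting",
    negative_prompt="bad anatomy, bad proportions, blurry, deformed, extra limbs",
    num_frames=16,            # memory use grows with the frame count, as noted above
    guidance_scale=6.0,
    num_inference_steps=25,
)
anim_frames = result.frames[0]  # list of PIL images
```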
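A minimal stand-in for the Video Combine node with OpenCV, writing the frames at 30 FPS (filename and codec are just example choices, and the frames are assumed to be 512x512):

```python
import cv2
import numpy as np

writer = cv2.VideoWriter("animation.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (512, 512))
for frame in anim_frames:  # PIL images from the sampling step above
    bgr = cv2.cvtColor(np.array(frame), cv2.COLOR_RGB2BGR)
    writer.write(bgr)
writer.release()
```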
That's awesome work! You're getting better and better at this :)
It's too bad the embedding doesn't seem to work so well, maybe someone else has a solution for this?
I've tried to implement LoRAs in the workflow, along with a face detailer to strengthen the LoRA effect. The results are quite interesting (the low quality comes from the webm format):
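For anyone scripting this instead, diffusers can attach a LoRA to the same pipeline. A small sketch only: the LoRA file, prompt, and scale are hypothetical, and the face detailer has no one-line equivalent here.

```python
# Attach a character LoRA to the pipeline from the earlier sketches.
pipe.load_lora_weights("my_character_lora.safetensors")  # hypothetical file

result = pipe(
    prompt="a dancing character, cinematic lighting",  # placeholder prompt
    cross_attention_kwargs={"scale": 0.8},              # LoRA strength, an assumed value
    num_frames=16,
    guidance_scale=6.0,
)
```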
Wow, that looks quite consistent! It's so weird to see Trump happy... and in shape..
You might want to put the webm files behind spoilers, they take up a lot of space in the feed if you just want to scroll through. At least they do in my browser (Firefox).
It's all a bit trial and error right now. This animation took about 20 minutes on my machine. I would love to do some more tests with different models and embeddings, or even LoRAs, but unfortunately my time for this is somewhat limited.
I love doing the contests to test new things out :-)
Visions for the future:
If you could get a stable output for the background and the actors (maybe LoRAs?), you could "play out" your own scenes and transform them via Stable Diffusion into something great. I'm thinking of epic fight scenes, or even short animation films.
This whole Stable Diffusion thing is extremely interesting and, in my opinion, a game changer like the introduction of the mobile phone.
I'm glad you're enjoying the contests, your contributions are always welcome :)
Though you might want to consider making a post of your own, your work deserves a lot more exposure than just as a comment.
The idea of making your own consistent scenes sounds quite impressive, it's a bit out of my league though. Like you, I have limited time to invest in this hobby, I'll stick to my images :)