Your work is invaluable
I wasn't planning on doing Local video but now feel compelled to try it out. Thank you as always !
I've already become a member even though the amount is small and it's more of a recognition of your work... life as a Brazilian is difficult :)
One of the clearest, best-paced, most to-the-point tutors out there. I'm six months into this now and have watched most of the open-source AI content and channels, and by far this is the best one, hands down. Others fall into one of two traps: they either don't organize the video content, making the videos way too long, or they rush to avoid that, which makes the video messy and unclear. Thank you again for all your time, effort, and valuable content, Pixaroma.
I would like to thank you a lot for your honest explanation and clear work.
The new enhancements feel super smooth and efficient!
Yo man! I don't know what we would do without you. I am a total beginner with some slight experience, and I already know a lot thanks to you. Thanks a million <3
This model is for impatient people like me 🙂 Thanks for the video!
Thank you for your time and efforts!
Thank you for the great workflows you share with us, much appreciated!
Video generation is BLAZINGLY fast and looks great, BUT the upscaling part took forever and completely froze my 5060 Ti 16GB machine. After applying the following changes, it worked flawlessly:
LTXV Tiled Sampler node:
🔸 Change horizontal_tiles and vertical_tiles to 2
🔸 Change latents_cond_strength to 0.23
VAE Decode (Tiled) node:
🔸 Change tile_size to 640
🔸 Change overlap to 64
Your videos are very well prepared, explaining the entire installation and workflow development process step by step. This is the best place for beginners to learn ComfyUI without the disappointment of following a video and finding that nothing works at the end due to missing dependencies, etc. I'm becoming a member: the internet needs people with your teaching skills.
Thank you for your effort to put it together and to show it to us! ;) Yes, I also have the feeling that it is very fast. Greetings 😊
Tested LTX 13B on my RTX 3060 12GB last week, and honestly, the results weren’t great. The artifacts before upscaling were pretty noticeable, and it was way slower than the LTX Lite 0.9.6 version I was using before, which is understandable. But yesterday, I tried the newly updated 13B distilled model and your workflow from Discord, and I’m honestly impressed. The speed has improved so much (8 steps), especially compared to something like Wan 2.1. The video quality, even before upscaling (768x512), is looking pretty solid! Now, LTX 13B distilled model works like a charm for me. I’m getting results in around 1-2 minutes for 4s/16fps, which is decent if you’re not in a mad rush. It’s fast enough if you have some patience. (not tested GGUF yet) Thanks for the vid, and keep having fun with these tools! 🚀
Amazing content as always! Any chance of a FramePack tutorial in the near future? Specifically, how to properly structure it and optimize it for fast iteration and reuse across different projects. Would really appreciate it—thanks for keeping these tools alive and well documented!
Well done, thank you for this tutorial!
Fantastic tutorial, thanks for sharing your knowledge.
This i2v model works very well too, even when the prompt is empty. In my tests, the result was better without a prompt most of the time.
Thank you
@pixaroma