Hi,
Quick honesty up front: I'm not a developer. I used Claude Code both for
setting things up and for writing this post — my English isn't this clean
on its own. The debugging happened on my actual machine, not in a
chat window.
One thing to be upfront about: it was Claude that actually diagnosed
what was going wrong — I just ran the commands and watched. I know
AI-assisted contributions can be a sore point in open source lately,
and I get it if you'd rather not have noise like this. I'm posting
anyway because the underlying problem is real and I lived through it.
There are some good resources for running ComfyUI on Strix Halo
(mostly Linux/Docker focused) and plenty for LTX-2 (mostly NVIDIA),
but I couldn't find one that covers the specific combination I was on:
LTX-2 + Strix Halo + Windows + the host-RAM trade-off you make when you
allocate most memory to the GPU.
If it's not useful, just say so and I'll back off.
I spent yesterday getting LTX-2.3 to produce its first video on a Ryzen
AI MAX+ 395 laptop (128GB unified memory, Windows). It works now. Took
way longer than I expected — most of the trouble comes from the fact that, on this kind of unified-memory APU, allocating most RAM to the
GPU leaves only ~32GB for the host, and a few things in the LTX-2 path
quietly assume more than that.
Some of what I hit overlaps with existing reports (#303, #344, #11726,
and Wan2GP #1341), but they're scattered. As I went I kept wishing for
one "ok here's actually what to do on this kind of machine" page.
Would a notes-style doc in this repo be welcome? I have a draft written
from a beginner's POV. If yes I'll open a PR. If you'd rather I post it
as a gist, or you don't think it belongs here at all, that's totally
fine — just want to ask before adding noise.
Thanks for the project.