Stable Diffusion on Linux using ROCm from a container
By Timo Jyrinki
This hackweek I’ve been playing around a bit with my desktop computer, which has an AMD Radeon RX 6600 XT graphics card based on the RDNA2 architecture. The idea was to find a way to use it for the Stable Diffusion version 2 latent text-to-image diffusion model without invading the host too much with randomly downloaded modules, while still using the GPU for computing. The graphics card has “only” 8 GB of VRAM, which is apparently just a starter amount in this field, so I also needed to check whether that is enough.
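The basic idea can be sketched as a short Dockerfile; note this is a rough sketch under my assumptions (AMD publishes ROCm-enabled PyTorch base images on Docker Hub, and InvokeAI installs from PyPI), not the exact contents of my repository:

```dockerfile
# Sketch: build on AMD's ROCm-enabled PyTorch image so the host only needs
# the amdgpu kernel driver, not the full ROCm userspace or any Python modules.
FROM rocm/pytorch:latest

# InvokeAI and all of its Python dependencies live entirely inside the image.
RUN pip install InvokeAI
```

This way everything pip pulls in stays inside the container, and deleting the image removes it all again.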
Shortly, I found out that while the code is open source, the model data unfortunately is not: once again, new licenses (the OpenRAIL license family) have been developed by people who have not fully understood, or wanted to understand, the wisdom in The Open Source Definition (or the free software definition either). So ultimately this is just about studying and using these models for fun, not for serious use. Hopefully open source models will also be developed at some point in the future. I just fear this will only happen a long time later, after the effects of having vague ethical points in a copyright license are felt, and “this is not what we intended, how could we have anticipated these problems?” is said by the people creating and utilizing the data (continued, hopefully, with “hmm, how could we re-license all of this to CC-BY-SA?”).
Since my Hackweek time was more limited than intended, and I also ended up battling broken PyPI modules and other things, I’ll just leave a Docker container git tree here and a sample image generated below. In short, it worked like a breeze until it broke, thanks to pip/PyPI/numpy/something. Anyway, when it works, it brings up an InvokeAI-based web UI for feeding prompts to Stable Diffusion. And yes, the ROCm stack works nicely on my desktop computer: I downloaded and used only the stable-diffusion-2.1-768 model data, disabled the NSFW filter to save VRAM, and created 768x768 images. VRAM use was around 6.5 GB out of the 8 GB available according to radeontop, and it worked like a charm!
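For reference, an AMD GPU is exposed to a container via the /dev/kfd and /dev/dri device nodes; a run command along these lines is the usual shape (the image name and published port are assumptions for illustration, not my repository's exact values):

```shell
# Pass the ROCm compute (kfd) and render (dri) device nodes into the container;
# --group-add video lets the in-container user actually open them.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  -p 9090:9090 \
  my-stable-diffusion-rocm   # hypothetical image name

# Meanwhile on the host, watch VRAM use live:
radeontop
```

The device passthrough means no ROCm userspace needs to be installed on the host at all; only the kernel driver has to be present.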
Many of the Dockerfiles out there were both woefully outdated and unlicensed, so I could not use them other than for inspiration - these are MIT licensed.
Here is also an image of the UI running in a web browser (you can also use just the Python CLI):
The shakiness of the PyPI installation ended after yesterday, and this time I’ll commit the final Docker container result for later use.