How to find the source code of the Stable Diffusion demo app
Posted: Thu, 2023-02-23 10:33
Dear developer,
We're glad to see that you received the notice from our announcement of the Stable Diffusion launch. The demo app is still in internal testing; please keep an eye on our official site for updates.
BR.
Wei
Hi Wei,
Any update on this? Is the Stable Diffusion Demo APP available now?
Thanks,
Conan353
Check this out and let me know if it's relevant :)
https://docs.qualcomm.com/bundle/publicresource/topics/80-64748-1/introd...
I don't see how it's relevant.
https://docs.qualcomm.com/bundle/publicresource/topics/80-64748-1/introduction.html
Running this Stable Diffusion demo on the Windows Dev Kit 2023 requires an ARM64 Python package for Windows.
But I cannot find the proper version of the ARM64 Python package for it.
Can anyone who ran it successfully let me know where to get the right version of the ARM64 Python package?
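One common pitfall here is accidentally installing the x64 (emulated) Python build instead of a native ARM64 one. A small sketch to check what your interpreter actually is (the helper name and the ARM64/AARCH64 strings are my own choices, not from the demo):

```python
import platform
import struct

def is_arm64_python(machine=None, pointer_bits=None):
    """Return True if the interpreter looks like a native 64-bit ARM build.

    On Windows on Arm, platform.machine() reports 'ARM64' for a native
    build; an x64 Python running under emulation reports 'AMD64'.
    """
    machine = (machine or platform.machine()).upper()
    pointer_bits = pointer_bits or struct.calcsize("P") * 8
    return machine in ("ARM64", "AARCH64") and pointer_bits == 64
```

Running `python -c "import platform; print(platform.machine())"` from the same shell you use for the demo is the quickest way to confirm which build is on your PATH.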
https://docs.qualcomm.com/bundle/publicresource/topics/80-64748-1/introd...
In the demo, Stable Diffusion was quantized at a16w8 precision, but when I tried to quantize it at a8w8 by changing activation_bit_width in the config.json from 16 to 8, it generated incorrect images.
Are there any suggestions?
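For reference, the change described above can be scripted rather than hand-edited; this is just a sketch of that one-field edit (the `activation_bit_width` key comes from the post, but the rest of the config's layout is an assumption):

```python
import json
from pathlib import Path

def set_activation_bits(config_path, bits):
    """Rewrite activation_bit_width in the demo's config.json.

    E.g. set_activation_bits("config.json", 8) to try a8w8,
    or back to 16 to restore the demo's default a16w8.
    """
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["activation_bit_width"] = bits
    path.write_text(json.dumps(config, indent=2))
    return config
```

Note that dropping activations from 16 to 8 bits is a much more aggressive quantization than dropping weights; broken images at a8w8 often mean the model needs requantization (fresh calibration or quantization-aware training) rather than just a config flip.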
I followed the notebook for conversion; it looked successful and I can run test inference on the result. But when trying to export, torch.onnx.export raises that it got an unexpected argument "use_external_data_format". After modifying the code I was able to export all models, and to convert the text encoder and VAE decoder using QNN. But the U-Net conversion returns "after pruning disconnected nodes, the model is empty". I explored the ONNX file with Netron and it looks fine. Is there some config that must be changed?
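The `use_external_data_format` error usually means the notebook was written against an older PyTorch: newer releases removed that argument from `torch.onnx.export` and handle external data automatically. A generic sketch of the workaround (the helper is my own, not part of the demo; it just drops keyword arguments the target function no longer accepts):

```python
import inspect

def filter_kwargs(func, kwargs):
    """Return a copy of kwargs with entries func does not accept removed.

    Useful when an export script passes use_external_data_format to a
    torch.onnx.export that no longer supports it. Functions taking
    **kwargs accept everything, so kwargs is returned unchanged.
    """
    params = inspect.signature(func).parameters
    if any(p.kind == inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}
```

Then `torch.onnx.export(model, args, path, **filter_kwargs(torch.onnx.export, export_kwargs))` works across PyTorch versions. Alternatively, pinning PyTorch to the version the notebook was written for avoids the mismatch entirely.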
I learned from the 8 Gen 3 conference that it can run Stable Diffusion in less than one second. Can you provide a demo or some details? Thank you @weihuan
I also ran into rafael.gfrancisco's problem, but was able to solve it by using a different version of the transformers library (I think). I ran into a separate problem: the output directories for each of the models (text encoder, U-Net, VAE) did not contain any input_list.txt files.
Dear developer,
You can find more resources at https://docs.qualcomm.com/bundle/publicresource/topics/80-64748-1/model_....
BR.
Wei