We can put a specific face into Stable Diffusion images using LoRA or DreamBooth checkpoint models. But both require training a new model, which can be time-consuming. What if you could inject a face instantly at sampling time, without any training?
It is fast and convenient!
The method takes input images like these (from the LoRA training dataset):
You can then generate images of that face with any prompt.