As pretrained text-to-image diffusion models have become a useful tool for
image synthesis, users increasingly want finer control over the generated
results. In this paper, we introduce a method that produces results with the
same structure as a target image but painted with colors from a reference image.