Abstract: We propose a novel generative adversarial network for one-shot face
reenactment, which can animate a single face image to a different
pose-and-expression (provided by a driving image) while preserving its original
appearance. The core of our network is a novel mechanism called appearance
adaptive normalization, which effectively integrates the appearance
information from the input image into our face generator by modulating the
generator's feature maps with learned adaptive parameters.
Furthermore, we design a dedicated local network that first reenacts the local
facial components (i.e., the eyes, nose, and mouth), which is a much easier task
for the network to learn and in turn provides explicit anchors that guide our
face generator in learning the global appearance and pose-and-expression.
Extensive quantitative and qualitative experiments demonstrate that our model
significantly outperforms prior one-shot methods.