Abstract
Facial expression synthesis has gained increasing attention in artificial intelligence applications. Existing methods take a facial image as input and regenerate the entire image to produce the new expression, which can destroy important identity features of the original image. Psychological research shows that the differences between facial expressions appear mainly in a few crucial areas, chiefly the eyes and mouth. In this paper, we propose to generate a new facial expression image from a given facial image by regenerating only these crucial regions, minimizing the area of the image that must be generated instead of generating the whole image. Our method is based on the Denoising Diffusion Probabilistic Model (DDPM) and uses text embeddings to guide the generator toward the desired expression. Our method can generate realistic facial expression images while preserving the identity of the input facial image.
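The abstract does not give implementation details, but the core idea (run a text-conditioned DDPM reverse process only over the eye and mouth regions while pinning the rest of the face to the source image) can be illustrated with a minimal inpainting-style sampling sketch. Everything below is an assumption for illustration: the function `masked_ddpm_sample`, the `DummyDenoiser` stand-in for the paper's text-conditioned denoiser, the linear beta schedule, and the mask coordinates are all hypothetical, not the authors' code.

```python
# Illustrative sketch only: region-restricted DDPM sampling with text conditioning.
# All names and hyperparameters here are assumptions, not the paper's implementation.
import torch

def masked_ddpm_sample(model, source, mask, text_emb, timesteps=1000):
    """Reverse diffusion that regenerates only the masked region.

    source:   (B, C, H, W) source face image in [-1, 1]
    mask:     (B, 1, H, W) 1 over eye/mouth regions to regenerate, 0 elsewhere
    text_emb: (B, D) embedding of the target-expression description
    """
    betas = torch.linspace(1e-4, 0.02, timesteps)      # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(source)                       # start from pure noise
    for t in reversed(range(timesteps)):
        a_t, ab_t = alphas[t], alpha_bars[t]
        # Predict the noise component, conditioned on the text embedding.
        eps = model(x, torch.full((source.size(0),), t), text_emb)
        mean = (x - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        # Pin unmasked pixels to a matching noisy version of the source image,
        # so identity outside the eyes/mouth is preserved exactly.
        if t > 0:
            ab_prev = alpha_bars[t - 1]
            source_noisy = (torch.sqrt(ab_prev) * source
                            + torch.sqrt(1 - ab_prev) * torch.randn_like(source))
        else:
            source_noisy = source
        x = mask * x + (1 - mask) * source_noisy
    return x

if __name__ == "__main__":
    # Dummy denoiser standing in for a text-conditioned U-Net (illustrative only).
    class DummyDenoiser(torch.nn.Module):
        def __init__(self, emb_dim=64):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 3, 3, padding=1)
            self.proj = torch.nn.Linear(emb_dim, 3)

        def forward(self, x, t, text_emb):
            cond = self.proj(text_emb)[:, :, None, None]
            return self.conv(x) + cond

    face = torch.rand(1, 3, 64, 64) * 2 - 1
    m = torch.zeros(1, 1, 64, 64)
    m[..., 20:30, 16:48] = 1.0   # hypothetical eye region
    m[..., 44:56, 22:42] = 1.0   # hypothetical mouth region
    emb = torch.randn(1, 64)
    out = masked_ddpm_sample(DummyDenoiser(), face, m, emb, timesteps=50)
    print(out.shape)
```

The pinning step is the part that matches the abstract's claim: only the masked eye and mouth areas are ever synthesized, so identity cues elsewhere in the face are carried over from the input unchanged.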
| Original language | English |
| --- | --- |
| Pages (from-to) | 283-295 |
| Number of pages | 13 |
| Journal | International Journal of Innovative Computing, Information and Control |
| Volume | 20 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Feb 2024 |
Keywords
- Denoising diffusion probabilistic model
- Facial expression synthesis
- Image generative model
- Text-guided image generator
- Text-to-image