LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control

Summary

LivePortrait is an official PyTorch implementation for efficient portrait animation, bringing still images and videos to life with advanced stitching and retargeting control. It supports both human and animal subjects, offering various features like image-driven mode, regional control, and precise editing. Widely adopted by major video platforms, LivePortrait provides a robust solution for generating dynamic animated portraits.

Repository Info

Updated on October 12, 2025

Introduction

LivePortrait is the official PyTorch implementation of the paper "LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control". It animates a source portrait, either a still image or a video, with the motion of a driving video, and its stitching and retargeting modules give precise, high-quality control over the result. LivePortrait has been adopted by major video platforms such as Kuaishou, Douyin, Jianying, and WeChat Channels, as well as numerous startups and creators, and handles both human and animal subjects.

Installation

Getting started with LivePortrait is straightforward. Follow these steps to set up your environment and run the project:

  1. Clone the Repository:

    git clone https://github.com/KwaiVGI/LivePortrait
    cd LivePortrait
    
  2. Prepare the Environment:
    Ensure you have git, conda, and FFmpeg installed. Then, create and activate a Conda environment:

    conda create -n LivePortrait python=3.10
    conda activate LivePortrait
    
  3. Install Dependencies:

    • For Linux and Windows Users:
The X-Pose dependency, required for the animals mode, is built in a separate step (see the animals example below). Install the Python dependencies:

      pip install -r requirements.txt
      

      (Optional: Check your CUDA versions and install the corresponding PyTorch version for optimal performance.)

    • For macOS with Apple Silicon Users:
      X-Pose is not supported on macOS, so Animals mode is unavailable. Use the macOS-specific requirements file:

      pip install -r requirements_macOS.txt
      
  4. Download Pretrained Weights:
    Download the necessary pretrained models from HuggingFace:

    huggingface-cli download KwaiVGI/LivePortrait --local-dir pretrained_weights --exclude "*.git*" "README.md" "docs"
    

    Alternatively, weights can be downloaded from Google Drive or Baidu Yun.
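For the optional CUDA check in step 3, a minimal sketch is shown below. The `cu118` wheel index is purely illustrative; pick the build that matches your driver using the selector on pytorch.org.

```shell
# Check which CUDA version your NVIDIA driver supports
nvidia-smi | grep "CUDA Version"

# Illustrative: install PyTorch wheels built against CUDA 11.8
# (replace cu118 with the version reported above)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```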

Examples

LivePortrait offers various ways to animate portraits, from command-line inference to a user-friendly Gradio interface.

Fast Hands-on (Humans)

Run basic inference to animate human portraits:

# For Linux and Windows users
python inference.py

# For macOS users with Apple Silicon
PYTORCH_ENABLE_MPS_FALLBACK=1 python inference.py

You can also specify source images or videos:

# Source input is an image
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4

# Source input is a video
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d0.mp4
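Beyond `-s` and `-d`, the script exposes further options (stitching, cropping, and retargeting flags, among others). Assuming the usual argparse-style interface, the full list can be printed with the standard help flag:

```shell
# Print every available command-line option for human inference
python inference.py --help
```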

Fast Hands-on (Animals)

For animating animal portraits, first build the MultiScaleDeformableAttention OP:

cd src/utils/dependencies/XPose/models/UniPose/ops
python setup.py build install
cd -

Then, run inference:

python inference_animals.py -s assets/examples/source/s39.jpg -d assets/examples/driving/wink.pkl --driving_multiplier 1.75 --no_flag_stitching

Gradio Interface

For a more interactive experience, use the Gradio interface:

# For humans mode
python app.py

# For animals mode (Linux with NVIDIA GPU only)
python app_animals.py

You can also enable torch.compile for faster inference (not supported on Windows or macOS):

python app.py --flag_do_torch_compile

Alternatively, try the online demo effortlessly on the HuggingFace Space.

Why Use LivePortrait

LivePortrait stands out for several compelling reasons:

  • Efficiency and Performance: Designed for efficient portrait animation, it delivers high-quality results quickly. An acceleration option with torch.compile further boosts inference speed by 20-30%.
  • Advanced Control: It provides sophisticated stitching and retargeting control, along with features like image-driven mode, regional control, and precise portrait editing, giving users fine-grained command over the animation process.
  • Versatile Application: Capable of animating both human and animal subjects, it supports various inputs including still images and videos. It also offers features like driving video auto-cropping and motion template making for privacy protection.
  • Wide Adoption and Community Support: LivePortrait has been adopted by major video platforms and boasts an active community that contributes projects, playgrounds, and tutorials, enhancing its capabilities and accessibility.
  • User-Friendly Experience: With a one-click installer for Windows and an intuitive Gradio interface, it's accessible to users of varying technical expertise. An online demo is also available on HuggingFace.
