Generative Multiplane Images: Making a 2D GAN 3D-Aware
ECCV 2022 (Oral)
Xiaoming Zhao, Fangchang Ma, David Güera Cobo, Zhile Ren, Alexander G. Schwing, and Alex Colburn.
This code has been tested on Ubuntu 18.04 with CUDA 10.2.
conda env create -f environment.yml
cd /path/to/this/repo
export GMPI_ROOT=$PWD
Please download our pretrained checkpoints and place them under ${GMPI_ROOT}/ckpts. The structure should be:
.
+-- ckpts
| +-- gmpi_pretrained
| | +-- FFHQ256
| | +-- FFHQ512
| | +-- FFHQ1024
| | +-- AFHQCat
| | +-- MetFaces
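To confirm the checkpoints are in place before running anything, a quick layout check can be sketched as follows (a minimal sketch; the helper name is ours, and it only verifies the directories listed above exist):

```python
import os

# Expected pretrained-checkpoint directories, as listed in the tree above.
DATASETS = ["FFHQ256", "FFHQ512", "FFHQ1024", "AFHQCat", "MetFaces"]

def check_ckpt_layout(gmpi_root: str) -> list:
    """Return the expected checkpoint directories that are missing."""
    base = os.path.join(gmpi_root, "ckpts", "gmpi_pretrained")
    return [d for d in DATASETS if not os.path.isdir(os.path.join(base, d))]
```

Running `check_ckpt_layout(os.environ["GMPI_ROOT"])` should return an empty list once all five checkpoint folders are downloaded.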
We use the following variables for illustration purposes.
# This can be FFHQ256, FFHQ512, FFHQ1024, AFHQCat, or MetFaces
export DATASET_NAME=FFHQ1024
export OUTPUT_DIR=${GMPI_ROOT}/ckpts/gmpi_pretrained/${DATASET_NAME}
# Set this to your favorite seed
export SEED=589
# - When psi = 1.0 there is no truncation, which is used for quantitative results in the paper.
# - To obtain better qualitative results, use psi < 1.0.
export TRUNCATION_PSI=1.0
The following command renders an image ${OUTPUT_DIR}/rendered.png, along with:
- mpi_alpha.png: alpha maps for all planes,
- mpi_rgb.png: the same RGB texture for all planes,
- mpi_rgba.png: RGB-alpha images for all planes.
conda activate gmpi && \
export PYTHONPATH=${GMPI_ROOT}:${GMPI_ROOT}/gmpi/models:$PYTHONPATH && \
python ${GMPI_ROOT}/gmpi/eval/vis/render_video.py \
--ckpt_path ${OUTPUT_DIR}/generator.pth \
--output_dir ${OUTPUT_DIR} \
--seeds ${SEED} \
--nplanes 96 \
--truncation_psi ${TRUNCATION_PSI} \
--exp_config ${OUTPUT_DIR}/config.pth \
--render_single_image 1
Note: We use nplanes = 96 in the paper for reporting quantitative and qualitative results, but GMPI is able to produce high-quality results even with 32 planes. Use a smaller nplanes (e.g., 32) if you run into CUDA out-of-memory errors.
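The rendered image is obtained by alpha-compositing the planes back to front, so memory grows roughly linearly with nplanes, which is why lowering it avoids out-of-memory errors. The compositing step can be sketched as follows (an illustrative numpy sketch, not the repo's renderer):

```python
import numpy as np

def composite_mpi(rgb: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Back-to-front "over" compositing of a multiplane image.

    rgb:   (n_planes, H, W, 3) per-plane color, back plane first
    alpha: (n_planes, H, W, 1) per-plane alpha in [0, 1]
    Returns the composited (H, W, 3) image.
    """
    out = np.zeros(rgb.shape[1:], dtype=rgb.dtype)
    for color, a in zip(rgb, alpha):  # iterate back to front
        out = color * a + out * (1.0 - a)
    return out
```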
The following command renders a video in ${OUTPUT_DIR}, along with:
- video_rgb.mp4: video for the RGB rendering,
- video_depth.mp4: video for the depth rendering.
conda activate gmpi && \
export PYTHONPATH=${GMPI_ROOT}:${GMPI_ROOT}/gmpi/models:$PYTHONPATH && \
python ${GMPI_ROOT}/gmpi/eval/vis/render_video.py \
--ckpt_path ${OUTPUT_DIR}/generator.pth \
--output_dir ${OUTPUT_DIR} \
--seeds ${SEED} \
--nplanes 96 \
--truncation_psi ${TRUNCATION_PSI} \
--exp_config ${OUTPUT_DIR}/config.pth \
--render_single_image 0 \
--horizontal_cam_move 1
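A horizontal camera move of this kind is typically a smooth yaw sweep that starts and ends at the frontal view. A hypothetical sketch of such a trajectory (the angle range is an illustrative choice, not a value taken from the GMPI configs):

```python
import numpy as np

def horizontal_yaw_trajectory(n_frames: int, max_yaw_deg: float = 15.0) -> np.ndarray:
    """Yaw angles (degrees) for a smooth left-right-left camera sweep.
    The trajectory begins and ends at the frontal view (yaw = 0) so the
    resulting video loops cleanly."""
    t = np.linspace(0.0, 2.0 * np.pi, n_frames)
    return max_yaw_deg * np.sin(t)
```

A vertical sweep would apply the same curve to pitch instead of yaw.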
Notes:
- Set nplanes to a small number (e.g., 32) if you run into CUDA out-of-memory errors.
- Set horizontal_cam_move to 0 if you want a video with vertical camera motion.
The following command produces a mesh ${OUTPUT_DIR}/mesh_${TRUNCATION_PSI}.ply.
conda activate gmpi && \
export PYTHONPATH=${GMPI_ROOT}:${GMPI_ROOT}/gmpi/models:$PYTHONPATH && \
python ${GMPI_ROOT}/gmpi/eval/vis/extract_mesh.py \
--ckpt_path ${OUTPUT_DIR}/generator.pth \
--dataset ${DATASET_NAME} \
--save_dir ${OUTPUT_DIR} \
--exp_config ${OUTPUT_DIR}/config.pth \
--stylegan2_sanity_check 0 \
--truncation_psi ${TRUNCATION_PSI} \
--seed ${SEED} \
--chunk_n_planes -1
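The chunk_n_planes flag bounds how many planes are processed at once: -1 handles all planes in a single pass, while a small positive value trades speed for a lower peak-memory footprint. The chunking pattern can be sketched as follows (illustrative, not the repo's implementation):

```python
import numpy as np

def process_planes_chunked(planes: np.ndarray, fn, chunk_n_planes: int = -1) -> np.ndarray:
    """Apply fn along the plane axis in chunks. chunk_n_planes = -1
    processes all planes at once; a small positive value bounds peak
    memory, mirroring the --chunk_n_planes flag above. fn must act
    independently on each plane for the result to be identical."""
    if chunk_n_planes <= 0:
        return fn(planes)
    chunks = [fn(planes[i:i + chunk_n_planes])
              for i in range(0, len(planes), chunk_n_planes)]
    return np.concatenate(chunks, axis=0)
```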
Notes:
- Set chunk_n_planes to a small positive number (e.g., 64) if you run into CUDA out-of-memory errors.
Please refer to TRAIN_EVAL.md for more details.
Xiaoming Zhao, Fangchang Ma, David Güera, Zhile Ren, Alexander G. Schwing, and Alex Colburn. Generative Multiplane Images: Making a 2D GAN 3D-Aware. ECCV 2022.
@inproceedings{zhao-gmpi2022,
  title = {Generative Multiplane Images: Making a 2D GAN 3D-Aware},
  author = {Xiaoming Zhao and Fangchang Ma and David Güera and Zhile Ren and Alexander G. Schwing and Alex Colburn},
  booktitle = {Proc. ECCV},
  year = {2022},
}
This sample code is released under the LICENSE terms.
Some of this software was built on Nvidia codebase, as noted within the applicable files, and such Nvidia code is available under its own terms at https://github.com/NVlabs/stylegan2-ada-pytorch. The authors of this software are not responsible for the contents of third-party websites.