Dr. Chun-Hao Paul Huang

Senior Research Scientist at Adobe Research London
Building the future of generative video and 3D — from research to production


Who I Am

Full CV

I am a Senior Research Scientist at Adobe Research London. My work lives at the intersection of multi-modal generative AI, 3D vision, and creative tools used by millions.

I led the cross-org effort that shipped free-form 3D camera control in Adobe Firefly Video, built Adobe's first in-house zero-shot video generation model, and contributed to the first text-to-multiview model.

Prior to Adobe, I spent three years as a postdoc at the Max Planck Institute for Intelligent Systems with Dr. Michael Black, focusing on 3D human body reconstruction and human-scene interaction. I received my Ph.D. (summa cum laude) from TUM, with collaborations at INRIA, Disney Research, and Microsoft Research.

Shipped Technology
Firefly Video · 3D Camera Control
Outstanding Reviewer
CVPR 2025 · 711 / 12,593
Best Paper Finalist
CVPR 2022 · 33 / 8,000
Best Paper Finalist
CVPR 2021 · 32 / 7,500
Best Paper Runner-up
3DV 2013
🎓
Education
Ph.D. (summa cum laude), TUM · M.Sc. & B.Sc., NCKU
🔬
Research
Multi-modal GenAI · 3D & Video Generation
🏢
Current
Senior Research Scientist, Adobe Research London
🏛️
Previous
Postdoc, Max Planck Institute for Intelligent Systems

Highlighted Projects

SpaceTimePilot
CVPR 2026 · 4D Rendering

SpaceTimePilot: Generative Rendering Across Space and Time

Disentangles space and time in video diffusion for controllable generative rendering. Given a single video, freely steer camera viewpoint and temporal motion across the 4D space-time domain.

Project Page →
Firefly Video 3D Camera Control
Shipped · Adobe Firefly

Firefly Video: 3D Camera Control

Led the cross-org effort to ship free-form 3D camera control in the Firefly Video Model. Adapter-based conditioning adds only 3% runtime overhead. T2V shipped Aug 2025; I2V shipped Dec 2025.

Try it live →
JOG3R
BMVC 2025 · 3D Video

JOG3R: Towards 3D-Consistent Video Generators

Achieves 3D consistency in video generation through geometric priors, enabling cameras to move freely through generated scenes.

Project Page →
HUMOTO
ICCV 2025 · Human-Object Dataset

HUMOTO: 4D Mocap Human-Object Interactions

A large-scale 4D dataset of motion-captured human-object interactions, enabling research in human motion understanding and generation.

Project Page →
Pix2Video
ICCV 2023 · Video Editing

Pix2Video: Video Editing using Image Diffusion

Repurposes pretrained image diffusion models for zero-shot video editing, enabling consistent frame-to-frame style and content manipulation.

Project Page →
RICH
CVPR 2022 · Human-Scene

RICH: Capturing Dense Human-Scene Contact

A dataset with videos of people in natural scenarios paired with ground-truth 3D body pose/shape and 3D scene scans for dense human-scene contact labeling.

Project Page →

Selected Publications