PhysGen3D: Crafting a Miniature Interactive World from a Single Image

1Tsinghua University, 2University of Illinois Urbana-Champaign, 3Columbia University

PhysGen3D takes a single image as input and synthesizes videos of a miniature interactive world.

Abstract

Envisioning physically plausible outcomes from a single image requires a deep understanding of the world's dynamics. To address this, we introduce PhysGen3D, a novel framework that transforms a single image into an amodal, camera-centric, interactive 3D scene.

By combining advanced image-based geometric and semantic understanding with physics-based simulation, PhysGen3D creates an interactive 3D world from a static image, enabling us to "imagine" and simulate future scenarios based on user input. At its core, PhysGen3D estimates the 3D shapes, poses, and physical and lighting properties of objects, thereby capturing the essential physical attributes that drive realistic object interactions. This framework allows users to specify precise initial conditions, such as object speed or material properties, giving fine-grained control over the generated videos.
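To make this concrete, the sketch below shows the kind of per-object state such a pipeline recovers and how a user might override the initial conditions before launching the simulation. All class, field, and preset names here are hypothetical placeholders, not PhysGen3D's actual interface.

# Minimal sketch of per-object state in a PhysGen3D-style pipeline.
# Names and values are illustrative placeholders, not the real API.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectState:
    name: str
    pose: Tuple[float, float, float]                        # estimated 3D position (m)
    material: str = "rigid"                                 # physical material preset
    velocity: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # user-set initial speed (m/s)

# Perception estimates shape, pose, material, and lighting from the image;
# the user then overrides the initial conditions to steer the outcome.
scene = [
    ObjectState("teddy_bear", pose=(0.0, 0.1, 0.5)),
    ObjectState("book", pose=(0.2, 0.3, 0.4)),
]
scene[0].velocity = (0.0, 1.5, -0.5)  # e.g. make the teddy bear jump forward
scene[1].material = "elastic"         # e.g. swap the book's material preset

for obj in scene:
    print(obj)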

We evaluate PhysGen3D's performance against closed-source state-of-the-art (SOTA) image-to-video models, including Pika, Kling, and Gen-3, showing PhysGen3D's capacity to generate videos with realistic physics while offering greater flexibility and fine-grained control. Our results show that PhysGen3D achieves a unique balance of photorealism, physical plausibility, and user-driven interactivity, opening new possibilities for generating dynamic, physics-grounded video from an image.

Pipeline

PhysGen3D Pipeline
Figure 1. PhysGen3D's framework pipeline. The system reconstructs 3D scenes from single images and enables interactive physics-based simulation.

Comparison

In this section, we compare videos generated by our framework against three state-of-the-art (SOTA) I2V models: Gen-3, Pika, and Kling. We carefully design prompts that describe the intended motion outcome, and additionally use the motion brush to control Kling; our framework instead takes initial velocity control as input. The results show that our method achieves the instructed outcome while maintaining plausible physics.

"The dog is deflated and collapses."


Input Image

Kling 1.0

Runway Gen-3

Pika 1.5

Ours

"The book falls and the orange rolls to the front."


Input Image

Kling 1.0

Runway Gen-3

Pika 1.5

Ours


Dynamic Effects

In this section, we showcase the dynamic effects generated by our framework. By changing the initial velocity or editing the material, we can produce diverse dynamics from the same input image. The results demonstrate our framework's ability to generate consistent and physically realistic behaviors.
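To give a sense of what "editing the material" can involve in an elasticity-based simulator, the snippet below defines three illustrative presets and converts them to the Lamé parameters that most continuum solvers consume. The conversion formulas are standard continuum mechanics; the preset values are our own placeholders, not PhysGen3D's settings.

# Illustrative material presets for an elasticity-based simulator.
# The preset values are placeholders, NOT PhysGen3D's actual settings.
PRESETS = {
    # name:    (density kg/m^3, Young's modulus Pa, Poisson ratio)
    "rigid":   (1000.0, 1e8, 0.30),   # very stiff, barely deforms
    "elastic": (1000.0, 1e5, 0.30),   # bouncy, springs back
    "soft":    (1000.0, 1e3, 0.45),   # floppy, collapses under gravity
}

def lame_parameters(young: float, poisson: float) -> tuple:
    """Convert Young's modulus and Poisson ratio to Lame mu and lambda."""
    mu = young / (2.0 * (1.0 + poisson))
    lam = young * poisson / ((1.0 + poisson) * (1.0 - 2.0 * poisson))
    return mu, lam

for name, (rho, young, poisson) in PRESETS.items():
    mu, lam = lame_parameters(young, poisson)
    print(f"{name:8s} density={rho:g} mu={mu:.3g} lambda={lam:.3g}")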


Material

We change the materials of the two objects.

Input Image

Rigid & Rigid

Elastic & Rigid

Soft & Soft

Motion

We change the initial velocity of the teddy bear.

Input Image

Jump to the front

Jump to the right

Jump back


Applications

Thanks to its explicit 3D representation, our video generation framework, PhysGen3D, enables a range of applications beyond video synthesis. We highlight a few of the use cases our system supports:


Dense 3D Tracking

Input Image

Collapse

Bounce
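Because the explicit representation exposes per-frame 3D particle positions, dense tracks come almost for free: project every particle with the estimated camera at each frame. The sketch below demonstrates the projection step with made-up pinhole intrinsics and a synthetic rollout.

# Sketch: dense 2D tracks by projecting simulated 3D particles through a
# pinhole camera. Intrinsics and the rollout are synthetic placeholders.
import numpy as np

K = np.array([[500.0,   0.0, 320.0],   # hypothetical camera intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points: np.ndarray) -> np.ndarray:
    """Project (N, 3) camera-space points to (N, 2) pixel coordinates."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Synthetic rollout: 10 frames of 1,000 particles drifting along +x.
rng = np.random.default_rng(0)
particles = rng.uniform([-0.2, -0.2, 0.8], [0.2, 0.2, 1.2], size=(1000, 3))
tracks = np.stack([project(particles + np.array([0.01 * t, 0.0, 0.0]))
                   for t in range(10)])
print(tracks.shape)  # (10, 1000, 2): one dense 2D track per particle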

Video Editing

We exchange one object between the two scenes (a minimal code sketch of both edits appears at the end of this section).

Input Image 1

Input Image 2

Video 1

Video 2

We remove the chair while keeping the toy at its initial position.

Input Image

Collapse
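Both edits reduce to simple operations on the explicit per-object scene representation, after which the edited scene is re-simulated and re-rendered. The dictionary sketch below is illustrative; the object names, mesh files, and poses are hypothetical.

# Sketch: video editing as operations on an explicit scene description.
# Object names, mesh files, and poses are hypothetical placeholders.
scene_1 = {"apple": {"mesh": "apple.obj", "pose": (0.1, 0.0, 0.5)},
           "mug":   {"mesh": "mug.obj",   "pose": (0.0, 0.2, 0.6)}}
scene_2 = {"toy":   {"mesh": "toy.obj",   "pose": (0.0, 0.1, 0.4)},
           "chair": {"mesh": "chair.obj", "pose": (0.3, 0.0, 0.8)}}

# Remove the chair while keeping the toy at its initial position.
del scene_2["chair"]

# Exchange one object between the two scenes, keeping each object's pose;
# re-simulating each edited scene then produces the new videos.
scene_1["toy"], scene_2["apple"] = scene_2.pop("toy"), scene_1.pop("apple")

print(sorted(scene_1))  # ['mug', 'toy']
print(sorted(scene_2))  # ['apple']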

BibTeX

@inproceedings{chen2025physgen3d,
  author    = {Chen, Boyuan and Jiang, Hanxiao and Liu, Shaowei and Gupta, Saurabh and Li, Yunzhu and Zhao, Hao and Wang, Shenlong},
  title     = {PhysGen3D: Crafting a Miniature Interactive World from a Single Image},
  booktitle = {CVPR},
  year      = {2025},
}