Crafting a Miniature Interactive World from a Single Image

1Tsinghua University, 2University of Illinois Urbana-Champaign, 3Columbia University

MiniTwin takes a single image as input and synthesizes videos of a miniature interactive world.

Abstract

Envisioning physically plausible outcomes from a single image requires a deep understanding of the world's dynamics. To address this, we introduce MiniTwin, a novel framework that transforms a single image into an amodal, camera-centric, interactive 3D scene.

By combining advanced image-based geometric and semantic understanding with physics-based simulation, MiniTwin creates an interactive 3D world from a static image, enabling us to "imagine" and simulate future scenarios based on user input. At its core, MiniTwin estimates the 3D shape, pose, and physical and lighting properties of each object, thereby capturing the essential physical attributes that drive realistic object interactions. This framework allows users to specify precise initial conditions, such as object speed or material properties, for enhanced control over the generated video outcomes.

We evaluate MiniTwin's performance against closed-source state-of-the-art (SOTA) image-to-video models, including Pika, Kling, and Gen-3, showing MiniTwin's capacity to generate videos with realistic physics while offering greater flexibility and fine-grained control. Our results show that MiniTwin achieves a unique balance of photorealism, physical plausibility, and user-driven interactivity, opening new possibilities for generating dynamic, physics-grounded video from an image.
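To make the object-centric description above concrete, the following is a minimal sketch of the kind of per-object state it implies: amodal shape, camera-centric pose, physical material, lighting, and user-editable initial conditions. The class and field names are hypothetical illustrations, not MiniTwin's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class SceneObject:
    # Hypothetical per-object state implied by the abstract.
    name: str
    vertices: np.ndarray                      # (V, 3) reconstructed amodal mesh
    faces: np.ndarray                         # (F, 3) triangle indices
    pose: np.ndarray                          # (4, 4) camera-centric transform
    material: str = "rigid"                   # e.g. "rigid", "elastic", "soft"
    density: float = 1000.0                   # kg/m^3
    initial_velocity: np.ndarray = field(
        default_factory=lambda: np.zeros(3))  # m/s, user-specified control


@dataclass
class MiniatureWorld:
    objects: List[SceneObject]
    lighting: Dict[str, float]                # estimated lighting parameters
    camera: np.ndarray                        # (4, 4) extrinsics of the input view
```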

Pipeline

Figure 1. MiniTwin's framework pipeline. The system reconstructs 3D scenes from single images and enables interactive physics-based simulation.
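Figure 1 implies a staged flow from image understanding to reconstruction, property estimation, simulation, and rendering. The sketch below is one plausible reading of that flow; every stage is passed in as a placeholder callable because the actual interfaces are not specified here, so this illustrates the data flow rather than MiniTwin's implementation.

```python
def imagine_future(image, stages, user_controls, num_frames=60):
    # 1. Image-based understanding: detect and segment the objects.
    masks = stages["segment"](image)

    # 2. Amodal, camera-centric reconstruction of per-object shape and pose.
    objects = [stages["reconstruct"](image, mask) for mask in masks]

    # 3. Estimate physical and lighting properties from appearance.
    for obj in objects:
        obj.material, obj.density = stages["estimate_physics"](image, obj)
    lighting = stages["estimate_lighting"](image)

    # 4. Apply user-specified initial conditions (speed, material edits, ...).
    for apply_control in user_controls:
        apply_control(objects)

    # 5. Simulate forward in time, then render back into the input viewpoint.
    trajectory = stages["simulate"](objects, num_frames)
    return stages["render"](trajectory, lighting)
```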

Comparison

In this section, we compare videos generated by our framework with three state-of-the-art (SOTA) I2V models: Gen-3, Pika, and Kling. We carefully design the prompts to describe the intended motion outcome and use the motion brush to control Kling. Our framework employs initial velocity control. The results show that our method can follow the text instructions while maintaining plausible physics.
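To make the difference in control modality concrete, the hedged sketch below contrasts the two setups: the baselines are steered by a text prompt (and Kling's motion brush), while our comparisons assign an explicit initial velocity to a named object. The helper reuses the hypothetical SceneObject fields sketched earlier and is not MiniTwin's actual interface.

```python
import numpy as np

# Baselines: the desired motion is requested purely through a text prompt
# (plus Kling's motion brush).
prompt = "The book falls and the orange rolls to the front."

# Ours: the motion is requested through an explicit physical initial condition.
# `world` is assumed to hold SceneObject-like entries as sketched earlier.
def set_initial_velocity(world, object_name, velocity_mps):
    for obj in world.objects:
        if obj.name == object_name:
            obj.initial_velocity = np.asarray(velocity_mps, dtype=float)

# e.g. set_initial_velocity(world, "book", [0.3, 0.0, 0.0])
```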

"The dog is deflated and collapses."


Input Image

Kling 1.0

Runway Gen-3

Pika 1.5

Ours

"The book falls and the orange rolls to the front."


Input Image

Kling 1.0

Runway Gen-3

Pika 1.5

Ours

Dynamics Effects

In this section, we showcase the dynamic effects generated by our framework. We can generate diverse dynamics from the same input image by changing an object's initial velocity or editing its material. These results demonstrate our framework's ability to produce consistent and realistic physical behavior.
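As an illustration of this kind of variation, here is a minimal sketch that re-simulates one reconstructed scene under different material assignments and initial velocities. The specific edit values and stage functions are placeholders for illustration, not the settings used in our experiments.

```python
import copy

import numpy as np

# Hypothetical edits of the same reconstructed scene: each variant changes only
# the material assignment or the initial velocity before re-simulation.
VARIANTS = {
    "rigid":   {"material": "rigid",   "velocity": [0.0, 0.0, 0.0]},
    "elastic": {"material": "elastic", "velocity": [0.5, 0.0, 0.0]},
    "soft":    {"material": "soft",    "velocity": [0.5, 0.0, 0.0]},
}


def render_variants(base_world, simulate, render_video, num_frames=60):
    """Re-simulate one reconstruction under each edit; stages are placeholders."""
    videos = {}
    for name, edit in VARIANTS.items():
        world = copy.deepcopy(base_world)      # the reconstruction stays fixed
        for obj in world.objects:              # apply the same edit to every object
            obj.material = edit["material"]
            obj.initial_velocity = np.asarray(edit["velocity"], dtype=float)
        videos[name] = render_video(simulate(world, num_frames))
    return videos
```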

Material

We change the materials of the two objects.

Input Image

Input Image

Rigid & Rigid

Elastic & Rigid

Soft & Soft


