Exploring 2D and 3D world creation using an LWM (Large World Model)

Project Context

This project explores the transition from static 3D CAD geometry to generative world-building. The goal was to test how a Spatial Intelligence model interprets the intricate, "bubbly" shells and section cuts of the project when tasked with "lifting" 2D visual cues into a navigable 3D environment.

The Rhino-to-Point-Cloud Pivot: Try 1

I started the project with an existing Rhino model I had designed for my thesis at SCI-Arc, hoping to reconstruct it as a point cloud in World Labs and see the outcome. I exported the model as a .ply file and discovered the export wasn't a point cloud at all, but a mesh.
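In hindsight, the mesh-versus-point-cloud mismatch is easy to patch before upload. Below is a minimal sketch, assuming the Open3D library and placeholder filenames, that resamples the exported mesh surface into a true point cloud:

```python
# Rhino's .ply export carries vertices and faces (a mesh), so the surface
# has to be resampled into discrete points before point-cloud tools accept it.
import open3d as o3d

# Load the exported mesh ("chunk_mesh.ply" is a placeholder filename)
mesh = o3d.io.read_triangle_mesh("chunk_mesh.ply")

# Sample points uniformly across the mesh surface to build a true point cloud
pcd = mesh.sample_points_uniformly(number_of_points=500_000)

# Estimate normals so downstream tools can orient and shade the points
pcd.estimate_normals()

# Write out a .ply that really is a point cloud this time
o3d.io.write_point_cloud("chunk_pointcloud.ply", pcd)
```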

The Original SCI-Arc Thesis Chunk Render

I then tried using 2D images to recreate the 3D model, and soon found out the 2D image needed a ground plane. So I used Gemini's image generator to add a ground plane to my 3D chunk render, giving the LWM a consistent horizon line for better depth estimation.
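For anyone who wants to skip the image-generation round trip, the same horizon cue can be roughed in programmatically. Here is a crude sketch with Pillow, assuming the chunk render has a transparent background and using placeholder filenames; Gemini produces a far more plausible ground than this flat band:

```python
# Composite the chunk render over a flat ground plane so the image carries
# a consistent horizon line for the LWM's depth estimation.
from PIL import Image, ImageDraw

render = Image.open("chunk_render.png").convert("RGBA")  # placeholder file
w, h = render.size

# Build a backdrop with the horizon at ~55% of the image height
canvas = Image.new("RGBA", (w, h), (210, 218, 228, 255))  # flat sky tone
horizon = int(h * 0.55)
ImageDraw.Draw(canvas).rectangle([0, horizon, w, h],
                                 fill=(120, 118, 112, 255))  # ground band

# Paste the render using its own alpha so the ground reads through around it
canvas.alpha_composite(render)
canvas.convert("RGB").save("chunk_with_ground.png")
```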

Adding A Ground Plane With Gemini

Creating a Panorama with a prompt

The LWM works from an initial prompt that first generates a panorama (which uses fewer credits). It took me a while to connect the dots between the panorama and the world model, but I finally figured out its purpose after a few tries.
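My working understanding, since World Labs doesn't expose its internals: the panorama is an equirectangular image that records the full 360-degree view from a single point, and the world model then lifts that seed into navigable geometry. The numpy sketch below ("panorama.png" is a placeholder filename, and the projection conventions are my assumptions) shows the first half of that relationship: any camera direction can be re-projected out of the equirectangular seed.

```python
# Extract a pinhole-camera view from an equirectangular panorama, which is
# why a single panorama is enough to seed free look-around at one point.
import numpy as np
from PIL import Image

pano = np.asarray(Image.open("panorama.png").convert("RGB"))
H, W = pano.shape[:2]

def perspective_view(yaw_deg, pitch_deg, fov_deg=90, size=512):
    """Re-project a perspective crop out of the equirectangular panorama."""
    f = (size / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels
    xs, ys = np.meshgrid(np.arange(size) - size / 2,
                         np.arange(size) - size / 2)
    # Unit view rays in camera space, then rotated by yaw and pitch
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    d = dirs @ (Ry @ Rx).T
    # Ray direction -> longitude/latitude -> equirectangular pixel coords
    lon = np.arctan2(d[..., 0], d[..., 2])        # -pi .. pi
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))    # -pi/2 .. pi/2
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return Image.fromarray(pano[v, u])

perspective_view(yaw_deg=45, pitch_deg=0).save("view_45.png")
```

What the panorama cannot encode is parallax: the moment you step away from the capture point, the model has to hallucinate the geometry, which is exactly where the world-model step takes over.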

Reflections

  • The Translation Gap: This experiment highlighted the friction between top-down modeling (where every coordinate is fixed in Rhino) and bottom-up inference (where a World Model "guesses" the volume based on lighting and texture).


  • Atmospheric over Geometric: The LWM prioritized atmospheric persistence and cinematic lighting over the razor-sharp geometric precision required for technical architectural representation.


  • The Material Confusion of Chrome: Highly reflective surfaces like the "chrome bubbles" created a reconstruction paradox; the AI interpreted the reflections as separate physical volumes, leading to the "ghosting" artifacts and volumetric blur seen in the final world (see the sketch after this list).


  • Spatial Intelligence vs. CAD: The project tested the limits of Neural Radiance Fields (NeRF) and Gaussian Splatting, proving that while AI can "dream" a convincing 360-degree environment, it currently lacks the semantic understanding to distinguish between a solid wall and a reflection.
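To make the chrome paradox concrete, here is a toy calculation (all distances hypothetical) of where a planar reflection ends up when a reconstruction pipeline treats it as real geometry:

```python
# Why chrome creates "ghost" volumes: an object that sits d_obj in front of
# a mirror appears, photometrically, as a virtual image the same distance
# *behind* the mirror surface. Multi-view triangulation then places real
# density there. All distances below are hypothetical.
d_mirror = 2.0   # camera -> chrome surface, metres
d_obj    = 1.5   # chrome surface -> reflected object, metres

# A planar mirror forms the virtual image as far behind the surface as the
# object is in front of it, so the "phantom" volume is reconstructed at:
d_phantom = d_mirror + d_obj
print(f"Phantom geometry triangulated at {d_phantom:.1f} m from the camera, "
      f"{d_obj:.1f} m behind the actual chrome shell")
```

The chrome shell and the phantom behind it are photometrically consistent across views, so the model has no purely geometric reason to prefer one over the other.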

Interested in working together? Let’s collaborate!


© Tarun Hari 2026
