3D Floorplan Reconstruction

The goal of this project was to develop a solution for automatically creating 3D floor plans and virtual tours: transforming raw architectural blueprints into realistic, accurate 3D representations that can be explored in any browser or with a VR headset, providing a valuable tool for real estate and architectural visualization.

Dataset

We curated a dataset of over 50,000 annotated floorplans, focusing on key architectural elements like walls, windows, doors, and dimension markers.

To accelerate annotation, we leveraged the Segment Anything Model (SAM) to generate preliminary segmentation masks, which reduced manual labeling time by 40%.
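As a rough illustration of the pre-annotation step, SAM's automatic mask generator returns one dictionary per mask with quality fields such as `area`, `stability_score`, and `predicted_iou`, which can be used to filter masks before human review. The thresholds below are hypothetical, not the production values:

```python
def filter_sam_masks(masks, min_area=500, min_stability=0.9, min_iou=0.85):
    """Keep only SAM masks that are large and confident enough to serve
    as preliminary annotations for human review.

    masks: list of dicts as produced by SAM's SamAutomaticMaskGenerator,
    each carrying 'area', 'stability_score', and 'predicted_iou' fields.
    """
    keep = [
        m for m in masks
        if m["area"] >= min_area
        and m["stability_score"] >= min_stability
        and m["predicted_iou"] >= min_iou
    ]
    # Largest masks first, so annotators see structural elements early.
    return sorted(keep, key=lambda m: m["area"], reverse=True)
```

Surviving masks are then pushed into the labeling tool as editable proposals rather than drawn from scratch.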

Final annotations included pixel-level masks for structural elements and furniture placement zones, validated through a hybrid pipeline combining Labelbox tools and custom quality checks.


Floorplan Recognition

Our core segmentation stack used a U-Net architecture with a ResNet-50 backbone, optimized for architectural drawings through domain-specific augmentations like line thickness variations and simulated scan artifacts.
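A line-thickness augmentation can be sketched as a morphological dilation on the binarized drawing; this 4-neighbour NumPy version is a minimal illustration, not the production implementation:

```python
import numpy as np

def vary_line_thickness(binary_img, grow=1):
    """Simulate thicker pen strokes by dilating line pixels `grow` steps.

    binary_img: 2-D array where 1 marks a line pixel and 0 background.
    """
    out = binary_img.copy()
    for _ in range(grow):
        padded = np.pad(out, 1)
        # A pixel becomes a line pixel if it or any 4-neighbour is one.
        out = (padded[1:-1, 1:-1] | padded[:-2, 1:-1] | padded[2:, 1:-1]
               | padded[1:-1, :-2] | padded[1:-1, 2:]).astype(out.dtype)
    return out
```

Applied with a randomly sampled `grow` (and its erosion counterpart for thinner strokes), this makes the model robust to pens, plotters, and scan resolutions it never saw verbatim.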

The model achieved 94.2% mIoU on our test set by leveraging multi-scale feature fusion and a hybrid loss combining Dice and boundary-aware losses. For challenging elements like curved walls, we integrated Mask R-CNN as a refinement stage to capture fine geometric details.
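The hybrid loss can be illustrated in NumPy. The boundary term below uses a crude neighbour-difference edge map as a stand-in for a full boundary-aware loss, and the weighting is illustrative:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps (0 = perfect overlap)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_map(mask):
    """1 where a pixel differs from its right or lower neighbour."""
    b = np.zeros(mask.shape, dtype=float)
    b[:, :-1] = np.maximum(b[:, :-1], np.abs(mask[:, :-1] - mask[:, 1:]))
    b[:-1, :] = np.maximum(b[:-1, :], np.abs(mask[:-1, :] - mask[1:, :]))
    return b

def hybrid_loss(pred, target, w_boundary=0.5):
    """Dice on full masks plus Dice restricted to boundary maps,
    so thin wall edges are not drowned out by large interior regions."""
    return (dice_loss(pred, target)
            + w_boundary * dice_loss(boundary_map(pred), boundary_map(target)))
```

The boundary term matters for floorplans because walls are thin: a prediction can score a high plain Dice while still misplacing every wall edge by a pixel or two.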

Object Detection

Element localization used EfficientDet-D2 trained with focal loss to handle class imbalance between large structural components and small decorative elements.
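Focal loss down-weights well-classified examples so that rare, small elements contribute more to the gradient. A binary NumPy sketch, with `gamma` and `alpha` at their common defaults rather than necessarily the values we trained with:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: (1 - p_t)^gamma scales down easy examples.

    p: predicted probabilities of the positive class; y: labels in {0, 1}.
    """
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)        # probability of the true class
    a = np.where(y == 1, alpha, 1.0 - alpha) # class-balance weight
    return float(np.mean(-a * (1.0 - pt) ** gamma * np.log(pt)))
```

A confidently correct prediction (p_t near 1) contributes almost nothing, while borderline detections of small decorative elements keep a meaningful gradient.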

We introduced rotational bounding boxes to better represent angled architectural features, improving window/door detection accuracy by 18% compared to standard axis-aligned approaches.
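A rotated box is typically parameterized as centre, size, and angle; converting it to corner points (needed for IoU computation or rendering) is a small rotation-matrix exercise:

```python
import math

def rotated_box_corners(cx, cy, w, h, theta):
    """Corner coordinates of a w-by-h box rotated by `theta` radians
    counter-clockwise about its centre (cx, cy)."""
    c, s = math.cos(theta), math.sin(theta)
    # Corners in the box's local frame, counter-clockwise from bottom-left.
    local = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + c * x - s * y, cy + s * x + c * y) for x, y in local]
```

For angled features like bay windows, the rotated box hugs the element tightly, whereas an axis-aligned box includes large empty corners that dilute the IoU signal.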


Spatial Coordinate Transformation

Converting 2D layouts to 3D required solving perspective distortions and inconsistent scaling.

We developed a hybrid approach using homography estimation for planar alignment and RANSAC-based outlier rejection to handle drawing artifacts.
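The approach can be sketched end to end in NumPy: a direct-linear-transform (DLT) homography fit plus a minimal RANSAC loop that rejects outlier correspondences. Thresholds and iteration counts here are illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: least-squares homography from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # null-space vector = flattened homography
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to Nx2 points (homogeneous divide)."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    """Fit on random 4-point samples; keep the model with most inliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_H, best_in = None, np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_in.sum():
            best_H, best_in = H, inliers
    if best_in.sum() >= 4:
        # Refit on the full inlier set for a more stable estimate.
        best_H = fit_homography(src[best_in], dst[best_in])
    return best_H, best_in
```

The inlier mask is exactly what handles drawing artifacts: smudges and stray marks produce correspondences that no plausible homography explains, so they are simply voted out.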

Scale inference combined explicit dimension markers with learned priors about standard room sizes, achieving ±3% dimensional accuracy compared to ground-truth LIDAR scans.
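The scale fusion can be illustrated as a weighted average of per-marker pixel-to-metre estimates and a prior derived from standard room sizes. The weighting scheme and parameter names are hypothetical:

```python
def infer_scale(marker_scales, prior_scale, prior_weight=0.2):
    """Blend explicit dimension-marker estimates with a learned prior.

    marker_scales: pixel-to-metre ratios read from dimension markers;
    prior_scale: ratio implied by typical room sizes for this layout.
    Falls back entirely to the prior when no markers were detected.
    """
    if not marker_scales:
        return prior_scale
    marker_estimate = sum(marker_scales) / len(marker_scales)
    return (1.0 - prior_weight) * marker_estimate + prior_weight * prior_scale
```

In practice the prior mostly matters for sparse or hand-drawn plans, where dimension markers are missing or unreliable.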


Automatic Furnishing

The furnishing system combined rule-based layout principles with learned style preferences. Using Graph Neural Networks, we modeled spatial relationships between furniture items and room dimensions.
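The rule-based side can be illustrated with a simple layout validity check: every item must lie inside the room and keep a clearance margin from its neighbours. Axis-aligned rectangles and the `clearance` parameter are simplifying assumptions for this sketch:

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test for (x, y, width, height) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def valid_layout(room, furniture, clearance=0.0):
    """True if every furniture rect fits inside `room` and no two items
    come closer than `clearance` (e.g. walking space between pieces)."""
    rx, ry, rw, rh = room
    for i, (x, y, w, h) in enumerate(furniture):
        if x < rx or y < ry or x + w > rx + rw or y + h > ry + rh:
            return False  # item sticks out of the room
        # Inflate this item by the clearance margin before overlap tests.
        inflated = (x - clearance, y - clearance,
                    w + 2 * clearance, h + 2 * clearance)
        for other in furniture[i + 1:]:
            if rects_overlap(inflated, other):
                return False
    return True
```

Candidate layouts proposed by the learned model are accepted or rejected against constraints of this kind before rendering.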

The system generated 3D renders in Unreal Engine 5 using procedural material generation for realistic textures.


System Integration

The final pipeline processed floorplans in under 90 seconds per 100 m², leveraging ONNX Runtime for model inference and Open3D for 3D reconstruction.

VR exports used WebXR for browser compatibility, while high-fidelity renders utilized Unreal Engine’s Nanite geometry system.

The solution reduced 3D modeling costs by 70% for real estate clients compared to manual workflows.