
From None to All:
Self-Supervised 3D Reconstruction via Novel View Synthesis

Ranran Huang · Weixun Luo · Ye Mao · Krystian Mikolajczyk


NAS3R is a self-supervised feed-forward framework that jointly learns explicit 3D geometry and camera parameters with no ground-truth annotations and no pretrained priors.

Table of Contents
  1. Installation
  2. Pre-trained Checkpoints
  3. Datasets
  4. Running the Code
  5. Camera Conventions
  6. Acknowledgements
  7. Citation

Installation

  1. Clone NAS3R.
git clone --recurse-submodules git@github.com:ranrhuang/NAS3R.git
cd NAS3R
  2. Create the environment. Here we show an example using conda.
conda create -n nas3r python=3.11 -y
conda activate nas3r
pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
pip install -e submodules/diff-gaussian-rasterization

Pre-trained Checkpoints

Our models are hosted on Hugging Face 🤗

Model name        Training resolution  Training data  Training settings
re10k_nas3r.ckpt  256x256              RE10K          2 views

We assume the downloaded weights are located in the checkpoints directory.

Datasets

Please refer to DATASETS.md for dataset preparation.

Running the Code

Training

# 2-view training of NAS3R (VGGT-based architecture)
python -m src.main +experiment=nas3r/random/re10k wandb.mode=online wandb.name=nas3r_re10k


# Initialize from pretrained VGGT weights for better performance and stability.
python -m src.main +experiment=nas3r/pretrained/re10k wandb.mode=online wandb.name=nas3r_re10k_pretrained

Evaluation

Novel View Synthesis and Pose Estimation on NAS3R (VGGT-based architecture)

# RealEstate10K on NAS3R
python -m src.main +experiment=nas3r/random/re10k mode=test wandb.name=re10k \
    dataset/view_sampler@dataset.re10k.view_sampler=evaluation \
    dataset.re10k.view_sampler.index_path=assets/evaluation_index_re10k.json \
    checkpointing.load=./checkpoints/re10k_nas3r.ckpt \
    test.save_image=false 

Camera Conventions

We follow the pixelSplat camera system. Camera intrinsic matrices are normalized: the first row is divided by the image width, and the second row by the image height. Camera extrinsic matrices are OpenCV-style camera-to-world matrices (+X right, +Y down, +Z pointing into the screen).
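These conventions can be sketched as follows. This is an illustrative snippet, not part of the NAS3R codebase; the helper names and the example focal length are our own.

```python
import numpy as np

def normalize_intrinsics(K, width, height):
    """pixelSplat-style normalization: divide the first row of the 3x3
    intrinsic matrix by the image width and the second row by the height."""
    K = np.asarray(K, dtype=np.float64).copy()
    K[0] /= width   # fx, skew, cx -> expressed in units of image width
    K[1] /= height  # fy, cy      -> expressed in units of image height
    return K

def camera_to_world(R, t):
    """Assemble a 4x4 OpenCV-style camera-to-world matrix. The columns of R
    are the camera's +X (right), +Y (down), +Z (forward, into the screen)
    axes in world coordinates; t is the camera center in world coordinates."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Example: a 256x256 image with a 300 px focal length and centered principal point.
K = np.array([[300.0,   0.0, 128.0],
              [  0.0, 300.0, 128.0],
              [  0.0,   0.0,   1.0]])
K_norm = normalize_intrinsics(K, 256, 256)  # fx becomes 300/256, cx becomes 0.5
```

After normalization, the intrinsics are resolution-independent, which is why a single checkpoint can be described by its training resolution alone.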

Acknowledgements

This project is built upon these excellent repositories: SPFSplatV2, SPFSplat, NoPoSplat, pixelSplat, DUSt3R, and CroCo. We thank the original authors for their work.

Citation

@article{huang2026nas3r,
  title={From None to All: Self-Supervised 3D Reconstruction via Novel View Synthesis},
  author={Ranran Huang and Weixun Luo and Ye Mao and Krystian Mikolajczyk},
  journal={arXiv preprint arXiv:2603.27455},
  year={2026}
}

About

[CVPR 2026] From None to All: Self-Supervised 3D Reconstruction via Novel View Synthesis
