Lukas Koestler¹\*, Daniel Grittner¹\*, Michael Moeller², Daniel Cremers¹, Zorah Lähner²
\* Equal contribution
¹ Technical University of Munich
² University of Siegen
European Conference on Computer Vision (ECCV) 2022, Tel Aviv, Israel
(a) Overview of our method. We use the eigenfunctions of the Laplace-Beltrami operator (LBO) at each point as a point embedding. This overcomes the spectral bias of the multilayer perceptron (MLP), and hence the combined intrinsic neural field can represent a high-frequency function on the surface. Note that the point can lie inside a triangle, and the function is clearly more detailed than the discretization (insets). (b) An intrinsic neural texture field trained on one shape (top) can be transferred to a new shape (bottom) without retraining. (c) Due to our intrinsic approach (LBO eigenfunctions), local geometry is maintained in close but separate parts, whereas an extrinsic approach (Random Fourier Features) shows bleeding artifacts when trained with sparse supervision.
Neural fields have gained significant attention in the computer vision community due to their excellent performance in novel view synthesis, geometry reconstruction, and generative modeling. Some of their advantages are a sound theoretical foundation and an easy implementation in current deep learning frameworks. While neural fields have been applied to signals on manifolds, e.g., for texture reconstruction, their representation has been limited to extrinsically embedding the shape into Euclidean space. The extrinsic embedding ignores known intrinsic manifold properties and is inflexible with respect to transferring the learned function. To overcome these limitations, this work introduces intrinsic neural fields, a novel and versatile representation for neural fields on manifolds. Intrinsic neural fields combine the advantages of neural fields with the spectral properties of the Laplace-Beltrami operator. We show theoretically that intrinsic neural fields inherit many desirable properties of the extrinsic neural field framework, but exhibit additional intrinsic qualities, such as isometry invariance. In experiments, we show that intrinsic neural fields can reconstruct high-fidelity textures from images with state-of-the-art quality and are robust to the discretization of the underlying manifold. We demonstrate the versatility of intrinsic neural fields by tackling various applications: texture transfer between deformed and different shapes, texture reconstruction from real-world images with view dependence, and discretization-agnostic learning on meshes and point clouds.
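To make the idea concrete, the following PyTorch sketch shows the core building block: an MLP that takes the first k LBO eigenfunction values at a surface point as input instead of its Euclidean coordinates. This is a minimal illustration, not the model used in this repository; the number of eigenfunctions, layer sizes, and output dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IntrinsicNeuralField(nn.Module):
    """Minimal sketch: an MLP on top of LBO eigenfunction embeddings.

    phi contains the first k Laplace-Beltrami eigenfunctions evaluated at a
    surface point; for points inside a triangle, the per-vertex eigenfunction
    values can be interpolated barycentrically. All sizes are illustrative.
    """

    def __init__(self, num_eigenfunctions=64, hidden_dim=128, out_dim=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_eigenfunctions, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),  # e.g., RGB texture values
        )

    def forward(self, phi):  # phi: (batch, num_eigenfunctions)
        return self.mlp(phi)
```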
- 📣 Release main code and data
- Release local triangulation code as a Python package for ray-point-cloud intersection. Has been merged into nmwsharp/potpourri3d.
- Publish BigBIRD configs, data, and scripts. Pushed to the bigbird branch; will be merged into master after additional checks.
The following papers explore ideas similar to the ones presented in our work and will be referenced in an updated version of our arXiv paper. If another relevant paper is missing from the list, please reach out to us and we will add it. Papers are ordered by their initial publication date on arXiv. We believe that the variety of papers shows the potential of the underlying idea across a range of applications.
- Sign and Basis Invariant Networks for Spectral Graph Representation Learning
- Generalised Implicit Neural Representations
- Δ-PINNs: physics-informed neural networks on complex geometries
The data for the experiments can be downloaded by running the following command:
./download_data.sh
Run the following for creating a conda environment and installing the required dependencies:
conda create --name intrinsic-neural-fields python=3.6 -y
conda activate intrinsic-neural-fields
# Dependencies
pip3 install --ignore-installed certifi "trimesh[all]"
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch -y
pip3 install numpy==1.18.0
conda install -c conda-forge igl pyembree tqdm torchinfo==1.5.4 imageio tensorboardx opencv pyyaml -y
conda install scikit-image matplotlib -y
pip3 install lpips crc32c robust_laplacian
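A quick, optional sanity check that the environment was created correctly; it imports the main dependencies installed above and reports whether CUDA is visible:

```python
# Optional sanity check for the conda environment created above.
import torch
import igl, trimesh, robust_laplacian, lpips, cv2  # noqa: F401

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```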
- Preprocess eigenfunctions (see the sketch below):
python preprocess_eigenfunctions.py ... # See preprocess_eigenfunctions.py for the arguments
- Preprocess the training splits (train, val, test):
python preprocess_dataset.py ... # See preprocess_dataset.py for the arguments
Recommended: For convenience, you can also use the prepared preprocessing scripts:
./preprocessing_scripts/<SCRIPT_NAME>
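As referenced above, the eigenfunction preprocessing boils down to a generalized eigenvalue problem for the Laplacian. The following sketch uses the robust_laplacian and SciPy packages installed above; the file name and the number of eigenfunctions are illustrative, and the actual pipeline lives in preprocess_eigenfunctions.py.

```python
# Minimal sketch of LBO eigenfunction computation (see
# preprocess_eigenfunctions.py for the actual preprocessing).
import numpy as np
import robust_laplacian
import scipy.sparse.linalg
import trimesh

mesh = trimesh.load("shape.obj")  # illustrative file name
L, M = robust_laplacian.mesh_laplacian(
    np.asarray(mesh.vertices), np.asarray(mesh.faces))

k = 64  # number of eigenfunctions; illustrative
# Smallest eigenpairs of L phi = lambda * M phi (shift-invert for stability).
eigenvalues, eigenfunctions = scipy.sparse.linalg.eigsh(L, k, M, sigma=1e-8)
# eigenfunctions[:, i] is the i-th eigenfunction sampled at the vertices.
```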
After the preprocessing, run the following to train a model:
python train.py <path-to-your-config> --allow_checkpoint_loading
After the training, you can run the evaluation:
python eval.py ... # See eval.py for the arguments
Recommended: For convenience, you can also use the prepared training scripts:
./training_scripts/<SCRIPT_NAME>
Note that most scripts require the method as an input parameter; review the scripts for the possible input parameters.
- Render a view from a given camera position (see the sketch below):
python render_view.py ... # See render_view.py for the arguments
- Bake the learned texture into a UV map for exploration:
python bake_texture_field.py ... # See bake_texture_field.py for the arguments
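Both commands need to evaluate the learned field at arbitrary surface points. The sketch below shows that query under the assumptions of the earlier sketches (a trained model mapping eigenfunction values to RGB, and per-vertex eigenfunctions): cast rays with trimesh, compute barycentric coordinates at the hit points, interpolate the eigenfunctions, and run the MLP. See render_view.py for the actual implementation.

```python
# Minimal sketch: evaluate an intrinsic neural field at ray-mesh hits.
import numpy as np
import torch
import trimesh

def query_field(model, mesh, eigenfunctions, origins, directions):
    # Ray casting; trimesh uses pyembree (installed above) if available.
    locations, ray_idx, tri_idx = mesh.ray.intersects_location(
        origins, directions, multiple_hits=False)
    # Barycentric coordinates of each hit point within its triangle.
    bary = trimesh.triangles.points_to_barycentric(
        mesh.triangles[tri_idx], locations)
    # Interpolate the per-vertex eigenfunctions to the hit points.
    phi = np.einsum("nj,njk->nk", bary, eigenfunctions[mesh.faces[tri_idx]])
    with torch.no_grad():
        rgb = model(torch.from_numpy(phi).float())
    return rgb, ray_idx
```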
Custom datasets can be used with the existing code if they have the following folder structure:
dataset
|_ train.lst (Defines which views are preprocessed for training)
|_ test.lst (Defines which views are preprocessed for testing)
|_ val.lst (Defines which views are preprocessed for validation)
|_ <view-name> (Represents a view)
   |_ depth
      |_ cameras.npz (camera parameters)
      |_ depth_0000.exr (depth map)
   |_ image
      |_ 000.png (GT view)
      |_ ... (the rest does not matter)
|_ ...
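When adapting your own data, it can help to inspect a single view first. The sketch below uses illustrative paths and simply lists the keys stored in cameras.npz, since the exact key names depend on the dataset:

```python
# Inspect one view of a custom dataset (paths are illustrative).
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # enable .exr before importing cv2

import cv2
import numpy as np

view_dir = "dataset/view_000"  # illustrative view name
cameras = np.load(os.path.join(view_dir, "depth", "cameras.npz"))
print("camera keys:", cameras.files)  # key names depend on the dataset

depth = cv2.imread(os.path.join(view_dir, "depth", "depth_0000.exr"),
                   cv2.IMREAD_UNCHANGED)
image = cv2.imread(os.path.join(view_dir, "image", "000.png"))
print("depth:", depth.shape, "image:", image.shape)
```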
The code is provided under a BSD 3-clause license. See the LICENSE file for details. Note also the different licenses of the third-party submodules.