TUVF: Learning Generalizable Texture UV Radiance Fields

An-Chieh Cheng
UC San Diego
Xueting Li
NVIDIA
Sifei Liu
NVIDIA
Xiaolong Wang
UC San Diego
equal advising

ICLR 2024

Paper (49.5MB) arXiv Code Video
We propose Texture UV Radiance Fields (TUVF), a category-level texture representation disentangled from 3D shapes. Our method trains from only a collection of real-world images and a set of untextured shapes. Given a 3D shape, TUVF can synthesize realistic, high-fidelity, diverse, and 3D-consistent textures.

Abstract

Textures are a vital aspect of creating visually appealing and realistic 3D models. In this paper, we study the problem of generating high-fidelity texture given shapes of 3D assets, which has been relatively less explored compared with generic 3D shape modeling. Our goal is to facilitate a controllable texture generation process, such that one texture code can correspond to a particular appearance style independent of any input shapes from a category. We introduce Texture UV Radiance Fields (TUVF) that generate textures in a learnable UV sphere space rather than directly on the 3D shape. This allows the texture to be disentangled from the underlying shape and transferable to other shapes that share the same UV space, i.e., from the same category. We integrate the UV sphere space with the radiance field, which provides a more efficient and accurate representation of textures than traditional texture maps. We perform our experiments on real-world object datasets, where we achieve not only realistic synthesis but also substantial improvements over the state of the art in texture control and editing.

Video


Canonical Surface Auto-encoder

Our Canonical Surface Auto-encoder learns smooth, dense correspondences across object surfaces. The color map indicates the correspondence between each instance and the UV sphere.
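For concreteness, here is a minimal PyTorch-style sketch of the auto-encoder interface described above: an encoder summarizes an untextured shape into a latent code, and the decoders \(f_\theta\) and \(g_\theta\) map points on the shared UV sphere to surface coordinates and normals. All module names and layer sizes below are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class CanonicalSurfaceAutoEncoder(nn.Module):
    def __init__(self, shape_dim=256, hidden=256):
        super().__init__()
        # Shape encoder: a point-cloud encoder would be used in practice;
        # a per-point MLP with max pooling stands in for it here.
        self.point_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                       nn.Linear(hidden, shape_dim))
        # f_theta: UV-sphere point + shape code -> surface coordinate
        self.f = nn.Sequential(nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, 3))
        # g_theta: UV-sphere point + shape code -> surface normal
        self.g = nn.Sequential(nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 3))

    def forward(self, shape_points, uv_points):
        # shape_points: (B, N, 3) samples from an untextured shape
        # uv_points:    (B, M, 3) points on the unit UV sphere
        z = self.point_mlp(shape_points).max(dim=1).values       # (B, shape_dim)
        z = z.unsqueeze(1).expand(-1, uv_points.shape[1], -1)    # (B, M, shape_dim)
        h = torch.cat([uv_points, z], dim=-1)
        coords = self.f(h)                                       # predicted surface xyz
        normals = nn.functional.normalize(self.g(h), dim=-1)
        return coords, normals

Because every instance is decoded from the same UV sphere, points that share UV coordinates correspond across instances, which is what the color map visualizes.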


Realistic Texture Synthesis

Texture synthesis results compared with state-of-the-art methods. Our method synthesizes more realistic and diverse textures.
Comparison viewer (cars and chairs), columns: Texturify, EpiGRAF, Ours.

Disentangled Texture Transfer

TUVF is a novel texture representation that is disentangled from the underlying geometry of objects. This disentanglement allows TUVF to transfer textures across diverse 3D shapes while preserving a consistent appearance of the resulting textured objects.
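A small sketch of how such a transfer could look in code, assuming the hypothetical components used on this page (a texture feature generator, the canonical surface auto-encoder, and a renderer, all passed in as callables): the texture features are computed once on the shared UV sphere and reused for every shape.

def transfer_texture(z_texture, shapes, uv_points,
                     texture_gen, auto_encoder, render, camera):
    """Apply one texture code to several untextured shapes of the same category.

    texture_gen, auto_encoder, render, and camera are placeholders for the
    corresponding components; their signatures here are assumptions.
    """
    tex_feat = texture_gen(z_texture, uv_points)   # computed once, shape-agnostic
    images = []
    for shape_points in shapes:
        coords, normals = auto_encoder(shape_points, uv_points)   # per-shape surface
        images.append(render(coords, normals, tex_feat, camera))  # same texture, new shape
    return images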


Texture Editing and Transfer

We can perform direct edits on a given texture (adding a heart and an arrow) and transfer the edited texture to different 3D shapes through dense correspondence.


Overview of TUVF

We perform two-stage training: (i) We first train the Canonical Surface Auto-encoder, which learns decoders \(f_\theta\) and \(g_\theta\) that predict the coordinates and normals of each point on the UV sphere, given an encoded shape. (ii) We then train the Texture Feature Generator \(h_\theta\), which outputs a textured UV map. With the outputs from \(f_\theta\), \(g_\theta\), and \(h_\theta\), we construct a Texture UV Radiance Field and render an RGB image. Generative adversarial training provides the supervision for learning \(h_\theta\).
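The rendering step can be illustrated with a short sketch: texture features produced by \(h_\theta\) are anchored at the surface points predicted by \(f_\theta\), and a camera ray aggregates nearby anchor features before standard volume rendering. The nearest-neighbor lookup, the softmax weighting, and the decoder signature below are assumptions for illustration, not the paper's exact renderer.

import torch

def render_ray(ray_o, ray_d, surface_xyz, surface_feat, decoder,
               n_samples=64, near=0.5, far=2.5, k=8):
    # Sample points along the ray.
    t = torch.linspace(near, far, n_samples)                   # (S,)
    pts = ray_o + t[:, None] * ray_d                           # (S, 3)

    # Gather features from the k nearest anchored surface points
    # (positions from f_theta, features from h_theta).
    d = torch.cdist(pts, surface_xyz)                          # (S, P)
    knn_d, knn_idx = d.topk(k, dim=-1, largest=False)          # (S, k)
    w = torch.softmax(-knn_d, dim=-1)                          # distance-based weights
    feat = (w[..., None] * surface_feat[knn_idx]).sum(dim=1)   # (S, C)

    # Decode aggregated features to density and color (hypothetical decoder).
    sigma, rgb = decoder(pts, feat)                            # (S,), (S, 3)

    # Standard volume rendering along the ray.
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)                    # (S,)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)                 # pixel color (3,)

Rendered images of this kind are compared against real photographs by a discriminator, which is what supplies the adversarial supervision for \(h_\theta\).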


Citation

@inproceedings{cheng2023tuvf,
  author    = {Cheng, An-Chieh and Li, Xueting and Liu, Sifei and Wang, Xiaolong},
  title     = {{TUVF}: Learning Generalizable Texture UV Radiance Fields},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
}

Acknowledgement

Webpage adapted from DreamFusion and SunStage.