Textures are a vital aspect of creating visually appealing and realistic 3D models. In this paper, we study the problem of generating high-fidelity textures given the shapes of 3D assets, which has been relatively less explored compared with generic 3D shape modeling. Our goal is to facilitate a controllable texture generation process, such that one texture code can correspond to a particular appearance style independent of any input shape from the category. We introduce Texture UV Radiance Fields (TUVF), which generate textures in a learnable UV sphere space rather than directly on the 3D shape. This disentangles the texture from the underlying shape and makes it transferable to other shapes that share the same UV space, i.e., shapes from the same category. We integrate the UV sphere space with the radiance field, which provides a more efficient and accurate representation of textures than traditional texture maps. On real-world object datasets, our method not only achieves realistic synthesis but also substantially outperforms state-of-the-art methods on texture control and editing.
Our Canonical Surface Auto-encoder learns smooth and dense correspondences on the surface. The color map indicates the correspondence between each instance and the UV sphere.
Texture synthesis results compared with state-of-the-art methods. Our method synthesizes more realistic and diverse textures.
TUVF is a novel texture representation that is disentangled from the underlying geometry of objects. This disentanglement allows TUVF to transfer textures across diverse 3D shapes while maintaining consistency in the appearance of the resulting textured objects.
We can directly edit a given texture (e.g., adding a heart and an arrow) and transfer the edited texture to different 3D shapes through dense correspondence.
We perform two-stage training: (i) We first train the Canonical Surface Auto-encoder, which learns decoders \(f_\theta\) and \(g_\theta\) that predict the coordinates and normals of each point on the UV sphere given an encoded shape. (ii) We then train the Texture Feature Generator \(h_\theta\), which outputs a textured UV map. With the outputs of \(f_\theta\), \(g_\theta\), and \(h_\theta\), we construct a Texture UV Radiance Field and render an RGB image as the output. Generative adversarial training provides the supervision for learning \(h_\theta\).
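For concreteness, below is a minimal PyTorch sketch of the two decoders \(f_\theta\), \(g_\theta\) and the texture feature generator \(h_\theta\) described above. The module architectures, feature dimensions, and the transfer snippet at the end are illustrative assumptions rather than the released implementation; the radiance-field rendering and the adversarial training loop are omitted.

```python
# Minimal sketch of the TUVF components (assumed architectures and dimensions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CanonicalSurfaceAutoencoder(nn.Module):
    """Stage (i): decoders f_theta and g_theta map points on the canonical
    UV sphere, conditioned on a shape code, to surface coordinates and normals."""
    def __init__(self, shape_dim=256, hidden=256):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 3))   # f_theta -> coordinates
        self.g = nn.Sequential(nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 3))   # g_theta -> normals

    def forward(self, uv_points, shape_code):
        # uv_points: (N, 3) samples on the unit sphere; shape_code: (D,)
        z = shape_code.expand(uv_points.shape[0], -1)
        x = torch.cat([uv_points, z], dim=-1)
        coords = self.f(x)
        normals = F.normalize(self.g(x), dim=-1)
        return coords, normals


class TextureFeatureGenerator(nn.Module):
    """Stage (ii): h_theta maps a texture code to texture features anchored
    at each point of the UV sphere."""
    def __init__(self, tex_dim=64, feat_dim=32, hidden=256):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(3 + tex_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, feat_dim))

    def forward(self, uv_points, texture_code):
        z = texture_code.expand(uv_points.shape[0], -1)
        return self.h(torch.cat([uv_points, z], dim=-1))   # (N, feat_dim)


# Because textures live on the shared UV sphere, one texture code can be
# rendered on different shape codes of the same category.
surface = CanonicalSurfaceAutoencoder()
texture = TextureFeatureGenerator()
uv = F.normalize(torch.randn(4096, 3), dim=-1)          # canonical sphere samples
shape_a, shape_b = torch.randn(256), torch.randn(256)   # two shape codes
tex_code = torch.randn(64)                               # one texture code
feats = texture(uv, tex_code)                            # anchored texture features
pts_a, normals_a = surface(uv, shape_a)                  # texture on shape A
pts_b, normals_b = surface(uv, shape_b)                  # same texture on shape B
```

In the full pipeline, the Texture UV Radiance Field aggregates the anchored features along camera rays and decodes them to RGB, and a discriminator on the rendered images supervises \(h_\theta\); those steps are not shown in this sketch.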
@inproceedings{cheng2023tuvf,
  author    = {Cheng, An-Chieh and Li, Xueting and Liu, Sifei and Wang, Xiaolong},
  title     = {TUVF: Learning Generalizable Texture UV Radiance Fields},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
}