Published On: Thu, Jun 23rd, 2022

TAVA: Template-free Animatable Volumetric Actors

Current technologies enable us to capture, reconstruct and encode increasingly realistic representations of 3D objects and humans. However, the problem of capturing dynamic scenes that can be animated in a meaningful way is not widely researched.

Example of clustering results. Image credit:  arXiv:2206.08929 [cs.CV]

A recent paper proposes TAVA, a novel approach for creating Template-free Animatable Volumetric Actors.

The researchers use coordinate-based radiance fields to capture appearance, which enables high-quality, faithful renderings. The method creates a virtual actor from multiple sparse video views and 3D poses. This information is then used to learn a canonical shape and a pose-dependent skinning function. The resulting model can be used both to render and to edit the virtual character.
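To make the idea of a coordinate-based radiance field concrete, here is a minimal toy sketch: a tiny random-weight network that maps a 3D point in canonical space to a density and an RGB colour. This is an illustrative placeholder, not the paper's architecture; real methods train much larger networks on posed multi-view images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy coordinate-based field: random weights stand in for a
# trained MLP mapping a canonical 3D point to (density, RGB).
W1 = rng.standard_normal((3, 32))
W2 = rng.standard_normal((32, 4))

def radiance_field(x_canonical):
    """Query density and colour at canonical-space points of shape (N, 3)."""
    h = np.tanh(x_canonical @ W1)              # hidden features
    out = h @ W2
    density = np.log1p(np.exp(out[:, 0]))      # softplus -> non-negative density
    rgb = 1.0 / (1.0 + np.exp(-out[:, 1:]))    # sigmoid -> colour in [0, 1]
    return density, rgb

pts = rng.standard_normal((5, 3))
density, rgb = radiance_field(pts)
print(density.shape, rgb.shape)  # (5,) (5, 3)
```

In a full pipeline, such a field is queried along camera rays and the samples are composited into pixels; animation then comes from warping query points between posed and canonical space.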

Experiments demonstrate that the proposed approach outperforms state-of-the-art methods for animating and rendering human actors. The authors also show that the method can be applied to animals.

Coordinate-based volumetric representations have the potential to generate photo-realistic virtual avatars from images. However, virtual avatars also need to be controllable even to a novel pose that may not have been observed. Traditional techniques, such as LBS, provide such a function; yet they usually require a hand-designed body template, 3D scan data, and limited appearance models. On the other hand, neural representations have been shown to be powerful in representing visual details, but are underexplored for deforming dynamic articulated actors. In this paper, we propose TAVA, a method to create Template-free Animatable Volumetric Actors, based on neural representations. We rely solely on multi-view data and a tracked skeleton to create a volumetric model of an actor, which can be animated at test time given a novel pose. Since TAVA does not require a body template, it is applicable to humans as well as other creatures such as animals. Furthermore, TAVA is designed such that it can recover accurate dense correspondences, making it amenable to content-creation and editing tasks. Through extensive experiments, we demonstrate that the proposed method generalizes well to novel poses as well as unseen views and showcase basic editing capabilities.
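The LBS mentioned in the abstract is linear blend skinning: each point is deformed by a weighted combination of per-bone rigid transforms. The sketch below shows that standard formulation on illustrative placeholder bones and weights; it is not TAVA's learned skinning function, which replaces the hand-designed weights with ones the network discovers.

```python
import numpy as np

def lbs(points, bone_transforms, weights):
    """Linear blend skinning.
    points: (N, 3) canonical-space points
    bone_transforms: (B, 4, 4) rigid transforms, one per bone
    weights: (N, B) skinning weights, each row summing to 1
    """
    n = points.shape[0]
    homo = np.concatenate([points, np.ones((n, 1))], axis=1)     # (N, 4)
    per_bone = np.einsum('bij,nj->nbi', bone_transforms, homo)   # (N, B, 4)
    blended = np.einsum('nb,nbi->ni', weights, per_bone)         # (N, 4)
    return blended[:, :3]

# Two illustrative bones: identity, and a translation by (1, 0, 0).
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
pts = np.zeros((1, 3))
w = np.array([[0.5, 0.5]])       # point influenced equally by both bones
print(lbs(pts, T, w))            # a point at the origin moves halfway: (0.5, 0, 0)
```

Because the blended transform is a convex combination, points influenced by several bones deform smoothly; the well-known cost is that hand-designed weights require a body template, which is exactly the dependency TAVA removes.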

Research article: Li, R., “TAVA: Template-free Animatable Volumetric Actors”, 2022. Link to the article:
Project website:
