This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data
Citations: 57
Authors: 3
Year: 2023
Abstract
We present Viewset Diffusion, a diffusion-based generator that outputs 3D objects while using only multi-view 2D data for supervision. We note that there exists a one-to-one mapping between viewsets, i.e., collections of several 2D views of an object, and 3D models. Hence, we train a diffusion model to generate viewsets, but design the neural network generator to internally reconstruct the corresponding 3D models, thus generating those too. We fit a diffusion model to a large number of viewsets for a given category of objects. The resulting generator can be conditioned on zero, one or more input views. Conditioned on a single view, it performs 3D reconstruction while accounting for the ambiguity of the task, allowing multiple solutions compatible with the input to be sampled. The model performs reconstruction efficiently, in a feed-forward manner, and is trained with rendering losses alone, using as few as three views per viewset. Project page: szymanowiczs.github.io/viewset-diffusion.
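As a rough illustration of the training scheme the abstract describes, below is a minimal PyTorch sketch of one viewset-diffusion training step. Everything here is an illustrative assumption rather than the authors' implementation: the names (ViewsetDenoiser, render_views, training_step), the toy voxel grid standing in for the 3D representation, and the trivial orthographic renderer. The key pattern it shows is the one stated in the abstract: the viewset is noised per the diffusion forward process, the denoiser maps it to an internal 3D model, the viewset is re-rendered from that model, and only a rendering loss supervises training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewsetDenoiser(nn.Module):
    """Maps a noisy viewset (B, N, 3, H, W) to an internal 3D model,
    here a dense voxel grid (RGB + density) as a stand-in radiance field."""
    def __init__(self, n_views=3, img_size=32, grid=16):
        super().__init__()
        self.grid = grid
        in_dim = n_views * 3 * img_size * img_size
        self.net = nn.Sequential(
            nn.Flatten(start_dim=1),
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, grid ** 3 * 4),  # 4 channels per voxel: RGB + density
        )

    def forward(self, noisy_views, t):
        # (timestep conditioning omitted for brevity in this sketch)
        feats = self.net(noisy_views)
        return feats.view(-1, 4, self.grid, self.grid, self.grid)

def render_views(voxels, n_views, img_size=32):
    """Toy differentiable renderer: orthographic mean along one grid axis,
    repeated per view. A real system would ray-march the field per camera."""
    rgb = voxels[:, :3].mean(dim=2)                        # (B, 3, G, G)
    img = F.interpolate(rgb, size=(img_size, img_size))    # (B, 3, H, W)
    return img.unsqueeze(1).expand(-1, n_views, -1, -1, -1)

def training_step(model, clean_views, alphas_cumprod):
    """One denoising step: noise the viewset, reconstruct a 3D model,
    re-render the viewset, and supervise with a rendering (L2) loss only."""
    B, N = clean_views.shape[:2]
    t = torch.randint(0, len(alphas_cumprod), (B,))
    a = alphas_cumprod[t].view(B, 1, 1, 1, 1)
    noise = torch.randn_like(clean_views)
    noisy = a.sqrt() * clean_views + (1 - a).sqrt() * noise  # DDPM forward
    voxels = model(noisy, t)                                 # internal 3D model
    rendered = render_views(voxels, N)
    return ((rendered - clean_views) ** 2).mean()            # rendering loss

model = ViewsetDenoiser()
alphas = torch.linspace(0.9999, 0.98, 1000).cumprod(dim=0)
views = torch.rand(2, 3, 3, 32, 32)  # batch of 2 viewsets, 3 views each
loss = training_step(model, views, alphas)
loss.backward()
```

Conditioning on zero, one, or more input views would, in this framing, amount to keeping the conditioning views clean while noising only the remaining ones, so the same denoiser covers unconditional generation and ambiguity-aware single-view reconstruction.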
Related Works
Deep learning
2015 · 80,232 citations
Learning Multiple Layers of Features from Tiny Images
2009 · 25,470 citations
GAN (Generative Adversarial Nets)
2014 · 21,794 citations
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
2017 · 21,688 citations
SSD: Single Shot MultiBox Detector
2016 · 20,601 citations