A growing curated list of Text-to-3D and Diffusion-to-3D works. Heavily inspired by awesome-NeRF.
Changelog:

- 02.04.2024: Begin linking to project pages and code
- 09.02.2024: Level One Categorization
- 11.11.2023: Added Tutorial Videos
- 05.08.2023: Provided citations in BibTeX
- 06.07.2023: Created initial list

Text-to-3D Generation:

Zero-Shot Text-Guided Object Generation with Dream Fields, Ajay Jain et al., CVPR 2022 | citation | site | code
CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation, Aditya Sanghi et al., Arxiv 2021 | citation | site | code
PureCLIPNERF: Understanding Pure CLIP Guidance for Voxel Grid NeRF Models, Han-Hung Lee et al., Arxiv 2022 | citation | site | code
SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, Yen-Chi Cheng et al., CVPR 2023 | citation | site | code
DreamFusion: Text-to-3D using 2D Diffusion, Ben Poole et al., ICLR 2023 | citation | site | code
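Many entries in this list build on the Score Distillation Sampling (SDS) loss introduced by DreamFusion: render the 3D scene, add diffusion noise, and push the render toward what a frozen text-to-image denoiser predicts. A minimal numpy sketch of the SDS gradient, assuming a stand-in noise predictor `noise_pred_fn` and a standard cumulative-alpha schedule (names are illustrative, not any library's API):

```python
import numpy as np

def sds_grad(rendered, noise_pred_fn, t, alphas_cumprod, rng):
    """Score Distillation Sampling gradient w.r.t. the rendered image.

    rendered: image from the differentiable 3D renderer, shape (H, W, 3).
    noise_pred_fn: frozen diffusion noise predictor eps_hat(x_t, t);
                   here a stand-in for a text-conditioned denoiser.
    """
    a_t = alphas_cumprod[t]
    eps = rng.standard_normal(rendered.shape)                # forward-process noise
    x_t = np.sqrt(a_t) * rendered + np.sqrt(1 - a_t) * eps   # noised render
    w_t = 1.0 - a_t                                          # common weighting choice
    # SDS skips backprop through the denoiser: grad = w(t) * (eps_hat - eps)
    return w_t * (noise_pred_fn(x_t, t) - eps)

# toy usage with a dummy denoiser that predicts zero noise
rng = np.random.default_rng(0)
alphas = np.linspace(0.999, 0.01, 1000)   # schedule stand-in
img = rng.random((8, 8, 3))
g = sds_grad(img, lambda x, t: np.zeros_like(x), t=500,
             alphas_cumprod=alphas, rng=rng)
print(g.shape)  # (8, 8, 3)
```

In practice this gradient is backpropagated through the renderer into the 3D parameters (NeRF weights, mesh vertices, or Gaussians), which is the step the variants below (VSD, ISM, CSD, NFDS) modify.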
Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models, Jiale Xu et al., Arxiv 2022 | citation | site | code
NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views, Dejia Xu et al., Arxiv 2022 | citation | site | code
Point-E: A System for Generating 3D Point Clouds from Complex Prompts, Alex Nichol et al., Arxiv 2022 | citation | site | code
Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures, Gal Metzer et al., Arxiv 2023 | citation | site | code
Magic3D: High-Resolution Text-to-3D Content Creation, Chen-Hsuan Lin et al., CVPR 2023 | citation | site | code
RealFusion: 360° Reconstruction of Any Object from a Single Image, Luke Melas-Kyriazi et al., CVPR 2023 | citation | site | code
Monocular Depth Estimation using Diffusion Models, Saurabh Saxena et al., Arxiv 2023 | citation | site | code
SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction, Zhizhuo Zhou et al., CVPR 2023 | citation | site | code
NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion, Jiatao Gu et al., ICML 2023 | citation | site | code
Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, Haochen Wang et al., CVPR 2023 | citation | site | code
High-fidelity 3D Face Generation from Natural Language Descriptions, Menghua Wu et al., CVPR 2023 | citation | site | code
TEXTure: Text-Guided Texturing of 3D Shapes, Elad Richardson et al., SIGGRAPH 2023 | citation | site | code
NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors, Congyue Deng et al., CVPR 2023 | citation | site | code
DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models, Jamie Wynn et al., CVPR 2023 | citation | site | code
3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process, Yuhan Li et al., CVPR 2023 | citation | site | code
DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model, Gwanghyun Kim et al., CVPR 2023 | citation | site | code
Novel View Synthesis with Diffusion Models, Daniel Watson et al., ICLR 2023 | citation | site | code
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, Zhengyi Wang et al., Arxiv 2023 | citation | site | code
3D-aware Image Generation using 2D Diffusion Models, Jianfeng Xiang et al., Arxiv 2023 | citation | site | code
Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior, Junshu Tang et al., ICCV 2023 | citation | site | code
GECCO: Geometrically-Conditioned Point Diffusion Models, Michał J. Tyszkiewicz et al., ICCV 2023 | citation | site | code
Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond, Mohammadreza Armandpour et al., Arxiv 2023 | citation | site | code
Generative Novel View Synthesis with 3D-Aware Diffusion Models, Eric R. Chan et al., Arxiv 2023 | citation | site | code
Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields, Jingbo Zhang et al., Arxiv 2023 | citation | site | code
Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors, Guocheng Qian et al., Arxiv 2023 | citation | site | code
DreamBooth3D: Subject-Driven Text-to-3D Generation, Amit Raj et al., ICCV 2023 | citation | site | code
Zero-1-to-3: Zero-shot One Image to 3D Object, Ruoshi Liu et al., Arxiv 2023 | citation | site | code
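Zero-1-to-3 (and the follow-ups Zero123++, One-2-3-45, SyncDreamer) conditions a 2D diffusion model on the relative camera transform between the input view and the target view. A rough sketch of that viewpoint embedding, assuming the (Δpolar, sin Δazimuth, cos Δazimuth, Δradius) parameterization described in the paper; the function name is hypothetical:

```python
import numpy as np

def relative_camera_embedding(polar1, azim1, r1, polar2, azim2, r2):
    """Relative-viewpoint conditioning vector, Zero-1-to-3 style.

    Angles in radians; azimuth enters as sin/cos so the embedding
    is continuous across the 0/2*pi wrap-around.
    """
    d_polar = polar2 - polar1
    d_azim = azim2 - azim1
    d_r = r2 - r1
    return np.array([d_polar, np.sin(d_azim), np.cos(d_azim), d_r])

# input view at the equator, target 30 degrees up and 45 degrees around
emb = relative_camera_embedding(np.pi / 2, 0.0, 1.5,
                                np.pi / 3, np.pi / 4, 1.5)
print(emb.round(3))
```

The four numbers are injected into the denoiser alongside the CLIP image embedding of the input view, which is what lets a single image be re-rendered from novel viewpoints.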
ATT3D: Amortized Text-to-3D Object Synthesis, Jonathan Lorraine et al., ICCV 2023 | citation | site | code
Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation, Zibo Zhao et al., Arxiv 2023 | citation | site | code
Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions, Gene Chou et al., Arxiv 2023 | citation | site | code
HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance, Junzhe Zhu et al., Arxiv 2023 | citation | site | code
LERF: Language Embedded Radiance Fields, Justin Kerr et al., Arxiv 2023 | citation | site | code
3DFuse: Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation, Junyoung Seo et al., Arxiv 2023 | citation | site | code
MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, Shitao Tang et al., Arxiv 2023 | citation | site | code
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization, Minghua Liu et al., Arxiv 2023 | citation | site | code
TextMesh: Generation of Realistic 3D Meshes From Text Prompts, Christina Tsalicoglou et al., Arxiv 2023 | citation | site | code
Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, Xingqian Xu et al., Arxiv 2023 | citation | site | code
SceneScape: Text-Driven Consistent Scene Generation, Rafail Fridman et al., Arxiv 2023 | citation | site | code
CLIP-Mesh: Generating textured meshes from text using pretrained image-text models, Nasir Khalid et al., Arxiv 2023 | citation | site | code
Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models, Lukas Höllein et al., Arxiv 2023 | citation | site | code
Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction, Hansheng Chen et al., Arxiv 2023 | citation | site | code
PODIA-3D: Domain Adaptation of 3D Generative Model Across Large Domain Gap Using Pose-Preserved Text-to-Image Diffusion, Gwanghyun Kim et al., ICCV 2023 | citation | site | code
Shap-E: Generating Conditional 3D Implicit Functions, Heewoo Jun et al., Arxiv 2023 | citation | site | code
Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation, Aditya Sanghi et al., Arxiv 2023 | citation | site | code
3D VADER - AutoDecoding Latent 3D Diffusion Models, Evangelos Ntavelis et al., Arxiv 2023 | citation | site | code
DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views, Paul Yoo et al., Arxiv 2023 | citation | site | code
Cap3D: Scalable 3D Captioning with Pretrained Models, Tiange Luo et al., Arxiv 2023 | citation | site | code
InstructP2P: Learning to Edit 3D Point Clouds with Text Instructions, Jiale Xu et al., Arxiv 2023 | citation | site | code
3D-LLM: Injecting the 3D World into Large Language Models, Yining Hong et al., Arxiv 2023 | citation | site | code
Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation, Chaohui Yu et al., Arxiv 2023 | citation | site | code
RGB-D-Fusion: Image Conditioned Depth Diffusion of Humanoid Subjects, Sascha Kirch et al., Arxiv 2023 | citation | site | code
IT3D: Improved Text-to-3D Generation with Explicit View Synthesis, Yiwen Chen et al., Arxiv 2023 | citation | site | code
MVDream: Multi-view Diffusion for 3D Generation, Yichun Shi et al., Arxiv 2023 | citation | site | code
PointLLM: Empowering Large Language Models to Understand Point Clouds, Runsen Xu et al., Arxiv 2023 | citation | site | code
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image, Yuan Liu et al., Arxiv 2023 | citation | site | code
Large-Vocabulary 3D Diffusion Model with Transformer, Ziang Cao et al., Arxiv 2023 | citation | site | code
Progressive Text-to-3D Generation for Automatic 3D Prototyping, Han Yi et al., Arxiv 2023 | citation | site | code
DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation, Jiaxiang Tang et al., Arxiv 2023 | citation | site | code
SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D, Weiyu Li et al., Arxiv 2023 | citation | site | code
Consistent123: One Image to Highly Consistent 3D Asset Using Case-Aware Diffusion Priors, Yukang Lin et al., Arxiv 2023 | citation | site | code
GaussianDreamer: Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors, Taoran Yi et al., Arxiv 2023 | citation | site | code
Text-to-3D using Gaussian Splatting, Zilong Chen et al., Arxiv 2023 | citation | site | code
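Several entries here (DreamGaussian, GaussianDreamer, Text-to-3D using Gaussian Splatting) optimize 3D Gaussian Splatting scenes instead of NeRFs. The core rasterization step projects each 3D Gaussian's covariance to screen space through the Jacobian of the perspective map (EWA splatting). A simplified numpy sketch, assuming a camera at the origin looking down +z with no view rotation:

```python
import numpy as np

def project_gaussian(mean, cov3d, f):
    """Project a 3D Gaussian to a 2D screen-space Gaussian.

    mean: 3D center (x, y, z), z > 0 in front of the camera.
    cov3d: 3x3 covariance of the Gaussian.
    f: focal length in pixels.
    """
    x, y, z = mean
    # Jacobian of the perspective map (x, y, z) -> (f*x/z, f*y/z)
    J = np.array([[f / z, 0.0, -f * x / z**2],
                  [0.0, f / z, -f * y / z**2]])
    cov2d = J @ cov3d @ J.T                 # 2x2 screen-space covariance
    center = np.array([f * x / z, f * y / z])
    return center, cov2d

# an isotropic Gaussian 2 units in front of a f=100 camera
c, S = project_gaussian(np.array([0.0, 0.0, 2.0]), 0.01 * np.eye(3), f=100.0)
print(c, S)
```

The 2D covariances are then alpha-composited front to back per pixel; because the whole pipeline is differentiable, the same SDS-style losses used for NeRFs apply directly to the Gaussian parameters.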
Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model, Ruoxi Shi et al., Arxiv 2023 | citation | site | code
DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior, Jingxiang Sun et al., Arxiv 2023 | citation | site | code
HyperFields: Towards Zero-Shot Generation of NeRFs from Text, Sudarshan Babu et al., Arxiv 2023 | citation | site | code
Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping, Zijie Pan et al., Arxiv 2023 | citation | site | code
Text-to-3D with classifier score distillation, Xin Yu et al., Arxiv 2023 | citation | site | code
Noise-Free Score Distillation, Oren Katzir et al., Arxiv 2023 | citation | site | code
LRM: Large Reconstruction Model for Single Image to 3D, Yicong Hong et al., Arxiv 2023 | citation | site | code
One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion, Minghua Liu et al., Arxiv 2023 | citation | site | code
LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching, Yixun Liang et al., Arxiv 2023 | citation | site | code
MetaDreamer: Efficient Text-to-3D Creation With Disentangling Geometry and Texture, Lincong Feng et al., Arxiv 2023 | citation | site | code
Adversarial Diffusion Distillation, Axel Sauer et al., Arxiv 2023 | citation | site | code
MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers, Yawar Siddiqui et al., Arxiv 2023 | citation | site | code
DreamPropeller: Supercharge Text-to-3D Generation with Parallel Sampling, Linqi Zhou et al., Arxiv 2023 | citation | site | code
X-Dreamer: Creating High-quality 3D Content by Bridging the Domain Gap Between Text-to-2D and Text-to-3D Generation, Yiwei Ma et al., Arxiv 2023 | citation | site | code
StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D, Pengsheng Guo et al., Arxiv 2023 | citation | site | code
CAD: Photorealistic 3D Generation via Adversarial Distillation, Ziyu Wan et al., Arxiv 2023 | citation | site | code
RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D, Lingteng Qiu et al., Arxiv 2023 | citation | site | code
Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion, Kira Prabhu et al., Arxiv 2023 | citation | site | code
Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors, Lihe Ding et al., Arxiv 2023 | citation | site | code
Text2Immersion: Generative Immersive Scene with 3D Gaussians, Hao Ouyang et al., Arxiv 2023 | citation | site | code
Stable Score Distillation for High-Quality 3D Generation, Boshi Tang et al., Arxiv 2023 | citation | site | code
Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks, Christian Simon et al., Arxiv 2023 | citation | site | code
HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D, Sangmin Woo et al., Arxiv 2023 | citation | site | code
SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity, Peihao Wang et al., Arxiv 2024 | citation | site | code
AGG: Amortized Generative 3D Gaussians for Single Image to 3D, Dejia Xu et al., Arxiv 2024 | citation | site | code
Topology-Aware Latent Diffusion for 3D Shape Generation, Jiangbei Hu et al., Arxiv 2024 | citation | site | code
AToM: Amortized Text-to-Mesh using 2D Diffusion, Guocheng Qian et al., Arxiv 2024 | citation
LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation, Jiaxiang Tang et al., Arxiv 2024 | citation | site | code
IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation, Luke Melas-Kyriazi et al., Arxiv 2024 | citation | site | code
L3GO: Language Agents with Chain-of-3D-Thoughts for Generating Unconventional Objects, Yutaro Yamada et al., Arxiv 2024 | citation | site | code
MVD2: Efficient Multiview 3D Reconstruction for Multiview Diffusion, Xin-Yang Zheng et al., Arxiv 2024 | citation | site | code
Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability, Xuelin Qian et al., Arxiv 2024 | citation | site | code
SceneWiz3D: Towards Text-guided 3D Scene Composition, Qihang Zhang et al., CVPR 2024 | citation | site | code
TripoSR: Fast 3D Object Reconstruction from a Single Image, Dmitry Tochilkin et al., Arxiv 2024 | citation | site | code
V3D: Video Diffusion Models are Effective 3D Generators, Zilong Chen et al., Arxiv 2024 | citation | site | code
CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model, Zhengyi Wang et al., Arxiv 2024 | citation | site | code
Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation, Fangfu Liu et al., Arxiv 2024 | citation | site | code
Isotropic3D: Image-to-3D Generation Based on a Single CLIP Embedding, Pengkun Liu et al., Arxiv 2024 | citation | site | code
SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion, Vikram Voleti et al., Arxiv 2024 | citation | site | code
Generic 3D Diffusion Adapter Using Controlled Multi-View Editing, Hansheng Chen et al., Arxiv 2024 | citation | site | code
GVGEN: Text-to-3D Generation with Volumetric Representation, Xianglong He et al., Arxiv 2024 | citation | site | code
BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis, Lutao Jiang et al., Arxiv 2024 | citation | site | code
LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis, Kevin Xie et al., Arxiv 2024 | citation | site | code
GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation, Yinghao Xu et al., Arxiv 2024 | citation | site | code
VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation, Yang Chen et al., Arxiv 2024 | citation | site | code
DreamPolisher: Towards High-Quality Text-to-3D Generation via Geometric Diffusion, Yuanze Lin et al., Arxiv 2024 | citation | site | code
PointInfinity: Resolution-Invariant Point Diffusion Models, Zixuan Huang et al., Arxiv 2024 | citation | site | code
The More You See in 2D, the More You Perceive in 3D, Xinyang Han et al., Arxiv 2024 | citation | site | code
Hash3D: Training-free Acceleration for 3D Generation, Xingyi Yang et al., Arxiv 2024 | citation | site | code
RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion, Jaidev Shriram et al., Arxiv 2024 | citation | site | code
TC4D: Trajectory-Conditioned Text-to-4D Generation, Sherwin Bahmani et al., Arxiv 2024 | citation | site | code
Zero-shot Point Cloud Completion Via 2D Priors, Tianxin Huang et al., Arxiv 2024 | citation | site | code
InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models, Jiale Xu et al., Arxiv 2024 | citation | site | code
3D Editing, Stylization, and Texturing:

CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields, Can Wang et al., Arxiv 2021 | citation | site | code
CG-NeRF: Conditional Generative Neural Radiance Fields, Kyungmin Jo et al., Arxiv 2021 | citation | site | code
TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition, Yongwei Chen et al., NeurIPS 2022 | citation | site | code
3DDesigner: Towards Photorealistic 3D Object Generation and Editing with Text-guided Diffusion Models, Gang Li et al., Arxiv 2022 | citation | site | code
NeRF-Art: Text-Driven Neural Radiance Fields Stylization, Can Wang et al., Arxiv 2022 | citation | site | code
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions, Ayaan Haque et al., Arxiv 2023 | citation | site | code
Local 3D Editing via 3D Distillation of CLIP Knowledge, Junha Hyung et al., Arxiv 2023 | citation | site | code
RePaint-NeRF: NeRF Editing via Semantic Masks and Diffusion Models, Xingchen Zhou et al., Arxiv 2023 | citation | site | code
Text2Tex: Text-driven Texture Synthesis via Diffusion Models, Dave Zhenyu Chen et al., Arxiv 2023 | citation | site | code
Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor, Ruizhi Shao et al., Arxiv 2023 | citation | site | code
Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation, Rui Chen et al., Arxiv 2023 | citation | site | code
Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes, Dana Cohen-Bar et al., Arxiv 2023 | citation | site | code
MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR, Xudong Xu et al., Arxiv 2023 | citation | site | code
SATR: Zero-Shot Semantic Segmentation of 3D Shapes, Ahmed Abdelreheem et al., ICCV 2023 | citation | site | code
Texture Generation on 3D Meshes with Point-UV Diffusion, Xin Yu et al., ICCV 2023 | citation | site | code
Progressive3D: Progressively Local Editing for Text-to-3D Content Creation with Complex Semantic Prompts, Xinhua Cheng et al., Arxiv 2023 | citation | site | code
3D-GPT: Procedural 3D Modeling with Large Language Models, Chunyi Sun et al., Arxiv 2023 | citation | site | code
CustomNet: Zero-shot Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models, Ziyang Yuan et al., Arxiv 2023 | citation | site | code
Decorate3D: Text-Driven High-Quality Texture Generation for Mesh Decoration in the Wild, Yanhui Guo et al., NeurIPS 2023 | citation | site | code
HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image, Tong Wu et al., Arxiv 2023 | citation | site | code
InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes, Mohamad Shahbazi et al., Arxiv 2024 | citation | site | code
ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields, Edward Bartrum et al., Arxiv 2024 | citation | site | code
Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation, Minglin Chen et al., Arxiv 2024 | citation | site | code
BoostDream: Efficient Refining for High-Quality Text-to-3D Generation from Multi-View Diffusion, Yonghao Yu et al., Arxiv 2024 | citation | site | code
2L3: Lifting Imperfect Generated 2D Images into Accurate 3D, Yizheng Chen et al., Arxiv 2024 | citation | site | code
GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting, Xiaoyu Zhou et al., Arxiv 2024 | citation | site | code
Disentangled 3D Scene Generation with Layout Learning, Dave Epstein et al., Arxiv 2024 | citation | site | code
MagicClay: Sculpting Meshes With Generative Neural Fields, Amir Barda et al., Arxiv 2024 | citation | site | code
TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation, Yufei Liu et al., Arxiv 2024 | citation | site | code
InTeX: Interactive Text-to-texture Synthesis via Unified Depth-aware Inpainting, Jiaxiang Tang et al., Arxiv 2024 | citation | site | code
SC4D: Sparse-Controlled Video-to-4D Generation and Motion Transfer, Zijie Wu et al., Arxiv 2024 | citation | site | code
3D Avatars and Humans:

Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion, Tengfei Wang et al., Arxiv 2022 | citation | site | code
DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars, David Svitov et al., Arxiv 2023 | citation | site | code
ZeroAvatar: Zero-shot 3D Avatar Generation from a Single Image, Zhenzhen Weng et al., Arxiv 2023 | citation
AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control, Ruixiang Jiang et al., ICCV 2023 | citation | site | code
Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models, Byungjun Kim et al., ICCV 2023 | citation | site | code
DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance, Longwen Zhang et al., Arxiv 2023 | citation | site | code
HeadSculpt: Crafting 3D Head Avatars with Text, Xiao Han et al., Arxiv 2023 | citation | site | code
DreamHuman: Animatable 3D Avatars from Text, Nikos Kolotouros et al., Arxiv 2023 | citation | site | code
FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields, Sungwon Hwang et al., Arxiv 2023 | citation | site | code
AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose, Huichao Zhang et al., Arxiv 2023 | citation | site | code
TeCH: Text-guided Reconstruction of Lifelike Clothed Humans, Yangyi Huang et al., Arxiv 2023 | citation | site | code
HumanLiff: Layer-wise 3D Human Generation with Diffusion Model, Shoukang Hu et al., Arxiv 2023 | citation | site | code
TADA! Text to Animatable Digital Avatars, Tingting Liao et al., Arxiv 2023 | citation | site | code
One-shot Implicit Animatable Avatars with Model-based Priors, Yangyi Huang et al., ICCV 2023 | citation | site | code
Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model, Sungwon Hwang et al., Arxiv 2023 | citation | site | code
Text-Guided Generation and Editing of Compositional 3D Avatars, Hao Zhang et al., Arxiv 2023 | citation | site | code
HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation, Xin Huang et al., Arxiv 2023 | citation | site | code
HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting, Xian Liu et al., Arxiv 2023 | citation | site | code
Text-Guided 3D Face Synthesis: From Generation to Editing, Yunjie Wu et al., Arxiv 2023 | citation | site | code
SEEAvatar: Photorealistic Text-to-3D Avatar Generation with Constrained Geometry and Appearance, Yuanyou Xu et al., Arxiv 2023 | citation | site | code
GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning, Ye Yuan et al., Arxiv 2023 | citation | site | code
Make-A-Character: High Quality Text-to-3D Character Generation within Minutes, Jianqiang Ren et al., Arxiv 2023 | citation | site | code
En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data, Yifang Men et al., Arxiv 2024 | citation | site | code
HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting, Zhenglin Zhou et al., Arxiv 2024 | citation | site | code
InstructHumans: Editing Animatable 3D Human Textures with Instructions, Jiayin Zhu et al., Arxiv 2024 | citation | site | code
4D and Dynamic Generation:

Text-To-4D Dynamic Scene Generation, Uriel Singer et al., Arxiv 2023 | citation | site | code
TextDeformer: Geometry Manipulation using Text Guidance, William Gao et al., Arxiv 2023 | citation | site | code
Consistent4D: Consistent 360 Degree Dynamic Object Generation from Monocular Video, Yanqin Jiang et al., Arxiv 2023 | citation | site | code
4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling, Sherwin Bahmani et al., Arxiv 2023 | citation | site | code
Datasets:

Objaverse: A Universe of Annotated 3D Objects, Matt Deitke et al., Arxiv 2022 | citation
Objaverse-XL: A Universe of 10M+ 3D Objects, Matt Deitke et al., Preprint 2023 | citation
Describe3D: High-Fidelity 3D Face Generation from Natural Language Descriptions, Menghua Wu et al., CVPR 2023 | citation
Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting, Hao Ouyang et al., Arxiv 2023 | citation
Customize-It-3D: High-Quality 3D Creation from A Single Image Using Subject-Specific Knowledge Prior, Nan Huang et al., Arxiv 2023 | citation
Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering, Kim Youwan et al., Arxiv 2023 | citation
SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding, Baoxiong Jia et al., Arxiv 2024 | citation
Frameworks and Tools:

threestudio: A unified framework for 3D content generation, Yuan-Chen Guo et al., Github 2023
Nerfstudio: A Modular Framework for Neural Radiance Field Development, Matthew Tancik et al., SIGGRAPH 2023
Mirage3D: Open-Source Implementations of 3D Diffusion Models Optimized for GLB Output, Mirageml et al., Github 2023