Image-to-3D
Free image-to-3D AI tools for transforming images into 3D assets for games, films, and design projects, streamlining your creative workflow.
Portrait3D can generate high-quality 3D heads with accurate geometry and texture from a single in-the-wild portrait image.
Toon3D can generate 3D scenes from two or more cartoon drawings. It’s far from perfect, but still pretty cool!
InstantMesh can generate high-quality 3D meshes from a single image in under 10 seconds. It uses advanced methods like multiview diffusion and sparse-view reconstruction, and it significantly outperforms other tools in both quality and speed.
Speaking of reconstruction: Key2Mesh is yet another model that takes on 3D human mesh reconstruction, this time using 2D human pose keypoints as input instead of visual data, sidestepping the scarcity of image datasets with 3D labels.
TexDreamer can generate high-quality 3D human textures from text and images. It uses a smart fine-tuning method and a unique translator module to create realistic textures quickly while keeping important details intact.
TripoSR can generate high-quality 3D meshes from a single image in under 0.5 seconds.
MeshFormer can generate high-quality 3D textured meshes from just a few 2D images in seconds.
LGM can generate high-resolution 3D models from text prompts or single-view images. It uses a fast multi-view Gaussian representation, producing models in under 5 seconds while maintaining high quality.
En3D can generate high-quality 3D human avatars from 2D images without needing existing assets.
Doodle Your 3D can turn abstract sketches into precise 3D shapes. The method can even edit shapes by simply editing the sketch. Super cool. Sketch-to-3D-print isn’t that far away now.
WonderJourney lets you wander through your favourite paintings, poems and haikus. The method can generate a sequence of diverse yet coherently connected 3D scenes from a single image or text prompt.
ZeroNVS is a 3D-aware diffusion model that is able to generate novel 360-degree views of in-the-wild scenes from a single real image.
Zero123++ can generate high-quality, 3D-consistent multi-view images from a single input image using an image-conditioned diffusion model. It fixes common problems like blurry textures and misaligned shapes, and includes a ControlNet for better control over the image creation process.
Wonder3D is able to convert a single image into a high-fidelity 3D model, complete with textured meshes and color. The entire process takes only 2 to 3 minutes.
DreamGaussian can generate high-quality textured meshes from a single-view image in just 2 minutes. It uses a 3D Gaussian Splatting model for fast mesh extraction and texture refinement.
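To make the 3D Gaussian Splatting representation behind DreamGaussian (and LGM above) more concrete, here is a minimal sketch of the per-primitive parameters such models optimize. The class and function names are illustrative, not DreamGaussian's actual API, and the density function is an isotropic simplification; real renderers project each anisotropic Gaussian's full covariance into screen space and alpha-composite millions of them.

```python
from dataclasses import dataclass
import math

@dataclass
class Gaussian3D:
    """One primitive in a 3D Gaussian Splatting scene (illustrative fields)."""
    position: tuple   # (x, y, z) center in world space
    scale: tuple      # per-axis extent of the anisotropic Gaussian
    rotation: tuple   # unit quaternion (w, x, y, z) orienting it
    opacity: float    # alpha in [0, 1] used during compositing
    color: tuple      # RGB (full models store spherical-harmonics coefficients)

def splat_weight(g: Gaussian3D, point: tuple) -> float:
    """Unnormalized density of one Gaussian at a 3D point,
    simplified to an isotropic kernel using the first scale entry."""
    d2 = sum((p - c) ** 2 for p, c in zip(point, g.position))
    sigma2 = g.scale[0] ** 2
    return g.opacity * math.exp(-0.5 * d2 / sigma2)

g = Gaussian3D((0.0, 0.0, 0.0), (0.1, 0.1, 0.1), (1, 0, 0, 0), 0.9, (1.0, 0.5, 0.2))
print(splat_weight(g, (0.0, 0.0, 0.0)))  # density peaks at the center: 0.9
```

Because the scene is just a big list of these differentiable blobs rather than a mesh, optimization and mesh extraction can run in minutes, which is where DreamGaussian's speed comes from.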
PlankAssembly can turn 2D line drawings from three views into 3D CAD models. It effectively handles noisy or incomplete inputs and improves accuracy using shape programs.
Similar to ControlNet scribble for images, SketchMetaFace brings sketch guidance to the 3D realm and makes it possible to turn a sketch into a 3D face model. Pretty excited about progress like this, as it will bring controllability to 3D generation and make creating 3D content far more accessible.
PAniC-3D can reconstruct 3D character heads from single-view anime portraits. It uses a line-filling model and a volumetric radiance field, achieving better results than previous methods and setting a new standard for stylized reconstruction.
Make-It-3D can create high-quality 3D content from a single image by estimating 3D shapes and adding textures. It uses a two-step process with a trained 2D diffusion model, allowing for text-to-3D creation and detailed texture editing.
SceneDreamer can generate endless 3D scenes from 2D image collections. It creates photorealistic images with clear depth and allows for free camera movement in the environments.