openai / point-e

Point cloud diffusion for 3D model synthesis

6,855 stars
798 forks
80 issues
Python, Jupyter Notebook

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing openai/point-e in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind-ai.vercel.app/repo/openai/point-e)

Repository Summary (README)

Point·E

*Animation of four 3D point clouds rotating*

This is the official code and model release for Point-E: A System for Generating 3D Point Clouds from Complex Prompts.

Usage

Install with `pip install -e .`.

To get started with examples, see the following notebooks:

  • `image2pointcloud.ipynb` - sample a point cloud, conditioned on some example synthetic view images.
  • `text2pointcloud.ipynb` - use our small, lower-quality pure text-to-3D model to produce 3D point clouds directly from text descriptions. This model's capabilities are limited, but it does understand some simple categories and colors.
  • `pointcloud2mesh.ipynb` - try our SDF regression model for producing meshes from point clouds.
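The notebooks above pass point clouds between stages as arrays of XYZ coordinates with per-point RGB channels. As a rough illustration of that data layout (the array names and the `.npz` round trip here are our own for illustration, not point-e's API), such a point cloud can be stored and reloaded with NumPy:

```python
import io

import numpy as np

# Illustrative layout: N points, each with XYZ coordinates and an RGB color.
# point-e's own PointCloud class differs in detail; this is just a sketch.
num_points = 1024
coords = np.random.default_rng(0).normal(size=(num_points, 3)).astype(np.float32)
colors = np.random.default_rng(1).uniform(size=(num_points, 3)).astype(np.float32)

# Save coordinates and per-channel colors together in one .npz archive.
buf = io.BytesIO()
np.savez(buf, coords=coords, R=colors[:, 0], G=colors[:, 1], B=colors[:, 2])

# Reload and check that the round trip preserves shapes and values.
buf.seek(0)
data = np.load(buf)
assert data["coords"].shape == (num_points, 3)
assert np.allclose(data["R"], colors[:, 0])
```

A mesh-reconstruction step like `pointcloud2mesh.ipynb` would consume exactly this kind of coordinate array as input.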

For our P-FID and P-IS evaluation scripts, see the evaluation code in this repository.

For our Blender rendering code, see `blender_script.py`.

Samples

You can download the seed images and point clouds corresponding to the paper banner images here.

You can download the seed images used for COCO CLIP R-Precision evaluations here.