Computer-aided design (CAD) significantly enhances the efficiency, accuracy, and innovation of design processes by enabling precise 2D and 3D modeling, extensive analysis, and optimization. Existing methods for creating CAD models rely on latent vectors or point clouds, which are difficult to obtain and costly to store. Recent advances in Multi-modal Large Language Models (MLLMs) have inspired researchers to use natural language instructions and images for CAD model construction. However, these models still struggle with inferring accurate 3D spatial location and orientation, leading to inaccuracies in determining the 3D starting points and extrusion directions for constructing geometries. This work introduces CAD-GPT, a CAD synthesis method built on a spatial-reasoning-enhanced MLLM. Our method proposes a 3D Modeling Spatial Mechanism for accurately inferring spatial information. It maps 3D spatial positions and 3D sketch-plane rotation angles into a 1D linguistic feature space using a specialized spatial unfolding mechanism, while discretizing 2D sketch coordinates into an appropriate planar space, enabling precise determination of spatial position, sketch orientation, and translation. Extensive experiments demonstrate that CAD-GPT consistently outperforms existing state-of-the-art methods in CAD model synthesis, both quantitatively and qualitatively.
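The sketch below illustrates the general idea behind such a spatial unfolding mechanism: continuous 3D origins, sketch-plane rotation angles, and 2D sketch coordinates are discretized into special tokens that can extend an LLM vocabulary. Bin counts, value ranges, and token names here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of mapping 3D spatial parameters into a 1D token sequence.
# All bin counts, ranges, and token formats are assumptions for illustration.
import numpy as np

N_XYZ_BINS = 256    # assumed resolution per 3D coordinate axis
N_ANGLE_BINS = 72   # assumed resolution for sketch-plane rotation angles
N_2D_BINS = 256     # assumed resolution for 2D sketch coordinates

def discretize(value, lo, hi, n_bins):
    """Map a continuous value in [lo, hi] to an integer bin index."""
    t = (np.clip(value, lo, hi) - lo) / (hi - lo)
    return int(round(t * (n_bins - 1)))

def spatial_tokens(origin_xyz, plane_angles_deg, sketch_xy):
    """Unfold a 3D starting position, sketch-plane rotation, and 2D sketch
    points into a 1D sequence of special tokens."""
    tokens = []
    for axis, v in zip("xyz", origin_xyz):                            # 3D starting position
        tokens.append(f"<pos_{axis}_{discretize(v, -1.0, 1.0, N_XYZ_BINS)}>")
    for name, a in zip(("theta", "phi", "gamma"), plane_angles_deg):  # plane orientation
        tokens.append(f"<ang_{name}_{discretize(a, 0.0, 360.0, N_ANGLE_BINS)}>")
    for (u, v) in sketch_xy:                                          # 2D sketch coordinates
        tokens.append(f"<sk_{discretize(u, -1.0, 1.0, N_2D_BINS)}_"
                      f"{discretize(v, -1.0, 1.0, N_2D_BINS)}>")
    return tokens

print(spatial_tokens((0.1, -0.3, 0.5), (90.0, 0.0, 45.0), [(0.0, 0.0), (0.25, 0.25)]))
```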
Dataset Construction: Utilizing the DeepCAD dataset, we generated 160k fixed-viewpoint CAD model images and 18k corresponding natural language captions.
Enhancing Spatial Reasoning Capability: We designed a novel localization mechanism tailored to the 3D modeling process, enhancing the spatial reasoning capabilities of large language models by mapping 3D space into 1D through a tokenization scheme.
Training Strategy: Training was conducted in two phases: we first fine-tuned on the image-CAD data, then on the text-CAD data with a lower learning rate. A minimal sketch of this schedule is shown below.
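The following sketch shows what such a two-phase fine-tuning schedule could look like. The model interface, dataset loaders, batch sizes, and learning rates are placeholders, not the project's actual training configuration.

```python
# Minimal sketch of a two-phase fine-tuning schedule (image-CAD first, then
# text-CAD at a lower learning rate). All names and hyperparameters are assumptions.
import torch
from torch.utils.data import DataLoader

def finetune(model, dataloader, lr, epochs=1):
    """Run one fine-tuning phase with a fixed learning rate."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in dataloader:
            loss = model(**batch).loss   # assumes a HF-style forward that returns .loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

# Phase 1: fine-tune on image -> CAD-sequence pairs (dataset and lr are placeholders).
# model = finetune(model, DataLoader(image_cad_dataset, batch_size=8, shuffle=True), lr=2e-5)
# Phase 2: continue on text -> CAD-sequence pairs with a lower learning rate.
# model = finetune(model, DataLoader(text_cad_dataset, batch_size=8, shuffle=True), lr=5e-6)
```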
The models in the image demonstrate semantic sketch generation (e.g., a heart shape and the letter "E"), category-based CAD generation (e.g., a table, a chair, and a key), spatial reasoning (e.g., a table and mutually perpendicular cylinders), and the ability to generate the same model at varying dimensions (e.g., three connectors with two circular holes of differing sizes).
Given a single image, CAD-GPT leverages its advanced spatial reasoning capabilities to accurately generate the modeling sequence of the CAD model depicted in the image.
Given a textual description, CAD-GPT can generate a CAD model that precisely aligns with the semantics of the description.
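A hypothetical usage sketch for these two conditioning modes is shown below. The loader, model object, and generate() interface are illustrative assumptions, not the project's actual API.

```python
# Hypothetical inference sketch; function and parameter names are assumptions.
# from PIL import Image
# model, tokenizer = load_cad_gpt("CAD-GPT")   # placeholder loader, not a real function

# Image-conditioned: predict the modeling sequence of the CAD model in a rendered view.
# image = Image.open("renders/example_view.png")
# cad_seq = model.generate(image=image, max_new_tokens=512)

# Text-conditioned: synthesize a CAD model matching a natural-language description.
# prompt = "A rectangular plate with two circular holes of different sizes."
# cad_seq = model.generate(text=prompt, max_new_tokens=512)
# print(tokenizer.decode(cad_seq))
```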
It is evident that with the addition of the 3D Modeling Spatial Mechanism, the model can accurately infer key parameters such as the 3D angles, 3D starting positions, and 2D sketch shapes during the modeling process, enabling precise model generation.
Based on the displayed models, we can observe CAD-GPT's advanced spatial reasoning capabilities and its ability to generate complex sketches.
If you find our project helpful, please kindly consider citing our paper.