---
library_name: diffusers
pipeline_tag: text-to-image
datasets:
- OpenTO/OpenTO
---

# Model Card for Optimize Any Topology

This repository contains the **Optimize Any Topology (OAT)** model, a foundation-model framework for structural topology optimization, presented in the paper [Optimize Any Topology: A Foundation Model for Shape- and Resolution-Free Structural Topology Optimization](https://huggingface.co/papers/2510.23667).

## Model Details

### Model Description

OAT is a foundation-model framework that directly predicts minimum-compliance layouts for arbitrary aspect ratios, resolutions, volume fractions, loads, and fixtures. It combines a resolution- and shape-agnostic autoencoder with an implicit neural-field decoder and a conditional latent-diffusion model. The model is trained on OpenTO, a new corpus of 2.2 million optimized structures covering 2 million unique boundary-condition configurations. OAT achieves significantly lower mean compliance than prior models and delivers sub-1-second inference across resolutions and aspect ratios, establishing it as a general, fast, and resolution-free framework for physics-aware topology optimization.

- **Developed by:** Ahnobari et al.
- **Model type:** Conditional latent-diffusion model for structural topology optimization
- **License:** [More Information Needed]

### Model Sources

- **Repository:** https://github.com/ahnobari/OptimizeAnyTopology
- **Paper:** https://huggingface.co/papers/2510.23667

## Uses

### Direct Use

OAT is intended for direct use in structural topology optimization. It allows engineers and researchers to quickly generate minimum-compliance layouts for complex design problems with arbitrary aspect ratios, resolutions, volume fractions, loads, and fixtures. Its efficiency and accuracy make it well suited to rapid prototyping and design-space exploration.

### Out-of-Scope Use

This model is specialized for structural topology optimization.
It is not intended for general-purpose image generation, text-based tasks, or other domains outside of structural design without significant adaptation or fine-tuning.

## How to Get Started with the Model

For detailed instructions on environment setup, training, and generating samples with the model, please refer to the official [GitHub repository](https://github.com/ahnobari/OptimizeAnyTopology). The repository provides scripts for precomputing latents, training the autoencoder and diffusion model, and running inference to generate solutions.

## Training Details

The Optimize Any Topology (OAT) model is trained in two stages:

1. **Neural Field Auto-Encoder (NFAE)**: An autoencoder that maps structures of variable resolution and shape into a common latent space.
2. **Latent Diffusion Model (LDM)**: A conditional latent-diffusion model that generates samples by running a conditional diffusion process on latent tensors precomputed with the NFAE.

### Training Data

The model is trained on the **OpenTO dataset**, publicly available on Hugging Face at [OpenTO/OpenTO](https://huggingface.co/datasets/OpenTO/OpenTO). This new corpus consists of 2.2 million optimized structures covering 2 million unique boundary-condition configurations.

### Pre-Trained Checkpoints

Pre-trained checkpoints for both the autoencoder and the latent-diffusion model are available on Hugging Face:

* **Auto Encoder:** [OpenTO/NFAE](https://huggingface.co/OpenTO/NFAE)
* **Latent Diffusion:** [OpenTO/LDM](https://huggingface.co/OpenTO/LDM)
* **Auto Encoder (large latent):** [OpenTO/NFAE_L](https://huggingface.co/OpenTO/NFAE_L)
* **Latent Diffusion (large latent):** [OpenTO/LDM_L](https://huggingface.co/OpenTO/LDM_L)

These checkpoints can be loaded with the `.from_pretrained` method of the `NFAE` and `CTOPUNET` classes in the `OAT.Models` module.
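A minimal loading sketch, assuming the `OAT` package from the GitHub repository above is installed and that `NFAE` and `CTOPUNET` expose a Hugging Face-style `from_pretrained` as described; exact signatures may differ from this illustration:

```python
# Sketch only: requires the OAT package from
# https://github.com/ahnobari/OptimizeAnyTopology
from OAT.Models import NFAE, CTOPUNET

# Neural-field autoencoder that maps structures into the shared latent space.
autoencoder = NFAE.from_pretrained("OpenTO/NFAE")          # or "OpenTO/NFAE_L"

# Conditional latent-diffusion model operating on the NFAE latents.
diffusion_model = CTOPUNET.from_pretrained("OpenTO/LDM")   # or "OpenTO/LDM_L"
```

For the end-to-end workflow (precomputing latents, training, and generating solutions), see the scripts in the repository.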
## Citation

If you find this work useful, please consider citing the paper:

```bibtex
@article{ahnobari2025optimize,
  title={Optimize Any Topology: A Foundation Model for Shape- and Resolution-Free Structural Topology Optimization},
  author={Ahnobari, Saman and others},
  journal={arXiv preprint arXiv:2510.23667},
  year={2025}
}
```