Add pipeline tag and improve model card
#1 opened by nielsr (HF Staff)

README.md CHANGED

@@ -1,36 +1,55 @@
-tags:
-- video-to-4D
-license: other
-python_version: "3.11"
-<h1>ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion</h1>
-</div>
@@ -40,4 +59,7 @@ Please refer to our [Github Repo](https://github.com/facebookresearch/actionmesh

---
language:
- en
license: other
pipeline_tag: image-to-3d
tags:
- video-to-4D
arxiv: 2601.16148
python_version: '3.11'
---

# ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion

[**ActionMesh**](https://remysabathier.github.io/actionmesh/) is a generative model that predicts production-ready 3D meshes "in action" in a feed-forward manner. It adapts 3D diffusion to include a temporal axis, allowing the generation of synchronized latents representing time-varying 3D shapes.
[[Paper](https://huggingface.co/papers/2601.16148)] [[Project Page](https://remysabathier.github.io/actionmesh/)] [[GitHub](https://github.com/facebookresearch/actionmesh)] [[Demo](https://huggingface.co/spaces/facebook/ActionMesh)]
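
A minimal, illustrative sketch of the idea described above (not the actual ActionMesh implementation): per-frame 3D latents are stacked along a temporal axis and denoised jointly, so the time-varying shapes stay synchronized. The tensor shapes, the `denoise_step` placeholder, and the step count below are assumptions for illustration only.

```python
import torch

# Hypothetical sizes: T video frames, each with a 3D shape latent of C x N entries.
T, C, N = 16, 8, 1024
latents = torch.randn(T, C, N)  # one 3D latent per frame, stacked along time

def denoise_step(x: torch.Tensor, t: int) -> torch.Tensor:
    # Placeholder for one temporal 3D diffusion step: a real model would condition
    # on the input video frames and the timestep t, and mix information across the
    # T axis so the per-frame shapes remain consistent over time.
    return x - 0.02 * x

for t in reversed(range(50)):  # placeholder number of denoising steps
    latents = denoise_step(latents, t)

# Each latents[i] would then be decoded into the mesh for frame i.
print(latents.shape)  # torch.Size([16, 8, 1024])
```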

## Installation

ActionMesh requires an NVIDIA GPU with at least 32GB VRAM.
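
A quick way to confirm your GPU meets this requirement (an optional check, assuming PyTorch is available in your environment):

```python
import torch

# Report the first CUDA device and its total memory in GB.
assert torch.cuda.is_available(), "No CUDA GPU detected"
props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / 1024**3
print(f"{props.name}: {vram_gb:.1f} GB VRAM")
assert vram_gb >= 32, "ActionMesh expects at least 32 GB of VRAM"
```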

```bash
git clone https://github.com/facebookresearch/actionmesh.git
cd actionmesh
git submodule update --init --recursive
pip install -r requirements.txt
pip install -e .
```

## Quick Start

You can generate an animated mesh from an input video using the provided inference script. Model weights will be automatically downloaded on the first run.

### Basic Usage

```bash
python inference/video_to_animated_mesh.py --input assets/examples/davis_camel
|

### Fast Mode

For faster inference (as used in the Hugging Face demo), use the `--fast` flag:

```bash
python inference/video_to_animated_mesh.py --input assets/examples/davis_camel --fast
|

## Citation

If you find ActionMesh useful in your research, please cite:

|
@misc{sabathier2026actionmeshanimated3dmesh,
  title={ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion},
  author={Remy Sabathier and David Novotny and Niloy J. Mitra and Tom Monnier},
  year={2026},
  eprint={2601.16148},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.16148},
}
```
## License
The weights and code are provided under the license terms found in the [GitHub repository](https://github.com/facebookresearch/actionmesh). Please refer to the LICENSE file there for details.