Diffusers documentation
EasyAnimateTransformer3DModel
A Diffusion Transformer model for 3D (video) data from EasyAnimate, introduced by Alibaba PAI.
The model can be loaded with the following code snippet.
```python
import torch

from diffusers import EasyAnimateTransformer3DModel

transformer = EasyAnimateTransformer3DModel.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="transformer", torch_dtype=torch.float16
).to("cuda")
```
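The transformer can then be handed to EasyAnimatePipeline. A minimal sketch continuing from the snippet above; the prompt and generation arguments are illustrative, not prescribed values:

```python
from diffusers import EasyAnimatePipeline
from diffusers.utils import export_to_video

# Reuse the separately loaded transformer when assembling the pipeline.
pipe = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh", transformer=transformer, torch_dtype=torch.float16
).to("cuda")

video = pipe(prompt="A cat walking through a garden", num_frames=49).frames[0]
export_to_video(video, "output.mp4", fps=8)
```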
class diffusers.EasyAnimateTransformer3DModel
```python
EasyAnimateTransformer3DModel(
    num_attention_heads: int = 48,
    attention_head_dim: int = 64,
    in_channels: Optional[int] = None,
    out_channels: Optional[int] = None,
    patch_size: Optional[int] = None,
    sample_width: int = 90,
    sample_height: int = 60,
    activation_fn: str = "gelu-approximate",
    timestep_activation_fn: str = "silu",
    freq_shift: int = 0,
    num_layers: int = 48,
    mmdit_layers: int = 48,
    dropout: float = 0.0,
    time_embed_dim: int = 512,
    add_norm_text_encoder: bool = False,
    text_embed_dim: int = 3584,
    text_embed_dim_t5: Optional[int] = None,
    norm_eps: float = 1e-05,
    norm_elementwise_affine: bool = True,
    flip_sin_to_cos: bool = True,
    time_position_encoding_type: str = "3d_rope",
    after_norm: bool = False,
    resize_inpaint_mask_directly: bool = True,
    enable_text_attention_mask: bool = True,
    add_noise_in_inpaint_model: bool = True,
)
```
Parameters
- `num_attention_heads` (`int`, defaults to `48`) — The number of heads to use for multi-head attention.
- `attention_head_dim` (`int`, defaults to `64`) — The number of channels in each head.
- `in_channels` (`int`, *optional*) — The number of channels in the input.
- `out_channels` (`int`, *optional*) — The number of channels in the output.
- `patch_size` (`int`, *optional*) — The size of the patches to use in the patch embedding layer.
- `sample_width` (`int`, defaults to `90`) — The width of the input latents.
- `sample_height` (`int`, defaults to `60`) — The height of the input latents.
- `activation_fn` (`str`, defaults to `"gelu-approximate"`) — Activation function to use in the feed-forward layers.
- `timestep_activation_fn` (`str`, defaults to `"silu"`) — Activation function to use when generating the timestep embeddings.
- `num_layers` (`int`, defaults to `48`) — The number of Transformer blocks to use.
- `mmdit_layers` (`int`, defaults to `48`) — The number of Multi-Modal Transformer (MMDiT) blocks to use.
- `dropout` (`float`, defaults to `0.0`) — The dropout probability to use.
- `time_embed_dim` (`int`, defaults to `512`) — Output dimension of the timestep embeddings.
- `text_embed_dim` (`int`, defaults to `3584`) — Input dimension of the text embeddings from the text encoder.
- `norm_eps` (`float`, defaults to `1e-5`) — The epsilon value to use in normalization layers.
- `norm_elementwise_affine` (`bool`, defaults to `True`) — Whether to use elementwise affine parameters in normalization layers.
- `flip_sin_to_cos` (`bool`, defaults to `True`) — Whether to flip sin to cos in the time embedding.
- `time_position_encoding_type` (`str`, defaults to `"3d_rope"`) — Type of time position encoding.
- `after_norm` (`bool`, defaults to `False`) — Whether to apply normalization after the transformer blocks.
- `resize_inpaint_mask_directly` (`bool`, defaults to `True`) — Whether to resize the inpaint mask directly to the latent resolution.
- `enable_text_attention_mask` (`bool`, defaults to `True`) — Whether to apply an attention mask to the text embeddings.
- `add_noise_in_inpaint_model` (`bool`, defaults to `True`) — Whether to add noise to the masked input in the inpainting model.
A Transformer model for video-like data in EasyAnimate.
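As a quick shape check, the sketch below instantiates a deliberately tiny, randomly initialized configuration and runs a single denoising step. All configuration values, tensor shapes, and the forward arguments (`hidden_states`, `timestep`, `encoder_hidden_states`) are illustrative assumptions following the conventions of other Diffusers video transformers, not the pretrained checkpoint's real sizes; check the model's `forward` for the exact signature.

```python
import torch

from diffusers import EasyAnimateTransformer3DModel

# Tiny hypothetical config for illustration only (not the checkpoint's values).
transformer = EasyAnimateTransformer3DModel(
    num_attention_heads=2,
    attention_head_dim=8,
    in_channels=4,
    out_channels=4,
    patch_size=2,
    sample_width=8,
    sample_height=8,
    num_layers=1,
    mmdit_layers=1,
    time_embed_dim=32,
    text_embed_dim=16,
)

# Dummy inputs: video latents laid out as (batch, channels, frames, height, width),
# text embeddings as (batch, sequence_length, text_embed_dim), and a timestep.
hidden_states = torch.randn(1, 4, 5, 8, 8)
encoder_hidden_states = torch.randn(1, 7, 16)
timestep = torch.tensor([10])

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        timestep=timestep,
        encoder_hidden_states=encoder_hidden_states,
    )

print(output.sample.shape)  # expected to match the input latent layout
```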
Transformer2DModelOutput
class diffusers.models.modeling_outputs.Transformer2DModelOutput
```python
Transformer2DModelOutput(sample: torch.Tensor)
```
Parameters
- `sample` (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if Transformer2DModel is discrete) — The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.
The output of Transformer2DModel.
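For video models like EasyAnimateTransformer3DModel, `sample` holds the predicted latents in the same layout as the input. Continuing the tiny sketch above, the snippet below shows both ways of reading the result; `return_dict=False` is a Diffusers-wide convention for getting a plain tuple, so verify the flag is exposed on this model's `forward` before relying on it:

```python
# Default: a Transformer2DModelOutput dataclass with a .sample field.
out = transformer(
    hidden_states=hidden_states,
    timestep=timestep,
    encoder_hidden_states=encoder_hidden_states,
)
latents = out.sample

# Alternative: a plain tuple when return_dict=False (common Diffusers
# convention; assumed to be supported here as well).
(latents,) = transformer(
    hidden_states=hidden_states,
    timestep=timestep,
    encoder_hidden_states=encoder_hidden_states,
    return_dict=False,
)
```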