Model summary
| Attribute | Description |
|---|---|
| Developer | Microsoft Research, Health Futures |
| Description | SNRAware models improve the signal-to-noise ratio of complex MR images. The model is provided as a research-only model for reproducibility of the corresponding research. |
| Model architecture | This model is an instantiation of the imaging transformer architecture, which learns local, global, and inter-frame signal and noise characteristics. |
| Parameters | SNRAware-small: 27.7 million parameters; SNRAware-medium: 55.1 million parameters; SNRAware-large: 109 million parameters |
| Inputs | The input to the model is a 5D tensor [B, C, T/F, H, W] for batch, channel, time/frame, height, and width. The last input channel is the g-factor map. |
| Context length | Not applicable |
| Outputs | The output tensor has shape [B, C-1, T/F, H, W]. |
| GPUs | 16x B200 |
| Training time | 7 days |
| Public data summary (or summaries) | Not applicable |
| Dates | Nov 2025 |
| Status | Model checkpoints can be downloaded from https://huggingface.co/microsoft/SNRAware. Model may be subject to updates post-release. |
| Release date; Release date in the EU (if different) | Dec 2025 |
| License | MIT license |
| Model dependencies | N/A |
| List and link to any additional related assets | https://github.com/microsoft/SNRAware/ |
| Acceptable use policy | N/A |
- Model overview
SNRAware is an imaging transformer model trained to denoise complex MR image data. Imaging transformers use attention modules to capture local, global, and inter-frame signal and noise characteristics. Denoising training used the SNRAware method, which generates MR-realistic noise on the fly to create low-SNR samples with unitary noise scaling. The model receives low-SNR complex images and g-factor maps as input and produces high-SNR complex images as output. It is provided as a research-only model for reproducibility of the corresponding research.
Please refer to the publication for technical details: https://pubs.rsna.org/doi/10.1148/ryai.250227
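As an illustration only, the snippet below sketches one simplified way low-SNR training samples could be generated on the fly: complex Gaussian noise with a fixed (unitary) standard deviation, spatially modulated by the g-factor map, is added to a clean complex image. The function name and exact noise model are assumptions for this sketch; the actual SNRAware training pipeline is described in the publication and repository.

```python
# Illustrative simplification of on-the-fly low-SNR sample generation;
# not the exact SNRAware training code.
import torch

def make_low_snr_sample(clean_complex, gfactor, noise_sigma=1.0):
    """clean_complex: complex tensor [T, H, W]; gfactor: real tensor [T, H, W].

    Adds complex Gaussian noise spatially modulated by the g-factor map.
    With unitary noise scaling, noise_sigma stays fixed at 1.0 so the
    network sees a consistent noise level across training samples.
    """
    noise = torch.randn_like(clean_complex.real) + 1j * torch.randn_like(clean_complex.real)
    return clean_complex + noise_sigma * gfactor * noise
```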
1.1 Alignment approach
This model was trained from scratch and did not require an alignment step.
- Usage
2.1 Primary use cases
This model is only suited to denoising complex MR images with unitary noise scaling. The primary intended use is to support AI researchers in reproducing and building on top of this work.
2.2 Out-of-scope use cases
Any deployed use case of the model --- commercial or otherwise --- is out of scope. Although we evaluated the models using publicly available research benchmarks, the models and evaluations are intended for research use only and not intended for deployed use cases.
2.3 Distribution channels
Model source code is available at https://github.com/microsoft/SNRAware/
Pre-trained models are available at https://huggingface.co/microsoft/SNRAware
2.4 Input formats
The input to the model is a 5D tensor [B, C, T/F, H, W] for batch, channel, time/frame, height, and width; the last channel is the g-factor map.
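The short sketch below shows how such an input could be assembled, assuming the complex image is stored as two real channels (real and imaginary parts) with the g-factor map appended as the last channel; the tensor sizes and the model-loading call are hypothetical, and the actual entry points are documented in the GitHub repository.

```python
# Minimal sketch of assembling the 5D input tensor described above (hypothetical sizes).
import torch

B, T, H, W = 1, 12, 192, 144            # batch, frames, height, width
image = torch.randn(B, 2, T, H, W)       # channels 0-1: real and imaginary parts of the complex image
gfactor = torch.ones(B, 1, T, H, W)      # last channel: g-factor map
x = torch.cat([image, gfactor], dim=1)   # input [B, C, T/F, H, W] with C = 3

# model = load_snraware_model(...)       # hypothetical loader; see the GitHub repo for the actual API
# y = model(x)                            # expected output shape [B, C-1, T/F, H, W] = [1, 2, 12, 192, 144]
```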
2.5 Technical requirements and integration guidance
The recommended GPU should have at least 16 GB of memory; NVIDIA A100 or newer GPUs are best.
For additional details on using the model, see the corresponding code: https://github.com/microsoft/SNRAware/
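As a quick sanity check against the memory recommendation above, the snippet below reads the available GPU memory with PyTorch; it is illustrative only and not part of the released code.

```python
# Check that the local GPU meets the >= 16 GB memory recommendation (illustrative only).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB")
    assert total_gb >= 16, "At least 16 GB of GPU memory is recommended"
else:
    print("No CUDA GPU detected; running the model on CPU will be slow.")
```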
2.6 Responsible AI considerations
This model is intended for a very specific usage scenario. Only domain experts with good knowledge of MR imaging should use the model.
This model was developed using raw cardiac MR signal data and may not generalize beyond it. The model was evaluated on a narrow set of benchmark tasks, described in the corresponding research paper. As such, it is not suitable for use in any clinical setting. Under some conditions, the model may produce inaccuracies that require additional mitigation strategies. While the evaluation included clinical input, it was not exhaustive; model performance will vary in different settings, and the model is intended for research use only.
Further, this model was developed in part using data from a single provider network and from a specific class of conditions. As such, these data are enriched for patients receiving care in the surrounding area, a distribution that may not be representative of other sources of biomedical data.
- Data overview
3.1 Training, testing, and validation datasets
Data from the National Institutes of Health Cardiac MRI Raw Data Repository, hosted by the Intramural Research Program of the National Heart, Lung, and Blood Institute, were curated under the required ethical and/or secondary-use approvals or guidelines, which permit retrospective analysis of anonymized data without written informed consent for the purposes of technical development, protocol optimization, and/or quality control. The data were fully anonymized and used for training without exclusion. The training and test datasets are summarized in Table 1. The training set included 96,605 cine series from 7,590 patients, with 95% of the scans used for training and 5% for validation; the test set included 3,000 cine series, with no overlap.
- Quality and performance evaluation
The model was tested on the hold-out test set using the quality metrics PSNR and SSIM. The imaging transformer models outperformed other architectures, such as convolutional networks and the Swin transformer.
Please refer to the publication for more results.
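For reference, the sketch below shows one common way PSNR and SSIM can be computed on magnitude images using scikit-image; the exact evaluation protocol used in the paper may differ, and the helper function here is an assumption for illustration.

```python
# Minimal sketch of computing PSNR and SSIM on magnitude images with scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frame(pred_complex, gt_complex):
    """pred_complex, gt_complex: 2D complex-valued numpy arrays (one frame)."""
    pred, gt = np.abs(pred_complex), np.abs(gt_complex)
    data_range = float(gt.max() - gt.min())
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    ssim = structural_similarity(gt, pred, data_range=data_range)
    return psnr, ssim
```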
4.1 Long context
Not relevant
4.2 Safety evaluation and red-teaming
Model outputs were compared to ground-truth images, and the standard quality metrics were computed.
- Tracked capability evaluations
This model is not a frontier model.
Requests for additional information may be directed to MSFTAIActRequest@microsoft.com.
Appendix: None