Refreshed model with speed optimizations.
Files changed:
- README.md: +6 -68
- config.json: +3 -2
- model.safetensors: +2 -2
README.md CHANGED
@@ -1,71 +1,9 @@
 ---
-
-
-pipeline_tag: time-series-forecasting
+tags:
+- model_hub_mixin
 ---
 
-# TimesFM
-
-
-
-**Resources and Technical Documentation**:
-* Paper: [A decoder-only foundation model for time-series forecasting](https://arxiv.org/abs/2310.10688), ICML 2024.
-* [Google Research blog](https://research.google/blog/a-decoder-only-foundation-model-for-time-series-forecasting/)
-* [GitHub repo](https://github.com/google-research/timesfm)
-
-**Authors**: Google Research
-
-This checkpoint is not an officially supported Google product. See [TimesFM in BigQuery](https://cloud.google.com/bigquery/docs/timesfm-model) for Google official support.
-
-## Checkpoint `timesfm-2.5-200m`
-
-`timesfm-2.5-200m` is the third open model checkpoint.
-
-
-### Data
-
-`timesfm-2.5-200m` is pretrained using
-
-- [GiftEvalPretrain](https://huggingface.co/datasets/Salesforce/GiftEvalPretrain)
-- [Wikimedia Pageviews](https://meta.wikimedia.org/wiki/Pageviews_Analysis), cutoff Nov 2023 (see [paper](https://arxiv.org/abs/2310.10688) for details).
-- [Google Trends](https://trends.google.com/trends/) top queries, cutoff EoY 2022 (see [paper](https://arxiv.org/abs/2310.10688) for details).
-- Synthetic and augmented data.
-
-### Install
-
-`pip install` from PyPI coming soon. At this point, please run
-
-```shell
-git clone https://github.com/google-research/timesfm.git
-cd timesfm
-pip install -e .
-```
-
-### Code Example
-
-```python
-import numpy as np
-import timesfm
-model = timesfm.TimesFM_2p5_200M_torch.from_pretrained("google/timesfm-2.5-200m-pytorch")
-
-model.compile(
-    timesfm.ForecastConfig(
-        max_context=1024,
-        max_horizon=256,
-        normalize_inputs=True,
-        use_continuous_quantile_head=True,
-        force_flip_invariance=True,
-        infer_is_positive=True,
-        fix_quantile_crossing=True,
-    )
-)
-point_forecast, quantile_forecast = model.forecast(
-    horizon=12,
-    inputs=[
-        np.linspace(0, 1, 100),
-        np.sin(np.linspace(0, 20, 67)),
-    ],  # Two dummy inputs
-)
-point_forecast.shape  # (2, 12)
-quantile_forecast.shape  # (2, 12, 10): mean, then 10th to 90th quantiles.
-```
+This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+- Code: [More Information Needed]
+- Paper: [More Information Needed]
+- Docs: [More Information Needed]
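Since the refreshed README is currently a stub, the removed usage example above remains the most concrete reference for loading this checkpoint. A minimal, non-authoritative sketch of that workflow follows; the class name, config options, and repo id are copied from the removed snippet, and it assumes the GitHub install still applies (whether the speed optimizations change this API is not stated in the diff):

```python
import numpy as np
import timesfm  # assumes `pip install -e .` from the GitHub repo, per the old README

# Class and repo id taken verbatim from the removed README example.
model = timesfm.TimesFM_2p5_200M_torch.from_pretrained("google/timesfm-2.5-200m-pytorch")

# Only two ForecastConfig options are set here; the remaining flags shown in the
# removed example are assumed to have usable defaults.
model.compile(timesfm.ForecastConfig(max_context=1024, max_horizon=256))

point_forecast, quantile_forecast = model.forecast(
    horizon=12,
    inputs=[np.sin(np.linspace(0, 20, 67))],  # one dummy series
)
print(point_forecast.shape)     # (1, 12)
print(quantile_forecast.shape)  # (1, 12, 10): mean, then 10th to 90th quantiles
```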
config.json CHANGED
@@ -6,12 +6,12 @@
   "head_dim": 80,
   "hidden_size": 1280,
   "horizon_length": 128,
-  "quantile_horizon_length": 1024,
   "intermediate_size": 1280,
   "model_type": "timesfm",
   "num_attention_heads": 16,
   "num_hidden_layers": 20,
   "patch_length": 32,
+  "quantile_horizon_length": 1024,
   "quantiles": [
     0.1,
     0.2,
@@ -23,5 +23,6 @@
     0.8,
     0.9
   ],
-  "rms_norm_eps": 1e-06
+  "rms_norm_eps": 1e-06,
+  "torch_compile": false
 }
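The substantive config.json change is the new `torch_compile` flag; `quantile_horizon_length` is only reordered and `rms_norm_eps` keeps its value. A minimal sketch of inspecting those fields, assuming only that the file is plain JSON (the reading code is illustrative, not part of the timesfm package):

```python
import json

# Illustrative only: read the checkpoint's config.json and inspect the
# fields touched by this commit. Field names are taken from the diff above.
with open("config.json") as f:
    cfg = json.load(f)

print(cfg["quantile_horizon_length"])   # 1024 (key reordered, value unchanged)
print(cfg["rms_norm_eps"])              # 1e-06
print(cfg.get("torch_compile", False))  # False; new flag added in this commit
```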
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:2f776efe6245e42b24bc4153ffdf61810140210e4bd3b01fb21f7aa779ab6ce8
+size 925181104
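model.safetensors is tracked with Git LFS, so the pointer above records the refreshed weight file's SHA-256 and byte size. A minimal sketch for checking a locally downloaded copy against those pointer values; the local filename is an assumption, while the hash and size come from the diff above:

```python
import hashlib
import os

# Expected values from the Git LFS pointer in this commit.
EXPECTED_SHA256 = "2f776efe6245e42b24bc4153ffdf61810140210e4bd3b01fb21f7aa779ab6ce8"
EXPECTED_SIZE = 925181104

path = "model.safetensors"  # assumed local download path
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == EXPECTED_SHA256, "hash mismatch"
print("model.safetensors matches the LFS pointer")
```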