# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SOURCEDIR = source
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
[Source: accelerate/docs/Makefile]
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Comparing performance between different device setups
Evaluating and comparing performance across different setups can be quite tricky if you don't know what to look for.
For example, you cannot run the same script with the same batch size on TPU, multi-GPU, and single-GPU setups with Accelerate
and expect your results to line up.
But why?
There are three reasons for this that this tutorial will cover:
1. **Setting the right seeds**
2. **Observed Batch Sizes**
3. **Learning Rates**
## Setting the Seed
While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible:
```python
from accelerate.utils import set_seed
set_seed(42)
```
Why is this important? Under the hood this will set **5** different seed settings:
```python
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# ^^ safe to call this function even if cuda is not available
if is_torch_xla_available():
    xm.set_rng_state(seed)
```
These cover Python's `random` state, NumPy's state, torch's state, torch's CUDA state, and, if TPUs are available, torch_xla's state.
## Observed Batch Sizes
When training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**. This means that
a batch size of 64 on two GPUs is effectively a batch size of 128. As a result, this needs to be accounted for when testing
on a single GPU, and similarly for TPUs.
The below table can be used as a quick reference to try out different batch sizes:
<Tip>
In this example, there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers.
</Tip>
| Single GPU Batch Size | Multi-GPU Equivalent Batch Size | TPU Equivalent Batch Size |
|-----------------------|---------------------------------|---------------------------|
| 256 | 128 | 32 |
| 128 | 64 | 16 |
| 64 | 32 | 8 |
| 32 | 16 | 4 |
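To keep the observed batch size constant across setups, divide a reference batch size by the number of processes. Here is a minimal sketch of that arithmetic; `reference_batch_size` is an illustrative value, not an Accelerate API:

```python
from accelerate import Accelerator

accelerator = Accelerator()
reference_batch_size = 256  # the batch size a single GPU would see

# The batch size to pass to each process's dataloader so that the
# observed (global) batch size stays at `reference_batch_size`.
per_device_batch_size = reference_batch_size // accelerator.num_processes
```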
## Learning Rates
As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/clara-train-sdk/pt/model.html#classification-models-multi-gpu-training)], the learning rate should be scaled *linearly* based on the number of devices present. The below
snippet shows doing so with Accelerate:
<Tip>
Since users can have their own learning rate schedulers defined, we leave this up to the user to decide if they wish to scale their
learning rate or not.
</Tip>
```python
from torch.optim import AdamW

from accelerate import Accelerator

learning_rate = 1e-3
accelerator = Accelerator()
learning_rate *= accelerator.num_processes
# `model` is assumed to be defined earlier in your script
optimizer = AdamW(params=model.parameters(), lr=learning_rate)
```
You will also find that `accelerate` will step the learning rate based on the number of processes being trained on. This is because
of the observed batch size noted earlier. So in the case of 2 GPUs, the learning rate will be stepped twice as often as on a single GPU
to account for the batch size being twice as large (if no changes to the batch size on the single GPU instance are made).
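As a rough sketch of how this comes about (the model here is a placeholder for illustration), the per-process stepping happens once the scheduler is prepared by Accelerate:

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import StepLR

from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(10, 10)  # placeholder model
optimizer = AdamW(model.parameters(), lr=1e-3)
scheduler = StepLR(optimizer, step_size=1000, gamma=0.1)

# After `prepare`, each call to `scheduler.step()` advances the schedule
# `num_processes` times (unless `split_batches=True` is used).
model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler)
```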
## Gradient Accumulation and Mixed Precision
When using gradient accumulation and mixed precision, some degradation in performance is expected due to how gradient averaging
works (accumulation) and to precision loss (mixed precision). This is most visible when comparing the batch-wise loss between different compute
setups. However, the overall loss, metric, and general performance at the end of training should be _roughly_ the same.
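For reference, a minimal sketch combining both features; the model, optimizer, and dataloader are assumed to be defined elsewhere:

```python
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16", gradient_accumulation_steps=2)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    # Gradients are synchronized and the optimizer should step only once
    # every `gradient_accumulation_steps` batches.
    with accelerator.accumulate(model):
        loss = model(batch)  # placeholder forward pass returning a loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```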
[Source: accelerate/docs/source/concept_guides/performance.md]
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Wrapper classes for torch Dataloaders, Optimizers, and Schedulers
These are the internal classes Accelerate uses to prepare objects for distributed training
when calling [`~Accelerator.prepare`].
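As a minimal sketch of where these wrappers appear (the dataloader, model, optimizer, and scheduler are assumed to be defined elsewhere):

```python
from accelerate import Accelerator

accelerator = Accelerator()
# The returned objects are the wrapper classes documented below, e.g. the
# dataloader comes back as a `DataLoaderShard` (or a `DataLoaderDispatcher`
# when `dispatch_batches=True`).
dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler)
```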
## Datasets and DataLoaders
[[autodoc]] data_loader.prepare_data_loader
[[autodoc]] data_loader.skip_first_batches
[[autodoc]] data_loader.BatchSamplerShard
[[autodoc]] data_loader.IterableDatasetShard
[[autodoc]] data_loader.DataLoaderShard
[[autodoc]] data_loader.DataLoaderDispatcher
## Optimizers
[[autodoc]] optimizer.AcceleratedOptimizer
## Schedulers
[[autodoc]] scheduler.AcceleratedScheduler
[Source: accelerate/docs/source/package_reference/torch_wrappers.md]
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerated PyTorch Training on Mac
With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training.
This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.
Apple's Metal Performance Shaders (MPS) backend for PyTorch enables this and can be used via the new `"mps"` device.
It maps computational graphs and primitives onto the MPS Graph framework and onto the tuned kernels provided by MPS.
For more information, please refer to the official documentation: [Introducing Accelerated PyTorch Training on Mac](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/)
and [MPS backend](https://pytorch.org/docs/stable/notes/mps.html).
### Benefits of Training and Inference using Apple Silicon Chips
1. Enables users to train larger networks or batch sizes locally
2. Reduces data retrieval latency and, thanks to the unified memory architecture, gives the GPU direct access to the full memory store,
improving end-to-end performance.
3. Reduces costs associated with cloud-based development or the need for additional local GPUs.
**Pre-requisites**: To install torch with MPS support,
please follow this Medium article: [GPU-Acceleration Comes to PyTorch on M1 Macs](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1).
## How it works out of the box
MPS is enabled by default on macOS machines with MPS-enabled Apple Silicon GPUs.
To disable it, pass the `--cpu` flag to the `accelerate launch` command or answer the corresponding question in the `accelerate config` questionnaire.
You can directly run the following script to test it out on MPS-enabled Apple Silicon machines:
```bash
accelerate launch examples/cv_example.py --data_dir images
```
## A few caveats to be aware of
1. We strongly recommend installing PyTorch >= 1.13 (the nightly version at the time of writing) on your macOS machine.
It has major fixes related to model correctness and performance improvements for transformer-based models.
Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.
2. The distributed backends `gloo` and `nccl` do not work with the `mps` device.
This means that currently only a single GPU of the `mps` device type can be used; a minimal availability check is sketched below.
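A minimal sketch (assuming PyTorch >= 1.12) for checking MPS availability before picking a device:

```python
import torch

# Fall back to the CPU when the MPS backend is unavailable.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
print(f"Using device: {device}")
```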
Finally, please remember that 🤗 `Accelerate` only integrates the MPS backend; if you
have any problems or questions about MPS backend usage, please file an issue on the [PyTorch GitHub](https://github.com/pytorch/pytorch/issues).
[Source: accelerate/docs/source/usage_guides/mps.md]
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from contextlib import contextmanager
from functools import wraps
from typing import Dict, List, Optional, Union
import torch
import torch.nn as nn
from .hooks import (
AlignDevicesHook,
CpuOffload,
UserCpuOffloadHook,
add_hook_to_module,
attach_align_device_hook,
attach_align_device_hook_on_blocks,
)
from .utils import (
OffloadedWeightsLoader,
check_cuda_p2p_ib_support,
check_device_map,
extract_submodules_state_dict,
find_tied_parameters,
get_balanced_memory,
infer_auto_device_map,
is_mlu_available,
is_npu_available,
is_torch_version,
is_xpu_available,
load_checkpoint_in_model,
offload_state_dict,
parse_flag_from_env,
retie_parameters,
)
from .utils.other import recursive_getattr
logger = logging.getLogger(__name__)
@contextmanager
def init_empty_weights(include_buffers: bool = None):
"""
A context manager under which models are initialized with all parameters on the meta device, therefore creating an
empty model. Useful when just initializing the model would blow the available RAM.
Args:
include_buffers (`bool`, *optional*):
Whether or not to also put all buffers on the meta device while initializing.
Example:
```python
import torch.nn as nn
from accelerate import init_empty_weights
# Initialize a model with 100 billion parameters in no time and without using any RAM.
with init_empty_weights():
tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
<Tip warning={true}>
Any model created under this context manager has no weights. As such you can't do something like
`model.to(some_device)` with it. To load weights inside your empty model, see [`load_checkpoint_and_dispatch`].
Make sure to overwrite the default device_map param for [`load_checkpoint_and_dispatch`], otherwise dispatch is not
called.
</Tip>
"""
if include_buffers is None:
include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False)
with init_on_device(torch.device("meta"), include_buffers=include_buffers) as f:
yield f
@contextmanager
def init_on_device(device: torch.device, include_buffers: bool = None):
"""
A context manager under which models are initialized with all parameters on the specified device.
Args:
device (`torch.device`):
Device to initialize all parameters on.
include_buffers (`bool`, *optional*):
Whether or not to also put all buffers on the meta device while initializing.
Example:
```python
import torch.nn as nn
from accelerate import init_on_device
with init_on_device(device=torch.device("cuda")):
tst = nn.Linear(100, 100)  # on `cuda` device
```
"""
if include_buffers is None:
include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False)
# TODO(shingjan): remove the torch version check once older versions are deprecated
if is_torch_version(">=", "2.0") and include_buffers:
with device:
yield
return
old_register_parameter = nn.Module.register_parameter
if include_buffers:
old_register_buffer = nn.Module.register_buffer
def register_empty_parameter(module, name, param):
old_register_parameter(module, name, param)
if param is not None:
param_cls = type(module._parameters[name])
kwargs = module._parameters[name].__dict__
kwargs["requires_grad"] = param.requires_grad
module._parameters[name] = param_cls(module._parameters[name].to(device), **kwargs)
def register_empty_buffer(module, name, buffer, persistent=True):
old_register_buffer(module, name, buffer, persistent=persistent)
if buffer is not None:
module._buffers[name] = module._buffers[name].to(device)
# Patch tensor creation
if include_buffers:
tensor_constructors_to_patch = {
torch_function_name: getattr(torch, torch_function_name)
for torch_function_name in ["empty", "zeros", "ones", "full"]
}
else:
tensor_constructors_to_patch = {}
def patch_tensor_constructor(fn):
def wrapper(*args, **kwargs):
kwargs["device"] = device
return fn(*args, **kwargs)
return wrapper
try:
nn.Module.register_parameter = register_empty_parameter
if include_buffers:
nn.Module.register_buffer = register_empty_buffer
for torch_function_name in tensor_constructors_to_patch.keys():
setattr(torch, torch_function_name, patch_tensor_constructor(getattr(torch, torch_function_name)))
yield
finally:
nn.Module.register_parameter = old_register_parameter
if include_buffers:
nn.Module.register_buffer = old_register_buffer
for torch_function_name, old_torch_function in tensor_constructors_to_patch.items():
setattr(torch, torch_function_name, old_torch_function)
def cpu_offload(
model: nn.Module,
execution_device: Optional[torch.device] = None,
offload_buffers: bool = False,
state_dict: Optional[Dict[str, torch.Tensor]] = None,
preload_module_classes: Optional[List[str]] = None,
):
"""
Activates full CPU offload for a model. As a result, all parameters of the model will be offloaded and only one
copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that
state dict and put on the passed execution device as they are needed, then offloaded again.
Args:
model (`torch.nn.Module`):
The model to offload.
execution_device (`torch.device`, *optional*):
The device on which the forward pass of the model will be executed (should be a GPU). Will default to the
model first parameter device.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to offload the buffers with the model parameters.
state_dict (`Dict[str, torch.Tensor]`, *optional*):
The state dict of the model that will be kept on CPU.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
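Example:

A minimal usage sketch; `MyModel` and `inputs` are illustrative placeholders:

```python
model = cpu_offload(MyModel(), execution_device=torch.device("cuda:0"))
output = model(inputs)  # weights are moved to the GPU as needed, then offloaded again
```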
"""
if execution_device is None:
execution_device = next(iter(model.parameters())).device
if state_dict is None:
state_dict = {n: p.to("cpu") for n, p in model.state_dict().items()}
add_hook_to_module(model, AlignDevicesHook(io_same_device=True), append=True)
attach_align_device_hook(
model,
execution_device=execution_device,
offload=True,
offload_buffers=offload_buffers,
weights_map=state_dict,
preload_module_classes=preload_module_classes,
)
return model
def cpu_offload_with_hook(
model: torch.nn.Module,
execution_device: Optional[Union[int, str, torch.device]] = None,
prev_module_hook: Optional[UserCpuOffloadHook] = None,
):
"""
Offloads a model on the CPU and puts it back to an execution device when executed. The difference with
[`cpu_offload`] is that the model stays on the execution device after the forward and is only offloaded again when
the `offload` method of the returned `hook` is called. Useful for pipelines running a model in a loop.
Args:
model (`torch.nn.Module`):
The model to offload.
execution_device (`str`, `int` or `torch.device`, *optional*):
The device on which the model should be executed. Will default to the MPS device if it's available, then
GPU 0 if there is a GPU, and finally to the CPU.
prev_module_hook (`UserCpuOffloadHook`, *optional*):
The hook sent back by this function for a previous model in the pipeline you are running. If passed, its
offload method will be called just before the forward of the model to which this hook is attached.
Example:
```py
model_1, hook_1 = cpu_offload_with_hook(model_1, cuda_device)
model_2, hook_2 = cpu_offload_with_hook(model_2, cuda_device, prev_module_hook=hook_1)
model_3, hook_3 = cpu_offload_with_hook(model_3, cuda_device, prev_module_hook=hook_2)
hid_1 = model_1(input)
for i in range(50):
# model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop.
hid_2 = model_2(hid_1)
# model2 is offloaded to the CPU just before this forward.
hid_3 = model_3(hid_2)
# For model3, you need to manually call the hook offload method.
hook_3.offload()
```
"""
hook = CpuOffload(execution_device=execution_device, prev_module_hook=prev_module_hook)
add_hook_to_module(model, hook, append=True)
user_hook = UserCpuOffloadHook(model, hook)
return model, user_hook
def disk_offload(
model: nn.Module,
offload_dir: Union[str, os.PathLike],
execution_device: Optional[torch.device] = None,
offload_buffers: bool = False,
preload_module_classes: Optional[List[str]] = None,
):
"""
Activates full disk offload for a model. As a result, all parameters of the model will be offloaded as
memory-mapped arrays in a given folder. During the forward pass, parameters will be accessed from that folder and
put on the passed execution device as they are needed, then offloaded again.
Args:
model (`torch.nn.Module`): The model to offload.
offload_dir (`str` or `os.PathLike`):
The folder in which to offload the model weights (or where the model weights are already offloaded).
execution_device (`torch.device`, *optional*):
The device on which the forward pass of the model will be executed (should be a GPU). Will default to the
model's first parameter device.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to offload the buffers with the model parameters.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
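Example:

A minimal usage sketch; `MyModel` and `inputs` are illustrative placeholders:

```python
model = disk_offload(MyModel(), offload_dir="offload_weights", execution_device=torch.device("cuda:0"))
output = model(inputs)  # weights are memory-mapped from disk as needed
```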
"""
if not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")):
offload_state_dict(offload_dir, model.state_dict())
if execution_device is None:
execution_device = next(iter(model.parameters())).device
weights_map = OffloadedWeightsLoader(save_folder=offload_dir)
add_hook_to_module(model, AlignDevicesHook(io_same_device=True), append=True)
attach_align_device_hook(
model,
execution_device=execution_device,
offload=True,
offload_buffers=offload_buffers,
weights_map=weights_map,
preload_module_classes=preload_module_classes,
)
return model
def dispatch_model(
model: nn.Module,
device_map: Dict[str, Union[str, int, torch.device]],
main_device: Optional[torch.device] = None,
state_dict: Optional[Dict[str, torch.Tensor]] = None,
offload_dir: Optional[Union[str, os.PathLike]] = None,
offload_index: Optional[Dict[str, str]] = None,
offload_buffers: bool = False,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
force_hooks: bool = False,
):
"""
Dispatches a model according to a given device map. Layers of the model might be spread across GPUs, offloaded on
the CPU or even the disk.
Args:
model (`torch.nn.Module`):
The model to dispatch.
device_map (`Dict[str, Union[str, int, torch.device]]`):
A dictionary mapping module names in the model's `state_dict` to the device they should go to. Note that
`"disk"` is accepted even if it's not a proper value for `torch.device`.
main_device (`str`, `int` or `torch.device`, *optional*):
The main execution device. Will default to the first device in the `device_map` different from `"cpu"` or
`"disk"`.
state_dict (`Dict[str, torch.Tensor]`, *optional*):
The state dict of the part of the model that will be kept on CPU.
offload_dir (`str` or `os.PathLike`):
The folder in which to offload the model weights (or where the model weights are already offloaded).
offload_index (`Dict`, *optional*):
A dictionary from weight name to their information (`dtype`/ `shape` or safetensors filename). Will default
to the index saved in `save_folder`.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to offload the buffers with the model parameters.
skip_keys (`str` or `List[str]`, *optional*):
A list of keys to ignore when moving inputs or outputs between devices.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
force_hooks (`bool`, *optional*, defaults to `False`):
Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a
single device.
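Example:

An illustrative sketch splitting a model between GPU 0 and the CPU; the module names are placeholders that
should match keys of the model's `state_dict`:

```python
device_map = {"encoder": 0, "decoder": "cpu"}
model = dispatch_model(model, device_map=device_map)
```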
"""
# Error early if the device map is incomplete.
check_device_map(model, device_map)
# for backward compatibility
is_bnb_quantized = (
getattr(model, "is_quantized", False) or getattr(model, "is_loaded_in_8bit", False)
) and getattr(model, "quantization_method", "bitsandbytes") == "bitsandbytes"
# We attach hooks if the device_map has at least 2 different devices or if
# force_hooks is set to `True`. Otherwise, the model is already loaded
# on the unique device and the user can decide where to dispatch the model.
# If the model is quantized, we always force-dispatch the model
if (len(set(device_map.values())) > 1) or is_bnb_quantized or force_hooks:
if main_device is None:
if set(device_map.values()) == {"cpu"} or set(device_map.values()) == {"cpu", "disk"}:
main_device = "cpu"
else:
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
if main_device != "cpu":
cpu_modules = [name for name, device in device_map.items() if device == "cpu"]
if state_dict is None and len(cpu_modules) > 0:
state_dict = extract_submodules_state_dict(model.state_dict(), cpu_modules)
disk_modules = [name for name, device in device_map.items() if device == "disk"]
if offload_dir is None and offload_index is None and len(disk_modules) > 0:
raise ValueError(
"We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules "
f"need to be offloaded: {', '.join(disk_modules)}."
)
if (
len(disk_modules) > 0
and offload_index is None
and (not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")))
):
disk_state_dict = extract_submodules_state_dict(model.state_dict(), disk_modules)
offload_state_dict(offload_dir, disk_state_dict)
execution_device = {
name: main_device if device in ["cpu", "disk"] else device for name, device in device_map.items()
}
execution_device[""] = main_device
offloaded_devices = ["disk"] if main_device == "cpu" or main_device == "mps" else ["cpu", "disk"]
offload = {name: device in offloaded_devices for name, device in device_map.items()}
save_folder = offload_dir if len(disk_modules) > 0 else None
if state_dict is not None or save_folder is not None or offload_index is not None:
device = main_device if offload_index is not None else None
weights_map = OffloadedWeightsLoader(
state_dict=state_dict, save_folder=save_folder, index=offload_index, device=device
)
else:
weights_map = None
# When dispatching the model's parameters to the devices specified in device_map, we want to avoid allocating memory several times for the
# tied parameters. The dictionary tied_params_map keeps track of the already allocated data for a given tied parameter (represented by its
# original pointer) on each device.
tied_params = find_tied_parameters(model)
tied_params_map = {}
for group in tied_params:
for param_name in group:
# data_ptr() is enough here, as `find_tied_parameters` finds tied params simply by comparing `param1 is param2`, so we don't need
# to care about views of tensors through storage_offset.
data_ptr = recursive_getattr(model, param_name).data_ptr()
tied_params_map[data_ptr] = {}
# Note: To handle the disk offloading case, we cannot simply use weights_map[param_name].data_ptr() as the reference pointer,
# as we have no guarantee that safetensors' `file.get_tensor()` will always give the same pointer.
attach_align_device_hook_on_blocks(
model,
execution_device=execution_device,
offload=offload,
offload_buffers=offload_buffers,
weights_map=weights_map,
skip_keys=skip_keys,
preload_module_classes=preload_module_classes,
tied_params_map=tied_params_map,
)
# Warn if any parameters are on the meta device
offloaded_devices_str = " and ".join(
[device for device in set(device_map.values()) if device in ("cpu", "disk")]
)
if len(offloaded_devices_str) > 0:
logger.warning(
f"Some parameters are on the meta device because they were offloaded to the {offloaded_devices_str}."
)
# Attaching the hook may break tied weights, so we retie them
retie_parameters(model, tied_params)
# Add a warning to `.to` and the device-specific moving methods
def add_warning(fn, model):
@wraps(fn)
def wrapper(*args, **kwargs):
warning_msg = "You shouldn't move a model that is dispatched using accelerate hooks."
if str(fn.__name__) == "to":
to_device = torch._C._nn._parse_to(*args, **kwargs)[0]
if to_device is not None:
logger.warning(warning_msg)
else:
logger.warning(warning_msg)
for param in model.parameters():
if param.device == torch.device("meta"):
raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.")
return fn(*args, **kwargs)
return wrapper
model.to = add_warning(model.to, model)
if is_npu_available():
model.npu = add_warning(model.npu, model)
elif is_mlu_available():
model.mlu = add_warning(model.mlu, model)
elif is_xpu_available():
model.xpu = add_warning(model.xpu, model)
else:
model.cuda = add_warning(model.cuda, model)
# Check if we are using multi-gpus with RTX 4000 series
use_multi_gpu = len([device for device in set(device_map.values()) if device not in ("cpu", "disk")]) > 1
if use_multi_gpu and not check_cuda_p2p_ib_support():
logger.warning(
"We've detected an older driver with an RTX 4000 series GPU. These drivers have issues with P2P. "
"This can affect the multi-gpu inference when using accelerate device_map."
"Please make sure to update your driver to the latest version which resolves this."
)
else:
device = list(device_map.values())[0]
# `torch.Tensor.to(<int num>)` is not supported by `torch_npu` (see this [issue](https://github.com/Ascend/pytorch/issues/16)).
if is_npu_available() and isinstance(device, int):
device = f"npu:{device}"
elif is_mlu_available() and isinstance(device, int):
device = f"mlu:{device}"
elif is_xpu_available() and isinstance(device, int):
device = f"xpu:{device}"
if device != "disk":
model.to(device)
else:
raise ValueError(
"You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead."
)
# Convert OrderedDict back to dict for easier usage
model.hf_device_map = dict(device_map)
return model
def load_checkpoint_and_dispatch(
model: nn.Module,
checkpoint: Union[str, os.PathLike],
device_map: Optional[Union[str, Dict[str, Union[int, str, torch.device]]]] = None,
max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None,
no_split_module_classes: Optional[List[str]] = None,
offload_folder: Optional[Union[str, os.PathLike]] = None,
offload_buffers: bool = False,
dtype: Optional[Union[str, torch.dtype]] = None,
offload_state_dict: Optional[bool] = None,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
force_hooks: bool = False,
):
"""
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are
loaded and adds the various hooks that will make this model run properly (even if split across devices).
Args:
model (`torch.nn.Module`): The model in which we want to load a checkpoint.
checkpoint (`str` or `os.PathLike`):
The folder checkpoint to load. It can be:
- a path to a file containing a whole model state dict
- a path to a `.json` file containing the index to a sharded checkpoint
- a path to a folder containing a unique `.index.json` file and the shards of a checkpoint.
device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*):
A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer
name; once a given module name is included, every submodule of it will be sent to the same device.
To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For more
information about each option see [here](../concept_guides/big_model_inference#designing-a-device-map).
Defaults to None, which means [`dispatch_model`] will not be called.
max_memory (`Dict`, *optional*):
A dictionary mapping device identifiers to their maximum memory. Will default to the maximum memory available for each GPU
and the available CPU RAM if unset.
no_split_module_classes (`List[str]`, *optional*):
A list of layer class names that should never be split across devices (for instance any layer that has a
residual connection).
offload_folder (`str` or `os.PathLike`, *optional*):
If the `device_map` contains any value `"disk"`, the folder where we will offload weights.
offload_buffers (`bool`, *optional*, defaults to `False`):
In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as
well as the parameters.
dtype (`str` or `torch.dtype`, *optional*):
If provided, the weights will be converted to that type when loaded.
offload_state_dict (`bool`, *optional*):
If `True`, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if
the weight of the CPU state dict + the biggest shard does not fit. Will default to `True` if the device map
picked contains `"disk"` values.
skip_keys (`str` or `List[str]`, *optional*):
A list of keys to ignore when moving inputs or outputs between devices.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
force_hooks (`bool`, *optional*, defaults to `False`):
Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a
single device.
Example:
```python
>>> from accelerate import init_empty_weights, load_checkpoint_and_dispatch
>>> from huggingface_hub import hf_hub_download
>>> from transformers import AutoConfig, AutoModelForCausalLM
>>> # Download the Weights
>>> checkpoint = "EleutherAI/gpt-j-6B"
>>> weights_location = hf_hub_download(checkpoint, "pytorch_model.bin")
>>> # Create a model and initialize it with empty weights
>>> config = AutoConfig.from_pretrained(checkpoint)
>>> with init_empty_weights():
... model = AutoModelForCausalLM.from_config(config)
>>> # Load the checkpoint and dispatch it to the right devices
>>> model = load_checkpoint_and_dispatch(
... model, weights_location, device_map="auto", no_split_module_classes=["GPTJBlock"]
... )
```
"""
if isinstance(device_map, str) and device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]:
raise ValueError(
"If passing a string for `device_map`, please choose 'auto', 'balanced', 'balanced_low_0' or "
"'sequential'."
)
if isinstance(device_map, str):
if device_map != "sequential":
max_memory = get_balanced_memory(
model,
max_memory=max_memory,
no_split_module_classes=no_split_module_classes,
dtype=dtype,
low_zero=(device_map == "balanced_low_0"),
)
device_map = infer_auto_device_map(
model,
max_memory=max_memory,
no_split_module_classes=no_split_module_classes,
dtype=dtype,
offload_buffers=offload_buffers,
)
if offload_state_dict is None and device_map is not None and "disk" in device_map.values():
offload_state_dict = True
load_checkpoint_in_model(
model,
checkpoint,
device_map=device_map,
offload_folder=offload_folder,
dtype=dtype,
offload_state_dict=offload_state_dict,
offload_buffers=offload_buffers,
)
if device_map is None:
return model
return dispatch_model(
model,
device_map=device_map,
offload_dir=offload_folder,
offload_buffers=offload_buffers,
skip_keys=skip_keys,
preload_module_classes=preload_module_classes,
force_hooks=force_hooks,
)
[Source: accelerate/src/accelerate/big_modeling.py]
# Copyright 2022 The HuggingFace Team and Brian Chao. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A utility for showing and hiding the terminal cursor on Windows and Linux, based on https://github.com/bchao1/bullet
"""
import os
import sys
from contextlib import contextmanager
# Windows only
if os.name == "nt":
import ctypes
import msvcrt # noqa
class CursorInfo(ctypes.Structure):
# _fields_ is a specific attr expected by ctypes
_fields_ = [("size", ctypes.c_int), ("visible", ctypes.c_byte)]
def hide_cursor():
if os.name == "nt":
ci = CursorInfo()
handle = ctypes.windll.kernel32.GetStdHandle(-11)
ctypes.windll.kernel32.GetConsoleCursorInfo(handle, ctypes.byref(ci))
ci.visible = False
ctypes.windll.kernel32.SetConsoleCursorInfo(handle, ctypes.byref(ci))
elif os.name == "posix":
sys.stdout.write("\033[?25l")
sys.stdout.flush()
def show_cursor():
if os.name == "nt":
ci = CursorInfo()
handle = ctypes.windll.kernel32.GetStdHandle(-11)
ctypes.windll.kernel32.GetConsoleCursorInfo(handle, ctypes.byref(ci))
ci.visible = True
ctypes.windll.kernel32.SetConsoleCursorInfo(handle, ctypes.byref(ci))
elif os.name == "posix":
sys.stdout.write("\033[?25h")
sys.stdout.flush()
@contextmanager
def hide():
"Context manager to hide the terminal cursor"
try:
hide_cursor()
yield
finally:
show_cursor()
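# Illustrative usage (the render function is a placeholder): wrap interactive
# drawing code so the cursor is restored even if an exception is raised.
#
#     with hide():
#         render_menu()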
[Source: accelerate/src/accelerate/commands/menu/cursor.py]
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# We ignore warnings about stepping the scheduler since we step it ourselves during gradient accumulation
import warnings
from .state import AcceleratorState, GradientState
warnings.filterwarnings("ignore", category=UserWarning, module="torch.optim.lr_scheduler")
class AcceleratedScheduler:
"""
A wrapper around a learning rate scheduler that will only step when the optimizer(s) have a training step. Useful
to avoid making a scheduler step too fast when gradients overflowed and there was no training step (in mixed
precision training).
When performing gradient accumulation, scheduler lengths should not be changed; Accelerate will always
step the scheduler to account for it.
Args:
scheduler (`torch.optim.lr_scheduler._LRScheduler`):
The scheduler to wrap.
optimizers (one or a list of `torch.optim.Optimizer`):
The optimizers used.
step_with_optimizer (`bool`, *optional*, defaults to `True`):
Whether or not the scheduler should be stepped at each optimizer step.
split_batches (`bool`, *optional*, defaults to `False`):
Whether or not the dataloaders split one batch across the different processes (so batch size is the same
regardless of the number of processes) or create batches on each process (so batch size is the original
batch size multiplied by the number of processes).
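Example:

An illustrative sketch; in practice this wrapper is created for you by [`Accelerator.prepare`]:

```python
scheduler = AcceleratedScheduler(scheduler, optimizers=optimizer)
scheduler.step()  # only advances the schedule when the wrapped optimizer actually stepped
```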
"""
def __init__(self, scheduler, optimizers, step_with_optimizer: bool = True, split_batches: bool = False):
self.scheduler = scheduler
self.optimizers = optimizers if isinstance(optimizers, (list, tuple)) else [optimizers]
self.split_batches = split_batches
self.step_with_optimizer = step_with_optimizer
self.gradient_state = GradientState()
def step(self, *args, **kwargs):
if not self.step_with_optimizer:
# No link between scheduler and optimizer -> just step
self.scheduler.step(*args, **kwargs)
return
# Otherwise, first make sure the optimizer was stepped.
if not self.gradient_state.sync_gradients:
if self.gradient_state.adjust_scheduler:
self.scheduler._step_count += 1
return
for opt in self.optimizers:
if opt.step_was_skipped:
return
if self.split_batches:
# Split batches -> the training dataloader batch size is not changed so one step per training step
self.scheduler.step(*args, **kwargs)
else:
# Otherwise the training dataloader batch size was multiplied by `num_processes`, so we need to do
# num_processes steps per training step
num_processes = AcceleratorState().num_processes
for _ in range(num_processes):
# Special case when using OneCycle and `drop_last` was not used
if hasattr(self.scheduler, "total_steps"):
if self.scheduler._step_count <= self.scheduler.total_steps:
self.scheduler.step(*args, **kwargs)
else:
self.scheduler.step(*args, **kwargs)
# Passthroughs
def get_last_lr(self):
return self.scheduler.get_last_lr()
def state_dict(self):
return self.scheduler.state_dict()
def load_state_dict(self, state_dict):
self.scheduler.load_state_dict(state_dict)
def get_lr(self):
return self.scheduler.get_lr()
def print_lr(self, *args, **kwargs):
return self.scheduler.print_lr(*args, **kwargs)
[Source: accelerate/src/accelerate/scheduler.py]
#!/usr/bin/env python
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import io
import math
import time
from copy import deepcopy
from pathlib import Path
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset
from accelerate import Accelerator
from accelerate.data_loader import SeedableRandomSampler, prepare_data_loader
from accelerate.state import AcceleratorState
from accelerate.test_utils import RegressionDataset, are_the_same_tensors
from accelerate.utils import (
DataLoaderConfiguration,
DistributedType,
gather,
is_bf16_available,
is_datasets_available,
is_ipex_available,
is_mlu_available,
is_npu_available,
is_xpu_available,
set_seed,
synchronize_rng_states,
)
# TODO: remove RegressionModel4XPU once ccl supports empty buffers in broadcasting.
if is_xpu_available():
from accelerate.test_utils import RegressionModel4XPU as RegressionModel
else:
from accelerate.test_utils import RegressionModel
def generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler=False):
"Creates a dataloader that can also use the `SeedableRandomSampler`"
if use_seedable_sampler:
# The SeedableRandomSampler is needed during distributed setups
# for full reproducibility across processes with the `DataLoader`
sampler = SeedableRandomSampler(
generator=generator,
data_source=train_set,
num_samples=len(train_set),
)
return DataLoader(train_set, batch_size=batch_size, sampler=sampler)
else:
return DataLoader(train_set, batch_size=batch_size, shuffle=True, generator=generator)
def print_main(state):
print(f"Printing from the main process {state.process_index}")
def print_local_main(state):
print(f"Printing from the local main process {state.local_process_index}")
def print_last(state):
print(f"Printing from the last process {state.process_index}")
def print_on(state, process_idx):
print(f"Printing from process {process_idx}: {state.process_index}")
def process_execution_check():
accelerator = Accelerator()
num_processes = accelerator.num_processes
# Test main_process_first context manager
path = Path("check_main_process_first.txt")
with accelerator.main_process_first():
if accelerator.is_main_process:
time.sleep(0.1) # ensure main process takes longest
with open(path, "a+") as f:
f.write("Currently in the main process\n")
else:
with open(path, "a+") as f:
f.write("Now on another process\n")
accelerator.wait_for_everyone()
if accelerator.is_main_process:
with open(path) as f:
text = "".join(f.readlines())
try:
assert text.startswith("Currently in the main process\n"), "Main process was not first"
if num_processes > 1:
assert text.endswith("Now on another process\n"), "A non-main process did not write last"
assert (
text.count("Now on another process\n") == accelerator.num_processes - 1
), f"Only wrote to file {text.count('Now on another process') + 1} times, not {accelerator.num_processes}"
except AssertionError:
path.unlink()
raise
if accelerator.is_main_process and path.exists():
path.unlink()
accelerator.wait_for_everyone()
# Test the decorators
f = io.StringIO()
with contextlib.redirect_stdout(f):
accelerator.on_main_process(print_main)(accelerator.state)
result = f.getvalue().rstrip()
if accelerator.is_main_process:
assert result == "Printing from the main process 0", f"{result} != Printing from the main process 0"
else:
assert f.getvalue().rstrip() == "", f'{result} != ""'
f.truncate(0)
f.seek(0)
with contextlib.redirect_stdout(f):
accelerator.on_local_main_process(print_local_main)(accelerator.state)
if accelerator.is_local_main_process:
assert f.getvalue().rstrip() == "Printing from the local main process 0"
else:
assert f.getvalue().rstrip() == ""
f.truncate(0)
f.seek(0)
with contextlib.redirect_stdout(f):
accelerator.on_last_process(print_last)(accelerator.state)
if accelerator.is_last_process:
assert f.getvalue().rstrip() == f"Printing from the last process {accelerator.state.num_processes - 1}"
else:
assert f.getvalue().rstrip() == ""
f.truncate(0)
f.seek(0)
for process_idx in range(num_processes):
with contextlib.redirect_stdout(f):
accelerator.on_process(print_on, process_index=process_idx)(accelerator.state, process_idx)
if accelerator.process_index == process_idx:
assert f.getvalue().rstrip() == f"Printing from process {process_idx}: {accelerator.process_index}"
else:
assert f.getvalue().rstrip() == ""
f.truncate(0)
f.seek(0)
def init_state_check():
# Test we can instantiate this twice in a row.
state = AcceleratorState()
if state.local_process_index == 0:
print("Testing, testing. 1, 2, 3.")
print(state)
def rng_sync_check():
state = AcceleratorState()
synchronize_rng_states(["torch"])
assert are_the_same_tensors(torch.get_rng_state()), "RNG states improperly synchronized on CPU."
if state.distributed_type == DistributedType.MULTI_GPU:
synchronize_rng_states(["cuda"])
assert are_the_same_tensors(torch.cuda.get_rng_state()), "RNG states improperly synchronized on GPU."
elif state.distributed_type == DistributedType.MULTI_XPU:
synchronize_rng_states(["xpu"])
assert are_the_same_tensors(torch.xpu.get_rng_state()), "RNG states improperly synchronized on XPU."
generator = torch.Generator()
synchronize_rng_states(["generator"], generator=generator)
assert are_the_same_tensors(generator.get_state()), "RNG states improperly synchronized in generator."
if state.local_process_index == 0:
print("All rng are properly synched.")
def dl_preparation_check():
state = AcceleratorState()
length = 32 * state.num_processes
dl = DataLoader(range(length), batch_size=8)
dl = prepare_data_loader(dl, state.device, state.num_processes, state.process_index, put_on_device=True)
result = []
for batch in dl:
result.append(gather(batch))
result = torch.cat(result)
print(state.process_index, result, type(dl))
assert torch.equal(result.cpu(), torch.arange(0, length).long()), "Wrong non-shuffled dataloader result."
dl = DataLoader(range(length), batch_size=8)
dl = prepare_data_loader(
dl,
state.device,
state.num_processes,
state.process_index,
put_on_device=True,
split_batches=True,
)
result = []
for batch in dl:
result.append(gather(batch))
result = torch.cat(result)
assert torch.equal(result.cpu(), torch.arange(0, length).long()), "Wrong non-shuffled dataloader result."
if state.process_index == 0:
print("Non-shuffled dataloader passing.")
dl = DataLoader(range(length), batch_size=8, shuffle=True)
dl = prepare_data_loader(dl, state.device, state.num_processes, state.process_index, put_on_device=True)
result = []
for batch in dl:
result.append(gather(batch))
result = torch.cat(result).tolist()
result.sort()
assert result == list(range(length)), "Wrong shuffled dataloader result."
dl = DataLoader(range(length), batch_size=8, shuffle=True)
dl = prepare_data_loader(
dl,
state.device,
state.num_processes,
state.process_index,
put_on_device=True,
split_batches=True,
)
result = []
for batch in dl:
result.append(gather(batch))
result = torch.cat(result).tolist()
result.sort()
assert result == list(range(length)), "Wrong shuffled dataloader result."
if state.local_process_index == 0:
print("Shuffled dataloader passing.")
def central_dl_preparation_check():
state = AcceleratorState()
length = 32 * state.num_processes
dl = DataLoader(range(length), batch_size=8)
dl = prepare_data_loader(
dl, state.device, state.num_processes, state.process_index, put_on_device=True, dispatch_batches=True
)
result = []
for batch in dl:
result.append(gather(batch))
result = torch.cat(result)
assert torch.equal(result.cpu(), torch.arange(0, length).long()), "Wrong non-shuffled dataloader result."
dl = DataLoader(range(length), batch_size=8)
dl = prepare_data_loader(
dl,
state.device,
state.num_processes,
state.process_index,
put_on_device=True,
split_batches=True,
dispatch_batches=True,
)
result = []
for batch in dl:
result.append(gather(batch))
result = torch.cat(result)
assert torch.equal(result.cpu(), torch.arange(0, length).long()), "Wrong non-shuffled dataloader result."
if state.process_index == 0:
print("Non-shuffled central dataloader passing.")
dl = DataLoader(range(length), batch_size=8, shuffle=True)
dl = prepare_data_loader(
dl, state.device, state.num_processes, state.process_index, put_on_device=True, dispatch_batches=True
)
result = []
for batch in dl:
result.append(gather(batch))
result = torch.cat(result).tolist()
result.sort()
assert result == list(range(length)), "Wrong shuffled dataloader result."
dl = DataLoader(range(length), batch_size=8, shuffle=True)
dl = prepare_data_loader(
dl,
state.device,
state.num_processes,
state.process_index,
put_on_device=True,
split_batches=True,
dispatch_batches=True,
)
result = []
for batch in dl:
result.append(gather(batch))
result = torch.cat(result).tolist()
result.sort()
assert result == list(range(length)), "Wrong shuffled dataloader result."
if state.local_process_index == 0:
print("Shuffled central dataloader passing.")
def custom_sampler_check():
state = AcceleratorState()
class CustomDataset(Dataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __getitem__(self, index):
return self.data[index]
class CustomBatchSampler:
def __init__(self, dataset_length: int, batch_size: int, shuffle: bool = True):
self.batch_size = batch_size
self.data_index = np.arange(dataset_length)
self.shuffle = shuffle
def __iter__(self):
num_batches = len(self)
if self.shuffle:
index = np.random.permutation(self.data_index)
else:
index = self.data_index
output = np.array_split(index, num_batches)
yield from output
def __len__(self):
return math.ceil(len(self.data_index) / self.batch_size)
dataset = CustomDataset(range(32 * state.num_processes))
sampler = CustomBatchSampler(len(dataset), batch_size=8)
dl = DataLoader(dataset, batch_sampler=sampler)
dl = prepare_data_loader(dl, state.device, state.num_processes, state.process_index)
# We just need to ensure that `dl.batch_sampler` (or `dl.batch_sampler.batch_sampler`) is indeed the old batch sampler
if hasattr(dl.batch_sampler, "batch_sampler"):
assert isinstance(
dl.batch_sampler.batch_sampler, CustomBatchSampler
), "Custom sampler was changed after calling `prepare_data_loader`"
else:
assert isinstance(
dl.batch_sampler, CustomBatchSampler
), "Custom sampler was changed after calling `prepare_data_loader`"
def check_seedable_sampler():
# Set seed
set_seed(42)
train_set = RegressionDataset(length=10, seed=42)
train_dl = DataLoader(train_set, batch_size=2, shuffle=True)
config = DataLoaderConfiguration(use_seedable_sampler=True)
accelerator = Accelerator(dataloader_config=config)
train_dl = accelerator.prepare(train_dl)
original_items = []
for _ in range(3):
for batch in train_dl:
original_items.append(batch["x"])
original_items = torch.cat(original_items)
# Set seed again and the epoch
set_seed(42)
train_dl.set_epoch(0)
new_items = []
for _ in range(3):
for batch in train_dl:
new_items.append(batch["x"])
new_items = torch.cat(new_items)
assert torch.allclose(original_items, new_items), "Did not obtain the same items with the same seed and epoch."
def check_seedable_sampler_in_batch_sampler_shard():
set_seed(42)
config = DataLoaderConfiguration(use_seedable_sampler=True)
accelerator = Accelerator(dataloader_config=config)
assert accelerator.num_processes > 1, "This test requires more than one process."
dataloader = DataLoader(list(range(10)), batch_size=1, shuffle=True)
prepared_data_loader = prepare_data_loader(
dataloader=dataloader,
use_seedable_sampler=True,
)
target_sampler = prepared_data_loader.batch_sampler.batch_sampler.sampler
assert isinstance(
target_sampler, SeedableRandomSampler
), "Sampler in BatchSamplerShard is not SeedableRandomSampler."
def mock_training(length, batch_size, generator, use_seedable_sampler=False):
set_seed(42)
generator.manual_seed(42)
train_set = RegressionDataset(length=length, seed=42)
train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler)
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(3):
for batch in train_dl:
model.zero_grad()
output = model(batch["x"])
loss = torch.nn.functional.mse_loss(output, batch["y"])
loss.backward()
optimizer.step()
return train_set, model
def training_check(use_seedable_sampler=False):
state = AcceleratorState()
generator = torch.Generator()
batch_size = 8
length = batch_size * 4 * state.num_processes
train_set, old_model = mock_training(length, batch_size * state.num_processes, generator, use_seedable_sampler)
assert are_the_same_tensors(old_model.a), "Did not obtain the same model on both processes."
assert are_the_same_tensors(old_model.b), "Did not obtain the same model on both processes."
accelerator = Accelerator()
train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler)
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer)
set_seed(42)
generator.manual_seed(42)
for _ in range(3):
for batch in train_dl:
model.zero_grad()
output = model(batch["x"])
loss = torch.nn.functional.mse_loss(output, batch["y"])
accelerator.backward(loss)
optimizer.step()
model = accelerator.unwrap_model(model).cpu()
assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training."
assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training."
accelerator.print("Training yielded the same results on one CPU or distributed setup with no batch split.")
dataloader_config = DataLoaderConfiguration(split_batches=True, use_seedable_sampler=use_seedable_sampler)
accelerator = Accelerator(dataloader_config=dataloader_config)
train_dl = generate_baseline_dataloader(
train_set, generator, batch_size * state.num_processes, use_seedable_sampler
)
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer)
set_seed(42)
generator.manual_seed(42)
for _ in range(3):
for batch in train_dl:
model.zero_grad()
output = model(batch["x"])
loss = torch.nn.functional.mse_loss(output, batch["y"])
accelerator.backward(loss)
optimizer.step()
model = accelerator.unwrap_model(model).cpu()
assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training."
assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training."
accelerator.print("Training yielded the same results on one CPU or distributed setup with batch split.")
if torch.cuda.is_available() or is_npu_available() or is_mlu_available():
# Mostly a test that FP16 doesn't crash as the operation inside the model is not converted to FP16
print("FP16 training check.")
AcceleratorState._reset_state()
dataloader_config = DataLoaderConfiguration(use_seedable_sampler=use_seedable_sampler)
accelerator = Accelerator(mixed_precision="fp16", dataloader_config=dataloader_config)
train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler)
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer)
set_seed(42)
generator.manual_seed(42)
for _ in range(3):
for batch in train_dl:
model.zero_grad()
output = model(batch["x"])
loss = torch.nn.functional.mse_loss(output, batch["y"])
accelerator.backward(loss)
optimizer.step()
model = accelerator.unwrap_model(model).cpu()
assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training."
assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training."
if torch.cuda.is_available():
# Mostly a test that model.forward will have autocast when running unwrap_model(model, keep_fp32_wrapper=True)
print("Keep fp32 wrapper check.")
AcceleratorState._reset_state()
accelerator = Accelerator(mixed_precision="fp16")
model = torch.nn.Linear(2, 4)
model = accelerator.prepare(model)
model_with_fp32_wrapper = accelerator.unwrap_model(model, keep_fp32_wrapper=True)
# Run forward with fp16 as input.
# When the model is with mixed precision wrapper, no error will be raised.
input_tensor = torch.Tensor([1, 2]).to(dtype=torch.float16, device=accelerator.device)
output = model_with_fp32_wrapper(input_tensor)
# BF16 support is only for CPU + TPU, and some GPUs
if is_bf16_available():
# Mostly a test that BF16 doesn't crash as the operation inside the model is not converted to BF16
print("BF16 training check.")
AcceleratorState._reset_state()
dataloader_config = DataLoaderConfiguration(use_seedable_sampler=use_seedable_sampler)
accelerator = Accelerator(mixed_precision="bf16", dataloader_config=dataloader_config)
train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler)
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer)
set_seed(42)
generator.manual_seed(42)
for _ in range(3):
for batch in train_dl:
model.zero_grad()
output = model(batch["x"])
loss = torch.nn.functional.mse_loss(output, batch["y"])
accelerator.backward(loss)
optimizer.step()
model = accelerator.unwrap_model(model).cpu()
assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training."
assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training."
# IPEX support is only for CPU
if is_ipex_available():
print("ipex BF16 training check.")
AcceleratorState._reset_state()
dataloader_config = DataLoaderConfiguration(use_seedable_sampler=use_seedable_sampler)
accelerator = Accelerator(mixed_precision="bf16", cpu=True, dataloader_config=dataloader_config)
train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler)
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer)
set_seed(42)
generator.manual_seed(42)
for _ in range(3):
for batch in train_dl:
model.zero_grad()
output = model(batch["x"])
loss = torch.nn.functional.mse_loss(output, batch["y"])
accelerator.backward(loss)
optimizer.step()
model = accelerator.unwrap_model(model).cpu()
assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training."
assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training."
# XPU BF16 training check (requires an XPU device)
if is_xpu_available():
print("xpu BF16 training check.")
AcceleratorState._reset_state()
dataloader_config = DataLoaderConfiguration(use_seedable_sampler=use_seedable_sampler)
accelerator = Accelerator(mixed_precision="bf16", cpu=False, dataloader_config=dataloader_config)
train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler)
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer)
set_seed(42)
generator.manual_seed(42)
for _ in range(3):
for batch in train_dl:
model.zero_grad()
output = model(batch["x"])
loss = torch.nn.functional.mse_loss(output, batch["y"])
accelerator.backward(loss)
optimizer.step()
model = accelerator.unwrap_model(model).cpu()
assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on XPU or distributed training."
assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on XPU or distributed training."
def test_split_between_processes_dataset(datasets_Dataset):
state = AcceleratorState()
data = datasets_Dataset.from_list([dict(k=v) for v in range(2 * state.num_processes)])
with state.split_between_processes(data, apply_padding=False) as results:
assert (
len(results) == 2
), f"Each process did not have two items. Process index: {state.process_index}; Length: {len(results)}"
data = datasets_Dataset.from_list([dict(k=v) for v in range(2 * state.num_processes - 1)])
with state.split_between_processes(data, apply_padding=False) as results:
if state.is_last_process:
assert (
len(results) == 1
), f"Last process did not receive a single item. Process index: {state.process_index}; Length: {len(results)}"
else:
assert (
len(results) == 2
), f"One of the intermediate processes did not receive two items. Process index: {state.process_index}; Length: {len(results)}"
data = datasets_Dataset.from_list([dict(k=v) for v in range(2 * state.num_processes - 1)])
with state.split_between_processes(data, apply_padding=True) as results:
if state.num_processes == 1:
assert (
len(results) == 1
), f"Single process did not receive a single item. Process index: {state.process_index}; Length: {len(results)}"
else:
assert (
len(results) == 2
), f"Each process did not have two items. Process index: {state.process_index}; Length: {len(results)}"
state.wait_for_everyone()
def test_split_between_processes_list():
state = AcceleratorState()
data = list(range(0, 2 * state.num_processes))
with state.split_between_processes(data) as results:
assert (
len(results) == 2
), f"Each process did not have two items. Process index: {state.process_index}; Length: {len(results)}"
data = list(range(0, (3 * state.num_processes) - 1))
with state.split_between_processes(data, apply_padding=True) as results:
if state.is_last_process:
# Test that the last process gets the extra item(s)
num_samples_per_device = math.ceil(len(data) / state.num_processes)
assert (
len(results) == num_samples_per_device
), f"Last process did not get the extra item(s). Process index: {state.process_index}; Length: {len(results)}"
state.wait_for_everyone()
def test_split_between_processes_nested_dict():
state = AcceleratorState()
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = ["a", "b", "c", "d", "e", "f", "g", "h"]
c = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8])
if state.num_processes in (1, 2, 4):
data = {"a": a, "b": b, "c": c}
data_copy = deepcopy(data)
with state.split_between_processes(data) as results:
if state.process_index == 0:
assert results["a"] == data_copy["a"][: 8 // state.num_processes]
elif state.num_processes == 2:
assert results["a"] == data_copy["a"][4:]
elif state.process_index == 3:
# We return a list each time
assert results["a"] == data_copy["a"][-2:], f'Expected: {data_copy["a"][-2]}, Actual: {results["a"]}'
if state.process_index == 0:
assert results["b"] == data_copy["b"][: 8 // state.num_processes]
elif state.num_processes == 2:
assert results["b"] == data_copy["b"][4:]
elif state.process_index == 3:
assert results["b"] == data_copy["b"][-2:]
if state.process_index == 0:
assert torch.allclose(
results["c"], data_copy["c"][: 8 // state.num_processes]
), f"Did not obtain expected values on process 0, expected `{data['c'][:8 // state.num_processes]}`, received: {results['c']}"
elif state.num_processes == 2:
assert torch.allclose(
results["c"], data_copy["c"][4:]
), f"Did not obtain expected values on process 1, expected `{data['c'][4:]}`, received: {results['c']}"
elif state.process_index == 3:
assert torch.allclose(
results["c"], data_copy["c"][-2:]
), f"Did not obtain expected values on process 3, expected `{data['c'][-2:]}`, received: {results['c']}"
state.wait_for_everyone()
def test_split_between_processes_tensor():
state = AcceleratorState()
if state.num_processes > 1:
data = torch.tensor([[0, 1, 2, 3], [4, 5, 6, 7]]).to(state.device)
with state.split_between_processes(data) as results:
if state.process_index == 0:
assert torch.allclose(results, torch.tensor([0, 1, 2, 3]).to(state.device))
else:
assert torch.allclose(results, torch.tensor([4, 5, 6, 7]).to(state.device))
state.wait_for_everyone()
def test_trigger():
accelerator = Accelerator()
# should start with being false
assert accelerator.check_trigger() is False
# set a breakpoint on the main process
if accelerator.is_main_process:
accelerator.set_trigger()
# check it's been activated across all processes
# calls `all_reduce` and triggers a sync
assert accelerator.check_trigger() is True
# check it's been reset after the sync
assert accelerator.check_trigger() is False
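# A minimal usage sketch (hypothetical helper, not invoked by this script) of the
# same trigger mechanism applied to early stopping: any rank may raise the flag,
# and all ranks observe it at the next synchronized `check_trigger` call.
def _demo_early_stopping_loop(accelerator, dataloader, model, optimizer):
    for batch in dataloader:
        loss = model(batch["x"]).sum()
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        if not torch.isfinite(loss):
            # Flag a problem on this rank only; the flag is all-reduced across
            # ranks on the next `check_trigger` call.
            accelerator.set_trigger()
        if accelerator.check_trigger():
            break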
def test_reinstantiated_state():
import pytest
AcceleratorState._reset_state()
simple_model = torch.nn.Linear(1, 1)
# First define an accelerator
accelerator = Accelerator()
# Then call `reset_state`, breaking the state existing in the accelerator
AcceleratorState._reset_state()
# Now try and prepare a simple model, should raise the custom error early
with pytest.raises(AttributeError) as cm:
accelerator.prepare(simple_model)
assert "`AcceleratorState` object has no attribute" in str(cm.value.args[0])
assert "This happens if `AcceleratorState._reset_state()`" in str(cm.value.args[0])
def main():
accelerator = Accelerator()
state = accelerator.state
if state.local_process_index == 0:
print("**Initialization**")
init_state_check()
state.wait_for_everyone()
if state.distributed_type == DistributedType.MULTI_GPU:
num_processes_per_node = torch.cuda.device_count()
else:
num_processes_per_node = state.num_processes
# We only run this test on non-multinode
if num_processes_per_node == state.num_processes:
if state.process_index == 0:
print("\n**Test process execution**")
process_execution_check()
if state.process_index == 0:
print("\n**Test split between processes as a list**")
test_split_between_processes_list()
if state.process_index == 0:
print("\n**Test split between processes as a dict**")
test_split_between_processes_nested_dict()
if state.process_index == 0:
print("\n**Test split between processes as a tensor**")
test_split_between_processes_tensor()
if state.process_index == 0:
print("\n**Test split between processes as a datasets.Dataset**")
if is_datasets_available():
from datasets import Dataset as datasets_Dataset
test_split_between_processes_dataset(datasets_Dataset)
else:
print("Skipped because Hugging Face datasets is not available")
if state.local_process_index == 0:
print("\n**Test random number generator synchronization**")
rng_sync_check()
if state.local_process_index == 0:
print("\n**DataLoader integration test**")
dl_preparation_check()
if state.distributed_type != DistributedType.XLA:
central_dl_preparation_check()
custom_sampler_check()
check_seedable_sampler()
if state.num_processes > 1:
check_seedable_sampler_in_batch_sampler_shard()
# Trainings are not exactly the same in DeepSpeed and CPU mode
if state.distributed_type == DistributedType.DEEPSPEED:
return
if state.local_process_index == 0:
print("\n**Training integration test**")
training_check(use_seedable_sampler=False)
training_check(use_seedable_sampler=True)
if state.local_process_index == 0:
print("\n**Breakpoint trigger test**")
test_trigger()
if state.local_process_index == 0:
print("\n**Test reinstantiated state**")
test_reinstantiated_state()
if __name__ == "__main__":
main()
|
accelerate/src/accelerate/test_utils/scripts/test_script.py/0
|
{
"file_path": "accelerate/src/accelerate/test_utils/scripts/test_script.py",
"repo_id": "accelerate",
"token_count": 12971
}
| 7
|
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import gc
import importlib
import inspect
import json
import logging
import os
import re
import shutil
import tempfile
import warnings
from collections import OrderedDict, defaultdict
from typing import Dict, List, Optional, Tuple, Union
import packaging
import torch
import torch.nn as nn
from ..state import AcceleratorState
from .constants import SAFE_WEIGHTS_NAME, WEIGHTS_NAME
from .dataclasses import AutocastKwargs, CustomDtype, DistributedType
from .imports import (
is_mlu_available,
is_mps_available,
is_npu_available,
is_peft_available,
is_torch_xla_available,
is_xpu_available,
)
from .offload import load_offloaded_weight, offload_weight, save_offload_index
from .tqdm import is_tqdm_available, tqdm
from .versions import compare_versions
if is_npu_available(check_device=False):
import torch_npu # noqa: F401
if is_mlu_available(check_device=False):
import torch_mlu # noqa: F401
from safetensors import safe_open
from safetensors.torch import load_file as safe_load_file
WEIGHTS_INDEX_NAME = "pytorch_model.bin.index.json"
logger = logging.getLogger(__name__)
def is_peft_model(model):
from .other import extract_model_from_parallel
if is_peft_available():
from peft import PeftModel
return is_peft_available() and isinstance(extract_model_from_parallel(model), PeftModel)
def check_device_same(first_device, second_device):
"""
Utility method to check if two `torch` devices are the same. When dealing with CUDA devices, torch returns `False`
for `torch.device("cuda") == torch.device("cuda:0")` whereas they should be treated as the same device
Args:
first_device (`torch.device`):
First device to check
second_device (`torch.device`):
Second device to check
"""
if first_device.type != second_device.type:
return False
if first_device.type == "cuda" and first_device.index is None:
# In case the first_device is a cuda device and has
# its index attribute set to `None`, default it to `0`
first_device = torch.device("cuda", index=0)
if second_device.type == "cuda" and second_device.index is None:
# In case the second_device is a cuda device and has
# its index attribute set to `None`, default it to `0`
second_device = torch.device("cuda", index=0)
return first_device == second_device
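# Illustrative sketch (hypothetical helper, not used above): plain `==` treats an
# unindexed "cuda" device and "cuda:0" as different, while `check_device_same`
# normalizes the missing index to 0 first.
def _demo_check_device_same():
    first, second = torch.device("cuda"), torch.device("cuda:0")
    assert (first == second) is False  # the torch behavior described above
    assert check_device_same(first, second) is True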
def convert_file_size_to_int(size: Union[int, str]):
"""
Converts a size expressed as a string with digits and a unit (like `"5MB"`) to an integer (in bytes).
Args:
size (`int` or `str`): The size to convert. Will be directly returned if an `int`.
Example:
```py
>>> convert_file_size_to_int("1MiB")
1048576
```
"""
mem_size = -1
err_msg = (
f"`size` {size} is not in a valid format. Use an integer for bytes, or a string with a unit (like '5.0GB')."
)
try:
if isinstance(size, int):
mem_size = size
elif size.upper().endswith("GIB"):
mem_size = int(float(size[:-3]) * (2**30))
elif size.upper().endswith("MIB"):
mem_size = int(float(size[:-3]) * (2**20))
elif size.upper().endswith("KIB"):
mem_size = int(float(size[:-3]) * (2**10))
elif size.upper().endswith("GB"):
int_size = int(float(size[:-2]) * (10**9))
mem_size = int_size // 8 if size.endswith("b") else int_size
elif size.upper().endswith("MB"):
int_size = int(float(size[:-2]) * (10**6))
mem_size = int_size // 8 if size.endswith("b") else int_size
elif size.upper().endswith("KB"):
int_size = int(float(size[:-2]) * (10**3))
mem_size = int_size // 8 if size.endswith("b") else int_size
except ValueError:
raise ValueError(err_msg)
if mem_size < 0:
raise ValueError(err_msg)
return mem_size
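# A small, hypothetical usage sketch: binary units are powers of two, decimal
# units are powers of ten, and a lowercase trailing "b" is interpreted as bits.
def _demo_convert_file_size_to_int():
    assert convert_file_size_to_int("1MiB") == 2**20
    assert convert_file_size_to_int("5GB") == 5 * 10**9
    assert convert_file_size_to_int("8Kb") == 1000  # 8000 bits == 1000 bytes
    assert convert_file_size_to_int(42) == 42  # ints pass through unchanged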
def dtype_byte_size(dtype: torch.dtype):
"""
Returns the size (in bytes) occupied by one parameter of type `dtype`.
Example:
```py
>>> dtype_byte_size(torch.float32)
4
```
"""
if dtype == torch.bool:
return 1 / 8
elif dtype == CustomDtype.INT2:
return 1 / 4
elif dtype == CustomDtype.INT4:
return 1 / 2
elif dtype == CustomDtype.FP8:
return 1
bit_search = re.search(r"[^\d](\d+)$", str(dtype))
if bit_search is None:
raise ValueError(f"`dtype` is not a valid dtype: {dtype}.")
bit_size = int(bit_search.groups()[0])
return bit_size // 8
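# Illustrative sketch (hypothetical helper): sub-byte dtypes come back as
# fractional byte counts so that `numel * dtype_byte_size(dtype)` stays exact
# for packed formats.
def _demo_dtype_byte_size():
    assert dtype_byte_size(torch.float32) == 4
    assert dtype_byte_size(torch.bool) == 1 / 8
    assert dtype_byte_size(CustomDtype.INT4) == 1 / 2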
def id_tensor_storage(tensor: torch.Tensor) -> Tuple[torch.device, int, int]:
"""
Unique identifier to a tensor storage. Multiple different tensors can share the same underlying storage. For
example, "meta" tensors all share the same storage, and thus their identifiers will all be equal. This identifier is
guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with
non-overlapping lifetimes may have the same id.
"""
_SIZE = {
torch.int64: 8,
torch.float32: 4,
torch.int32: 4,
torch.bfloat16: 2,
torch.float16: 2,
torch.int16: 2,
torch.uint8: 1,
torch.int8: 1,
torch.bool: 1,
torch.float64: 8,
}
try:
storage_ptr = tensor.untyped_storage().data_ptr()
storage_size = tensor.untyped_storage().nbytes()
except Exception:
# Fallback for torch==1.10
try:
storage_ptr = tensor.storage().data_ptr()
storage_size = tensor.storage().size() * _SIZE[tensor.dtype]
except NotImplementedError:
# Fallback for meta storage
storage_ptr = 0
# On torch >=2.0 this is the tensor size
storage_size = tensor.nelement() * _SIZE[tensor.dtype]
return tensor.device, storage_ptr, storage_size
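# Illustrative sketch (hypothetical helper): a view shares its base tensor's
# storage and therefore maps to the same identifier, while an independent
# clone gets its own storage.
def _demo_id_tensor_storage():
    base = torch.zeros(4)
    view = base[:2]
    assert id_tensor_storage(base) == id_tensor_storage(view)
    assert id_tensor_storage(base) != id_tensor_storage(base.clone())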
def shard_checkpoint(
state_dict: Dict[str, torch.Tensor], max_shard_size: Union[int, str] = "10GB", weights_name: str = WEIGHTS_NAME
):
"""
Splits a model state dictionary in sub-checkpoints so that the final size of each sub-checkpoint does not exceed a
given size.
The sub-checkpoints are determined by iterating through the `state_dict` in the order of its keys, so there is no
optimization made to make each sub-checkpoint as close as possible to the maximum size passed. For example, if the
limit is 10GB and we have weights of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB],
[6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].
<Tip warning={true}>
If one of the model's weights is bigger than `max_shard_size`, it will end up in its own sub-checkpoint which will
have a size greater than `max_shard_size`.
</Tip>
Args:
state_dict (`Dict[str, torch.Tensor]`): The state dictionary of a model to save.
max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`):
The maximum size of each sub-checkpoint. If expressed as a string, needs to be digits followed by a unit
(like `"5MB"`).
weights_name (`str`, *optional*, defaults to `"pytorch_model.bin"`):
The name of the model save file.
"""
max_shard_size = convert_file_size_to_int(max_shard_size)
sharded_state_dicts = [{}]
last_block_size = 0
total_size = 0
storage_id_to_block = {}
for key, weight in state_dict.items():
# when bnb serialization is used the weights in the state dict can be strings
# check: https://github.com/huggingface/transformers/pull/24416 for more details
if isinstance(weight, str):
continue
else:
storage_id = id_tensor_storage(weight)
# If a `weight` shares the same underlying storage as another tensor, we put `weight` in the same `block`
if storage_id in storage_id_to_block:
block_id = storage_id_to_block[storage_id]
sharded_state_dicts[block_id][key] = weight
continue
weight_size = weight.numel() * dtype_byte_size(weight.dtype)
# If this weight is going to tip us over the maximal size, we split.
if last_block_size + weight_size > max_shard_size:
sharded_state_dicts.append({})
last_block_size = 0
sharded_state_dicts[-1][key] = weight
last_block_size += weight_size
total_size += weight_size
storage_id_to_block[storage_id] = len(sharded_state_dicts) - 1
# If we only have one shard, we return it
if len(sharded_state_dicts) == 1:
return {weights_name: sharded_state_dicts[0]}, None
# Otherwise, let's build the index
weight_map = {}
shards = {}
for idx, shard in enumerate(sharded_state_dicts):
shard_file = weights_name.replace(".bin", f"-{idx + 1:05d}-of-{len(sharded_state_dicts):05d}.bin")
shard_file = shard_file.replace(
".safetensors", f"-{idx + 1:05d}-of-{len(sharded_state_dicts):05d}.safetensors"
)
shards[shard_file] = shard
for key in shard.keys():
weight_map[key] = shard_file
# Add the metadata
metadata = {"total_size": total_size}
index = {"metadata": metadata, "weight_map": weight_map}
return shards, index
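# A hypothetical sketch of the sharding behavior: with a 16-byte budget, two
# 16-byte fp32 tensors land in separate shards and an index mapping each
# weight to its shard file is produced.
def _demo_shard_checkpoint():
    state_dict = {"a": torch.zeros(4), "b": torch.zeros(4)}
    shards, index = shard_checkpoint(state_dict, max_shard_size=16)
    assert len(shards) == 2
    assert index["metadata"]["total_size"] == 32
    assert index["weight_map"]["a"] != index["weight_map"]["b"]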
def set_module_tensor_to_device(
module: nn.Module,
tensor_name: str,
device: Union[int, str, torch.device],
value: Optional[torch.Tensor] = None,
dtype: Optional[Union[str, torch.dtype]] = None,
fp16_statistics: Optional[torch.HalfTensor] = None,
tied_params_map: Optional[Dict[int, Dict[torch.device, torch.Tensor]]] = None,
):
"""
A helper function to set a given tensor (parameter or buffer) of a module on a specific device (note that doing
`param.to(device)` creates a new tensor not linked to the parameter, which is why we need this function).
Args:
module (`torch.nn.Module`):
The module in which the tensor we want to move lives.
tensor_name (`str`):
The full name of the parameter/buffer.
device (`int`, `str` or `torch.device`):
The device on which to set the tensor.
value (`torch.Tensor`, *optional*):
The value of the tensor (useful when going from the meta device to any other device).
dtype (`torch.dtype`, *optional*):
If passed along, the value of the parameter will be cast to this `dtype`. Otherwise, `value` will be cast to
the dtype of the existing parameter in the model.
fp16_statistics (`torch.HalfTensor`, *optional*):
The list of fp16 statistics to set on the module, used for 8 bit model serialization.
tied_params_map (Dict[int, Dict[torch.device, torch.Tensor]], *optional*, defaults to `None`):
A map of current data pointers to dictionaries of devices to already dispatched tied weights. For a given
execution device, this parameter is useful to reuse the first available pointer of a shared weight on the
device for all others, instead of duplicating memory.
"""
# Recurse if needed
if "." in tensor_name:
splits = tensor_name.split(".")
for split in splits[:-1]:
new_module = getattr(module, split)
if new_module is None:
raise ValueError(f"{module} has no attribute {split}.")
module = new_module
tensor_name = splits[-1]
if tensor_name not in module._parameters and tensor_name not in module._buffers:
raise ValueError(f"{module} does not have a parameter or a buffer named {tensor_name}.")
is_buffer = tensor_name in module._buffers
old_value = getattr(module, tensor_name)
# Treat the case where old_value (or a custom `value`, typically offloaded to RAM/disk) belongs to a tied group, and one of the weights
# in the tied group has already been dispatched to the device, by avoiding reallocating memory on the device and just copying the pointer.
if (
value is not None
and tied_params_map is not None
and value.data_ptr() in tied_params_map
and device in tied_params_map[value.data_ptr()]
):
module._parameters[tensor_name] = tied_params_map[value.data_ptr()][device]
return
elif (
tied_params_map is not None
and old_value.data_ptr() in tied_params_map
and device in tied_params_map[old_value.data_ptr()]
):
module._parameters[tensor_name] = tied_params_map[old_value.data_ptr()][device]
return
if old_value.device == torch.device("meta") and device not in ["meta", torch.device("meta")] and value is None:
raise ValueError(f"{tensor_name} is on the meta device, we need a `value` to put in on {device}.")
if value is not None:
if old_value.shape != value.shape:
raise ValueError(
f'Trying to set a tensor of shape {value.shape} in "{tensor_name}" (which has shape {old_value.shape}), this looks incorrect.'
)
if dtype is None:
# For compatibility with PyTorch load_state_dict which converts state dict dtype to existing dtype in model
value = value.to(old_value.dtype)
elif not str(value.dtype).startswith(("torch.uint", "torch.int", "torch.bool")):
value = value.to(dtype)
param = module._parameters[tensor_name] if tensor_name in module._parameters else None
param_cls = type(param)
device_quantization = None
with torch.no_grad():
# leave it on cpu first before moving it to cuda
# (this also handles the case where the device is meta: we don't want to put it on cpu because there is no data)
if (
param is not None
and param.device.type != "cuda"
and torch.device(device).type == "cuda"
and param_cls.__name__ in ["Int8Params", "FP4Params", "Params4bit"]
):
device_quantization = device
device = "cpu"
# `torch.Tensor.to(<int num>)` is not supported by `torch_npu` (see this [issue](https://github.com/Ascend/pytorch/issues/16)).
if is_npu_available() and isinstance(device, int):
device = f"npu:{device}"
elif is_mlu_available() and isinstance(device, int):
device = f"mlu:{device}"
if is_xpu_available() and isinstance(device, int):
device = f"xpu:{device}"
if value is None:
new_value = old_value.to(device)
if dtype is not None and device in ["meta", torch.device("meta")]:
if not str(old_value.dtype).startswith(("torch.uint", "torch.int", "torch.bool")):
new_value = new_value.to(dtype)
if not is_buffer:
module._parameters[tensor_name] = param_cls(new_value, requires_grad=old_value.requires_grad)
elif isinstance(value, torch.Tensor):
new_value = value.to(device)
else:
new_value = torch.tensor(value, device=device)
if device_quantization is not None:
device = device_quantization
if is_buffer:
module._buffers[tensor_name] = new_value
elif value is not None or not check_device_same(torch.device(device), module._parameters[tensor_name].device):
param_cls = type(module._parameters[tensor_name])
kwargs = module._parameters[tensor_name].__dict__
if param_cls.__name__ in ["Int8Params", "FP4Params"]:
if param_cls.__name__ == "Int8Params" and new_value.dtype == torch.float32:
# downcast to fp16 if any - needed for 8bit serialization
new_value = new_value.to(torch.float16)
# quantize modules that are going to stay on the cpu so that we offload quantized weights
if device == "cpu" and param_cls.__name__ == "Int8Params":
new_value = param_cls(new_value, requires_grad=old_value.requires_grad, **kwargs).to(0).to("cpu")
new_value.CB = new_value.CB.to("cpu")
new_value.SCB = new_value.SCB.to("cpu")
else:
new_value = param_cls(new_value, requires_grad=old_value.requires_grad, **kwargs).to(device)
elif param_cls.__name__ in ["QTensor", "QBitsTensor"]:
new_value = torch.nn.Parameter(new_value, requires_grad=old_value.requires_grad).to(device)
else:
new_value = param_cls(new_value, requires_grad=old_value.requires_grad).to(device)
module._parameters[tensor_name] = new_value
if fp16_statistics is not None:
module._parameters[tensor_name].SCB = fp16_statistics.to(device)
del fp16_statistics
# as we put the weight on meta, it doesn't have an SCB attr anymore; make sure that it is not a meta weight
if (
module.__class__.__name__ == "Linear8bitLt"
and getattr(module.weight, "SCB", None) is None
and str(module.weight.device) != "meta"
):
# quantize only if necessary
device_index = torch.device(device).index if torch.device(device).type == "cuda" else None
if not getattr(module.weight, "SCB", None) and device_index is not None:
if module.bias is not None and module.bias.device.type != "meta":
# if a bias exists, we need to wait until the bias is set on the correct device
module = module.cuda(device_index)
elif module.bias is None:
# if no bias exists, we can quantize right away
module = module.cuda(device_index)
elif module.__class__.__name__ == "Linear4bit" and getattr(module.weight, "quant_state", None) is None:
# quantize only if necessary
device_index = torch.device(device).index if torch.device(device).type == "cuda" else None
if not getattr(module.weight, "quant_state", None) and device_index is not None:
module.weight = module.weight.cuda(device_index)
# clean pre and post forward hook
if is_npu_available():
torch.npu.empty_cache()
elif is_mlu_available():
torch.mlu.empty_cache()
elif is_xpu_available():
torch.xpu.empty_cache()
else:
torch.cuda.empty_cache()
# When handling tied weights, we update tied_params_map to keep track of the tied weights that have already been allocated on the device in
# order to avoid duplicating memory, see above.
if (
tied_params_map is not None
and old_value.data_ptr() in tied_params_map
and device not in tied_params_map[old_value.data_ptr()]
):
tied_params_map[old_value.data_ptr()][device] = new_value
elif (
value is not None
and tied_params_map is not None
and value.data_ptr() in tied_params_map
and device not in tied_params_map[value.data_ptr()]
):
tied_params_map[value.data_ptr()][device] = new_value
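# A minimal sketch (hypothetical helper, assuming a torch version that accepts
# `device="meta"` in module constructors): materialize a meta-initialized
# parameter on CPU. A plain `param.to("cpu")` would create a tensor detached
# from the module, which is exactly what this function avoids.
def _demo_set_module_tensor_to_device():
    linear = torch.nn.Linear(2, 2, device="meta")
    set_module_tensor_to_device(linear, "weight", "cpu", value=torch.ones(2, 2))
    assert linear.weight.device == torch.device("cpu")
    assert isinstance(linear.weight, torch.nn.Parameter)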
def named_module_tensors(
module: nn.Module, include_buffers: bool = True, recurse: bool = False, remove_non_persistent: bool = False
):
"""
A helper function that gathers all the tensors (parameters + buffers) of a given module. If `include_buffers=True`
it's the same as doing `module.named_parameters(recurse=recurse) + module.named_buffers(recurse=recurse)`.
Args:
module (`torch.nn.Module`):
The module we want the tensors on.
include_buffers (`bool`, *optional*, defaults to `True`):
Whether or not to include the buffers in the result.
recurse (`bool`, *optional*, defaults to `False`):
Whether or not to go look in every submodule or just return the direct parameters and buffers.
remove_non_persistent (`bool`, *optional*, defaults to `False`):
Whether or not to remove the non-persistent buffers from the buffers. Useful only when
`include_buffers=True`.
"""
yield from module.named_parameters(recurse=recurse)
if include_buffers:
non_persistent_buffers = set()
if remove_non_persistent:
non_persistent_buffers = get_non_persistent_buffers(module, recurse=recurse)
for named_buffer in module.named_buffers(recurse=recurse):
name, _ = named_buffer
if name not in non_persistent_buffers:
yield named_buffer
def get_non_persistent_buffers(module: nn.Module, recurse: bool = False):
"""
Gather all non-persistent buffers of a given module into a set
Args:
module (`nn.Module`):
The module we want the non persistent buffers on.
recurse (`bool`, *optional*, defaults to `False`):
Whether or not to go look in every submodule or just return the direct non persistent buffers.
"""
non_persistent_buffers_set = module._non_persistent_buffers_set
if recurse:
for _, m in module.named_modules():
non_persistent_buffers_set |= m._non_persistent_buffers_set
return non_persistent_buffers_set
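# Illustrative sketch (hypothetical helper): only buffers registered with
# `persistent=False` are collected; ordinary buffers are left out of the set.
def _demo_get_non_persistent_buffers():
    module = torch.nn.Module()
    module.register_buffer("kept", torch.zeros(1))
    module.register_buffer("skipped", torch.zeros(1), persistent=False)
    assert get_non_persistent_buffers(module) == {"skipped"}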
class FindTiedParametersResult(list):
"""
This is a subclass of a list to handle backward compatibility for Transformers. Do not rely on the fact that this is
not a plain list, or on the `values` method, as these will be removed in the future.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def values(self):
# TODO: at the next Transformers release (4.28.0) issue a deprecation warning here.
return sum([x[1:] for x in self], [])
def check_tied_parameters_in_config(model: nn.Module):
"""
Check if there is any indication in the given model that some weights should be tied.
Args:
model (`torch.nn.Module`): The model to inspect
Returns:
bool: True if the model needs to have tied weights
"""
# based on model.tie_weights() method
has_tied_word_embedding = False
has_tied_encoder_decoder = False
has_tied_module = False
if "PreTrainedModel" in [c.__name__ for c in inspect.getmro(model.__class__)]:
has_tied_word_embedding = (
hasattr(model, "config")
and getattr(model.config, "tie_word_embeddings", False)
and model.get_output_embeddings()
)
has_tied_encoder_decoder = (
hasattr(model, "config")
and getattr(model.config, "is_encoder_decoder", False)
and getattr(model.config, "tie_encoder_decoder", False)
)
has_tied_module = any(hasattr(module, "_tie_weights") for module in model.modules())
return any([has_tied_word_embedding, has_tied_encoder_decoder, has_tied_module])
def _get_param_device(param, device_map):
if param in device_map:
return device_map[param]
parent_param = ".".join(param.split(".")[:-1])
if parent_param == param:
raise ValueError(f"The `device_map` does not contain the module {param}.")
else:
return _get_param_device(parent_param, device_map)
def check_tied_parameters_on_same_device(tied_params, device_map):
"""
Check if tied parameters are on the same device
Args:
tied_params (`List[List[str]]`):
A list of lists of parameter names being all tied together.
device_map (`Dict[str, Union[int, str, torch.device]]`):
A map that specifies where each submodule should go.
"""
for tie_param in tied_params:
tie_param_devices = {}
for param in tie_param:
tie_param_devices[param] = _get_param_device(param, device_map)
if len(set(tie_param_devices.values())) > 1:
logger.warning(
f"Tied parameters are on different devices: {tie_param_devices}. "
"Please modify your custom device map or set `device_map='auto'`. "
)
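# A hypothetical sketch: parameter names are resolved through their parent
# modules, so "decoder.weight" falls back to the "decoder" entry of the map.
# Tied weights placed on different devices only trigger a logged warning.
def _demo_check_tied_parameters_on_same_device():
    device_map = {"encoder": 0, "decoder": "cpu"}
    tied_params = [["encoder.weight", "decoder.weight"]]
    check_tied_parameters_on_same_device(tied_params, device_map)  # logs a warning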
def find_tied_parameters(model: nn.Module, **kwargs):
"""
Find the tied parameters in a given model.
<Tip warning={true}>
The signature accepts keyword arguments, but they are for the recursive part of this function and you should ignore
them.
</Tip>
Args:
model (`torch.nn.Module`): The model to inspect.
Returns:
List[List[str]]: A list of lists of parameter names being all tied together.
Example:
```py
>>> from collections import OrderedDict
>>> import torch.nn as nn
>>> model = nn.Sequential(OrderedDict([("linear1", nn.Linear(4, 4)), ("linear2", nn.Linear(4, 4))]))
>>> model.linear2.weight = model.linear1.weight
>>> find_tied_parameters(model)
[['linear1.weight', 'linear2.weight']]
```
"""
# Initialize result and named_parameters before recursing.
named_parameters = kwargs.get("named_parameters", None)
prefix = kwargs.get("prefix", "")
result = kwargs.get("result", {})
if named_parameters is None:
named_parameters = {n: p for n, p in model.named_parameters()}
else:
# A tied parameter will not be in the full `named_parameters` seen above but will be in the `named_parameters`
# of the submodule it belongs to. So while recursing we track the names that are not in the initial
# `named_parameters`.
for name, parameter in model.named_parameters():
full_name = name if prefix == "" else f"{prefix}.{name}"
if full_name not in named_parameters:
# When we find one, it has to be one of the existing parameters.
for new_name, new_param in named_parameters.items():
if new_param is parameter:
if new_name not in result:
result[new_name] = []
result[new_name].append(full_name)
# Once we have treated direct parameters, we move to the child modules.
for name, child in model.named_children():
child_name = name if prefix == "" else f"{prefix}.{name}"
find_tied_parameters(child, named_parameters=named_parameters, prefix=child_name, result=result)
return FindTiedParametersResult([sorted([weight] + list(set(tied))) for weight, tied in result.items()])
def retie_parameters(model, tied_params):
"""
Reties tied parameters in a given model if the link was broken (for instance when adding hooks).
Args:
model (`torch.nn.Module`):
The model in which to retie parameters.
tied_params (`List[List[str]]`):
A list of lists of parameter names being all tied together, as obtained by `find_tied_parameters`.
"""
for tied_group in tied_params:
param_to_tie = None
# two loops: the first one to set param_to_tie, the second one to change the values of tied_group
for param_name in tied_group:
module = model
splits = param_name.split(".")
for split in splits[:-1]:
module = getattr(module, split)
param = getattr(module, splits[-1])
if param_to_tie is None and param.device != torch.device("meta"):
param_to_tie = param
break
if param_to_tie is not None:
for param_name in tied_group:
module = model
splits = param_name.split(".")
for split in splits[:-1]:
module = getattr(module, split)
setattr(module, splits[-1], param_to_tie)
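# Illustrative sketch (hypothetical helper): after a tie is broken (here by
# re-assigning one of the weights), `retie_parameters` points every name in
# the group back at a single Parameter.
def _demo_retie_parameters():
    model = torch.nn.Sequential(
        OrderedDict([("linear1", torch.nn.Linear(4, 4)), ("linear2", torch.nn.Linear(4, 4))])
    )
    model.linear2.weight = model.linear1.weight
    tied_params = find_tied_parameters(model)
    model.linear2.weight = torch.nn.Parameter(torch.zeros(4, 4))  # break the tie
    retie_parameters(model, tied_params)
    assert model.linear1.weight is model.linear2.weight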
def _get_proper_dtype(dtype: Union[str, torch.device]) -> torch.dtype:
"""
Converts a string `dtype` to the corresponding `torch.dtype` if necessary.
"""
if isinstance(dtype, str):
# We accept "torch.float16" or just "float16"
dtype = dtype.replace("torch.", "")
dtype = getattr(torch, dtype)
return dtype
def compute_module_sizes(
model: nn.Module,
dtype: Optional[Union[str, torch.device]] = None,
special_dtypes: Optional[Dict[str, Union[str, torch.device]]] = None,
buffers_only: bool = False,
):
"""
Compute the size of each submodule of a given model.
"""
if dtype is not None:
dtype = _get_proper_dtype(dtype)
dtype_size = dtype_byte_size(dtype)
if special_dtypes is not None:
special_dtypes = {key: _get_proper_dtype(dtyp) for key, dtyp in special_dtypes.items()}
special_dtypes_size = {key: dtype_byte_size(dtyp) for key, dtyp in special_dtypes.items()}
module_sizes = defaultdict(int)
module_list = []
if not buffers_only:
module_list = named_module_tensors(model, recurse=True)
else:
module_list = model.named_buffers(recurse=True)
for name, tensor in module_list:
if special_dtypes is not None and name in special_dtypes:
size = tensor.numel() * special_dtypes_size[name]
elif dtype is None:
size = tensor.numel() * dtype_byte_size(tensor.dtype)
elif str(tensor.dtype).startswith(("torch.uint", "torch.int", "torch.bool")):
# According to the code in set_module_tensor_to_device, these types won't be converted
# so use their original size here
size = tensor.numel() * dtype_byte_size(tensor.dtype)
else:
size = tensor.numel() * min(dtype_size, dtype_byte_size(tensor.dtype))
name_parts = name.split(".")
for idx in range(len(name_parts) + 1):
module_sizes[".".join(name_parts[:idx])] += size
return module_sizes
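# Illustrative sketch (hypothetical helper): sizes are accumulated on every
# prefix of a parameter name, so the empty key "" holds the total model size
# in bytes, and passing a smaller `dtype` caps the per-parameter size.
def _demo_compute_module_sizes():
    model = torch.nn.Linear(4, 4)  # 16 weights + 4 biases in fp32
    sizes = compute_module_sizes(model)
    assert sizes[""] == 20 * 4
    assert sizes["weight"] == 16 * 4
    assert compute_module_sizes(model, dtype="float16")[""] == 20 * 2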
def compute_module_total_buffer_size(
model: nn.Module,
dtype: Optional[Union[str, torch.device]] = None,
special_dtypes: Optional[Dict[str, Union[str, torch.device]]] = None,
):
"""
Compute the total size of buffers in each submodule of a given model.
"""
module_sizes = compute_module_sizes(model, dtype=dtype, special_dtypes=special_dtypes, buffers_only=True)
return module_sizes.get("", 0)
def get_max_layer_size(
modules: List[Tuple[str, torch.nn.Module]], module_sizes: Dict[str, int], no_split_module_classes: List[str]
):
"""
Utility function that will scan a list of named modules and return the maximum size used by one full layer. The
definition of a layer being:
- a module with no direct children (just parameters and buffers)
- a module whose class name is in the list `no_split_module_classes`
Args:
modules (`List[Tuple[str, torch.nn.Module]]`):
The list of named modules where we want to determine the maximum layer size.
module_sizes (`Dict[str, int]`):
A dictionary mapping each layer name to its size (as generated by `compute_module_sizes`).
no_split_module_classes (`List[str]`):
A list of class names for layers we don't want to be split.
Returns:
`Tuple[int, List[str]]`: The maximum size of a layer with the list of layer names realizing that maximum size.
"""
max_size = 0
layer_names = []
modules_to_treat = modules.copy()
while len(modules_to_treat) > 0:
module_name, module = modules_to_treat.pop(0)
modules_children = list(module.named_children()) if isinstance(module, torch.nn.Module) else []
if len(modules_children) == 0 or module.__class__.__name__ in no_split_module_classes:
# No splitting this one so we compare to the max_size
size = module_sizes[module_name]
if size > max_size:
max_size = size
layer_names = [module_name]
elif size == max_size:
layer_names.append(module_name)
else:
modules_to_treat = [(f"{module_name}.{n}", v) for n, v in modules_children] + modules_to_treat
return max_size, layer_names
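# Illustrative sketch (hypothetical helper): the "layers" are leaf modules (or
# members of `no_split_module_classes`), and the largest one is returned
# together with the name(s) realizing that maximum.
def _demo_get_max_layer_size():
    model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Linear(8, 2))
    module_sizes = compute_module_sizes(model)
    max_size, names = get_max_layer_size(list(model.named_children()), module_sizes, [])
    assert names == ["0"] and max_size == module_sizes["0"]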
def get_max_memory(max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None):
"""
Get the maximum memory available if nothing is passed; otherwise, convert the string sizes to ints.
"""
import psutil
if max_memory is None:
if not (torch.cuda.is_available() or is_npu_available() or is_mlu_available() or is_xpu_available()):
max_memory = {}
else:
# Make sure CUDA is initialized on each GPU to have the right memory info.
if is_npu_available():
for i in range(torch.npu.device_count()):
_ = torch.tensor(0, device=torch.device("npu", i))
max_memory = {i: torch.npu.mem_get_info(i)[0] for i in range(torch.npu.device_count())}
elif is_mlu_available():
for i in range(torch.mlu.device_count()):
_ = torch.tensor(0, device=torch.device("mlu", i))
max_memory = {i: torch.mlu.mem_get_info(i)[0] for i in range(torch.mlu.device_count())}
elif is_xpu_available():
for i in range(torch.xpu.device_count()):
_ = torch.tensor(0, device=torch.device("xpu", i))
max_memory = {i: torch.xpu.max_memory_allocated(i) for i in range(torch.xpu.device_count())}
else:
for i in range(torch.cuda.device_count()):
_ = torch.tensor([0], device=i)
max_memory = {i: torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())}
# allocate everything in the mps device as the RAM is shared
if is_mps_available():
max_memory["mps"] = psutil.virtual_memory().available
else:
max_memory["cpu"] = psutil.virtual_memory().available
return max_memory
for key in max_memory:
if isinstance(max_memory[key], str):
max_memory[key] = convert_file_size_to_int(max_memory[key])
# Need to sort the devices by type to make sure that we allocate the gpus first.
# As gpu/npu/xpu devices are represented by ints, we need to sort them first.
gpu_devices = [k for k in max_memory.keys() if isinstance(k, int)]
gpu_devices.sort()
# check if gpu/npu/xpu devices are available and if not, throw a warning
if is_npu_available():
num_devices = torch.npu.device_count()
elif is_mlu_available():
num_devices = torch.mlu.device_count()
elif is_xpu_available():
num_devices = torch.xpu.device_count()
else:
num_devices = torch.cuda.device_count()
for device in gpu_devices:
if device >= num_devices or device < 0:
logger.warning(f"Device {device} is not available, available devices are {list(range(num_devices))}")
# Add the other devices in the preset order if they are available
all_devices = gpu_devices + [k for k in ["mps", "cpu", "disk"] if k in max_memory.keys()]
# Raise an error if a device is not recognized
for k in max_memory.keys():
if k not in all_devices:
raise ValueError(
f"Device {k} is not recognized, available devices are integers (for GPU/XPU), 'mps', 'cpu' and 'disk'"
)
max_memory = {k: max_memory[k] for k in all_devices}
return max_memory
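# A hypothetical usage sketch: string budgets are converted to bytes and the
# entries are reordered so accelerator ids come first, then "mps"/"cpu"/"disk".
# (A warning is logged if device 0 is not actually present on the host.)
def _demo_get_max_memory():
    max_memory = get_max_memory({0: "1GiB", "cpu": "2GiB"})
    assert max_memory[0] == 2**30
    assert list(max_memory) == [0, "cpu"]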
def clean_device_map(device_map: Dict[str, Union[int, str, torch.device]], module_name: str = ""):
"""
Cleans a device_map by grouping all submodules that go on the same device together.
"""
# Get the value of the current module and if there is only one split across several keys, regroup it.
prefix = "" if module_name == "" else f"{module_name}."
values = [v for k, v in device_map.items() if k.startswith(prefix)]
if len(set(values)) == 1 and len(values) > 1:
for k in [k for k in device_map if k.startswith(prefix)]:
del device_map[k]
device_map[module_name] = values[0]
# Recurse over the children
children_modules = [k for k in device_map.keys() if k.startswith(prefix) and len(k) > len(module_name)]
idx = len(module_name.split(".")) + 1 if len(module_name) > 0 else 1
children_modules = set(".".join(k.split(".")[:idx]) for k in children_modules)
for child in children_modules:
clean_device_map(device_map, module_name=child)
return device_map
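# Illustrative sketch (hypothetical helper): submodules that all sit on one
# device are collapsed into a single entry for their common parent.
def _demo_clean_device_map():
    device_map = {"block.layer1": 0, "block.layer2": 0, "head": "cpu"}
    assert clean_device_map(device_map) == {"block": 0, "head": "cpu"}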
def load_offloaded_weights(model, index, offload_folder):
"""
Loads the weights from the offload folder into the model.
Args:
model (`torch.nn.Module`):
The model to load the weights into.
index (`dict`):
A dictionary containing the parameter name and its metadata for each parameter that was offloaded from the
model.
offload_folder (`str`):
The folder where the offloaded weights are stored.
"""
if index is None or len(index) == 0:
# Nothing to do
return
for param_name, metadata in index.items():
if "SCB" in param_name:
continue
fp16_statistics = None
if "weight" in param_name and param_name.replace("weight", "SCB") in index.keys():
weight_name = param_name.replace("weight", "SCB")
fp16_statistics = load_offloaded_weight(
os.path.join(offload_folder, f"{weight_name}.dat"), index[weight_name]
)
tensor_file = os.path.join(offload_folder, f"{param_name}.dat")
weight = load_offloaded_weight(tensor_file, metadata)
set_module_tensor_to_device(model, param_name, "cpu", value=weight, fp16_statistics=fp16_statistics)
def get_balanced_memory(
model: nn.Module,
max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None,
no_split_module_classes: Optional[List[str]] = None,
dtype: Optional[Union[str, torch.dtype]] = None,
special_dtypes: Optional[Dict[str, Union[str, torch.device]]] = None,
low_zero: bool = False,
):
"""
Compute a `max_memory` dictionary for [`infer_auto_device_map`] that will balance the use of each available GPU.
<Tip>
All computation is done analyzing sizes and dtypes of the model parameters. As a result, the model can be on the
meta device (as it would if initialized within the `init_empty_weights` context manager).
</Tip>
Args:
model (`torch.nn.Module`):
The model to analyze.
max_memory (`Dict`, *optional*):
A dictionary device identifier to maximum memory. Will default to the maximum memory available if unset.
Example: `max_memory={0: "1GB"}`.
no_split_module_classes (`List[str]`, *optional*):
A list of layer class names that should never be split across devices (for instance any layer that has a
residual connection).
dtype (`str` or `torch.dtype`, *optional*):
If provided, the weights will be converted to that type when loaded.
special_dtypes (`Dict[str, Union[str, torch.device]]`, *optional*):
If provided, special dtypes to consider for some specific weights (will override dtype used as default for
all weights).
low_zero (`bool`, *optional*):
Minimizes the number of weights on GPU 0, which is convenient when it's used for other operations (like the
Transformers generate function).
"""
# Get default / clean up max_memory
user_not_set_max_memory = max_memory is None
max_memory = get_max_memory(max_memory)
if is_npu_available():
num_devices = len([d for d in max_memory if torch.device(d).type == "npu" and max_memory[d] > 0])
elif is_mlu_available():
num_devices = len([d for d in max_memory if torch.device(d).type == "mlu" and max_memory[d] > 0])
elif is_xpu_available():
num_devices = len(
[
d
for d in max_memory
if (
d != "cpu"
and (torch.device(d).type == "xpu" or torch.xpu.get_device_properties(d).dev_type == "gpu")
)
and max_memory[d] > 0
]
)
else:
num_devices = len([d for d in max_memory if torch.device(d).type == "cuda" and max_memory[d] > 0])
if num_devices == 0:
return max_memory
if num_devices == 1:
# We cannot do low_zero on just one GPU, but we will still reserve some memory for the buffer
low_zero = False
# If user just asked us to handle memory usage, we should avoid OOM
if user_not_set_max_memory:
for key in max_memory.keys():
if isinstance(key, int):
max_memory[key] *= 0.9 # 90% is a good compromise
logger.info(
f"We will use 90% of the memory on device {key} for storing the model, and 10% for the buffer to avoid OOM. "
"You can set `max_memory` to a higher value to use more memory (at your own risk)."
)
break # only one device
module_sizes = compute_module_sizes(model, dtype=dtype, special_dtypes=special_dtypes)
per_gpu = module_sizes[""] // (num_devices - 1 if low_zero else num_devices)
# We can't just set the memory to model_size // num_devices as it will end up being too small: each GPU will get
# slightly fewer layers and some layers will end up offloaded at the end. So this function computes a buffer size to
# add which is the biggest of:
# - the size of no split block (if applicable)
# - the mean of the layer sizes
if no_split_module_classes is None:
no_split_module_classes = []
elif not isinstance(no_split_module_classes, (list, tuple)):
no_split_module_classes = [no_split_module_classes]
# Identify the size of the no_split_block modules
if len(no_split_module_classes) > 0:
no_split_children = {}
for name, size in module_sizes.items():
if name == "":
continue
submodule = model
for submodule_name in name.split("."):
submodule = getattr(submodule, submodule_name)
class_name = submodule.__class__.__name__
if class_name in no_split_module_classes and class_name not in no_split_children:
no_split_children[class_name] = size
if set(no_split_children.keys()) == set(no_split_module_classes):
break
buffer = max(no_split_children.values()) if len(no_split_children) > 0 else 0
else:
buffer = 0
# Compute mean of final modules. In the first dict of module sizes, leaves are the parameters
leaves = [n for n in module_sizes if len([p for p in module_sizes if n == "" or p.startswith(n + ".")]) == 0]
module_sizes = {n: v for n, v in module_sizes.items() if n not in leaves}
# Once removed, leaves are the final modules.
leaves = [n for n in module_sizes if len([p for p in module_sizes if n == "" or p.startswith(n + ".")]) == 0]
mean_leaves = int(sum([module_sizes[n] for n in leaves]) / max(len(leaves), 1))
buffer = int(1.25 * max(buffer, mean_leaves))
per_gpu += buffer
# Sorted list of GPU ids (we may have some gpu ids not included in our max_memory list - let's ignore them)
gpus_idx_list = list(
sorted(
device_id for device_id, device_mem in max_memory.items() if isinstance(device_id, int) and device_mem > 0
)
)
# The last device is left with max_memory just in case the buffer is not enough.
for idx in gpus_idx_list[:-1]:
max_memory[idx] = min(max_memory[0] if low_zero and idx == 0 else per_gpu, max_memory[idx])
if low_zero:
min_zero = max(0, module_sizes[""] - sum([max_memory[i] for i in range(1, num_devices)]))
max_memory[0] = min(min_zero, max_memory[0])
return max_memory
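# A hypothetical sketch; only meaningful when accelerators are visible (on a
# CPU-only host the call simply returns the CPU memory dict). `low_zero=True`
# keeps GPU 0 as light as possible, e.g. for generation workloads.
def _demo_get_balanced_memory():
    model = torch.nn.Sequential(*[torch.nn.Linear(256, 256) for _ in range(8)])
    return get_balanced_memory(model, low_zero=True)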
def calculate_maximum_sizes(model: torch.nn.Module):
"Computes the total size of the model and its largest layer"
sizes = compute_module_sizes(model)
# `transformers` models store this information for us
no_split_modules = getattr(model, "_no_split_modules", None)
if no_split_modules is None:
no_split_modules = []
modules_to_treat = (
list(model.named_parameters(recurse=False))
+ list(model.named_children())
+ list(model.named_buffers(recurse=False))
)
largest_layer = get_max_layer_size(modules_to_treat, sizes, no_split_modules)
total_size = sizes[""]
return total_size, largest_layer
def infer_auto_device_map(
model: nn.Module,
max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None,
no_split_module_classes: Optional[List[str]] = None,
dtype: Optional[Union[str, torch.dtype]] = None,
special_dtypes: Optional[Dict[str, Union[str, torch.dtype]]] = None,
verbose: bool = False,
clean_result: bool = True,
offload_buffers: bool = False,
):
"""
Compute a device map for a given model giving priority to GPUs, then offload on CPU and finally offload to disk,
such that:
- we don't exceed the memory available on any of the GPUs.
- if offload to the CPU is needed, there is always room left on GPU 0 to put back the layer offloaded on CPU that
has the largest size.
- if offload to the CPU is needed, we don't exceed the RAM available on the CPU.
- if offload to the disk is needed, there is always room left on the CPU to put back the layer offloaded on disk
that has the largest size.
<Tip>
All computation is done analyzing sizes and dtypes of the model parameters. As a result, the model can be on the
meta device (as it would if initialized within the `init_empty_weights` context manager).
</Tip>
Args:
model (`torch.nn.Module`):
The model to analyze.
max_memory (`Dict`, *optional*):
A dictionary device identifier to maximum memory. Will default to the maximum memory available if unset.
Example: `max_memory={0: "1GB"}`.
no_split_module_classes (`List[str]`, *optional*):
A list of layer class names that should never be split across device (for instance any layer that has a
residual connection).
dtype (`str` or `torch.dtype`, *optional*):
If provided, the weights will be converted to that type when loaded.
special_dtypes (`Dict[str, Union[str, torch.device]]`, *optional*):
If provided, special dtypes to consider for some specific weights (will override dtype used as default for
all weights).
verbose (`bool`, *optional*, defaults to `False`):
Whether or not to provide debugging statements as the function builds the device_map.
clean_result (`bool`, *optional*, defaults to `True`):
Clean the resulting device_map by grouping all submodules that go on the same device together.
offload_buffers (`bool`, *optional*, defaults to `False`):
In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as
well as the parameters.
"""
# Get default / clean up max_memory
max_memory = get_max_memory(max_memory)
if no_split_module_classes is None:
no_split_module_classes = []
elif not isinstance(no_split_module_classes, (list, tuple)):
no_split_module_classes = [no_split_module_classes]
devices = list(max_memory.keys())
if "disk" not in devices:
devices.append("disk")
gpus = [device for device in devices if device not in ["cpu", "disk"]]
# Devices that need to keep space for a potential offloaded layer.
if "mps" in gpus:
main_devices = ["mps"]
elif len(gpus) > 0:
main_devices = [gpus[0], "cpu"]
else:
main_devices = ["cpu"]
module_sizes = compute_module_sizes(model, dtype=dtype, special_dtypes=special_dtypes)
tied_parameters = find_tied_parameters(model)
if check_tied_parameters_in_config(model) and len(tied_parameters) == 0:
logger.warning(
"The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device_map` function."
)
device_map = OrderedDict()
current_device = 0
current_memory_used = 0
device_memory_used = {}
device_buffer_sizes = {}
# Direct submodules and parameters
modules_to_treat = (
list(model.named_parameters(recurse=False))
+ list(model.named_children())
+ list(model.named_buffers(recurse=False))
)
# Initialize maximum largest layer, to know which space to keep in memory
max_layer_size, max_layer_names = get_max_layer_size(modules_to_treat, module_sizes, no_split_module_classes)
# Ready? This is going to be a bit messy.
while len(modules_to_treat) > 0:
name, module = modules_to_treat.pop(0)
if verbose:
print(f"\nTreating module {name}.")
# Max size in the remaining layers may have changed since we took one, so we may need to update it.
max_layer_names = [n for n in max_layer_names if n != name and not n.startswith(name + ".")]
if len(max_layer_names) == 0:
max_layer_size, max_layer_names = get_max_layer_size(
[(n, m) for n, m in modules_to_treat if isinstance(m, torch.nn.Module)],
module_sizes,
no_split_module_classes,
)
# Assess size needed
module_size = module_sizes[name]
# We keep relevant tied parameters only: one of the tied parameters in the group is inside the current module
# and the other is not.
# Note: If we are currently processing the name `compute.weight`, another parameter named e.g. `compute.weight_submodule.parameter`
# needs to be considered outside the current module, hence the check with additional dots.
tied_param_groups = [
tied_group
for tied_group in tied_parameters
if any(name + "." in k + "." for k in tied_group) and not all(name + "." in k + "." for k in tied_group)
]
if verbose and len(tied_param_groups) > 0:
print(f" Found the relevant tied param groups {tied_param_groups}")
# Then we keep track of all the parameters that are tied to the current module, but not in the current module
tied_params = sum(
[[p for p in tied_group if name + "." not in p + "."] for tied_group in tied_param_groups], []
)
if verbose and len(tied_params) > 0:
print(f" So those parameters need to be taken into account {tied_params}")
device = devices[current_device]
current_max_size = max_memory[device] if device != "disk" else None
current_memory_reserved = 0
# Reduce max size available by the largest layer.
if devices[current_device] in main_devices:
current_max_size = current_max_size - max_layer_size
current_memory_reserved = max_layer_size
# Case 1 -> We're too big!
if current_max_size is not None and current_memory_used + module_size > current_max_size:
# Split or not split?
modules_children = (
[]
if isinstance(module, nn.Parameter) or isinstance(module, torch.Tensor)
else list(module.named_children())
)
if verbose:
print(
f"Not enough space on {devices[current_device]} to put {name} (space available "
f"{current_max_size - current_memory_used}, module size {module_size})."
)
if len(modules_children) == 0 or module.__class__.__name__ in no_split_module_classes:
# -> no split, we go to the next device
if verbose:
print("This module cannot be split, going to the next device.")
device_memory_used[device] = current_memory_used + current_memory_reserved
current_device += 1
modules_to_treat = [(name, module)] + modules_to_treat
current_memory_used = 0
else:
# -> split, we replace the module studied by its children + parameters
if verbose:
print(f"Splitting {name}.")
modules_children = list(module.named_parameters(recurse=False)) + modules_children
modules_to_treat = [(f"{name}.{n}", v) for n, v in modules_children] + modules_to_treat
# Update the max layer size.
max_layer_size, max_layer_names = get_max_layer_size(
[(n, m) for n, m in modules_to_treat if isinstance(m, torch.nn.Module)],
module_sizes,
no_split_module_classes,
)
# Case 2, it fits! We're not entirely out of the woods though, because we may have some tied parameters.
elif len(tied_params) > 0:
# First locate all tied modules
tied_module_names = []
tied_modules = []
for tied_param in tied_params:
tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n in tied_param][0]
tied_module_names.append(modules_to_treat[tied_module_index][0])
tied_modules.append(modules_to_treat[tied_module_index][1])
if verbose:
print(
f" It looks like {name} is going to fit on {devices[current_device]} but we have tied "
f"parameters to account for.\n - Names {tied_params}\n - Module names {tied_module_names}"
)
# Let's see if it all fits first
module_size_with_ties = module_size
for tied_param, tied_module_name in zip(tied_params, tied_module_names):
module_size_with_ties += module_sizes[tied_module_name] - module_sizes[tied_param]
if current_max_size is None or current_memory_used + module_size_with_ties <= current_max_size:
# We really really fit!
if verbose:
print(f"Putting {name} and {tied_module_names} on {devices[current_device]}.")
current_memory_used += module_size_with_ties
device_map[name] = devices[current_device]
for tied_module_name in tied_module_names:
if tied_module_name in [m[0] for m in modules_to_treat]:
# The module may have been removed by a previous iteration of this loop.
tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n == tied_module_name][
0
]
modules_to_treat.pop(tied_module_index)
device_map[tied_module_name] = devices[current_device]
if not offload_buffers and isinstance(module, nn.Module):
current_buffer_size = compute_module_total_buffer_size(
module, dtype=dtype, special_dtypes=special_dtypes
)
device_buffer_sizes[device] = device_buffer_sizes.get(device, 0) + current_buffer_size
else:
            # We don't fit with the tied modules. Next question: can we split one of the tied modules to make it
            # smaller, or do we need to move on to the next device?
if verbose:
print(
f"Not enough space on {devices[current_device]} to put {name} and {tied_module_names} (space "
f"available {current_max_size - current_memory_used}, needed size {module_size_with_ties})."
)
split_happened = False
for tied_module_name, tied_module in zip(tied_module_names, tied_modules):
tied_module_children = list(tied_module.named_children())
if len(tied_module_children) == 0 or tied_module.__class__.__name__ in no_split_module_classes:
# can't break this one.
continue
if verbose:
print(f"Splitting {tied_module_name}.")
tied_module_children = list(tied_module.named_parameters(recurse=False)) + tied_module_children
tied_module_children = [(f"{tied_module_name}.{n}", v) for n, v in tied_module_children]
tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n == tied_module_name][0]
modules_to_treat = (
[(name, module)]
+ modules_to_treat[:tied_module_index]
+ tied_module_children
+ modules_to_treat[tied_module_index + 1 :]
)
# Update the max layer size.
max_layer_size, max_layer_names = get_max_layer_size(
[(n, m) for n, m in modules_to_treat if isinstance(m, torch.nn.Module)],
module_sizes,
no_split_module_classes,
)
split_happened = True
break
if not split_happened:
# If the tied module is not split, we go to the next device
if verbose:
print("None of the tied module can be split, going to the next device.")
device_memory_used[device] = current_memory_used + current_memory_reserved
current_device += 1
modules_to_treat = [(name, module)] + modules_to_treat
current_memory_used = 0
else:
if verbose:
if current_max_size is None:
print(f"Putting {name} (size={module_size}) on {devices[current_device]}.")
else:
print(
f"Putting {name} (size={module_size}) on {devices[current_device]} "
f"(available={current_max_size - current_memory_used})."
)
current_memory_used += module_size
device_memory_used[device] = current_memory_used + current_memory_reserved
device_map[name] = devices[current_device]
if not offload_buffers and isinstance(module, nn.Module):
current_buffer_size = compute_module_total_buffer_size(
module, dtype=dtype, special_dtypes=special_dtypes
)
device_buffer_sizes[device] = device_buffer_sizes.get(device, 0) + current_buffer_size
if clean_result:
device_map = clean_device_map(device_map)
non_gpu_buffer_size = device_buffer_sizes.get("cpu", 0) + device_buffer_sizes.get("disk", 0)
if non_gpu_buffer_size > 0 and not offload_buffers:
is_buffer_fit_any_gpu = False
for gpu_device, gpu_max_memory in max_memory.items():
if gpu_device == "cpu" or gpu_device == "disk":
continue
if not is_buffer_fit_any_gpu:
gpu_memory_used = device_memory_used.get(gpu_device, 0)
if gpu_max_memory >= non_gpu_buffer_size + gpu_memory_used:
is_buffer_fit_any_gpu = True
if len(gpus) > 0 and not is_buffer_fit_any_gpu:
warnings.warn(
f"Current model requires {non_gpu_buffer_size} bytes of buffer for offloaded layers, which seems does "
f"not fit any GPU's remaining memory. If you are experiencing a OOM later, please consider using "
f"offload_buffers=True."
)
return device_map
def check_device_map(model: nn.Module, device_map: Dict[str, Union[int, str, torch.device]]):
"""
Checks a device map covers everything in a given model.
Args:
model (`torch.nn.Module`): The model to check the device map against.
device_map (`Dict[str, Union[int, str, torch.device]]`): The device map to check.
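    Example:
    ```python
    # A minimal sketch: this map covers both submodules of a tiny model, so no
    # error is raised.
    model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
    check_device_map(model, {"0": 0, "1": "cpu"})
    ```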
"""
all_model_tensors = [name for name, _ in model.state_dict().items()]
for module_name in device_map.keys():
if module_name == "":
all_model_tensors.clear()
break
else:
all_model_tensors = [
name
for name in all_model_tensors
if not name == module_name and not name.startswith(module_name + ".")
]
if len(all_model_tensors) > 0:
non_covered_params = ", ".join(all_model_tensors)
raise ValueError(
f"The device_map provided does not give any device for the following parameters: {non_covered_params}"
)
def load_state_dict(checkpoint_file, device_map=None):
"""
Load a checkpoint from a given file. If the checkpoint is in the safetensors format and a device map is passed, the
weights can be fast-loaded directly on the GPU.
Args:
checkpoint_file (`str`): The path to the checkpoint to load.
device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*):
            A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer
            name; once a given module name is inside, every submodule of it will be sent to the same device.
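    Example:
    ```python
    # A hedged sketch (the checkpoint path is hypothetical): load a safetensors
    # file with every tensor placed on the first GPU.
    state_dict = load_state_dict("model.safetensors", device_map={"": 0})
    ```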
"""
if checkpoint_file.endswith(".safetensors"):
with safe_open(checkpoint_file, framework="pt") as f:
metadata = f.metadata()
weight_names = f.keys()
if metadata is None:
            logger.warning(
f"The safetensors archive passed at {checkpoint_file} does not contain metadata. "
"Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata."
)
metadata = {"format": "pt"}
if metadata.get("format") not in ["pt", "tf", "flax"]:
raise OSError(
f"The safetensors archive passed at {checkpoint_file} does not contain the valid metadata. Make sure "
"you save your model with the `save_pretrained` method."
)
elif metadata["format"] != "pt":
raise ValueError(f"The checkpoint passed was saved with {metadata['format']}, we need a the pt format.")
if device_map is None:
return safe_load_file(checkpoint_file)
else:
# if we only have one device we can load everything directly
if len(set(device_map.values())) == 1:
return safe_load_file(checkpoint_file, device=list(device_map.values())[0])
devices = list(set(device_map.values()) - {"disk"})
# cpu device should always exist as fallback option
if "cpu" not in devices:
devices.append("cpu")
# For each device, get the weights that go there
device_weights = {device: [] for device in devices}
for module_name, device in device_map.items():
if device in devices:
device_weights[device].extend(
[k for k in weight_names if k == module_name or k.startswith(module_name + ".")]
)
            # all weights that don't have a device assigned should be loaded on CPU
device_weights["cpu"].extend([k for k in weight_names if k not in sum(device_weights.values(), [])])
tensors = {}
if is_tqdm_available():
progress_bar = tqdm(
main_process_only=False,
total=sum([len(device_weights[device]) for device in devices]),
unit="w",
smoothing=0,
leave=False,
)
else:
progress_bar = None
for device in devices:
target_device = device
if is_xpu_available():
current_safetensors_version = packaging.version.parse(importlib.metadata.version("safetensors"))
if compare_versions(current_safetensors_version, "<", "0.4.2"):
raise ModuleNotFoundError(
f"You need at least safetensors 0.4.2 for Intel GPU, while you have {current_safetensors_version}"
)
if isinstance(device, int):
target_device = f"xpu:{device}"
with safe_open(checkpoint_file, framework="pt", device=target_device) as f:
for key in device_weights[device]:
if progress_bar is not None:
progress_bar.set_postfix(dev=device, refresh=False)
progress_bar.set_description(key)
tensors[key] = f.get_tensor(key)
if progress_bar is not None:
progress_bar.update()
if progress_bar is not None:
progress_bar.close()
return tensors
else:
return torch.load(checkpoint_file, map_location=torch.device("cpu"))
def get_state_dict_offloaded_model(model: nn.Module):
"""
Returns the state dictionary for an offloaded model via iterative onloading
Args:
model (`torch.nn.Module`):
The offloaded model we want to save
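    Example:
    ```python
    # A minimal sketch: gather the full state dict of a model that was dispatched
    # with offloaded modules (assumes `model` came from `dispatch_model`).
    state_dict = get_state_dict_offloaded_model(model)
    ```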
"""
from ..hooks import AlignDevicesHook
state_dict = {}
placeholders = set()
for name, module in model.named_modules():
if name == "":
continue
if hasattr(module, "_hf_hook") and isinstance(module._hf_hook, AlignDevicesHook) and module._hf_hook.offload:
original_device = module._hf_hook.execution_device
# assign hook execution device to cpu
module._hf_hook.execution_device = "cpu"
# onload meta tensors to execution device
try:
module._hf_hook.pre_forward(module)
except MemoryError:
raise MemoryError("Offloaded module must fit in CPU memory to call save_model!") from None
module_state_dict = module.state_dict()
# offload meta tensors from cpu
module._hf_hook.post_forward(module, torch.tensor([]))
# re-assign hook to original execution device
module._hf_hook.execution_device = original_device
else:
module_state_dict = module.state_dict()
for key in module_state_dict:
# ignore placeholder parameters that are still on the meta device
if module_state_dict[key].device == torch.device("meta"):
placeholders.add(name + f".{key}")
continue
params = module_state_dict[key]
state_dict[name + f".{key}"] = params
for key in placeholders.copy():
if key in state_dict:
placeholders.remove(key)
if placeholders:
logger.warning(f"The following tensors were not saved because they were still on meta device: {placeholders}")
return state_dict
def load_checkpoint_in_model(
model: nn.Module,
checkpoint: Union[str, os.PathLike],
device_map: Optional[Dict[str, Union[int, str, torch.device]]] = None,
offload_folder: Optional[Union[str, os.PathLike]] = None,
dtype: Optional[Union[str, torch.dtype]] = None,
offload_state_dict: bool = False,
offload_buffers: bool = False,
    keep_in_fp32_modules: Optional[List[str]] = None,
offload_8bit_bnb: bool = False,
strict: bool = False,
):
"""
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are
loaded.
<Tip warning={true}>
Once loaded across devices, you still need to call [`dispatch_model`] on your model to make it able to run. To
group the checkpoint loading and dispatch in one single call, use [`load_checkpoint_and_dispatch`].
</Tip>
Args:
model (`torch.nn.Module`):
The model in which we want to load a checkpoint.
checkpoint (`str` or `os.PathLike`):
The folder checkpoint to load. It can be:
- a path to a file containing a whole model state dict
- a path to a `.json` file containing the index to a sharded checkpoint
- a path to a folder containing a unique `.index.json` file and the shards of a checkpoint.
- a path to a folder containing a unique pytorch_model.bin or a model.safetensors file.
device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*):
            A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer
            name; once a given module name is inside, every submodule of it will be sent to the same device.
offload_folder (`str` or `os.PathLike`, *optional*):
If the `device_map` contains any value `"disk"`, the folder where we will offload weights.
dtype (`str` or `torch.dtype`, *optional*):
If provided, the weights will be converted to that type when loaded.
offload_state_dict (`bool`, *optional*, defaults to `False`):
            If `True`, will temporarily offload the CPU state dict on the hard drive to avoid running out of CPU RAM
            if the weight of the CPU state dict + the biggest shard does not fit.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to include the buffers in the weights offloaded to disk.
keep_in_fp32_modules(`List[str]`, *optional*):
A list of the modules that we keep in `torch.float32` dtype.
offload_8bit_bnb (`bool`, *optional*):
Whether or not to enable offload of 8-bit modules on cpu/disk.
strict (`bool`, *optional*, defaults to `False`):
Whether to strictly enforce that the keys in the checkpoint state_dict match the keys of the model's
state_dict.
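    Example:
    ```python
    # A hedged sketch (the path and module names are hypothetical): load a
    # checkpoint with the first block on GPU 0 and the second on the CPU.
    load_checkpoint_in_model(
        model,
        "path/to/checkpoint_folder",
        device_map={"block1": 0, "block2": "cpu"},
    )
    ```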
"""
if offload_8bit_bnb:
from .bnb import quantize_and_offload_8bit
tied_params = find_tied_parameters(model)
if check_tied_parameters_in_config(model) and len(tied_params) == 0:
        logger.warning(
            "The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device_map` function."
        )
if device_map is not None:
check_tied_parameters_on_same_device(tied_params, device_map)
if offload_folder is None and device_map is not None and "disk" in device_map.values():
raise ValueError(
"At least one of the model submodule will be offloaded to disk, please pass along an `offload_folder`."
)
elif offload_folder is not None and device_map is not None and "disk" in device_map.values():
os.makedirs(offload_folder, exist_ok=True)
if isinstance(dtype, str):
# We accept "torch.float16" or just "float16"
dtype = dtype.replace("torch.", "")
dtype = getattr(torch, dtype)
checkpoint_files = None
index_filename = None
if os.path.isfile(checkpoint):
if str(checkpoint).endswith(".json"):
index_filename = checkpoint
else:
checkpoint_files = [checkpoint]
elif os.path.isdir(checkpoint):
# check if the whole state dict is present
potential_state_bin = [f for f in os.listdir(checkpoint) if f == WEIGHTS_NAME]
potential_state_safetensor = [f for f in os.listdir(checkpoint) if f == SAFE_WEIGHTS_NAME]
if len(potential_state_bin) == 1:
checkpoint_files = [os.path.join(checkpoint, potential_state_bin[0])]
elif len(potential_state_safetensor) == 1:
checkpoint_files = [os.path.join(checkpoint, potential_state_safetensor[0])]
else:
# otherwise check for sharded checkpoints
potential_index = [f for f in os.listdir(checkpoint) if f.endswith(".index.json")]
if len(potential_index) == 0:
raise ValueError(
f"{checkpoint} is not a folder containing a `.index.json` file or a {WEIGHTS_NAME} or a {SAFE_WEIGHTS_NAME} file"
)
elif len(potential_index) == 1:
index_filename = os.path.join(checkpoint, potential_index[0])
else:
raise ValueError(
f"{checkpoint} containing more than one `.index.json` file, delete the irrelevant ones."
)
else:
raise ValueError(
"`checkpoint` should be the path to a file containing a whole state dict, or the index of a sharded "
f"checkpoint, or a folder containing a sharded checkpoint or the whole state dict, but got {checkpoint}."
)
if index_filename is not None:
checkpoint_folder = os.path.split(index_filename)[0]
with open(index_filename) as f:
index = json.loads(f.read())
if "weight_map" in index:
index = index["weight_map"]
checkpoint_files = sorted(list(set(index.values())))
checkpoint_files = [os.path.join(checkpoint_folder, f) for f in checkpoint_files]
    # Logic for missing/unexpected keys goes here.
offload_index = {}
if offload_state_dict:
state_dict_folder = tempfile.mkdtemp()
state_dict_index = {}
unexpected_keys = set()
model_keys = set(model.state_dict().keys())
buffer_names = [name for name, _ in model.named_buffers()]
for checkpoint_file in checkpoint_files:
loaded_checkpoint = load_state_dict(checkpoint_file, device_map=device_map)
if device_map is None:
model.load_state_dict(loaded_checkpoint, strict=strict)
unexpected_keys.update(set(loaded_checkpoint.keys()) - model_keys)
else:
for param_name, param in loaded_checkpoint.items():
# skip SCB parameter (for 8-bit serialization)
if "SCB" in param_name:
continue
if param_name not in model_keys:
unexpected_keys.add(param_name)
if not strict:
continue # Skip loading this parameter.
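                # Find the closest ancestor of this parameter that has an entry in
                # the device map by walking up its dotted module path.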
module_name = param_name
while len(module_name) > 0 and module_name not in device_map:
module_name = ".".join(module_name.split(".")[:-1])
if module_name == "" and "" not in device_map:
# TODO: group all errors and raise at the end.
raise ValueError(f"{param_name} doesn't have any device set.")
param_device = device_map[module_name]
new_dtype = dtype
if dtype is not None and torch.is_floating_point(param):
if keep_in_fp32_modules is not None and dtype == torch.float16:
proceed = False
for key in keep_in_fp32_modules:
if ((key in param_name) and (key + "." in param_name)) or key == param_name:
proceed = True
break
if proceed:
new_dtype = torch.float32
if "weight" in param_name and param_name.replace("weight", "SCB") in loaded_checkpoint.keys():
if param.dtype == torch.int8:
fp16_statistics = loaded_checkpoint[param_name.replace("weight", "SCB")]
else:
fp16_statistics = None
if param_device == "disk":
if offload_buffers or param_name not in buffer_names:
if new_dtype is None:
new_dtype = param.dtype
if offload_8bit_bnb:
quantize_and_offload_8bit(
model, param, param_name, new_dtype, offload_folder, offload_index, fp16_statistics
)
continue
else:
set_module_tensor_to_device(model, param_name, "meta", dtype=new_dtype)
offload_weight(param, param_name, offload_folder, index=offload_index)
elif param_device == "cpu" and offload_state_dict:
if new_dtype is None:
new_dtype = param.dtype
if offload_8bit_bnb:
quantize_and_offload_8bit(
model, param, param_name, new_dtype, state_dict_folder, state_dict_index, fp16_statistics
)
else:
set_module_tensor_to_device(model, param_name, "meta", dtype=new_dtype)
offload_weight(param, param_name, state_dict_folder, index=state_dict_index)
else:
set_module_tensor_to_device(
model,
param_name,
param_device,
value=param,
dtype=new_dtype,
fp16_statistics=fp16_statistics,
)
# Force Python to clean up.
del loaded_checkpoint
gc.collect()
if not strict and len(unexpected_keys) > 0:
logger.warning(
f"Some weights of the model checkpoint at {checkpoint} were not used when"
f" initializing {model.__class__.__name__}: {unexpected_keys}. This may or may not be an issue - make sure that the checkpoint does not have unnecessary parameters, or that the model definition correctly corresponds to the checkpoint."
)
save_offload_index(offload_index, offload_folder)
# Load back offloaded state dict on CPU
if offload_state_dict:
load_offloaded_weights(model, state_dict_index, state_dict_folder)
shutil.rmtree(state_dict_folder)
retie_parameters(model, tied_params)
def get_mixed_precision_context_manager(native_amp: bool = False, autocast_kwargs: AutocastKwargs = None):
"""
Return a context manager for autocasting mixed precision
Args:
native_amp (`bool`, *optional*, defaults to False):
Whether mixed precision is actually enabled.
        autocast_kwargs (`AutocastKwargs`, *optional*):
            Keyword arguments (such as `cache_enabled`) to pass to `torch.autocast`.
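    Example:
    ```python
    # A minimal sketch: run a forward pass under the returned autocast context
    # (assumes an `Accelerator` has already been instantiated).
    with get_mixed_precision_context_manager(native_amp=True):
        outputs = model(inputs)
    ```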
"""
state = AcceleratorState()
if autocast_kwargs is None:
autocast_kwargs = {}
else:
autocast_kwargs = autocast_kwargs.to_kwargs()
if native_amp:
device_type = (
"cuda"
if (state.distributed_type == DistributedType.XLA and is_torch_xla_available(check_is_gpu=True))
else state.device.type
)
if state.mixed_precision == "fp16":
return torch.autocast(device_type=device_type, dtype=torch.float16, **autocast_kwargs)
elif state.mixed_precision == "bf16" and state.distributed_type in [
DistributedType.NO,
DistributedType.MULTI_CPU,
DistributedType.MULTI_GPU,
DistributedType.MULTI_MLU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.FSDP,
DistributedType.XLA,
]:
return torch.autocast(device_type=device_type, dtype=torch.bfloat16, **autocast_kwargs)
else:
return torch.autocast(device_type=device_type, **autocast_kwargs)
else:
return contextlib.nullcontext()
|
accelerate/src/accelerate/utils/modeling.py/0
|
{
"file_path": "accelerate/src/accelerate/utils/modeling.py",
"repo_id": "accelerate",
"token_count": 34738
}
| 8
|
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import unittest
from pathlib import Path
from unittest.mock import patch
import torch
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
from accelerate.commands.config.config_args import BaseConfig, ClusterConfig, SageMakerConfig
from accelerate.commands.estimate import estimate_command, estimate_command_parser, gather_data
from accelerate.commands.launch import _validate_launch_command, launch_command_parser
from accelerate.test_utils import execute_subprocess_async
from accelerate.test_utils.testing import (
DEFAULT_LAUNCH_COMMAND,
get_launch_command,
path_in_accelerate_package,
require_multi_device,
require_timm,
require_transformers,
run_command,
)
from accelerate.utils import patch_environment
from accelerate.utils.launch import prepare_simple_launcher_cmd_env
class AccelerateLauncherTester(unittest.TestCase):
"""
Test case for verifying the `accelerate launch` CLI operates correctly.
    If a `default_config.yaml` file is located in the cache, it will be temporarily moved
    for the duration of the tests.
"""
test_file_path = path_in_accelerate_package("test_utils", "scripts", "test_cli.py")
notebook_launcher_path = path_in_accelerate_package("test_utils", "scripts", "test_notebook.py")
config_folder = Path.home() / ".cache/huggingface/accelerate"
config_file = "default_config.yaml"
config_path = config_folder / config_file
changed_path = config_folder / "_default_config.yaml"
test_config_path = Path("tests/test_configs")
@classmethod
def setUpClass(cls):
if cls.config_path.is_file():
cls.config_path.rename(cls.changed_path)
@classmethod
def tearDownClass(cls):
if cls.changed_path.is_file():
cls.changed_path.rename(cls.config_path)
def test_no_config(self):
if torch.cuda.is_available() and (torch.cuda.device_count() > 1):
cmd = get_launch_command(multi_gpu=True)
else:
cmd = DEFAULT_LAUNCH_COMMAND
cmd.append(self.test_file_path)
execute_subprocess_async(cmd, env=os.environ.copy())
def test_config_compatibility(self):
for config in sorted(self.test_config_path.glob("**/*.yaml")):
if "invalid" in str(config) or "mpi" in str(config):
continue
with self.subTest(config_file=config):
cmd = get_launch_command(config_file=config) + [self.test_file_path]
execute_subprocess_async(cmd)
def test_invalid_keys(self):
config_path = self.test_config_path / "invalid_keys.yaml"
with self.assertRaises(
RuntimeError,
msg="The config file at 'invalid_keys.yaml' had unknown keys ('another_invalid_key', 'invalid_key')",
):
cmd = get_launch_command(config_file=config_path) + [self.test_file_path]
execute_subprocess_async(cmd)
def test_accelerate_test(self):
execute_subprocess_async(["accelerate", "test"])
@require_multi_device
def test_notebook_launcher(self):
"""
This test checks a variety of situations and scenarios
with the `notebook_launcher`
"""
cmd = ["python", self.notebook_launcher_path]
with patch_environment(omp_num_threads=1, accelerate_num_processes=2):
run_command(cmd)
def test_mpi_multicpu_config_cmd(self):
"""
Parses a launch command with a test file and the 0_28_0_mpi.yaml config. Tests getting the command and
environment vars and verifies the mpirun command arg values.
"""
mpi_config_path = str(self.test_config_path / "0_28_0_mpi.yaml")
test_file_arg = "--cpu"
with patch("sys.argv", ["accelerate", str(self.test_file_path), test_file_arg]):
parser = launch_command_parser()
args = parser.parse_args()
args.config_file = mpi_config_path
args, _, _ = _validate_launch_command(args)
# Mock out the check for mpirun version to simulate Intel MPI
with patch("accelerate.utils.launch.which", return_value=True):
with patch("accelerate.utils.launch.subprocess.check_output", return_value=b"Intel MPI"):
cmd, _ = prepare_simple_launcher_cmd_env(args)
# Verify the mpirun command args
expected_mpirun_cmd = ["mpirun", "-f", "/home/user/hostfile", "-ppn", "4", "-n", "16"]
self.assertGreater(len(cmd), len(expected_mpirun_cmd))
generated_mpirun_cmd = cmd[0 : len(expected_mpirun_cmd)]
self.assertEqual(expected_mpirun_cmd, generated_mpirun_cmd)
# Verify the python script and args in the mpirun command
python_script_cmd = cmd[len(expected_mpirun_cmd) :]
self.assertEqual(len(python_script_cmd), 3)
self.assertEqual(python_script_cmd[1], str(self.test_file_path))
self.assertEqual(python_script_cmd[2], test_file_arg)
class LaunchArgTester(unittest.TestCase):
"""
Test cases revolving around the CLI wrappers
"""
parser = launch_command_parser()
def test_hyphen(self):
# Try a little from each cluster
args = ["--config-file", "test.yaml", "test.py"]
result = self.parser.parse_args(args)
assert result.config_file == "test.yaml"
assert result.multi_gpu is False
args = ["--multi-gpu", "--num-processes", "4", "test.py"]
result = self.parser.parse_args(args)
assert result.multi_gpu is True
assert result.num_processes == 4
# And use a mix
args = ["--multi-gpu", "--use-deepspeed", "--use-fsdp", "--num_processes", "4", "test.py"]
result = self.parser.parse_args(args)
assert result.multi_gpu is True
assert result.use_deepspeed is True
assert result.use_fsdp is True
assert result.num_processes == 4
def test_underscore(self):
# Try a little from each cluster
args = ["--config_file", "test.yaml", "test.py"]
result = self.parser.parse_args(args)
assert result.config_file == "test.yaml"
args = ["--multi_gpu", "--num_processes", "4", "test.py"]
result = self.parser.parse_args(args)
assert result.multi_gpu is True
assert result.num_processes == 4
# And use a mix
args = ["--multi_gpu", "--use_deepspeed", "--use_fsdp", "--num-processes", "4", "test.py"]
result = self.parser.parse_args(args)
assert result.multi_gpu is True
assert result.use_deepspeed is True
assert result.use_fsdp is True
assert result.num_processes == 4
def test_duplicate_entities(self):
help_return = self.parser.format_help()
args = self.parser.parse_args(["test.py"])
for arg in args.__dict__:
if "_" in arg:
bad_arg = f'--{arg.replace("_", "-")}'
# Need an exception for `num-processes` since it's in the docstring
if bad_arg == "--num-processes":
assert help_return.count(bad_arg) == 1, f"Found {bad_arg} in `accelerate launch -h`"
else:
assert bad_arg not in help_return, f"Found {bad_arg} in `accelerate launch -h`"
class ClusterConfigTester(unittest.TestCase):
"""
Test case for verifying the config dataclasses work
"""
def test_base_config(self):
# Tests that all the dataclasses can be initialized
config = BaseConfig(
compute_environment="LOCAL_MACHINE",
distributed_type="NO",
mixed_precision="fp16",
debug=False,
use_cpu=False,
)
assert config.compute_environment == "LOCAL_MACHINE"
assert config.distributed_type == "NO"
assert config.mixed_precision == "fp16"
assert config.debug is False
def test_cluster_config(self):
# First normally
config = ClusterConfig(
compute_environment="LOCAL_MACHINE",
distributed_type="NO",
mixed_precision="fp16",
num_processes=2,
debug=False,
use_cpu=False,
)
assert config.compute_environment == "LOCAL_MACHINE"
assert config.distributed_type == "NO"
assert config.mixed_precision == "fp16"
assert config.debug is False
# Then check with other compute environments
config = ClusterConfig(
compute_environment="LOCAL_MACHINE",
distributed_type="MULTI_GPU",
mixed_precision="fp16",
debug=False,
num_processes=2,
enable_cpu_affinity=True,
use_cpu=False,
)
assert config.distributed_type == "MULTI_GPU"
assert config.num_processes == 2
assert config.enable_cpu_affinity is True
def test_sagemaker_config(self):
config = SageMakerConfig(
compute_environment="AMAZON_SAGEMAKER",
distributed_type="NO",
mixed_precision="fp16",
debug=False,
use_cpu=False,
ec2_instance_type="MY_TYPE",
iam_role_name="MY_ROLE",
)
assert config.compute_environment == "AMAZON_SAGEMAKER"
assert config.ec2_instance_type == "MY_TYPE"
assert config.iam_role_name == "MY_ROLE"
class TpuConfigTester(unittest.TestCase):
"""
Test case for verifying the `accelerate tpu-config` CLI passes the right `gcloud` command.
"""
tpu_name = "test-tpu"
tpu_zone = "us-central1-a"
command = "ls"
cmd = ["accelerate", "tpu-config"]
base_output = "cd /usr/share"
command_file = "tests/test_samples/test_command_file.sh"
gcloud = "Running gcloud compute tpus tpu-vm ssh"
def test_base(self):
output = run_command(
self.cmd
+ ["--command", self.command, "--tpu_zone", self.tpu_zone, "--tpu_name", self.tpu_name, "--debug"],
return_stdout=True,
)
assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output
def test_base_backward_compatibility(self):
output = run_command(
self.cmd
+ [
"--config_file",
"tests/test_configs/0_12_0.yaml",
"--command",
self.command,
"--tpu_zone",
self.tpu_zone,
"--tpu_name",
self.tpu_name,
"--debug",
],
return_stdout=True,
)
assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output
def test_with_config_file(self):
output = run_command(
self.cmd + ["--config_file", "tests/test_configs/latest.yaml", "--debug"], return_stdout=True
)
assert (
f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all'
in output
)
def test_with_config_file_and_command(self):
output = run_command(
self.cmd + ["--config_file", "tests/test_configs/latest.yaml", "--command", self.command, "--debug"],
return_stdout=True,
)
assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output
def test_with_config_file_and_multiple_command(self):
output = run_command(
self.cmd
+ [
"--config_file",
"tests/test_configs/latest.yaml",
"--command",
self.command,
"--command",
'echo "Hello World"',
"--debug",
],
return_stdout=True,
)
assert (
f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls; echo "Hello World" --worker all'
in output
)
def test_with_config_file_and_command_file(self):
output = run_command(
self.cmd
+ ["--config_file", "tests/test_configs/latest.yaml", "--command_file", self.command_file, "--debug"],
return_stdout=True,
)
assert (
f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all'
in output
)
def test_with_config_file_and_command_file_backward_compatibility(self):
output = run_command(
self.cmd
+ [
"--config_file",
"tests/test_configs/0_12_0.yaml",
"--command_file",
self.command_file,
"--tpu_zone",
self.tpu_zone,
"--tpu_name",
self.tpu_name,
"--debug",
],
return_stdout=True,
)
assert (
f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all'
in output
)
def test_accelerate_install(self):
output = run_command(
self.cmd + ["--config_file", "tests/test_configs/latest.yaml", "--install_accelerate", "--debug"],
return_stdout=True,
)
assert (
f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; pip install accelerate -U; echo "hello world"; echo "this is a second command" --worker all'
in output
)
def test_accelerate_install_version(self):
output = run_command(
self.cmd
+ [
"--config_file",
"tests/test_configs/latest.yaml",
"--install_accelerate",
"--accelerate_version",
"12.0.0",
"--debug",
],
return_stdout=True,
)
assert (
f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; pip install accelerate==12.0.0; echo "hello world"; echo "this is a second command" --worker all'
in output
)
class ModelEstimatorTester(unittest.TestCase):
"""
Test case for checking the output of `accelerate estimate-memory` is correct.
- Uses `estimate_command` when trying to catch raised errors
- Uses `gather_data` when just verifying the calculations are correct
"""
parser = estimate_command_parser()
def test_invalid_model_name(self):
with self.assertRaises(
RepositoryNotFoundError, msg="Repo for model `somebrokenname` does not exist on the Hub"
):
args = self.parser.parse_args(["somebrokenname"])
estimate_command(args)
@require_timm
def test_invalid_model_name_timm(self):
with self.assertRaises(RuntimeError, msg="Tried to load `muellerzr/dummy` with `timm` but"):
args = self.parser.parse_args(["muellerzr/dummy", "--library_name", "timm"])
estimate_command(args)
@require_transformers
def test_invalid_model_name_transformers(self):
with self.assertRaises(RuntimeError, msg="Tried to load `muellerzr/dummy` with `transformers` but"):
args = self.parser.parse_args(["muellerzr/dummy", "--library_name", "transformers"])
estimate_command(args)
def test_no_metadata(self):
with self.assertRaises(
ValueError, msg="Model `muellerzr/dummy` does not have any library metadata on the Hub"
):
args = self.parser.parse_args(["muellerzr/dummy"])
estimate_command(args)
def test_gated(self):
with self.assertRaises(GatedRepoError, msg="Repo for model `meta-llama/Llama-2-7b-hf` is gated"):
args = self.parser.parse_args(["meta-llama/Llama-2-7b-hf"])
with patch_environment(hf_hub_disable_implicit_token="1"):
estimate_command(args)
@require_transformers
def test_remote_code(self):
# Also tests that custom `Auto` classes work
args = self.parser.parse_args(["hf-internal-testing/test_dynamic_model"])
with self.assertRaises(ValueError, msg="--trust_remote_code"):
gather_data(args)
# Verify it works with the flag
args = self.parser.parse_args(["hf-internal-testing/test_dynamic_model", "--trust_remote_code"])
gather_data(args)
@require_transformers
def test_explicit_dtypes(self):
args = self.parser.parse_args(["bert-base-cased", "--dtypes", "float32", "float16"])
output = gather_data(args)
# The largest layer and total size of the model in bytes
largest_layer, total_size = 89075712, 433249280
# Check that full precision -> int4 is calculating correctly
assert len(output) == 2, f"Output was missing a precision, expected 2 but received {len(output)}"
for i, factor in enumerate([1, 2]):
precision = 32 // factor
precision_str = f"float{precision}"
largest_layer_estimate = largest_layer / factor
total_size_estimate = total_size / factor
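            # Training memory is estimated at roughly 4x the model size
            # (weights + gradients + two Adam optimizer states), hence the factor below.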
total_training_size_estimate = total_size_estimate * 4
assert precision_str == output[i][0], f"Output is missing precision `{precision_str}`"
assert (
largest_layer_estimate == output[i][1]
), f"Calculation for largest layer size in `{precision_str}` is incorrect."
assert (
total_size_estimate == output[i][2]
), f"Calculation for total size in `{precision_str}` is incorrect."
assert total_training_size_estimate == max(
output[i][3].values()
), f"Calculation for total training size in `{precision_str}` is incorrect."
@require_transformers
def test_transformers_model(self):
args = self.parser.parse_args(["bert-base-cased", "--dtypes", "float32"])
output = gather_data(args)
# The largest layer and total size of the model in bytes
largest_layer, total_size = 89075712, 433249280
assert (
largest_layer == output[0][1]
), f"Calculation for largest layer size in `fp32` is incorrect, expected {largest_layer} but received {output[0][1]}"
assert (
total_size == output[0][2]
), f"Calculation for total size in `fp32` is incorrect, expected {total_size} but received {output[0][2]}"
@require_transformers
def test_no_split_modules(self):
# idefics-80b-instruct has ["IdeficsDecoderLayer", "IdeficsGatedCrossAttentionLayer"]
args = self.parser.parse_args(["HuggingFaceM4/idefics-80b-instruct", "--dtypes", "float32"])
output = gather_data(args)
# without factoring in `no_split` modules, the largest layer is 721420288 bytes
assert output[0][1] != 721420288, "Largest layer calculation incorrect, did not factor in `no_split` modules."
# the real answer is 3240165632 bytes
assert output[0][1] == 3240165632
@require_timm
def test_timm_model(self):
args = self.parser.parse_args(["timm/resnet50.a1_in1k", "--library_name", "timm"])
output = gather_data(args)
# The largest layer and total size of the model in bytes
largest_layer, total_size = 9437184, 102441032
assert (
largest_layer == output[0][1]
), f"Calculation for largest layer size in `fp32` is incorrect, expected {largest_layer} but received {output[0][1]}"
assert (
total_size == output[0][2]
), f"Calculation for total size in `fp32` is incorrect, expected {total_size} but received {output[0][2]}"
|
accelerate/tests/test_cli.py/0
|
{
"file_path": "accelerate/tests/test_cli.py",
"repo_id": "accelerate",
"token_count": 9040
}
| 9
|
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import unittest
import torch
from accelerate import Accelerator
from accelerate.big_modeling import dispatch_model
from accelerate.test_utils import (
DEFAULT_LAUNCH_COMMAND,
assert_exception,
device_count,
execute_subprocess_async,
get_launch_command,
path_in_accelerate_package,
require_huggingface_suite,
require_multi_device,
require_multi_gpu,
require_non_torch_xla,
require_pippy,
)
from accelerate.utils import patch_environment
class MultiDeviceTester(unittest.TestCase):
test_file_path = path_in_accelerate_package("test_utils", "scripts", "test_script.py")
data_loop_file_path = path_in_accelerate_package("test_utils", "scripts", "test_distributed_data_loop.py")
operation_file_path = path_in_accelerate_package("test_utils", "scripts", "test_ops.py")
pippy_file_path = path_in_accelerate_package("test_utils", "scripts", "external_deps", "test_pippy.py")
@require_multi_device
def test_multi_device(self):
print(f"Found {device_count} devices.")
cmd = DEFAULT_LAUNCH_COMMAND + [self.test_file_path]
with patch_environment(omp_num_threads=1):
execute_subprocess_async(cmd)
@require_multi_device
def test_multi_device_ops(self):
print(f"Found {device_count} devices.")
cmd = DEFAULT_LAUNCH_COMMAND + [self.operation_file_path]
with patch_environment(omp_num_threads=1):
execute_subprocess_async(cmd)
@require_multi_device
def test_pad_across_processes(self):
print(f"Found {device_count} devices.")
cmd = DEFAULT_LAUNCH_COMMAND + [inspect.getfile(self.__class__)]
with patch_environment(omp_num_threads=1):
execute_subprocess_async(cmd)
@require_non_torch_xla
@require_multi_gpu
def test_distributed_data_loop(self):
"""
This TestCase checks the behaviour that occurs during distributed training or evaluation,
when the batch size does not evenly divide the dataset size.
"""
print(f"Found {device_count} devices, using 2 devices only")
cmd = get_launch_command(num_processes=2) + [self.data_loop_file_path]
with patch_environment(omp_num_threads=1, cuda_visible_devices="0,1"):
execute_subprocess_async(cmd)
@require_multi_gpu
@require_pippy
@require_huggingface_suite
def test_pippy(self):
"""
Checks the integration with the pippy framework
"""
print(f"Found {device_count} devices")
cmd = get_launch_command(multi_gpu=True, num_processes=device_count) + [self.pippy_file_path]
with patch_environment(omp_num_threads=1):
execute_subprocess_async(cmd)
if __name__ == "__main__":
accelerator = Accelerator()
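    # Each process builds a tensor whose first dimension depends on its rank, so
    # `pad_across_processes` must pad every tensor up to the largest one.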
shape = (accelerator.state.process_index + 2, 10)
tensor = torch.randint(0, 10, shape).to(accelerator.device)
error_msg = ""
tensor1 = accelerator.pad_across_processes(tensor)
if tensor1.shape[0] != accelerator.state.num_processes + 1:
error_msg += f"Found shape {tensor1.shape} but should have {accelerator.state.num_processes + 1} at dim 0."
if not torch.equal(tensor1[: accelerator.state.process_index + 2], tensor):
error_msg += "Tensors have different values."
if not torch.all(tensor1[accelerator.state.process_index + 2 :] == 0):
error_msg += "Padding was not done with the right value (0)."
tensor2 = accelerator.pad_across_processes(tensor, pad_first=True)
if tensor2.shape[0] != accelerator.state.num_processes + 1:
error_msg += f"Found shape {tensor2.shape} but should have {accelerator.state.num_processes + 1} at dim 0."
index = accelerator.state.num_processes - accelerator.state.process_index - 1
if not torch.equal(tensor2[index:], tensor):
error_msg += "Tensors have different values."
if not torch.all(tensor2[:index] == 0):
error_msg += "Padding was not done with the right value (0)."
# Raise error at the end to make sure we don't stop at the first failure.
if len(error_msg) > 0:
raise ValueError(error_msg)
# Check device_map
accelerator.print("Test `device_map` cannot be prepared.")
class ModelForTest(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear1 = torch.nn.Linear(3, 4)
self.batchnorm = torch.nn.BatchNorm1d(4)
self.linear2 = torch.nn.Linear(4, 5)
def forward(self, x):
return self.linear2(self.batchnorm(self.linear1(x)))
device_map = {"linear1": 0, "batchnorm": "cpu", "linear2": 1}
model = ModelForTest()
dispatch_model(model, device_map=device_map)
with assert_exception(ValueError, "You can't train a model that has been loaded with"):
model = accelerator.prepare_model(model)
|
accelerate/tests/test_multigpu.py/0
|
{
"file_path": "accelerate/tests/test_multigpu.py",
"repo_id": "accelerate",
"token_count": 2091
}
| 10
|
#!/bin/bash
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive
#SBATCH --gres=gpu:8
#SBATCH --partition=hopper-prod # Adjust this for your cluster
#SBATCH --output=/fsx/h4/logs/%x-%j.out # Adjust this for your cluster
#SBATCH --err=/fsx/h4/logs/%x-%j.err # Adjust this for your cluster
set -x -e
source ~/.bashrc
conda activate handbook
echo "START TIME: $(date)"
MODEL=$1
TASK=$2
PRECISION=$3
ACCELERATOR=$4
OPTIONAL_ARGS=$5
# Training setup
NUM_NODES=$SLURM_NNODES
GPUS_PER_NODE=8
WORLD_SIZE=$(($NUM_NODES*$GPUS_PER_NODE))
# Due to conflicts between Accelerate's DeepSpeed configs and Transformers' TrainingArguments, we need to parse the gradient accumulation steps from the config file to ensure they match
CONFIG_FILE=recipes/$MODEL/$TASK/config_$PRECISION.yaml
GRAD_ACC_STEPS=$(grep 'gradient_accumulation_steps' $CONFIG_FILE | awk '{print $2}')
# Split the string into individual arguments
IFS=' ' read -ra ARGS <<< "$OPTIONAL_ARGS"
# Loop through the arguments and find the one with "--gradient_accumulation_steps"
for arg in "${ARGS[@]}"; do
if [[ "$arg" == "--gradient_accumulation_steps="* ]]; then
# Extract the value after the equals sign
GRAD_ACC_STEPS="${arg#*=}"
break # Exit the loop once we find the desired argument
fi
done
echo "Gradient accumulation steps: $GRAD_ACC_STEPS"
# so processes know who to talk to
MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
MASTER_PORT=6000
export CMD=" \
scripts/run_$TASK.py $CONFIG_FILE $OPTIONAL_ARGS
"
export LAUNCHER="HF_HUB_ENABLE_HF_TRANSFER=1 ACCELERATE_LOG_LEVEL=info TRANSFORMERS_VERBOSITY=info accelerate launch \
--config_file recipes/accelerate_configs/$ACCELERATOR.yaml \
--gradient_accumulation_steps $GRAD_ACC_STEPS \
--num_machines $NUM_NODES \
--num_processes $WORLD_SIZE \
--main_process_ip $MASTER_ADDR \
--main_process_port $MASTER_PORT \
--machine_rank \$SLURM_PROCID \
--rdzv_conf "rdzv_backend=c10d,rdzv_endpoint=$MASTER_ADDR:$MASTER_PORT" \
--max_restarts 1 \
--role \$(hostname -s): \
--tee 3 \
"
# force crashing on nccl issues like hanging broadcast
export NCCL_ASYNC_ERROR_HANDLING=1
# export NCCL_DEBUG=INFO
# export NCCL_DEBUG_SUBSYS=COLL
# export NCCL_SOCKET_NTHREADS=1
# export NCCL_NSOCKS_PERTHREAD=1
# export CUDA_LAUNCH_BLOCKING=1
# Specific configuration optimized for the Hugging Face Compute Cluster
# Be ye warned this may not work on other clusters!
module load cuda/12.1
# srun error handling:
# --wait=60: wait 60 sec after the first task terminates before terminating all remaining tasks
# --kill-on-bad-exit=1: terminate a step if any task exits with a non-zero exit code
SRUN_ARGS=" \
--wait=60 \
--kill-on-bad-exit=1 \
"
clear; srun $SRUN_ARGS --jobid $SLURM_JOB_ID bash -c "$LAUNCHER --role \$SLURMD_NODENAME: $CMD" 2>&1
echo "END TIME: $(date)"
|
alignment-handbook/recipes/launch.slurm/0
|
{
"file_path": "alignment-handbook/recipes/launch.slurm",
"repo_id": "alignment-handbook",
"token_count": 1135
}
| 11
|
# Scripts to Train and Evaluate Chat Models
## Fine-tuning
In the handbook, we provide three main ways to align LLMs for chat:
- Full fine-tuning on a multi-GPU machine with DeepSpeed ZeRO-3 (tested on an 8 x A100 (80GB) node).
- LoRA or QLoRA fine-tuning on a single consumer 24GB GPU (tested on an RTX 4090).
- LoRA fine-tuning on a multi-GPU machine with DeepSpeed ZeRO-3 (tested on 2 x A100s (80GB)).
In practice, we find comparable performance for both full and QLoRA fine-tuning, with the latter having the advantage of producing small adapter weights that are fast to upload and download from the Hugging Face Hub. Here are the general commands to fine-tune your models:
```shell
# Full training with ZeRO-3 on 8 GPUs
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml scripts/run_{task}.py recipes/{model_name}/{task}/config_full.yaml
# QLoRA 4-bit training on a single GPU
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_{task}.py recipes/{model_name}/{task}/config_qlora.yaml
# LoRA training on a single GPU
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_{task}.py recipes/{model_name}/{task}/config_qlora.yaml --load_in_4bit=false
# LoRA training with ZeRO-3 on two or more GPUs
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml --num_processes={num_gpus} scripts/run_{task}.py recipes/{model_name}/{task}/config_qlora.yaml --load_in_4bit=false
```
Here `{task}` refers to the type of training you wish to run. Currently, the following tasks are supported: continued pretraining `cpt`, supervised finetuning `sft`, and direct preference optimisation `dpo`. Note that `cpt` is only present in the `gpt-nl` example recipe. `{model_name}` refers to the choice of a recipe in the `recipes` directory. For example, to replicate Zephyr-7B-β you can run:
```shell
# Step 1 - train SFT policy
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_full.yaml
# Step 2 - align with DPO
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml scripts/run_dpo.py recipes/zephyr-7b-beta/dpo/config_full.yaml
```
**💡 Tip:** If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant (and thus replicate our results).
By default, these scripts will push each model to your Hugging Face Hub username, i.e. `{username}/{model_name}-{task}`. You can override the parameters in each YAML config by appending them to the command as follows:
```shell
# Change batch size, number of epochs etc
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml scripts/run_{task}.py recipes/{model_name}/{task}/config_full.yaml --per_device_train_batch_size=42 --num_train_epochs=5
```
## Logging with Weights and Biases
By default all training metrics are logged with TensorBoard. If you have a [Weights and Biases](https://wandb.ai/site) account and are logged in, you can view the training metrics by appending `--report_to=wandb`, e.g.
```shell
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml scripts/run_{task}.py recipes/{model_name}/{task}/config_full.yaml --report_to=wandb
```
## Launching jobs on a Slurm cluster
If you have access to a Slurm cluster, we provide a `recipes/launch.slurm` script that will automatically queue training jobs for you. Here's how you can use it:
```shell
sbatch --job-name=handbook_{task} --nodes=1 recipes/launch.slurm {model_name} {task} {precision} {accelerator}
```
Here `{model_name}` and `{task}` are defined as above, while `{precision}` refers to the type of training (`full` vs `qlora`) and `{accelerator}` refers to the choice of 🤗 Accelerate config in `recipes/accelerate_configs`. If you wish to override the default config parameters, you can provide them by appending a space-separated string like `'--arg1=value1 --arg2=value2'`. Here's a concrete example to run SFT on 1 node of 8 GPUs:
```shell
# Launch on Slurm and override default hyperparameters
sbatch --job-name=handbook_sft --nodes=1 recipes/launch.slurm zephyr-7b-beta sft full deepspeed_zero3 '--per_device_train_batch_size=42 --num_train_epochs=5'
```
You can scale the number of nodes by increasing the `--nodes` flag.
**⚠️ Note:** the configuration in `recipes/launch.slurm` is optimised for the Hugging Face Compute Cluster and may require tweaking to be adapted to your own compute nodes.
## Fine-tuning on your datasets
Under the hood, each training script uses the `get_datasets()` function which allows one to easily combine multiple datasets with varying proportions. For instance, this is how one can specify multiple datasets and which splits to combine in one of the YAML configs:
```yaml
dataset_mixer:
dataset_1: 0.5 # Use 50% of the training examples
dataset_2: 0.66 # Use 66% of the training examples
dataset_3: 0.10 # Use 10% of the training examples
dataset_splits:
- train_xxx # The training splits to mix
- test_xxx # The test splits to mix
```
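For reference, the same mixer can also be passed to `get_datasets()` directly in Python. The sketch below uses placeholder dataset names and mirrors the fractions from the YAML above:
```python
from alignment import get_datasets

# Placeholder dataset names; fractions mirror the YAML config above.
dataset_mixer = {"dataset_1": 0.5, "dataset_2": 0.66, "dataset_3": 0.10}
datasets = get_datasets(
    dataset_mixer,
    splits=["train_xxx", "test_xxx"],
    columns_to_keep=["messages"],
)
```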
If you want to fine-tune on your datasets, the main thing to keep in mind is how the chat templates are applied to the dataset blend. Since each task (SFT, DPO, etc.) requires a different format, we assume the datasets have the following columns (concrete sketches follow the lists below):
**SFT**
* `messages`: A list of `dicts` in the form `{"role": "{role}", "content": {content}}`.
* See [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) for an example.
**DPO**
* `chosen`: A list of `dicts` in the form `{"role": "{role}", "content": {content}}` corresponding to the preferred dialogue.
* `rejected`: A list of `dicts` in the form `{"role": "{role}", "content": {content}}` corresponding to the dispreferred dialogue.
* See [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) for an example.
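To make these columns concrete, here is a hedged sketch of one SFT example and one DPO example (the contents are purely illustrative):
```python
sft_example = {
    "messages": [
        {"role": "user", "content": "Tell me a joke."},
        {"role": "assistant", "content": "Why did the GPU cross the road?"},
    ]
}
dpo_example = {
    "chosen": [
        {"role": "user", "content": "Tell me a joke."},
        {"role": "assistant", "content": "A helpful, on-topic joke."},
    ],
    "rejected": [
        {"role": "user", "content": "Tell me a joke."},
        {"role": "assistant", "content": "An off-topic or unhelpful reply."},
    ],
}
```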
We also find it useful to include dedicated splits per task in our datasets, so e.g. we have:
* `{train,test}_sft`: Splits for SFT training.
* `{train,test}_gen`: Splits for generation ranking like rejection sampling or PPO.
* `{train,test}_prefs`: Splits for preference modelling, like reward modelling or DPO.
If you format your dataset in the same way, our training scripts should work out of the box!
## Evaluating chat models
We recommend benchmarking chat models on:
* [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench): a multi-turn benchmark spanning 80 dialogues and 10 domains.
* [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval): a single-turn benchmark which evaluates the helpfulness of chat and instruct models against `text-davinci-003`.
For both benchmarks, we have added support for the [Zephyr chat template](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full/blob/ac6e600eefcce74f5e8bae1035d4f66019e93190/tokenizer_config.json#L30) (which is the default produced by our scripts), so you can evaluate models produced by our scripts as follows:
**MT-Bench**
* Follow the installation instructions [here](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge)
* Make sure the word `zephyr` exists in the `--model-path` argument when generating the model responses [here](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge#step-1-generate-model-answers-to-mt-bench-questions). This will ensure the correct chat template is loaded. For example, the following model name is valid: `--model-path {hub_username}/my-baby-zephyr`
* Generate the model responses and GPT-4 rankings.
**AlpacaEval**
* Follow the installation instructions [here](https://github.com/tatsu-lab/alpaca_eval#quick-start)
* Copy-paste the [config](https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/models_configs/zephyr-7b-beta/configs.yaml) for `zephyr-7b-beta` and place it in the `model_configs` directory under `{your_zephyr_model}`.
* Next, update the [config name](https://github.com/tatsu-lab/alpaca_eval/blob/2daa6e11b194653043ca74f735728dc068e04aae/src/alpaca_eval/models_configs/zephyr-7b-beta/configs.yaml#L1) and [Hub model ID](https://github.com/tatsu-lab/alpaca_eval/blob/2daa6e11b194653043ca74f735728dc068e04aae/src/alpaca_eval/models_configs/zephyr-7b-beta/configs.yaml#L5) to match your model name.
* Follow the steps to evaluate your model [here](https://github.com/tatsu-lab/alpaca_eval/tree/main#evaluating-a-model).
Note that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the rankings exhibit various biases, including a preference for models distilled from GPTs. For that reason, we also recommend submitting your best models for human evaluation in:
* [Chatbot Arena](https://chat.lmsys.org): a live, human evaluation of chat models in head-to-head comparisons.
|
alignment-handbook/scripts/README.md/0
|
{
"file_path": "alignment-handbook/scripts/README.md",
"repo_id": "alignment-handbook",
"token_count": 2967
}
| 12
|
# coding=utf-8
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from copy import deepcopy
import pytest
from datasets import Dataset
from transformers import AutoTokenizer
from alignment import DataArguments, ModelArguments, apply_chat_template, get_datasets, get_tokenizer
from alignment.data import maybe_insert_system_message
class GetDatasetsTest(unittest.TestCase):
"""Each of these test datasets has 100 examples"""
def test_loading_data_args(self):
dataset_mixer = {
"HuggingFaceH4/testing_alpaca_small": 0.5,
"HuggingFaceH4/testing_self_instruct_small": 0.3,
"HuggingFaceH4/testing_codealpaca_small": 0.2,
}
data_args = DataArguments(dataset_mixer=dataset_mixer)
datasets = get_datasets(data_args, columns_to_keep=["prompt", "completion"])
self.assertEqual(len(datasets["train"]), 100)
self.assertEqual(len(datasets["test"]), 300)
def test_loading_data_dict(self):
dataset_mixer = {
"HuggingFaceH4/testing_alpaca_small": 0.5,
"HuggingFaceH4/testing_self_instruct_small": 0.3,
"HuggingFaceH4/testing_codealpaca_small": 0.2,
}
datasets = get_datasets(dataset_mixer, columns_to_keep=["prompt", "completion"])
self.assertEqual(len(datasets["train"]), 100)
self.assertEqual(len(datasets["test"]), 300)
def test_loading_with_unit_fractions(self):
dataset_mixer = {
"HuggingFaceH4/testing_alpaca_small": 1.0,
"HuggingFaceH4/testing_self_instruct_small": 1.0,
"HuggingFaceH4/testing_codealpaca_small": 1.0,
}
datasets = get_datasets(dataset_mixer, columns_to_keep=["prompt", "completion"])
self.assertEqual(len(datasets["train"]), 300)
self.assertEqual(len(datasets["test"]), 300)
def test_loading_with_fractions_greater_than_unity(self):
dataset_mixer = {
"HuggingFaceH4/testing_alpaca_small": 0.7,
"HuggingFaceH4/testing_self_instruct_small": 0.4,
}
datasets = get_datasets(dataset_mixer, columns_to_keep=["prompt", "completion"])
self.assertEqual(len(datasets["train"]), 70 + 40)
self.assertEqual(len(datasets["test"]), 200)
def test_loading_fails_with_negative_fractions(self):
dataset_mixer = {
"HuggingFaceH4/testing_alpaca_small": 0.7,
"HuggingFaceH4/testing_self_instruct_small": -0.3,
}
with pytest.raises(ValueError, match=r"Dataset fractions cannot be negative."):
get_datasets(dataset_mixer, columns_to_keep=["prompt", "completion"])
def test_loading_single_split_with_unit_fractions(self):
dataset_mixer = {
"HuggingFaceH4/testing_alpaca_small": 1.0,
}
datasets = get_datasets(dataset_mixer, splits=["test"], columns_to_keep=["prompt", "completion"])
self.assertEqual(len(datasets["test"]), 100)
self.assertRaises(KeyError, lambda: datasets["train"])
class ApplyChatTemplateTest(unittest.TestCase):
def setUp(self):
model_args = ModelArguments(model_name_or_path="HuggingFaceH4/zephyr-7b-alpha")
data_args = DataArguments()
self.tokenizer = get_tokenizer(model_args, data_args)
self.dataset = Dataset.from_dict(
{
"prompt": ["Hello!"],
"messages": [
[
{"role": "system", "content": "You are a happy chatbot"},
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Bonjour!"},
{"role": "user", "content": "How are you?"},
{"role": "assistant", "content": "I am doing well, thanks!"},
]
],
"chosen": [
[
{"role": "system", "content": "You are a happy chatbot"},
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Bonjour!"},
{"role": "user", "content": "How are you?"},
{"role": "assistant", "content": "I am doing well, thanks!"},
]
],
"rejected": [
[
{"role": "system", "content": "You are a happy chatbot"},
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Bonjour!"},
{"role": "user", "content": "How are you?"},
{"role": "assistant", "content": "Not so good tbh"},
]
],
}
)
def test_maybe_insert_system_message(self):
# does not accept system prompt
mistral_tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
        # accepts a system prompt; use codellama since it has no HF token requirement
llama_tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
messages_sys_excl = [{"role": "user", "content": "Tell me a joke."}]
messages_sys_incl = [{"role": "system", "content": ""}, {"role": "user", "content": "Tell me a joke."}]
mistral_messages = deepcopy(messages_sys_excl)
llama_messages = deepcopy(messages_sys_excl)
maybe_insert_system_message(mistral_messages, mistral_tokenizer)
maybe_insert_system_message(llama_messages, llama_tokenizer)
# output from mistral should not have a system message, output from llama should
self.assertEqual(mistral_messages, messages_sys_excl)
self.assertEqual(llama_messages, messages_sys_incl)
def test_sft(self):
dataset = self.dataset.map(
apply_chat_template,
fn_kwargs={"tokenizer": self.tokenizer, "task": "sft"},
remove_columns=self.dataset.column_names,
)
self.assertDictEqual(
dataset[0],
{
"text": "<|system|>\nYou are a happy chatbot</s>\n<|user|>\nHello!</s>\n<|assistant|>\nBonjour!</s>\n<|user|>\nHow are you?</s>\n<|assistant|>\nI am doing well, thanks!</s>\n"
},
)
def test_generation(self):
# Remove last turn from messages
dataset = self.dataset.map(lambda x: {"messages": x["messages"][:-1]})
dataset = dataset.map(
apply_chat_template,
fn_kwargs={"tokenizer": self.tokenizer, "task": "generation"},
remove_columns=self.dataset.column_names,
)
self.assertDictEqual(
dataset[0],
{
"text": "<|system|>\nYou are a happy chatbot</s>\n<|user|>\nHello!</s>\n<|assistant|>\nBonjour!</s>\n<|user|>\nHow are you?</s>\n<|assistant|>\n"
},
)
def test_rm(self):
dataset = self.dataset.map(
apply_chat_template,
fn_kwargs={"tokenizer": self.tokenizer, "task": "rm"},
remove_columns=self.dataset.column_names,
)
self.assertDictEqual(
dataset[0],
{
"text_chosen": "<|system|>\nYou are a happy chatbot</s>\n<|user|>\nHello!</s>\n<|assistant|>\nBonjour!</s>\n<|user|>\nHow are you?</s>\n<|assistant|>\nI am doing well, thanks!</s>\n",
"text_rejected": "<|system|>\nYou are a happy chatbot</s>\n<|user|>\nHello!</s>\n<|assistant|>\nBonjour!</s>\n<|user|>\nHow are you?</s>\n<|assistant|>\nNot so good tbh</s>\n",
},
)
def test_dpo(self):
dataset = self.dataset.map(
apply_chat_template,
fn_kwargs={"tokenizer": self.tokenizer, "task": "dpo"},
remove_columns=self.dataset.column_names,
)
self.assertDictEqual(
dataset[0],
{
"text_prompt": "<|system|>\nYou are a happy chatbot</s>\n<|user|>\nHello!</s>\n<|assistant|>\nBonjour!</s>\n<|user|>\nHow are you?</s>\n",
"text_chosen": "<|assistant|>\nI am doing well, thanks!</s>\n",
"text_rejected": "<|assistant|>\nNot so good tbh</s>\n",
},
)
|
alignment-handbook/tests/test_data.py/0
|
{
"file_path": "alignment-handbook/tests/test_data.py",
"repo_id": "alignment-handbook",
"token_count": 4290
}
| 13
|
[package]
name = "candle-book"
version.workspace = true
edition.workspace = true
description.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
license.workspace = true
readme = "README.md"
[dependencies]
accelerate-src = { workspace = true, optional = true }
candle = { workspace = true }
candle-datasets = { workspace = true }
candle-nn = { workspace = true }
candle-transformers = { workspace = true }
candle-flash-attn = { workspace = true, optional = true }
safetensors = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
num-traits = { workspace = true }
intel-mkl-src = { workspace = true, optional = true }
cudarc = { workspace = true, optional = true }
half = { workspace = true, optional = true }
image = { workspace = true, optional = true }
anyhow = { workspace = true }
tokio = "1.29.1"
[dev-dependencies]
byteorder = { workspace = true }
hf-hub = { workspace = true, features=["tokio"]}
clap = { workspace = true }
memmap2 = { workspace = true }
rand = { workspace = true }
tokenizers = { workspace = true, features = ["onig"] }
tracing = { workspace = true }
tracing-chrome = { workspace = true }
tracing-subscriber = { workspace = true }
wav = { workspace = true }
# Necessary to disambiguate with tokio in wasm examples which are 1.28.1
parquet = { workspace = true }
image = { workspace = true }
[build-dependencies]
anyhow = { workspace = true }
[features]
default = []
|
candle/candle-book/Cargo.toml/0
|
{
"file_path": "candle/candle-book/Cargo.toml",
"repo_id": "candle",
"token_count": 467
}
| 14
|
# Installation
**With Cuda support**:
1. First, make sure that Cuda is correctly installed.
- `nvcc --version` should print information about your Cuda compiler driver.
- `nvidia-smi --query-gpu=compute_cap --format=csv` should print your GPU's compute capability, e.g. something
like:
```bash
compute_cap
8.9
```
You can also compile the Cuda kernels for a specific compute cap using the
`CUDA_COMPUTE_CAP=<compute cap>` environment variable.
If any of the above commands errors out, please make sure to update your Cuda version.
2. Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) with Cuda support.
Start by creating a new cargo project:
```bash
cargo new myapp
cd myapp
```
Make sure to add the `candle-core` crate with the cuda feature:
```bash
cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda"
```
Run `cargo build` to make sure everything can be correctly built.
```bash
cargo build
```
**Without Cuda support**:
Create a new app and add [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as follows:
```bash
cargo new myapp
cd myapp
cargo add --git https://github.com/huggingface/candle.git candle-core
```
Finally, run `cargo build` to make sure everything can be correctly built.
```bash
cargo build
```
**With mkl support**:
You can also enable the `mkl` feature, which can provide faster inference on CPU: [Using mkl](./advanced/mkl.md)
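As a quick sanity check that the crate works once built, the following minimal program (a sketch in the spirit of the crate's quickstart; the shapes and variable names are arbitrary) multiplies two random matrices on the CPU:
```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    // Swap in `Device::new_cuda(0)?` when built with the `cuda` feature.
    let device = Device::Cpu;
    let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;
    // Matrix product of a (2, 3) and a (3, 4) tensor, yielding (2, 4).
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```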
|
candle/candle-book/src/guide/installation.md/0
|
{
"file_path": "candle/candle-book/src/guide/installation.md",
"repo_id": "candle",
"token_count": 487
}
| 15
|
mod benchmarks;
use criterion::criterion_main;
criterion_main!(
benchmarks::affine::benches,
benchmarks::matmul::benches,
benchmarks::random::benches,
benchmarks::where_cond::benches,
benchmarks::conv_transpose2d::benches,
);
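// `criterion_main!` generates the benchmark harness entry point; each
// `benches` group above is defined in the corresponding submodule of
// `benchmarks`, and the whole suite runs via `cargo bench`.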
|
candle/candle-core/benches/bench_main.rs/0
|
{
"file_path": "candle/candle-core/benches/bench_main.rs",
"repo_id": "candle",
"token_count": 88
}
| 16
|
#![allow(clippy::excessive_precision)]
// Code taken from https://github.com/statrs-dev/statrs
//! Provides the [error](https://en.wikipedia.org/wiki/Error_function) and
//! related functions
mod evaluate {
    //! Provides functions that don't have an analytical solution and must
    //! be solved computationally (e.g. evaluation of a polynomial)
    /// Evaluates a polynomial at `z`, where `coeff` holds the coefficients:
    /// the coefficient of the `i`th power of `z` is the `i`th element of
    /// `coeff`. E.g. `[3, -1, 2]` equates to `2z^2 - z + 3`.
    ///
    /// # Remarks
    ///
    /// Returns 0 for a zero-length coefficient slice
pub fn polynomial(z: f64, coeff: &[f64]) -> f64 {
let n = coeff.len();
if n == 0 {
return 0.0;
}
let mut sum = *coeff.last().unwrap();
for c in coeff[0..n - 1].iter().rev() {
sum = *c + z * sum;
}
sum
}
}
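// Sanity check for the Horner evaluation above; the values are chosen to
// match the example in the doc comment on `polynomial`.
#[cfg(test)]
mod evaluate_tests {
    #[test]
    fn horner_matches_direct_evaluation() {
        // `[3, -1, 2]` encodes 2z^2 - z + 3.
        let z = 1.5f64;
        let horner = super::evaluate::polynomial(z, &[3.0, -1.0, 2.0]);
        let direct = 2.0 * z * z - z + 3.0;
        assert!((horner - direct).abs() < 1e-12);
        // A zero-length coefficient slice yields 0 by convention.
        assert_eq!(super::evaluate::polynomial(z, &[]), 0.0);
    }
}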
use std::f64;
/// `erf` calculates the error function at `x`.
pub fn erf(x: f64) -> f64 {
if x.is_nan() {
f64::NAN
} else if x >= 0.0 && x.is_infinite() {
1.0
} else if x <= 0.0 && x.is_infinite() {
-1.0
} else if x == 0. {
0.0
} else {
erf_impl(x, false)
}
}
/// `erf_inv` calculates the inverse error function
/// at `x`.
pub fn erf_inv(x: f64) -> f64 {
if x == 0.0 {
0.0
} else if x >= 1.0 {
f64::INFINITY
} else if x <= -1.0 {
f64::NEG_INFINITY
} else if x < 0.0 {
erf_inv_impl(-x, 1.0 + x, -1.0)
} else {
erf_inv_impl(x, 1.0 - x, 1.0)
}
}
/// `erfc` calculates the complementary error function
/// at `x`.
pub fn erfc(x: f64) -> f64 {
if x.is_nan() {
f64::NAN
} else if x == f64::INFINITY {
0.0
} else if x == f64::NEG_INFINITY {
2.0
} else {
erf_impl(x, true)
}
}
/// `erfc_inv` calculates the complementary inverse
/// error function at `x`.
pub fn erfc_inv(x: f64) -> f64 {
if x <= 0.0 {
f64::INFINITY
} else if x >= 2.0 {
f64::NEG_INFINITY
} else if x > 1.0 {
erf_inv_impl(-1.0 + x, 2.0 - x, -1.0)
} else {
erf_inv_impl(1.0 - x, x, 1.0)
}
}
// **********************************************************
// ********** Coefficients for erf_impl polynomial **********
// **********************************************************
/// Polynomial coefficients for a numerator of `erf_impl`
/// in the interval [1e-10, 0.5].
const ERF_IMPL_AN: &[f64] = &[
0.00337916709551257388990745,
-0.00073695653048167948530905,
-0.374732337392919607868241,
0.0817442448733587196071743,
-0.0421089319936548595203468,
0.0070165709512095756344528,
-0.00495091255982435110337458,
0.000871646599037922480317225,
];
/// Polynomial coefficients for a denominator of `erf_impl`
/// in the interval [1e-10, 0.5]
const ERF_IMPL_AD: &[f64] = &[
1.0,
-0.218088218087924645390535,
0.412542972725442099083918,
-0.0841891147873106755410271,
0.0655338856400241519690695,
-0.0120019604454941768171266,
0.00408165558926174048329689,
-0.000615900721557769691924509,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [0.5, 0.75].
const ERF_IMPL_BN: &[f64] = &[
-0.0361790390718262471360258,
0.292251883444882683221149,
0.281447041797604512774415,
0.125610208862766947294894,
0.0274135028268930549240776,
0.00250839672168065762786937,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [0.5, 0.75].
const ERF_IMPL_BD: &[f64] = &[
1.0,
1.8545005897903486499845,
1.43575803037831418074962,
0.582827658753036572454135,
0.124810476932949746447682,
0.0113724176546353285778481,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [0.75, 1.25].
const ERF_IMPL_CN: &[f64] = &[
-0.0397876892611136856954425,
0.153165212467878293257683,
0.191260295600936245503129,
0.10276327061989304213645,
0.029637090615738836726027,
0.0046093486780275489468812,
0.000307607820348680180548455,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [0.75, 1.25].
const ERF_IMPL_CD: &[f64] = &[
1.0,
1.95520072987627704987886,
1.64762317199384860109595,
0.768238607022126250082483,
0.209793185936509782784315,
0.0319569316899913392596356,
0.00213363160895785378615014,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [1.25, 2.25].
const ERF_IMPL_DN: &[f64] = &[
-0.0300838560557949717328341,
0.0538578829844454508530552,
0.0726211541651914182692959,
0.0367628469888049348429018,
0.00964629015572527529605267,
0.00133453480075291076745275,
0.778087599782504251917881e-4,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [1.25, 2.25].
const ERF_IMPL_DD: &[f64] = &[
1.0,
1.75967098147167528287343,
1.32883571437961120556307,
0.552528596508757581287907,
0.133793056941332861912279,
0.0179509645176280768640766,
0.00104712440019937356634038,
-0.106640381820357337177643e-7,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [2.25, 3.5].
const ERF_IMPL_EN: &[f64] = &[
-0.0117907570137227847827732,
0.014262132090538809896674,
0.0202234435902960820020765,
0.00930668299990432009042239,
0.00213357802422065994322516,
0.00025022987386460102395382,
0.120534912219588189822126e-4,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [2.25, 3.5].
const ERF_IMPL_ED: &[f64] = &[
1.0,
1.50376225203620482047419,
0.965397786204462896346934,
0.339265230476796681555511,
0.0689740649541569716897427,
0.00771060262491768307365526,
0.000371421101531069302990367,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [3.5, 5.25].
const ERF_IMPL_FN: &[f64] = &[
-0.00546954795538729307482955,
0.00404190278731707110245394,
0.0054963369553161170521356,
0.00212616472603945399437862,
0.000394984014495083900689956,
0.365565477064442377259271e-4,
0.135485897109932323253786e-5,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [3.5, 5.25].
const ERF_IMPL_FD: &[f64] = &[
1.0,
1.21019697773630784832251,
0.620914668221143886601045,
0.173038430661142762569515,
0.0276550813773432047594539,
0.00240625974424309709745382,
0.891811817251336577241006e-4,
-0.465528836283382684461025e-11,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [5.25, 8].
const ERF_IMPL_GN: &[f64] = &[
-0.00270722535905778347999196,
0.0013187563425029400461378,
0.00119925933261002333923989,
0.00027849619811344664248235,
0.267822988218331849989363e-4,
0.923043672315028197865066e-6,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [5.25, 8].
const ERF_IMPL_GD: &[f64] = &[
1.0,
0.814632808543141591118279,
0.268901665856299542168425,
0.0449877216103041118694989,
0.00381759663320248459168994,
0.000131571897888596914350697,
0.404815359675764138445257e-11,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [8, 11.5].
const ERF_IMPL_HN: &[f64] = &[
-0.00109946720691742196814323,
0.000406425442750422675169153,
0.000274499489416900707787024,
0.465293770646659383436343e-4,
0.320955425395767463401993e-5,
0.778286018145020892261936e-7,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [8, 11.5].
const ERF_IMPL_HD: &[f64] = &[
1.0,
0.588173710611846046373373,
0.139363331289409746077541,
0.0166329340417083678763028,
0.00100023921310234908642639,
0.24254837521587225125068e-4,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [11.5, 17].
const ERF_IMPL_IN: &[f64] = &[
-0.00056907993601094962855594,
0.000169498540373762264416984,
0.518472354581100890120501e-4,
0.382819312231928859704678e-5,
0.824989931281894431781794e-7,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [11.5, 17].
const ERF_IMPL_ID: &[f64] = &[
1.0,
0.339637250051139347430323,
0.043472647870310663055044,
0.00248549335224637114641629,
0.535633305337152900549536e-4,
-0.117490944405459578783846e-12,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [17, 24].
const ERF_IMPL_JN: &[f64] = &[
-0.000241313599483991337479091,
0.574224975202501512365975e-4,
0.115998962927383778460557e-4,
0.581762134402593739370875e-6,
0.853971555085673614607418e-8,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [17, 24].
const ERF_IMPL_JD: &[f64] = &[
1.0,
0.233044138299687841018015,
0.0204186940546440312625597,
0.000797185647564398289151125,
0.117019281670172327758019e-4,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [24, 38].
const ERF_IMPL_KN: &[f64] = &[
-0.000146674699277760365803642,
0.162666552112280519955647e-4,
0.269116248509165239294897e-5,
0.979584479468091935086972e-7,
0.101994647625723465722285e-8,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [24, 38].
const ERF_IMPL_KD: &[f64] = &[
1.0,
0.165907812944847226546036,
0.0103361716191505884359634,
0.000286593026373868366935721,
0.298401570840900340874568e-5,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [38, 60].
const ERF_IMPL_LN: &[f64] = &[
-0.583905797629771786720406e-4,
0.412510325105496173512992e-5,
0.431790922420250949096906e-6,
0.993365155590013193345569e-8,
0.653480510020104699270084e-10,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [38, 60].
const ERF_IMPL_LD: &[f64] = &[
1.0,
0.105077086072039915406159,
0.00414278428675475620830226,
0.726338754644523769144108e-4,
0.477818471047398785369849e-6,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [60, 85].
const ERF_IMPL_MN: &[f64] = &[
-0.196457797609229579459841e-4,
0.157243887666800692441195e-5,
0.543902511192700878690335e-7,
0.317472492369117710852685e-9,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [60, 85].
const ERF_IMPL_MD: &[f64] = &[
1.0,
0.052803989240957632204885,
0.000926876069151753290378112,
0.541011723226630257077328e-5,
0.535093845803642394908747e-15,
];
/// Polynomial coefficients for a numerator in `erf_impl`
/// in the interval [85, 110].
const ERF_IMPL_NN: &[f64] = &[
-0.789224703978722689089794e-5,
0.622088451660986955124162e-6,
0.145728445676882396797184e-7,
0.603715505542715364529243e-10,
];
/// Polynomial coefficients for a denominator in `erf_impl`
/// in the interval [85, 110].
const ERF_IMPL_ND: &[f64] = &[
1.0,
0.0375328846356293715248719,
0.000467919535974625308126054,
0.193847039275845656900547e-5,
];
// **********************************************************
// ********** Coefficients for erf_inv_impl polynomial ******
// **********************************************************
/// Polynomial coefficients for a numerator of `erf_inv_impl`
/// in the interval [0, 0.5].
const ERF_INV_IMPL_AN: &[f64] = &[
-0.000508781949658280665617,
-0.00836874819741736770379,
0.0334806625409744615033,
-0.0126926147662974029034,
-0.0365637971411762664006,
0.0219878681111168899165,
0.00822687874676915743155,
-0.00538772965071242932965,
];
/// Polynomial coefficients for a denominator of `erf_inv_impl`
/// in the interval [0, 0.5].
const ERF_INV_IMPL_AD: &[f64] = &[
1.0,
-0.970005043303290640362,
-1.56574558234175846809,
1.56221558398423026363,
0.662328840472002992063,
-0.71228902341542847553,
-0.0527396382340099713954,
0.0795283687341571680018,
-0.00233393759374190016776,
0.000886216390456424707504,
];
/// Polynomial coefficients for a numerator of `erf_inv_impl`
/// in the interval [0.5, 0.75].
const ERF_INV_IMPL_BN: &[f64] = &[
-0.202433508355938759655,
0.105264680699391713268,
8.37050328343119927838,
17.6447298408374015486,
-18.8510648058714251895,
-44.6382324441786960818,
17.445385985570866523,
21.1294655448340526258,
-3.67192254707729348546,
];
/// Polynomial coefficients for a denominator of `erf_inv_impl`
/// in the interval [0.5, 0.75].
const ERF_INV_IMPL_BD: &[f64] = &[
1.0,
6.24264124854247537712,
3.9713437953343869095,
-28.6608180499800029974,
-20.1432634680485188801,
48.5609213108739935468,
10.8268667355460159008,
-22.6436933413139721736,
1.72114765761200282724,
];
/// Polynomial coefficients for a numerator of `erf_inv_impl`
/// in the interval [0.75, 1] with x less than 3.
const ERF_INV_IMPL_CN: &[f64] = &[
-0.131102781679951906451,
-0.163794047193317060787,
0.117030156341995252019,
0.387079738972604337464,
0.337785538912035898924,
0.142869534408157156766,
0.0290157910005329060432,
0.00214558995388805277169,
-0.679465575181126350155e-6,
0.285225331782217055858e-7,
-0.681149956853776992068e-9,
];
/// Polynomial coefficients for a denominator of `erf_inv_impl`
/// in the interval [0.75, 1] with x less than 3.
const ERF_INV_IMPL_CD: &[f64] = &[
1.0,
3.46625407242567245975,
5.38168345707006855425,
4.77846592945843778382,
2.59301921623620271374,
0.848854343457902036425,
0.152264338295331783612,
0.01105924229346489121,
];
/// Polynomial coefficients for a numerator of `erf_inv_impl`
/// in the interval [0.75, 1] with x between 3 and 6.
const ERF_INV_IMPL_DN: &[f64] = &[
-0.0350353787183177984712,
-0.00222426529213447927281,
0.0185573306514231072324,
0.00950804701325919603619,
0.00187123492819559223345,
0.000157544617424960554631,
0.460469890584317994083e-5,
-0.230404776911882601748e-9,
0.266339227425782031962e-11,
];
/// Polynomial coefficients for a denominator of `erf_inv_impl`
/// in the interval [0.75, 1] with x between 3 and 6.
const ERF_INV_IMPL_DD: &[f64] = &[
1.0,
1.3653349817554063097,
0.762059164553623404043,
0.220091105764131249824,
0.0341589143670947727934,
0.00263861676657015992959,
0.764675292302794483503e-4,
];
/// Polynomial coefficients for a numerator of `erf_inv_impl`
/// in the interval [0.75, 1] with x between 6 and 18.
const ERF_INV_IMPL_EN: &[f64] = &[
-0.0167431005076633737133,
-0.00112951438745580278863,
0.00105628862152492910091,
0.000209386317487588078668,
0.149624783758342370182e-4,
0.449696789927706453732e-6,
0.462596163522878599135e-8,
-0.281128735628831791805e-13,
0.99055709973310326855e-16,
];
/// Polynomial coefficients for a denominator of `erf_inv_impl`
/// in the interval [0.75, 1] with x between 6 and 18.
const ERF_INV_IMPL_ED: &[f64] = &[
1.0,
0.591429344886417493481,
0.138151865749083321638,
0.0160746087093676504695,
0.000964011807005165528527,
0.275335474764726041141e-4,
0.282243172016108031869e-6,
];
/// Polynomial coefficients for a numerator of `erf_inv_impl`
/// in the interval [0.75, 1] with x between 18 and 44.
const ERF_INV_IMPL_FN: &[f64] = &[
-0.0024978212791898131227,
-0.779190719229053954292e-5,
0.254723037413027451751e-4,
0.162397777342510920873e-5,
0.396341011304801168516e-7,
0.411632831190944208473e-9,
0.145596286718675035587e-11,
-0.116765012397184275695e-17,
];
/// Polynomial coefficients for a denominator of `erf_inv_impl`
/// in the interval [0.75, 1] with x between 18 and 44.
const ERF_INV_IMPL_FD: &[f64] = &[
1.0,
0.207123112214422517181,
0.0169410838120975906478,
0.000690538265622684595676,
0.145007359818232637924e-4,
0.144437756628144157666e-6,
0.509761276599778486139e-9,
];
/// Polynomial coefficients for a numerator of `erf_inv_impl`
/// in the interval [0.75, 1] with x greater than 44.
const ERF_INV_IMPL_GN: &[f64] = &[
-0.000539042911019078575891,
-0.28398759004727721098e-6,
0.899465114892291446442e-6,
0.229345859265920864296e-7,
0.225561444863500149219e-9,
0.947846627503022684216e-12,
0.135880130108924861008e-14,
-0.348890393399948882918e-21,
];
/// Polynomial coefficients for a denominator of `erf_inv_impl`
/// in the interval [0.75, 1] with x greater than 44.
const ERF_INV_IMPL_GD: &[f64] = &[
1.0,
0.0845746234001899436914,
0.00282092984726264681981,
0.468292921940894236786e-4,
0.399968812193862100054e-6,
0.161809290887904476097e-8,
0.231558608310259605225e-11,
];
/// `erf_impl` computes the error function at `z`.
/// If `inv` is true, `1 - erf` is calculated as opposed to `erf`
fn erf_impl(z: f64, inv: bool) -> f64 {
if z < 0.0 {
if !inv {
return -erf_impl(-z, false);
}
if z < -0.5 {
return 2.0 - erf_impl(-z, true);
}
return 1.0 + erf_impl(-z, false);
}
let result = if z < 0.5 {
if z < 1e-10 {
z * 1.125 + z * 0.003379167095512573896158903121545171688
} else {
z * 1.125
+ z * evaluate::polynomial(z, ERF_IMPL_AN) / evaluate::polynomial(z, ERF_IMPL_AD)
}
} else if z < 110.0 {
let (r, b) = if z < 0.75 {
(
evaluate::polynomial(z - 0.5, ERF_IMPL_BN)
/ evaluate::polynomial(z - 0.5, ERF_IMPL_BD),
0.3440242112,
)
} else if z < 1.25 {
(
evaluate::polynomial(z - 0.75, ERF_IMPL_CN)
/ evaluate::polynomial(z - 0.75, ERF_IMPL_CD),
0.419990927,
)
} else if z < 2.25 {
(
evaluate::polynomial(z - 1.25, ERF_IMPL_DN)
/ evaluate::polynomial(z - 1.25, ERF_IMPL_DD),
0.4898625016,
)
} else if z < 3.5 {
(
evaluate::polynomial(z - 2.25, ERF_IMPL_EN)
/ evaluate::polynomial(z - 2.25, ERF_IMPL_ED),
0.5317370892,
)
} else if z < 5.25 {
(
evaluate::polynomial(z - 3.5, ERF_IMPL_FN)
/ evaluate::polynomial(z - 3.5, ERF_IMPL_FD),
0.5489973426,
)
} else if z < 8.0 {
(
evaluate::polynomial(z - 5.25, ERF_IMPL_GN)
/ evaluate::polynomial(z - 5.25, ERF_IMPL_GD),
0.5571740866,
)
} else if z < 11.5 {
(
evaluate::polynomial(z - 8.0, ERF_IMPL_HN)
/ evaluate::polynomial(z - 8.0, ERF_IMPL_HD),
0.5609807968,
)
} else if z < 17.0 {
(
evaluate::polynomial(z - 11.5, ERF_IMPL_IN)
/ evaluate::polynomial(z - 11.5, ERF_IMPL_ID),
0.5626493692,
)
} else if z < 24.0 {
(
evaluate::polynomial(z - 17.0, ERF_IMPL_JN)
/ evaluate::polynomial(z - 17.0, ERF_IMPL_JD),
0.5634598136,
)
} else if z < 38.0 {
(
evaluate::polynomial(z - 24.0, ERF_IMPL_KN)
/ evaluate::polynomial(z - 24.0, ERF_IMPL_KD),
0.5638477802,
)
} else if z < 60.0 {
(
evaluate::polynomial(z - 38.0, ERF_IMPL_LN)
/ evaluate::polynomial(z - 38.0, ERF_IMPL_LD),
0.5640528202,
)
} else if z < 85.0 {
(
evaluate::polynomial(z - 60.0, ERF_IMPL_MN)
/ evaluate::polynomial(z - 60.0, ERF_IMPL_MD),
0.5641309023,
)
} else {
(
evaluate::polynomial(z - 85.0, ERF_IMPL_NN)
/ evaluate::polynomial(z - 85.0, ERF_IMPL_ND),
0.5641584396,
)
};
let g = (-z * z).exp() / z;
g * b + g * r
} else {
0.0
};
if inv && z >= 0.5 {
result
} else if z >= 0.5 || inv {
1.0 - result
} else {
result
}
}
// `erf_inv_impl` computes the inverse error function where `p`, `q`, and `s`
// are the first, second, and third intermediate parameters respectively.
fn erf_inv_impl(p: f64, q: f64, s: f64) -> f64 {
let result = if p <= 0.5 {
let y = 0.0891314744949340820313;
let g = p * (p + 10.0);
let r = evaluate::polynomial(p, ERF_INV_IMPL_AN) / evaluate::polynomial(p, ERF_INV_IMPL_AD);
g * y + g * r
} else if q >= 0.25 {
let y = 2.249481201171875;
let g = (-2.0 * q.ln()).sqrt();
let xs = q - 0.25;
let r =
evaluate::polynomial(xs, ERF_INV_IMPL_BN) / evaluate::polynomial(xs, ERF_INV_IMPL_BD);
g / (y + r)
} else {
let x = (-q.ln()).sqrt();
if x < 3.0 {
let y = 0.807220458984375;
let xs = x - 1.125;
let r = evaluate::polynomial(xs, ERF_INV_IMPL_CN)
/ evaluate::polynomial(xs, ERF_INV_IMPL_CD);
y * x + r * x
} else if x < 6.0 {
let y = 0.93995571136474609375;
let xs = x - 3.0;
let r = evaluate::polynomial(xs, ERF_INV_IMPL_DN)
/ evaluate::polynomial(xs, ERF_INV_IMPL_DD);
y * x + r * x
} else if x < 18.0 {
let y = 0.98362827301025390625;
let xs = x - 6.0;
let r = evaluate::polynomial(xs, ERF_INV_IMPL_EN)
/ evaluate::polynomial(xs, ERF_INV_IMPL_ED);
y * x + r * x
} else if x < 44.0 {
let y = 0.99714565277099609375;
let xs = x - 18.0;
let r = evaluate::polynomial(xs, ERF_INV_IMPL_FN)
/ evaluate::polynomial(xs, ERF_INV_IMPL_FD);
y * x + r * x
} else {
let y = 0.99941349029541015625;
let xs = x - 44.0;
let r = evaluate::polynomial(xs, ERF_INV_IMPL_GN)
/ evaluate::polynomial(xs, ERF_INV_IMPL_GD);
y * x + r * x
}
};
s * result
}
|
candle/candle-core/src/cpu/erf.rs/0
|
{
"file_path": "candle/candle-core/src/cpu/erf.rs",
"repo_id": "candle",
"token_count": 11974
}
| 17
|
#![allow(dead_code)]
use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT};
use crate::{CpuStorage, DType, Error, Layout, Result, Shape};
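// Inert stand-ins for the CUDA backend types: they let the rest of the crate
// compile when the `cuda` feature is disabled, with every operation either
// returning `Error::NotCompiledWithCudaSupport` or panicking via `fail!`.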
#[derive(Debug, Clone)]
pub struct CudaDevice;
#[derive(Debug)]
pub struct CudaStorage;
macro_rules! fail {
() => {
unimplemented!("cuda support has not been enabled, add `cuda` feature to enable.")
};
}
impl crate::backend::BackendStorage for CudaStorage {
type Device = CudaDevice;
fn try_clone(&self, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn dtype(&self) -> DType {
fail!()
}
fn device(&self) -> &Self::Device {
fail!()
}
fn to_cpu_storage(&self) -> Result<CpuStorage> {
Err(Error::NotCompiledWithCudaSupport)
}
fn affine(&self, _: &Layout, _: f64, _: f64) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn powf(&self, _: &Layout, _: f64) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn elu(&self, _: &Layout, _: f64) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn reduce_op(&self, _: ReduceOp, _: &Layout, _: &[usize]) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn cmp(&self, _: CmpOp, _: &Self, _: &Layout, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn to_dtype(&self, _: &Layout, _: DType) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn unary_impl<B: UnaryOpT>(&self, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn binary_impl<B: BinaryOpT>(&self, _: &Self, _: &Layout, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn where_cond(&self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn conv1d(
&self,
_: &Layout,
_: &Self,
_: &Layout,
_: &crate::conv::ParamsConv1D,
) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn conv_transpose1d(
&self,
_: &Layout,
_: &Self,
_: &Layout,
_: &crate::conv::ParamsConvTranspose1D,
) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn conv2d(
&self,
_: &Layout,
_: &Self,
_: &Layout,
_: &crate::conv::ParamsConv2D,
) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn conv_transpose2d(
&self,
_l: &Layout,
_kernel: &Self,
_kernel_l: &Layout,
_params: &crate::conv::ParamsConvTranspose2D,
) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn index_select(&self, _: &Self, _: &Layout, _: &Layout, _: usize) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn gather(&self, _: &Layout, _: &Self, _: &Layout, _: usize) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn scatter_add(
&self,
_: &Layout,
_: &Self,
_: &Layout,
_: &Self,
_: &Layout,
_: usize,
) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn index_add(
&self,
_: &Layout,
_: &Self,
_: &Layout,
_: &Self,
_: &Layout,
_: usize,
) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn matmul(
&self,
_: &Self,
_: (usize, usize, usize, usize),
_: &Layout,
_: &Layout,
) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn copy_strided_src(&self, _: &mut Self, _: usize, _: &Layout) -> Result<()> {
Err(Error::NotCompiledWithCudaSupport)
}
fn copy2d(
&self,
_: &mut Self,
_: usize,
_: usize,
_: usize,
_: usize,
_: usize,
_: usize,
) -> Result<()> {
Err(Error::NotCompiledWithCudaSupport)
}
fn avg_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn max_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn upsample_nearest1d(&self, _: &Layout, _: usize) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn upsample_nearest2d(&self, _: &Layout, _: usize, _: usize) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
}
impl crate::backend::BackendDevice for CudaDevice {
type Storage = CudaStorage;
fn new(_: usize) -> Result<Self> {
Err(Error::NotCompiledWithCudaSupport)
}
fn set_seed(&self, _: u64) -> Result<()> {
Err(Error::NotCompiledWithCudaSupport)
}
fn location(&self) -> crate::DeviceLocation {
fail!()
}
fn same_device(&self, _: &Self) -> bool {
fail!()
}
fn zeros_impl(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> {
Err(Error::NotCompiledWithCudaSupport)
}
fn ones_impl(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> {
Err(Error::NotCompiledWithCudaSupport)
}
unsafe fn alloc_uninit(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> {
Err(Error::NotCompiledWithCudaSupport)
}
fn storage_from_cpu_storage(&self, _: &CpuStorage) -> Result<Self::Storage> {
Err(Error::NotCompiledWithCudaSupport)
}
fn storage_from_cpu_storage_owned(&self, _: CpuStorage) -> Result<Self::Storage> {
Err(Error::NotCompiledWithCudaSupport)
}
fn rand_uniform(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage> {
Err(Error::NotCompiledWithCudaSupport)
}
fn rand_normal(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage> {
Err(Error::NotCompiledWithCudaSupport)
}
}
|
candle/candle-core/src/dummy_cuda_backend.rs/0
|
{
"file_path": "candle/candle-core/src/dummy_cuda_backend.rs",
"repo_id": "candle",
"token_count": 2897
}
| 18
|
//! Support for the GGML file format.
use super::{k_quants, GgmlDType, QStorage};
use crate::{Device, Result};
use byteorder::{LittleEndian, ReadBytesExt};
use std::collections::HashMap;
// https://github.com/ggerganov/llama.cpp/blob/468ea24fb4633a0d681f7ac84089566c1c6190cb/llama.h#L37
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Magic {
Ggjt,
Ggla,
Ggmf,
Ggml,
Ggsn,
}
impl TryFrom<u32> for Magic {
type Error = crate::Error;
fn try_from(value: u32) -> Result<Self> {
let magic = match value {
0x67676a74 => Self::Ggjt,
0x67676c61 => Self::Ggla,
0x67676d66 => Self::Ggmf,
0x67676d6c => Self::Ggml,
0x6767736e => Self::Ggsn,
_ => crate::bail!("unknown magic {value:08x}"),
};
Ok(magic)
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum VersionedMagic {
GgmlUnversioned,
GgmfV1,
GgjtV1,
GgjtV2,
GgjtV3,
}
impl VersionedMagic {
fn read<R: std::io::Read>(reader: &mut R) -> Result<Self> {
let magic = reader.read_u32::<LittleEndian>()?;
let magic = Magic::try_from(magic)?;
if magic == Magic::Ggml {
return Ok(Self::GgmlUnversioned);
}
let version = reader.read_u32::<LittleEndian>()?;
let versioned_magic = match (magic, version) {
(Magic::Ggmf, 1) => Self::GgmfV1,
(Magic::Ggjt, 1) => Self::GgjtV1,
(Magic::Ggjt, 2) => Self::GgjtV2,
(Magic::Ggjt, 3) => Self::GgjtV3,
_ => crate::bail!("ggml: unsupported magic/version {magic:?}/{version}"),
};
Ok(versioned_magic)
}
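    /// Whether tensor data in files with this magic is aligned on 32-byte
    /// boundaries; `read_one_tensor` below seeks past the padding when so.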
fn align32(&self) -> bool {
match self {
Self::GgmlUnversioned | Self::GgmfV1 => false,
Self::GgjtV1 | Self::GgjtV2 | Self::GgjtV3 => true,
}
}
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct HParams {
pub n_vocab: u32,
pub n_embd: u32,
pub n_mult: u32,
pub n_head: u32,
pub n_layer: u32,
pub n_rot: u32,
pub ftype: u32,
}
impl HParams {
fn read<R: std::io::Read>(reader: &mut R) -> Result<Self> {
let n_vocab = reader.read_u32::<LittleEndian>()?;
let n_embd = reader.read_u32::<LittleEndian>()?;
let n_mult = reader.read_u32::<LittleEndian>()?;
let n_head = reader.read_u32::<LittleEndian>()?;
let n_layer = reader.read_u32::<LittleEndian>()?;
let n_rot = reader.read_u32::<LittleEndian>()?;
let ftype = reader.read_u32::<LittleEndian>()?;
Ok(Self {
n_vocab,
n_embd,
n_mult,
n_head,
n_layer,
n_rot,
ftype,
})
}
}
#[derive(Debug, Clone, PartialEq)]
pub struct Vocab {
pub token_score_pairs: Vec<(Vec<u8>, f32)>,
}
impl Vocab {
fn read<R: std::io::Read>(reader: &mut R, n_vocab: usize) -> Result<Self> {
// https://github.com/ggerganov/llama.cpp/blob/468ea24fb4633a0d681f7ac84089566c1c6190cb/llama.cpp#L556
let mut token_score_pairs = Vec::with_capacity(n_vocab);
for _index in 0..n_vocab {
let len = reader.read_u32::<LittleEndian>()? as usize;
let mut word = vec![0u8; len];
reader.read_exact(&mut word)?;
let score = reader.read_f32::<LittleEndian>()?;
token_score_pairs.push((word, score))
}
Ok(Self { token_score_pairs })
}
}
fn from_raw_data<T: super::GgmlType + Send + Sync + 'static>(
raw_data: &[u8],
size_in_bytes: usize,
dims: Vec<usize>,
device: &Device,
) -> Result<super::QTensor> {
let raw_data_ptr = raw_data.as_ptr();
let n_blocks = size_in_bytes / std::mem::size_of::<T>();
let data = unsafe { std::slice::from_raw_parts(raw_data_ptr as *const T, n_blocks) };
let data: QStorage = match device {
Device::Cpu => QStorage::Cpu(Box::new(data.to_vec())),
Device::Metal(metal) => super::metal::load_quantized(metal, data)?,
Device::Cuda(cuda) => super::cuda::load_quantized(cuda, data)?,
};
super::QTensor::new(data, dims)
}
/// Creates a [Tensor] from a raw GGML tensor.
pub fn qtensor_from_ggml(
ggml_dtype: GgmlDType,
raw_data: &[u8],
dims: Vec<usize>,
device: &Device,
) -> Result<super::QTensor> {
let tensor_elems = dims.iter().product::<usize>();
let block_size = ggml_dtype.block_size();
if tensor_elems % block_size != 0 {
crate::bail!(
"the number of elements {tensor_elems} is not divisible by the block size {block_size}"
)
}
let size_in_bytes = tensor_elems / block_size * ggml_dtype.type_size();
match ggml_dtype {
GgmlDType::F32 => from_raw_data::<f32>(raw_data, size_in_bytes, dims, device),
GgmlDType::F16 => from_raw_data::<half::f16>(raw_data, size_in_bytes, dims, device),
GgmlDType::Q4_0 => {
from_raw_data::<k_quants::BlockQ4_0>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q4_1 => {
from_raw_data::<k_quants::BlockQ4_1>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q5_0 => {
from_raw_data::<k_quants::BlockQ5_0>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q5_1 => {
from_raw_data::<k_quants::BlockQ5_1>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q8_0 => {
from_raw_data::<k_quants::BlockQ8_0>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q2K => {
from_raw_data::<k_quants::BlockQ2K>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q3K => {
from_raw_data::<k_quants::BlockQ3K>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q4K => {
from_raw_data::<k_quants::BlockQ4K>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q5K => {
from_raw_data::<k_quants::BlockQ5K>(raw_data, size_in_bytes, dims, device)
}
GgmlDType::Q6K => {
from_raw_data::<k_quants::BlockQ6K>(raw_data, size_in_bytes, dims, device)
}
_ => crate::bail!("quantized type {ggml_dtype:?} is not supported yet"),
}
}
fn read_one_tensor<R: std::io::Seek + std::io::Read>(
reader: &mut R,
magic: VersionedMagic,
device: &Device,
) -> Result<(String, super::QTensor)> {
let n_dims = reader.read_u32::<LittleEndian>()?;
let name_len = reader.read_u32::<LittleEndian>()?;
let ggml_dtype = reader.read_u32::<LittleEndian>()?;
let ggml_dtype = GgmlDType::from_u32(ggml_dtype)?;
let mut dims = vec![0u32; n_dims as usize];
reader.read_u32_into::<LittleEndian>(&mut dims)?;
// The dimensions are stored in reverse order, see for example:
// https://github.com/ggerganov/llama.cpp/blob/b5ffb2849d23afe73647f68eec7b68187af09be6/convert.py#L969
dims.reverse();
let mut name = vec![0u8; name_len as usize];
reader.read_exact(&mut name)?;
let name = String::from_utf8_lossy(&name).into_owned();
if magic.align32() {
let pos = reader.stream_position()?;
reader.seek(std::io::SeekFrom::Current(((32 - pos % 32) % 32) as i64))?;
}
let dims = dims.iter().map(|&u| u as usize).collect::<Vec<_>>();
let tensor_elems = dims.iter().product::<usize>();
let size_in_bytes = tensor_elems * ggml_dtype.type_size() / ggml_dtype.block_size();
// TODO: Mmap version to avoid copying the data around?
let mut raw_data = vec![0u8; size_in_bytes];
reader.read_exact(&mut raw_data)?;
match qtensor_from_ggml(ggml_dtype, &raw_data, dims, device) {
Ok(tensor) => Ok((name, tensor)),
Err(e) => crate::bail!("Error creating tensor {name}: {e}"),
}
}
pub struct Content {
pub magic: VersionedMagic,
pub hparams: HParams,
pub vocab: Vocab,
pub tensors: HashMap<String, super::QTensor>,
pub device: Device,
}
impl Content {
pub fn read<R: std::io::Seek + std::io::Read>(
reader: &mut R,
device: &Device,
) -> Result<Content> {
// https://github.com/ggerganov/llama.cpp/blob/468ea24fb4633a0d681f7ac84089566c1c6190cb/llama.cpp#L505
let last_position = reader.seek(std::io::SeekFrom::End(0))?;
reader.seek(std::io::SeekFrom::Start(0))?;
let magic = VersionedMagic::read(reader)?;
let hparams = HParams::read(reader)?;
let vocab = Vocab::read(reader, hparams.n_vocab as usize)?;
let mut tensors = HashMap::new();
while reader.stream_position()? != last_position {
let (name, tensor) = read_one_tensor(reader, magic, device)?;
tensors.insert(name, tensor);
}
let device = device.clone();
Ok(Self {
magic,
hparams,
vocab,
tensors,
device,
})
}
pub fn remove(&mut self, name: &str) -> Result<super::QTensor> {
match self.tensors.remove(name) {
None => crate::bail!("cannot find tensor with name '{name}'"),
Some(tensor) => Ok(tensor),
}
}
}
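// Hedged usage sketch (the file and tensor names are hypothetical): read a
// GGML model from disk and pull out a single tensor by name.
//
//     let mut file = std::fs::File::open("model.ggml")?;
//     let mut content = Content::read(&mut file, &Device::Cpu)?;
//     let embeddings = content.remove("tok_embeddings.weight")?;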
|
candle/candle-core/src/quantized/ggml_file.rs/0
|
{
"file_path": "candle/candle-core/src/quantized/ggml_file.rs",
"repo_id": "candle",
"token_count": 4586
}
| 19
|
use std::str::FromStr;
pub fn get_num_threads() -> usize {
// Respond to the same environment variable as rayon.
match std::env::var("RAYON_NUM_THREADS")
.ok()
.and_then(|s| usize::from_str(&s).ok())
{
Some(x) if x > 0 => x,
Some(_) | None => num_cpus::get(),
}
}
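// E.g. running with `RAYON_NUM_THREADS=4` makes `get_num_threads` return 4;
// when the variable is unset, unparsable, or zero, it falls back to the
// number of logical CPUs reported by `num_cpus`.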
pub fn has_accelerate() -> bool {
cfg!(feature = "accelerate")
}
pub fn has_mkl() -> bool {
cfg!(feature = "mkl")
}
pub fn cuda_is_available() -> bool {
cfg!(feature = "cuda")
}
pub fn metal_is_available() -> bool {
cfg!(feature = "metal")
}
pub fn with_avx() -> bool {
cfg!(target_feature = "avx")
}
pub fn with_neon() -> bool {
cfg!(target_feature = "neon")
}
pub fn with_simd128() -> bool {
cfg!(target_feature = "simd128")
}
pub fn with_f16c() -> bool {
cfg!(target_feature = "f16c")
}
|
candle/candle-core/src/utils.rs/0
|
{
"file_path": "candle/candle-core/src/utils.rs",
"repo_id": "candle",
"token_count": 389
}
| 20
|
use candle_core::{test_device, test_utils, DType, Device, IndexOp, Result, Tensor, D};
fn zeros(device: &Device) -> Result<()> {
let tensor = Tensor::zeros((5, 2), DType::F32, device)?;
let (dim1, dim2) = tensor.dims2()?;
assert_eq!(dim1, 5);
assert_eq!(dim2, 2);
Ok(())
}
fn ones(device: &Device) -> Result<()> {
assert_eq!(
Tensor::ones((2, 3), DType::U8, device)?.to_vec2::<u8>()?,
[[1, 1, 1], [1, 1, 1]],
);
assert_eq!(
Tensor::ones((2, 3), DType::U32, device)?.to_vec2::<u32>()?,
[[1, 1, 1], [1, 1, 1]],
);
assert_eq!(
Tensor::ones((2, 3), DType::I64, device)?.to_vec2::<i64>()?,
[[1, 1, 1], [1, 1, 1]],
);
assert_eq!(
Tensor::ones((2, 3), DType::F32, device)?.to_vec2::<f32>()?,
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
);
assert_eq!(
Tensor::ones((2, 3), DType::F64, device)?.to_vec2::<f64>()?,
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
);
Ok(())
}
fn full(device: &Device) -> Result<()> {
assert_eq!(
Tensor::full(42u32, (2, 3), device)?.to_vec2::<u32>()?,
[[42, 42, 42], [42, 42, 42]],
);
Ok(())
}
fn arange(device: &Device) -> Result<()> {
assert_eq!(
Tensor::arange(0u8, 5u8, device)?.to_vec1::<u8>()?,
[0, 1, 2, 3, 4],
);
assert_eq!(
Tensor::arange_step(0u8, 5u8, 2, device)?.to_vec1::<u8>()?,
[0, 2, 4],
);
assert_eq!(
Tensor::arange_step(0u8, 5u8, 3, device)?.to_vec1::<u8>()?,
[0, 3],
);
assert_eq!(
Tensor::arange_step(5i64, 0i64, -1, device)?.to_vec1::<i64>()?,
[5, 4, 3, 2, 1],
);
Ok(())
}
fn add_mul(device: &Device) -> Result<()> {
let tensor = Tensor::new(&[3f32, 1., 4.], device)?;
let dim1 = tensor.dims1()?;
assert_eq!(dim1, 3);
let content: Vec<f32> = tensor.to_vec1()?;
assert_eq!(content, [3., 1., 4.]);
let tensor = Tensor::add(&tensor, &tensor)?;
let content: Vec<f32> = tensor.to_vec1()?;
assert_eq!(content, [6., 2., 8.]);
let tensor = Tensor::mul(&tensor, &tensor)?;
let content: Vec<f32> = tensor.to_vec1()?;
assert_eq!(content, [36., 4., 64.]);
Ok(())
}
fn tensor_2d(device: &Device) -> Result<()> {
let data = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]];
let tensor = Tensor::new(data, device)?;
let dims = tensor.dims2()?;
assert_eq!(dims, (2, 5));
let content: Vec<Vec<f32>> = tensor.to_vec2()?;
assert_eq!(content, data);
Ok(())
}
fn clamp(device: &Device) -> Result<()> {
let data = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]];
let tensor = Tensor::new(data, device)?;
let tensor = tensor.clamp(1.5, 6.2)?;
assert_eq!(
tensor.to_vec2::<f32>()?,
[[3.0, 1.5, 4.0, 1.5, 5.0], [2.0, 1.5, 6.2, 6.2, 2.0]],
);
Ok(())
}
fn unary_op(device: &Device) -> Result<()> {
let data = &[[-3f32, 1., 4., -0.1, 0.5], [2.7, -1.8, -0.28, 1.8, 2.8]];
let tensor = Tensor::new(data, device)?;
assert_eq!(
test_utils::to_vec2_round(&tensor.gelu()?, 4)?,
[
[-0.0036, 0.8412, 3.9999, -0.046, 0.3457],
[2.6911, -0.0647, -0.1091, 1.7353, 2.7933]
]
);
let t_f16 = tensor.to_dtype(DType::F16)?.gelu()?.to_dtype(DType::F32)?;
let max_diff = (tensor.gelu()? - t_f16)?.flatten_all()?.max(0)?;
assert!(max_diff.to_vec0::<f32>()? < 5e-3);
assert_eq!(
test_utils::to_vec2_round(&tensor.gelu_erf()?, 4)?,
[
[-0.004, 0.8413, 3.9999, -0.046, 0.3457],
[2.6906, -0.0647, -0.1091, 1.7353, 2.7928]
]
);
assert_eq!(
test_utils::to_vec2_round(&tensor.erf()?, 4)?,
[
[-1.0, 0.8427, 1.0, -0.1125, 0.5205],
[0.9999, -0.9891, -0.3079, 0.9891, 0.9999]
]
);
assert_eq!(
test_utils::to_vec2_round(&tensor.silu()?, 4)?,
[
[-0.1423, 0.7311, 3.9281, -0.0475, 0.3112],
[2.53, -0.2553, -0.1205, 1.5447, 2.6395]
]
);
assert_eq!(
test_utils::to_vec2_round(&tensor.ceil()?, 4)?,
[[-3.0, 1.0, 4.0, -0.0, 1.0], [3.0, -1.0, -0.0, 2.0, 3.0]]
);
assert_eq!(
test_utils::to_vec2_round(&tensor.floor()?, 4)?,
[[-3.0, 1.0, 4.0, -1.0, 0.0], [2.0, -2.0, -1.0, 1.0, 2.0]]
);
assert_eq!(
test_utils::to_vec2_round(&tensor.round()?, 4)?,
[[-3.0, 1.0, 4.0, -0.0, 1.0], [3.0, -2.0, -0.0, 2.0, 3.0]]
);
let tensor = Tensor::new(&[2997.9246, 314.15926f32], device)?;
assert_eq!(
test_utils::to_vec1_round(&tensor.round_to(2)?, 4)?,
[2997.92, 314.16]
);
assert_eq!(
test_utils::to_vec1_round(&tensor.round_to(-2)?, 4)?,
[3000.0, 300.]
);
let tensor = Tensor::new(
&[-1.01f32, -0.9, -0.1, 0.0, -0.0, 0.1, 0.9, 1.0, 1.1],
device,
)?;
assert_eq!(
tensor.sign()?.to_vec1::<f32>()?,
[-1., -1., -1., 0., 0., 1., 1., 1., 1.]
);
Ok(())
}
fn binary_op(device: &Device) -> Result<()> {
let data = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]];
let tensor1 = Tensor::new(data, device)?;
let data2 = &[[5f32, 5., 5., 5., 5.], [2., 1., 7., 8., 2.]];
let tensor2 = Tensor::new(data2, device)?;
let tensor = (&tensor1 + (&tensor1 * &tensor1)? / (&tensor1 + &tensor2))?;
let dims = tensor.dims2()?;
assert_eq!(dims, (2, 5));
let content: Vec<Vec<f32>> = tensor.to_vec2()?;
assert_eq!(content[0], [4.125, 1.1666666, 5.7777777, 1.1666666, 7.5]);
assert_eq!(content[1], [3.0, 1.5, 10.5, 12.0, 3.0]);
#[allow(clippy::eq_op)]
let tensor = (&tensor - &tensor)?;
let content: Vec<Vec<f32>> = tensor.to_vec2()?;
assert_eq!(content[0], [0., 0., 0., 0., 0.]);
let min = tensor1.minimum(&(&tensor2 * 0.5)?)?;
let max = tensor1.maximum(&(&tensor2 * 0.5)?)?;
assert_eq!(
min.to_vec2::<f32>()?,
[[2.5, 1.0, 2.5, 1.0, 2.5], [1.0, 0.5, 3.5, 4.0, 1.0]],
);
assert_eq!(
max.to_vec2::<f32>()?,
[[3.0, 2.5, 4.0, 2.5, 5.0], [2.0, 1.0, 7.0, 8.0, 2.0]]
);
Ok(())
}
fn transpose(device: &Device) -> Result<()> {
let data = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]];
let tensor = Tensor::new(data, device)?.t()?;
let dims = tensor.dims2()?;
assert_eq!(dims, (5, 2));
assert_eq!(
tensor.to_vec2::<f32>()?,
&[[3f32, 2.], [1., 1.], [4., 7.], [1., 8.], [5., 2.]]
);
assert_eq!(tensor.t()?.to_vec2::<f32>()?, data);
assert_eq!(tensor.contiguous()?.t()?.to_vec2::<f32>()?, data);
assert_eq!(((tensor + 1.)?.t()? - 1.)?.to_vec2::<f32>()?, data);
Ok(())
}
fn var(device: &Device) -> Result<()> {
// Values taken from https://pytorch.org/docs/stable/generated/torch.var.html
let data = &[
[0.2035f32, 1.2959, 1.8101, -0.4644],
[1.5027, -0.3270, 0.5905, 0.6538],
[-1.5745, 1.3330, -0.5596, -0.6548],
[0.1264, -0.5080, 1.6420, 0.1992],
];
let tensor = Tensor::new(data, device)?;
assert_eq!(
test_utils::to_vec2_round(&tensor.var_keepdim(1)?, 4)?,
&[[1.0631], [0.559], [1.4893], [0.8258]]
);
Ok(())
}
fn sum(device: &Device) -> Result<()> {
let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]];
let tensor = Tensor::new(data, device)?;
assert_eq!(
tensor.sum_keepdim(2)?.to_vec3::<u32>()?,
&[[[8], [15]], [[10], [18]]]
);
assert_eq!(
tensor.sum_keepdim(0)?.to_vec3::<u32>()?,
&[[[5, 2, 11], [9, 7, 17]]],
);
assert_eq!(tensor.sum_keepdim((0, 2, 1))?.to_vec3::<u32>()?, &[[[51]]],);
assert_eq!(
tensor.t()?.sum_keepdim(1)?.t()?.to_vec3::<u32>()?,
&[[[8], [15]], [[10], [18]]]
);
assert_eq!(
tensor.sum_keepdim((2, 1))?.to_vec3::<u32>()?,
&[[[8 + 15]], [[10 + 18]]]
);
let data: Vec<u32> = (0..4000u32).collect();
let tensor = Tensor::new(data.as_slice(), device)?;
assert_eq!(tensor.sum_keepdim(0)?.to_vec1::<u32>()?, &[7998000]);
let tensor = tensor.reshape((2000, 2))?;
assert_eq!(tensor.sum_keepdim((0, 1))?.to_vec2::<u32>()?, &[[7998000]]);
assert_eq!(
tensor.sum_keepdim(0)?.sum_keepdim(1)?.to_vec2::<u32>()?,
&[[7998000]]
);
assert_eq!(
tensor.sum_keepdim(1)?.sum_keepdim(0)?.to_vec2::<u32>()?,
&[[7998000]]
);
assert_eq!(
tensor.sum_keepdim(0)?.to_vec2::<u32>()?,
&[[3998000, 4000000]]
);
// Make the tensor non contiguous.
let tensor = tensor.t()?.contiguous()?.t()?;
assert_eq!(tensor.sum_keepdim((0, 1))?.to_vec2::<u32>()?, &[[7998000]]);
assert_eq!(
tensor.sum_keepdim(0)?.sum_keepdim(1)?.to_vec2::<u32>()?,
&[[7998000]]
);
assert_eq!(
tensor.sum_keepdim(1)?.sum_keepdim(0)?.to_vec2::<u32>()?,
&[[7998000]]
);
assert_eq!(
tensor.sum_keepdim(0)?.to_vec2::<u32>()?,
&[[3998000, 4000000]]
);
let t1 = tensor.reshape((200, 5, 4))?;
let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?;
for tensor in [t1, t2] {
assert_eq!(
tensor.sum_keepdim((0, 1, 2))?.to_vec3::<u32>()?,
&[[[7998000]]]
);
assert_eq!(
tensor
.sum_keepdim(0)?
.sum_keepdim(2)?
.sum_keepdim(1)?
.to_vec3::<u32>()?,
&[[[7998000]]]
);
assert_eq!(
tensor
.sum_keepdim(0)?
.sum_keepdim((1, 2))?
.to_vec3::<u32>()?,
&[[[7998000]]]
);
assert_eq!(
tensor
.sum_keepdim(1)?
.sum_keepdim((0, 2))?
.to_vec3::<u32>()?,
&[[[7998000]]]
);
assert_eq!(
tensor.sum_keepdim(0)?.to_vec3::<u32>()?,
&[[
[398000, 398200, 398400, 398600],
[398800, 399000, 399200, 399400],
[399600, 399800, 400000, 400200],
[400400, 400600, 400800, 401000],
[401200, 401400, 401600, 401800]
]]
);
}
Ok(())
}
fn min(device: &Device) -> Result<()> {
let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]];
let tensor = Tensor::new(data, device)?;
assert_eq!(
tensor.min_keepdim(2)?.to_vec3::<u32>()?,
&[[[1], [1]], [[1], [2]]]
);
assert_eq!(
tensor.min_keepdim(0)?.to_vec3::<u32>()?,
&[[[2, 1, 4], [1, 2, 8]]],
);
let data: Vec<u32> = (200..4000u32).collect();
let tensor = Tensor::new(data.as_slice(), device)?;
assert_eq!(tensor.min_keepdim(0)?.to_vec1::<u32>()?, &[200]);
let tensor = tensor.reshape((1900, 2))?;
assert_eq!(
tensor.min_keepdim(0)?.min_keepdim(1)?.to_vec2::<u32>()?,
&[[200]]
);
assert_eq!(
tensor.min_keepdim(1)?.min_keepdim(0)?.to_vec2::<u32>()?,
&[[200]]
);
assert_eq!(tensor.min_keepdim(0)?.to_vec2::<u32>()?, &[[200, 201]]);
// Make the tensor non contiguous.
let tensor = tensor.t()?.contiguous()?.t()?;
assert_eq!(
tensor.min_keepdim(0)?.min_keepdim(1)?.to_vec2::<u32>()?,
&[[200]]
);
assert_eq!(
tensor.min_keepdim(1)?.min_keepdim(0)?.to_vec2::<u32>()?,
&[[200]]
);
assert_eq!(tensor.min_keepdim(0)?.to_vec2::<u32>()?, &[[200, 201]]);
let t1 = tensor.reshape((190, 5, 4))?;
let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?;
for tensor in [t1, t2] {
assert_eq!(
tensor
.min_keepdim(0)?
.min_keepdim(2)?
.min_keepdim(1)?
.to_vec3::<u32>()?,
&[[[200]]]
);
assert_eq!(
tensor.min_keepdim(0)?.to_vec3::<u32>()?,
&[[
[200, 201, 202, 203],
[204, 205, 206, 207],
[208, 209, 210, 211],
[212, 213, 214, 215],
[216, 217, 218, 219]
]]
);
}
Ok(())
}
fn max(device: &Device) -> Result<()> {
let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]];
let tensor = Tensor::new(data, device)?;
assert_eq!(
tensor.max_keepdim(2)?.to_vec3::<u32>()?,
&[[[4], [9]], [[7], [8]]]
);
assert_eq!(
tensor.max_keepdim(0)?.to_vec3::<u32>()?,
&[[[3, 1, 7], [8, 5, 9]]],
);
let data: Vec<u32> = (200..4000u32).collect();
let tensor = Tensor::new(data.as_slice(), device)?;
assert_eq!(tensor.max_keepdim(0)?.to_vec1::<u32>()?, &[3999]);
let tensor = tensor.reshape((1900, 2))?;
assert_eq!(
tensor.max_keepdim(0)?.max_keepdim(1)?.to_vec2::<u32>()?,
&[[3999]]
);
assert_eq!(
tensor.max_keepdim(1)?.max_keepdim(0)?.to_vec2::<u32>()?,
&[[3999]]
);
assert_eq!(tensor.max_keepdim(0)?.to_vec2::<u32>()?, &[[3998, 3999]]);
// Make the tensor non contiguous.
let tensor = tensor.t()?.contiguous()?.t()?;
assert_eq!(
tensor.max_keepdim(0)?.max_keepdim(1)?.to_vec2::<u32>()?,
&[[3999]]
);
assert_eq!(
tensor.max_keepdim(1)?.max_keepdim(0)?.to_vec2::<u32>()?,
&[[3999]]
);
assert_eq!(tensor.max_keepdim(0)?.to_vec2::<u32>()?, &[[3998, 3999]]);
let t1 = tensor.reshape((190, 5, 4))?;
let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?;
for tensor in [t1, t2] {
assert_eq!(
tensor
.max_keepdim(0)?
.max_keepdim(2)?
.max_keepdim(1)?
.to_vec3::<u32>()?,
&[[[3999]]]
);
assert_eq!(
tensor.max_keepdim(0)?.to_vec3::<u32>()?,
&[[
[3980, 3981, 3982, 3983],
[3984, 3985, 3986, 3987],
[3988, 3989, 3990, 3991],
[3992, 3993, 3994, 3995],
[3996, 3997, 3998, 3999]
]]
);
}
Ok(())
}
fn argmin(device: &Device) -> Result<()> {
let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]];
let tensor = Tensor::new(data, device)?;
assert_eq!(
tensor.argmin_keepdim(2)?.to_vec3::<u32>()?,
&[[[1], [0]], [[1], [1]]]
);
assert_eq!(
tensor.argmin_keepdim(0)?.to_vec3::<u32>()?,
&[[[1, 0, 0], [0, 1, 1]]],
);
let data: Vec<u32> = (200..4000u32).collect();
let tensor = Tensor::new(data.as_slice(), device)?;
assert_eq!(tensor.argmin_keepdim(0)?.to_vec1::<u32>()?, &[0]);
let tensor = tensor.reshape((1900, 2))?;
assert_eq!(
tensor
.argmin_keepdim(0)?
.argmin_keepdim(1)?
.to_vec2::<u32>()?,
&[[0]]
);
assert_eq!(
tensor
.argmin_keepdim(1)?
.argmin_keepdim(0)?
.to_vec2::<u32>()?,
&[[0]]
);
assert_eq!(tensor.argmin_keepdim(0)?.to_vec2::<u32>()?, &[[0, 0]]);
// Make the tensor non contiguous.
let tensor = tensor.t()?.contiguous()?.t()?;
assert_eq!(
tensor
.argmin_keepdim(0)?
.argmin_keepdim(1)?
.to_vec2::<u32>()?,
&[[0]]
);
assert_eq!(
tensor
.argmin_keepdim(1)?
.argmin_keepdim(0)?
.to_vec2::<u32>()?,
&[[0]]
);
assert_eq!(tensor.argmin_keepdim(0)?.to_vec2::<u32>()?, &[[0, 0]]);
let t1 = tensor.reshape((190, 5, 4))?;
let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?;
for tensor in [t1, t2] {
assert_eq!(
tensor
.argmin_keepdim(0)?
.argmin_keepdim(2)?
.argmin_keepdim(1)?
.to_vec3::<u32>()?,
&[[[0]]]
);
assert_eq!(
tensor.argmin_keepdim(0)?.to_vec3::<u32>()?,
&[[
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
]]
);
}
Ok(())
}
fn argmax(device: &Device) -> Result<()> {
let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]];
let tensor = Tensor::new(data, device)?;
assert_eq!(
tensor.argmax_keepdim(2)?.to_vec3::<u32>()?,
&[[[2], [2]], [[2], [0]]]
);
assert_eq!(
tensor.argmax_keepdim(0)?.to_vec3::<u32>()?,
&[[[0, 0, 1], [1, 0, 0]]],
);
let data: Vec<u32> = (200..4000u32).collect();
let tensor = Tensor::new(data.as_slice(), device)?;
assert_eq!(tensor.argmax_keepdim(0)?.to_vec1::<u32>()?, &[3799]);
let tensor = tensor.reshape((1900, 2))?;
assert_eq!(
tensor
.argmax_keepdim(0)?
.argmax_keepdim(1)?
.to_vec2::<u32>()?,
&[[0]]
);
assert_eq!(
tensor
.argmax_keepdim(1)?
.argmax_keepdim(0)?
.to_vec2::<u32>()?,
&[[0]]
);
assert_eq!(tensor.argmax_keepdim(0)?.to_vec2::<u32>()?, &[[1899, 1899]]);
// Make the tensor non contiguous.
let tensor = tensor.t()?.contiguous()?.t()?;
assert_eq!(
tensor
.argmax_keepdim(0)?
.argmax_keepdim(1)?
.to_vec2::<u32>()?,
&[[0]]
);
assert_eq!(
tensor
.argmax_keepdim(1)?
.argmax_keepdim(0)?
.to_vec2::<u32>()?,
&[[0]]
);
assert_eq!(tensor.argmax_keepdim(0)?.to_vec2::<u32>()?, &[[1899, 1899]]);
let t1 = tensor.reshape((190, 5, 4))?;
let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?;
for tensor in [t1, t2] {
assert_eq!(
tensor
.argmax_keepdim(0)?
.argmax_keepdim(2)?
.argmax_keepdim(1)?
.to_vec3::<u32>()?,
&[[[0]]]
);
assert_eq!(
tensor.argmax_keepdim(0)?.to_vec3::<u32>()?,
&[[
[189, 189, 189, 189],
[189, 189, 189, 189],
[189, 189, 189, 189],
[189, 189, 189, 189],
[189, 189, 189, 189],
]]
);
}
Ok(())
}
fn narrow(device: &Device) -> Result<()> {
let data = &[[[3f32, 1., 4.], [1., 5., 9.]], [[2., 1., 7.], [8., 2., 8.]]];
let tensor = Tensor::new(data, device)?;
assert_eq!(
tensor.narrow(2, 1, 2)?.to_vec3::<f32>()?,
&[[[1.0, 4.0], [5.0, 9.0]], [[1.0, 7.0], [2.0, 8.0]]],
);
assert_eq!(
tensor.narrow(1, 1, 1)?.to_vec3::<f32>()?,
&[[[1.0, 5.0, 9.0]], [[8.0, 2.0, 8.0]]],
);
assert_eq!(
tensor.narrow(0, 0, 1)?.to_vec3::<f32>()?,
&[[[3.0, 1.0, 4.0], [1.0, 5.0, 9.0]]],
);
assert_eq!(
tensor.narrow(0, 1, 1)?.to_vec3::<f32>()?,
&[[[2.0, 1.0, 7.0], [8.0, 2.0, 8.0]]],
);
// The following has been checked against PyTorch via:
// import torch
// t = torch.tensor([[[3., 1., 4.], [1., 5., 9.]], [[2., 1., 7.], [8., 2., 8.]]])
// t.transpose(-1, -2).narrow(1, 1, 2)
assert_eq!(
tensor.t()?.narrow(1, 1, 2)?.to_vec3::<f32>()?,
&[[[1.0, 5.0], [4.0, 9.0]], [[1.0, 2.0], [7.0, 8.0]]],
);
Ok(())
}
fn broadcast(device: &Device) -> Result<()> {
let data = &[3f32, 1., 4.];
let tensor = Tensor::new(data, device)?;
assert_eq!(
tensor.broadcast_left((3, 1))?.to_vec3::<f32>()?,
&[[[3.0, 1.0, 4.0]], [[3.0, 1.0, 4.0]], [[3.0, 1.0, 4.0]]]
);
Ok(())
}
fn cat(device: &Device) -> Result<()> {
// 1D
let t1 = Tensor::new(&[3f32, 1., 4.], device)?;
let t2 = Tensor::new(&[1f32, 5., 9., 2.], device)?;
let t3 = Tensor::new(&[6f32, 5., 3., 5., 8., 9.], device)?;
assert_eq!(Tensor::cat(&[&t1], 0)?.to_vec1::<f32>()?, [3f32, 1., 4.],);
assert_eq!(
Tensor::cat(&[&t1, &t2], 0)?.to_vec1::<f32>()?,
[3f32, 1., 4., 1., 5., 9., 2.],
);
assert_eq!(
Tensor::cat(&[&t1, &t2, &t3], 0)?.to_vec1::<f32>()?,
[3f32, 1., 4., 1., 5., 9., 2., 6., 5., 3., 5., 8., 9.],
);
// 2D
let data = &[[3f32, 1., 4., 1., 5.], [2., 7., 1., 8., 2.]];
let t1 = Tensor::new(data, device)?;
let data2 = &[[5f32, 5., 5., 5., 5.], [2., 7., 1., 8., 2.]];
let t2 = Tensor::new(data2, device)?;
assert_eq!(
Tensor::cat(&[&t1, &t2], 0)?.to_vec2::<f32>()?,
[
[3.0, 1.0, 4.0, 1.0, 5.0],
[2.0, 7.0, 1.0, 8.0, 2.0],
[5.0, 5.0, 5.0, 5.0, 5.0],
[2.0, 7.0, 1.0, 8.0, 2.0]
]
);
// PyTorch equivalent:
// import torch
// t1 = torch.tensor([[3, 1, 4, 1, 5], [2, 7, 1, 8, 2]])
// t2 = torch.tensor([[5]*5, [2, 7, 1, 8, 2]])
// torch.cat([t1.t(), t2.t()], dim=1).t()
assert_eq!(
Tensor::cat(&[&t1.t()?, &t2.t()?], 1)?
.t()?
.to_vec2::<f32>()?,
[
[3.0, 1.0, 4.0, 1.0, 5.0],
[2.0, 7.0, 1.0, 8.0, 2.0],
[5.0, 5.0, 5.0, 5.0, 5.0],
[2.0, 7.0, 1.0, 8.0, 2.0]
]
);
assert_eq!(
Tensor::cat(&[&t1, &t2], 1)?.to_vec2::<f32>()?,
[
[3.0, 1.0, 4.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0],
[2.0, 7.0, 1.0, 8.0, 2.0, 2.0, 7.0, 1.0, 8.0, 2.0]
]
);
// 3D
let t1 = Tensor::arange(0, 48i64, device)?.reshape((2, 6, 4))?;
let t2 = Tensor::arange(100, 124i64, device)?.reshape((2, 3, 4))?;
let t3 = Tensor::arange(10000, 10032i64, device)?.reshape((2, 4, 4))?;
let t_cat = Tensor::cat(&[&t1, &t2, &t3], 1)?;
let t1 = t1.t()?.contiguous()?.t()?;
let t2 = t2.t()?.contiguous()?.t()?;
let t3 = t3.t()?.contiguous()?.t()?;
let t_cat2 = Tensor::cat(&[&t1, &t2, &t3], 1)?;
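    // t_cat has shape (2, 13, 4), i.e. 104 elements, so an equality count of
    // 104 means the contiguous and non-contiguous paths agree everywhere.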
let diff = t_cat.eq(&t_cat2)?.to_dtype(DType::F32)?.sum_all()?;
assert_eq!(diff.to_vec0::<f32>()?, 104.0);
assert_eq!(t_cat.i((0, 0, 0))?.to_vec0::<i64>()?, 0);
assert_eq!(t_cat.i((0, 4, 0))?.to_vec0::<i64>()?, 16);
assert_eq!(t_cat.i((0, 5, 0))?.to_vec0::<i64>()?, 20);
assert_eq!(t_cat.i((1, 5, 0))?.to_vec0::<i64>()?, 44);
assert_eq!(t_cat.i((0, 6, 0))?.to_vec0::<i64>()?, 100);
assert_eq!(t_cat.i((1, 6, 0))?.to_vec0::<i64>()?, 112);
assert_eq!(t_cat.i((0, 6, 1))?.to_vec0::<i64>()?, 101);
assert_eq!(t_cat.i((0, 7, 1))?.to_vec0::<i64>()?, 105);
assert_eq!(t_cat.i((0, 12, 1))?.to_vec0::<i64>()?, 10013);
assert_eq!(t_cat.i((1, 12, 3))?.to_vec0::<i64>()?, 10031);
Ok(())
}
fn embeddings(device: &Device) -> Result<()> {
let ids = Tensor::new(&[0u32, 2u32, 1u32], device)?;
let t = Tensor::new(&[[0f32, 1f32], [2f32, 3f32], [4f32, 5f32]], device)?;
let hs = t.embedding(&ids)?;
assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 1.0], [4.0, 5.0], [2.0, 3.0]]);
let hs = t.index_select(&ids, 0)?;
assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 1.0], [4.0, 5.0], [2.0, 3.0]]);
let hs = t.index_select(&ids.to_dtype(DType::I64)?, 0)?;
assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 1.0], [4.0, 5.0], [2.0, 3.0]]);
Ok(())
}
fn cmp(device: &Device) -> Result<()> {
let t1 = Tensor::new(&[[0f32, 1f32], [2f32, 3f32], [4f32, 5f32]], device)?;
let t2 = Tensor::new(&[[1f32, 0f32], [3f32, 3f32], [4f32, 7f32]], device)?;
assert_eq!(t1.eq(&t2)?.to_vec2::<u8>()?, &[[0, 0], [0, 1], [1, 0]]);
assert_eq!(t1.ne(&t2)?.to_vec2::<u8>()?, &[[1, 1], [1, 0], [0, 1]]);
assert_eq!(t1.le(&t2)?.to_vec2::<u8>()?, &[[1, 0], [1, 1], [1, 1]]);
assert_eq!(t1.lt(&t2)?.to_vec2::<u8>()?, &[[1, 0], [1, 0], [0, 1]]);
assert_eq!(t1.gt(&t2)?.to_vec2::<u8>()?, &[[0, 1], [0, 0], [0, 0]]);
assert_eq!(t1.ge(&t2)?.to_vec2::<u8>()?, &[[0, 1], [0, 1], [1, 0]]);
Ok(())
}
fn index_select(device: &Device) -> Result<()> {
let ids = Tensor::new(&[0u32, 2u32, 1u32], device)?;
let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?;
assert_eq!(
t.to_vec2::<f32>()?,
&[
[0.0, 1.0, 2.0],
[3.0, 4.0, 5.0],
[6.0, 7.0, 8.0],
[9.0, 10.0, 11.0]
]
);
for dtype in [DType::U8, DType::U32, DType::I64] {
let ids = ids.to_dtype(dtype)?;
let hs = t.index_select(&ids, 1)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[
[0.0, 2.0, 1.0],
[3.0, 5.0, 4.0],
[6.0, 8.0, 7.0],
[9.0, 11.0, 10.0]
]
);
let hs = t.index_select(&ids, 0)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[[0.0, 1.0, 2.0], [6.0, 7.0, 8.0], [3.0, 4.0, 5.0]]
);
// Prior to https://github.com/huggingface/candle/pull/1022
        // there was a bug where the last values in the result tensor would be set to 0.
let ids = Tensor::new(&[0u32, 2u32, 1u32, 0u32, 2u32, 1u32], device)?;
let hs = t.index_select(&ids, 0)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[
[0.0, 1.0, 2.0],
[6.0, 7.0, 8.0],
[3.0, 4.0, 5.0],
[0.0, 1.0, 2.0],
[6.0, 7.0, 8.0],
[3.0, 4.0, 5.0],
]
);
        // Test selecting along a dim > 0 with an ids length that differs from
        // the size of the target dimension in the source/input tensor.
let ids = Tensor::new(&[1u32, 0u32, 1u32], device)?;
let t = Tensor::arange(1f32, 5f32, device)?.reshape((2, 2))?;
assert_eq!(t.to_vec2::<f32>()?, &[[1.0, 2.0], [3.0, 4.0]]);
let hs = t.index_select(&ids, 1)?;
assert_eq!(hs.to_vec2::<f32>()?, &[[2.0, 1.0, 2.0], [4.0, 3.0, 4.0]]);
}
Ok(())
}
fn index_add(device: &Device) -> Result<()> {
let ids = Tensor::new(&[0u32, 1u32, 1u32], device)?;
let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?;
assert_eq!(
t.to_vec2::<f32>()?,
&[
[0.0, 1.0, 2.0],
[3.0, 4.0, 5.0],
[6.0, 7.0, 8.0],
[9.0, 10.0, 11.0]
]
);
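    // `index_add(ids, src, d)` accumulates slices of `src` into `self` along
    // dimension `d`: slice `i` of `src` is added to slice `ids[i]` of `self`.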
let init = Tensor::ones((4, 2), DType::F32, device)?;
let hs = init.index_add(&ids, &t, 1)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[[1.0, 4.0], [4.0, 10.0], [7.0, 16.0], [10.0, 22.0]],
);
let init = Tensor::zeros((4, 2), DType::F32, device)?;
let ids = Tensor::new(&[1u32, 0u32, 0u32], device)?;
let hs = init.index_add(&ids, &t, 1)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[[3.0, 0.0], [9.0, 3.0], [15.0, 6.0], [21.0, 9.0]],
);
let init = Tensor::zeros((6, 3), DType::F32, device)?;
let ids = Tensor::new(&[5u32, 0u32, 1u32, 0u32], device)?;
let hs = init.index_add(&ids, &t, 0)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[
[12.0, 14.0, 16.0],
[6.0, 7.0, 8.0],
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0],
[0.0, 0.0, 0.0],
[0.0, 1.0, 2.0]
]
);
Ok(())
}
fn slice_scatter(device: &Device) -> Result<()> {
let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?;
assert_eq!(
t.to_vec2::<f32>()?,
&[
[0.0, 1.0, 2.0],
[3.0, 4.0, 5.0],
[6.0, 7.0, 8.0],
[9.0, 10.0, 11.0]
]
);
let src = Tensor::arange(100f32, 106f32, device)?.reshape((2, 3))?;
assert_eq!(
t.slice_scatter0(&src, 0)?.to_vec2::<f32>()?,
&[
[100.0, 101.0, 102.0],
[103.0, 104.0, 105.0],
[6.0, 7.0, 8.0],
[9.0, 10.0, 11.0]
]
);
assert_eq!(
t.slice_scatter0(&src, 1)?.to_vec2::<f32>()?,
&[
[0.0, 1.0, 2.0],
[100.0, 101.0, 102.0],
[103.0, 104.0, 105.0],
[9.0, 10.0, 11.0]
]
);
assert_eq!(
t.slice_scatter0(&src, 2)?.to_vec2::<f32>()?,
&[
[0.0, 1.0, 2.0],
[3.0, 4.0, 5.0],
[100.0, 101.0, 102.0],
[103.0, 104.0, 105.0],
]
);
Ok(())
}
fn scatter_add(device: &Device) -> Result<()> {
let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?;
assert_eq!(
t.to_vec2::<f32>()?,
&[
[0.0, 1.0, 2.0],
[3.0, 4.0, 5.0],
[6.0, 7.0, 8.0],
[9.0, 10.0, 11.0]
]
);
let ids = Tensor::new(&[[0u32, 1, 2], [3, 4, 0], [3, 3, 1], [2, 0, 4]], device)?;
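    // `scatter_add(ids, src, d)` adds each element of `src` into `self`, with
    // its index along dimension `d` replaced by the matching entry of `ids`.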
let init = Tensor::ones((4, 5), DType::F32, device)?;
let hs = init.scatter_add(&ids, &t, 1)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[
[1.0, 2.0, 3.0, 1.0, 1.0],
[6.0, 1.0, 1.0, 4.0, 5.0],
[1.0, 9.0, 1.0, 14.0, 1.0],
[11.0, 1.0, 10.0, 1.0, 12.0]
]
);
let init = Tensor::ones((6, 3), DType::F32, device)?;
let hs = init.scatter_add(&ids, &t, 0)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[
[1.0, 11.0, 6.0],
[1.0, 2.0, 9.0],
[10.0, 1.0, 3.0],
[10.0, 8.0, 1.0],
[1.0, 5.0, 12.0],
[1.0, 1.0, 1.0]
]
);
Ok(())
}
fn gather(device: &Device) -> Result<()> {
let ids = Tensor::new(&[[0u32], [2u32], [1u32], [0u32]], device)?;
let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?;
assert_eq!(
t.to_vec2::<f32>()?,
&[
[0.0, 1.0, 2.0],
[3.0, 4.0, 5.0],
[6.0, 7.0, 8.0],
[9.0, 10.0, 11.0]
]
);
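    // `gather(ids, d)` selects along dimension `d`: for d == 1,
    // hs[i][j] = t[i][ids[i][j]].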
let hs = t.gather(&ids, 1)?;
assert_eq!(hs.to_vec2::<f32>()?, &[[0.0], [5.0], [7.0], [9.0]]);
let ids = Tensor::new(
&[[0u32, 0u32], [2u32, 0u32], [1u32, 1u32], [0u32, 2u32]],
device,
)?;
let hs = t.gather(&ids, 1)?;
assert_eq!(
hs.to_vec2::<f32>()?,
&[[0.0, 0.0], [5.0, 3.0], [7.0, 7.0], [9.0, 11.0]]
);
let ids = Tensor::new(&[[0u32, 2u32, 0u32]], device)?;
let hs = t.gather(&ids, 0)?;
assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 7.0, 2.0]]);
let ids = Tensor::new(&[[0u32, 2u32, 0u32], [0u32, 1u32, 1u32]], device)?;
let hs = t.gather(&ids, 0)?;
assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 7.0, 2.0], [0.0, 4.0, 5.0]]);
Ok(())
}
fn broadcasting(device: &Device) -> Result<()> {
let t1 = Tensor::arange(0f32, 24f32, device)?.reshape((4, 2, 3))?;
let t2 = Tensor::new(&[100f32, 200f32], device)?;
let s = t1.broadcast_add(&t2.reshape((2, 1))?)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[100.0, 101.0, 102.0], [203.0, 204.0, 205.0]],
[[106.0, 107.0, 108.0], [209.0, 210.0, 211.0]],
[[112.0, 113.0, 114.0], [215.0, 216.0, 217.0]],
[[118.0, 119.0, 120.0], [221.0, 222.0, 223.0]]
]
);
let s = t1.t()?.broadcast_add(&t2)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[100.0, 203.0], [101.0, 204.0], [102.0, 205.0]],
[[106.0, 209.0], [107.0, 210.0], [108.0, 211.0]],
[[112.0, 215.0], [113.0, 216.0], [114.0, 217.0]],
[[118.0, 221.0], [119.0, 222.0], [120.0, 223.0]]
]
);
let s = t1.broadcast_sub(&t2.reshape((2, 1))?)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[-100.0, -99.0, -98.0], [-197.0, -196.0, -195.0]],
[[-94.0, -93.0, -92.0], [-191.0, -190.0, -189.0]],
[[-88.0, -87.0, -86.0], [-185.0, -184.0, -183.0]],
[[-82.0, -81.0, -80.0], [-179.0, -178.0, -177.0]]
]
);
let s = t1.t()?.broadcast_sub(&t2)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[-100.0, -197.0], [-99.0, -196.0], [-98.0, -195.0]],
[[-94.0, -191.0], [-93.0, -190.0], [-92.0, -189.0]],
[[-88.0, -185.0], [-87.0, -184.0], [-86.0, -183.0]],
[[-82.0, -179.0], [-81.0, -178.0], [-80.0, -177.0]]
]
);
// Test a narrowed version as this uses a layout start_offset.
let t1 = t1.i(2..)?;
let s = t1.broadcast_add(&t2.reshape((2, 1))?)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[112.0, 113.0, 114.0], [215.0, 216.0, 217.0]],
[[118.0, 119.0, 120.0], [221.0, 222.0, 223.0]]
]
);
let s = t1.t()?.broadcast_add(&t2)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[112.0, 215.0], [113.0, 216.0], [114.0, 217.0]],
[[118.0, 221.0], [119.0, 222.0], [120.0, 223.0]]
]
);
let s = t1.broadcast_sub(&t2.reshape((2, 1))?)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[-88.0, -87.0, -86.0], [-185.0, -184.0, -183.0]],
[[-82.0, -81.0, -80.0], [-179.0, -178.0, -177.0]]
]
);
let s = t1.t()?.broadcast_sub(&t2)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[-88.0, -185.0], [-87.0, -184.0], [-86.0, -183.0]],
[[-82.0, -179.0], [-81.0, -178.0], [-80.0, -177.0]]
]
);
let t3 = Tensor::new(1f32, device)?.broadcast_div(&t2)?;
let s = t1.broadcast_mul(&t2.reshape((2, 1))?)?;
let s_div = t1.broadcast_div(&t3.reshape((2, 1))?)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[1200.0, 1300.0, 1400.0], [3000.0, 3200.0, 3400.0]],
[[1800.0, 1900.0, 2000.0], [4200.0, 4400.0, 4600.0]]
]
);
assert_eq!(s.to_vec3::<f32>()?, s_div.to_vec3::<f32>()?,);
let s = t1.t()?.broadcast_mul(&t2)?;
let s_div = t1.t()?.broadcast_div(&t3)?;
assert_eq!(
s.to_vec3::<f32>()?,
&[
[[1200.0, 3000.0], [1300.0, 3200.0], [1400.0, 3400.0]],
[[1800.0, 4200.0], [1900.0, 4400.0], [2000.0, 4600.0]]
]
);
assert_eq!(s.to_vec3::<f32>()?, s_div.to_vec3::<f32>()?,);
Ok(())
}
fn randn(device: &Device) -> Result<()> {
let tensor = Tensor::randn(0f32, 1f32, (5, 3), device)?;
assert_eq!(tensor.dims(), [5, 3]);
    // Check that the seed gets updated: a new series of
    // numbers should be generated on each call.
let tensor2 = Tensor::randn(0f32, 1f32, (5, 3), device)?;
assert_ne!(tensor.to_vec2::<f32>()?, tensor2.to_vec2::<f32>()?);
let tensor = Tensor::rand(0f32, 1f32, (5, 3), device)?;
assert_eq!(tensor.dims(), [5, 3]);
    // Check that the seed gets updated: a new series of
    // numbers should be generated on each call.
let tensor2 = Tensor::rand(0f32, 1f32, (5, 3), device)?;
assert_ne!(tensor.to_vec2::<f32>()?, tensor2.to_vec2::<f32>()?);
// We do not expect deterministic elements at any index.
// There once was a bug that had a deterministic zero element in evenly sized tensors.
const N: usize = 2;
let v = (0..100)
.map(|_| Tensor::randn(0f32, 1f32, N, device).and_then(|t| t.to_vec1::<f32>()))
.collect::<Result<Vec<_>>>()?;
assert!(
(0..N).all(|i| v.windows(2).any(|pair| pair[0][i] != pair[1][i])),
"There are deterministic values in the randn tensors"
);
let v = (0..100)
.map(|_| Tensor::rand(0f32, 1f32, N, device).and_then(|t| t.to_vec1::<f32>()))
.collect::<Result<Vec<_>>>()?;
assert!(
(0..N).all(|i| v.windows(2).any(|pair| pair[0][i] != pair[1][i])),
"There are deterministic values in the rand tensors"
);
Ok(())
}
test_device!(zeros, zeros_cpu, zeros_gpu, zeros_metal);
test_device!(ones, ones_cpu, ones_gpu, ones_metal);
test_device!(full, full_cpu, full_gpu, full_metal);
test_device!(arange, arange_cpu, arange_gpu, arange_metal);
test_device!(add_mul, add_mul_cpu, add_mul_gpu, add_mul_metal);
test_device!(tensor_2d, tensor_2d_cpu, tensor_2d_gpu, tensor_2d_metal);
test_device!(narrow, narrow_cpu, narrow_gpu, narrow_metal);
test_device!(broadcast, broadcast_cpu, broadcast_gpu, broadcast_metal);
test_device!(cat, cat_cpu, cat_gpu, cat_metal);
test_device!(sum, sum_cpu, sum_gpu, sum_metal);
test_device!(min, min_cpu, min_gpu, min_metal);
test_device!(max, max_cpu, max_gpu, max_metal);
test_device!(argmax, argmax_cpu, argmax_gpu, argmax_metal);
test_device!(argmin, argmin_cpu, argmin_gpu, argmin_metal);
test_device!(transpose, transpose_cpu, transpose_gpu, transpose_metal);
test_device!(unary_op, unary_op_cpu, unary_op_gpu, unary_op_metal);
test_device!(binary_op, binary_op_cpu, binary_op_gpu, binary_op_metal);
test_device!(embeddings, embeddings_cpu, embeddings_gpu, embeddings_metal);
test_device!(cmp, cmp_cpu, cmp_gpu, cmp_metal);
test_device!(
broadcasting,
broadcasting_cpu,
broadcasting_gpu,
broadcasting_metal
);
test_device!(
index_select,
index_select_cpu,
index_select_gpu,
index_select_metal
);
test_device!(index_add, index_add_cpu, index_add_gpu, index_add_metal);
test_device!(gather, gather_cpu, gather_gpu, gather_metal);
test_device!(
scatter_add,
scatter_add_cpu,
scatter_add_gpu,
scatter_add_metal
);
test_device!(
slice_scatter,
slice_scatter_cpu,
slice_scatter_gpu,
slice_scatter_metal
);
test_device!(randn, randn_cpu, randn_gpu, randn_metal);
test_device!(clamp, clamp_cpu, clamp_gpu, clamp_metal);
test_device!(var, var_cpu, var_gpu, var_metal);
// There was originally a bug on the CPU implementation for randn
// https://github.com/huggingface/candle/issues/381
#[test]
fn randn_hasneg() -> Result<()> {
let t = Tensor::randn(0f32, 1f32, 200, &Device::Cpu)?.to_vec1::<f32>()?;
if t.iter().all(|&v| v >= 0.) {
candle_core::bail!("all values in tensors are non-negative")
}
Ok(())
}
#[test]
fn pad_with_same() -> Result<()> {
let t = Tensor::arange(1f32, 5f32, &Device::Cpu)?.reshape((2, 2))?;
let t0 = t.pad_with_same(0, 1, 2)?;
assert_eq!(
t0.to_vec2::<f32>()?,
[[1.0, 2.0], [1.0, 2.0], [3.0, 4.0], [3.0, 4.0], [3.0, 4.0]]
);
let t1 = t.pad_with_same(1, 1, 2)?;
assert_eq!(
t1.to_vec2::<f32>()?,
[[1.0, 1.0, 2.0, 2.0, 2.0], [3.0, 3.0, 4.0, 4.0, 4.0]]
);
Ok(())
}
#[test]
fn i64_abs() -> Result<()> {
let t = Tensor::new(&[-42i64, 1337], &Device::Cpu)?;
let t = t.abs()?;
assert_eq!(t.to_vec1::<i64>()?, [42, 1337]);
Ok(())
}
#[test]
fn tril_triu_eye() -> Result<()> {
let t = Tensor::tril2(4, DType::F32, &Device::Cpu)?;
assert_eq!(
t.to_vec2::<f32>()?,
[
[1.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 0.0],
[1.0, 1.0, 1.0, 1.0]
],
);
let t = Tensor::triu2(4, DType::F32, &Device::Cpu)?;
assert_eq!(
t.to_vec2::<f32>()?,
[
[1.0, 1.0, 1.0, 1.0],
[0.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 1.0]
]
);
let t = Tensor::eye(4, DType::F32, &Device::Cpu)?;
assert_eq!(
t.to_vec2::<f32>()?,
[
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]
]
);
Ok(())
}
#[test]
fn cumsum() -> Result<()> {
let t = &[3f32, 1., 4., 1., 5.];
let t = Tensor::new(t, &Device::Cpu)?;
assert_eq!(t.cumsum(0)?.to_vec1::<f32>()?, [3., 4., 8., 9., 14.]);
let t = t.unsqueeze(1)?;
assert_eq!(
t.cumsum(0)?.to_vec2::<f32>()?,
[[3.0], [4.0], [8.0], [9.0], [14.0]]
);
assert_eq!(
t.cumsum(1)?.to_vec2::<f32>()?,
[[3.0], [1.0], [4.0], [1.0], [5.0]]
);
let t = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]];
let t = Tensor::new(t, &Device::Cpu)?;
assert_eq!(
t.cumsum(1)?.to_vec2::<f32>()?,
[[3.0, 4.0, 8.0, 9.0, 14.0], [2.0, 3.0, 10.0, 18.0, 20.0]],
);
assert_eq!(
t.cumsum(0)?.to_vec2::<f32>()?,
[[3.0, 1.0, 4.0, 1.0, 5.0], [5.0, 2.0, 11.0, 9.0, 7.0]]
);
Ok(())
}
/// A helper function for floating point comparison. Both `a` and `b` must be 1D tensors containing the same amount of data.
/// The assertion passes if the absolute difference between each pair of elements from `a` and `b` is smaller than `epsilon`.
fn assert_close(a: &Tensor, b: &Tensor, epsilon: f64) -> Result<()> {
let a_vec: Vec<f64> = a.to_vec1()?;
let b_vec: Vec<f64> = b.to_vec1()?;
assert_eq!(a_vec.len(), b_vec.len());
for (a, b) in a_vec.iter().zip(b_vec.iter()) {
assert!((a - b).abs() < epsilon);
}
Ok(())
}
#[test]
fn log_sum_exp() -> Result<()> {
let input = Tensor::new(&[[1f64, 2., 3.], [4., 5., 6.]], &Device::Cpu)?;
let output = input.log_sum_exp(D::Minus1)?;
// The expectations obtained from pytorch.
let expected = Tensor::new(&[3.4076, 6.4076], &Device::Cpu)?;
assert_close(&output, &expected, 0.00001)?;
Ok(())
}
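// For reference, log_sum_exp(x) = ln(sum_i exp(x_i)); for the first row above,
// ln(e^1 + e^2 + e^3) ≈ 3.4076. A minimal standalone check along the same
// lines (an illustrative addition, not part of the original suite):
#[test]
fn log_sum_exp_manual() -> Result<()> {
    let input = Tensor::new(&[[1f64, 2., 3.]], &Device::Cpu)?;
    let output = input.log_sum_exp(D::Minus1)?;
    // Compute the reference value directly from the definition.
    let manual = (1f64.exp() + 2f64.exp() + 3f64.exp()).ln();
    let expected = Tensor::new(&[manual], &Device::Cpu)?;
    assert_close(&output, &expected, 1e-5)?;
    Ok(())
}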
#[test]
fn pow() -> Result<()> {
let lhs = Tensor::new(&[[1f32, 2., 3.], [4., 5., 6.]], &Device::Cpu)?;
let rhs = (&lhs - 2.)?;
let res = lhs.pow(&rhs)?;
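    // Element-wise exponentiation: 1^-1, 2^0, 3^1, 4^2, 5^3, 6^4.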
assert_eq!(
test_utils::to_vec2_round(&res, 3)?,
[[1.0, 1.0, 3.0], [16.0, 125.0, 1296.0]]
);
Ok(())
}
|
candle/candle-core/tests/tensor_tests.rs/0
|
{
"file_path": "candle/candle-core/tests/tensor_tests.rs",
"repo_id": "candle",
"token_count": 24240
}
| 21
|
# candle-examples
|
candle/candle-examples/README.md/0
|
{
"file_path": "candle/candle-examples/README.md",
"repo_id": "candle",
"token_count": 6
}
| 22
|
/*
* Adapted from
* https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/reduce_kernel_utils.cuh
* Copyright (c) 2023, The vLLM team.
* Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
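// Butterfly reduction within a single warp: each step XOR-shuffles values
// between lanes at halving distances (16, 8, 4, 2, 1) so that after five
// steps every lane holds the sum over all 32 lanes.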
template <typename T> __inline__ __device__ T warpReduceSum(T val) {
#pragma unroll
for (int mask = 16; mask > 0; mask >>= 1)
val += __shfl_xor_sync(0xffffffff, val, mask, 32);
return val;
}
/* Calculate the sum of all elements in a block */
template <typename T> __inline__ __device__ T blockReduceSum(T val) {
static __shared__ T shared[32];
int lane = threadIdx.x & 0x1f;
int wid = threadIdx.x >> 5;
val = warpReduceSum<T>(val);
if (lane == 0)
shared[wid] = val;
__syncthreads();
// Use blockDim.x / 32.f rather than blockDim.x >> 5 so that block sizes
// that are not a multiple of 32 are still handled correctly.
val = (threadIdx.x < (blockDim.x / 32.f)) ? shared[lane] : (T)(0.0f);
val = warpReduceSum<T>(val);
return val;
}
|
candle/candle-examples/examples/custom-ops/kernels/reduction_utils.cuh/0
|
{
"file_path": "candle/candle-examples/examples/custom-ops/kernels/reduction_utils.cuh",
"repo_id": "candle",
"token_count": 529
}
| 23
|
# candle-metavoice
MetaVoice-1B is a text-to-speech model trained on 100K hours of speech; more
details can be found on the [model
card](https://huggingface.co/metavoiceio/metavoice-1B-v0.1).
Note that the current candle implementation suffers from some limitations as of
2024-03-02:
- The speaker embeddings are hardcoded.
- The quality of the generated audio is lower than that of the Python
  implementation, probably because of some implementation discrepancies.
## Run an example
```bash
cargo run --example metavoice --release -- \
--prompt "This is a demo of text to speech by MetaVoice-1B, an open-source foundational audio model."
```
|
candle/candle-examples/examples/metavoice/README.md/0
|
{
"file_path": "candle/candle-examples/examples/metavoice/README.md",
"repo_id": "candle",
"token_count": 178
}
| 24
|
# candle-phi: 1.3b and 2.7b LLMs with state-of-the-art performance for <10b models.
[Phi-1.5](https://huggingface.co/microsoft/phi-1_5) and
[Phi-2](https://huggingface.co/microsoft/phi-2) are language models with only
1.3 and 2.7 billion parameters respectively, yet they offer state-of-the-art
performance compared to models with up to 10 billion parameters.
The candle implementation provides both the standard version and a quantized
variant.
## Running some examples
For the v2 version.
```bash
$ cargo run --example phi --release -- --model 2 \
--prompt "A skier slides down a frictionless slope of height 40m and length 80m. What's the skier speed at the bottom?"
A skier slides down a frictionless slope of height 40m and length 80m. What's the skier speed at the bottom?
Solution:
The potential energy of the skier is converted into kinetic energy as it slides down the slope. The formula for potential energy is mgh, where m is mass, g is acceleration due to gravity (9.8 m/s^2), and h is height. Since there's no friction, all the potential energy is converted into kinetic energy at the bottom of the slope. The formula for kinetic energy is 1/2mv^2, where v is velocity. We can equate these two formulas:
mgh = 1/2mv^2
Solving for v, we get:
v = sqrt(2gh)
Substituting the given values, we get:
v = sqrt(2*9.8*40) = 28 m/s
Therefore, the skier speed at the bottom of the slope is 28 m/s.
```
For the v1.5 version.
```bash
$ cargo run --example phi --release -- --prompt "def print_prime(n): "
def print_prime(n):
print("Printing prime numbers")
for i in range(2, n+1):
if is_prime(i):
print(i)
def is_prime(n):
if n <= 1:
return False
for i in range(2, int(math.sqrt(n))+1):
if n % i == 0:
return False
return True
$ cargo run --example phi --release -- \
--prompt "Explain how to find the median of an array and write the corresponding python function.\nAnswer:" \
--quantized --sample-len 200
Explain how to find the median of an array and write the corresponding python function.
Answer: The median is the middle value in an array. If the array has an even number of elements, the median is the average of the two middle values.
def median(arr):
arr.sort()
n = len(arr)
if n % 2 == 0:
return (arr[n//2 - 1] + arr[n//2]) / 2
else:
return arr[n//2]
```
This also supports the [Puffin Phi v2
model](https://huggingface.co/teknium/Puffin-Phi-v2) for human interaction.
```bash
$ cargo run --example phi --release -- \
--prompt "USER: What would you do on a sunny day in Paris?\nASSISTANT:" \
--sample-len 200 --model puffin-phi-v2 --quantized
USER: What would you do on a sunny day in Paris?
ASSISTANT: On a sunny day in Paris, you could visit the Musée du Louvre to admire the famous
painting "Mona Lisa" by Leonardo da Vinci. You might also want to stroll along the Champs-Élysées
and enjoy the beautiful architecture of the buildings around you. Don't forget to stop by a café
for a cup of coffee and to soak up the sun!"
```
|
candle/candle-examples/examples/phi/README.md/0
|
{
"file_path": "candle/candle-examples/examples/phi/README.md",
"repo_id": "candle",
"token_count": 1011
}
| 25
|
#![allow(unused)]
//! Vectorized version of the gym environment.
use candle::{DType, Device, Result, Tensor};
use pyo3::prelude::*;
use pyo3::types::PyDict;
#[derive(Debug)]
pub struct Step {
pub obs: Tensor,
pub reward: Tensor,
pub is_done: Tensor,
}
pub struct VecGymEnv {
env: PyObject,
action_space: usize,
observation_space: Vec<usize>,
}
fn w(res: PyErr) -> candle::Error {
candle::Error::wrap(res)
}
impl VecGymEnv {
pub fn new(name: &str, img_dir: Option<&str>, nprocesses: usize) -> Result<VecGymEnv> {
Python::with_gil(|py| {
let sys = py.import_bound("sys")?;
let path = sys.getattr("path")?;
let _ = path.call_method1(
"append",
("candle-examples/examples/reinforcement-learning",),
)?;
let gym = py.import_bound("atari_wrappers")?;
let make = gym.getattr("make")?;
let env = make.call1((name, img_dir, nprocesses))?;
let action_space = env.getattr("action_space")?;
let action_space = action_space.getattr("n")?.extract()?;
let observation_space = env.getattr("observation_space")?;
let observation_space: Vec<usize> = observation_space.getattr("shape")?.extract()?;
let observation_space =
[vec![nprocesses].as_slice(), observation_space.as_slice()].concat();
Ok(VecGymEnv {
env: env.into(),
action_space,
observation_space,
})
})
.map_err(w)
}
pub fn reset(&self) -> Result<Tensor> {
let obs = Python::with_gil(|py| {
let obs = self.env.call_method0(py, "reset")?;
let obs = obs.call_method0(py, "flatten")?;
obs.extract::<Vec<f32>>(py)
})
.map_err(w)?;
Tensor::new(obs, &Device::Cpu)?.reshape(self.observation_space.as_slice())
}
pub fn step(&self, action: Vec<usize>) -> Result<Step> {
let (obs, reward, is_done) = Python::with_gil(|py| {
let step = self.env.call_method_bound(py, "step", (action,), None)?;
let step = step.bind(py);
let obs = step.get_item(0)?.call_method("flatten", (), None)?;
let obs_buffer = pyo3::buffer::PyBuffer::get_bound(&obs)?;
let obs: Vec<u8> = obs_buffer.to_vec(py)?;
let reward: Vec<f32> = step.get_item(1)?.extract()?;
let is_done: Vec<f32> = step.get_item(2)?.extract()?;
Ok((obs, reward, is_done))
})
.map_err(w)?;
let obs = Tensor::from_vec(obs, self.observation_space.as_slice(), &Device::Cpu)?
.to_dtype(DType::F32)?;
let reward = Tensor::new(reward, &Device::Cpu)?;
let is_done = Tensor::new(is_done, &Device::Cpu)?;
Ok(Step {
obs,
reward,
is_done,
})
}
pub fn action_space(&self) -> usize {
self.action_space
}
pub fn observation_space(&self) -> &[usize] {
&self.observation_space
}
}
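// A minimal usage sketch (the environment name and process count below are
// illustrative):
//
// let env = VecGymEnv::new("PongNoFrameskip-v4", None, 8)?;
// let obs = env.reset()?; // shape: [nprocesses, ...observation_space]
// let step = env.step(vec![0; 8])?; // one action per process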
|
candle/candle-examples/examples/reinforcement-learning/vec_gym_env.rs/0
|
{
"file_path": "candle/candle-examples/examples/reinforcement-learning/vec_gym_env.rs",
"repo_id": "candle",
"token_count": 1569
}
| 26
|
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use candle::{DType, IndexOp, D};
use candle_nn::{ModuleT, VarBuilder};
use candle_transformers::models::vgg::{Models, Vgg};
use clap::{Parser, ValueEnum};
#[derive(Clone, Copy, Debug, ValueEnum)]
enum Which {
Vgg13,
Vgg16,
Vgg19,
}
#[derive(Parser)]
struct Args {
#[arg(long)]
image: String,
/// Run on CPU rather than on GPU.
#[arg(long)]
cpu: bool,
/// Variant of the model to use.
#[arg(value_enum, long, default_value_t = Which::Vgg13)]
which: Which,
}
pub fn main() -> anyhow::Result<()> {
let args = Args::parse();
let device = candle_examples::device(args.cpu)?;
let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?;
println!("loaded image {image:?}");
let api = hf_hub::api::sync::Api::new()?;
let repo = match args.which {
Which::Vgg13 => "timm/vgg13.tv_in1k",
Which::Vgg16 => "timm/vgg16.tv_in1k",
Which::Vgg19 => "timm/vgg19.tv_in1k",
};
let api = api.model(repo.into());
let filename = "model.safetensors";
let model_file = api.get(filename)?;
let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? };
let model = match args.which {
Which::Vgg13 => Vgg::new(vb, Models::Vgg13)?,
Which::Vgg16 => Vgg::new(vb, Models::Vgg16)?,
Which::Vgg19 => Vgg::new(vb, Models::Vgg19)?,
};
let logits = model.forward_t(&image, /*train=*/ false)?;
let prs = candle_nn::ops::softmax(&logits, D::Minus1)?
.i(0)?
.to_vec1::<f32>()?;
// Sort the predictions and take the top 5
let mut top: Vec<_> = prs.iter().enumerate().collect();
top.sort_by(|a, b| b.1.partial_cmp(a.1).unwrap());
let top = top.into_iter().take(5).collect::<Vec<_>>();
// Print the top predictions
for &(i, p) in &top {
println!(
"{:50}: {:.2}%",
candle_examples::imagenet::CLASSES[i],
p * 100.0
);
}
Ok(())
}
|
candle/candle-examples/examples/vgg/main.rs/0
|
{
"file_path": "candle/candle-examples/examples/vgg/main.rs",
"repo_id": "candle",
"token_count": 967
}
| 27
|
use candle::{DType, Device, IndexOp, Result, Tensor};
use candle_nn::{batch_norm, conv2d, conv2d_no_bias, Func, Module, VarBuilder};
use std::collections::BTreeMap;
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::path::Path;
#[derive(Debug)]
struct Block {
block_type: String,
parameters: BTreeMap<String, String>,
}
impl Block {
fn get(&self, key: &str) -> Result<&str> {
match self.parameters.get(&key.to_string()) {
None => candle::bail!("cannot find {} in {}", key, self.block_type),
Some(value) => Ok(value),
}
}
}
#[derive(Debug)]
pub struct Darknet {
blocks: Vec<Block>,
parameters: BTreeMap<String, String>,
}
impl Darknet {
fn get(&self, key: &str) -> Result<&str> {
match self.parameters.get(&key.to_string()) {
None => candle::bail!("cannot find {} in net parameters", key),
Some(value) => Ok(value),
}
}
}
struct Accumulator {
block_type: Option<String>,
parameters: BTreeMap<String, String>,
net: Darknet,
}
impl Accumulator {
fn new() -> Accumulator {
Accumulator {
block_type: None,
parameters: BTreeMap::new(),
net: Darknet {
blocks: vec![],
parameters: BTreeMap::new(),
},
}
}
fn finish_block(&mut self) {
match &self.block_type {
None => (),
Some(block_type) => {
if block_type == "net" {
self.net.parameters = self.parameters.clone();
} else {
let block = Block {
block_type: block_type.to_string(),
parameters: self.parameters.clone(),
};
self.net.blocks.push(block);
}
self.parameters.clear();
}
}
self.block_type = None;
}
}
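/// Parse a darknet configuration file. The format is INI-like: a `[name]`
/// header opens a block and the following `key=value` lines set its
/// parameters; lines starting with `#` are comments. For example:
///
///     [convolutional]
///     filters=32
///     size=3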
pub fn parse_config<T: AsRef<Path>>(path: T) -> Result<Darknet> {
let file = File::open(path.as_ref())?;
let mut acc = Accumulator::new();
for line in BufReader::new(file).lines() {
let line = line?;
if line.is_empty() || line.starts_with('#') {
continue;
}
let line = line.trim();
if line.starts_with('[') {
if !line.ends_with(']') {
candle::bail!("line does not end with ']' {line}")
}
let line = &line[1..line.len() - 1];
acc.finish_block();
acc.block_type = Some(line.to_string());
} else {
let key_value: Vec<&str> = line.splitn(2, '=').collect();
if key_value.len() != 2 {
candle::bail!("missing equal {line}")
}
let prev = acc.parameters.insert(
key_value[0].trim().to_owned(),
key_value[1].trim().to_owned(),
);
if prev.is_some() {
candle::bail!("multiple value for key {}", line)
}
}
}
acc.finish_block();
Ok(acc.net)
}
enum Bl {
Layer(Box<dyn candle_nn::Module + Send + Sync>),
Route(Vec<usize>),
Shortcut(usize),
Yolo(usize, Vec<(usize, usize)>),
}
fn conv(vb: VarBuilder, index: usize, p: usize, b: &Block) -> Result<(usize, Bl)> {
let activation = b.get("activation")?;
let filters = b.get("filters")?.parse::<usize>()?;
let pad = b.get("pad")?.parse::<usize>()?;
let size = b.get("size")?.parse::<usize>()?;
let stride = b.get("stride")?.parse::<usize>()?;
let padding = if pad != 0 { (size - 1) / 2 } else { 0 };
let (bn, bias) = match b.parameters.get("batch_normalize") {
Some(p) if p.parse::<usize>()? != 0 => {
let bn = batch_norm(filters, 1e-5, vb.pp(&format!("batch_norm_{index}")))?;
(Some(bn), false)
}
Some(_) | None => (None, true),
};
let conv_cfg = candle_nn::Conv2dConfig {
stride,
padding,
groups: 1,
dilation: 1,
};
let conv = if bias {
conv2d(p, filters, size, conv_cfg, vb.pp(&format!("conv_{index}")))?
} else {
conv2d_no_bias(p, filters, size, conv_cfg, vb.pp(&format!("conv_{index}")))?
};
let leaky = match activation {
"leaky" => true,
"linear" => false,
otherwise => candle::bail!("unsupported activation {}", otherwise),
};
let func = candle_nn::func(move |xs| {
let xs = conv.forward(xs)?;
let xs = match &bn {
Some(bn) => xs.apply_t(bn, false)?,
None => xs,
};
let xs = if leaky {
xs.maximum(&(&xs * 0.1)?)?
} else {
xs
};
Ok(xs)
});
Ok((filters, Bl::Layer(Box::new(func))))
}
fn upsample(prev_channels: usize) -> Result<(usize, Bl)> {
let layer = candle_nn::func(|xs| {
let (_n, _c, h, w) = xs.dims4()?;
xs.upsample_nearest2d(2 * h, 2 * w)
});
Ok((prev_channels, Bl::Layer(Box::new(layer))))
}
fn int_list_of_string(s: &str) -> Result<Vec<i64>> {
let res: std::result::Result<Vec<_>, _> =
s.split(',').map(|xs| xs.trim().parse::<i64>()).collect();
Ok(res?)
}
fn usize_of_index(index: usize, i: i64) -> usize {
if i >= 0 {
i as usize
} else {
(index as i64 + i) as usize
}
}
fn route(index: usize, p: &[(usize, Bl)], block: &Block) -> Result<(usize, Bl)> {
let layers = int_list_of_string(block.get("layers")?)?;
let layers: Vec<usize> = layers
.into_iter()
.map(|l| usize_of_index(index, l))
.collect();
let channels = layers.iter().map(|&l| p[l].0).sum();
Ok((channels, Bl::Route(layers)))
}
fn shortcut(index: usize, p: usize, block: &Block) -> Result<(usize, Bl)> {
let from = block.get("from")?.parse::<i64>()?;
Ok((p, Bl::Shortcut(usize_of_index(index, from))))
}
fn yolo(p: usize, block: &Block) -> Result<(usize, Bl)> {
let classes = block.get("classes")?.parse::<usize>()?;
let flat = int_list_of_string(block.get("anchors")?)?;
    if flat.len() % 2 != 0 {
        candle::bail!("expected an even number of anchors, got {}", flat.len())
    }
let flat = flat.into_iter().map(|i| i as usize).collect::<Vec<_>>();
let anchors: Vec<_> = (0..(flat.len() / 2))
.map(|i| (flat[2 * i], flat[2 * i + 1]))
.collect();
let mask = int_list_of_string(block.get("mask")?)?;
let anchors = mask.into_iter().map(|i| anchors[i as usize]).collect();
Ok((p, Bl::Yolo(classes, anchors)))
}
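// Decode the raw YOLO head output into detections: the x/y channels go through
// a sigmoid and are shifted by their grid-cell offsets, w/h are exponentiated
// and scaled by the anchor sizes, and the objectness/class scores go through a
// sigmoid; positions are finally rescaled by the stride to image coordinates.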
fn detect(
xs: &Tensor,
image_height: usize,
classes: usize,
anchors: &[(usize, usize)],
) -> Result<Tensor> {
let (bsize, _channels, height, _width) = xs.dims4()?;
let stride = image_height / height;
let grid_size = image_height / stride;
let bbox_attrs = 5 + classes;
let nanchors = anchors.len();
let xs = xs
.reshape((bsize, bbox_attrs * nanchors, grid_size * grid_size))?
.transpose(1, 2)?
.contiguous()?
.reshape((bsize, grid_size * grid_size * nanchors, bbox_attrs))?;
let grid = Tensor::arange(0u32, grid_size as u32, &Device::Cpu)?;
let a = grid.repeat((grid_size, 1))?;
let b = a.t()?.contiguous()?;
let x_offset = a.flatten_all()?.unsqueeze(1)?;
let y_offset = b.flatten_all()?.unsqueeze(1)?;
let xy_offset = Tensor::cat(&[&x_offset, &y_offset], 1)?
.repeat((1, nanchors))?
.reshape((grid_size * grid_size * nanchors, 2))?
.unsqueeze(0)?
.to_dtype(DType::F32)?;
let anchors: Vec<f32> = anchors
.iter()
.flat_map(|&(x, y)| vec![x as f32 / stride as f32, y as f32 / stride as f32].into_iter())
.collect();
let anchors = Tensor::new(anchors.as_slice(), &Device::Cpu)?
.reshape((anchors.len() / 2, 2))?
.repeat((grid_size * grid_size, 1))?
.unsqueeze(0)?;
let ys02 = xs.i((.., .., 0..2))?;
let ys24 = xs.i((.., .., 2..4))?;
let ys4 = xs.i((.., .., 4..))?;
let ys02 = (candle_nn::ops::sigmoid(&ys02)?.add(&xy_offset)? * stride as f64)?;
let ys24 = (ys24.exp()?.mul(&anchors)? * stride as f64)?;
let ys4 = candle_nn::ops::sigmoid(&ys4)?;
let ys = Tensor::cat(&[ys02, ys24, ys4], 2)?;
Ok(ys)
}
impl Darknet {
pub fn height(&self) -> Result<usize> {
let image_height = self.get("height")?.parse::<usize>()?;
Ok(image_height)
}
pub fn width(&self) -> Result<usize> {
let image_width = self.get("width")?.parse::<usize>()?;
Ok(image_width)
}
pub fn build_model(&self, vb: VarBuilder) -> Result<Func> {
let mut blocks: Vec<(usize, Bl)> = vec![];
let mut prev_channels: usize = 3;
for (index, block) in self.blocks.iter().enumerate() {
let channels_and_bl = match block.block_type.as_str() {
"convolutional" => conv(vb.pp(&index.to_string()), index, prev_channels, block)?,
"upsample" => upsample(prev_channels)?,
"shortcut" => shortcut(index, prev_channels, block)?,
"route" => route(index, &blocks, block)?,
"yolo" => yolo(prev_channels, block)?,
otherwise => candle::bail!("unsupported block type {}", otherwise),
};
prev_channels = channels_and_bl.0;
blocks.push(channels_and_bl);
}
let image_height = self.height()?;
let func = candle_nn::func(move |xs| {
let mut prev_ys: Vec<Tensor> = vec![];
let mut detections: Vec<Tensor> = vec![];
for (_, b) in blocks.iter() {
let ys = match b {
Bl::Layer(l) => {
let xs = prev_ys.last().unwrap_or(xs);
l.forward(xs)?
}
Bl::Route(layers) => {
let layers: Vec<_> = layers.iter().map(|&i| &prev_ys[i]).collect();
Tensor::cat(&layers, 1)?
}
Bl::Shortcut(from) => (prev_ys.last().unwrap() + prev_ys.get(*from).unwrap())?,
Bl::Yolo(classes, anchors) => {
let xs = prev_ys.last().unwrap_or(xs);
detections.push(detect(xs, image_height, *classes, anchors)?);
Tensor::new(&[0u32], &Device::Cpu)?
}
};
prev_ys.push(ys);
}
Tensor::cat(&detections, 1)
});
Ok(func)
}
}
|
candle/candle-examples/examples/yolo-v3/darknet.rs/0
|
{
"file_path": "candle/candle-examples/examples/yolo-v3/darknet.rs",
"repo_id": "candle",
"token_count": 5403
}
| 28
|
use candle::Result;
/// This is a wrapper around a tokenizer to ensure that tokens can be returned to the user in a
/// streaming way rather than having to wait for the full decoding.
pub struct TokenOutputStream {
tokenizer: tokenizers::Tokenizer,
tokens: Vec<u32>,
prev_index: usize,
current_index: usize,
}
impl TokenOutputStream {
pub fn new(tokenizer: tokenizers::Tokenizer) -> Self {
Self {
tokenizer,
tokens: Vec::new(),
prev_index: 0,
current_index: 0,
}
}
pub fn into_inner(self) -> tokenizers::Tokenizer {
self.tokenizer
}
fn decode(&self, tokens: &[u32]) -> Result<String> {
match self.tokenizer.decode(tokens, true) {
Ok(str) => Ok(str),
Err(err) => candle::bail!("cannot decode: {err}"),
}
}
// https://github.com/huggingface/text-generation-inference/blob/5ba53d44a18983a4de32d122f4cb46f4a17d9ef6/server/text_generation_server/models/model.py#L68
pub fn next_token(&mut self, token: u32) -> Result<Option<String>> {
let prev_text = if self.tokens.is_empty() {
String::new()
} else {
let tokens = &self.tokens[self.prev_index..self.current_index];
self.decode(tokens)?
};
self.tokens.push(token);
let text = self.decode(&self.tokens[self.prev_index..])?;
if text.len() > prev_text.len() && text.chars().last().unwrap().is_alphanumeric() {
let text = text.split_at(prev_text.len());
self.prev_index = self.current_index;
self.current_index = self.tokens.len();
Ok(Some(text.1.to_string()))
} else {
Ok(None)
}
}
pub fn decode_rest(&self) -> Result<Option<String>> {
let prev_text = if self.tokens.is_empty() {
String::new()
} else {
let tokens = &self.tokens[self.prev_index..self.current_index];
self.decode(tokens)?
};
let text = self.decode(&self.tokens[self.prev_index..])?;
if text.len() > prev_text.len() {
let text = text.split_at(prev_text.len());
Ok(Some(text.1.to_string()))
} else {
Ok(None)
}
}
pub fn decode_all(&self) -> Result<String> {
self.decode(&self.tokens)
}
pub fn get_token(&self, token_s: &str) -> Option<u32> {
self.tokenizer.get_vocab(true).get(token_s).copied()
}
pub fn tokenizer(&self) -> &tokenizers::Tokenizer {
&self.tokenizer
}
pub fn clear(&mut self) {
self.tokens.clear();
self.prev_index = 0;
self.current_index = 0;
}
}
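// A minimal usage sketch (`tokenizer` and `generated_tokens` are illustrative
// values, not part of this module): push tokens as they are sampled and print
// each fragment as soon as it forms decodable text.
//
// let mut stream = TokenOutputStream::new(tokenizer);
// for &token in generated_tokens.iter() {
//     if let Some(fragment) = stream.next_token(token)? {
//         print!("{fragment}");
//     }
// }
// if let Some(rest) = stream.decode_rest()? {
//     print!("{rest}");
// }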
|
candle/candle-examples/src/token_output_stream.rs/0
|
{
"file_path": "candle/candle-examples/src/token_output_stream.rs",
"repo_id": "candle",
"token_count": 1295
}
| 29
|
/******************************************************************************
* Copyright (c) 2023, Tri Dao.
******************************************************************************/
#pragma once
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <cuda_fp16.h>
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800
#include <cuda_bf16.h>
#endif
#include <cute/algorithm/copy.hpp>
#include <cute/algorithm/gemm.hpp>
#include <cutlass/array.h>
#include <cutlass/cutlass.h>
#include <cutlass/numeric_conversion.h>
#include <cutlass/numeric_types.h>
////////////////////////////////////////////////////////////////////////////////////////////////////
namespace flash {
////////////////////////////////////////////////////////////////////////////////////////////////////
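// relu2 computes max(x, 0) on two packed 16-bit values at once: a single
// max.f16x2 / max.bf16x2 instruction on SM80+, with a set/and based fallback
// for fp16 on older architectures.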
template<typename T>
inline __device__ uint32_t relu2(const uint32_t x);
template<>
inline __device__ uint32_t relu2<cutlass::half_t>(const uint32_t x) {
uint32_t res;
const uint32_t zero = 0u;
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800
asm volatile("max.f16x2 %0, %1, %2;\n" : "=r"(res) : "r"(x), "r"(zero));
#else
asm volatile( \
"{\n" \
"\t .reg .f16x2 sela;\n" \
"\t set.gtu.u32.f16x2 sela, %1, %2;\n" \
"\t and.b32 %0, sela, %1;\n"
"}\n" : "=r"(res) : "r"(x), "r"(zero));
#endif
return res;
}
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800
template<>
inline __device__ uint32_t relu2<cutlass::bfloat16_t>(const uint32_t x) {
uint32_t res;
const uint32_t zero = 0u;
asm volatile("max.bf16x2 %0, %1, %2;\n" : "=r"(res) : "r"(x), "r"(zero));
return res;
}
#endif
////////////////////////////////////////////////////////////////////////////////////////////////////
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800
template<typename T>
inline __device__ uint32_t convert_relu2(const float2 x);
template<>
inline __device__ uint32_t convert_relu2<cutlass::half_t>(const float2 x) {
uint32_t res;
const uint32_t a = reinterpret_cast<const uint32_t&>(x.x);
const uint32_t b = reinterpret_cast<const uint32_t&>(x.y);
asm volatile("cvt.rn.relu.f16x2.f32 %0, %1, %2;\n" : "=r"(res) : "r"(b), "r"(a));
return res;
}
template<>
inline __device__ uint32_t convert_relu2<cutlass::bfloat16_t>(const float2 x) {
uint32_t res;
const uint32_t a = reinterpret_cast<const uint32_t&>(x.x);
const uint32_t b = reinterpret_cast<const uint32_t&>(x.y);
asm volatile("cvt.rn.relu.bf16x2.f32 %0, %1, %2;\n" : "=r"(res) : "r"(b), "r"(a));
return res;
}
#endif
////////////////////////////////////////////////////////////////////////////////////////////////////
template<typename T>
struct MaxOp {
__device__ inline T operator()(T const & x, T const & y) { return x > y ? x : y; }
};
template <>
struct MaxOp<float> {
// This is slightly faster
__device__ inline float operator()(float const &x, float const &y) { return max(x, y); }
};
////////////////////////////////////////////////////////////////////////////////////////////////////
template<typename T>
struct SumOp {
__device__ inline T operator()(T const & x, T const & y) { return x + y; }
};
////////////////////////////////////////////////////////////////////////////////////////////////////
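// Warp-level allreduce over THREADS lanes: each recursion step combines values
// XOR-shuffled at half the previous distance, so every participating lane ends
// up holding the full reduction.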
template<int THREADS>
struct Allreduce {
static_assert(THREADS == 32 || THREADS == 16 || THREADS == 8 || THREADS == 4);
template<typename T, typename Operator>
static __device__ inline T run(T x, Operator &op) {
constexpr int OFFSET = THREADS / 2;
x = op(x, __shfl_xor_sync(uint32_t(-1), x, OFFSET));
return Allreduce<OFFSET>::run(x, op);
}
};
////////////////////////////////////////////////////////////////////////////////////////////////////
template<>
struct Allreduce<2> {
template<typename T, typename Operator>
static __device__ inline T run(T x, Operator &op) {
x = op(x, __shfl_xor_sync(uint32_t(-1), x, 1));
return x;
}
};
////////////////////////////////////////////////////////////////////////////////////////////////////
template<bool A_in_regs=false, bool B_in_regs=false, typename Tensor0, typename Tensor1,
typename Tensor2, typename Tensor3, typename Tensor4,
typename TiledMma, typename TiledCopyA, typename TiledCopyB,
typename ThrCopyA, typename ThrCopyB>
inline __device__ void gemm(Tensor0 &acc, Tensor1 &tCrA, Tensor2 &tCrB, Tensor3 const& tCsA,
Tensor4 const& tCsB, TiledMma tiled_mma,
TiledCopyA smem_tiled_copy_A, TiledCopyB smem_tiled_copy_B,
ThrCopyA smem_thr_copy_A, ThrCopyB smem_thr_copy_B) {
CUTE_STATIC_ASSERT_V(size<1>(tCrA) == size<1>(acc)); // MMA_M
CUTE_STATIC_ASSERT_V(size<1>(tCrB) == size<2>(acc)); // MMA_N
CUTE_STATIC_ASSERT_V(size<2>(tCrA) == size<2>(tCrB)); // MMA_K
Tensor tCrA_copy_view = smem_thr_copy_A.retile_D(tCrA);
CUTE_STATIC_ASSERT_V(size<1>(tCsA) == size<1>(tCrA_copy_view)); // M
Tensor tCrB_copy_view = smem_thr_copy_B.retile_D(tCrB);
CUTE_STATIC_ASSERT_V(size<1>(tCsB) == size<1>(tCrB_copy_view)); // N
if (!A_in_regs) { cute::copy(smem_tiled_copy_A, tCsA(_, _, _0{}), tCrA_copy_view(_, _, _0{})); }
if (!B_in_regs) { cute::copy(smem_tiled_copy_B, tCsB(_, _, _0{}), tCrB_copy_view(_, _, _0{})); }
#pragma unroll
for (int i = 0; i < size<2>(tCrA); ++i) {
if (i < size<2>(tCrA) - 1) {
if (!A_in_regs) { cute::copy(smem_tiled_copy_A, tCsA(_, _, i + 1), tCrA_copy_view(_, _, i + 1)); }
if (!B_in_regs) { cute::copy(smem_tiled_copy_B, tCsB(_, _, i + 1), tCrB_copy_view(_, _, i + 1)); }
}
cute::gemm(tiled_mma, tCrA(_, _, i), tCrB(_, _, i), acc);
}
}
////////////////////////////////////////////////////////////////////////////////////////////////////
template<typename Tensor0, typename Tensor1, typename Tensor2, typename Tensor3,
typename TiledMma, typename TiledCopy, typename ThrCopy>
inline __device__ void gemm_A_in_regs(Tensor0 &acc, Tensor1 &tCrA, Tensor2 &tCrB, Tensor3 const& tCsB,
TiledMma tiled_mma, TiledCopy smem_tiled_copy_B,
ThrCopy smem_thr_copy_B) {
CUTE_STATIC_ASSERT_V(size<1>(tCrA) == size<1>(acc)); // MMA_M
CUTE_STATIC_ASSERT_V(size<1>(tCrB) == size<2>(acc)); // MMA_N
CUTE_STATIC_ASSERT_V(size<2>(tCrA) == size<2>(tCrB)); // MMA_K
Tensor tCrB_copy_view = smem_thr_copy_B.retile_D(tCrB);
CUTE_STATIC_ASSERT_V(size<1>(tCsB) == size<1>(tCrB_copy_view)); // N
cute::copy(smem_tiled_copy_B, tCsB(_, _, _0{}), tCrB_copy_view(_, _, _0{}));
#pragma unroll
for (int i = 0; i < size<2>(tCrA); ++i) {
if (i < size<2>(tCrA) - 1) {
cute::copy(smem_tiled_copy_B, tCsB(_, _, i + 1), tCrB_copy_view(_, _, i + 1));
}
cute::gemm(tiled_mma, tCrA(_, _, i), tCrB(_, _, i), acc);
}
}
////////////////////////////////////////////////////////////////////////////////////////////////////
// Convert acc_layout from (MMA=4, MMA_M, MMA_N) to (nrow=(2, MMA_M), ncol=(2, MMA_N))
template<typename Layout>
inline __device__ auto convert_layout_acc_rowcol(Layout acc_layout) {
static_assert(decltype(size<0>(acc_layout))::value == 4);
static_assert(decltype(rank(acc_layout))::value == 3);
auto l = logical_divide(acc_layout, Shape<_2>{}); // ((2, 2), MMA_M, MMA_N)
// TD [2023-08-13]: Idk why but get<0, 1>(l) doesn't work for Cutlass 3.2, I'm getting
// "int_tuple.hpp(74): error: conversion to inaccessible base class"
// return make_layout(make_layout(get<0, 1>(l), get<1>(l)), make_layout(get<0, 0>(l), get<2>(l)));
return make_layout(make_layout(get<1>(get<0>(l)), get<1>(l)), make_layout(get<0>(get<0>(l)), get<2>(l)));
};
////////////////////////////////////////////////////////////////////////////////////////////////////
// Convert rowcol_layout from (nrow=(2, MMA_M), ncol=(2, MMA_N)) to ((2, 2, 2), MMA_M, MMA_N / 2)
// if using m16n8k16, or to ((2, 2, 1), MMA_M, MMA_N) if using m16n8k8.
template<typename MMA_traits, typename Layout>
inline __device__ auto convert_layout_rowcol_Aregs(Layout rowcol_layout) {
using X = Underscore;
static_assert(decltype(size<0, 0>(rowcol_layout))::value == 2);
static_assert(decltype(size<1, 0>(rowcol_layout))::value == 2);
constexpr int mma_shape_K = get<2>(typename MMA_traits::Shape_MNK{});
static_assert(mma_shape_K == 8 || mma_shape_K == 16);
constexpr int MMA_N_divisor = mma_shape_K == 8 ? 1 : 2;
auto l = logical_divide(rowcol_layout, Shape<X, Shape<X, Int<MMA_N_divisor>>>{}); // ((2, MMA_M), (2, (2, MMA_N / 2)))
// TD [2023-08-13]: Same error as above on Cutlass 3.2
// return make_layout(make_layout(get<1, 0>(l), get<0, 0>(l), get<1, 1, 0>(l)),
// get<0, 1>(l),
// get<1, 1, 1>(l));
return make_layout(make_layout(get<0>(get<1>(l)), get<0>(get<0>(l)), get<0>(get<1>(get<1>(l)))),
get<1>(get<0>(l)),
get<1>(get<1>(get<1>(l))));
};
////////////////////////////////////////////////////////////////////////////////////////////////////
template <typename To_type, typename Engine, typename Layout>
inline __device__ auto convert_type(Tensor<Engine, Layout> const &tensor) {
using From_type = typename Engine::value_type;
constexpr int numel = decltype(size(tensor))::value;
cutlass::NumericArrayConverter<To_type, From_type, numel> convert_op;
// HACK: this requires tensor to be "contiguous"
auto frag = convert_op(*reinterpret_cast<const cutlass::Array<From_type, numel> *>(tensor.data()));
return make_tensor(make_rmem_ptr<To_type>(&frag), tensor.layout());
}
////////////////////////////////////////////////////////////////////////////////////////////////////
template <typename Engine, typename Layout>
inline __device__ void relu_(Tensor<Engine, Layout> &tensor) {
constexpr int numel = decltype(size(tensor))::value;
static_assert(numel % 2 == 0);
using value_t = typename Engine::value_type;
// HACK: this requires tensor to be "contiguous"
Tensor tensor_uint32 = recast<uint32_t>(tensor);
#pragma unroll
for (int i = 0; i < size(tensor_uint32); ++i) {
tensor_uint32(i) = relu2<value_t>(tensor_uint32(i));
}
}
////////////////////////////////////////////////////////////////////////////////////////////////////
// On SM80 and above, we can fuse fp32 -> fp16/bf16 conversion and relu into 1 instruction
template <typename To_type, typename Engine, typename Layout>
inline __device__ auto convert_type_relu(Tensor<Engine, Layout> const &tensor) {
using From_type = typename Engine::value_type;
static_assert(std::is_same_v<To_type, cutlass::half_t> || std::is_same_v<To_type, cutlass::bfloat16_t>);
static_assert(std::is_same_v<float, From_type>);
constexpr int numel = decltype(size(tensor))::value;
static_assert(numel % 2 == 0);
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800
// HACK: this requires tensor to be "contiguous"
Tensor tensor_float2 = recast<float2>(tensor);
Tensor out_uint32 = make_tensor<uint32_t>(tensor_float2.layout());
#pragma unroll
for (int i = 0; i < size(out_uint32); ++i) {
out_uint32(i) = convert_relu2<To_type>(tensor_float2(i));
}
Tensor out = make_tensor(make_rmem_ptr<To_type>(out_uint32.data()), tensor.layout());
#else
Tensor out = flash::convert_type<To_type>(tensor);
flash::relu_(out);
#endif
return out;
}
////////////////////////////////////////////////////////////////////////////////////////////////////
// Blocks until all but N previous cp.async.commit_group operations have committed.
// This differs from cute::cp_async_wait in that when N = 0 we don't call cp.async.wait_all
// (which is equivalent to commit_group then wait_group 0).
// Instead we just call cp.async.wait_group 0, which is slightly faster.
// https://github.com/NVIDIA/cutlass/blob/master/include/cute/arch/copy_sm80.hpp#L113
template <int N>
CUTE_HOST_DEVICE
void cp_async_wait() {
#if defined(CUTE_ARCH_CP_ASYNC_SM80_ENABLED)
asm volatile("cp.async.wait_group %0;\n" :: "n"(N));
#endif
}
////////////////////////////////////////////////////////////////////////////////////////////////////
template <bool Is_even_MN=true, bool Is_even_K=true, bool Clear_OOB_MN=false, bool Clear_OOB_K=true,
typename TiledCopy, typename Engine0, typename Layout0, typename Engine1, typename Layout1,
typename Engine2, typename Layout2, typename Engine3, typename Layout3>
inline __device__ void copy(TiledCopy tiled_copy, Tensor<Engine0, Layout0> const &S,
Tensor<Engine1, Layout1> &D, Tensor<Engine2, Layout2> const &identity_MN,
Tensor<Engine3, Layout3> const &predicate_K, const int max_MN=0) {
CUTE_STATIC_ASSERT_V(rank(S) == Int<3>{});
CUTE_STATIC_ASSERT_V(rank(D) == Int<3>{});
CUTE_STATIC_ASSERT_V(size<0>(S) == size<0>(D)); // MMA
CUTE_STATIC_ASSERT_V(size<1>(S) == size<1>(D)); // MMA_M
CUTE_STATIC_ASSERT_V(size<2>(S) == size<2>(D)); // MMA_K
// There's no case where !Clear_OOB_K && Clear_OOB_MN
static_assert(!(Clear_OOB_MN && !Clear_OOB_K));
#pragma unroll
for (int m = 0; m < size<1>(S); ++m) {
if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) {
#pragma unroll
for (int k = 0; k < size<2>(S); ++k) {
if (Is_even_K || predicate_K(k)) {
cute::copy(tiled_copy, S(_, m, k), D(_, m, k));
} else if (Clear_OOB_K) {
cute::clear(D(_, m, k));
}
}
} else if (Clear_OOB_MN) {
cute::clear(D(_, m, _));
}
}
// TD [2023-04-13]: Strange that the code below can cause race condition.
// I think it's because the copies are under an if statement.
// if (Is_even_K) {
// #pragma unroll
// for (int m = 0; m < size<1>(S); ++m) {
// if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) {
// copy(tiled_copy, S(_, m, _), D(_, m, _));
// } else if (Clear_OOB_MN) {
// clear(D(_, m, _));
// }
// }
// } else { // It's slightly faster in this case if iterate over K first
// #pragma unroll
// for (int k = 0; k < size<2>(S); ++k) {
// if (predicate_K(k)) {
// #pragma unroll
// for (int m = 0; m < size<1>(S); ++m) {
// if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) {
// copy(tiled_copy, S(_, m, k), D(_, m, k));
// } else if (Clear_OOB_MN) {
// clear(D(_, m, k));
// }
// }
// } else if (Clear_OOB_K) { // There's no case where !Clear_OOB_K && Clear_OOB_MN
// if (Clear_OOB_MN || Is_even_MN) {
// clear(D(_, _, k));
// } else {
// #pragma unroll
// for (int m = 0; m < size<1>(S); ++m) {
// if (!(Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN)) {
// clear(D(_, m, k));
// }
// }
// }
// }
// }
// }
}
////////////////////////////////////////////////////////////////////////////////////////////////////
} // namespace flash
|
candle/candle-flash-attn/kernels/utils.h/0
|
{
"file_path": "candle/candle-flash-attn/kernels/utils.h",
"repo_id": "candle",
"token_count": 6965
}
| 30
|
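// Each constant embeds the PTX generated at build time for the matching .cu
// kernel source; the build script compiles the kernels and writes the
// resulting .ptx files into OUT_DIR.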
pub const AFFINE: &str = include_str!(concat!(env!("OUT_DIR"), "/affine.ptx"));
pub const BINARY: &str = include_str!(concat!(env!("OUT_DIR"), "/binary.ptx"));
pub const CAST: &str = include_str!(concat!(env!("OUT_DIR"), "/cast.ptx"));
pub const CONV: &str = include_str!(concat!(env!("OUT_DIR"), "/conv.ptx"));
pub const FILL: &str = include_str!(concat!(env!("OUT_DIR"), "/fill.ptx"));
pub const INDEXING: &str = include_str!(concat!(env!("OUT_DIR"), "/indexing.ptx"));
pub const QUANTIZED: &str = include_str!(concat!(env!("OUT_DIR"), "/quantized.ptx"));
pub const REDUCE: &str = include_str!(concat!(env!("OUT_DIR"), "/reduce.ptx"));
pub const TERNARY: &str = include_str!(concat!(env!("OUT_DIR"), "/ternary.ptx"));
pub const UNARY: &str = include_str!(concat!(env!("OUT_DIR"), "/unary.ptx"));
|
candle/candle-kernels/src/lib.rs/0
|
{
"file_path": "candle/candle-kernels/src/lib.rs",
"repo_id": "candle",
"token_count": 333
}
| 31
|
#include <metal_stdlib>
using namespace metal;
#define MAX(x, y) ((x) > (y) ? (x) : (y))
#define MIN(x, y) ((x) < (y) ? (x) : (y))
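// Map a linear element index to its memory offset for a possibly
// non-contiguous tensor: the index is decomposed into per-dimension
// coordinates (innermost dimension first) and each coordinate is weighted by
// its stride.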
METAL_FUNC uint get_strided_index(
uint idx,
constant size_t &num_dims,
constant size_t *dims,
constant size_t *strides
) {
uint strided_i = 0;
for (uint d = 0; d < num_dims; d++) {
uint dim_idx = num_dims - 1 - d;
strided_i += (idx % dims[dim_idx]) * strides[dim_idx];
idx /= dims[dim_idx];
}
return strided_i;
}
constant int THREADGROUP_SIZE = 2048;
template<typename T>
METAL_FUNC void argmin(
constant size_t &num_dims,
constant size_t *dims,
constant size_t *strides,
constant size_t &el_to_sum_per_block,
device const T *src,
device uint *dst,
uint id,
uint tid,
uint dst_id,
uint block_dim,
threadgroup T *shared_memory,
threadgroup uint *shared_indices
) {
bool notset = true;
    // Elements reduced in this block range from dst_id * el_to_sum_per_block
// to (dst_id + 1) * el_to_sum_per_block.
size_t start_idx = dst_id * el_to_sum_per_block;
size_t stop_idx = start_idx + el_to_sum_per_block;
size_t idx = start_idx + tid;
while (idx < stop_idx) {
// TODO: Fast version for the contiguous case.
size_t strided_i = get_strided_index(idx, num_dims, dims, strides);
if (notset || src[strided_i] < shared_memory[tid]) {
shared_memory[tid] = src[strided_i];
/* Assume that the reduction takes place over the last dimension which is contiguous. */
shared_indices[tid] = idx % dims[num_dims - 1];
notset = false;
}
idx += block_dim;
}
threadgroup_barrier(mem_flags::mem_none);
// reduction in shared memory
for (uint s = block_dim / 2; s > 0; s >>= 1) {
if (tid < s && shared_memory[tid + s] < shared_memory[tid]) {
shared_indices[tid] = shared_indices[tid + s];
shared_memory[tid] = shared_memory[tid + s];
        }
threadgroup_barrier(mem_flags::mem_none);
}
if (tid == 0) {
dst[dst_id] = shared_indices[0];
}
}
#define ARGMIN(NAME, T, MAXVALUE) \
kernel void NAME( \
constant size_t &num_dims, \
constant size_t *dims, \
constant size_t *strides, \
constant size_t &el_to_sum_per_block, \
device const T *src, \
device uint *dst, \
uint id [[ thread_position_in_grid ]], \
uint tid [[ thread_index_in_threadgroup ]], \
uint dst_id [[ threadgroup_position_in_grid ]], \
uint block_dim [[ threads_per_threadgroup ]] \
) { \
threadgroup T shared_memory[THREADGROUP_SIZE]; \
threadgroup uint shared_indices[THREADGROUP_SIZE]; \
shared_memory[tid] = MAXVALUE; \
shared_indices[tid] = 0xFFFFFFFF; \
argmin<T>(num_dims, dims, strides, el_to_sum_per_block, src, dst, id, tid, dst_id, block_dim, shared_memory, shared_indices); \
} \
template<typename T>
METAL_FUNC void argmax(
constant size_t & num_dims,
constant size_t * dims,
constant size_t * strides,
constant size_t & el_to_sum_per_block,
device const T * src,
device uint * dst,
uint id,
uint tid,
uint dst_id,
uint block_dim,
threadgroup T * shared_memory,
threadgroup uint * shared_indices
) {
// Elements summed in this block range from dst_id * el_to_sum_per_block
// to (dst_id + 1) * el_to_sum_per_block.
size_t start_idx = dst_id * el_to_sum_per_block;
size_t stop_idx = start_idx + el_to_sum_per_block;
size_t idx = start_idx + tid;
bool notset = true;
while (idx < stop_idx) {
// TODO: Fast version for the contiguous case.
size_t strided_i = get_strided_index(idx, num_dims, dims, strides);
if (notset || shared_memory[tid] < src[strided_i]) {
shared_memory[tid] = src[strided_i];
shared_indices[tid] = idx % dims[num_dims - 1];
notset = false;
}
idx += block_dim;
}
    threadgroup_barrier(mem_flags::mem_threadgroup);
// reduction in shared memory
for (uint s = block_dim / 2; s > 0; s >>= 1) {
if (tid < s && shared_memory[tid + s] > shared_memory[tid]) {
shared_indices[tid] = shared_indices[tid + s];
shared_memory[tid] = shared_memory[tid + s];
}
        threadgroup_barrier(mem_flags::mem_threadgroup);
}
// Thread 0 writes the result of the reduction
if (tid == 0) {
dst[dst_id] = shared_indices[0];
}
}
#define ARGMAX(NAME, T, MINVALUE) \
kernel void NAME( \
constant size_t &num_dims, \
constant size_t *dims, \
constant size_t *strides, \
constant size_t &el_to_sum_per_block, \
device const T *src, \
device uint *dst, \
uint id [[ thread_position_in_grid ]], \
uint tid [[ thread_index_in_threadgroup ]], \
uint dst_id [[ threadgroup_position_in_grid ]], \
uint block_dim [[ threads_per_threadgroup ]] \
) { \
threadgroup T shared_memory[THREADGROUP_SIZE]; \
threadgroup uint shared_indices[THREADGROUP_SIZE]; \
shared_memory[tid] = MINVALUE; \
shared_indices[tid] = 0xFFFFFFFF; \
argmax<T>(num_dims, dims, strides, el_to_sum_per_block, src, dst, id, tid, dst_id, block_dim, shared_memory, shared_indices); \
}
template<typename T>
METAL_FUNC void reduce(
constant size_t & num_dims,
constant size_t * dims,
constant size_t * strides,
constant size_t & el_to_sum_per_block,
device const T * src,
device T * dst,
uint id,
uint tid,
uint dst_id,
uint block_dim,
threadgroup T * shared_memory,
T (*fn)(T, T)
) {
// Elements summed in this block range from dst_id * el_to_sum_per_block
// to (dst_id + 1) * el_to_sum_per_block.
size_t start_idx = dst_id * el_to_sum_per_block;
size_t stop_idx = start_idx + el_to_sum_per_block;
size_t idx = start_idx + tid;
while (idx < stop_idx) {
// TODO: Fast version for the contiguous case.
size_t strided_i = get_strided_index(idx, num_dims, dims, strides);
T x = shared_memory[tid];
T y = src[strided_i];
shared_memory[tid] = fn(x, y);
idx += block_dim;
}
    threadgroup_barrier(mem_flags::mem_threadgroup);
// reduction in shared memory
for (uint s = block_dim / 2; s > 0; s >>= 1) {
if (tid < s) {
T x = shared_memory[tid];
T y = shared_memory[tid + s];
shared_memory[tid] = fn(x, y);
}
        threadgroup_barrier(mem_flags::mem_threadgroup);
}
if (tid == 0) {
dst[dst_id] = shared_memory[0];
}
}
#define REDUCE(FN, NAME, T, START) \
METAL_FUNC T NAME##_##op(T x, T y) { return FN; } \
kernel void NAME( \
constant size_t &num_dims, \
constant size_t *dims, \
constant size_t *strides, \
constant size_t &el_to_sum_per_block, \
device const T *src, \
device T *dst, \
uint id [[ thread_position_in_grid ]], \
uint tid [[ thread_index_in_threadgroup ]], \
uint dst_id [[ threadgroup_position_in_grid ]], \
uint block_dim [[ threads_per_threadgroup ]] \
) { \
threadgroup T shared_memory[THREADGROUP_SIZE]; \
shared_memory[tid] = START; \
reduce<T>(num_dims, dims, strides, el_to_sum_per_block, src, dst, id, tid, dst_id, block_dim, shared_memory, NAME##_##op); \
}
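// Expansion note (annotation): REDUCE(x + y, fast_sum_f32_strided, float, 0)
// below generates a helper `float fast_sum_f32_strided_op(float x, float y)
// { return x + y; }` plus a kernel `fast_sum_f32_strided` that seeds every
// shared-memory slot with 0 before calling reduce<float> with that helper.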
template<typename T>
METAL_FUNC void softmax(
constant size_t & src_numel,
constant size_t & el_to_sum_per_block,
device const T * src,
device T * dst,
uint id,
uint tid,
uint dst_id,
uint block_dim,
threadgroup float * shared_memory
) {
size_t start_idx = dst_id * el_to_sum_per_block;
size_t stop_idx = min(start_idx + el_to_sum_per_block, src_numel);
size_t idx = start_idx + tid;
float tmp = -INFINITY;
while (idx < stop_idx) {
tmp = MAX(tmp, float(src[idx]));
idx += block_dim;
}
shared_memory[tid] = tmp;
threadgroup_barrier(mem_flags::mem_threadgroup);
for (uint s = block_dim / 2; s > 0; s >>= 1) {
if (tid < s) {
        shared_memory[tid] = MAX(shared_memory[tid], shared_memory[tid + s]);
}
threadgroup_barrier(mem_flags::mem_threadgroup);
}
/* wait for shared_memory[0] to be filled */
threadgroup_barrier(mem_flags::mem_threadgroup);
float _max = shared_memory[0];
    /* prevent threads from overwriting shared_memory[0] before every thread has read _max */
threadgroup_barrier(mem_flags::mem_threadgroup);
shared_memory[tid] = 0;
idx = start_idx + tid;
while (idx < stop_idx) {
const float val = exp(float(src[idx]) - _max);
dst[idx] = T(val);
shared_memory[tid] += val;
idx += block_dim;
}
threadgroup_barrier(mem_flags::mem_threadgroup);
for (uint s = block_dim / 2; s > 0; s >>= 1) {
if (tid < s) {
shared_memory[tid] += shared_memory[tid + s];
}
threadgroup_barrier(mem_flags::mem_threadgroup);
}
const T inv_acc = T(1.0 / shared_memory[0]);
idx = start_idx + tid;
while (idx < stop_idx) {
dst[idx] *= inv_acc;
idx += block_dim;
}
}
#define SOFTMAX(NAME, T) \
kernel void NAME( \
constant size_t &src_numel, \
constant size_t &el_to_sum_per_block, \
device const T *src, \
device T *dst, \
uint id [[ thread_position_in_grid ]], \
uint tid [[ thread_index_in_threadgroup ]], \
uint dst_id [[ threadgroup_position_in_grid ]], \
uint block_dim [[ threads_per_threadgroup ]] \
) { \
threadgroup float shared_memory[THREADGROUP_SIZE]; \
shared_memory[tid] = -INFINITY; \
softmax<T>(src_numel, el_to_sum_per_block, src, dst, id, tid, dst_id, block_dim, shared_memory); \
}
template<typename T>
METAL_FUNC void rmsnorm(
constant size_t & src_numel,
constant size_t & el_to_sum_per_block,
device const T * src,
device T * dst,
device const T * alpha,
constant float & eps,
uint id,
uint tid,
uint dst_id,
uint block_dim,
threadgroup float * shared_memory
) {
size_t start_idx = dst_id * el_to_sum_per_block;
size_t stop_idx = min(start_idx + el_to_sum_per_block, src_numel);
size_t idx = start_idx + tid;
float tmp = 0;
while (idx < stop_idx) {
tmp = tmp + float(src[idx]) * float(src[idx]);
idx += block_dim;
}
shared_memory[tid] = tmp;
threadgroup_barrier(mem_flags::mem_threadgroup);
for (uint s = block_dim / 2; s > 0; s >>= 1) {
if (tid < s) {
shared_memory[tid] = shared_memory[tid] + shared_memory[tid + s];
}
threadgroup_barrier(mem_flags::mem_threadgroup);
}
/* wait for shared_memory[0] to be filled */
threadgroup_barrier(mem_flags::mem_threadgroup);
float norm = sqrt(shared_memory[0] / float(el_to_sum_per_block) + eps);
float inv_norm = 1.0f / norm;
idx = start_idx + tid;
while (idx < stop_idx) {
float val = float(src[idx]) * inv_norm;
if (alpha != nullptr) {
val *= float(alpha[idx - start_idx]);
}
dst[idx] = T(val);
idx += block_dim;
}
}
#define RMSNORM(NAME, T) \
kernel void NAME( \
constant size_t &src_numel, \
constant size_t &el_to_sum_per_block, \
device const T *src, \
device T *dst, \
device const T *alpha, \
constant float &eps, \
uint id [[ thread_position_in_grid ]], \
uint tid [[ thread_index_in_threadgroup ]], \
uint dst_id [[ threadgroup_position_in_grid ]], \
uint block_dim [[ threads_per_threadgroup ]] \
) { \
threadgroup float shared_memory[THREADGROUP_SIZE]; \
shared_memory[tid] = 0; \
rmsnorm<T>(src_numel, el_to_sum_per_block, src, dst, alpha, eps, id, tid, dst_id, block_dim, shared_memory); \
}
template<typename T>
METAL_FUNC void ropei(
constant size_t &bh,
constant size_t &td,
device const T *src,
device const T *cos,
device const T *sin,
device T *dst,
uint tid
) {
if (2 * tid >= bh * td) {
return;
}
size_t rope_idx = tid % (td / 2);
T c = cos[rope_idx];
T s = sin[rope_idx];
dst[2 * tid] = src[2 * tid] * c - src[2 * tid + 1] * s;
dst[2 * tid + 1] = src[2 * tid] * s + src[2 * tid + 1] * c;
}
template<typename T>
METAL_FUNC void rope(
constant size_t &bh,
constant size_t &td,
constant size_t &d,
device const T *src,
device const T *cos,
device const T *sin,
device T *dst,
uint idx
) {
if (2 * idx >= bh * td) {
return;
}
size_t i_bh = idx / (td / 2);
size_t i_td = idx - (td / 2) * i_bh;
size_t i_t = i_td / (d / 2);
size_t i_d = i_td - (d / 2) * i_t;
size_t i1 = i_bh * td + i_t * d + i_d;
size_t i2 = i1 + d / 2;
size_t i_cs = i_t * (d / 2) + i_d;
T c = cos[i_cs];
T s = sin[i_cs];
dst[i1] = src[i1] * c - src[i2] * s;
dst[i2] = src[i1] * s + src[i2] * c;
}
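// Worked example (annotation): with t = 2 and d = 4 (so td = 8), idx = 5
// gives i_bh = 1, i_t = 0, i_d = 1, hence the rotated pair
// (i1, i2) = (9, 11) sharing table entry i_cs = 1; each thread rotates one
// (src[i1], src[i1 + d/2]) pair.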
template<typename T>
METAL_FUNC void rope_thd(
constant size_t &b,
constant size_t &t,
constant size_t &h,
constant size_t &d,
device const T *src,
device const T *cos,
device const T *sin,
device T *dst,
uint idx
) {
if (2 * idx >= b * t * h * d) {
return;
}
const size_t i_bth = idx / (d / 2);
const size_t i_d = idx - (d / 2) * i_bth;
const size_t i_t = (i_bth / h) % t;
const size_t i1 = i_bth * d + i_d;
const size_t i2 = i1 + d / 2;
const size_t i_cs = i_t * (d / 2) + i_d;
T c = cos[i_cs];
T s = sin[i_cs];
dst[i1] = src[i1] * c - src[i2] * s;
dst[i2] = src[i1] * s + src[i2] * c;
}
#define ROPE(FN_NAME, FN_NAME_I, FN_NAME_THD, TYPENAME) \
kernel void FN_NAME_I( \
constant size_t &bh, \
constant size_t &td, \
device const TYPENAME *src, \
device const TYPENAME *cos, \
device const TYPENAME *sin, \
device TYPENAME *dst, \
uint tid [[ thread_position_in_grid ]] \
) { \
ropei<TYPENAME>(bh, td, src, cos, sin, dst, tid); \
}\
kernel void FN_NAME( \
constant size_t &bh, \
constant size_t &td, \
constant size_t &d, \
device const TYPENAME *src, \
device const TYPENAME *cos, \
device const TYPENAME *sin, \
device TYPENAME *dst, \
uint idx [[ thread_position_in_grid ]] \
) { \
rope<TYPENAME>(bh, td, d, src, cos, sin, dst, idx); \
}\
kernel void FN_NAME_THD( \
constant size_t &b, \
constant size_t &t, \
constant size_t &h, \
constant size_t &d, \
device const TYPENAME *src, \
device const TYPENAME *cos, \
device const TYPENAME *sin, \
device TYPENAME *dst, \
uint idx [[ thread_position_in_grid ]] \
) { \
rope_thd<TYPENAME>(b, t, h, d, src, cos, sin, dst, idx); \
}
REDUCE(x + y, fast_sum_f32_strided, float, 0)
REDUCE(x + y, fast_sum_u32_strided, uint, 0)
REDUCE(x + y, fast_sum_f16_strided, half, 0)
REDUCE(x + y, fast_sum_u8_strided, uint8_t, 0)
REDUCE(x * y, fast_mul_f32_strided, float, 1)
REDUCE(x * y, fast_mul_u32_strided, uint, 1)
REDUCE(x * y, fast_mul_f16_strided, half, 1)
REDUCE(MAX(x, y), fast_max_f32_strided, float, -HUGE_VALF)
REDUCE(MAX(x, y), fast_max_u32_strided, uint, 0)
REDUCE(MAX(x, y), fast_max_f16_strided, half, -HUGE_VALH)
REDUCE(MAX(x, y), fast_max_u8_strided, uint8_t, 0)
REDUCE(MIN(x, y), fast_min_f32_strided, float, HUGE_VALF)
REDUCE(MIN(x, y), fast_min_u32_strided, uint, 0xFFFFFFFF)
REDUCE(MIN(x, y), fast_min_f16_strided, half, HUGE_VALH)
REDUCE(MIN(x, y), fast_min_u8_strided, uint8_t, 0xFF)
ARGMIN(fast_argmin_f32_strided, float, HUGE_VALF)
ARGMIN(fast_argmin_f16_strided, half, HUGE_VALH)
ARGMIN(fast_argmin_u32_strided, uint, 0xFFFFFFFF)
ARGMIN(fast_argmin_u8_strided, uint8_t, 0xFF)
ARGMAX(fast_argmax_f32_strided, float, -HUGE_VALF)
ARGMAX(fast_argmax_f16_strided, half, -HUGE_VALH)
ARGMAX(fast_argmax_u32_strided, uint, 0)
ARGMAX(fast_argmax_u8_strided, uint8_t, 0)
SOFTMAX(softmax_f32, float)
SOFTMAX(softmax_f16, half)
RMSNORM(rmsnorm_f32, float)
RMSNORM(rmsnorm_f16, half)
ROPE(rope_f32, rope_i_f32, rope_thd_f32, float)
ROPE(rope_f16, rope_i_f16, rope_thd_f16, half)
#if __METAL_VERSION__ >= 220
REDUCE(x + y, fast_sum_i64_strided, int64_t, 0)
REDUCE(MIN(x, y), fast_min_i64_strided, int64_t, INT_MAX)
REDUCE(MAX(x, y), fast_max_i64_strided, int64_t, INT_MIN)
ARGMIN(fast_argmin_i64_strided, int64_t, INT_MAX)
ARGMAX(fast_argmax_i64_strided, int64_t, INT_MIN)
#endif
#if defined(__HAVE_BFLOAT__)
REDUCE(x + y, fast_sum_bf16, bfloat, 0)
REDUCE(x + y, fast_sum_bf16_strided, bfloat, 0)
REDUCE(x * y, fast_mul_bf16, bfloat, 1)
REDUCE(x * y, fast_mul_bf16_strided, bfloat, 1)
REDUCE(MAX(x, y), fast_max_bf16, bfloat, -HUGE_VALBF)
REDUCE(MAX(x, y), fast_max_bf16_strided, bfloat, -HUGE_VALBF)
REDUCE(MIN(x, y), fast_min_bf16, bfloat, HUGE_VALBF)
REDUCE(MIN(x, y), fast_min_bf16_strided, bfloat, HUGE_VALBF)
ARGMIN(fast_argmin_bf16, bfloat, HUGE_VALBF)
ARGMAX(fast_argmax_bf16, bfloat, -HUGE_VALBF)
SOFTMAX(softmax_bf16, bfloat)
RMSNORM(rmsnorm_bf16, bfloat)
ROPE(rope_bf16, rope_i_bf16, rope_thd_bf16, bfloat)
#endif
|
candle/candle-metal-kernels/src/reduce.metal/0
|
{
"file_path": "candle/candle-metal-kernels/src/reduce.metal",
"repo_id": "candle",
"token_count": 7961
}
| 32
|
//! This example contains some simple benchmarks so that it's easy to run them in perf etc.
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use candle::quantized::GgmlType;
use candle::{CpuStorage, Device, Layout, Module, Result, Shape, Tensor, D};
use clap::{Parser, Subcommand};
const CHECK_CONV2D: bool = false;
trait Benchmark {
type PreProcessData;
type RunResult;
fn preprocess() -> Result<Self::PreProcessData>;
fn run_one(_: &Self::PreProcessData) -> Result<Self::RunResult>;
const ITERS: usize;
}
struct Im2Col {
h_k: usize,
w_k: usize,
stride: usize,
dilation: usize,
padding: usize,
}
impl Im2Col {
fn hw_out(&self, h: usize, w: usize) -> (usize, usize) {
let h_out = (h + 2 * self.padding - self.dilation * (self.h_k - 1) - 1) / self.stride + 1;
let w_out = (w + 2 * self.padding - self.dilation * (self.w_k - 1) - 1) / self.stride + 1;
(h_out, w_out)
}
}
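// Worked example (annotation): with h = w = 96, a 3x3 kernel, stride 1,
// dilation 1 and no padding -- the Conv2dIm2Col setup below -- this gives
// h_out = w_out = (96 - 2 - 1) / 1 + 1 = 94, so the im2col matrix has
// b * 94 * 94 rows of c * 3 * 3 elements each.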
impl candle::CustomOp1 for Im2Col {
fn name(&self) -> &'static str {
"im2col"
}
fn cpu_fwd(&self, storage: &CpuStorage, layout: &Layout) -> Result<(CpuStorage, Shape)> {
let &Self {
h_k,
w_k,
stride,
dilation,
padding,
} = self;
let (b, c, h, w) = layout.shape().dims4()?;
let (h_out, w_out) = self.hw_out(h, w);
let slice = storage.as_slice::<f32>()?;
let src = &slice[layout.start_offset()..];
let mut dst = vec![0f32; b * h_out * w_out * c * h_k * w_k];
let (src_s0, src_s1, src_s2, src_s3) = {
let s = layout.stride();
(s[0], s[1], s[2], s[3])
};
// TODO: provide specialized kernels for the common use cases.
// - h_k = w_k = 1
// - padding = 0
// - stride = 1
// - dilation = 1
for b_idx in 0..b {
let src_idx = b_idx * src_s0;
let dst_idx = b_idx * h_out * w_out * c * h_k * w_k;
for h_idx in 0..h_out {
let dst_idx = dst_idx + h_idx * w_out * c * h_k * w_k;
for w_idx in 0..w_out {
let dst_idx = dst_idx + w_idx * c * h_k * w_k;
for c_idx in 0..c {
let dst_idx = dst_idx + c_idx * h_k * w_k;
let src_idx = c_idx * src_s1 + src_idx;
for h_k_idx in 0..h_k {
let src_h = h_idx * stride + h_k_idx * dilation;
if padding != 0 && (src_h < padding || src_h >= h + padding) {
continue;
}
let src_h = src_h - padding;
let src_idx = src_idx + src_h * src_s2;
let dst_idx = dst_idx + h_k_idx * w_k;
for w_k_idx in 0..w_k {
let src_w = w_idx * stride + w_k_idx * dilation;
if padding != 0 && (src_w < padding || src_w >= w + padding) {
continue;
}
let src_w = src_w - padding;
let src_idx = src_idx + src_w * src_s3;
let dst_idx = dst_idx + w_k_idx;
                            dst[dst_idx] = src[src_idx];
}
}
}
}
}
}
let storage = candle::WithDType::to_cpu_storage_owned(dst);
Ok((storage, (b * h_out * w_out, c * h_k * w_k).into()))
}
}
// Conv1d example as used in whisper.
struct Conv1d;
impl Benchmark for Conv1d {
type PreProcessData = (Tensor, Tensor);
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
let inp = Tensor::randn(0f32, 1., (1, 384, 3000), &Device::Cpu)?;
let w = Tensor::randn(0f32, 1., (384, 384, 3), &Device::Cpu)?;
Ok((inp, w))
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
d.0.conv1d(&d.1, 0, 1, 1, 1)
}
const ITERS: usize = 5;
}
// Conv2d example as used in stable-diffusion.
struct Conv2d;
impl Benchmark for Conv2d {
type PreProcessData = (Tensor, Tensor);
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
let inp = Tensor::randn(0f32, 1., (2, 320, 96, 96), &Device::Cpu)?;
let w = Tensor::randn(0f32, 1., (320, 320, 3, 3), &Device::Cpu)?;
Ok((inp, w))
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
d.0.conv2d(&d.1, 0, 1, 1, 1)
}
const ITERS: usize = 5;
}
// Conv2d example as used in stable-diffusion, im2col implementation.
struct Conv2dIm2Col;
impl Benchmark for Conv2dIm2Col {
type PreProcessData = (Tensor, Tensor);
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
let inp = Tensor::randn(0f32, 1., (2, 320, 96, 96), &Device::Cpu)?;
let w = Tensor::randn(0f32, 1., (320, 320, 3, 3), &Device::Cpu)?;
Ok((inp, w))
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
// d.0.conv2d(&d.1, 0, 1, 1, 1)
let (b, _, h, w) = d.0.dims4()?;
let (_, _, h_k, w_k) = d.1.dims4()?;
let op = Im2Col {
h_k,
w_k,
stride: 1,
dilation: 1,
padding: 0,
};
let (h_out, w_out) = op.hw_out(h, w);
let col = d.0.apply_op1_no_bwd(&op)?;
let res = col.matmul(&d.1.flatten_from(1)?.t()?)?;
let res = res
.reshape((b, h_out, w_out, ()))?
.permute((0, 3, 1, 2))?
.contiguous()?;
if CHECK_CONV2D {
let res2 = d.0.conv2d(&d.1, op.padding, op.stride, op.dilation, 1);
let diff = (&res - res2)?.sqr()?.mean_all()?;
println!("{diff}");
}
Ok(res)
}
const ITERS: usize = 5;
}
struct MatMul;
impl Benchmark for MatMul {
type PreProcessData = (Tensor, Tensor);
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
let lhs = Tensor::randn(0f32, 1., (1024, 1024), &Device::Cpu)?;
let rhs = Tensor::randn(0f32, 1., (1024, 1024), &Device::Cpu)?;
Ok((lhs, rhs))
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
d.0.matmul(&d.1)
}
const ITERS: usize = 100;
}
struct MatVec;
impl Benchmark for MatVec {
type PreProcessData = (Tensor, Tensor);
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
let lhs = Tensor::randn(0f32, 1., (1024 * 4, 1024 * 4), &Device::Cpu)?;
let rhs = Tensor::randn(0f32, 1., (1024 * 4, 1), &Device::Cpu)?;
Ok((lhs, rhs))
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
d.0.matmul(&d.1)
}
const ITERS: usize = 100;
}
// This benchmark is similar to:
// https://github.com/ggerganov/llama.cpp/blob/master/examples/benchmark/benchmark-matmult.cpp
struct QMatMul;
impl Benchmark for QMatMul {
type PreProcessData = (candle::quantized::QMatMul, Tensor);
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
let zeros = vec![candle::quantized::k_quants::BlockQ4_0::zeros(); 4096 * 11008 / 32];
let mm = candle::quantized::QTensor::new(
candle::quantized::QStorage::Cpu(Box::new(zeros)),
(4096, 11008),
)?;
let mm = candle::quantized::QMatMul::from_qtensor(mm)?;
let arg = Tensor::randn(0f32, 1., (128, 11008), &Device::Cpu)?;
Ok((mm, arg))
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
d.0.forward(&d.1)
}
const ITERS: usize = 100;
}
struct Cat;
impl Benchmark for Cat {
type PreProcessData = (Tensor, Tensor);
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
let lhs = Tensor::randn(0f32, 1., (1, 32, 2000, 128), &Device::Cpu)?;
let rhs = Tensor::randn(0f32, 1., (1, 32, 1, 128), &Device::Cpu)?;
Ok((lhs, rhs))
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
Tensor::cat(&[&d.0, &d.1], 2)
}
const ITERS: usize = 1000;
}
struct Softmax;
impl Benchmark for Softmax {
type PreProcessData = Tensor;
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
// Typical whisper tiny size.
let x = Tensor::randn(0f32, 1., (1, 6, 200, 1500), &Device::Cpu)?;
Ok(x)
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
candle_nn::ops::softmax(d, D::Minus1)
}
const ITERS: usize = 100;
}
struct SoftmaxLastDim;
impl Benchmark for SoftmaxLastDim {
type PreProcessData = Tensor;
type RunResult = Tensor;
fn preprocess() -> Result<Self::PreProcessData> {
// Typical whisper tiny size.
let x = Tensor::randn(0f32, 1., (1, 6, 200, 1500), &Device::Cpu)?;
Ok(x)
}
fn run_one(d: &Self::PreProcessData) -> Result<Self::RunResult> {
candle_nn::ops::softmax_last_dim(d)
}
const ITERS: usize = 100;
}
fn run<B: Benchmark>(iters: Option<usize>) -> Result<()> {
use std::hint::black_box;
let iters = iters.unwrap_or(B::ITERS);
let d = B::preprocess()?;
let start = std::time::Instant::now();
for _iter in 0..iters {
let _res = black_box(B::run_one(black_box(&d))?);
}
println!("{:?}", start.elapsed() / iters as u32);
Ok(())
}
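// Example invocation (illustrative; assumes this file is built as the
// `cpu_benchmarks` example of candle-nn):
//   cargo run --release --example cpu_benchmarks -- matmul --iters 10
// The subcommand names are derived by clap from the `Task` variants below,
// e.g. `conv1d`, `matmul`, `qmatmul`.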
#[derive(Subcommand, Debug, Clone)]
enum Task {
Conv1d,
Conv2d,
Conv2dIm2Col,
Matmul,
Matvec,
Qmatmul,
Softmax,
SoftmaxLastDim,
Cat,
}
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
pub struct Args {
/// The benchmark to be run.
#[command(subcommand)]
task: Task,
#[arg(long)]
iters: Option<usize>,
}
fn main() -> Result<()> {
let args = Args::parse();
match args.task {
Task::Conv1d => run::<Conv1d>(args.iters)?,
Task::Conv2d => run::<Conv2d>(args.iters)?,
Task::Conv2dIm2Col => run::<Conv2dIm2Col>(args.iters)?,
Task::Matmul => run::<MatMul>(args.iters)?,
Task::Matvec => run::<MatVec>(args.iters)?,
Task::Softmax => run::<Softmax>(args.iters)?,
Task::SoftmaxLastDim => run::<SoftmaxLastDim>(args.iters)?,
Task::Qmatmul => run::<QMatMul>(args.iters)?,
Task::Cat => run::<Cat>(args.iters)?,
}
Ok(())
}
|
candle/candle-nn/examples/cpu_benchmarks.rs/0
|
{
"file_path": "candle/candle-nn/examples/cpu_benchmarks.rs",
"repo_id": "candle",
"token_count": 5543
}
| 33
|
use candle::{CpuStorage, Layout, Result, Shape, Tensor, D};
use rayon::prelude::*;
/// Interleaved variant of rotary embeddings.
/// The x0 and x1 values are interleaved on the n_embd (= head_dim) dimension.
/// The resulting y0 and y1 are also interleaved with:
/// y0 = x0*cos - x1*sin
/// y1 = x0*sin + x1*cos
#[derive(Debug, Clone)]
struct RotaryEmbI;
impl candle::CustomOp3 for RotaryEmbI {
fn name(&self) -> &'static str {
"rotary-emb-int"
}
fn cpu_fwd(
&self,
s1: &CpuStorage,
l1: &Layout,
s2: &CpuStorage,
l2: &Layout,
s3: &CpuStorage,
l3: &Layout,
) -> Result<(CpuStorage, Shape)> {
fn inner<T: candle::WithDType + num_traits::Float>(
src: &[T],
l_src: &Layout,
cos: &[T],
l_cos: &Layout,
sin: &[T],
l_sin: &Layout,
) -> Result<(CpuStorage, Shape)> {
let src = match l_src.contiguous_offsets() {
None => candle::bail!("input src has to be contiguous"),
Some((o1, o2)) => &src[o1..o2],
};
let cos = match l_cos.contiguous_offsets() {
None => candle::bail!("input cos has to be contiguous"),
Some((o1, o2)) => &cos[o1..o2],
};
let sin = match l_sin.contiguous_offsets() {
None => candle::bail!("input sin has to be contiguous"),
Some((o1, o2)) => &sin[o1..o2],
};
let (b, h, t, d) = l_src.shape().dims4()?;
let el_count = b * h * t * d;
let mut dst = vec![T::zero(); el_count];
src.par_chunks(t * d)
.zip(dst.par_chunks_mut(t * d))
.for_each(|(src, dst)| {
for i_over_2 in 0..t * d / 2 {
let i = 2 * i_over_2;
dst[i] = src[i] * cos[i_over_2] - src[i + 1] * sin[i_over_2];
dst[i + 1] = src[i] * sin[i_over_2] + src[i + 1] * cos[i_over_2];
}
});
let storage = candle::WithDType::to_cpu_storage_owned(dst);
Ok((storage, (b, h, t, d).into()))
}
use candle::backend::BackendStorage;
use CpuStorage::{BF16, F16, F32, F64};
match (s1, s2, s3) {
(BF16(s1), BF16(s2), BF16(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F16(s1), F16(s2), F16(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F32(s1), F32(s2), F32(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F64(s1), F64(s2), F64(s3)) => inner(s1, l1, s2, l2, s3, l3),
_ => candle::bail!(
"unsupported dtype for rope {:?} {:?} {:?}",
s1.dtype(),
s2.dtype(),
s3.dtype()
),
}
}
#[cfg(feature = "cuda")]
fn cuda_fwd(
&self,
s1: &candle::CudaStorage,
l1: &Layout,
s2: &candle::CudaStorage,
l2: &Layout,
s3: &candle::CudaStorage,
l3: &Layout,
) -> Result<(candle::CudaStorage, Shape)> {
use candle::cuda_backend::cudarc::driver::{
CudaSlice, DeviceRepr, LaunchAsync, LaunchConfig,
};
use candle::cuda_backend::{kernel_name, kernels, WrapErr};
use candle::{CudaDevice, WithDType};
fn inner<T: DeviceRepr + WithDType>(
src: &CudaSlice<T>,
l_src: &Layout,
cos: &CudaSlice<T>,
l_cos: &Layout,
sin: &CudaSlice<T>,
l_sin: &Layout,
dev: &CudaDevice,
) -> Result<CudaSlice<T>> {
let src = match l_src.contiguous_offsets() {
None => candle::bail!("src input has to be contiguous"),
Some((o1, o2)) => src.slice(o1..o2),
};
let cos = match l_cos.contiguous_offsets() {
None => candle::bail!("cos input has to be contiguous"),
Some((o1, o2)) => cos.slice(o1..o2),
};
let sin = match l_sin.contiguous_offsets() {
None => candle::bail!("sin input has to be contiguous"),
Some((o1, o2)) => sin.slice(o1..o2),
};
let (b, h, t, d) = l_src.shape().dims4()?;
let el = b * h * t * d;
let cfg = LaunchConfig::for_num_elems((el / 2) as u32);
let func = dev.get_or_load_func(&kernel_name::<T>("rope_i"), kernels::REDUCE)?;
// SAFETY: Set later by running the kernel.
let dst = unsafe { dev.alloc::<T>(el) }.w()?;
let params = (&src, &cos, &sin, &dst, (b * h) as u32, (t * d) as u32);
// SAFETY: ffi.
unsafe { func.launch(cfg, params) }.w()?;
Ok(dst)
}
use candle::backend::BackendStorage;
use candle::cuda_backend::CudaStorageSlice::{BF16, F16, F32, F64};
let dev = s1.device();
let slice = match (&s1.slice, &s2.slice, &s3.slice) {
(BF16(s1), BF16(s2), BF16(s3)) => BF16(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F16(s1), F16(s2), F16(s3)) => F16(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F32(s1), F32(s2), F32(s3)) => F32(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F64(s1), F64(s2), F64(s3)) => F64(inner(s1, l1, s2, l2, s3, l3, dev)?),
_ => candle::bail!(
"unsupported dtype for rope {:?} {:?} {:?}",
s1.dtype(),
s2.dtype(),
s3.dtype()
),
};
let dst = candle::cuda_backend::CudaStorage {
slice,
device: dev.clone(),
};
Ok((dst, l1.shape().clone()))
}
#[cfg(feature = "metal")]
fn metal_fwd(
&self,
src: &candle::MetalStorage,
l_src: &Layout,
cos: &candle::MetalStorage,
l_cos: &Layout,
sin: &candle::MetalStorage,
l_sin: &Layout,
) -> Result<(candle::MetalStorage, Shape)> {
use candle::backend::BackendStorage;
let device = src.device();
let command_buffer = device.command_buffer()?;
let kernels = device.kernels();
if cos.dtype() != src.dtype() || sin.dtype() != src.dtype() {
candle::bail!(
"dtype mismatch in rope-i {:?} {:?} {:?}",
src.dtype(),
cos.dtype(),
sin.dtype()
)
}
let name = match src.dtype() {
candle::DType::F32 => "rope_i_f32",
candle::DType::F16 => "rope_i_f16",
candle::DType::BF16 => "rope_i_bf16",
dtype => candle::bail!("rope-i is not implemented for {dtype:?}"),
};
let (b, h, t, d) = l_src.shape().dims4()?;
let el = b * h * t * d;
let output = device.new_buffer(el, src.dtype(), "rope-i")?;
candle_metal_kernels::call_rope_i(
device.metal_device(),
&command_buffer,
kernels,
name,
b * h,
t * d,
src.buffer(),
l_src.start_offset() * src.dtype().size_in_bytes(),
cos.buffer(),
l_cos.start_offset() * cos.dtype().size_in_bytes(),
sin.buffer(),
l_sin.start_offset() * sin.dtype().size_in_bytes(),
&output,
)
.map_err(candle::Error::wrap)?;
let out = candle::MetalStorage::new(output, device.clone(), el, src.dtype());
Ok((out, l_src.shape().clone()))
}
}
pub fn rope_i(xs: &Tensor, cos: &Tensor, sin: &Tensor) -> Result<Tensor> {
let (_b_sz, _n_head, seq_len, n_embd) = xs.dims4()?;
let (cos_seq_len, cos_n_embd) = cos.dims2()?;
    let (sin_seq_len, sin_n_embd) = sin.dims2()?;
if cos_n_embd * 2 != n_embd
|| sin_n_embd * 2 != n_embd
|| seq_len > cos_seq_len
|| seq_len > sin_seq_len
{
candle::bail!(
"inconsistent last dim size in rope {:?} {:?} {:?}",
xs.shape(),
cos.shape(),
sin.shape()
)
}
if !xs.is_contiguous() {
candle::bail!("xs has to be contiguous in rope")
}
if !cos.is_contiguous() {
candle::bail!("cos has to be contiguous in rope")
}
if !sin.is_contiguous() {
candle::bail!("sin has to be contiguous in rope")
}
xs.apply_op3_no_bwd(cos, sin, &RotaryEmbI)
}
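// Shape contract, summarized (annotation): xs is (b, n_head, seq_len, n_embd)
// with the rotation pairs interleaved on the last dimension, while cos and
// sin are (max_seq_len, n_embd / 2) tables; e.g. xs of shape (1, 32, 128, 64)
// pairs with cos/sin of shape (4096, 32).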
pub fn rope_i_slow(x: &Tensor, cos: &Tensor, sin: &Tensor) -> Result<Tensor> {
let (b_sz, n_head, seq_len, n_embd) = x.dims4()?;
let cos = cos
.narrow(0, 0, seq_len)?
.reshape((seq_len, n_embd / 2, 1))?;
let sin = sin
.narrow(0, 0, seq_len)?
.reshape((seq_len, n_embd / 2, 1))?;
let cos = cos.broadcast_as((b_sz, 1, seq_len, n_embd / 2, 1))?;
let sin = sin.broadcast_as((b_sz, 1, seq_len, n_embd / 2, 1))?;
let x = x.reshape((b_sz, n_head, seq_len, n_embd / 2, 2))?;
let x0 = x.narrow(D::Minus1, 0, 1)?;
let x1 = x.narrow(D::Minus1, 1, 1)?;
let y0 = (x0.broadcast_mul(&cos)? - x1.broadcast_mul(&sin)?)?;
let y1 = (x0.broadcast_mul(&sin)? + x1.broadcast_mul(&cos)?)?;
let rope = Tensor::cat(&[y0, y1], D::Minus1)?;
let rope = rope.flatten_from(D::Minus2)?;
Ok(rope)
}
/// Contiguous variant of rope embeddings.
#[derive(Debug, Clone)]
struct RotaryEmb;
impl candle::CustomOp3 for RotaryEmb {
fn name(&self) -> &'static str {
"rotary-emb"
}
fn cpu_fwd(
&self,
s1: &CpuStorage,
l1: &Layout,
s2: &CpuStorage,
l2: &Layout,
s3: &CpuStorage,
l3: &Layout,
) -> Result<(CpuStorage, Shape)> {
fn inner<T: candle::WithDType + num_traits::Float>(
src: &[T],
l_src: &Layout,
cos: &[T],
l_cos: &Layout,
sin: &[T],
l_sin: &Layout,
) -> Result<(CpuStorage, Shape)> {
let src = match l_src.contiguous_offsets() {
None => candle::bail!("input src has to be contiguous"),
Some((o1, o2)) => &src[o1..o2],
};
let cos = match l_cos.contiguous_offsets() {
None => candle::bail!("input cos has to be contiguous"),
Some((o1, o2)) => &cos[o1..o2],
};
let sin = match l_sin.contiguous_offsets() {
None => candle::bail!("input sin has to be contiguous"),
Some((o1, o2)) => &sin[o1..o2],
};
let (b, h, t, d) = l_src.shape().dims4()?;
let el_count = b * h * t * d;
let mut dst = vec![T::zero(); el_count];
src.par_chunks(t * d)
.zip(dst.par_chunks_mut(t * d))
.for_each(|(src, dst)| {
for i_t in 0..t {
for i_d in 0..d / 2 {
let i1 = i_t * d + i_d;
let i2 = i1 + d / 2;
let i_cs = i_t * (d / 2) + i_d;
dst[i1] = src[i1] * cos[i_cs] - src[i2] * sin[i_cs];
dst[i2] = src[i1] * sin[i_cs] + src[i2] * cos[i_cs];
}
}
});
let storage = candle::WithDType::to_cpu_storage_owned(dst);
Ok((storage, (b, h, t, d).into()))
}
use candle::backend::BackendStorage;
use CpuStorage::{BF16, F16, F32, F64};
match (s1, s2, s3) {
(BF16(s1), BF16(s2), BF16(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F16(s1), F16(s2), F16(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F32(s1), F32(s2), F32(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F64(s1), F64(s2), F64(s3)) => inner(s1, l1, s2, l2, s3, l3),
_ => candle::bail!(
"unsupported dtype for rope {:?} {:?} {:?}",
s1.dtype(),
s2.dtype(),
s3.dtype()
),
}
}
#[cfg(feature = "cuda")]
fn cuda_fwd(
&self,
s1: &candle::CudaStorage,
l1: &Layout,
s2: &candle::CudaStorage,
l2: &Layout,
s3: &candle::CudaStorage,
l3: &Layout,
) -> Result<(candle::CudaStorage, Shape)> {
use candle::cuda_backend::cudarc::driver::{
CudaSlice, DeviceRepr, LaunchAsync, LaunchConfig,
};
use candle::cuda_backend::{kernel_name, kernels, WrapErr};
use candle::{CudaDevice, WithDType};
fn inner<T: DeviceRepr + WithDType>(
src: &CudaSlice<T>,
l_src: &Layout,
cos: &CudaSlice<T>,
l_cos: &Layout,
sin: &CudaSlice<T>,
l_sin: &Layout,
dev: &CudaDevice,
) -> Result<CudaSlice<T>> {
let src = match l_src.contiguous_offsets() {
None => candle::bail!("src input has to be contiguous"),
Some((o1, o2)) => src.slice(o1..o2),
};
let cos = match l_cos.contiguous_offsets() {
None => candle::bail!("cos input has to be contiguous"),
Some((o1, o2)) => cos.slice(o1..o2),
};
let sin = match l_sin.contiguous_offsets() {
None => candle::bail!("sin input has to be contiguous"),
Some((o1, o2)) => sin.slice(o1..o2),
};
let (b, h, t, d) = l_src.shape().dims4()?;
let el = b * h * t * d;
let cfg = LaunchConfig::for_num_elems((el / 2) as u32);
let func = dev.get_or_load_func(&kernel_name::<T>("rope"), kernels::REDUCE)?;
// SAFETY: Set later by running the kernel.
let dst = unsafe { dev.alloc::<T>(el) }.w()?;
let params = (
&src,
&cos,
&sin,
&dst,
(b * h) as u32,
(t * d) as u32,
d as u32,
);
// SAFETY: ffi.
unsafe { func.launch(cfg, params) }.w()?;
Ok(dst)
}
use candle::backend::BackendStorage;
use candle::cuda_backend::CudaStorageSlice::{BF16, F16, F32, F64};
let dev = s1.device();
let slice = match (&s1.slice, &s2.slice, &s3.slice) {
(BF16(s1), BF16(s2), BF16(s3)) => BF16(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F16(s1), F16(s2), F16(s3)) => F16(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F32(s1), F32(s2), F32(s3)) => F32(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F64(s1), F64(s2), F64(s3)) => F64(inner(s1, l1, s2, l2, s3, l3, dev)?),
_ => candle::bail!(
"unsupported dtype for rope {:?} {:?} {:?}",
s1.dtype(),
s2.dtype(),
s3.dtype()
),
};
let dst = candle::cuda_backend::CudaStorage {
slice,
device: dev.clone(),
};
Ok((dst, l1.shape().clone()))
}
#[cfg(feature = "metal")]
fn metal_fwd(
&self,
src: &candle::MetalStorage,
l_src: &Layout,
cos: &candle::MetalStorage,
l_cos: &Layout,
sin: &candle::MetalStorage,
l_sin: &Layout,
) -> Result<(candle::MetalStorage, Shape)> {
use candle::backend::BackendStorage;
let device = src.device();
let command_buffer = device.command_buffer()?;
let kernels = device.kernels();
if cos.dtype() != src.dtype() || sin.dtype() != src.dtype() {
candle::bail!(
"dtype mismatch in rope {:?} {:?} {:?}",
src.dtype(),
cos.dtype(),
sin.dtype()
)
}
let name = match src.dtype() {
candle::DType::F32 => "rope_f32",
candle::DType::F16 => "rope_f16",
candle::DType::BF16 => "rope_bf16",
dtype => candle::bail!("rope is not implemented for {dtype:?}"),
};
let (b, h, t, d) = l_src.shape().dims4()?;
let el = b * h * t * d;
        let output = device.new_buffer(el, src.dtype(), "rope")?;
candle_metal_kernels::call_rope(
device.metal_device(),
&command_buffer,
kernels,
name,
b * h,
t * d,
d,
src.buffer(),
l_src.start_offset() * src.dtype().size_in_bytes(),
cos.buffer(),
l_cos.start_offset() * cos.dtype().size_in_bytes(),
sin.buffer(),
l_sin.start_offset() * sin.dtype().size_in_bytes(),
&output,
)
.map_err(candle::Error::wrap)?;
let out = candle::MetalStorage::new(output, device.clone(), el, src.dtype());
Ok((out, l_src.shape().clone()))
}
}
pub fn rope(xs: &Tensor, cos: &Tensor, sin: &Tensor) -> Result<Tensor> {
let (_b_sz, _n_head, seq_len, n_embd) = xs.dims4()?;
let (cos_seq_len, cos_n_embd) = cos.dims2()?;
let (sin_seq_len, sin_n_embd) = sin.dims2()?;
if cos_n_embd * 2 != n_embd
|| sin_n_embd * 2 != n_embd
|| seq_len > cos_seq_len
|| seq_len > sin_seq_len
{
candle::bail!(
"inconsistent last dim size in rope {:?} {:?} {:?}",
xs.shape(),
cos.shape(),
sin.shape()
)
}
if !xs.is_contiguous() {
candle::bail!("xs has to be contiguous in rope")
}
if !cos.is_contiguous() {
candle::bail!("cos has to be contiguous in rope")
}
if !sin.is_contiguous() {
candle::bail!("sin has to be contiguous in rope")
}
xs.apply_op3_no_bwd(cos, sin, &RotaryEmb)
}
fn rotate_half(xs: &Tensor) -> Result<Tensor> {
let last_dim = xs.dim(D::Minus1)?;
let xs1 = xs.narrow(D::Minus1, 0, last_dim / 2)?;
let xs2 = xs.narrow(D::Minus1, last_dim / 2, last_dim - last_dim / 2)?;
Tensor::cat(&[&xs2.neg()?, &xs1], D::Minus1)
}
pub fn rope_slow(x: &Tensor, cos: &Tensor, sin: &Tensor) -> Result<Tensor> {
let (_b_sz, _h, seq_len, _n_embd) = x.dims4()?;
let cos = Tensor::cat(&[cos, cos], D::Minus1)?;
let sin = Tensor::cat(&[sin, sin], D::Minus1)?;
let cos = cos.narrow(0, 0, seq_len)?;
let sin = sin.narrow(0, 0, seq_len)?;
let cos = cos.unsqueeze(0)?.unsqueeze(0)?;
let sin = sin.unsqueeze(0)?.unsqueeze(0)?;
x.broadcast_mul(&cos)? + rotate_half(x)?.broadcast_mul(&sin)?
}
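// Worked example (annotation): rotate_half([x0, x1, x2, x3]) = [-x2, -x3, x0, x1],
// so for each half-pair rope_slow computes y0 = x0*cos0 - x2*sin0 and
// y2 = x0*sin0 + x2*cos0, matching the custom `rope` op above on contiguous
// (non-interleaved) layouts.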
/// T (seqlen)/H (num-heads)/D (head-dim) contiguous variant of rope embeddings.
#[derive(Debug, Clone)]
struct RotaryEmbThd;
impl candle::CustomOp3 for RotaryEmbThd {
fn name(&self) -> &'static str {
"rotary-emb"
}
fn cpu_fwd(
&self,
s1: &CpuStorage,
l1: &Layout,
s2: &CpuStorage,
l2: &Layout,
s3: &CpuStorage,
l3: &Layout,
) -> Result<(CpuStorage, Shape)> {
fn inner<T: candle::WithDType + num_traits::Float>(
src: &[T],
l_src: &Layout,
cos: &[T],
l_cos: &Layout,
sin: &[T],
l_sin: &Layout,
) -> Result<(CpuStorage, Shape)> {
let src = match l_src.contiguous_offsets() {
None => candle::bail!("input src has to be contiguous"),
Some((o1, o2)) => &src[o1..o2],
};
let cos = match l_cos.contiguous_offsets() {
None => candle::bail!("input cos has to be contiguous"),
Some((o1, o2)) => &cos[o1..o2],
};
let sin = match l_sin.contiguous_offsets() {
None => candle::bail!("input sin has to be contiguous"),
Some((o1, o2)) => &sin[o1..o2],
};
let (b, t, h, d) = l_src.shape().dims4()?;
let el_count = b * h * t * d;
let mut dst = vec![T::zero(); el_count];
src.par_chunks(t * h * d)
.zip(dst.par_chunks_mut(t * h * d))
.for_each(|(src, dst)| {
for i_t in 0..t {
for i_d in 0..d / 2 {
let i_cs = i_t * (d / 2) + i_d;
for i_h in 0..h {
let i1 = i_t * h * d + i_h * d + i_d;
let i2 = i1 + d / 2;
dst[i1] = src[i1] * cos[i_cs] - src[i2] * sin[i_cs];
dst[i2] = src[i1] * sin[i_cs] + src[i2] * cos[i_cs];
}
}
}
});
let storage = candle::WithDType::to_cpu_storage_owned(dst);
Ok((storage, (b, t, h, d).into()))
}
use candle::backend::BackendStorage;
use CpuStorage::{BF16, F16, F32, F64};
match (s1, s2, s3) {
(BF16(s1), BF16(s2), BF16(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F16(s1), F16(s2), F16(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F32(s1), F32(s2), F32(s3)) => inner(s1, l1, s2, l2, s3, l3),
(F64(s1), F64(s2), F64(s3)) => inner(s1, l1, s2, l2, s3, l3),
_ => candle::bail!(
"unsupported dtype for rope {:?} {:?} {:?}",
s1.dtype(),
s2.dtype(),
s3.dtype()
),
}
}
#[cfg(feature = "cuda")]
fn cuda_fwd(
&self,
s1: &candle::CudaStorage,
l1: &Layout,
s2: &candle::CudaStorage,
l2: &Layout,
s3: &candle::CudaStorage,
l3: &Layout,
) -> Result<(candle::CudaStorage, Shape)> {
use candle::cuda_backend::cudarc::driver::{
CudaSlice, DeviceRepr, LaunchAsync, LaunchConfig,
};
use candle::cuda_backend::{kernel_name, kernels, WrapErr};
use candle::{CudaDevice, WithDType};
fn inner<T: DeviceRepr + WithDType>(
src: &CudaSlice<T>,
l_src: &Layout,
cos: &CudaSlice<T>,
l_cos: &Layout,
sin: &CudaSlice<T>,
l_sin: &Layout,
dev: &CudaDevice,
) -> Result<CudaSlice<T>> {
let src = match l_src.contiguous_offsets() {
None => candle::bail!("src input has to be contiguous"),
Some((o1, o2)) => src.slice(o1..o2),
};
let cos = match l_cos.contiguous_offsets() {
None => candle::bail!("cos input has to be contiguous"),
Some((o1, o2)) => cos.slice(o1..o2),
};
let sin = match l_sin.contiguous_offsets() {
None => candle::bail!("sin input has to be contiguous"),
Some((o1, o2)) => sin.slice(o1..o2),
};
let (b, t, h, d) = l_src.shape().dims4()?;
let el = b * h * t * d;
let cfg = LaunchConfig::for_num_elems((el / 2) as u32);
let func = dev.get_or_load_func(&kernel_name::<T>("rope_thd"), kernels::REDUCE)?;
// SAFETY: Set later by running the kernel.
let dst = unsafe { dev.alloc::<T>(el) }.w()?;
let params = (
&src, &cos, &sin, &dst, b as u32, t as u32, h as u32, d as u32,
);
// SAFETY: ffi.
unsafe { func.launch(cfg, params) }.w()?;
Ok(dst)
}
use candle::backend::BackendStorage;
use candle::cuda_backend::CudaStorageSlice::{BF16, F16, F32, F64};
let dev = s1.device();
let slice = match (&s1.slice, &s2.slice, &s3.slice) {
(BF16(s1), BF16(s2), BF16(s3)) => BF16(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F16(s1), F16(s2), F16(s3)) => F16(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F32(s1), F32(s2), F32(s3)) => F32(inner(s1, l1, s2, l2, s3, l3, dev)?),
(F64(s1), F64(s2), F64(s3)) => F64(inner(s1, l1, s2, l2, s3, l3, dev)?),
_ => candle::bail!(
"unsupported dtype for rope {:?} {:?} {:?}",
s1.dtype(),
s2.dtype(),
s3.dtype()
),
};
let dst = candle::cuda_backend::CudaStorage {
slice,
device: dev.clone(),
};
Ok((dst, l1.shape().clone()))
}
#[cfg(feature = "metal")]
fn metal_fwd(
&self,
src: &candle::MetalStorage,
l_src: &Layout,
cos: &candle::MetalStorage,
l_cos: &Layout,
sin: &candle::MetalStorage,
l_sin: &Layout,
) -> Result<(candle::MetalStorage, Shape)> {
use candle::backend::BackendStorage;
let device = src.device();
let command_buffer = device.command_buffer()?;
let kernels = device.kernels();
if cos.dtype() != src.dtype() || sin.dtype() != src.dtype() {
candle::bail!(
"dtype mismatch in rope {:?} {:?} {:?}",
src.dtype(),
cos.dtype(),
sin.dtype()
)
}
let name = match src.dtype() {
candle::DType::F32 => "rope_thd_f32",
candle::DType::F16 => "rope_thd_f16",
candle::DType::BF16 => "rope_thd_bf16",
dtype => candle::bail!("rope_thd is not implemented for {dtype:?}"),
};
let (b, t, h, d) = l_src.shape().dims4()?;
let el = b * h * t * d;
let output = device.new_buffer(el, src.dtype(), "rope-thd")?;
candle_metal_kernels::call_rope_thd(
device.metal_device(),
&command_buffer,
kernels,
name,
b,
t,
h,
d,
src.buffer(),
l_src.start_offset() * src.dtype().size_in_bytes(),
cos.buffer(),
l_cos.start_offset() * cos.dtype().size_in_bytes(),
sin.buffer(),
l_sin.start_offset() * sin.dtype().size_in_bytes(),
&output,
)
.map_err(candle::Error::wrap)?;
let out = candle::MetalStorage::new(output, device.clone(), el, src.dtype());
Ok((out, l_src.shape().clone()))
}
}
pub fn rope_thd(xs: &Tensor, cos: &Tensor, sin: &Tensor) -> Result<Tensor> {
let (_b_sz, seq_len, _n_head, n_embd) = xs.dims4()?;
let (cos_seq_len, cos_n_embd) = cos.dims2()?;
let (sin_seq_len, sin_n_embd) = sin.dims2()?;
if cos_n_embd * 2 != n_embd
|| sin_n_embd * 2 != n_embd
|| seq_len > cos_seq_len
|| seq_len > sin_seq_len
{
candle::bail!(
"inconsistent last dim size in rope {:?} {:?} {:?}",
xs.shape(),
cos.shape(),
sin.shape()
)
}
if !xs.is_contiguous() {
candle::bail!("xs has to be contiguous in rope")
}
if !cos.is_contiguous() {
candle::bail!("cos has to be contiguous in rope")
}
if !sin.is_contiguous() {
candle::bail!("sin has to be contiguous in rope")
}
xs.apply_op3_no_bwd(cos, sin, &RotaryEmbThd)
}
|
candle/candle-nn/src/rotary_emb.rs/0
|
{
"file_path": "candle/candle-nn/src/rotary_emb.rs",
"repo_id": "candle",
"token_count": 15510
}
| 34
|
use candle::Result;
use prost::Message;
pub mod onnx {
include!(concat!(env!("OUT_DIR"), "/onnx.rs"));
}
pub mod eval;
pub use eval::{dtype, simple_eval};
pub fn read_file<P: AsRef<std::path::Path>>(p: P) -> Result<onnx::ModelProto> {
let buf = std::fs::read(p)?;
onnx::ModelProto::decode(buf.as_slice()).map_err(candle::Error::wrap)
}
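// Usage sketch (illustrative; the file name is an assumption):
//   let model = candle_onnx::read_file("model.onnx")?;
//   let outputs = candle_onnx::simple_eval(&model, inputs)?;
// where `inputs` is a HashMap<String, Tensor> keyed by the graph input names.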
|
candle/candle-onnx/src/lib.rs/0
|
{
"file_path": "candle/candle-onnx/src/lib.rs",
"repo_id": "candle",
"token_count": 154
}
| 35
|
from .module import Module
from .container import Sequential, ModuleList, ModuleDict
from .sparse import Embedding
from .normalization import LayerNorm
from .linear import Linear
|
candle/candle-pyo3/py_src/candle/nn/__init__.py/0
|
{
"file_path": "candle/candle-pyo3/py_src/candle/nn/__init__.py",
"repo_id": "candle",
"token_count": 43
}
| 36
|
use std::collections::HashMap;
use crate::utils::wrap_err;
use crate::{PyDType, PyTensor};
use candle_onnx::eval::{dtype, get_tensor, simple_eval};
use candle_onnx::onnx::tensor_proto::DataType;
use candle_onnx::onnx::tensor_shape_proto::dimension::Value;
use candle_onnx::onnx::type_proto::{Tensor as ONNXTensor, Value as ONNXValue};
use candle_onnx::onnx::{ModelProto, ValueInfoProto};
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;
use pyo3::types::{PyList, PyTuple};
#[derive(Clone, Debug)]
#[pyclass(name = "ONNXTensorDescription")]
/// A wrapper around an ONNX tensor description.
pub struct PyONNXTensorDescriptor(ONNXTensor);
#[pymethods]
impl PyONNXTensorDescriptor {
#[getter]
/// The data type of the tensor.
/// &RETURNS&: DType
fn dtype(&self) -> PyResult<PyDType> {
match DataType::try_from(self.0.elem_type) {
Ok(dt) => match dtype(dt) {
Some(dt) => Ok(PyDType(dt)),
None => Err(PyValueError::new_err(format!(
"unsupported 'value' data-type {dt:?}"
))),
},
type_ => Err(PyValueError::new_err(format!(
"unsupported input type {type_:?}"
))),
}
}
#[getter]
/// The shape of the tensor.
/// &RETURNS&: Tuple[Union[int,str,Any]]
fn shape(&self, py: Python) -> PyResult<Py<PyTuple>> {
let shape = PyList::empty_bound(py);
if let Some(d) = &self.0.shape {
for dim in d.dim.iter() {
if let Some(value) = &dim.value {
match value {
Value::DimValue(v) => shape.append(*v)?,
Value::DimParam(s) => shape.append(s.clone())?,
};
} else {
return Err(PyValueError::new_err("None value in shape"));
}
}
}
Ok(shape.to_tuple().into())
}
fn __repr__(&self, py: Python) -> String {
match (self.shape(py), self.dtype()) {
(Ok(shape), Ok(dtype)) => format!(
"TensorDescriptor[shape: {:?}, dtype: {:?}]",
shape.to_string(),
dtype.__str__()
),
(Err(_), Err(_)) => "TensorDescriptor[shape: unknown, dtype: unknown]".to_string(),
(Err(_), Ok(dtype)) => format!(
"TensorDescriptor[shape: unknown, dtype: {:?}]",
dtype.__str__()
),
(Ok(shape), Err(_)) => format!(
"TensorDescriptor[shape: {:?}, dtype: unknown]",
shape.to_string()
),
}
}
fn __str__(&self, py: Python) -> String {
self.__repr__(py)
}
}
#[derive(Clone, Debug)]
#[pyclass(name = "ONNXModel")]
/// A wrapper around an ONNX model.
pub struct PyONNXModel(ModelProto);
fn extract_tensor_descriptions(
value_infos: &[ValueInfoProto],
) -> HashMap<String, PyONNXTensorDescriptor> {
let mut map = HashMap::new();
for value_info in value_infos.iter() {
let input_type = match &value_info.r#type {
Some(input_type) => input_type,
None => continue,
};
let input_type = match &input_type.value {
Some(input_type) => input_type,
None => continue,
};
let tensor_type: &ONNXTensor = match input_type {
ONNXValue::TensorType(tt) => tt,
_ => continue,
};
map.insert(
value_info.name.to_string(),
PyONNXTensorDescriptor(tensor_type.clone()),
);
}
map
}
#[pymethods]
impl PyONNXModel {
#[new]
#[pyo3(text_signature = "(self, path:str)")]
/// Load an ONNX model from the given path.
fn new(path: String) -> PyResult<Self> {
let model: ModelProto = candle_onnx::read_file(path).map_err(wrap_err)?;
Ok(PyONNXModel(model))
}
#[getter]
/// The version of the IR this model targets.
/// &RETURNS&: int
fn ir_version(&self) -> i64 {
self.0.ir_version
}
#[getter]
/// The producer of the model.
/// &RETURNS&: str
fn producer_name(&self) -> String {
self.0.producer_name.clone()
}
#[getter]
/// The version of the producer of the model.
/// &RETURNS&: str
fn producer_version(&self) -> String {
self.0.producer_version.clone()
}
#[getter]
/// The domain of the operator set of the model.
/// &RETURNS&: str
fn domain(&self) -> String {
self.0.domain.clone()
}
#[getter]
/// The version of the model.
/// &RETURNS&: int
fn model_version(&self) -> i64 {
self.0.model_version
}
#[getter]
/// The doc string of the model.
/// &RETURNS&: str
fn doc_string(&self) -> String {
self.0.doc_string.clone()
}
/// Get the weights of the model.
/// &RETURNS&: Dict[str, Tensor]
fn initializers(&self) -> PyResult<HashMap<String, PyTensor>> {
let mut map = HashMap::new();
if let Some(graph) = self.0.graph.as_ref() {
for tensor_description in graph.initializer.iter() {
let tensor = get_tensor(tensor_description, tensor_description.name.as_str())
.map_err(wrap_err)?;
map.insert(tensor_description.name.to_string(), PyTensor(tensor));
}
}
Ok(map)
}
#[getter]
/// The inputs of the model.
/// &RETURNS&: Optional[Dict[str, ONNXTensorDescription]]
fn inputs(&self) -> Option<HashMap<String, PyONNXTensorDescriptor>> {
if let Some(graph) = self.0.graph.as_ref() {
return Some(extract_tensor_descriptions(&graph.input));
}
None
}
#[getter]
/// The outputs of the model.
/// &RETURNS&: Optional[Dict[str, ONNXTensorDescription]]
fn outputs(&self) -> Option<HashMap<String, PyONNXTensorDescriptor>> {
if let Some(graph) = self.0.graph.as_ref() {
return Some(extract_tensor_descriptions(&graph.output));
}
None
}
#[pyo3(text_signature = "(self, inputs:Dict[str,Tensor])")]
/// Run the model on the given inputs.
/// &RETURNS&: Dict[str,Tensor]
fn run(&self, inputs: HashMap<String, PyTensor>) -> PyResult<HashMap<String, PyTensor>> {
        let unwrapped_tensors = inputs.into_iter().map(|(k, v)| (k, v.0)).collect();
let result = simple_eval(&self.0, unwrapped_tensors).map_err(wrap_err)?;
Ok(result
.into_iter()
            .map(|(k, v)| (k, PyTensor(v)))
.collect())
}
}
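// Python-side usage sketch (illustrative; the exact module path under which
// these classes are exposed depends on the extension's registration code):
//   model = ONNXModel("model.onnx")
//   outputs = model.run({"input": some_tensor})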
|
candle/candle-pyo3/src/onnx.rs/0
|
{
"file_path": "candle/candle-pyo3/src/onnx.rs",
"repo_id": "candle",
"token_count": 3268
}
| 37
|
pub mod generation;
pub mod models;
pub mod object_detection;
pub mod pipelines;
pub mod quantized_nn;
pub mod quantized_var_builder;
pub mod utils;
|
candle/candle-transformers/src/lib.rs/0
|
{
"file_path": "candle/candle-transformers/src/lib.rs",
"repo_id": "candle",
"token_count": 47
}
| 38
|
use candle::{DType, Device, Result, Tensor, D};
use candle_nn::{embedding, linear_b as linear, Embedding, LayerNorm, Linear, Module, VarBuilder};
use serde::Deserialize;
const MAX_SEQ_LEN: usize = 5000;
fn layer_norm(size: usize, eps: f64, vb: VarBuilder) -> Result<LayerNorm> {
let (weight, bias) = match (vb.get(size, "weight"), vb.get(size, "bias")) {
(Ok(weight), Ok(bias)) => (weight, bias),
(Err(err), _) | (_, Err(err)) => {
if let (Ok(weight), Ok(bias)) = (vb.get(size, "gamma"), vb.get(size, "beta")) {
(weight, bias)
} else {
return Err(err);
}
}
};
Ok(LayerNorm::new(weight, bias, eps))
}
// https://raw.githubusercontent.com/huggingface/transformers/030c863aaa0165e98352b61697430bf69bf33755/src/transformers/models/falcon/configuration_falcon.py
#[derive(Clone, Debug, Deserialize)]
pub struct Config {
pub vocab_size: usize,
pub hidden_size: usize,
pub num_hidden_layers: usize,
pub num_attention_heads: usize,
pub layer_norm_epsilon: f64,
pub initializer_range: f64,
pub use_cache: bool,
pub bos_token_id: u32,
pub eos_token_id: u32,
pub hidden_dropout: f64,
pub attention_dropout: f64,
pub n_head_kv: Option<usize>,
pub alibi: bool,
pub new_decoder_architecture: bool,
pub multi_query: bool,
pub parallel_attn: bool,
pub bias: bool,
}
impl Default for Config {
fn default() -> Self {
Self {
vocab_size: 65024,
hidden_size: 4544,
num_hidden_layers: 32,
num_attention_heads: 71,
layer_norm_epsilon: 1e-5,
initializer_range: 0.02,
use_cache: true,
bos_token_id: 11,
eos_token_id: 11,
hidden_dropout: 0.0,
attention_dropout: 0.0,
n_head_kv: None,
alibi: false,
new_decoder_architecture: false,
multi_query: true,
parallel_attn: true,
bias: false,
}
}
}
impl Config {
pub fn validate(&self) -> Result<()> {
if self.alibi {
candle::bail!("alibi is not supported");
}
if self.new_decoder_architecture {
candle::bail!("new_decoder_architecture is not supported");
}
if self.n_head_kv.is_some() {
candle::bail!("n_head_kv is not supported");
}
Ok(())
}
// https://huggingface.co/tiiuae/falcon-7b/blob/main/config.json
pub fn falcon7b() -> Self {
        // This is currently on par with the defaults: the defaults mirror the Python
        // default arguments for the config initialization, whereas the following
        // values come from the json config.
Self {
vocab_size: 65024,
hidden_size: 4544,
num_hidden_layers: 32,
num_attention_heads: 71,
layer_norm_epsilon: 1e-5,
initializer_range: 0.02,
use_cache: true,
bos_token_id: 11,
eos_token_id: 11,
hidden_dropout: 0.,
attention_dropout: 0.,
n_head_kv: None,
alibi: false,
new_decoder_architecture: false,
multi_query: true,
parallel_attn: true,
bias: false,
}
}
fn head_dim(&self) -> usize {
self.hidden_size / self.num_attention_heads
}
fn rotary(&self) -> bool {
!self.alibi
}
}
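// Sanity check (annotation): for falcon7b, head_dim() = 4544 / 71 = 64, and
// since alibi is false, rotary() returns true, so rotary position embeddings
// are used with 64-dimensional heads.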
fn rotate_half(x: &Tensor) -> Result<Tensor> {
let l = x.dim(D::Minus1)?;
let x1 = x.narrow(D::Minus1, 0, l / 2)?;
let x2 = x.narrow(D::Minus1, l / 2, l - l / 2)?;
let x21 = Tensor::cat(&[&x2.neg()?, &x1], D::Minus1)?;
Ok(x21)
}
#[derive(Debug)]
struct FalconRotaryEmbedding {
inv_freq: Tensor,
cache: Option<(usize, Tensor, Tensor)>,
}
impl FalconRotaryEmbedding {
fn load(device: &Device, cfg: &Config) -> Result<Self> {
let head_dim = cfg.head_dim();
let inv_freq: Vec<_> = (0..head_dim)
.step_by(2)
.map(|i| 1f32 / 10000f32.powf(i as f32 / head_dim as f32))
.collect();
Ok(Self {
inv_freq: Tensor::new(inv_freq.as_slice(), device)?,
cache: None,
})
}
fn cos_sin(
&mut self,
seq_len: usize,
device: &Device,
dtype: DType,
) -> Result<(Tensor, Tensor)> {
match &self.cache {
Some((s, cos, sin)) if *s == seq_len => {
return Ok((cos.clone(), sin.clone()));
}
_ => {}
}
let t = Tensor::arange(0, seq_len as u32, device)?.to_dtype(dtype)?;
let inv_freq = self.inv_freq.to_dtype(dtype)?;
let freqs = t.unsqueeze(1)?.matmul(&inv_freq.unsqueeze(0)?)?;
let emb = Tensor::cat(&[&freqs, &freqs], D::Minus1)?;
let cos = emb.cos()?;
let sin = emb.sin()?;
self.cache = Some((seq_len, cos.clone(), sin.clone()));
Ok((cos, sin))
}
fn forward(
&mut self,
query: &Tensor,
key: &Tensor,
past_kv_len: usize,
) -> Result<(Tensor, Tensor)> {
let (_batch, seq_len, _head_dim) = query.dims3()?;
let (cos, sin) = self.cos_sin(MAX_SEQ_LEN, query.device(), query.dtype())?;
let cos = cos.narrow(0, past_kv_len, seq_len)?;
let sin = sin.narrow(0, past_kv_len, seq_len)?;
let qs = (query.broadcast_mul(&cos)? + &rotate_half(query)?.broadcast_mul(&sin)?)?;
let ks = (key.broadcast_mul(&cos)? + &rotate_half(key)?.broadcast_mul(&sin)?)?;
Ok((qs, ks))
}
}
fn masked_fill(on_false: &Tensor, mask: &Tensor, on_true: f32) -> Result<Tensor> {
let shape = mask.shape();
let on_true = Tensor::new(on_true, on_false.device())?.broadcast_as(shape.dims())?;
let m = mask.where_cond(&on_true, on_false)?;
Ok(m)
}
#[derive(Debug)]
struct FalconAttention {
query_key_value: Linear,
dense: Linear,
maybe_rotary: Option<FalconRotaryEmbedding>,
kv_cache: Option<(Tensor, Tensor)>,
inv_norm_factor: f64,
multi_query: bool,
use_cache: bool,
num_heads: usize,
head_dim: usize,
n_head_kv: usize,
}
impl FalconAttention {
fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> {
let maybe_rotary = if cfg.rotary() {
let rotary = FalconRotaryEmbedding::load(vb.device(), cfg)?;
Some(rotary)
} else {
None
};
let head_dim = cfg.head_dim();
let hidden_size = cfg.hidden_size;
let qkv_out_dim = if cfg.multi_query {
hidden_size + 2 * head_dim
} else {
3 * hidden_size
};
let query_key_value = linear(hidden_size, qkv_out_dim, cfg.bias, vb.pp("query_key_value"))?;
let dense = linear(hidden_size, hidden_size, cfg.bias, vb.pp("dense"))?;
Ok(Self {
query_key_value,
dense,
maybe_rotary,
kv_cache: None,
inv_norm_factor: 1. / (head_dim as f64).sqrt(),
multi_query: cfg.multi_query,
use_cache: cfg.use_cache,
num_heads: cfg.num_attention_heads,
n_head_kv: cfg.n_head_kv.unwrap_or(1),
head_dim,
})
}
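    // Worked example (annotation): for falcon7b (multi_query = true), the fused
    // projection maps hidden_size = 4544 to qkv_out_dim = 4544 + 2 * 64 = 4672,
    // i.e. 71 query heads plus one shared 64-dim key head and value head.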
fn split_heads(&self, fused_qkv: &Tensor) -> Result<(Tensor, Tensor, Tensor)> {
let (b_sz, seq_len, _) = fused_qkv.dims3()?;
if !self.multi_query {
let fused_qkv = fused_qkv.reshape((b_sz, seq_len, self.num_heads, 3, self.head_dim))?;
let q = fused_qkv.narrow(D::Minus2, 0, 1)?.squeeze(D::Minus2)?;
let k = fused_qkv.narrow(D::Minus2, 1, 1)?.squeeze(D::Minus2)?;
let v = fused_qkv.narrow(D::Minus2, 2, 1)?.squeeze(D::Minus2)?;
Ok((q, k, v))
} else {
let fused_qkv =
fused_qkv.reshape((b_sz, seq_len, self.num_heads + 2, self.head_dim))?;
let d = fused_qkv.dim(D::Minus2)?;
let q = fused_qkv.narrow(D::Minus2, 0, d - 2)?;
let k = fused_qkv.narrow(D::Minus2, d - 2, 1)?;
let v = fused_qkv.narrow(D::Minus2, d - 1, 1)?;
Ok((q, k, v))
}
}
fn forward(&mut self, x: &Tensor, mask: Option<&Tensor>, past_kv_len: usize) -> Result<Tensor> {
let fused_qkv = self.query_key_value.forward(x)?;
let head_dim = self.head_dim;
let (query, key, value) = self.split_heads(&fused_qkv)?;
let (b_sz, seq_len, _, _) = query.dims4()?;
let query = query
.transpose(1, 2)?
.reshape((b_sz * self.num_heads, seq_len, head_dim))?;
let key = key
.transpose(1, 2)?
.reshape((b_sz * self.n_head_kv, seq_len, head_dim))?;
let value = value
.transpose(1, 2)?
.reshape((b_sz * self.n_head_kv, seq_len, head_dim))?;
let (query, key) = if let Some(r) = &mut self.maybe_rotary {
r.forward(&query, &key, past_kv_len)?
} else {
(query, key)
};
let (mut key, mut value) = (key, value);
if self.use_cache {
if let Some((cache_k, cache_v)) = &self.kv_cache {
// TODO: we could trim the tensors to MAX_SEQ_LEN so that this would work for
// arbitrarily large sizes.
key = Tensor::cat(&[cache_k, &key], 1)?.contiguous()?;
value = Tensor::cat(&[cache_v, &value], 1)?.contiguous()?;
}
self.kv_cache = Some((key.clone(), value.clone()))
}
let query = query.reshape((b_sz * self.num_heads, seq_len, head_dim))?;
let all_len = past_kv_len + seq_len;
let key = key.reshape((b_sz * self.n_head_kv, all_len, head_dim))?;
let value = value.reshape((b_sz * self.n_head_kv, all_len, head_dim))?;
let (key, value) = if self.n_head_kv == 1 {
(
key.broadcast_as((b_sz * self.num_heads, all_len, head_dim))?,
value.broadcast_as((b_sz * self.num_heads, all_len, head_dim))?,
)
} else {
(key, value)
};
// Only handle the case where alibi is None here, and non-flash attention.
let attention_scores = (query.matmul(&key.t()?)? * self.inv_norm_factor)?;
let attention_scores = match mask {
None => attention_scores,
Some(mask) => {
let mask = masked_fill(&mask.to_dtype(DType::F32)?, mask, -1e9)?
.to_dtype(query.dtype())?;
attention_scores.broadcast_add(&mask.squeeze(1)?)?
}
};
let attention_scores =
candle_nn::ops::softmax(&attention_scores.to_dtype(DType::F32)?, D::Minus1)?
.to_dtype(x.dtype())?;
let attn_output = attention_scores
.matmul(&value)?
.reshape((b_sz, self.num_heads, seq_len, head_dim))?
.transpose(1, 2)?
.reshape((b_sz, seq_len, self.num_heads * head_dim))?;
let attn_output = self.dense.forward(&attn_output)?;
Ok(attn_output)
}
}
#[derive(Debug)]
struct FalconMlp {
dense_h_to_4h: Linear,
dense_4h_to_h: Linear,
}
impl FalconMlp {
fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> {
let h = cfg.hidden_size;
let b = cfg.bias;
let dense_h_to_4h = linear(h, 4 * h, b, vb.pp("dense_h_to_4h"))?;
let dense_4h_to_h = linear(4 * h, h, b, vb.pp("dense_4h_to_h"))?;
Ok(Self {
dense_h_to_4h,
dense_4h_to_h,
})
}
fn forward(&self, x: &Tensor) -> Result<Tensor> {
let x = self.dense_h_to_4h.forward(x)?.gelu()?;
let x = self.dense_4h_to_h.forward(&x)?;
Ok(x)
}
}
#[derive(Debug)]
struct FalconDecoderLayer {
inp_layernorm: LayerNorm,
self_attention: FalconAttention,
post_attention_layernorm: Option<LayerNorm>,
mlp: FalconMlp,
parallel_attn: bool,
}
impl FalconDecoderLayer {
fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> {
let mlp = FalconMlp::load(vb.pp("mlp"), cfg)?;
let inp_layernorm = layer_norm(
cfg.hidden_size,
cfg.layer_norm_epsilon,
vb.pp("input_layernorm"),
)?;
let self_attention = FalconAttention::load(vb.pp("self_attention"), cfg)?;
let post_attention_layernorm = if cfg.parallel_attn {
None
} else {
let ln = layer_norm(
cfg.hidden_size,
cfg.layer_norm_epsilon,
vb.pp("post_attention_layernorm"),
)?;
Some(ln)
};
Ok(Self {
inp_layernorm,
self_attention,
post_attention_layernorm,
mlp,
parallel_attn: cfg.parallel_attn,
})
}
fn forward(&mut self, x: &Tensor, mask: Option<&Tensor>, past_kv_len: usize) -> Result<Tensor> {
let residual = x.clone();
let ln_attn = self.inp_layernorm.forward(x)?;
let attn_output = self.self_attention.forward(&ln_attn, mask, past_kv_len)?;
let (residual, ln_mlp) = match &self.post_attention_layernorm {
None => (residual, ln_attn),
Some(pal) => {
// This should include some dropout.
let residual = (&attn_output + &residual)?;
let ln_mlp = pal.forward(&residual)?;
(residual, ln_mlp)
}
};
let mlp_output = self.mlp.forward(&ln_mlp)?;
let mlp_output = if self.parallel_attn {
(mlp_output + attn_output)?
} else {
mlp_output
};
let output = (mlp_output + residual)?;
Ok(output)
}
}
#[derive(Debug)]
pub struct Falcon {
word_embeddings: Embedding,
blocks: Vec<FalconDecoderLayer>,
ln_f: LayerNorm,
lm_head: Linear,
config: Config,
}
fn make_causal_mask(t: usize) -> Result<Tensor> {
let mask: Vec<_> = (0..t)
.flat_map(|i| (0..t).map(move |j| u8::from(j > i)))
.collect();
let mask = Tensor::from_slice(&mask, (t, t), &Device::Cpu)?;
Ok(mask)
}
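// For example, make_causal_mask(3) yields (1 marks a masked future position):
// [[0, 1, 1],
//  [0, 0, 1],
//  [0, 0, 0]]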
fn prepare_attn_mask(b_sz: usize, seq_len: usize) -> Result<Tensor> {
// let mask = Tensor::ones((b_sz, seq_len), DType::U32, &Device::Cpu)?;
let mask = make_causal_mask(seq_len)?;
let mask = mask.broadcast_as((b_sz, 1, seq_len, seq_len))?;
Ok(mask)
}
impl Falcon {
pub fn config(&self) -> &Config {
&self.config
}
pub fn load(vb: VarBuilder, cfg: Config) -> Result<Self> {
let word_embeddings = embedding(
cfg.vocab_size,
cfg.hidden_size,
vb.pp("transformer.word_embeddings"),
)?;
let blocks = (0..cfg.num_hidden_layers)
.map(|i| FalconDecoderLayer::load(vb.pp(&format!("transformer.h.{i}")), &cfg))
.collect::<Result<Vec<_>>>()?;
let ln_f = layer_norm(
cfg.hidden_size,
cfg.layer_norm_epsilon,
vb.pp("transformer.ln_f"),
)?;
let lm_head = linear(cfg.hidden_size, cfg.vocab_size, false, vb.pp("lm_head"))?;
Ok(Self {
word_embeddings,
blocks,
ln_f,
lm_head,
config: cfg,
})
}
pub fn forward(&mut self, input_ids: &Tensor) -> Result<Tensor> {
let (b_sz, seq_len) = input_ids.dims2()?;
let mut hidden_state = self.word_embeddings.forward(input_ids)?;
let past_kv_len = match &self.blocks[0].self_attention.kv_cache {
Some((k, _)) => k.dim(1)?,
None => 0,
};
let causal_mask = if seq_len <= 1 {
None
} else {
Some(prepare_attn_mask(b_sz, seq_len)?.to_device(input_ids.device())?)
};
for block in self.blocks.iter_mut() {
hidden_state = block.forward(&hidden_state, causal_mask.as_ref(), past_kv_len)?;
}
let hidden_state = self.ln_f.forward(&hidden_state)?;
let hidden_state = hidden_state.narrow(1, seq_len - 1, 1)?;
let logits = self.lm_head.forward(&hidden_state)?.squeeze(1)?;
Ok(logits)
}
}
|
candle/candle-transformers/src/models/falcon.rs/0
|
{
"file_path": "candle/candle-transformers/src/models/falcon.rs",
"repo_id": "candle",
"token_count": 8599
}
| 39
|
use candle::DType;
use serde::Deserialize;
pub const DTYPE: DType = DType::F32;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum PositionEmbeddingType {
Absolute,
Alibi,
}
// https://github.com/huggingface/transformers/blob/main/src/transformers/models/persimmon/configuration_persimmon.py
#[derive(Debug, Clone, PartialEq, Deserialize)]
pub struct Config {
pub vocab_size: usize,
pub hidden_size: usize,
pub intermediate_size: usize,
pub num_hidden_layers: usize,
pub num_attention_heads: usize,
pub num_key_value_heads: usize,
pub hidden_act: candle_nn::Activation,
pub max_position_embeddings: usize,
pub initializer_range: f64,
pub layer_norm_eps: f64,
pub rms_norm_eps: f64,
pub use_cache: bool,
pub tie_word_embeddings: bool,
pub rope_theta: f64,
pub qk_layernorm: bool,
pub partial_rotary_factor: f64,
}
impl Config {
pub fn base_8b() -> Self {
// https://huggingface.co/adept/persimmon-8b-base/blob/main/config.json
Self {
hidden_act: candle_nn::Activation::Relu,
hidden_size: 4096,
initializer_range: 0.02,
intermediate_size: 16384,
layer_norm_eps: 1e-05,
max_position_embeddings: 16384,
num_attention_heads: 64,
num_hidden_layers: 36,
num_key_value_heads: 64,
qk_layernorm: true,
rms_norm_eps: 1e-06,
rope_theta: 25000.0,
tie_word_embeddings: false,
use_cache: true,
vocab_size: 262144,
partial_rotary_factor: 0.5,
}
}
}
|
candle/candle-transformers/src/models/persimmon.rs/0
|
{
"file_path": "candle/candle-transformers/src/models/persimmon.rs",
"repo_id": "candle",
"token_count": 814
}
| 40
|
use crate::models::with_tracing::{linear, linear_no_bias, Linear, RmsNorm};
use candle::{DType, Device, Module, Result, Tensor, D};
use candle_nn::{Activation, VarBuilder};
use std::sync::Arc;
#[derive(Debug, Clone, PartialEq, serde::Deserialize)]
pub struct Config {
pub vocab_size: usize,
pub hidden_size: usize,
pub intermediate_size: usize,
pub num_hidden_layers: usize,
pub num_attention_heads: usize,
pub num_key_value_heads: usize,
pub max_position_embeddings: usize,
pub sliding_window: usize,
pub max_window_layers: usize,
pub tie_word_embeddings: bool,
pub rope_theta: f64,
pub rms_norm_eps: f64,
pub use_sliding_window: bool,
pub hidden_act: Activation,
pub decoder_sparse_step: usize,
pub moe_intermediate_size: usize,
pub shared_expert_intermediate_size: usize,
pub num_experts_per_tok: usize,
pub num_experts: usize,
pub norm_topk_prob: bool,
}
#[derive(Debug, Clone)]
struct RotaryEmbedding {
sin: Tensor,
cos: Tensor,
}
fn rotate_half(xs: &Tensor) -> Result<Tensor> {
let last_dim = xs.dim(D::Minus1)?;
let xs1 = xs.narrow(D::Minus1, 0, last_dim / 2)?;
let xs2 = xs.narrow(D::Minus1, last_dim / 2, last_dim - last_dim / 2)?;
Tensor::cat(&[&xs2.neg()?, &xs1], D::Minus1)
}
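// E.g. for a last dimension of [x0, x1, x2, x3], rotate_half returns
// [-x2, -x3, x0, x1]: the two halves are swapped and the new first half is
// negated, which implements the complex rotation used by RoPE below.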
impl RotaryEmbedding {
fn new(dtype: DType, cfg: &Config, dev: &Device) -> Result<Self> {
let dim = cfg.hidden_size / cfg.num_attention_heads;
let max_seq_len = cfg.max_position_embeddings;
let inv_freq: Vec<_> = (0..dim)
.step_by(2)
.map(|i| 1f32 / cfg.rope_theta.powf(i as f64 / dim as f64) as f32)
.collect();
let inv_freq_len = inv_freq.len();
let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?.to_dtype(dtype)?;
let t = Tensor::arange(0u32, max_seq_len as u32, dev)?
.to_dtype(dtype)?
.reshape((max_seq_len, 1))?;
let freqs = t.matmul(&inv_freq)?;
let freqs = Tensor::cat(&[&freqs, &freqs], D::Minus1)?;
Ok(Self {
sin: freqs.sin()?,
cos: freqs.cos()?,
})
}
fn apply_rotary_emb_qkv(
&self,
q: &Tensor,
k: &Tensor,
seqlen_offset: usize,
) -> Result<(Tensor, Tensor)> {
let (_b_sz, _h, seq_len, _n_embd) = q.dims4()?;
let cos = self.cos.narrow(0, seqlen_offset, seq_len)?;
let sin = self.sin.narrow(0, seqlen_offset, seq_len)?;
let cos = cos.unsqueeze(0)?.unsqueeze(0)?; // (1, 1, seq_len, dim)
let sin = sin.unsqueeze(0)?.unsqueeze(0)?; // (1, 1, seq_len, dim)
let q_embed = (q.broadcast_mul(&cos)? + rotate_half(q)?.broadcast_mul(&sin))?;
let k_embed = (k.broadcast_mul(&cos)? + rotate_half(k)?.broadcast_mul(&sin))?;
Ok((q_embed, k_embed))
}
}
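// Note: `cos`/`sin` are built from `cat(&[freqs, freqs])`, so entries i and
// i + dim/2 share the same frequency. Combined with `rotate_half`,
// q * cos + rotate_half(q) * sin rotates each (x_i, x_{i+dim/2}) pair by a
// position-dependent angle, i.e. the usual RoPE formulation.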
#[derive(Debug, Clone)]
#[allow(clippy::upper_case_acronyms)]
struct MLP {
gate_proj: Linear,
up_proj: Linear,
down_proj: Linear,
act_fn: Activation,
}
impl MLP {
fn new(intermediate_sz: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> {
let hidden_sz = cfg.hidden_size;
let gate_proj = linear_no_bias(hidden_sz, intermediate_sz, vb.pp("gate_proj"))?;
let up_proj = linear_no_bias(hidden_sz, intermediate_sz, vb.pp("up_proj"))?;
let down_proj = linear_no_bias(intermediate_sz, hidden_sz, vb.pp("down_proj"))?;
Ok(Self {
gate_proj,
up_proj,
down_proj,
act_fn: cfg.hidden_act,
})
}
}
impl Module for MLP {
fn forward(&self, xs: &Tensor) -> Result<Tensor> {
let lhs = xs.apply(&self.gate_proj)?.apply(&self.act_fn)?;
let rhs = xs.apply(&self.up_proj)?;
(lhs * rhs)?.apply(&self.down_proj)
}
}
#[derive(Debug, Clone)]
struct Attention {
q_proj: Linear,
k_proj: Linear,
v_proj: Linear,
o_proj: Linear,
num_heads: usize,
num_kv_heads: usize,
num_kv_groups: usize,
head_dim: usize,
hidden_size: usize,
rotary_emb: Arc<RotaryEmbedding>,
kv_cache: Option<(Tensor, Tensor)>,
}
impl Attention {
fn new(rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder) -> Result<Self> {
let hidden_sz = cfg.hidden_size;
let num_heads = cfg.num_attention_heads;
let num_kv_heads = cfg.num_key_value_heads;
let num_kv_groups = num_heads / num_kv_heads;
let head_dim = hidden_sz / num_heads;
let q_proj = linear(hidden_sz, num_heads * head_dim, vb.pp("q_proj"))?;
let k_proj = linear(hidden_sz, num_kv_heads * head_dim, vb.pp("k_proj"))?;
let v_proj = linear(hidden_sz, num_kv_heads * head_dim, vb.pp("v_proj"))?;
let o_proj = linear_no_bias(num_heads * head_dim, hidden_sz, vb.pp("o_proj"))?;
Ok(Self {
q_proj,
k_proj,
v_proj,
o_proj,
num_heads,
num_kv_heads,
num_kv_groups,
head_dim,
hidden_size: hidden_sz,
rotary_emb,
kv_cache: None,
})
}
fn repeat_kv(&self, xs: Tensor) -> Result<Tensor> {
let n_rep = self.num_kv_groups;
if n_rep == 1 {
Ok(xs)
} else {
let (b_sz, num_kv_heads, seq_len, head_dim) = xs.dims4()?;
xs.unsqueeze(2)?
.expand((b_sz, num_kv_heads, n_rep, seq_len, head_dim))?
.reshape((b_sz, num_kv_heads * n_rep, seq_len, head_dim))
}
}
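    // E.g. with 8 query heads and 2 key/value heads, n_rep = 4 and a
    // (b, 2, seq, head_dim) tensor is expanded to (b, 8, seq, head_dim) so
    // that grouped-query attention can reuse the standard attention matmuls.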
fn forward(
&mut self,
xs: &Tensor,
attention_mask: Option<&Tensor>,
seqlen_offset: usize,
) -> Result<Tensor> {
let (b_sz, q_len, _) = xs.dims3()?;
let query_states = self.q_proj.forward(xs)?;
let key_states = self.k_proj.forward(xs)?;
let value_states = self.v_proj.forward(xs)?;
let query_states = query_states
.reshape((b_sz, q_len, self.num_heads, self.head_dim))?
.transpose(1, 2)?;
let key_states = key_states
.reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))?
.transpose(1, 2)?;
let value_states = value_states
.reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))?
.transpose(1, 2)?;
let (query_states, key_states) =
self.rotary_emb
.apply_rotary_emb_qkv(&query_states, &key_states, seqlen_offset)?;
let (key_states, value_states) = match &self.kv_cache {
None => (key_states, value_states),
Some((prev_k, prev_v)) => {
let key_states = Tensor::cat(&[prev_k, &key_states], 2)?;
let value_states = Tensor::cat(&[prev_v, &value_states], 2)?;
(key_states, value_states)
}
};
self.kv_cache = Some((key_states.clone(), value_states.clone()));
let key_states = self.repeat_kv(key_states)?.contiguous()?;
let value_states = self.repeat_kv(value_states)?.contiguous()?;
let attn_output = {
let scale = 1f64 / f64::sqrt(self.head_dim as f64);
let attn_weights = (query_states.matmul(&key_states.transpose(2, 3)?)? * scale)?;
let attn_weights = match attention_mask {
None => attn_weights,
Some(mask) => attn_weights.broadcast_add(mask)?,
};
let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?;
attn_weights.matmul(&value_states)?
};
attn_output
.transpose(1, 2)?
.reshape((b_sz, q_len, self.hidden_size))?
.apply(&self.o_proj)
}
fn clear_kv_cache(&mut self) {
self.kv_cache = None
}
}
// https://github.com/huggingface/transformers/blob/536ea2aca234fb48c5c69769431d643b0d93b233/src/transformers/models/qwen2_moe/modeling_qwen2_moe.py#L800
#[derive(Debug, Clone)]
struct SparseMoeBlock {
gate: Linear,
experts: Vec<MLP>,
shared_expert: MLP,
shared_expert_gate: Linear,
norm_topk_prob: bool,
num_experts_per_tok: usize,
}
impl SparseMoeBlock {
fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> {
let gate = linear_no_bias(cfg.hidden_size, cfg.num_experts, vb.pp("gate"))?;
let mut experts = Vec::with_capacity(cfg.num_experts);
let vb_e = vb.pp("experts");
for idx in 0..cfg.num_experts {
let expert = MLP::new(cfg.moe_intermediate_size, cfg, vb_e.pp(idx))?;
experts.push(expert)
}
let shared_expert = MLP::new(
cfg.shared_expert_intermediate_size,
cfg,
vb.pp("shared_expert"),
)?;
let shared_expert_gate = linear_no_bias(cfg.hidden_size, 1, vb.pp("shared_expert_gate"))?;
Ok(Self {
gate,
experts,
shared_expert,
shared_expert_gate,
norm_topk_prob: cfg.norm_topk_prob,
num_experts_per_tok: cfg.num_experts_per_tok,
})
}
}
impl Module for SparseMoeBlock {
fn forward(&self, xs: &Tensor) -> Result<Tensor> {
let (b_size, seq_len, hidden_dim) = xs.dims3()?;
let xs = xs.reshape(((), hidden_dim))?;
let router_logits = xs.apply(&self.gate)?;
let routing_weights = candle_nn::ops::softmax_last_dim(&router_logits)?;
// In order to extract topk, we extract the data from the tensor and manipulate it
// directly. Maybe we will want to use some custom ops instead at some point.
let routing_weights = routing_weights.to_dtype(DType::F32)?.to_vec2::<f32>()?;
// routing_weights, selected_experts = torch.topk(routing_weights, self.top_k, dim=-1)
// top_x contains the row indexes to evaluate for each expert.
let mut top_x = vec![vec![]; self.experts.len()];
let mut selected_experts = vec![vec![]; self.experts.len()];
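        // Note: `top_x[e]` collects the row (token) indices routed to expert
        // `e`, while `selected_experts[e]` holds the matching routing weights
        // (normalized over the top-k when `norm_topk_prob` is set).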
for (row_idx, rw) in routing_weights.iter().enumerate() {
let mut dst = (0..rw.len() as u32).collect::<Vec<u32>>();
dst.sort_by(|&i, &j| rw[j as usize].total_cmp(&rw[i as usize]));
let mut sum_routing_weights = 0f32;
for &expert_idx in dst.iter().take(self.num_experts_per_tok) {
let expert_idx = expert_idx as usize;
let routing_weight = rw[expert_idx];
sum_routing_weights += routing_weight;
top_x[expert_idx].push(row_idx as u32);
}
for &expert_idx in dst.iter().take(self.num_experts_per_tok) {
let expert_idx = expert_idx as usize;
let routing_weight = if self.norm_topk_prob {
rw[expert_idx] / sum_routing_weights
} else {
rw[expert_idx]
};
selected_experts[expert_idx].push(routing_weight)
}
}
let mut ys = xs.zeros_like()?;
for (expert_idx, expert_layer) in self.experts.iter().enumerate() {
let top_x = &top_x[expert_idx];
if top_x.is_empty() {
continue;
}
let top_x = Tensor::new(top_x.as_slice(), xs.device())?;
let selected_experts =
Tensor::new(selected_experts[expert_idx].as_slice(), xs.device())?
.reshape(((), 1))?
.to_dtype(xs.dtype())?;
// Index the correct hidden states and compute the expert hidden state for
// the current expert. We need to make sure to multiply the output hidden
// states by `routing_weights` on the corresponding tokens (top-1 and top-2)
let current_state = xs.index_select(&top_x, 0)?.reshape(((), hidden_dim))?;
// current_hidden_states = expert_layer(current_state, routing_weights[top_x_list, idx_list, None])
            let current_hidden_states = expert_layer.forward(&current_state)?;
            let current_hidden_states = current_hidden_states.broadcast_mul(&selected_experts)?;
            ys = ys.index_add(&top_x, &current_hidden_states, 0)?;
}
let shared_expert_output = xs.apply(&self.shared_expert)?;
let shared_expert_output = shared_expert_output.broadcast_mul(&candle_nn::ops::sigmoid(
&xs.apply(&self.shared_expert_gate)?,
)?)?;
let ys = (ys + shared_expert_output)?;
let ys = ys.reshape((b_size, seq_len, hidden_dim))?;
Ok(ys)
}
}
#[derive(Debug, Clone)]
enum MlpOrMoeBlock {
Mlp(MLP),
MoeBlock(SparseMoeBlock),
}
impl Module for MlpOrMoeBlock {
fn forward(&self, xs: &Tensor) -> Result<Tensor> {
match self {
Self::MoeBlock(m) => m.forward(xs),
Self::Mlp(m) => m.forward(xs),
}
}
}
#[derive(Debug, Clone)]
struct DecoderLayer {
self_attn: Attention,
mlp: MlpOrMoeBlock,
input_layernorm: RmsNorm,
post_attention_layernorm: RmsNorm,
}
impl DecoderLayer {
fn new(
layer_idx: usize,
rotary_emb: Arc<RotaryEmbedding>,
cfg: &Config,
vb: VarBuilder,
) -> Result<Self> {
let self_attn = Attention::new(rotary_emb, cfg, vb.pp("self_attn"))?;
let mlp = if cfg.num_experts > 0 && (layer_idx + 1) % cfg.decoder_sparse_step == 0 {
MlpOrMoeBlock::MoeBlock(SparseMoeBlock::new(cfg, vb.pp("mlp"))?)
} else {
MlpOrMoeBlock::Mlp(MLP::new(cfg.intermediate_size, cfg, vb.pp("mlp"))?)
};
let input_layernorm =
RmsNorm::new(cfg.hidden_size, cfg.rms_norm_eps, vb.pp("input_layernorm"))?;
let post_attention_layernorm = RmsNorm::new(
cfg.hidden_size,
cfg.rms_norm_eps,
vb.pp("post_attention_layernorm"),
)?;
Ok(Self {
self_attn,
mlp,
input_layernorm,
post_attention_layernorm,
})
}
fn forward(
&mut self,
xs: &Tensor,
attention_mask: Option<&Tensor>,
seqlen_offset: usize,
) -> Result<Tensor> {
let residual = xs;
let xs = self.input_layernorm.forward(xs)?;
let xs = self.self_attn.forward(&xs, attention_mask, seqlen_offset)?;
let xs = (xs + residual)?;
let residual = &xs;
let xs = xs.apply(&self.post_attention_layernorm)?.apply(&self.mlp)?;
residual + xs
}
fn clear_kv_cache(&mut self) {
self.self_attn.clear_kv_cache()
}
}
#[derive(Debug, Clone)]
pub struct Model {
embed_tokens: candle_nn::Embedding,
layers: Vec<DecoderLayer>,
norm: RmsNorm,
lm_head: Linear,
sliding_window: usize,
device: Device,
dtype: DType,
}
impl Model {
pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> {
let vb_m = vb.pp("model");
let embed_tokens =
candle_nn::embedding(cfg.vocab_size, cfg.hidden_size, vb_m.pp("embed_tokens"))?;
let rotary_emb = Arc::new(RotaryEmbedding::new(vb.dtype(), cfg, vb_m.device())?);
let mut layers = Vec::with_capacity(cfg.num_hidden_layers);
let vb_l = vb_m.pp("layers");
for layer_idx in 0..cfg.num_hidden_layers {
let layer = DecoderLayer::new(layer_idx, rotary_emb.clone(), cfg, vb_l.pp(layer_idx))?;
layers.push(layer)
}
let norm = RmsNorm::new(cfg.hidden_size, cfg.rms_norm_eps, vb_m.pp("norm"))?;
let lm_head = linear_no_bias(cfg.hidden_size, cfg.vocab_size, vb.pp("lm_head"))?;
Ok(Self {
embed_tokens,
layers,
norm,
lm_head,
sliding_window: cfg.sliding_window,
device: vb.device().clone(),
dtype: vb.dtype(),
})
}
fn prepare_decoder_attention_mask(
&self,
b_size: usize,
tgt_len: usize,
seqlen_offset: usize,
) -> Result<Tensor> {
        // Causal mask combined with a sliding window: position i may attend
        // to positions j with i - sliding_window <= j <= i.
let mask: Vec<_> = (0..tgt_len)
.flat_map(|i| {
(0..tgt_len).map(move |j| {
if i < j || j + self.sliding_window < i {
f32::NEG_INFINITY
} else {
0.
}
})
})
.collect();
let mask = Tensor::from_slice(&mask, (tgt_len, tgt_len), &self.device)?;
let mask = if seqlen_offset > 0 {
let mask0 = Tensor::zeros((tgt_len, seqlen_offset), DType::F32, &self.device)?;
Tensor::cat(&[&mask0, &mask], D::Minus1)?
} else {
mask
};
mask.expand((b_size, 1, tgt_len, tgt_len + seqlen_offset))?
.to_dtype(self.dtype)
}
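    // E.g. with tgt_len = 4 and sliding_window = 2, row i allows columns j
    // with i - 2 <= j <= i, so token 3 attends to tokens 1..=3 while token 0
    // only attends to itself; all other entries are -inf before the softmax.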
pub fn forward(&mut self, input_ids: &Tensor, seqlen_offset: usize) -> Result<Tensor> {
let (b_size, seq_len) = input_ids.dims2()?;
let attention_mask = if seq_len <= 1 {
None
} else {
let mask = self.prepare_decoder_attention_mask(b_size, seq_len, seqlen_offset)?;
Some(mask)
};
let mut xs = self.embed_tokens.forward(input_ids)?;
for layer in self.layers.iter_mut() {
xs = layer.forward(&xs, attention_mask.as_ref(), seqlen_offset)?
}
xs.narrow(1, seq_len - 1, 1)?
.apply(&self.norm)?
.apply(&self.lm_head)
}
pub fn clear_kv_cache(&mut self) {
for layer in self.layers.iter_mut() {
layer.clear_kv_cache()
}
}
}
|
candle/candle-transformers/src/models/qwen2_moe.rs/0
|
{
"file_path": "candle/candle-transformers/src/models/qwen2_moe.rs",
"repo_id": "candle",
"token_count": 9128
}
| 41
|
use super::schedulers::{betas_for_alpha_bar, BetaSchedule, PredictionType};
use candle::{Result, Tensor};
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum DDPMVarianceType {
FixedSmall,
FixedSmallLog,
FixedLarge,
FixedLargeLog,
Learned,
}
impl Default for DDPMVarianceType {
fn default() -> Self {
Self::FixedSmall
}
}
#[derive(Debug, Clone)]
pub struct DDPMSchedulerConfig {
/// The value of beta at the beginning of training.
pub beta_start: f64,
/// The value of beta at the end of training.
pub beta_end: f64,
/// How beta evolved during training.
pub beta_schedule: BetaSchedule,
/// Option to predicted sample between -1 and 1 for numerical stability.
pub clip_sample: bool,
/// Option to clip the variance used when adding noise to the denoised sample.
pub variance_type: DDPMVarianceType,
/// prediction type of the scheduler function
pub prediction_type: PredictionType,
/// number of diffusion steps used to train the model.
pub train_timesteps: usize,
}
impl Default for DDPMSchedulerConfig {
fn default() -> Self {
Self {
beta_start: 0.00085,
beta_end: 0.012,
beta_schedule: BetaSchedule::ScaledLinear,
clip_sample: false,
variance_type: DDPMVarianceType::FixedSmall,
prediction_type: PredictionType::Epsilon,
train_timesteps: 1000,
}
}
}
pub struct DDPMScheduler {
alphas_cumprod: Vec<f64>,
init_noise_sigma: f64,
timesteps: Vec<usize>,
step_ratio: usize,
pub config: DDPMSchedulerConfig,
}
impl DDPMScheduler {
pub fn new(inference_steps: usize, config: DDPMSchedulerConfig) -> Result<Self> {
let betas = match config.beta_schedule {
BetaSchedule::ScaledLinear => super::utils::linspace(
config.beta_start.sqrt(),
config.beta_end.sqrt(),
config.train_timesteps,
)?
.sqr()?,
BetaSchedule::Linear => {
super::utils::linspace(config.beta_start, config.beta_end, config.train_timesteps)?
}
BetaSchedule::SquaredcosCapV2 => betas_for_alpha_bar(config.train_timesteps, 0.999)?,
};
let betas = betas.to_vec1::<f64>()?;
let mut alphas_cumprod = Vec::with_capacity(betas.len());
for &beta in betas.iter() {
let alpha = 1.0 - beta;
alphas_cumprod.push(alpha * *alphas_cumprod.last().unwrap_or(&1f64))
}
// min(train_timesteps, inference_steps)
// https://github.com/huggingface/diffusers/blob/8331da46837be40f96fbd24de6a6fb2da28acd11/src/diffusers/schedulers/scheduling_ddpm.py#L187
let inference_steps = inference_steps.min(config.train_timesteps);
// arange the number of the scheduler's timesteps
let step_ratio = config.train_timesteps / inference_steps;
let timesteps: Vec<usize> = (0..inference_steps).map(|s| s * step_ratio).rev().collect();
Ok(Self {
alphas_cumprod,
init_noise_sigma: 1.0,
timesteps,
step_ratio,
config,
})
}
fn get_variance(&self, timestep: usize) -> f64 {
let prev_t = timestep as isize - self.step_ratio as isize;
let alpha_prod_t = self.alphas_cumprod[timestep];
let alpha_prod_t_prev = if prev_t >= 0 {
self.alphas_cumprod[prev_t as usize]
} else {
1.0
};
let current_beta_t = 1. - alpha_prod_t / alpha_prod_t_prev;
// For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf)
// and sample from it to get previous sample
// x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample
let variance = (1. - alpha_prod_t_prev) / (1. - alpha_prod_t) * current_beta_t;
// retrieve variance
match self.config.variance_type {
DDPMVarianceType::FixedSmall => variance.max(1e-20),
// for rl-diffuser https://arxiv.org/abs/2205.09991
DDPMVarianceType::FixedSmallLog => {
let variance = variance.max(1e-20).ln();
(variance * 0.5).exp()
}
DDPMVarianceType::FixedLarge => current_beta_t,
DDPMVarianceType::FixedLargeLog => current_beta_t.ln(),
DDPMVarianceType::Learned => variance,
}
}
pub fn timesteps(&self) -> &[usize] {
self.timesteps.as_slice()
}
/// Ensures interchangeability with schedulers that need to scale the denoising model input
/// depending on the current timestep.
pub fn scale_model_input(&self, sample: Tensor, _timestep: usize) -> Tensor {
sample
}
pub fn step(&self, model_output: &Tensor, timestep: usize, sample: &Tensor) -> Result<Tensor> {
let prev_t = timestep as isize - self.step_ratio as isize;
// https://github.com/huggingface/diffusers/blob/df2b548e893ccb8a888467c2508756680df22821/src/diffusers/schedulers/scheduling_ddpm.py#L272
// 1. compute alphas, betas
let alpha_prod_t = self.alphas_cumprod[timestep];
let alpha_prod_t_prev = if prev_t >= 0 {
self.alphas_cumprod[prev_t as usize]
} else {
1.0
};
let beta_prod_t = 1. - alpha_prod_t;
let beta_prod_t_prev = 1. - alpha_prod_t_prev;
let current_alpha_t = alpha_prod_t / alpha_prod_t_prev;
let current_beta_t = 1. - current_alpha_t;
// 2. compute predicted original sample from predicted noise also called "predicted x_0" of formula (15)
let mut pred_original_sample = match self.config.prediction_type {
PredictionType::Epsilon => {
((sample - model_output * beta_prod_t.sqrt())? / alpha_prod_t.sqrt())?
}
PredictionType::Sample => model_output.clone(),
PredictionType::VPrediction => {
((sample * alpha_prod_t.sqrt())? - model_output * beta_prod_t.sqrt())?
}
};
// 3. clip predicted x_0
if self.config.clip_sample {
pred_original_sample = pred_original_sample.clamp(-1f32, 1f32)?;
}
// 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
// See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
let pred_original_sample_coeff = (alpha_prod_t_prev.sqrt() * current_beta_t) / beta_prod_t;
let current_sample_coeff = current_alpha_t.sqrt() * beta_prod_t_prev / beta_prod_t;
// 5. Compute predicted previous sample µ_t
// See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
let pred_prev_sample = ((&pred_original_sample * pred_original_sample_coeff)?
+ sample * current_sample_coeff)?;
// https://github.com/huggingface/diffusers/blob/df2b548e893ccb8a888467c2508756680df22821/src/diffusers/schedulers/scheduling_ddpm.py#L305
// 6. Add noise
let mut variance = model_output.zeros_like()?;
if timestep > 0 {
let variance_noise = model_output.randn_like(0., 1.)?;
if self.config.variance_type == DDPMVarianceType::FixedSmallLog {
variance = (variance_noise * self.get_variance(timestep))?;
} else {
variance = (variance_noise * self.get_variance(timestep).sqrt())?;
}
}
&pred_prev_sample + variance
}
pub fn add_noise(
&self,
original_samples: &Tensor,
noise: Tensor,
timestep: usize,
) -> Result<Tensor> {
(original_samples * self.alphas_cumprod[timestep].sqrt())?
+ noise * (1. - self.alphas_cumprod[timestep]).sqrt()
}
pub fn init_noise_sigma(&self) -> f64 {
self.init_noise_sigma
}
}
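// Illustrative denoising loop (a sketch; `unet` and the initial `latents`
// are placeholders, not part of this file):
//   let scheduler = DDPMScheduler::new(30, DDPMSchedulerConfig::default())?;
//   for &t in scheduler.timesteps() {
//       let input = scheduler.scale_model_input(latents.clone(), t);
//       let noise_pred = unet.forward(&input, t)?;
//       latents = scheduler.step(&noise_pred, t, &latents)?;
//   }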
|
candle/candle-transformers/src/models/stable_diffusion/ddpm.rs/0
|
{
"file_path": "candle/candle-transformers/src/models/stable_diffusion/ddpm.rs",
"repo_id": "candle",
"token_count": 3662
}
| 42
|
// Audio processing code, adapted from whisper.cpp
// https://github.com/ggerganov/whisper.cpp
use candle::utils::get_num_threads;
use std::sync::Arc;
use std::thread;
pub trait Float:
num_traits::Float + num_traits::FloatConst + num_traits::NumAssign + Send + Sync
{
}
impl Float for f32 {}
impl Float for f64 {}
// https://github.com/ggerganov/whisper.cpp/blob/4774d2feb01a772a15de81ffc34b34a1f294f020/whisper.cpp#L2357
fn fft<T: Float>(inp: &[T]) -> Vec<T> {
let n = inp.len();
let zero = T::zero();
if n == 1 {
return vec![inp[0], zero];
}
if n % 2 == 1 {
return dft(inp);
}
let mut out = vec![zero; n * 2];
let mut even = Vec::with_capacity(n / 2);
let mut odd = Vec::with_capacity(n / 2);
for (i, &inp) in inp.iter().enumerate() {
if i % 2 == 0 {
even.push(inp)
} else {
odd.push(inp);
}
}
let even_fft = fft(&even);
let odd_fft = fft(&odd);
let two_pi = T::PI() + T::PI();
let n_t = T::from(n).unwrap();
for k in 0..n / 2 {
let k_t = T::from(k).unwrap();
let theta = two_pi * k_t / n_t;
let re = theta.cos();
let im = -theta.sin();
let re_odd = odd_fft[2 * k];
let im_odd = odd_fft[2 * k + 1];
out[2 * k] = even_fft[2 * k] + re * re_odd - im * im_odd;
out[2 * k + 1] = even_fft[2 * k + 1] + re * im_odd + im * re_odd;
out[2 * (k + n / 2)] = even_fft[2 * k] - re * re_odd + im * im_odd;
out[2 * (k + n / 2) + 1] = even_fft[2 * k + 1] - re * im_odd - im * re_odd;
}
out
}
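// Both `fft` and `dft` return interleaved complex values,
// [re_0, im_0, re_1, im_1, ...], so the output is twice as long as the
// (real-valued) input.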
// https://github.com/ggerganov/whisper.cpp/blob/4774d2feb01a772a15de81ffc34b34a1f294f020/whisper.cpp#L2337
fn dft<T: Float>(inp: &[T]) -> Vec<T> {
let zero = T::zero();
let n = inp.len();
let two_pi = T::PI() + T::PI();
let mut out = Vec::with_capacity(2 * n);
let n_t = T::from(n).unwrap();
for k in 0..n {
let k_t = T::from(k).unwrap();
let mut re = zero;
let mut im = zero;
for (j, &inp) in inp.iter().enumerate() {
let j_t = T::from(j).unwrap();
let angle = two_pi * k_t * j_t / n_t;
re += inp * angle.cos();
im -= inp * angle.sin();
}
out.push(re);
out.push(im);
}
out
}
#[allow(clippy::too_many_arguments)]
// https://github.com/ggerganov/whisper.cpp/blob/4774d2feb01a772a15de81ffc34b34a1f294f020/whisper.cpp#L2414
fn log_mel_spectrogram_w<T: Float>(
ith: usize,
hann: &[T],
samples: &[T],
filters: &[T],
fft_size: usize,
fft_step: usize,
speed_up: bool,
n_len: usize,
n_mel: usize,
n_threads: usize,
) -> Vec<T> {
let n_fft = if speed_up {
1 + fft_size / 4
} else {
1 + fft_size / 2
};
let zero = T::zero();
let half = T::from(0.5).unwrap();
let mut fft_in = vec![zero; fft_size];
let mut mel = vec![zero; n_len * n_mel];
let n_samples = samples.len();
let end = std::cmp::min(n_samples / fft_step + 1, n_len);
for i in (ith..end).step_by(n_threads) {
let offset = i * fft_step;
// apply Hanning window
for j in 0..std::cmp::min(fft_size, n_samples - offset) {
fft_in[j] = hann[j] * samples[offset + j];
}
// fill the rest with zeros
if n_samples - offset < fft_size {
fft_in[n_samples - offset..].fill(zero);
}
// FFT
let mut fft_out: Vec<T> = fft(&fft_in);
// Calculate modulus^2 of complex numbers
for j in 0..fft_size {
fft_out[j] = fft_out[2 * j] * fft_out[2 * j] + fft_out[2 * j + 1] * fft_out[2 * j + 1];
}
for j in 1..fft_size / 2 {
let v = fft_out[fft_size - j];
fft_out[j] += v;
}
if speed_up {
// scale down in the frequency domain results in a speed up in the time domain
for j in 0..n_fft {
fft_out[j] = half * (fft_out[2 * j] + fft_out[2 * j + 1]);
}
}
// mel spectrogram
for j in 0..n_mel {
let mut sum = zero;
let mut k = 0;
// Unroll loop
while k < n_fft.saturating_sub(3) {
sum += fft_out[k] * filters[j * n_fft + k]
+ fft_out[k + 1] * filters[j * n_fft + k + 1]
+ fft_out[k + 2] * filters[j * n_fft + k + 2]
+ fft_out[k + 3] * filters[j * n_fft + k + 3];
k += 4;
}
// Handle remainder
while k < n_fft {
sum += fft_out[k] * filters[j * n_fft + k];
k += 1;
}
mel[j * n_len + i] = T::max(sum, T::from(1e-10).unwrap()).log10();
}
}
mel
}
pub fn log_mel_spectrogram_<T: Float>(
samples: &[T],
filters: &[T],
fft_size: usize,
fft_step: usize,
n_mel: usize,
speed_up: bool,
) -> Vec<T> {
let zero = T::zero();
let two_pi = T::PI() + T::PI();
let half = T::from(0.5).unwrap();
let one = T::from(1.0).unwrap();
let four = T::from(4.0).unwrap();
let fft_size_t = T::from(fft_size).unwrap();
let hann: Vec<T> = (0..fft_size)
.map(|i| half * (one - ((two_pi * T::from(i).unwrap()) / fft_size_t).cos()))
.collect();
let n_len = samples.len() / fft_step;
// pad audio with at least one extra chunk of zeros
let pad = 100 * super::CHUNK_LENGTH / 2;
let n_len = if n_len % pad != 0 {
(n_len / pad + 1) * pad
} else {
n_len
};
let n_len = n_len + pad;
let samples = {
let mut samples_padded = samples.to_vec();
let to_add = n_len * fft_step - samples.len();
samples_padded.extend(std::iter::repeat(zero).take(to_add));
samples_padded
};
    // ensure that the number of threads is even and within [2, 12]; without a
    // lower bound, a single-threaded machine would end up with 0 worker threads.
    let n_threads = std::cmp::min(get_num_threads() - get_num_threads() % 2, 12).max(2);
let hann = Arc::new(hann);
let samples = Arc::new(samples);
let filters = Arc::new(filters);
// use scope to allow for non static references to be passed to the threads
// and directly collect the results into a single vector
let all_outputs = thread::scope(|s| {
(0..n_threads)
// create threads and return their handles
.map(|thread_id| {
let hann = Arc::clone(&hann);
let samples = Arc::clone(&samples);
let filters = Arc::clone(&filters);
// spawn new thread and start work
s.spawn(move || {
log_mel_spectrogram_w(
thread_id, &hann, &samples, &filters, fft_size, fft_step, speed_up, n_len,
n_mel, n_threads,
)
})
})
.collect::<Vec<_>>()
.into_iter()
// wait for each thread to finish and collect their results
.map(|handle| handle.join().expect("Thread failed"))
.collect::<Vec<_>>()
});
let l = all_outputs[0].len();
let mut mel = vec![zero; l];
// iterate over mel spectrogram segments, dividing work by threads.
for segment_start in (0..l).step_by(n_threads) {
// go through each thread's output.
for thread_output in all_outputs.iter() {
// add each thread's piece to our mel spectrogram.
for offset in 0..n_threads {
let mel_index = segment_start + offset; // find location in mel.
if mel_index < mel.len() {
// Make sure we don't go out of bounds.
mel[mel_index] += thread_output[mel_index];
}
}
}
}
let mmax = mel
.iter()
.max_by(|&u, &v| u.partial_cmp(v).unwrap_or(std::cmp::Ordering::Greater))
.copied()
.unwrap_or(zero)
- T::from(8).unwrap();
for m in mel.iter_mut() {
let v = T::max(*m, mmax);
*m = v / four + one
}
mel
}
pub fn pcm_to_mel<T: Float>(cfg: &super::Config, samples: &[T], filters: &[T]) -> Vec<T> {
log_mel_spectrogram_(
samples,
filters,
super::N_FFT,
super::HOP_LENGTH,
cfg.num_mel_bins,
false,
)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_fft() {
let input = vec![0.0, 1.0, 0.0, 0.0];
let output = fft(&input);
assert_eq!(
output,
vec![
1.0,
0.0,
6.123233995736766e-17,
-1.0,
-1.0,
0.0,
-6.123233995736766e-17,
1.0
]
);
}
#[test]
fn test_dft() {
let input = vec![0.0, 1.0, 0.0, 0.0];
let output = dft(&input);
assert_eq!(
output,
vec![
1.0,
0.0,
6.123233995736766e-17,
-1.0,
-1.0,
-1.2246467991473532e-16,
-1.8369701987210297e-16,
1.0
]
);
}
#[test]
fn test_log_mel_spectrogram() {
let samples = vec![0.0; 1000];
let filters = vec![0.0; 1000];
let output = log_mel_spectrogram_(&samples, &filters, 100, 10, 10, false);
assert_eq!(output.len(), 30_000);
}
#[test]
fn test_tiny_log_mel_spectrogram() {
let samples = vec![0.0; 100];
let filters = vec![0.0; 100];
let output = log_mel_spectrogram_(&samples, &filters, 20, 2, 2, false);
assert_eq!(output.len(), 6_000);
}
}
|
candle/candle-transformers/src/models/whisper/audio.rs/0
|
{
"file_path": "candle/candle-transformers/src/models/whisper/audio.rs",
"repo_id": "candle",
"token_count": 5282
}
| 43
|
use crate::models::with_tracing::QMatMul;
use crate::quantized_var_builder::VarBuilder;
use candle::quantized::QTensor;
use candle::{Module, Result, Tensor};
#[derive(Debug, Clone)]
pub struct Embedding {
inner: candle_nn::Embedding,
span: tracing::Span,
}
impl Embedding {
pub fn new(d1: usize, d2: usize, vb: VarBuilder) -> Result<Self> {
let embeddings = vb.get((d1, d2), "weight")?.dequantize(vb.device())?;
let inner = candle_nn::Embedding::new(embeddings, d2);
let span = tracing::span!(tracing::Level::TRACE, "embedding");
Ok(Self { inner, span })
}
pub fn embeddings(&self) -> &Tensor {
self.inner.embeddings()
}
}
impl Module for Embedding {
fn forward(&self, xs: &Tensor) -> Result<Tensor> {
let _enter = self.span.enter();
self.inner.forward(xs)
}
}
#[derive(Debug, Clone)]
pub struct Linear {
weight: QMatMul,
bias: Option<Tensor>,
}
impl Linear {
pub fn from_arc(weight: std::sync::Arc<QTensor>, bias: Option<Tensor>) -> Result<Self> {
let weight = QMatMul::from_weights(weight)?;
Ok(Self { weight, bias })
}
pub fn from_weights(weight: QMatMul, bias: Option<Tensor>) -> Self {
Self { weight, bias }
}
}
impl Module for Linear {
fn forward(&self, x: &Tensor) -> candle::Result<Tensor> {
let x = x.apply(&self.weight)?;
match &self.bias {
None => Ok(x),
Some(bias) => x.broadcast_add(bias),
}
}
}
pub fn linear_b(in_dim: usize, out_dim: usize, bias: bool, vb: VarBuilder) -> Result<Linear> {
let bias = if bias {
Some(vb.get(out_dim, "bias")?.dequantize(vb.device())?)
} else {
None
};
let weight = QMatMul::new(in_dim, out_dim, vb)?;
Ok(Linear { weight, bias })
}
pub fn linear(in_dim: usize, out_dim: usize, vb: VarBuilder) -> Result<Linear> {
let bias = vb.get(out_dim, "bias")?.dequantize(vb.device())?;
let weight = QMatMul::new(in_dim, out_dim, vb)?;
Ok(Linear {
weight,
bias: Some(bias),
})
}
pub fn layer_norm(size: usize, eps: f64, vb: VarBuilder) -> Result<candle_nn::LayerNorm> {
let weight = vb.get(size, "weight")?.dequantize(vb.device())?;
let bias = vb.get(size, "bias")?.dequantize(vb.device())?;
Ok(candle_nn::LayerNorm::new(weight, bias, eps))
}
pub fn layer_norm_no_bias(size: usize, eps: f64, vb: VarBuilder) -> Result<candle_nn::LayerNorm> {
let weight = vb.get(size, "weight")?.dequantize(vb.device())?;
Ok(candle_nn::LayerNorm::new_no_bias(weight, eps))
}
pub fn linear_no_bias(in_dim: usize, out_dim: usize, vb: VarBuilder) -> Result<Linear> {
let weight = QMatMul::new(in_dim, out_dim, vb)?;
Ok(Linear { weight, bias: None })
}
#[derive(Debug, Clone)]
pub struct RmsNorm {
weight: Tensor,
eps: f64,
span: tracing::Span,
}
impl RmsNorm {
pub fn new(size: usize, eps: f64, vb: VarBuilder) -> Result<Self> {
let span = tracing::span!(tracing::Level::TRACE, "rms-norm");
let weight = vb.get(size, "weight")?.dequantize(vb.device())?;
Ok(Self { weight, eps, span })
}
pub fn from_qtensor(weight: QTensor, eps: f64) -> Result<Self> {
let span = tracing::span!(tracing::Level::TRACE, "rms-norm");
let weight = weight.dequantize(&weight.device())?;
Ok(Self { weight, eps, span })
}
}
impl Module for RmsNorm {
fn forward(&self, x: &Tensor) -> Result<Tensor> {
let _enter = self.span.enter();
candle_nn::ops::rms_norm(x, &self.weight, self.eps as f32)
}
}
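// Design note: embeddings, layer norms and rms norms in this module
// dequantize their weights once at load time and then run in full precision,
// while `Linear` keeps its weight quantized and relies on `QMatMul` for the
// quantized matmul itself (only the optional bias is dequantized).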
|
candle/candle-transformers/src/quantized_nn.rs/0
|
{
"file_path": "candle/candle-transformers/src/quantized_nn.rs",
"repo_id": "candle",
"token_count": 1623
}
| 44
|
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<style>
@import url("https://fonts.googleapis.com/css2?family=Source+Code+Pro:wght@200;300;400&family=Source+Sans+3:wght@100;200;300;400;500;600;700;800;900&display=swap");
html,
body {
font-family: "Source Sans 3", sans-serif;
}
</style>
<title>Candle Blip Image Captioning Demo</title>
<script src="https://cdn.tailwindcss.com"></script>
<script type="module" src="./code.js"></script>
<script type="module">
const MODELS = {
blip_image_quantized_q4k: {
base_url: "https://huggingface.co/lmz/candle-blip/resolve/main/",
model: "blip-image-captioning-large-q4k.gguf",
config: "config.json",
tokenizer: "tokenizer.json",
quantized: true,
size: "271 MB",
},
blip_image_quantized_q80: {
base_url: "https://huggingface.co/lmz/candle-blip/resolve/main/",
model: "blip-image-captioning-large-q80.gguf",
config: "config.json",
tokenizer: "tokenizer.json",
quantized: true,
size: "505 MB",
},
blip_image_large: {
base_url:
"https://huggingface.co/Salesforce/blip-image-captioning-large/resolve/refs%2Fpr%2F18/",
model: "model.safetensors",
config: "config.json",
tokenizer: "tokenizer.json",
quantized: false,
size: "1.88 GB",
},
};
const blipWorker = new Worker("./blipWorker.js", {
type: "module",
});
const outputStatusEl = document.querySelector("#output-status");
const outputCaptionEl = document.querySelector("#output-caption");
const modelSelectEl = document.querySelector("#model");
const clearBtn = document.querySelector("#clear-btn");
const fileUpload = document.querySelector("#file-upload");
const dropArea = document.querySelector("#drop-area");
const imagesExamples = document.querySelector("#image-select");
const canvas = document.querySelector("#canvas");
const ctxCanvas = canvas.getContext("2d");
let isCaptioning = false;
let currentImageURL = null;
clearBtn.addEventListener("click", () => {
clearImageCanvas();
});
modelSelectEl.addEventListener("change", () => {
if (currentImageURL) {
runInference(currentImageURL);
}
});
      // add event listener to file input
fileUpload.addEventListener("input", async (e) => {
const target = e.target;
if (target.files.length > 0) {
const href = URL.createObjectURL(target.files[0]);
clearImageCanvas();
await drawImageCanvas(href);
runInference(href);
}
});
// add event listener to drop-area
dropArea.addEventListener("dragenter", (e) => {
e.preventDefault();
dropArea.classList.add("border-blue-700");
});
dropArea.addEventListener("dragleave", (e) => {
e.preventDefault();
dropArea.classList.remove("border-blue-700");
});
dropArea.addEventListener("dragover", (e) => {
e.preventDefault();
});
dropArea.addEventListener("drop", async (e) => {
e.preventDefault();
dropArea.classList.remove("border-blue-700");
const url = e.dataTransfer.getData("text/uri-list");
const files = e.dataTransfer.files;
if (files.length > 0) {
const href = URL.createObjectURL(files[0]);
clearImageCanvas();
await drawImageCanvas(href);
runInference(href);
} else if (url) {
clearImageCanvas();
await drawImageCanvas(url);
runInference(url);
}
});
imagesExamples.addEventListener("click", async (e) => {
if (isCaptioning) {
return;
}
const target = e.target;
if (target.nodeName === "IMG") {
const href = target.src;
clearImageCanvas();
await drawImageCanvas(href);
runInference(href);
}
});
function clearImageCanvas() {
ctxCanvas.clearRect(0, 0, canvas.width, canvas.height);
isCaptioning = false;
clearBtn.disabled = true;
canvas.parentElement.style.height = "auto";
outputStatusEl.hidden = false;
outputCaptionEl.hidden = true;
outputStatusEl.innerText = "Please select an image";
currentImageURL = null;
}
async function drawImageCanvas(imgURL) {
if (!imgURL) {
throw new Error("No image URL provided");
}
return new Promise((resolve, reject) => {
          ctxCanvas.clearRect(0, 0, canvas.width, canvas.height);
const img = new Image();
img.crossOrigin = "anonymous";
img.onload = () => {
canvas.width = img.width;
canvas.height = img.height;
ctxCanvas.drawImage(img, 0, 0);
canvas.parentElement.style.height = canvas.offsetHeight + "px";
clearBtn.disabled = false;
resolve(img);
          };
          img.onerror = () =>
            reject(new Error("Failed to load image: " + imgURL));
          img.src = imgURL;
currentImageURL = imgURL;
});
}
document.addEventListener("DOMContentLoaded", () => {
for (const [id, model] of Object.entries(MODELS)) {
const option = document.createElement("option");
option.value = id;
option.innerText = `${id} (${model.size})`;
modelSelectEl.appendChild(option);
}
});
async function getImageCaption(
worker,
weightsURL,
tokenizerURL,
configURL,
modelID,
imageURL,
quantized,
updateStatus = null
) {
return new Promise((resolve, reject) => {
worker.postMessage({
weightsURL,
tokenizerURL,
configURL,
modelID,
imageURL,
quantized,
});
function messageHandler(event) {
if ("error" in event.data) {
worker.removeEventListener("message", messageHandler);
reject(new Error(event.data.error));
}
if (event.data.status === "complete") {
worker.removeEventListener("message", messageHandler);
resolve(event.data);
}
if (updateStatus) updateStatus(event.data);
}
worker.addEventListener("message", messageHandler);
});
}
function updateStatus(data) {
if (data.status === "status") {
outputStatusEl.innerText = data.message;
}
}
async function runInference(imageURL) {
if (isCaptioning || !imageURL) {
alert("Please select an image first");
return;
}
outputStatusEl.hidden = false;
outputCaptionEl.hidden = true;
clearBtn.disabled = true;
modelSelectEl.disabled = true;
isCaptioning = true;
const selectedModel = modelSelectEl.value;
const model = MODELS[selectedModel];
const weightsURL = `${model.base_url}${model.model}`;
const tokenizerURL = `${model.base_url}${model.tokenizer}`;
const configURL = `${model.base_url}${model.config}`;
const quantized = model.quantized;
try {
const time = performance.now();
const caption = await getImageCaption(
blipWorker,
weightsURL,
tokenizerURL,
configURL,
selectedModel,
imageURL,
quantized,
updateStatus
);
outputStatusEl.hidden = true;
outputCaptionEl.hidden = false;
const totalTime = ((performance.now() - time)/1000).toFixed(2);
outputCaptionEl.innerHTML = `${
caption.output
}<br/><span class="text-xs">Inference time: ${totalTime} s</span>`;
} catch (err) {
console.error(err);
outputStatusEl.hidden = false;
outputCaptionEl.hidden = true;
outputStatusEl.innerText = err.message;
}
clearBtn.disabled = false;
modelSelectEl.disabled = false;
isCaptioning = false;
}
</script>
</head>
<body class="container max-w-4xl mx-auto p-4">
<main class="grid grid-cols-1 gap-5 relative">
<span class="absolute text-5xl -ml-[1em]"> 🕯️ </span>
<div>
<h1 class="text-5xl font-bold">Candle BLIP Image Captioning</h1>
<h2 class="text-2xl font-bold">Rust/WASM Demo</h2>
<p class="max-w-lg">
<a
href="https://huggingface.co/Salesforce/blip-image-captioning-large"
target="_blank"
class="underline hover:text-blue-500 hover:no-underline"
>BLIP Image Captioning
</a>
running in the browser using
<a
href="https://github.com/huggingface/candle/"
target="_blank"
class="underline hover:text-blue-500 hover:no-underline"
>Candle</a
>, a minimalist ML framework for Rust.
</p>
<p class="text-xs max-w-lg py-2">
<b>Note:</b>
        The image captioning on the smallest model takes about 50 seconds;
        this will vary depending on your machine and model size.
</p>
</div>
<div>
<label for="model" class="font-medium block">Models Options: </label>
<select
id="model"
class="border-2 border-gray-500 rounded-md font-light interactive disabled:cursor-not-allowed w-full max-w-max"
></select>
</div>
<!-- drag and drop area -->
<div class="grid gap-4 sm:grid-cols-2 py-4">
<div class="relative max-w-lg">
<div
class="absolute w-full bottom-full flex justify-between items-center"
>
<div class="flex gap-2 w-full">
<button
id="clear-btn"
disabled
title="Clear Image"
class="ml-auto text-xs bg-white rounded-md disabled:opacity-50 flex gap-1 items-center"
>
<svg
class=""
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 13 12"
height="1em"
>
<path
d="M1.6.7 12 11.1M12 .7 1.6 11.1"
stroke="#2E3036"
stroke-width="2"
/>
</svg>
</button>
</div>
</div>
<div
id="drop-area"
class="flex flex-col items-center justify-center border-2 border-gray-300 border-dashed rounded-xl relative aspect-video w-full overflow-hidden"
>
<div
class="flex flex-col items-center justify-center space-y-1 text-center"
>
<svg
width="25"
height="25"
viewBox="0 0 25 25"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M3.5 24.3a3 3 0 0 1-1.9-.8c-.5-.5-.8-1.2-.8-1.9V2.9c0-.7.3-1.3.8-1.9.6-.5 1.2-.7 2-.7h18.6c.7 0 1.3.2 1.9.7.5.6.7 1.2.7 2v18.6c0 .7-.2 1.4-.7 1.9a3 3 0 0 1-2 .8H3.6Zm0-2.7h18.7V2.9H3.5v18.7Zm2.7-2.7h13.3c.3 0 .5 0 .6-.3v-.7l-3.7-5a.6.6 0 0 0-.6-.2c-.2 0-.4 0-.5.3l-3.5 4.6-2.4-3.3a.6.6 0 0 0-.6-.3c-.2 0-.4.1-.5.3l-2.7 3.6c-.1.2-.2.4 0 .7.1.2.3.3.6.3Z"
fill="#000"
/>
</svg>
<div class="flex text-sm text-gray-600">
<label
for="file-upload"
class="relative cursor-pointer bg-white rounded-md font-medium text-blue-950 hover:text-blue-700"
>
                  <span>Drag and drop your image here</span>
<span class="block text-xs">or</span>
<span class="block text-xs">Click to upload</span>
</label>
</div>
<input
id="file-upload"
name="file-upload"
type="file"
class="sr-only"
/>
</div>
<canvas
id="canvas"
class="absolute pointer-events-none w-full"
></canvas>
</div>
</div>
<div class="">
<div
class="h-full bg-slate-100 text-gray-500 p-4 rounded-md flex flex-col gap-2"
>
<p
id="output-caption"
class="m-auto text-xl text-center p-2"
hidden
></p>
<span id="output-status" class="m-auto font-light">
Please select an image
</span>
</div>
</div>
</div>
<div>
<div
class="flex gap-3 items-center overflow-x-scroll"
id="image-select"
>
<h3 class="font-medium">Examples:</h3>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/candle/examples/sf.jpg"
class="cursor-pointer w-24 h-24 object-cover"
/>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/candle/examples/bike.jpeg"
class="cursor-pointer w-24 h-24 object-cover"
/>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/candle/examples/000000000077.jpg"
class="cursor-pointer w-24 h-24 object-cover"
/>
</div>
</div>
</main>
</body>
</html>
|
candle/candle-wasm-examples/blip/index.html/0
|
{
"file_path": "candle/candle-wasm-examples/blip/index.html",
"repo_id": "candle",
"token_count": 7164
}
| 45
|
use crate::model::{Cache, Config, Llama};
use byteorder::{LittleEndian, ReadBytesExt};
use candle::{DType, Device, IndexOp, Result, Shape, Tensor};
use candle_nn::VarBuilder;
use candle_transformers::generation::LogitsProcessor;
use serde::{Deserialize, Serialize};
use tokenizers::Tokenizer;
use wasm_bindgen::prelude::*;
use yew_agent::{HandlerId, Public, WorkerLink};
#[wasm_bindgen]
extern "C" {
// Use `js_namespace` here to bind `console.log(..)` instead of just
// `log(..)`
#[wasm_bindgen(js_namespace = console)]
pub fn log(s: &str);
}
#[macro_export]
macro_rules! console_log {
    // Note that this uses the `log` function imported above.
($($t:tt)*) => ($crate::worker::log(&format_args!($($t)*).to_string()))
}
// Communication to the worker happens through bincode, the model weights and configs are fetched
// on the main thread and transferred via the following structure.
#[derive(Serialize, Deserialize)]
pub struct ModelData {
pub tokenizer: Vec<u8>,
pub model: Vec<u8>,
}
fn read_i32<R: std::io::Read>(r: &mut R) -> Result<i32> {
let mut buf = [0u8; 4];
r.read_exact(&mut buf)?;
Ok(i32::from_le_bytes(buf))
}
fn read_tensor<R: std::io::Read, S: Into<Shape>>(
r: &mut R,
shape: S,
dev: &Device,
) -> Result<Tensor> {
let shape = shape.into();
let mut data_t = vec![0f32; shape.elem_count()];
r.read_f32_into::<LittleEndian>(&mut data_t)?;
let tensor = Tensor::from_vec(data_t, shape, dev)?;
Ok(tensor)
}
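// Tensors are stored as row-major little-endian f32 values, matching the
// binary export format produced by llama2.c; `read_i32` above reads the
// seven i32 header fields of that format (see `Config::from_reader` below).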
pub struct Model {
pub cache: Cache,
pub config: Config,
pub llama: Llama,
pub tokenizer: Tokenizer,
}
impl Model {
fn run(
&self,
link: &WorkerLink<Worker>,
id: HandlerId,
temp: f64,
top_p: f64,
prompt: String,
) -> Result<()> {
let dev = Device::Cpu;
let temp = if temp <= 0. { None } else { Some(temp) };
let top_p = if top_p <= 0. || top_p >= 1.0 {
None
} else {
Some(top_p)
};
console_log!("temp: {temp:?} top_p: {top_p:?} prompt: {prompt}");
let mut logits_processor = LogitsProcessor::new(299792458, temp, top_p);
let mut index_pos = 0;
let mut tokens = self
.tokenizer
.encode(prompt.to_string(), true)
.map_err(|m| candle::Error::Msg(m.to_string()))?
.get_ids()
.to_vec();
link.respond(id, Ok(WorkerOutput::Generated(prompt)));
for index in 0.. {
if tokens.len() >= self.config.seq_len {
break;
}
let context_size = if self.cache.use_kv_cache && index > 0 {
1
} else {
tokens.len()
};
let ctxt = &tokens[tokens.len().saturating_sub(context_size)..];
let input = Tensor::new(ctxt, &dev)?.unsqueeze(0)?;
let logits = self.llama.forward(&input, index_pos)?;
let logits = logits.squeeze(0)?;
index_pos += ctxt.len();
let next_token = logits_processor.sample(&logits)?;
tokens.push(next_token);
if let Some(text) = self.tokenizer.id_to_token(next_token) {
let text = text.replace('▁', " ").replace("<0x0A>", "\n");
link.respond(id, Ok(WorkerOutput::Generated(text)));
}
}
Ok(())
}
}
impl Config {
fn from_reader<R: std::io::Read>(r: &mut R) -> Result<Self> {
let dim = read_i32(r)? as usize;
let hidden_dim = read_i32(r)? as usize;
let n_layers = read_i32(r)? as usize;
let n_heads = read_i32(r)? as usize;
let n_kv_heads = read_i32(r)? as usize;
let vocab_size = read_i32(r)? as usize;
let seq_len = read_i32(r)? as usize;
Ok(Self {
dim,
hidden_dim,
n_layers,
n_heads,
n_kv_heads,
vocab_size,
seq_len,
norm_eps: 1e-5,
})
}
pub fn head_size(&self) -> usize {
self.dim / self.n_heads
}
}
struct TransformerWeights {
// token embedding table
token_embedding_table: Tensor, // (vocab_size, dim)
// weights for rmsnorms
rms_att_weight: Tensor, // (layer, dim) rmsnorm weights
rms_ffn_weight: Tensor, // (layer, dim)
// weights for matmuls
wq: Tensor, // (layer, dim, dim)
wk: Tensor, // (layer, dim, dim)
wv: Tensor, // (layer, dim, dim)
wo: Tensor, // (layer, dim, dim)
// weights for ffn
w1: Tensor, // (layer, hidden_dim, dim)
w2: Tensor, // (layer, dim, hidden_dim)
w3: Tensor, // (layer, hidden_dim, dim)
// final rmsnorm
rms_final_weight: Tensor, // (dim,)
// freq_cis for RoPE relatively positional embeddings
freq_cis_real: Tensor, // (seq_len, head_size/2)
freq_cis_imag: Tensor, // (seq_len, head_size/2)
}
impl TransformerWeights {
fn from_reader<R: std::io::Read>(r: &mut R, c: &Config, dev: &Device) -> Result<Self> {
let token_embedding_table = read_tensor(r, (c.vocab_size, c.dim), dev)?;
let rms_att_weight = read_tensor(r, (c.n_layers, c.dim), dev)?;
let wq = read_tensor(r, (c.n_layers, c.dim, c.dim), dev)?;
let wk = read_tensor(r, (c.n_layers, c.dim, c.dim), dev)?;
let wv = read_tensor(r, (c.n_layers, c.dim, c.dim), dev)?;
let wo = read_tensor(r, (c.n_layers, c.dim, c.dim), dev)?;
let rms_ffn_weight = read_tensor(r, (c.n_layers, c.dim), dev)?;
let w1 = read_tensor(r, (c.n_layers, c.hidden_dim, c.dim), dev)?;
let w2 = read_tensor(r, (c.n_layers, c.dim, c.hidden_dim), dev)?;
let w3 = read_tensor(r, (c.n_layers, c.hidden_dim, c.dim), dev)?;
let rms_final_weight = read_tensor(r, c.dim, dev)?;
let head_size = c.head_size();
let freq_cis_real = read_tensor(r, (c.seq_len, head_size / 2), dev)?;
let freq_cis_imag = read_tensor(r, (c.seq_len, head_size / 2), dev)?;
Ok(Self {
token_embedding_table,
rms_att_weight,
wq,
wk,
wv,
wo,
rms_ffn_weight,
w1,
w2,
w3,
rms_final_weight,
freq_cis_real,
freq_cis_imag,
})
}
fn var_builder(&self, cfg: &Config, device: &Device) -> Result<VarBuilder> {
let mut ws = std::collections::HashMap::new();
let mut insert = |name: &str, t: Tensor| {
ws.insert(name.to_string(), t);
};
insert("rot.freq_cis_real", self.freq_cis_real.clone());
insert("rot.freq_cis_imag", self.freq_cis_imag.clone());
insert(
"model.embed_tokens.weight",
self.token_embedding_table.clone(),
);
insert("lm_head.weight", self.token_embedding_table.clone());
insert("model.norm.weight", self.rms_final_weight.clone());
for layer in 0..cfg.n_layers {
ws.insert(
format!("model.layers.{layer}.self_attn.q_proj.weight"),
self.wq.i(layer)?,
);
ws.insert(
format!("model.layers.{layer}.self_attn.k_proj.weight"),
self.wk.i(layer)?,
);
ws.insert(
format!("model.layers.{layer}.self_attn.v_proj.weight"),
self.wv.i(layer)?,
);
ws.insert(
format!("model.layers.{layer}.self_attn.o_proj.weight"),
self.wo.i(layer)?,
);
ws.insert(
format!("model.layers.{layer}.mlp.gate_proj.weight"),
self.w1.i(layer)?,
);
ws.insert(
format!("model.layers.{layer}.mlp.down_proj.weight"),
self.w2.i(layer)?,
);
ws.insert(
format!("model.layers.{layer}.mlp.up_proj.weight"),
self.w3.i(layer)?,
);
ws.insert(
format!("model.layers.{layer}.input_layernorm.weight"),
self.rms_att_weight.i(layer)?,
);
ws.insert(
format!("model.layers.{layer}.post_attention_layernorm.weight"),
self.rms_ffn_weight.i(layer)?,
);
}
let vb = VarBuilder::from_tensors(ws, DType::F32, device);
Ok(vb)
}
}
impl Model {
pub fn load(md: ModelData) -> Result<Self> {
let dev = Device::Cpu;
let mut model = std::io::Cursor::new(md.model);
let config = Config::from_reader(&mut model)?;
let weights = TransformerWeights::from_reader(&mut model, &config, &dev)?;
let vb = weights.var_builder(&config, &dev)?;
let cache = Cache::new(true, &config, vb.pp("rot"))?;
let llama = Llama::load(vb, &cache, &config)?;
let tokenizer =
Tokenizer::from_bytes(&md.tokenizer).map_err(|m| candle::Error::Msg(m.to_string()))?;
Ok(Self {
cache,
config,
llama,
tokenizer,
})
}
}
pub struct Worker {
link: WorkerLink<Self>,
model: Option<Model>,
}
#[derive(Serialize, Deserialize)]
pub enum WorkerInput {
ModelData(ModelData),
Run(f64, f64, String),
}
#[derive(Serialize, Deserialize)]
pub enum WorkerOutput {
Generated(String),
GenerationDone(std::result::Result<(), String>),
WeightsLoaded,
}
impl yew_agent::Worker for Worker {
type Input = WorkerInput;
type Message = ();
type Output = std::result::Result<WorkerOutput, String>;
type Reach = Public<Self>;
fn create(link: WorkerLink<Self>) -> Self {
Self { link, model: None }
}
fn update(&mut self, _msg: Self::Message) {
// no messaging
}
fn handle_input(&mut self, msg: Self::Input, id: HandlerId) {
let output = match msg {
WorkerInput::ModelData(md) => match Model::load(md) {
Ok(model) => {
self.model = Some(model);
Ok(WorkerOutput::WeightsLoaded)
}
Err(err) => Err(format!("model creation error {err:?}")),
},
WorkerInput::Run(temp, top_p, prompt) => match &mut self.model {
None => Err("model has not been set yet".to_string()),
Some(model) => {
{
let mut cache = model.cache.kvs.lock().unwrap();
for elem in cache.iter_mut() {
*elem = None
}
}
let result = model
.run(&self.link, id, temp, top_p, prompt)
.map_err(|e| e.to_string());
Ok(WorkerOutput::GenerationDone(result))
}
},
};
self.link.respond(id, output);
}
fn name_of_resource() -> &'static str {
"worker.js"
}
fn resource_path_is_relative() -> bool {
true
}
}
|
candle/candle-wasm-examples/llama2-c/src/worker.rs/0
|
{
"file_path": "candle/candle-wasm-examples/llama2-c/src/worker.rs",
"repo_id": "candle",
"token_count": 5770
}
| 46
|
export async function extractEmbeddings(
worker,
weightsURL,
tokenizerURL,
configURL,
modelID,
sentences,
updateStatus,
normalize_embeddings = true
) {
return new Promise((resolve, reject) => {
worker.postMessage({
weightsURL,
tokenizerURL,
configURL,
modelID,
sentences,
normalize_embeddings,
});
function messageHandler(event) {
if ("error" in event.data) {
worker.removeEventListener("message", messageHandler);
reject(new Error(event.data.error));
}
if (event.data.status === "complete") {
worker.removeEventListener("message", messageHandler);
resolve(event.data);
}
if (updateStatus) updateStatus(event.data);
}
worker.addEventListener("message", messageHandler);
});
}
export async function generateText(
worker,
weightsURL,
tokenizerURL,
configURL,
modelID,
prompt,
params,
updateStatus
) {
return new Promise((resolve, reject) => {
worker.postMessage({
weightsURL,
tokenizerURL,
configURL,
modelID,
prompt,
params,
});
function messageHandler(event) {
if ("error" in event.data) {
worker.removeEventListener("message", messageHandler);
reject(new Error(event.data.error));
}
if (event.data.status === "complete") {
worker.removeEventListener("message", messageHandler);
resolve(event.data);
}
if (updateStatus) updateStatus(event.data);
}
worker.addEventListener("message", messageHandler);
});
}
export const MODELS = {
t5_small_quantized: {
size: "64.4 MB",
base_url: "https://huggingface.co/lmz/candle-quantized-t5/resolve/main/",
model: "model.gguf",
tokenizer: "tokenizer.json",
config: "config.json",
tasks: {
translation_en_to_de: {
prefix: "translate English to German: ",
max_length: 300,
},
translation_en_to_fr: {
prefix: "translate English to French: ",
max_length: 300,
},
translation_en_to_ro: {
prefix: "translate English to Romanian: ",
max_length: 300,
},
summarization: { prefix: "summarize: ", max_length: 200 },
},
},
t5_small: {
size: "242 MB",
base_url: "https://huggingface.co/t5-small/resolve/main/",
model: "model.safetensors",
tokenizer: "tokenizer.json",
config: "config.json",
tasks: {
translation_en_to_de: {
prefix: "translate English to German: ",
max_length: 300,
},
translation_en_to_fr: {
prefix: "translate English to French: ",
max_length: 300,
},
translation_en_to_ro: {
prefix: "translate English to Romanian: ",
max_length: 300,
},
summarization: { prefix: "summarize: ", max_length: 200 },
},
},
flan_t5_small: {
size: "308 MB",
base_url:
"https://huggingface.co/google/flan-t5-small/resolve/refs%2Fpr%2F14/",
model: "model.safetensors",
tokenizer: "tokenizer.json",
config: "config.json",
tasks: {
translation_en_to_de: {
prefix: "translate English to German: ",
max_length: 300,
},
translation_en_to_fr: {
prefix: "translate English to French: ",
max_length: 300,
},
translation_en_to_ro: {
prefix: "translate English to Romanian: ",
max_length: 300,
},
summarization: { prefix: "summarize: ", max_length: 200 },
},
},
flan_t5_base_quantized: {
size: "263 MB",
base_url: "https://huggingface.co/lmz/candle-quantized-t5/resolve/main/",
model: "model-flan-t5-base.gguf",
tokenizer: "tokenizer.json",
config: "config-flan-t5-base.json",
tasks: {
translation_en_to_de: {
prefix: "translate English to German: ",
max_length: 300,
},
translation_en_to_fr: {
prefix: "translate English to French: ",
max_length: 300,
},
translation_en_to_ro: {
prefix: "translate English to Romanian: ",
max_length: 300,
},
summarization: { prefix: "summarize: ", max_length: 200 },
},
},
coedit_large_quantized: {
size: "643 MB",
base_url: "https://huggingface.co/jbochi/candle-coedit-quantized/resolve/main/",
model: "model.gguf",
tokenizer: "tokenizer.json",
config: "config.json",
tasks: {
fluency: {
prefix: "Fix the grammar: ",
max_length: 300,
},
coherence: {
prefix: "Rewrite to make this easier to understand: ",
max_length: 300,
},
simplification: {
prefix: "Paraphrase this: ",
max_length: 300,
},
formalization: {
prefix: "Write this more formally: ",
max_length: 300,
},
neutralize: {
prefix: "Write in a more neutral way: ",
max_length: 300,
},
},
},
};
export function getModelInfo(id, taskID) {
const model = MODELS[id];
return {
modelURL: model.base_url + model.model,
configURL: model.base_url + model.config,
tokenizerURL: model.base_url + model.tokenizer,
maxLength: model.tasks[taskID].max_length,
};
}
|
candle/candle-wasm-examples/t5/utils.js/0
|
{
"file_path": "candle/candle-wasm-examples/t5/utils.js",
"repo_id": "candle",
"token_count": 2339
}
| 47
|
[package]
name = "candle-wasm-example-yolo"
version.workspace = true
edition.workspace = true
description.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
license.workspace = true
[dependencies]
candle = { workspace = true }
candle-nn = { workspace = true }
num-traits = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
image = { workspace = true }
# App crates.
anyhow = { workspace = true }
byteorder = { workspace = true }
log = { workspace = true }
rand = { workspace = true }
safetensors = { workspace = true }
# Wasm specific crates.
console_error_panic_hook = "0.1.7"
getrandom = { version = "0.2", features = ["js"] }
gloo = "0.11"
js-sys = "0.3.64"
wasm-bindgen = "0.2.87"
wasm-bindgen-futures = "0.4.37"
wasm-logger = "0.2"
yew-agent = "0.2.0"
yew = { version = "0.20.0", features = ["csr"] }
[dependencies.web-sys]
version = "0.3.64"
features = [
'Blob',
'CanvasRenderingContext2d',
'Document',
'Element',
'HtmlElement',
'HtmlCanvasElement',
'HtmlImageElement',
'ImageData',
'Node',
'Window',
'Request',
'RequestCache',
'RequestInit',
'RequestMode',
'Response',
'Performance',
'TextMetrics',
]
|
candle/candle-wasm-examples/yolo/Cargo.toml/0
|
{
"file_path": "candle/candle-wasm-examples/yolo/Cargo.toml",
"repo_id": "candle",
"token_count": 463
}
| 48
|
pub fn add(left: usize, right: usize) -> usize {
left + right
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn it_works() {
let result = add(2, 2);
assert_eq!(result, 4);
}
}
|
candle/candle-wasm-tests/src/lib.rs/0
|
{
"file_path": "candle/candle-wasm-tests/src/lib.rs",
"repo_id": "candle",
"token_count": 108
}
| 49
|
ARG INCLUDE_DB=false
FROM mongo:latest as mongo
FROM node:20-slim as local_db_false
FROM node:20-slim as local_db_true
RUN apt-get update
RUN apt-get install gnupg curl -y
COPY --from=mongo /usr/bin/mongo* /usr/bin/
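# pick the final base stage depending on whether a local MongoDB should be bundled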
FROM local_db_${INCLUDE_DB} as final
ARG INCLUDE_DB=false
ENV INCLUDE_DB=${INCLUDE_DB}
WORKDIR /app
COPY --link --chown=1000 package-lock.json package.json ./
RUN --mount=type=cache,target=/app/.npm \
npm set cache /app/.npm && \
npm ci
# copy the rest of the files; this runs regardless of INCLUDE_DB
COPY --chown=1000 --link . .
RUN chmod +x /app/entrypoint.sh
CMD ["/bin/bash", "-c", "/app/entrypoint.sh"]
|
chat-ui/Dockerfile.local/0
|
{
"file_path": "chat-ui/Dockerfile.local",
"repo_id": "chat-ui",
"token_count": 278
}
| 50
|
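// Svelte action: invoke the callback whenever a click lands outside the given element.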
export function clickOutside(element: HTMLDialogElement, callbackFunction: () => void) {
function onClick(event: MouseEvent) {
if (!element.contains(event.target as Node)) {
callbackFunction();
}
}
document.body.addEventListener("click", onClick);
return {
update(newCallbackFunction: () => void) {
callbackFunction = newCallbackFunction;
},
destroy() {
document.body.removeEventListener("click", onClick);
},
};
}
|
chat-ui/src/lib/actions/clickOutside.ts/0
|
{
"file_path": "chat-ui/src/lib/actions/clickOutside.ts",
"repo_id": "chat-ui",
"token_count": 143
}
| 51
|
<script lang="ts">
import { base } from "$app/paths";
import { page } from "$app/stores";
import { createEventDispatcher } from "svelte";
import CarbonCheckmark from "~icons/carbon/checkmark";
import CarbonTrashCan from "~icons/carbon/trash-can";
import CarbonClose from "~icons/carbon/close";
import CarbonEdit from "~icons/carbon/edit";
import type { ConvSidebar } from "$lib/types/ConvSidebar";
export let conv: ConvSidebar;
let confirmDelete = false;
const dispatch = createEventDispatcher<{
deleteConversation: string;
editConversationTitle: { id: string; title: string };
}>();
</script>
<a
data-sveltekit-noscroll
on:mouseleave={() => {
confirmDelete = false;
}}
href="{base}/conversation/{conv.id}"
class="group flex h-10 flex-none items-center gap-1.5 rounded-lg pl-2.5 pr-2 text-gray-600 hover:bg-gray-100 dark:text-gray-300 dark:hover:bg-gray-700 {conv.id ===
$page.params.id
? 'bg-gray-100 dark:bg-gray-700'
: ''}"
>
<div class="flex flex-1 items-center truncate">
{#if confirmDelete}
<span class="mr-1 font-semibold"> Delete </span>
{/if}
{#if conv.avatarHash}
<img
src="{base}/settings/assistants/{conv.assistantId}/avatar.jpg?hash={conv.avatarHash}"
alt="Assistant avatar"
class="mr-1.5 inline size-4 flex-none rounded-full object-cover"
/>
{conv.title.replace(/\p{Emoji}/gu, "")}
{:else if conv.assistantId}
<div
class="mr-1.5 flex size-4 flex-none items-center justify-center rounded-full bg-gray-300 text-xs font-bold uppercase text-gray-500"
/>
{conv.title.replace(/\p{Emoji}/gu, "")}
{:else}
{conv.title}
{/if}
</div>
{#if confirmDelete}
<button
type="button"
class="flex h-5 w-5 items-center justify-center rounded md:hidden md:group-hover:flex"
title="Confirm delete action"
on:click|preventDefault={() => {
confirmDelete = false;
dispatch("deleteConversation", conv.id);
}}
>
<CarbonCheckmark class="text-xs text-gray-400 hover:text-gray-500 dark:hover:text-gray-300" />
</button>
<button
type="button"
class="flex h-5 w-5 items-center justify-center rounded md:hidden md:group-hover:flex"
title="Cancel delete action"
on:click|preventDefault={() => (confirmDelete = false)}
>
<CarbonClose class="text-xs text-gray-400 hover:text-gray-500 dark:hover:text-gray-300" />
</button>
{:else}
<button
type="button"
class="flex h-5 w-5 items-center justify-center rounded md:hidden md:group-hover:flex"
title="Edit conversation title"
on:click|preventDefault={() => {
const newTitle = prompt("Edit this conversation title:", conv.title);
if (!newTitle) return;
dispatch("editConversationTitle", { id: conv.id, title: newTitle });
}}
>
<CarbonEdit class="text-xs text-gray-400 hover:text-gray-500 dark:hover:text-gray-300" />
</button>
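<!-- plain click asks for confirmation first; shift+click deletes immediately -->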
<button
type="button"
class="flex h-5 w-5 items-center justify-center rounded md:hidden md:group-hover:flex"
title="Delete conversation"
on:click|preventDefault={(event) => {
if (event.shiftKey) {
dispatch("deleteConversation", conv.id);
} else {
confirmDelete = true;
}
}}
>
<CarbonTrashCan class="text-xs text-gray-400 hover:text-gray-500 dark:hover:text-gray-300" />
</button>
{/if}
</a>
|
chat-ui/src/lib/components/NavConversationItem.svelte/0
|
{
"file_path": "chat-ui/src/lib/components/NavConversationItem.svelte",
"repo_id": "chat-ui",
"token_count": 1309
}
| 52
|
<script lang="ts">
import { createEventDispatcher } from "svelte";
import IconGear from "~icons/bi/gear-fill";
import { base } from "$app/paths";
import type { Assistant } from "$lib/types/Assistant";
import { formatUserCount } from "$lib/utils/formatUserCount";
import IconInternet from "../icons/IconInternet.svelte";
import CarbonExport from "~icons/carbon/export";
import CarbonCheckmark from "~icons/carbon/checkmark";
import CarbonUserMultiple from "~icons/carbon/user-multiple";
import { share } from "$lib/utils/share";
import { PUBLIC_ORIGIN, PUBLIC_SHARE_PREFIX } from "$env/static/public";
import { page } from "$app/stores";
export let assistant: Pick<
Assistant,
| "avatar"
| "name"
| "rag"
| "dynamicPrompt"
| "modelId"
| "createdByName"
| "exampleInputs"
| "_id"
| "description"
| "userCount"
>;
const dispatch = createEventDispatcher<{ message: string }>();
$: hasRag =
assistant?.rag?.allowAllDomains ||
(assistant?.rag?.allowedDomains?.length ?? 0) > 0 ||
(assistant?.rag?.allowedLinks?.length ?? 0) > 0 ||
assistant?.dynamicPrompt;
const prefix = PUBLIC_SHARE_PREFIX || `${PUBLIC_ORIGIN || $page.url.origin}${base}`;
$: shareUrl = `${prefix}/assistant/${assistant?._id}`;
let isCopied = false;
</script>
<div class="flex h-full w-full flex-col content-center items-center justify-center pb-52">
<div
class="relative mt-auto rounded-2xl bg-gray-100 text-gray-600 dark:border-gray-800 dark:bg-gray-800/60 dark:text-gray-300"
>
<div
class="mt-3 flex min-w-[80dvw] items-center gap-4 p-4 pr-1 sm:min-w-[440px] md:p-8 xl:gap-8"
>
{#if assistant.avatar}
<img
src={`${base}/settings/assistants/${assistant._id.toString()}/avatar.jpg?hash=${
assistant.avatar
}`}
alt="avatar"
class="size-16 flex-none rounded-full object-cover max-sm:self-start md:size-32"
/>
{:else}
<div
class="flex size-12 flex-none items-center justify-center rounded-full bg-gray-300 object-cover text-xl font-bold uppercase text-gray-500 max-sm:self-start sm:text-4xl md:size-32 dark:bg-gray-600"
>
{assistant?.name[0]}
</div>
{/if}
<div class="flex h-full flex-col gap-2 text-balance">
<p class="-mb-1">Assistant</p>
<p class="text-xl font-bold sm:text-2xl">{assistant.name}</p>
{#if assistant.description}
<p class="line-clamp-6 text-sm text-gray-500 dark:text-gray-400">
{assistant.description}
</p>
{/if}
{#if hasRag}
<div
class="flex h-5 w-fit items-center gap-1 rounded-full bg-blue-500/10 pl-1 pr-2 text-xs"
title="This assistant uses web search."
>
<IconInternet classNames="text-sm text-blue-600" />
Has internet access
</div>
{/if}
{#if assistant.createdByName}
<p class="pt-1 text-sm text-gray-400 dark:text-gray-500">
Created by
<a class="hover:underline" href="{base}/assistants?user={assistant.createdByName}">
{assistant.createdByName}
</a>
{#if assistant.userCount && assistant.userCount > 1}
<span class="mx-1">·</span>
<div
class="inline-flex items-baseline gap-1 text-sm text-gray-400 dark:text-gray-500"
title="Number of users"
>
<CarbonUserMultiple class="text-xxs" />{formatUserCount(assistant.userCount)} users
</div>
{/if}
</p>
{/if}
</div>
</div>
<div class="absolute right-3 top-3 md:right-4 md:top-4">
<div class="flex flex-row items-center gap-1">
<button
class="flex h-7 items-center gap-1.5 rounded-full border bg-white px-2.5 py-1 text-gray-800 shadow-sm hover:shadow-inner max-sm:px-1.5 md:text-sm dark:border-gray-700 dark:bg-gray-700 dark:text-gray-300/90 dark:hover:bg-gray-800"
on:click={() => {
if (!isCopied) {
share(shareUrl, assistant.name);
isCopied = true;
setTimeout(() => {
isCopied = false;
}, 2000);
}
}}
>
{#if isCopied}
<CarbonCheckmark class="text-xxs text-green-600 max-sm:text-xs" />
<span class="text-green-600 max-sm:hidden"> Copied </span>
{:else}
<CarbonExport class="text-xxs max-sm:text-xs" />
<span class="max-sm:hidden"> Share </span>
{/if}
</button>
<a
href="{base}/settings/assistants/{assistant._id.toString()}"
class="flex h-7 items-center gap-1.5 rounded-full border bg-white px-2.5 py-1 text-gray-800 shadow-sm hover:shadow-inner md:text-sm dark:border-gray-700 dark:bg-gray-700 dark:text-gray-300/90 dark:hover:bg-gray-800"
><IconGear class="text-xxs" />Settings</a
>
</div>
</div>
</div>
{#if assistant.exampleInputs}
<div class="mx-auto mt-auto w-full gap-8 sm:-mb-8">
<div class="md:col-span-2 md:mt-6">
<div
class="grid grid-cols-1 gap-3 {assistant.exampleInputs.length > 1
? 'md:grid-cols-2'
: ''}"
>
{#each assistant.exampleInputs as example}
<button
type="button"
class="truncate whitespace-nowrap rounded-xl border bg-gray-50 px-3 py-2 text-left text-smd text-gray-600 hover:bg-gray-100 dark:border-gray-800 dark:bg-gray-800 dark:text-gray-300 dark:hover:bg-gray-700"
on:click={() => dispatch("message", example)}
>
{example}
</button>
{/each}
</div>
</div>
</div>
{/if}
</div>
|
chat-ui/src/lib/components/chat/AssistantIntroduction.svelte/0
|
{
"file_path": "chat-ui/src/lib/components/chat/AssistantIntroduction.svelte",
"repo_id": "chat-ui",
"token_count": 2390
}
| 53
|
import { afterEach, assert, describe, expect, it } from "vitest";
import { migrations } from "./routines";
import { acquireLock, isDBLocked, refreshLock, releaseLock } from "./lock";
import { collections } from "$lib/server/database";
const LOCK_KEY = "migrations";
describe("migrations", () => {
it("should not have duplicates guid", async () => {
const guids = migrations.map((m) => m._id.toString());
const uniqueGuids = [...new Set(guids)];
expect(uniqueGuids.length).toBe(guids.length);
});
it("should acquire only one lock on DB", async () => {
const results = await Promise.all(new Array(1000).fill(0).map(() => acquireLock(LOCK_KEY)));
const locks = results.filter((r) => r);
const semaphores = await collections.semaphores.find({}).toArray();
expect(locks.length).toBe(1);
expect(semaphores).toBeDefined();
expect(semaphores.length).toBe(1);
expect(semaphores?.[0].key).toBe("migrations");
});
it("should read the lock correctly", async () => {
const lockId = await acquireLock(LOCK_KEY);
assert(lockId);
expect(await isDBLocked(LOCK_KEY)).toBe(true);
expect(!!(await acquireLock(LOCK_KEY))).toBe(false);
await releaseLock(LOCK_KEY, lockId);
expect(await isDBLocked(LOCK_KEY)).toBe(false);
});
it("should refresh the lock", async () => {
const lockId = await acquireLock(LOCK_KEY);
assert(lockId);
// get the updatedAt time
const updatedAtInitially = (await collections.semaphores.findOne({}))?.updatedAt;
await refreshLock(LOCK_KEY, lockId);
const updatedAtAfterRefresh = (await collections.semaphores.findOne({}))?.updatedAt;
expect(updatedAtInitially).toBeDefined();
expect(updatedAtAfterRefresh).toBeDefined();
expect(updatedAtInitially).not.toBe(updatedAtAfterRefresh);
});
});
afterEach(async () => {
await collections.semaphores.deleteMany({});
await collections.migrationResults.deleteMany({});
});
|
chat-ui/src/lib/migrations/migrations.spec.ts/0
|
{
"file_path": "chat-ui/src/lib/migrations/migrations.spec.ts",
"repo_id": "chat-ui",
"token_count": 663
}
| 54
|
import type { Conversation } from "$lib/types/Conversation";
import type { TextGenerationStreamOutput } from "@huggingface/inference";
import { endpointTgi, endpointTgiParametersSchema } from "./tgi/endpointTgi";
import { z } from "zod";
import endpointAws, { endpointAwsParametersSchema } from "./aws/endpointAws";
import { endpointOAIParametersSchema, endpointOai } from "./openai/endpointOai";
import endpointLlamacpp, { endpointLlamacppParametersSchema } from "./llamacpp/endpointLlamacpp";
import endpointOllama, { endpointOllamaParametersSchema } from "./ollama/endpointOllama";
import endpointVertex, { endpointVertexParametersSchema } from "./google/endpointVertex";
import {
endpointAnthropic,
endpointAnthropicParametersSchema,
} from "./anthropic/endpointAnthropic";
import type { Model } from "$lib/types/Model";
import endpointCloudflare, {
endpointCloudflareParametersSchema,
} from "./cloudflare/endpointCloudflare";
import { endpointCohere, endpointCohereParametersSchema } from "./cohere/endpointCohere";
// parameters passed when generating text
export interface EndpointParameters {
messages: Omit<Conversation["messages"][0], "id">[];
preprompt?: Conversation["preprompt"];
continueMessage?: boolean; // used to signal that the last message will be extended
generateSettings?: Partial<Model["parameters"]>;
}
interface CommonEndpoint {
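// relative weight used when randomly picking one of a model's endpoints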
weight: number;
}
// type signature for the endpoint
export type Endpoint = (
params: EndpointParameters
) => Promise<AsyncGenerator<TextGenerationStreamOutput, void, void>>;
// generator function that takes in parameters for defining the endpoint and return the endpoint
export type EndpointGenerator<T extends CommonEndpoint> = (parameters: T) => Endpoint;
// list of all endpoint generators
export const endpoints = {
tgi: endpointTgi,
anthropic: endpointAnthropic,
aws: endpointAws,
openai: endpointOai,
llamacpp: endpointLlamacpp,
ollama: endpointOllama,
vertex: endpointVertex,
cloudflare: endpointCloudflare,
cohere: endpointCohere,
};
export const endpointSchema = z.discriminatedUnion("type", [
endpointAnthropicParametersSchema,
endpointAwsParametersSchema,
endpointOAIParametersSchema,
endpointTgiParametersSchema,
endpointLlamacppParametersSchema,
endpointOllamaParametersSchema,
endpointVertexParametersSchema,
endpointCloudflareParametersSchema,
endpointCohereParametersSchema,
]);
export default endpoints;
|
chat-ui/src/lib/server/endpoints/endpoints.ts/0
|
{
"file_path": "chat-ui/src/lib/server/endpoints/endpoints.ts",
"repo_id": "chat-ui",
"token_count": 725
}
| 55
|
import { LLM_SUMMERIZATION } from "$env/static/private";
import { generateFromDefaultEndpoint } from "$lib/server/generateFromDefaultEndpoint";
import type { Message } from "$lib/types/Message";
export async function summarize(prompt: string) {
if (!LLM_SUMMERIZATION) {
return prompt.split(/\s+/g).slice(0, 5).join(" ");
}
const messages: Array<Omit<Message, "id">> = [
{ from: "user", content: "Who is the president of Gabon?" },
{ from: "assistant", content: "🇬🇦 President of Gabon" },
{ from: "user", content: "Who is Julien Chaumond?" },
{ from: "assistant", content: "🧑 Julien Chaumond" },
{ from: "user", content: "what is 1 + 1?" },
{ from: "assistant", content: "🔢 Simple math operation" },
{ from: "user", content: "What are the latest news?" },
{ from: "assistant", content: "📰 Latest news" },
{ from: "user", content: "How to make a great cheesecake?" },
{ from: "assistant", content: "🍰 Cheesecake recipe" },
{ from: "user", content: "what is your favorite movie? do a short answer." },
{ from: "assistant", content: "🎥 Favorite movie" },
{ from: "user", content: "Explain the concept of artificial intelligence in one sentence" },
{ from: "assistant", content: "🤖 AI definition" },
{ from: "user", content: prompt },
];
return await generateFromDefaultEndpoint({
messages,
preprompt: `You are a summarization AI. You'll never answer a user's question directly, but instead summarize the user's request into a single short sentence of four words or less. Always start your answer with an emoji relevant to the summary.`,
})
.then((summary) => {
// add an emoji if none is found in the first three characters
if (!/\p{Emoji}/u.test(summary.slice(0, 3))) {
return "💬 " + summary;
}
return summary;
})
.catch((e) => {
console.error(e);
return null;
});
}
|
chat-ui/src/lib/server/summarize.ts/0
|
{
"file_path": "chat-ui/src/lib/server/summarize.ts",
"repo_id": "chat-ui",
"token_count": 638
}
| 56
|
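// Toggle between light and dark mode, keeping the theme-color meta tag and localStorage in sync.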
export function switchTheme() {
const { classList } = document.querySelector("html") as HTMLElement;
const metaTheme = document.querySelector('meta[name="theme-color"]') as HTMLMetaElement;
if (classList.contains("dark")) {
classList.remove("dark");
metaTheme.setAttribute("content", "rgb(249, 250, 251)");
localStorage.theme = "light";
} else {
classList.add("dark");
metaTheme.setAttribute("content", "rgb(26, 36, 50)");
localStorage.theme = "dark";
}
}
|
chat-ui/src/lib/switchTheme.ts/0
|
{
"file_path": "chat-ui/src/lib/switchTheme.ts",
"repo_id": "chat-ui",
"token_count": 164
}
| 57
|
import type { Conversation } from "./Conversation";
export type SharedConversation = Pick<
Conversation,
| "model"
| "embeddingModel"
| "title"
| "rootMessageId"
| "messages"
| "preprompt"
| "assistantId"
| "createdAt"
| "updatedAt"
> & {
_id: string;
hash: string;
};
|
chat-ui/src/lib/types/SharedConversation.ts/0
|
{
"file_path": "chat-ui/src/lib/types/SharedConversation.ts",
"repo_id": "chat-ui",
"token_count": 114
}
| 58
|
import type { Model } from "$lib/types/Model";
import { AutoTokenizer, PreTrainedTokenizer } from "@xenova/transformers";
export async function getTokenizer(_modelTokenizer: Exclude<Model["tokenizer"], undefined>) {
if (typeof _modelTokenizer === "string") {
// return auto tokenizer
return await AutoTokenizer.from_pretrained(_modelTokenizer);
} else {
// construct & return pretrained tokenizer
const { tokenizerUrl, tokenizerConfigUrl } = _modelTokenizer satisfies {
tokenizerUrl: string;
tokenizerConfigUrl: string;
};
const tokenizerJSON = await (await fetch(tokenizerUrl)).json();
const tokenizerConfig = await (await fetch(tokenizerConfigUrl)).json();
return new PreTrainedTokenizer(tokenizerJSON, tokenizerConfig);
}
}
|
chat-ui/src/lib/utils/getTokenizer.ts/0
|
{
"file_path": "chat-ui/src/lib/utils/getTokenizer.ts",
"repo_id": "chat-ui",
"token_count": 228
}
| 59
|
import { collections } from "$lib/server/database";
import { ObjectId } from "mongodb";
import { describe, expect, it } from "vitest";
import { insertLegacyConversation, insertSideBranchesConversation } from "./treeHelpers.spec";
import { addChildren } from "./addChildren";
import type { Message } from "$lib/types/Message";
const newMessage: Omit<Message, "id"> = {
content: "new message",
from: "user",
};
Object.freeze(newMessage);
describe("addChildren", async () => {
it("should let you append on legacy conversations", async () => {
const convId = await insertLegacyConversation();
const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) });
if (!conv) throw new Error("Conversation not found");
const convLength = conv.messages.length;
addChildren(conv, newMessage, conv.messages[conv.messages.length - 1].id);
expect(conv.messages.length).toEqual(convLength + 1);
});
it("should not let you create branches on legacy conversations", async () => {
const convId = await insertLegacyConversation();
const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) });
if (!conv) throw new Error("Conversation not found");
expect(() => addChildren(conv, newMessage, conv.messages[0].id)).toThrow();
});
it("should not let you create a message that already exists", async () => {
const convId = await insertLegacyConversation();
const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) });
if (!conv) throw new Error("Conversation not found");
const messageThatAlreadyExists: Message = {
id: conv.messages[0].id,
content: "new message",
from: "user",
};
expect(() => addChildren(conv, messageThatAlreadyExists, conv.messages[0].id)).toThrow();
});
it("should let you create branches on conversations with subtrees", async () => {
const convId = await insertSideBranchesConversation();
const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) });
if (!conv) throw new Error("Conversation not found");
const nChildren = conv.messages[0].children?.length;
if (!nChildren) throw new Error("No children found");
addChildren(conv, newMessage, conv.messages[0].id);
expect(conv.messages[0].children?.length).toEqual(nChildren + 1);
});
it("should let you create a new leaf", async () => {
const convId = await insertSideBranchesConversation();
const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) });
if (!conv) throw new Error("Conversation not found");
const parentId = conv.messages[conv.messages.length - 1].id;
const nChildren = conv.messages[conv.messages.length - 1].children?.length;
if (nChildren === undefined) throw new Error("No children found");
expect(nChildren).toEqual(0);
addChildren(conv, newMessage, parentId);
expect(conv.messages[conv.messages.length - 2].children?.length).toEqual(nChildren + 1);
});
it("should let you append to an empty conversation without specifying a parentId", async () => {
const conv = {
_id: new ObjectId(),
rootMessageId: undefined,
messages: [] as Message[],
};
addChildren(conv, newMessage);
expect(conv.messages.length).toEqual(1);
expect(conv.rootMessageId).toEqual(conv.messages[0].id);
});
it("should throw if you don't specify a parentId in a conversation with messages", async () => {
const convId = await insertLegacyConversation();
const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) });
if (!conv) throw new Error("Conversation not found");
expect(() => addChildren(conv, newMessage)).toThrow();
});
it("should return the id of the new message", async () => {
const convId = await insertLegacyConversation();
const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) });
if (!conv) throw new Error("Conversation not found");
expect(addChildren(conv, newMessage, conv.messages[conv.messages.length - 1].id)).toEqual(
conv.messages[conv.messages.length - 1].id
);
});
});
|
chat-ui/src/lib/utils/tree/addChildren.spec.ts/0
|
{
"file_path": "chat-ui/src/lib/utils/tree/addChildren.spec.ts",
"repo_id": "chat-ui",
"token_count": 1301
}
| 60
|
import { json } from "@sveltejs/kit";
import type { ConversationStats } from "$lib/types/ConversationStats";
import { CONVERSATION_STATS_COLLECTION, collections } from "$lib/server/database.js";
// Trigger like this:
// curl -X POST "http://localhost:5173/chat/admin/stats/compute" -H "Authorization: Bearer <ADMIN_API_SECRET>"
export async function POST() {
for (const span of ["day", "week", "month"] as const) {
computeStats({ dateField: "updatedAt", type: "conversation", span }).catch(console.error);
computeStats({ dateField: "createdAt", type: "conversation", span }).catch(console.error);
computeStats({ dateField: "createdAt", type: "message", span }).catch(console.error);
}
return json({}, { status: 202 });
}
async function computeStats(params: {
dateField: ConversationStats["date"]["field"];
span: ConversationStats["date"]["span"];
type: ConversationStats["type"];
}) {
const lastComputed = await collections.conversationStats.findOne(
{ "date.field": params.dateField, "date.span": params.span, type: params.type },
{ sort: { "date.at": -1 } }
);
// If the last computed week is at the beginning of the last computed month, we need to include some days from the previous month
// In those cases we need to compute the stats from before the last month as everything is one aggregation
const minDate = lastComputed ? lastComputed.date.at : new Date(0);
console.log("Computing stats for", params.type, params.span, params.dateField, "from", minDate);
const dateField = params.type === "message" ? "messages." + params.dateField : params.dateField;
const pipeline = [
{
$match: {
[dateField]: { $gte: minDate },
},
},
{
$project: {
[dateField]: 1,
sessionId: 1,
userId: 1,
},
},
...(params.type === "message"
? [
{
$unwind: "$messages",
},
{
$match: {
[dateField]: { $gte: minDate },
},
},
]
: []),
{
$sort: {
[dateField]: 1,
},
},
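// for each time bucket, count distinct users, sessions, user-or-session ids, and total documents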
{
$facet: {
userId: [
{
$match: {
userId: { $exists: true },
},
},
{
$group: {
_id: {
at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } },
userId: "$userId",
},
},
},
{
$group: {
_id: "$_id.at",
count: { $sum: 1 },
},
},
{
$project: {
_id: 0,
date: {
at: "$_id",
field: params.dateField,
span: params.span,
},
distinct: "userId",
count: 1,
},
},
],
sessionId: [
{
$match: {
sessionId: { $exists: true },
},
},
{
$group: {
_id: {
at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } },
sessionId: "$sessionId",
},
},
},
{
$group: {
_id: "$_id.at",
count: { $sum: 1 },
},
},
{
$project: {
_id: 0,
date: {
at: "$_id",
field: params.dateField,
span: params.span,
},
distinct: "sessionId",
count: 1,
},
},
],
userOrSessionId: [
{
$group: {
_id: {
at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } },
userOrSessionId: { $ifNull: ["$userId", "$sessionId"] },
},
},
},
{
$group: {
_id: "$_id.at",
count: { $sum: 1 },
},
},
{
$project: {
_id: 0,
date: {
at: "$_id",
field: params.dateField,
span: params.span,
},
distinct: "userOrSessionId",
count: 1,
},
},
],
_id: [
{
$group: {
_id: { $dateTrunc: { date: `$${dateField}`, unit: params.span } },
count: { $sum: 1 },
},
},
{
$project: {
_id: 0,
date: {
at: "$_id",
field: params.dateField,
span: params.span,
},
distinct: "_id",
count: 1,
},
},
],
},
},
{
$project: {
stats: {
$concatArrays: ["$userId", "$sessionId", "$userOrSessionId", "$_id"],
},
},
},
{
$unwind: "$stats",
},
{
$replaceRoot: {
newRoot: "$stats",
},
},
{
$set: {
type: params.type,
},
},
{
$merge: {
into: CONVERSATION_STATS_COLLECTION,
on: ["date.at", "type", "date.span", "date.field", "distinct"],
whenMatched: "replace",
whenNotMatched: "insert",
},
},
];
await collections.conversations.aggregate(pipeline, { allowDiskUse: true }).next();
console.log("Computed stats for", params.type, params.span, params.dateField);
}
|
chat-ui/src/routes/admin/stats/compute/+server.ts/0
|
{
"file_path": "chat-ui/src/routes/admin/stats/compute/+server.ts",
"repo_id": "chat-ui",
"token_count": 2379
}
| 61
|
import { MESSAGES_BEFORE_LOGIN, ENABLE_ASSISTANTS_RAG } from "$env/static/private";
import { startOfHour } from "date-fns";
import { authCondition, requiresUser } from "$lib/server/auth";
import { collections } from "$lib/server/database";
import { models } from "$lib/server/models";
import { ERROR_MESSAGES } from "$lib/stores/errors";
import type { Message } from "$lib/types/Message";
import { error } from "@sveltejs/kit";
import { ObjectId } from "mongodb";
import { z } from "zod";
import type { MessageUpdate } from "$lib/types/MessageUpdate";
import { runWebSearch } from "$lib/server/websearch/runWebSearch";
import { abortedGenerations } from "$lib/server/abortedGenerations";
import { summarize } from "$lib/server/summarize";
import { uploadFile } from "$lib/server/files/uploadFile";
import sizeof from "image-size";
import type { Assistant } from "$lib/types/Assistant";
import { convertLegacyConversation } from "$lib/utils/tree/convertLegacyConversation";
import { isMessageId } from "$lib/utils/tree/isMessageId";
import { buildSubtree } from "$lib/utils/tree/buildSubtree.js";
import { addChildren } from "$lib/utils/tree/addChildren.js";
import { addSibling } from "$lib/utils/tree/addSibling.js";
import { preprocessMessages } from "$lib/server/preprocessMessages.js";
import { usageLimits } from "$lib/server/usageLimits";
import { isURLLocal } from "$lib/server/isURLLocal.js";
export async function POST({ request, locals, params, getClientAddress }) {
const id = z.string().parse(params.id);
const convId = new ObjectId(id);
const promptedAt = new Date();
const userId = locals.user?._id ?? locals.sessionId;
// check user
if (!userId) {
throw error(401, "Unauthorized");
}
// check if the user has access to the conversation
const convBeforeCheck = await collections.conversations.findOne({
_id: convId,
...authCondition(locals),
});
if (convBeforeCheck && !convBeforeCheck.rootMessageId) {
const res = await collections.conversations.updateOne(
{
_id: convId,
},
{
$set: {
...convBeforeCheck,
...convertLegacyConversation(convBeforeCheck),
},
}
);
if (!res.acknowledged) {
throw error(500, "Failed to convert conversation");
}
}
const conv = await collections.conversations.findOne({
_id: convId,
...authCondition(locals),
});
if (!conv) {
throw error(404, "Conversation not found");
}
// register the event for ratelimiting
await collections.messageEvents.insertOne({
userId,
createdAt: new Date(),
ip: getClientAddress(),
});
const messagesBeforeLogin = MESSAGES_BEFORE_LOGIN ? parseInt(MESSAGES_BEFORE_LOGIN) : 0;
// guest mode check
if (!locals.user?._id && requiresUser && messagesBeforeLogin) {
const totalMessages =
(
await collections.conversations
.aggregate([
{ $match: { ...authCondition(locals), "messages.from": "assistant" } },
{ $project: { messages: 1 } },
{ $limit: messagesBeforeLogin + 1 },
{ $unwind: "$messages" },
{ $match: { "messages.from": "assistant" } },
{ $count: "messages" },
])
.toArray()
)[0]?.messages ?? 0;
if (totalMessages > messagesBeforeLogin) {
throw error(429, "Exceeded number of messages before login");
}
}
if (usageLimits?.messagesPerMinute) {
// check if the user is rate limited
const nEvents = Math.max(
await collections.messageEvents.countDocuments({ userId }),
await collections.messageEvents.countDocuments({ ip: getClientAddress() })
);
if (nEvents > usageLimits.messagesPerMinute) {
throw error(429, ERROR_MESSAGES.rateLimited);
}
}
if (usageLimits?.messages && conv.messages.length > usageLimits.messages) {
throw error(
429,
`This conversation has more than ${usageLimits.messages} messages. Start a new one to continue`
);
}
// fetch the model
const model = models.find((m) => m.id === conv.model);
if (!model) {
throw error(410, "Model not available anymore");
}
// finally parse the content of the request
const json = await request.json();
const {
inputs: newPrompt,
id: messageId,
is_retry: isRetry,
is_continue: isContinue,
web_search: webSearch,
files: b64files,
} = z
.object({
id: z.string().uuid().refine(isMessageId).optional(), // parent message id to append to for a normal message, or the message id for a retry/continue
inputs: z.optional(
z
.string()
.trim()
.min(1)
.transform((s) => s.replace(/\r\n/g, "\n"))
),
is_retry: z.optional(z.boolean()),
is_continue: z.optional(z.boolean()),
web_search: z.optional(z.boolean()),
files: z.optional(z.array(z.string())),
})
.parse(json);
if (usageLimits?.messageLength && (newPrompt?.length ?? 0) > usageLimits.messageLength) {
throw error(400, "Message too long.");
}
// files is an array of base64 strings encoding Blob objects
// we need to convert this array to an array of File objects
const files = b64files?.map((file) => {
const blob = Buffer.from(file, "base64");
return new File([blob], "image.png");
});
// check sizes
if (files) {
const filechecks = await Promise.all(
files.map(async (file) => {
const dimensions = sizeof(Buffer.from(await file.arrayBuffer()));
return (
file.size > 2 * 1024 * 1024 ||
(dimensions.width ?? 0) > 224 ||
(dimensions.height ?? 0) > 224
);
})
);
if (filechecks.some((check) => check)) {
throw error(413, "File too large, should be <2MB and 224x224 max.");
}
}
let hashes: undefined | string[];
if (files) {
hashes = await Promise.all(files.map(async (file) => await uploadFile(file, conv)));
}
// we will append tokens to the content of this message
let messageToWriteToId: Message["id"] | undefined = undefined;
// used for building the prompt, subtree of the conversation that goes from the latest message to the root
let messagesForPrompt: Message[] = [];
if (isContinue && messageId) {
// if it's the last message and we continue then we build the prompt up to the last message
// we will strip the end tokens afterwards when the prompt is built
if ((conv.messages.find((msg) => msg.id === messageId)?.children?.length ?? 0) > 0) {
throw error(400, "Can only continue the last message");
}
messageToWriteToId = messageId;
messagesForPrompt = buildSubtree(conv, messageId);
} else if (isRetry && messageId) {
// two cases, if we're retrying a user message with a newPrompt set,
// it means we're editing a user message
// if we're retrying on an assistant message, newPrompt cannot be set
// it means we're retrying the last assistant message for a new answer
const messageToRetry = conv.messages.find((message) => message.id === messageId);
if (!messageToRetry) {
throw error(404, "Message not found");
}
if (messageToRetry.from === "user" && newPrompt) {
// add a sibling to this message from the user, with the alternative prompt
// add a children to that sibling, where we can write to
const newUserMessageId = addSibling(conv, { from: "user", content: newPrompt }, messageId);
messageToWriteToId = addChildren(
conv,
{ from: "assistant", content: "", files: hashes },
newUserMessageId
);
messagesForPrompt = buildSubtree(conv, newUserMessageId);
} else if (messageToRetry.from === "assistant") {
// we're retrying an assistant message, to generate a new answer
// just add a sibling to the assistant answer where we can write to
messageToWriteToId = addSibling(conv, { from: "assistant", content: "" }, messageId);
messagesForPrompt = buildSubtree(conv, messageId);
messagesForPrompt.pop(); // don't need the latest assistant message in the prompt since we're retrying it
}
} else {
// just a normal linear conversation, so we add the user message
// and the blank assistant message back to back
const newUserMessageId = addChildren(
conv,
{
from: "user",
content: newPrompt ?? "",
files: hashes,
createdAt: new Date(),
updatedAt: new Date(),
},
messageId
);
messageToWriteToId = addChildren(
conv,
{
from: "assistant",
content: "",
createdAt: new Date(),
updatedAt: new Date(),
},
newUserMessageId
);
// build the prompt from the user message
messagesForPrompt = buildSubtree(conv, newUserMessageId);
}
const messageToWriteTo = conv.messages.find((message) => message.id === messageToWriteToId);
if (!messageToWriteTo) {
throw error(500, "Failed to create message");
}
if (messagesForPrompt.length === 0) {
throw error(500, "Failed to create prompt");
}
// update the conversation with the new messages
await collections.conversations.updateOne(
{
_id: convId,
},
{
$set: {
messages: conv.messages,
title: conv.title,
updatedAt: new Date(),
},
}
);
let doneStreaming = false;
// we now build the stream
const stream = new ReadableStream({
async start(controller) {
messageToWriteTo.updates ??= [];
function update(newUpdate: MessageUpdate) {
if (newUpdate.type !== "stream") {
messageToWriteTo?.updates?.push(newUpdate);
}
if (newUpdate.type === "stream" && newUpdate.token === "") {
return;
}
controller.enqueue(JSON.stringify(newUpdate) + "\n");
if (newUpdate.type === "finalAnswer") {
// enqueue 4096 spaces so the browser flushes the response instead of buffering the final answer
controller.enqueue(" ".repeat(4096));
}
}
update({ type: "status", status: "started" });
const summarizeIfNeeded = (async () => {
if (conv.title === "New Chat" && conv.messages.length === 3) {
try {
conv.title = (await summarize(conv.messages[1].content)) ?? conv.title;
update({ type: "status", status: "title", message: conv.title });
await collections.conversations.updateOne(
{
_id: convId,
},
{
$set: {
title: conv?.title,
updatedAt: new Date(),
},
}
);
} catch (e) {
console.error(e);
}
}
})();
await collections.conversations.updateOne(
{
_id: convId,
},
{
$set: {
title: conv.title,
updatedAt: new Date(),
},
}
);
// check if assistant has a rag
const assistant = await collections.assistants.findOne<
Pick<Assistant, "rag" | "dynamicPrompt" | "generateSettings">
>(
{ _id: conv.assistantId },
{ projection: { rag: 1, dynamicPrompt: 1, generateSettings: 1 } }
);
const assistantHasRAG =
ENABLE_ASSISTANTS_RAG === "true" &&
assistant &&
((assistant.rag &&
(assistant.rag.allowedLinks.length > 0 ||
assistant.rag.allowedDomains.length > 0 ||
assistant.rag.allowAllDomains)) ||
assistant.dynamicPrompt);
// perform websearch if needed
if (
!isContinue &&
((webSearch && !conv.assistantId) || (assistantHasRAG && !assistant.dynamicPrompt))
) {
messageToWriteTo.webSearch = await runWebSearch(
conv,
messagesForPrompt,
update,
assistant?.rag
);
}
let preprompt = conv.preprompt;
if (assistant?.dynamicPrompt && preprompt && ENABLE_ASSISTANTS_RAG === "true") {
// process the preprompt
const urlRegex = /{{\s?url=(.*?)\s?}}/g;
let match;
while ((match = urlRegex.exec(preprompt)) !== null) {
try {
const url = new URL(match[1]);
if (await isURLLocal(url)) {
throw new Error("URL couldn't be fetched, it resolved to a local address.");
}
const res = await fetch(url.href);
if (!res.ok) {
throw new Error("URL couldn't be fetched, error " + res.status);
}
const text = await res.text();
preprompt = preprompt.replaceAll(match[0], text);
} catch (e) {
preprompt = preprompt.replaceAll(match[0], (e as Error).message);
}
}
if (messagesForPrompt[0].from === "system") {
messagesForPrompt[0].content = preprompt;
}
}
// inject websearch result & optionally images into the messages
const processedMessages = await preprocessMessages(
messagesForPrompt,
messageToWriteTo.webSearch,
model.multimodal,
convId
);
const previousText = messageToWriteTo.content;
let hasError = false;
let buffer = "";
try {
const endpoint = await model.getEndpoint();
for await (const output of await endpoint({
messages: processedMessages,
preprompt,
continueMessage: isContinue,
generateSettings: assistant?.generateSettings,
})) {
// if generated_text is not set, the generation is not done yet
if (!output.generated_text) {
if (!output.token.special) {
// 33% chance to send the stream update, with a max buffer size of 30 chars
buffer += output.token.text;
if (Math.random() < 0.33 || buffer.length > 30) {
update({
type: "stream",
token: buffer,
});
buffer = "";
}
// abort check
const date = abortedGenerations.get(convId.toString());
if (date && date > promptedAt) {
break;
}
// no output check
if (!output) {
break;
}
// otherwise we just concatenate tokens
messageToWriteTo.content += output.token.text;
}
} else {
messageToWriteTo.interrupted = !output.token.special;
// append output.generated_text to the last message
// strip end tokens from the output.generated_text
const text = (model.parameters.stop ?? []).reduce((acc: string, curr: string) => {
if (acc.endsWith(curr)) {
messageToWriteTo.interrupted = false;
return acc.slice(0, acc.length - curr.length);
}
return acc;
}, output.generated_text.trimEnd());
messageToWriteTo.content = previousText + text;
messageToWriteTo.updatedAt = new Date();
}
}
} catch (e) {
hasError = true;
update({ type: "status", status: "error", message: (e as Error).message });
} finally {
// check if no output was generated
if (!hasError && messageToWriteTo.content === previousText) {
update({
type: "status",
status: "error",
message: "No output was generated. Something went wrong.",
});
}
if (buffer) {
update({
type: "stream",
token: buffer,
});
}
}
await collections.conversations.updateOne(
{
_id: convId,
},
{
$set: {
messages: conv.messages,
title: conv?.title,
updatedAt: new Date(),
},
}
);
// used to detect if cancel() is called because of an interrupt or just because the connection closed
doneStreaming = true;
update({
type: "finalAnswer",
text: messageToWriteTo.content,
});
await summarizeIfNeeded;
controller.close();
return;
},
async cancel() {
if (!doneStreaming) {
await collections.conversations.updateOne(
{
_id: convId,
},
{
$set: {
messages: conv.messages,
title: conv.title,
updatedAt: new Date(),
},
}
);
}
},
});
if (conv.assistantId) {
await collections.assistantStats.updateOne(
{ assistantId: conv.assistantId, "date.at": startOfHour(new Date()), "date.span": "hour" },
{ $inc: { count: 1 } },
{ upsert: true }
);
}
// Todo: maybe we should wait for the message to be saved before ending the response - in case of errors
return new Response(stream, {
headers: {
"Content-Type": "text/event-stream",
},
});
}
export async function DELETE({ locals, params }) {
const convId = new ObjectId(params.id);
const conv = await collections.conversations.findOne({
_id: convId,
...authCondition(locals),
});
if (!conv) {
throw error(404, "Conversation not found");
}
await collections.conversations.deleteOne({ _id: conv._id });
return new Response();
}
export async function PATCH({ request, locals, params }) {
const { title } = z
.object({ title: z.string().trim().min(1).max(100) })
.parse(await request.json());
const convId = new ObjectId(params.id);
const conv = await collections.conversations.findOne({
_id: convId,
...authCondition(locals),
});
if (!conv) {
throw error(404, "Conversation not found");
}
await collections.conversations.updateOne(
{
_id: convId,
},
{
$set: {
title,
},
}
);
return new Response();
}
|
chat-ui/src/routes/conversation/[id]/+server.ts/0
|
{
"file_path": "chat-ui/src/routes/conversation/[id]/+server.ts",
"repo_id": "chat-ui",
"token_count": 6374
}
| 62
|
<script lang="ts">
import { PUBLIC_APP_COLOR } from "$env/static/public";
import { isHuggingChat } from "$lib/utils/isHuggingChat";
export let name: string;
export let logoUrl: string | undefined;
import logo from "../../../../../static/huggingchat/logo.svg?raw";
</script>
<div class="flex h-[648px] w-full flex-col items-center bg-white">
<div class="flex flex-1 flex-col items-center justify-center">
{#if logoUrl}
<img class="h-48 w-48" src={logoUrl} alt="avatar" />
{/if}
<h1 class="m-0 text-5xl font-bold text-black">
{name}
</h1>
</div>
<div
class="flex h-[200px] w-full flex-col items-center justify-center rounded-b-none bg-{PUBLIC_APP_COLOR}-500/10 pb-10 pt-10 text-4xl text-gray-500"
style="border-radius: 100% 100% 0 0;"
>
Try it now
{#if isHuggingChat}
on
{/if}
{#if isHuggingChat}
<div class="flex flex-row pt-3 text-5xl font-bold text-black">
<div class="mr-5 flex items-center justify-center" id="logo">
<!-- eslint-disable-next-line -->
{@html logo}
</div>
<span>HuggingChat</span>
</div>
{/if}
</div>
</div>
|
chat-ui/src/routes/models/[...model]/thumbnail.png/ModelThumbnail.svelte/0
|
{
"file_path": "chat-ui/src/routes/models/[...model]/thumbnail.png/ModelThumbnail.svelte",
"repo_id": "chat-ui",
"token_count": 475
}
| 63
|
<script lang="ts">
import type { ActionData, PageData } from "./$types";
import AssistantSettings from "$lib/components/AssistantSettings.svelte";
export let data: PageData;
export let form: ActionData;
</script>
<AssistantSettings bind:form models={data.models} />
|
chat-ui/src/routes/settings/(nav)/assistants/new/+page@settings.svelte/0
|
{
"file_path": "chat-ui/src/routes/settings/(nav)/assistants/new/+page@settings.svelte",
"repo_id": "chat-ui",
"token_count": 80
}
| 64
|
{
"background_color": "#ffffff",
"name": "HuggingChat",
"short_name": "HuggingChat",
"display": "standalone",
"start_url": "/chat",
"icons": [
{
"src": "/chat/huggingchat/icon-128x128.png",
"sizes": "128x128",
"type": "image/png"
},
{
"src": "/chat/huggingchat/icon-256x256.png",
"sizes": "256x256",
"type": "image/png"
},
{
"src": "/chat/huggingchat/icon-512x512.png",
"sizes": "512x512",
"type": "image/png"
}
]
}
|
chat-ui/static/huggingchat/manifest.json/0
|
{
"file_path": "chat-ui/static/huggingchat/manifest.json",
"repo_id": "chat-ui",
"token_count": 233
}
| 65
|
import json
import os
import tempfile
import datasets
from datasets.arrow_writer import ArrowWriter
from datasets.features import Array2D
from utils import generate_examples, get_duration
SHAPE_TEST_1 = (30, 487)
SHAPE_TEST_2 = (36, 1024)
SPEED_TEST_SHAPE = (100, 100)
SPEED_TEST_N_EXAMPLES = 100
DEFAULT_FEATURES = datasets.Features(
{"text": Array2D(SHAPE_TEST_1, dtype="float32"), "image": Array2D(SHAPE_TEST_2, dtype="float32")}
)
RESULTS_BASEPATH, RESULTS_FILENAME = os.path.split(__file__)
RESULTS_FILE_PATH = os.path.join(RESULTS_BASEPATH, "results", RESULTS_FILENAME.replace(".py", ".json"))
@get_duration
def write(my_features, dummy_data, tmp_dir):
with ArrowWriter(features=my_features, path=os.path.join(tmp_dir, "beta.arrow")) as writer:
for key, record in dummy_data:
example = my_features.encode_example(record)
writer.write(example)
num_examples, num_bytes = writer.finalize()
@get_duration
def read_unformated(feats, tmp_dir):
dataset = datasets.Dataset.from_file(
filename=os.path.join(tmp_dir, "beta.arrow"), info=datasets.DatasetInfo(features=feats)
)
for _ in dataset:
pass
@get_duration
def read_formatted_as_numpy(feats, tmp_dir):
dataset = datasets.Dataset.from_file(
filename=os.path.join(tmp_dir, "beta.arrow"), info=datasets.DatasetInfo(features=feats)
)
dataset.set_format("numpy")
for _ in dataset:
pass
@get_duration
def read_batch_unformated(feats, tmp_dir):
batch_size = 10
dataset = datasets.Dataset.from_file(
filename=os.path.join(tmp_dir, "beta.arrow"), info=datasets.DatasetInfo(features=feats)
)
for i in range(0, len(dataset), batch_size):
_ = dataset[i : i + batch_size]
@get_duration
def read_batch_formatted_as_numpy(feats, tmp_dir):
batch_size = 10
dataset = datasets.Dataset.from_file(
filename=os.path.join(tmp_dir, "beta.arrow"), info=datasets.DatasetInfo(features=feats)
)
dataset.set_format("numpy")
for i in range(0, len(dataset), batch_size):
_ = dataset[i : i + batch_size]
@get_duration
def read_col_unformated(feats, tmp_dir):
dataset = datasets.Dataset.from_file(
filename=os.path.join(tmp_dir, "beta.arrow"), info=datasets.DatasetInfo(features=feats)
)
for col in feats:
_ = dataset[col]
@get_duration
def read_col_formatted_as_numpy(feats, tmp_dir):
dataset = datasets.Dataset.from_file(
filename=os.path.join(tmp_dir, "beta.arrow"), info=datasets.DatasetInfo(features=feats)
)
dataset.set_format("numpy")
for col in feats:
_ = dataset[col]
def benchmark_array_xd():
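    # time writing and reading the same 100x100 float32 data stored as Array2D,
    # as a nested Sequence of Sequences, and as a flattened Sequence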
times = {}
read_functions = (
read_unformated,
read_formatted_as_numpy,
read_batch_unformated,
read_batch_formatted_as_numpy,
read_col_unformated,
read_col_formatted_as_numpy,
)
with tempfile.TemporaryDirectory() as tmp_dir:
feats = datasets.Features({"image": Array2D(SPEED_TEST_SHAPE, dtype="float32")})
data = generate_examples(features=feats, num_examples=SPEED_TEST_N_EXAMPLES)
times["write_array2d"] = write(feats, data, tmp_dir)
for read_func in read_functions:
times[read_func.__name__ + " after write_array2d"] = read_func(feats, tmp_dir)
with tempfile.TemporaryDirectory() as tmp_dir:
# don't use fixed length for fair comparison
# feats = datasets.Features(
# {"image": datasets.Sequence(datasets.Sequence(datasets.Value("float32"), SPEED_TEST_SHAPE[1]), SPEED_TEST_SHAPE[0])}
# )
feats = datasets.Features({"image": datasets.Sequence(datasets.Sequence(datasets.Value("float32")))})
data = generate_examples(
features=feats, num_examples=SPEED_TEST_N_EXAMPLES, seq_shapes={"image": SPEED_TEST_SHAPE}
)
times["write_nested_sequence"] = write(feats, data, tmp_dir)
for read_func in read_functions:
times[read_func.__name__ + " after write_nested_sequence"] = read_func(feats, tmp_dir)
with tempfile.TemporaryDirectory() as tmp_dir:
# don't use fixed length for fair comparison
# feats = datasets.Features(
# {"image": datasets.Sequence(datasets.Value("float32"), SPEED_TEST_SHAPE[0] * SPEED_TEST_SHAPE[1])}
# )
feats = datasets.Features({"image": datasets.Sequence(datasets.Value("float32"))})
data = generate_examples(
features=feats,
num_examples=SPEED_TEST_N_EXAMPLES,
seq_shapes={"image": [SPEED_TEST_SHAPE[0] * SPEED_TEST_SHAPE[1]]},
)
times["write_flattened_sequence"] = write(feats, data, tmp_dir)
for read_func in read_functions:
times[read_func.__name__ + " after write_flattened_sequence"] = read_func(feats, tmp_dir)
with open(RESULTS_FILE_PATH, "wb") as f:
f.write(json.dumps(times).encode("utf-8"))
if __name__ == "__main__": # useful to run the profiler
benchmark_array_xd()
|
datasets/benchmarks/benchmark_array_xd.py/0
|
{
"file_path": "datasets/benchmarks/benchmark_array_xd.py",
"repo_id": "datasets",
"token_count": 2176
}
| 66
|
- sections:
- local: index
title: 🤗 Datasets
- local: quickstart
title: Quickstart
- local: installation
title: Installation
title: Get started
- sections:
- local: tutorial
title: Overview
- local: load_hub
title: Load a dataset from the Hub
- local: access
title: Know your dataset
- local: use_dataset
title: Preprocess
- local: metrics
title: Evaluate predictions
- local: create_dataset
title: Create a dataset
- local: upload_dataset
title: Share a dataset to the Hub
title: "Tutorials"
- sections:
- local: how_to
title: Overview
- sections:
- local: loading
title: Load
- local: process
title: Process
- local: stream
title: Stream
- local: use_with_tensorflow
title: Use with TensorFlow
- local: use_with_pytorch
title: Use with PyTorch
- local: use_with_jax
title: Use with JAX
- local: use_with_spark
title: Use with Spark
- local: cache
title: Cache management
- local: filesystems
title: Cloud storage
- local: faiss_es
title: Search index
- local: how_to_metrics
title: Metrics
- local: beam
title: Beam Datasets
- local: troubleshoot
title: Troubleshooting
title: "General usage"
- sections:
- local: audio_load
title: Load audio data
- local: audio_process
title: Process audio data
- local: audio_dataset
title: Create an audio dataset
title: "Audio"
- sections:
- local: image_load
title: Load image data
- local: image_process
title: Process image data
- local: image_dataset
title: Create an image dataset
- local: depth_estimation
title: Depth estimation
- local: image_classification
title: Image classification
- local: semantic_segmentation
title: Semantic segmentation
- local: object_detection
title: Object detection
title: "Vision"
- sections:
- local: nlp_load
title: Load text data
- local: nlp_process
title: Process text data
title: "Text"
- sections:
- local: tabular_load
title: Load tabular data
title: "Tabular"
- sections:
- local: share
title: Share
- local: dataset_card
title: Create a dataset card
- local: repository_structure
title: Structure your repository
- local: dataset_script
title: Create a dataset loading script
title: "Dataset repository"
title: "How-to guides"
- sections:
- local: about_arrow
title: Datasets 🤝 Arrow
- local: about_cache
title: The cache
- local: about_mapstyle_vs_iterable
title: Dataset or IterableDataset
- local: about_dataset_features
title: Dataset features
- local: about_dataset_load
title: Build and load
- local: about_map_batch
title: Batch mapping
- local: about_metrics
title: All about metrics
title: "Conceptual guides"
- sections:
- local: package_reference/main_classes
title: Main classes
- local: package_reference/builder_classes
title: Builder classes
- local: package_reference/loading_methods
title: Loading methods
- local: package_reference/table_classes
title: Table Classes
- local: package_reference/utilities
title: Utilities
- local: package_reference/task_templates
title: Task templates
title: "Reference"
|
datasets/docs/source/_toctree.yml/0
|
{
"file_path": "datasets/docs/source/_toctree.yml",
"repo_id": "datasets",
"token_count": 1247
}
| 67
|
# Create a dataset loading script
<Tip>
The dataset loading script is likely not needed if your dataset is in one of the following formats: CSV, JSON, JSON lines, text, images, audio or Parquet.
With those formats, you should be able to load your dataset automatically with [`~datasets.load_dataset`],
as long as your dataset repository has a [required structure](./repository_structure).
</Tip>
<Tip warning={true}>
In the next major release, the new safety features of 🤗 Datasets will disable running dataset loading scripts by default, and you will have to pass `trust_remote_code=True` to load datasets that require running a dataset script.
</Tip>
Write a dataset script to load and share datasets that consist of data files in unsupported formats or require more complex data preparation.
This is a more advanced way to define a dataset than using [YAML metadata in the dataset card](./repository_structure#define-your-splits-in-yaml).
A dataset script is a Python file that defines the different configurations and splits of your dataset, as well as how to download and process the data.
The script can download data files from any website, or from the same dataset repository.
A dataset loading script should have the same name as the dataset repository or directory. For example, a repository named `my_dataset` should contain the `my_dataset.py` script. This way it can be loaded with:
```
my_dataset/
├── README.md
└── my_dataset.py
```
```py
>>> from datasets import load_dataset
>>> load_dataset("path/to/my_dataset")
```
The following guide includes instructions for how to:
- Add dataset metadata.
- Download data files.
- Generate samples.
- Generate dataset metadata.
- Upload a dataset to the Hub.
Open the [SQuAD dataset loading script](https://huggingface.co/datasets/squad/blob/main/squad.py) template to follow along on how to share a dataset.
<Tip>
To help you get started, try beginning with the dataset loading script [template](https://github.com/huggingface/datasets/blob/main/templates/new_dataset_script.py)!
</Tip>
## Add dataset attributes
The first step is to add some information, or attributes, about your dataset in [`DatasetBuilder._info`]. The most important attributes you should specify are:
1. `DatasetInfo.description` provides a concise description of your dataset. The description informs the user what's in the dataset, how it was collected, and how it can be used for an NLP task.
2. `DatasetInfo.features` defines the name and type of each column in your dataset. This will also provide the structure for each example, so it is possible to create nested subfields in a column if you want. Take a look at [`Features`] for a full list of feature types you can use.
```py
datasets.Features(
{
"id": datasets.Value("string"),
"title": datasets.Value("string"),
"context": datasets.Value("string"),
"question": datasets.Value("string"),
"answers": datasets.Sequence(
{
"text": datasets.Value("string"),
"answer_start": datasets.Value("int32"),
}
),
}
)
```
3. `DatasetInfo.homepage` contains the URL to the dataset homepage so users can find more details about the dataset.
4. `DatasetInfo.citation` contains a BibTeX citation for the dataset.
After you've filled out all these fields in the template, it should look like the following example from the SQuAD loading script:
```py
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"id": datasets.Value("string"),
"title": datasets.Value("string"),
"context": datasets.Value("string"),
"question": datasets.Value("string"),
"answers": datasets.features.Sequence(
{"text": datasets.Value("string"), "answer_start": datasets.Value("int32"),}
),
}
),
# No default supervised_keys (as we have to pass both question
# and context as input).
supervised_keys=None,
homepage="https://rajpurkar.github.io/SQuAD-explorer/",
citation=_CITATION,
)
```
### Multiple configurations
In some cases, your dataset may have multiple configurations. For example, the [SuperGLUE](https://huggingface.co/datasets/super_glue) dataset is a collection of 5 datasets designed to evaluate language understanding tasks. 🤗 Datasets provides [`BuilderConfig`] which allows you to create different configurations for the user to select from.
Let's study the [SuperGLUE loading script](https://huggingface.co/datasets/super_glue/blob/main/super_glue.py) to see how you can define several configurations.
1. Create a [`BuilderConfig`] subclass with attributes about your dataset. These attributes can be the features of your dataset, label classes, and a URL to the data files.
```py
class SuperGlueConfig(datasets.BuilderConfig):
"""BuilderConfig for SuperGLUE."""
def __init__(self, features, data_url, citation, url, label_classes=("False", "True"), **kwargs):
"""BuilderConfig for SuperGLUE.
Args:
features: *list[string]*, list of the features that will appear in the
feature dict. Should not include "label".
data_url: *string*, url to download the zip file from.
citation: *string*, citation for the data set.
url: *string*, url for information about the data set.
label_classes: *list[string]*, the list of classes for the label if the
label is present as a string. Non-string labels will be cast to either
'False' or 'True'.
**kwargs: keyword arguments forwarded to super.
"""
# Version history:
# 1.0.2: Fixed non-nondeterminism in ReCoRD.
# 1.0.1: Change from the pre-release trial version of SuperGLUE (v1.9) to
# the full release (v2.0).
# 1.0.0: S3 (new shuffling, sharding and slicing mechanism).
# 0.0.2: Initial version.
super().__init__(version=datasets.Version("1.0.2"), **kwargs)
self.features = features
self.label_classes = label_classes
self.data_url = data_url
self.citation = citation
self.url = url
```
2. Create instances of your config to specify the values of the attributes of each configuration. This gives you the flexibility to specify the name and description of each configuration. These subclass instances should be listed under `DatasetBuilder.BUILDER_CONFIGS`:
```py
class SuperGlue(datasets.GeneratorBasedBuilder):
"""The SuperGLUE benchmark."""
BUILDER_CONFIG_CLASS = SuperGlueConfig
BUILDER_CONFIGS = [
SuperGlueConfig(
name="boolq",
description=_BOOLQ_DESCRIPTION,
features=["question", "passage"],
data_url="https://dl.fbaipublicfiles.com/glue/superglue/data/v2/BoolQ.zip",
citation=_BOOLQ_CITATION,
url="https://github.com/google-research-datasets/boolean-questions",
),
...
...
SuperGlueConfig(
name="axg",
description=_AXG_DESCRIPTION,
features=["premise", "hypothesis"],
label_classes=["entailment", "not_entailment"],
data_url="https://dl.fbaipublicfiles.com/glue/superglue/data/v2/AX-g.zip",
citation=_AXG_CITATION,
url="https://github.com/rudinger/winogender-schemas",
),
```
3. Now, users can load a specific configuration of the dataset with the configuration `name`:
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset('super_glue', 'boolq')
```
Additionally, users can instantiate a custom builder configuration by passing the builder configuration arguments to [`load_dataset`]:
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset('super_glue', data_url="https://custom_url")
```
### Default configurations
Users must specify a configuration name when they load a dataset with multiple configurations. Otherwise, 🤗 Datasets will raise a `ValueError`, and prompt the user to select a configuration name. You can avoid this by setting a default dataset configuration with the `DEFAULT_CONFIG_NAME` attribute:
```py
class NewDataset(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.1.0")
BUILDER_CONFIGS = [
datasets.BuilderConfig(name="first_domain", version=VERSION, description="This part of my dataset covers a first domain"),
datasets.BuilderConfig(name="second_domain", version=VERSION, description="This part of my dataset covers a second domain"),
]
DEFAULT_CONFIG_NAME = "first_domain"
```
<Tip warning={true}>
Only use a default configuration when it makes sense. Don't set one simply because it may be more convenient for the user not to have to specify a configuration when they load your dataset. For example, multi-lingual datasets often have a separate configuration for each language. An appropriate default may be an aggregated configuration that loads all the languages of the dataset if the user doesn't request a particular one.
</Tip>
## Download data files and organize splits
After you've defined the attributes of your dataset, the next step is to download the data files and organize them according to their splits.
1. Create a dictionary of URLs in the loading script that point to the original SQuAD data files:
```py
_URL = "https://rajpurkar.github.io/SQuAD-explorer/dataset/"
_URLS = {
"train": _URL + "train-v1.1.json",
"dev": _URL + "dev-v1.1.json",
}
```
<Tip>
If the data files live in the same folder or repository of the dataset script, you can just pass the relative paths to the files instead of URLs.
</Tip>
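For example, a script stored alongside its data files could define its URLs dictionary with relative paths instead. This is a minimal sketch; the `data/` folder layout and file names below are illustrative, not part of the SQuAD script:
```py
# Hypothetical layout: the files live in a `data/` folder next to the script
_URLS = {
    "train": "data/train-v1.1.json",
    "dev": "data/dev-v1.1.json",
}
```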
2. [`DownloadManager.download_and_extract`] takes this dictionary and downloads the data files. Once the files are downloaded, use [`SplitGenerator`] to organize each split in the dataset. This is a simple class that contains:
- The `name` of each split. You should use the standard split names: `Split.TRAIN`, `Split.TEST`, and `Split.VALIDATION`.
- `gen_kwargs` provides the file paths to the data files to load for each split.
Your `DatasetBuilder._split_generators()` method should look like this now:
```py
def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
urls_to_download = self._URLS
downloaded_files = dl_manager.download_and_extract(urls_to_download)
return [
datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
]
```
## Generate samples
At this point, you have:
- Added the dataset attributes.
- Provided instructions for how to download the data files.
- Organized the splits.
The next step is to actually generate the samples in each split.
1. `DatasetBuilder._generate_examples` takes the file path provided by `gen_kwargs` to read and parse the data files. You need to write a function that loads the data files and extracts the columns.
2. Your function should yield a tuple of an `id_` and an example from the dataset.
```py
def _generate_examples(self, filepath):
"""This function returns the examples in the raw (text) form."""
logger.info("generating examples from = %s", filepath)
with open(filepath) as f:
squad = json.load(f)
for article in squad["data"]:
title = article.get("title", "").strip()
for paragraph in article["paragraphs"]:
context = paragraph["context"].strip()
for qa in paragraph["qas"]:
question = qa["question"].strip()
id_ = qa["id"]
answer_starts = [answer["answer_start"] for answer in qa["answers"]]
answers = [answer["text"].strip() for answer in qa["answers"]]
# Features currently used are "context", "question", and "answers".
# Others are extracted here for the ease of future expansions.
yield id_, {
"title": title,
"context": context,
"question": question,
"id": id_,
"answers": {"answer_start": answer_starts, "text": answers,},
}
```
## (Optional) Generate dataset metadata
Adding dataset metadata is a great way to include information about your dataset. The metadata is stored in the dataset card `README.md` in YAML. It includes information such as the number of examples (used to verify that the dataset was generated correctly) and details about the dataset, like its `features`.
Run the following command to generate your dataset metadata in `README.md` and make sure your new dataset loading script works correctly:
```
datasets-cli test path/to/<your-dataset-loading-script> --save_info --all_configs
```
If your dataset loading script passed the test, you should now have a `README.md` file in your dataset folder containing a `dataset_info` field with some metadata.
## Upload to the Hub
Once your script is ready, [create a dataset card](dataset_card) and [upload it to the Hub](share).
Congratulations, you can now load your dataset from the Hub! 🥳
```py
>>> from datasets import load_dataset
>>> load_dataset("<username>/my_dataset")
```
## Advanced features
### Sharding
If your dataset is made of many big files, 🤗 Datasets automatically runs your script in parallel to make it super fast!
It can help if you have hundreds or thousands of TAR archives, or JSONL files like [oscar](https://huggingface.co/datasets/oscar/blob/main/oscar.py) for example.
To make it work, we consider lists of files in `gen_kwargs` to be shards.
Therefore 🤗 Datasets can automatically spawn several workers to run `_generate_examples` in parallel, and each worker is given a subset of shards to process.
```python
class MyShardedDataset(datasets.GeneratorBasedBuilder):
def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
downloaded_files = dl_manager.download([f"data/shard_{i}.jsonl" for i in range(1024)])
return [
datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": downloaded_files}),
]
def _generate_examples(self, filepaths):
# Each worker can be given a slice of the original `filepaths` list defined in the `gen_kwargs`
# so that this code can run in parallel on several shards at the same time
for filepath in filepaths:
...
```
Users can also pass `num_proc=` to `load_dataset()` to specify the number of processes to use as workers.
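For example (the script path and worker count below are placeholders):
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("path/to/my_sharded_dataset", num_proc=8)
```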
### ArrowBasedBuilder
For some datasets it can be much faster to yield batches of data rather than examples one by one.
You can speed up the dataset generation by yielding Arrow tables directly, instead of examples.
This is especially useful if your data comes from Pandas DataFrames for example, since the conversion from Pandas to Arrow is as simple as:
```python
import pyarrow as pa
pa_table = pa.Table.from_pandas(df)
```
To yield Arrow tables instead of single examples, make your dataset builder inherit from [`ArrowBasedBuilder`] instead of [`GeneratorBasedBuilder`], and use `_generate_tables` instead of `_generate_examples`:
```python
class MySuperFastDataset(datasets.ArrowBasedBuilder):
def _generate_tables(self, filepaths):
idx = 0
for filepath in filepaths:
...
yield idx, pa_table
idx += 1
```
Don't forget to keep your script memory efficient, in case users run it on machines with a low amount of RAM.
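For example, one way to stay memory efficient is to read large files in chunks rather than all at once. Here is a minimal sketch, assuming CSV shards and a chunk size of 10,000 rows (both illustrative choices, not part of any official script):
```python
import pandas as pd
import pyarrow as pa

import datasets


class MyChunkedDataset(datasets.ArrowBasedBuilder):
    def _generate_tables(self, filepaths):
        idx = 0
        for filepath in filepaths:
            # Read a bounded number of rows at a time so the full file
            # never has to be loaded into memory
            for chunk in pd.read_csv(filepath, chunksize=10_000):
                yield idx, pa.Table.from_pandas(chunk)
                idx += 1
```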
|
datasets/docs/source/dataset_script.mdx/0
|
{
"file_path": "datasets/docs/source/dataset_script.mdx",
"repo_id": "datasets",
"token_count": 5380
}
| 68
|
# Evaluate predictions
<Tip warning={true}>
Metrics is deprecated in 🤗 Datasets. To learn more about how to use metrics, take a look at the library 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index)! In addition to metrics, you can find more tools for evaluating models and datasets.
</Tip>
🤗 Datasets provides various common and NLP-specific [metrics](https://huggingface.co/metrics) for you to measure your model's performance. In this section of the tutorials, you will load a metric and use it to evaluate your model's predictions.
You can see what metrics are available with [`list_metrics`]:
```py
>>> from datasets import list_metrics
>>> metrics_list = list_metrics()
>>> len(metrics_list)
28
>>> print(metrics_list)
['accuracy', 'bertscore', 'bleu', 'bleurt', 'cer', 'comet', 'coval', 'cuad', 'f1', 'gleu', 'glue', 'indic_glue', 'matthews_correlation', 'meteor', 'pearsonr', 'precision', 'recall', 'rouge', 'sacrebleu', 'sari', 'seqeval', 'spearmanr', 'squad', 'squad_v2', 'super_glue', 'wer', 'wiki_split', 'xnli']
```
## Load metric
It is very easy to load a metric with 🤗 Datasets. In fact, you will notice that it is very similar to loading a dataset! Load a metric from the Hub with [`load_metric`]:
```py
>>> from datasets import load_metric
>>> metric = load_metric('glue', 'mrpc')
```
This will load the metric associated with the MRPC dataset from the GLUE benchmark.
## Select a configuration
If you are using a benchmark dataset, you need to select a metric that is associated with the configuration you are using. Select a metric configuration by providing the configuration name:
```py
>>> metric = load_metric('glue', 'mrpc')
```
## Metrics object
Before you begin using a [`Metric`] object, you should get to know it a little better. As with a dataset, you can return some basic information about a metric. For example, access the `inputs_description` parameter in [`datasets.MetricInfo`] to get more information about a metric's expected input format and some usage examples:
```py
>>> print(metric.inputs_description)
Compute GLUE evaluation metric associated to each GLUE dataset.
Args:
predictions: list of predictions to score.
Each translation should be tokenized into a list of tokens.
references: list of lists of references for each translation.
Each reference should be tokenized into a list of tokens.
Returns: depending on the GLUE subset, one or several of:
"accuracy": Accuracy
"f1": F1 score
"pearson": Pearson Correlation
"spearmanr": Spearman Correlation
"matthews_correlation": Matthew Correlation
Examples:
>>> glue_metric = datasets.load_metric('glue', 'sst2') # 'sst2' or any of ["mnli", "mnli_mismatched", "mnli_matched", "qnli", "rte", "wnli", "hans"]
>>> references = [0, 1]
>>> predictions = [0, 1]
>>> results = glue_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'accuracy': 1.0}
...
>>> glue_metric = datasets.load_metric('glue', 'mrpc') # 'mrpc' or 'qqp'
>>> references = [0, 1]
>>> predictions = [0, 1]
>>> results = glue_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'accuracy': 1.0, 'f1': 1.0}
...
```
Notice that for the MRPC configuration, the metric expects the inputs to be zeros or ones. For a complete list of attributes you can return with your metric, take a look at [`MetricInfo`].
## Compute metric
Once you have loaded a metric, you are ready to use it to evaluate a model's predictions. Provide the model predictions and references to [`~datasets.Metric.compute`]:
```py
>>> model_predictions = model(model_inputs)
>>> final_score = metric.compute(predictions=model_predictions, references=gold_references)
```
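If your model produces predictions batch by batch, for example inside an evaluation loop, you can accumulate them with [`~datasets.Metric.add_batch`] and compute the final score once at the end. This is a minimal sketch, where `dataloader`, `model`, and the `"labels"` key are placeholders for your own setup:
```py
>>> for batch in dataloader:
...     model_predictions = model(batch)
...     metric.add_batch(predictions=model_predictions, references=batch["labels"])
>>> final_score = metric.compute()
```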
|
datasets/docs/source/metrics.mdx/0
|
{
"file_path": "datasets/docs/source/metrics.mdx",
"repo_id": "datasets",
"token_count": 1193
}
| 69
|
# Load tabular data
A tabular dataset is a generic dataset used to describe any data stored in rows and columns, where each row represents an example and each column represents a feature (which can be continuous or categorical). These datasets are commonly stored in CSV files, Pandas DataFrames, and database tables. This guide will show you how to load and create a tabular dataset from:
- CSV files
- Pandas DataFrames
- Databases
## CSV files
🤗 Datasets can read CSV files by specifying the generic `csv` dataset builder name in the [`~datasets.load_dataset`] method. To load more than one CSV file, pass them as a list to the `data_files` parameter:
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("csv", data_files="my_file.csv")
# load multiple CSV files
>>> dataset = load_dataset("csv", data_files=["my_file_1.csv", "my_file_2.csv", "my_file_3.csv"])
```
You can also map specific CSV files to the train and test splits:
```py
>>> dataset = load_dataset("csv", data_files={"train": ["my_train_file_1.csv", "my_train_file_2.csv"], "test": "my_test_file.csv"})
```
To load remote CSV files, pass the URLs instead:
```py
>>> base_url = "https://huggingface.co/datasets/lhoestq/demo1/resolve/main/data/"
>>> dataset = load_dataset('csv', data_files={"train": base_url + "train.csv", "test": base_url + "test.csv"})
```
To load zipped CSV files:
```py
>>> url = "https://domain.org/train_data.zip"
>>> data_files = {"train": url}
>>> dataset = load_dataset("csv", data_files=data_files)
```
## Pandas DataFrames
🤗 Datasets also supports loading datasets from [Pandas DataFrames](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) with the [`~datasets.Dataset.from_pandas`] method:
```py
>>> from datasets import Dataset
>>> import pandas as pd
# create a Pandas DataFrame
>>> df = pd.read_csv("https://huggingface.co/datasets/imodels/credit-card/raw/main/train.csv")
# load Dataset from Pandas DataFrame
>>> dataset = Dataset.from_pandas(df)
```
Use the `split` parameter to specify the name of the dataset split:
```py
>>> train_ds = Dataset.from_pandas(train_df, split="train")
>>> test_ds = Dataset.from_pandas(test_df, split="test")
```
If the dataset doesn't look as expected, you should explicitly [specify your dataset features](loading#specify-features). A [pandas.Series](https://pandas.pydata.org/docs/reference/api/pandas.Series.html) may not always carry enough information for Arrow to automatically infer a data type. For example, if a DataFrame is of length `0` or if the Series only contains `None/NaN` objects, the type is set to `null`.
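For example, here is a minimal sketch of passing explicit features to [`~datasets.Dataset.from_pandas`], assuming a DataFrame `df` with a string `text` column and an integer `label` column:
```py
>>> from datasets import Dataset, Features, Value
>>> features = Features({"text": Value("string"), "label": Value("int64")})
>>> dataset = Dataset.from_pandas(df, features=features)
```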
## Databases
Datasets stored in databases are typically accessed with SQL queries. With 🤗 Datasets, you can connect to a database, query for the data you need, and create a dataset out of it. Then you can use all the processing features of 🤗 Datasets to prepare your dataset for training.
### SQLite
SQLite is a small, lightweight database that is fast and easy to set up. You can use an existing database if you'd like, or follow along and start from scratch.
Start by creating a quick SQLite database with this [Covid-19 data](https://github.com/nytimes/covid-19-data/blob/master/us-states.csv) from the New York Times:
```py
>>> import sqlite3
>>> import pandas as pd
>>> conn = sqlite3.connect("us_covid_data.db")
>>> df = pd.read_csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv")
>>> df.to_sql("states", conn, if_exists="replace")
```
This creates a `states` table in the `us_covid_data.db` database which you can now load into a dataset.
To connect to the database, you'll need the [URI string](https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls) that identifies your database. Connecting to a database with a URI caches the returned dataset. The URI string differs for each database dialect, so be sure to check the [Database URLs](https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls) for whichever database you're using.
For SQLite, it is:
```py
>>> uri = "sqlite:///us_covid_data.db"
```
Load the table by passing the table name and URI to [`~datasets.Dataset.from_sql`]:
```py
>>> from datasets import Dataset
>>> ds = Dataset.from_sql("states", uri)
>>> ds
Dataset({
features: ['index', 'date', 'state', 'fips', 'cases', 'deaths'],
num_rows: 54382
})
```
Then you can use all of 🤗 Datasets' processing features, such as [`~datasets.Dataset.filter`]:
```py
>>> ds.filter(lambda x: x["state"] == "California")
```
You can also load a dataset from a SQL query instead of an entire table, which is useful for querying and joining multiple tables.
Load the dataset by passing your query and URI to [`~datasets.Dataset.from_sql`]:
```py
>>> from datasets import Dataset
>>> ds = Dataset.from_sql('SELECT * FROM states WHERE state="California";', uri)
>>> ds
Dataset({
features: ['index', 'date', 'state', 'fips', 'cases', 'deaths'],
num_rows: 1019
})
```
Then you can use all of 🤗 Datasets' processing features, such as [`~datasets.Dataset.filter`]:
```py
>>> ds.filter(lambda x: x["cases"] > 10000)
```
### PostgreSQL
You can also connect and load a dataset from a PostgreSQL database, however we won't demonstrate it directly in the documentation because the example is only meant to be run in a notebook. Instead, take a look at how to install and set up a PostgreSQL server in this [notebook](https://colab.research.google.com/github/nateraw/huggingface-hub-examples/blob/main/sql_with_huggingface_datasets.ipynb#scrollTo=d83yGQMPHGFi)!
After you've set up your PostgreSQL database, you can use the [`~datasets.Dataset.from_sql`] method to load a dataset from a table or query.
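Once your server is running, loading works the same way as with SQLite; only the URI changes. This is a minimal sketch (the host, credentials, and database name below are placeholders, and reading from a database URI requires `sqlalchemy` plus a PostgreSQL driver to be installed):
```py
>>> from datasets import Dataset
>>> uri = "postgresql://username:password@localhost:5432/us_covid_data"
>>> ds = Dataset.from_sql("states", uri)
```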
|
datasets/docs/source/tabular_load.mdx/0
|
{
"file_path": "datasets/docs/source/tabular_load.mdx",
"repo_id": "datasets",
"token_count": 1868
}
| 70
|
# Copyright 2020 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BLEURT metric."""
import os
from bleurt import score # From: git+https://github.com/google-research/bleurt.git
import datasets
logger = datasets.logging.get_logger(__name__)
_CITATION = """\
@inproceedings{bleurt,
title={BLEURT: Learning Robust Metrics for Text Generation},
author={Thibault Sellam and Dipanjan Das and Ankur P. Parikh},
booktitle={ACL},
year={2020},
url={https://arxiv.org/abs/2004.04696}
}
"""
_DESCRIPTION = """\
BLEURT is a learnt evaluation metric for Natural Language Generation. It is built using multiple phases of transfer learning starting from a pretrained BERT model (Devlin et al. 2018)
and then employing another pre-training phase using synthetic data. Finally it is trained on WMT human annotations. You may run BLEURT out-of-the-box or fine-tune
it for your specific application (the latter is expected to perform better).
See the project's README at https://github.com/google-research/bleurt#readme for more information.
"""
_KWARGS_DESCRIPTION = """
BLEURT score.
Args:
`predictions` (list of str): prediction/candidate sentences
`references` (list of str): reference sentences
    `checkpoint`: BLEURT checkpoint. Will default to `bleurt-base-128` if None.
Returns:
'scores': List of scores.
Examples:
>>> predictions = ["hello there", "general kenobi"]
>>> references = ["hello there", "general kenobi"]
>>> bleurt = datasets.load_metric("bleurt")
>>> results = bleurt.compute(predictions=predictions, references=references)
>>> print([round(v, 2) for v in results["scores"]])
[1.03, 1.04]
"""
CHECKPOINT_URLS = {
"bleurt-tiny-128": "https://storage.googleapis.com/bleurt-oss/bleurt-tiny-128.zip",
"bleurt-tiny-512": "https://storage.googleapis.com/bleurt-oss/bleurt-tiny-512.zip",
"bleurt-base-128": "https://storage.googleapis.com/bleurt-oss/bleurt-base-128.zip",
"bleurt-base-512": "https://storage.googleapis.com/bleurt-oss/bleurt-base-512.zip",
"bleurt-large-128": "https://storage.googleapis.com/bleurt-oss/bleurt-large-128.zip",
"bleurt-large-512": "https://storage.googleapis.com/bleurt-oss/bleurt-large-512.zip",
"BLEURT-20-D3": "https://storage.googleapis.com/bleurt-oss-21/BLEURT-20-D3.zip",
"BLEURT-20-D6": "https://storage.googleapis.com/bleurt-oss-21/BLEURT-20-D6.zip",
"BLEURT-20-D12": "https://storage.googleapis.com/bleurt-oss-21/BLEURT-20-D12.zip",
"BLEURT-20": "https://storage.googleapis.com/bleurt-oss-21/BLEURT-20.zip",
}
@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class BLEURT(datasets.Metric):
def _info(self):
return datasets.MetricInfo(
description=_DESCRIPTION,
citation=_CITATION,
homepage="https://github.com/google-research/bleurt",
inputs_description=_KWARGS_DESCRIPTION,
features=datasets.Features(
{
"predictions": datasets.Value("string", id="sequence"),
"references": datasets.Value("string", id="sequence"),
}
),
codebase_urls=["https://github.com/google-research/bleurt"],
reference_urls=["https://github.com/google-research/bleurt", "https://arxiv.org/abs/2004.04696"],
)
def _download_and_prepare(self, dl_manager):
# check that config name specifies a valid BLEURT model
if self.config_name == "default":
logger.warning(
"Using default BLEURT-Base checkpoint for sequence maximum length 128. "
"You can use a bigger model for better results with e.g.: datasets.load_metric('bleurt', 'bleurt-large-512')."
)
self.config_name = "bleurt-base-128"
if self.config_name.lower() in CHECKPOINT_URLS:
checkpoint_name = self.config_name.lower()
elif self.config_name.upper() in CHECKPOINT_URLS:
checkpoint_name = self.config_name.upper()
else:
raise KeyError(
f"{self.config_name} model not found. You should supply the name of a model checkpoint for bleurt in {CHECKPOINT_URLS.keys()}"
)
# download the model checkpoint specified by self.config_name and set up the scorer
model_path = dl_manager.download_and_extract(CHECKPOINT_URLS[checkpoint_name])
self.scorer = score.BleurtScorer(os.path.join(model_path, checkpoint_name))
def _compute(self, predictions, references):
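        # Score each candidate against its reference using the BLEURT checkpoint set up in _download_and_prepare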
scores = self.scorer.score(references=references, candidates=predictions)
return {"scores": scores}
|
datasets/metrics/bleurt/bleurt.py/0
|
{
"file_path": "datasets/metrics/bleurt/bleurt.py",
"repo_id": "datasets",
"token_count": 1981
}
| 71
|
# Copyright 2020 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""CUAD metric."""
import datasets
from .evaluate import evaluate
_CITATION = """\
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
"""
_DESCRIPTION = """
This metric wraps the official scoring script for version 1 of the Contract
Understanding Atticus Dataset (CUAD).
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510
commercial legal contracts that have been manually labeled to identify 41 categories of important
clauses that lawyers look for when reviewing contracts in connection with corporate transactions.
"""
_KWARGS_DESCRIPTION = """
Computes CUAD scores (EM, F1, AUPR, Precision@80%Recall, and Precision@90%Recall).
Args:
predictions: List of question-answers dictionaries with the following key-values:
- 'id': id of the question-answer pair as given in the references (see below)
- 'prediction_text': list of possible texts for the answer, as a list of strings
depending on a threshold on the confidence probability of each prediction.
references: List of question-answers dictionaries with the following key-values:
- 'id': id of the question-answer pair (see above),
- 'answers': a Dict in the CUAD dataset format
{
'text': list of possible texts for the answer, as a list of strings
'answer_start': list of start positions for the answer, as a list of ints
}
Note that answer_start values are not taken into account to compute the metric.
Returns:
'exact_match': Exact match (the normalized answer exactly match the gold answer)
'f1': The F-score of predicted tokens versus the gold answer
'aupr': Area Under the Precision-Recall curve
'prec_at_80_recall': Precision at 80% recall
'prec_at_90_recall': Precision at 90% recall
Examples:
>>> predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
>>> references = [{'answers': {'answer_start': [143, 49], 'text': ['The seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co., Ltd.']}, 'id': 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply Agreement__Parties'}]
>>> cuad_metric = datasets.load_metric("cuad")
>>> results = cuad_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'exact_match': 100.0, 'f1': 100.0, 'aupr': 0.0, 'prec_at_80_recall': 1.0, 'prec_at_90_recall': 1.0}
"""
@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class CUAD(datasets.Metric):
def _info(self):
return datasets.MetricInfo(
description=_DESCRIPTION,
citation=_CITATION,
inputs_description=_KWARGS_DESCRIPTION,
features=datasets.Features(
{
"predictions": {
"id": datasets.Value("string"),
"prediction_text": datasets.features.Sequence(datasets.Value("string")),
},
"references": {
"id": datasets.Value("string"),
"answers": datasets.features.Sequence(
{
"text": datasets.Value("string"),
"answer_start": datasets.Value("int32"),
}
),
},
}
),
codebase_urls=["https://www.atticusprojectai.org/cuad"],
reference_urls=["https://www.atticusprojectai.org/cuad"],
)
def _compute(self, predictions, references):
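        # Build an {id: candidate answers} map and rebuild a CUAD/SQuAD-style dataset dict for the official evaluate() script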
pred_dict = {prediction["id"]: prediction["prediction_text"] for prediction in predictions}
dataset = [
{
"paragraphs": [
{
"qas": [
{
"answers": [{"text": answer_text} for answer_text in ref["answers"]["text"]],
"id": ref["id"],
}
for ref in references
]
}
]
}
]
score = evaluate(dataset=dataset, predictions=pred_dict)
return score
|
datasets/metrics/cuad/cuad.py/0
|
{
"file_path": "datasets/metrics/cuad/cuad.py",
"repo_id": "datasets",
"token_count": 2242
}
| 72
|
# Metric Card for Mahalanobis Distance
## Metric Description
Mahalanobis distance is the distance between a point and a distribution (as opposed to the distance between two points), making it the multivariate equivalent of the Euclidean distance.
It is often used in multivariate anomaly detection, classification on highly imbalanced datasets and one-class classification.
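Formally, for a point $x$ and a reference distribution with mean vector $\mu$ and covariance matrix $S$, the standard definition is (notation added here for reference, not taken from the original card):

$$D_M(x) = \sqrt{(x - \mu)^{T} S^{-1} (x - \mu)}$$

Depending on the implementation, the squared quantity $D_M(x)^2$ may be reported instead.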
## How to Use
At minimum, this metric requires two `list`s of datapoints:
```python
>>> import datasets
>>> mahalanobis_metric = datasets.load_metric("mahalanobis")
>>> results = mahalanobis_metric.compute(reference_distribution=[[0, 1], [1, 0]], X=[[0, 1]])
```
### Inputs
- `X` (`list`): data points to be compared with the `reference_distribution`.
- `reference_distribution` (`list`): data points from the reference distribution that we want to compare to.
### Output Values
`mahalanobis` (`array`): the Mahalanobis distance for each data point in `X`.
```python
>>> print(results)
{'mahalanobis': array([0.5])}
```
#### Values from Popular Papers
*N/A*
### Example
```python
>>> import datasets
>>> mahalanobis_metric = datasets.load_metric("mahalanobis")
>>> results = mahalanobis_metric.compute(reference_distribution=[[0, 1], [1, 0]], X=[[0, 1]])
>>> print(results)
{'mahalanobis': array([0.5])}
```
## Limitations and Bias
The Mahalanobis distance is only able to capture linear relationships between the variables, which means it cannot capture all types of outliers. Mahalanobis distance also fails to faithfully represent data that is highly skewed or multimodal.
## Citation
```bibtex
@inproceedings{mahalanobis1936generalized,
title={On the generalized distance in statistics},
author={Mahalanobis, Prasanta Chandra},
year={1936},
organization={National Institute of Science of India}
}
```
```bibtex
@article{de2000mahalanobis,
title={The Mahalanobis distance},
author={De Maesschalck, Roy and Jouan-Rimbaud, Delphine and Massart, D{\'e}sir{\'e} L},
journal={Chemometrics and intelligent laboratory systems},
volume={50},
number={1},
pages={1--18},
year={2000},
publisher={Elsevier}
}
```
## Further References
- [Wikipedia -- Mahalanobis Distance](https://en.wikipedia.org/wiki/Mahalanobis_distance)
- [Machine Learning Plus -- Mahalanobis Distance](https://www.machinelearningplus.com/statistics/mahalanobis-distance/)
|
datasets/metrics/mahalanobis/README.md/0
|
{
"file_path": "datasets/metrics/mahalanobis/README.md",
"repo_id": "datasets",
"token_count": 738
}
| 73
|
# Metric Card for Precision
## Metric Description
Precision is the fraction of correctly labeled positive examples out of all of the examples that were labeled as positive. It is computed via the equation:
Precision = TP / (TP + FP)
where TP is the number of true positives (i.e. the examples correctly labeled as positive) and FP is the number of false positives (i.e. the examples incorrectly labeled as positive).
## How to Use
At minimum, precision takes as input a list of predicted labels, `predictions`, and a list of ground truth labels, `references`.
```python
>>> import datasets
>>> precision_metric = datasets.load_metric("precision")
>>> results = precision_metric.compute(references=[0, 1], predictions=[0, 1])
>>> print(results)
{'precision': 1.0}
```
### Inputs
- **predictions** (`list` of `int`): Predicted class labels.
- **references** (`list` of `int`): Actual class labels.
- **labels** (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.
- **pos_label** (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.
- **average** (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
- 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
- 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
- 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.
- 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
- **sample_weight** (`list` of `float`): Sample weights. Defaults to None.
- **zero_division** (`string` or `int`): Sets the value to return when there is a zero division. Defaults to `'warn'`.
- 0: Returns 0 when there is a zero division.
- 1: Returns 1 when there is a zero division.
- 'warn': Raises warnings and then returns 0 when there is a zero division.
### Output Values
- **precision** (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better.
Output Example(s):
```python
{'precision': 0.2222222222222222}
```
```python
{'precision': array([0.66666667, 0.0, 0.0])}
```
#### Values from Popular Papers
### Examples
Example 1: A simple binary example
```python
>>> precision_metric = datasets.load_metric("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
>>> print(results)
{'precision': 0.5}
```
Example 2: The same simple binary example as in Example 1, but with `pos_label` set to `0`.
```python
>>> precision_metric = datasets.load_metric("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
>>> print(round(results['precision'], 2))
0.67
```
Example 3: The same simple binary example as in Example 1, but with `sample_weight` included.
```python
>>> precision_metric = datasets.load_metric("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
>>> print(results)
{'precision': 0.23529411764705882}
```
Example 4: A multiclass example, with different values for the `average` input.
```python
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')
>>> print(results)
{'precision': 0.2222222222222222}
>>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')
>>> print(results)
{'precision': 0.3333333333333333}
>>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')
>>> print(results)
{'precision': 0.2222222222222222}
>>> results = precision_metric.compute(predictions=predictions, references=references, average=None)
>>> print([round(res, 2) for res in results['precision']])
[0.67, 0.0, 0.0]
```
## Limitations and Bias
[Precision](https://huggingface.co/metrics/precision) and [recall](https://huggingface.co/metrics/recall) are complementary and can be used to measure different aspects of model performance -- using both of them (or an averaged measure like the [F1 score](https://huggingface.co/metrics/F1)) can better represent different aspects of performance. See [Wikipedia](https://en.wikipedia.org/wiki/Precision_and_recall) for more information.
## Citation(s)
```bibtex
@article{scikit-learn,
title={Scikit-learn: Machine Learning in {P}ython},
author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
journal={Journal of Machine Learning Research},
volume={12},
pages={2825--2830},
year={2011}
}
```
## Further References
- [Wikipedia -- Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)
|
datasets/metrics/precision/README.md/0
|
{
"file_path": "datasets/metrics/precision/README.md",
"repo_id": "datasets",
"token_count": 1878
}
| 74
|
# Metric Card for SQuAD
## Metric description
This metric wraps the official scoring script for version 1 of the [Stanford Question Answering Dataset (SQuAD)](https://huggingface.co/datasets/squad).
SQuAD is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
## How to use
The metric takes two files or two lists of question-answer dictionaries as inputs: one with the predictions of the model and the other with the references to be compared to:
```python
from datasets import load_metric
squad_metric = load_metric("squad")
results = squad_metric.compute(predictions=predictions, references=references)
```
## Output values
This metric outputs a dictionary with two values: the average exact match score and the average [F1 score](https://huggingface.co/metrics/f1).
```
{'exact_match': 100.0, 'f1': 100.0}
```
The range of `exact_match` is 0-100, where 0.0 means no answers were matched and 100.0 means all answers were matched.
The range of `f1` is also 0-100 -- its lowest possible value is 0.0, if either the precision or the recall is 0, and its highest possible value is 100.0, which means perfect precision and recall.
### Values from popular papers
The [original SQuAD paper](https://nlp.stanford.edu/pubs/rajpurkar2016squad.pdf) reported an F1 score of 51.0% and an Exact Match score of 40.0%. They also report that human performance on the dataset represents an F1 score of 90.5% and an Exact Match score of 80.3%.
For more recent model performance, see the [dataset leaderboard](https://paperswithcode.com/dataset/squad).
## Examples
Maximal values for both exact match and F1 (perfect match):
```python
from datasets import load_metric
squad_metric = load_metric("squad")
predictions = [{'prediction_text': '1976', 'id': '56e10a3be3433e1400422b22'}]
references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': '56e10a3be3433e1400422b22'}]
results = squad_metric.compute(predictions=predictions, references=references)
results
{'exact_match': 100.0, 'f1': 100.0}
```
Minimal values for both exact match and F1 (no match):
```python
from datasets import load_metric
squad_metric = load_metric("squad")
predictions = [{'prediction_text': '1999', 'id': '56e10a3be3433e1400422b22'}]
references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': '56e10a3be3433e1400422b22'}]
results = squad_metric.compute(predictions=predictions, references=references)
results
{'exact_match': 0.0, 'f1': 0.0}
```
Partial match (2 out of 3 answers correct):
```python
from datasets import load_metric
squad_metric = load_metric("squad")
predictions = [{'prediction_text': '1976', 'id': '56e10a3be3433e1400422b22'}, {'prediction_text': 'Beyonce', 'id': '56d2051ce7d4791d0090260b'}, {'prediction_text': 'climate change', 'id': '5733b5344776f419006610e1'}]
references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': '56e10a3be3433e1400422b22'}, {'answers': {'answer_start': [233], 'text': ['Beyoncé and Bruno Mars']}, 'id': '56d2051ce7d4791d0090260b'}, {'answers': {'answer_start': [891], 'text': ['climate change']}, 'id': '5733b5344776f419006610e1'}]
results = squad_metric.compute(predictions=predictions, references=references)
results
{'exact_match': 66.66666666666667, 'f1': 66.66666666666667}
```
## Limitations and bias
This metric works only with datasets that have the same format as [SQuAD v.1 dataset](https://huggingface.co/datasets/squad).
The SQuAD dataset does contain a certain amount of noise, such as duplicate questions as well as missing answers, but these represent a minority of the 100,000 question-answer pairs. Also, neither the exact match nor the F1 score reflects whether models do better on certain types of questions (e.g. who questions) or those that cover a certain gender or geographical area -- carrying out more in-depth error analysis can complement these numbers.
## Citation
```bibtex
@inproceedings{Rajpurkar2016SQuAD10,
title={SQuAD: 100, 000+ Questions for Machine Comprehension of Text},
author={Pranav Rajpurkar and Jian Zhang and Konstantin Lopyrev and Percy Liang},
booktitle={EMNLP},
year={2016}
}
```
## Further References
- [The Stanford Question Answering Dataset: Background, Challenges, Progress (blog post)](https://rajpurkar.github.io/mlx/qa-and-squad/)
- [Hugging Face Course -- Question Answering](https://huggingface.co/course/chapter7/7)
|
datasets/metrics/squad/README.md/0
|
{
"file_path": "datasets/metrics/squad/README.md",
"repo_id": "datasets",
"token_count": 1494
}
| 75
|
# Copyright 2020 The HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""XNLI benchmark metric."""
import datasets
_CITATION = """\
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
"""
_DESCRIPTION = """\
XNLI is a subset of a few thousand examples from MNLI which has been translated
into 14 different languages (some low-ish resource). As with MNLI, the goal is
to predict textual entailment (does sentence A imply/contradict/neither sentence
B) and is a classification task (given two sentences, predict one of three
labels).
"""
_KWARGS_DESCRIPTION = """
Computes XNLI score which is just simple accuracy.
Args:
predictions: Predicted labels.
references: Ground truth labels.
Returns:
'accuracy': accuracy
Examples:
>>> predictions = [0, 1]
>>> references = [0, 1]
>>> xnli_metric = datasets.load_metric("xnli")
>>> results = xnli_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'accuracy': 1.0}
"""
def simple_accuracy(preds, labels):
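    # `preds` and `labels` arrive as NumPy arrays (the metric sets format="numpy" below), so the comparison is vectorized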
return (preds == labels).mean()
@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class Xnli(datasets.Metric):
def _info(self):
return datasets.MetricInfo(
description=_DESCRIPTION,
citation=_CITATION,
inputs_description=_KWARGS_DESCRIPTION,
features=datasets.Features(
{
"predictions": datasets.Value("int64" if self.config_name != "sts-b" else "float32"),
"references": datasets.Value("int64" if self.config_name != "sts-b" else "float32"),
}
),
codebase_urls=[],
reference_urls=[],
format="numpy",
)
def _compute(self, predictions, references):
return {"accuracy": simple_accuracy(predictions, references)}
|
datasets/metrics/xnli/xnli.py/0
|
{
"file_path": "datasets/metrics/xnli/xnli.py",
"repo_id": "datasets",
"token_count": 1107
}
| 76
|
import fnmatch
import json
import os
import shutil
import tempfile
import xml.etree.ElementTree as ET
from argparse import ArgumentParser
from pathlib import Path
from typing import Optional
from datasets import config
from datasets.commands import BaseDatasetsCLICommand
from datasets.download.download_config import DownloadConfig
from datasets.download.download_manager import DownloadManager
from datasets.download.mock_download_manager import MockDownloadManager
from datasets.load import dataset_module_factory, import_main_class
from datasets.utils.deprecation_utils import deprecated
from datasets.utils.logging import get_logger, set_verbosity_warning
from datasets.utils.py_utils import map_nested
logger = get_logger(__name__)
DEFAULT_ENCODING = "utf-8"
def dummy_data_command_factory(args):
return DummyDataCommand(
args.path_to_dataset,
args.auto_generate,
args.n_lines,
args.json_field,
args.xml_tag,
args.match_text_files,
args.keep_uncompressed,
args.cache_dir,
args.encoding,
)
class DummyDataGeneratorDownloadManager(DownloadManager):
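    """Download manager that performs real downloads while also recording, via the mock download manager, the paths where the corresponding dummy files are expected to live."""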
def __init__(self, mock_download_manager, *args, **kwargs):
super().__init__(*args, **kwargs)
self.mock_download_manager = mock_download_manager
self.downloaded_dummy_paths = []
self.expected_dummy_paths = []
def download(self, url_or_urls):
output = super().download(url_or_urls)
dummy_output = self.mock_download_manager.download(url_or_urls)
map_nested(self.downloaded_dummy_paths.append, output, map_tuple=True)
map_nested(self.expected_dummy_paths.append, dummy_output, map_tuple=True)
return output
def download_and_extract(self, url_or_urls):
output = super().extract(super().download(url_or_urls))
dummy_output = self.mock_download_manager.download(url_or_urls)
map_nested(self.downloaded_dummy_paths.append, output, map_tuple=True)
map_nested(self.expected_dummy_paths.append, dummy_output, map_tuple=True)
return output
def auto_generate_dummy_data_folder(
self,
n_lines: int = 5,
json_field: Optional[str] = None,
xml_tag: Optional[str] = None,
match_text_files: Optional[str] = None,
encoding: Optional[str] = None,
) -> bool:
os.makedirs(
os.path.join(
self.mock_download_manager.datasets_scripts_dir,
self.mock_download_manager.dataset_name,
self.mock_download_manager.dummy_data_folder,
"dummy_data",
),
exist_ok=True,
)
total = 0
self.mock_download_manager.load_existing_dummy_data = False
for src_path, relative_dst_path in zip(self.downloaded_dummy_paths, self.expected_dummy_paths):
dst_path = os.path.join(
self.mock_download_manager.datasets_scripts_dir,
self.mock_download_manager.dataset_name,
self.mock_download_manager.dummy_data_folder,
relative_dst_path,
)
total += self._create_dummy_data(
src_path,
dst_path,
n_lines=n_lines,
json_field=json_field,
xml_tag=xml_tag,
match_text_files=match_text_files,
encoding=encoding,
)
if total == 0:
logger.error(
"Dummy data generation failed: no dummy files were created. "
"Make sure the data files format is supported by the auto-generation."
)
return total > 0
def _create_dummy_data(
self,
src_path: str,
dst_path: str,
n_lines: int,
json_field: Optional[str] = None,
xml_tag: Optional[str] = None,
match_text_files: Optional[str] = None,
encoding: Optional[str] = None,
) -> int:
encoding = encoding or DEFAULT_ENCODING
if os.path.isfile(src_path):
logger.debug(f"Trying to generate dummy data file {dst_path}")
dst_path_extensions = Path(dst_path).suffixes
line_by_line_extensions = [".txt", ".csv", ".jsonl", ".tsv"]
is_line_by_line_text_file = any(extension in dst_path_extensions for extension in line_by_line_extensions)
if match_text_files is not None:
file_name = os.path.basename(dst_path)
for pattern in match_text_files.split(","):
is_line_by_line_text_file |= fnmatch.fnmatch(file_name, pattern)
# Line by line text file (txt, csv etc.)
if is_line_by_line_text_file:
Path(dst_path).parent.mkdir(exist_ok=True, parents=True)
with open(src_path, encoding=encoding) as src_file:
with open(dst_path, "w", encoding=encoding) as dst_file:
first_lines = []
for i, line in enumerate(src_file):
if i >= n_lines:
break
first_lines.append(line)
dst_file.write("".join(first_lines).strip())
return 1
# json file
elif ".json" in dst_path_extensions:
with open(src_path, encoding=encoding) as src_file:
json_data = json.load(src_file)
if json_field is not None:
json_data = json_data[json_field]
if isinstance(json_data, dict):
if not all(isinstance(v, list) for v in json_data.values()):
raise ValueError(
f"Couldn't parse columns {list(json_data.keys())}. "
"Maybe specify which json field must be used "
"to read the data with --json_field <my_field>."
)
first_json_data = {k: v[:n_lines] for k, v in json_data.items()}
else:
first_json_data = json_data[:n_lines]
if json_field is not None:
first_json_data = {json_field: first_json_data}
Path(dst_path).parent.mkdir(exist_ok=True, parents=True)
with open(dst_path, "w", encoding=encoding) as dst_file:
json.dump(first_json_data, dst_file)
return 1
# xml file
elif any(extension in dst_path_extensions for extension in [".xml", ".txm"]):
if xml_tag is None:
logger.warning("Found xml file but 'xml_tag' is set to None. Please provide --xml_tag")
else:
self._create_xml_dummy_data(src_path, dst_path, xml_tag, n_lines=n_lines, encoding=encoding)
return 1
logger.warning(
f"Couldn't generate dummy file '{dst_path}'. " "Ignore that if this file is not useful for dummy data."
)
return 0
# directory, iterate through all files
elif os.path.isdir(src_path):
total = 0
for path, _, files in os.walk(src_path):
for name in files:
if not name.startswith("."): # ignore files like .DS_Store etc.
src_file_path = os.path.join(path, name)
dst_file_path = os.path.join(dst_path, Path(src_file_path).relative_to(src_path))
total += self._create_dummy_data(
src_file_path,
dst_file_path,
n_lines=n_lines,
json_field=json_field,
xml_tag=xml_tag,
match_text_files=match_text_files,
encoding=encoding,
)
return total
@staticmethod
def _create_xml_dummy_data(src_path, dst_path, xml_tag, n_lines=5, encoding=DEFAULT_ENCODING):
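        """Write a truncated copy of the XML file at `src_path`, keeping only the first `n_lines` elements matching `xml_tag`."""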
Path(dst_path).parent.mkdir(exist_ok=True, parents=True)
with open(src_path, encoding=encoding) as src_file:
n_line = 0
parents = []
for event, elem in ET.iterparse(src_file, events=("start", "end")):
if event == "start":
parents.append(elem)
else:
_ = parents.pop()
if elem.tag == xml_tag:
if n_line < n_lines:
n_line += 1
else:
if parents:
parents[-1].remove(elem)
ET.ElementTree(element=elem).write(dst_path, encoding=encoding)
def compress_autogenerated_dummy_data(self, path_to_dataset):
root_dir = os.path.join(path_to_dataset, self.mock_download_manager.dummy_data_folder)
base_name = os.path.join(root_dir, "dummy_data")
base_dir = "dummy_data"
logger.info(f"Compressing dummy data folder to '{base_name}.zip'")
shutil.make_archive(base_name, "zip", root_dir, base_dir)
shutil.rmtree(base_name)
@deprecated(
"The `datasets` repository does not host the dataset scripts anymore. Therefore, dummy data is no longer needed to test their loading with CI."
)
class DummyDataCommand(BaseDatasetsCLICommand):
@staticmethod
def register_subcommand(parser: ArgumentParser):
test_parser = parser.add_parser("dummy_data", help="Generate dummy data.")
test_parser.add_argument("--auto_generate", action="store_true", help="Automatically generate dummy data")
test_parser.add_argument(
"--n_lines", type=int, default=5, help="Number of lines or samples to keep when auto-generating dummy data"
)
test_parser.add_argument(
"--json_field",
type=str,
default=None,
help="Optional, json field to read the data from when auto-generating dummy data. In the json data files, this field must point to a list of samples as json objects (ex: the 'data' field for squad-like files)",
)
test_parser.add_argument(
"--xml_tag",
type=str,
default=None,
help="Optional, xml tag name of the samples inside the xml files when auto-generating dummy data.",
)
test_parser.add_argument(
"--match_text_files",
type=str,
default=None,
help="Optional, a comma separated list of file patterns that looks for line-by-line text files other than *.txt or *.csv. Example: --match_text_files *.label",
)
test_parser.add_argument(
"--keep_uncompressed",
action="store_true",
help="Whether to leave the dummy data folders uncompressed when auto-generating dummy data. Useful for debugging for to do manual adjustements before compressing.",
)
test_parser.add_argument(
"--cache_dir",
type=str,
default=None,
help="Cache directory to download and cache files when auto-generating dummy data",
)
test_parser.add_argument(
"--encoding",
type=str,
default=None,
help=f"Encoding to use when auto-generating dummy data. Defaults to {DEFAULT_ENCODING}",
)
test_parser.add_argument("path_to_dataset", type=str, help="Path to the dataset (example: ./datasets/squad)")
test_parser.set_defaults(func=dummy_data_command_factory)
def __init__(
self,
path_to_dataset: str,
auto_generate: bool,
n_lines: int,
json_field: Optional[str],
xml_tag: Optional[str],
match_text_files: Optional[str],
keep_uncompressed: bool,
cache_dir: Optional[str],
encoding: Optional[str],
):
self._path_to_dataset = path_to_dataset
if os.path.isdir(path_to_dataset):
self._dataset_name = path_to_dataset.replace(os.sep, "/").split("/")[-1]
else:
self._dataset_name = path_to_dataset.replace(os.sep, "/").split("/")[-2]
cache_dir = os.path.expanduser(cache_dir or config.HF_DATASETS_CACHE)
self._auto_generate = auto_generate
self._n_lines = n_lines
self._json_field = json_field
self._xml_tag = xml_tag
self._match_text_files = match_text_files
self._keep_uncompressed = keep_uncompressed
self._cache_dir = cache_dir
self._encoding = encoding
def run(self):
set_verbosity_warning()
dataset_module = dataset_module_factory(self._path_to_dataset)
builder_cls = import_main_class(dataset_module.module_path)
# use `None` as config if no configs
builder_configs = builder_cls.BUILDER_CONFIGS or [None]
auto_generate_results = []
with tempfile.TemporaryDirectory() as tmp_dir:
for builder_config in builder_configs:
config_name = builder_config.name if builder_config else None
dataset_builder = builder_cls(config_name=config_name, hash=dataset_module.hash, cache_dir=tmp_dir)
version = builder_config.version if builder_config else dataset_builder.config.version
mock_dl_manager = MockDownloadManager(
dataset_name=self._dataset_name,
config=builder_config,
version=version,
use_local_dummy_data=True,
load_existing_dummy_data=False,
)
if self._auto_generate:
auto_generate_results.append(
self._autogenerate_dummy_data(
dataset_builder=dataset_builder,
mock_dl_manager=mock_dl_manager,
keep_uncompressed=self._keep_uncompressed,
)
)
else:
self._print_dummy_data_instructions(
dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager
)
if self._auto_generate and not self._keep_uncompressed:
if all(auto_generate_results):
print(f"Automatic dummy data generation succeeded for all configs of '{self._path_to_dataset}'")
else:
print(f"Automatic dummy data generation failed for some configs of '{self._path_to_dataset}'")
def _autogenerate_dummy_data(self, dataset_builder, mock_dl_manager, keep_uncompressed) -> Optional[bool]:
dl_cache_dir = (
os.path.join(self._cache_dir, config.DOWNLOADED_DATASETS_DIR)
if self._cache_dir
else config.DOWNLOADED_DATASETS_PATH
)
download_config = DownloadConfig(cache_dir=dl_cache_dir)
dl_manager = DummyDataGeneratorDownloadManager(
dataset_name=self._dataset_name, mock_download_manager=mock_dl_manager, download_config=download_config
)
dataset_builder._split_generators(dl_manager)
mock_dl_manager.load_existing_dummy_data = False # don't use real dummy data
dl_manager.auto_generate_dummy_data_folder(
n_lines=self._n_lines,
json_field=self._json_field,
xml_tag=self._xml_tag,
match_text_files=self._match_text_files,
encoding=self._encoding,
)
if not keep_uncompressed:
path_to_dataset = os.path.join(mock_dl_manager.datasets_scripts_dir, mock_dl_manager.dataset_name)
dl_manager.compress_autogenerated_dummy_data(path_to_dataset)
# now test that the dummy_data.zip file actually works
mock_dl_manager.load_existing_dummy_data = True # use real dummy data
n_examples_per_split = {}
os.makedirs(dataset_builder._cache_dir, exist_ok=True)
try:
split_generators = dataset_builder._split_generators(mock_dl_manager)
for split_generator in split_generators:
dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
n_examples_per_split[split_generator.name] = split_generator.split_info.num_examples
except OSError as e:
logger.error(
f"Failed to load dummy data for config '{dataset_builder.config.name}''.\nOriginal error:\n"
+ str(e)
)
return False
else:
if all(n_examples > 0 for n_examples in n_examples_per_split.values()):
logger.warning(
f"Dummy data generation done and dummy data test succeeded for config '{dataset_builder.config.name}''."
)
return True
else:
empty_splits = [
split_name for split_name in n_examples_per_split if n_examples_per_split[split_name] == 0
]
logger.warning(
f"Dummy data generation done but dummy data test failed since splits {empty_splits} have 0 examples for config '{dataset_builder.config.name}''."
)
return False
else:
generated_dummy_data_dir = os.path.join(self._path_to_dataset, mock_dl_manager.dummy_data_folder)
logger.info(
f"Dummy data generated in directory '{generated_dummy_data_dir}' but kept uncompressed. "
"Please compress this directory into a zip file to use it for dummy data tests."
)
def _print_dummy_data_instructions(self, dataset_builder, mock_dl_manager):
dummy_data_folder = os.path.join(self._path_to_dataset, mock_dl_manager.dummy_data_folder)
logger.info(f"Creating dummy folder structure for {dummy_data_folder}... ")
os.makedirs(dummy_data_folder, exist_ok=True)
try:
generator_splits = dataset_builder._split_generators(mock_dl_manager)
except FileNotFoundError as e:
print(
f"Dataset {self._dataset_name} with config {mock_dl_manager.config} seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file {e.filename}."
)
files_to_create = set()
split_names = []
dummy_file_name = mock_dl_manager.dummy_file_name
for split in generator_splits:
logger.info(f"Collecting dummy data file paths to create for {split.name}")
split_names.append(split.name)
gen_kwargs = split.gen_kwargs
generator = dataset_builder._generate_examples(**gen_kwargs)
try:
dummy_data_guidance_print = "\n" + 30 * "=" + "DUMMY DATA INSTRUCTIONS" + 30 * "=" + "\n"
config_string = (
f"config {mock_dl_manager.config.name} of " if mock_dl_manager.config is not None else ""
)
dummy_data_guidance_print += (
"- In order to create the dummy data for "
+ config_string
+ f"{self._dataset_name}, please go into the folder '{dummy_data_folder}' with `cd {dummy_data_folder}` . \n\n"
)
# trigger generate function
for key, record in generator:
pass
dummy_data_guidance_print += f"- It appears that the function `_generate_examples(...)` expects one or more files in the folder {dummy_file_name} using the function `glob.glob(...)`. In this case, please refer to the `_generate_examples(...)` method to see under which filename the dummy data files should be created. \n\n"
except FileNotFoundError as e:
files_to_create.add(e.filename)
split_names = ", ".join(split_names)
if len(files_to_create) > 0:
# no glob.glob(...) in `_generate_examples(...)`
if len(files_to_create) == 1 and next(iter(files_to_create)) == dummy_file_name:
dummy_data_guidance_print += f"- Please create a single dummy data file called '{next(iter(files_to_create))}' from the folder '{dummy_data_folder}'. Make sure that the dummy data file provides at least one example for the split(s) '{split_names}' \n\n"
files_string = dummy_file_name
else:
files_string = ", ".join(files_to_create)
dummy_data_guidance_print += f"- Please create the following dummy data files '{files_string}' from the folder '{dummy_data_folder}'\n\n"
dummy_data_guidance_print += f"- For each of the splits '{split_names}', make sure that one or more of the dummy data files provide at least one example \n\n"
dummy_data_guidance_print += f"- If the method `_generate_examples(...)` includes multiple `open()` statements, you might have to create other files in addition to '{files_string}'. In this case please refer to the `_generate_examples(...)` method \n\n"
if len(files_to_create) == 1 and next(iter(files_to_create)) == dummy_file_name:
dummy_data_guidance_print += f"- After the dummy data file is created, it should be zipped to '{dummy_file_name}.zip' with the command `zip {dummy_file_name}.zip {dummy_file_name}` \n\n"
dummy_data_guidance_print += (
f"- You can now delete the file '{dummy_file_name}' with the command `rm {dummy_file_name}` \n\n"
)
dummy_data_guidance_print += f"- To get the file '{dummy_file_name}' back for further changes to the dummy data, simply unzip {dummy_file_name}.zip with the command `unzip {dummy_file_name}.zip` \n\n"
else:
dummy_data_guidance_print += f"- After all dummy data files are created, they should be zipped recursively to '{dummy_file_name}.zip' with the command `zip -r {dummy_file_name}.zip {dummy_file_name}/` \n\n"
dummy_data_guidance_print += (
f"- You can now delete the folder '{dummy_file_name}' with the command `rm -r {dummy_file_name}` \n\n"
)
dummy_data_guidance_print += f"- To get the folder '{dummy_file_name}' back for further changes to the dummy data, simply unzip {dummy_file_name}.zip with the command `unzip {dummy_file_name}.zip` \n\n"
dummy_data_guidance_print += (
f"- Make sure you have created the file '{dummy_file_name}.zip' in '{dummy_data_folder}' \n"
)
dummy_data_guidance_print += 83 * "=" + "\n"
print(dummy_data_guidance_print)
|
datasets/src/datasets/commands/dummy_data.py/0
|
{
"file_path": "datasets/src/datasets/commands/dummy_data.py",
"repo_id": "datasets",
"token_count": 11107
}
| 77
|
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""This class handle features definition in datasets and some utilities to display table type."""
import copy
import json
import re
import sys
from collections.abc import Iterable, Mapping
from collections.abc import Sequence as SequenceABC
from dataclasses import InitVar, dataclass, field, fields
from functools import reduce, wraps
from operator import mul
from typing import Any, Callable, ClassVar, Dict, List, Optional, Tuple, Union
from typing import Sequence as Sequence_
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.types
import pyarrow_hotfix # noqa: F401 # to fix vulnerability on pyarrow<14.0.1
from pandas.api.extensions import ExtensionArray as PandasExtensionArray
from pandas.api.extensions import ExtensionDtype as PandasExtensionDtype
from .. import config
from ..naming import camelcase_to_snakecase, snakecase_to_camelcase
from ..table import array_cast
from ..utils import experimental, logging
from ..utils.py_utils import asdict, first_non_null_value, zip_dict
from .audio import Audio
from .image import Image, encode_pil_image
from .translation import Translation, TranslationVariableLanguages
logger = logging.get_logger(__name__)
def _arrow_to_datasets_dtype(arrow_type: pa.DataType) -> str:
"""
_arrow_to_datasets_dtype takes a pyarrow.DataType and converts it to a datasets string dtype.
In effect, `dt == string_to_arrow(_arrow_to_datasets_dtype(dt))`
"""
if pyarrow.types.is_null(arrow_type):
return "null"
elif pyarrow.types.is_boolean(arrow_type):
return "bool"
elif pyarrow.types.is_int8(arrow_type):
return "int8"
elif pyarrow.types.is_int16(arrow_type):
return "int16"
elif pyarrow.types.is_int32(arrow_type):
return "int32"
elif pyarrow.types.is_int64(arrow_type):
return "int64"
elif pyarrow.types.is_uint8(arrow_type):
return "uint8"
elif pyarrow.types.is_uint16(arrow_type):
return "uint16"
elif pyarrow.types.is_uint32(arrow_type):
return "uint32"
elif pyarrow.types.is_uint64(arrow_type):
return "uint64"
elif pyarrow.types.is_float16(arrow_type):
return "float16" # pyarrow dtype is "halffloat"
elif pyarrow.types.is_float32(arrow_type):
return "float32" # pyarrow dtype is "float"
elif pyarrow.types.is_float64(arrow_type):
return "float64" # pyarrow dtype is "double"
elif pyarrow.types.is_time32(arrow_type):
return f"time32[{pa.type_for_alias(str(arrow_type)).unit}]"
elif pyarrow.types.is_time64(arrow_type):
return f"time64[{pa.type_for_alias(str(arrow_type)).unit}]"
elif pyarrow.types.is_timestamp(arrow_type):
if arrow_type.tz is None:
return f"timestamp[{arrow_type.unit}]"
elif arrow_type.tz:
return f"timestamp[{arrow_type.unit}, tz={arrow_type.tz}]"
else:
raise ValueError(f"Unexpected timestamp object {arrow_type}.")
elif pyarrow.types.is_date32(arrow_type):
return "date32" # pyarrow dtype is "date32[day]"
elif pyarrow.types.is_date64(arrow_type):
return "date64" # pyarrow dtype is "date64[ms]"
elif pyarrow.types.is_duration(arrow_type):
return f"duration[{arrow_type.unit}]"
elif pyarrow.types.is_decimal128(arrow_type):
return f"decimal128({arrow_type.precision}, {arrow_type.scale})"
elif pyarrow.types.is_decimal256(arrow_type):
return f"decimal256({arrow_type.precision}, {arrow_type.scale})"
elif pyarrow.types.is_binary(arrow_type):
return "binary"
elif pyarrow.types.is_large_binary(arrow_type):
return "large_binary"
elif pyarrow.types.is_string(arrow_type):
return "string"
elif pyarrow.types.is_large_string(arrow_type):
return "large_string"
else:
raise ValueError(f"Arrow type {arrow_type} does not have a datasets dtype equivalent.")
def string_to_arrow(datasets_dtype: str) -> pa.DataType:
"""
string_to_arrow takes a datasets string dtype and converts it to a pyarrow.DataType.
In effect, `dt == string_to_arrow(_arrow_to_datasets_dtype(dt))`
This is necessary because the datasets.Value() primitive type is constructed using a string dtype
Value(dtype=str)
But Features.type (via `get_nested_type()`) expects to resolve Features into a pyarrow Schema,
which means that each Value() must be able to resolve into a corresponding pyarrow.DataType, which is the
purpose of this function.
"""
def _dtype_error_msg(dtype, pa_dtype, examples=None, urls=None):
msg = f"{dtype} is not a validly formatted string representation of the pyarrow {pa_dtype} type."
if examples:
examples = ", ".join(examples[:-1]) + " or " + examples[-1] if len(examples) > 1 else examples[0]
msg += f"\nValid examples include: {examples}."
if urls:
urls = ", ".join(urls[:-1]) + " and " + urls[-1] if len(urls) > 1 else urls[0]
msg += f"\nFor more insformation, see: {urls}."
return msg
if datasets_dtype in pa.__dict__:
return pa.__dict__[datasets_dtype]()
if (datasets_dtype + "_") in pa.__dict__:
return pa.__dict__[datasets_dtype + "_"]()
timestamp_matches = re.search(r"^timestamp\[(.*)\]$", datasets_dtype)
if timestamp_matches:
timestamp_internals = timestamp_matches.group(1)
internals_matches = re.search(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+\-:]*)$", timestamp_internals)
if timestamp_internals in ["s", "ms", "us", "ns"]:
return pa.timestamp(timestamp_internals)
elif internals_matches:
return pa.timestamp(internals_matches.group(1), internals_matches.group(2))
else:
raise ValueError(
_dtype_error_msg(
datasets_dtype,
"timestamp",
examples=["timestamp[us]", "timestamp[us, tz=America/New_York"],
urls=["https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html"],
)
)
duration_matches = re.search(r"^duration\[(.*)\]$", datasets_dtype)
if duration_matches:
duration_internals = duration_matches.group(1)
if duration_internals in ["s", "ms", "us", "ns"]:
return pa.duration(duration_internals)
else:
raise ValueError(
_dtype_error_msg(
datasets_dtype,
"duration",
examples=["duration[s]", "duration[us]"],
urls=["https://arrow.apache.org/docs/python/generated/pyarrow.duration.html"],
)
)
time_matches = re.search(r"^time(.*)\[(.*)\]$", datasets_dtype)
if time_matches:
time_internals_bits = time_matches.group(1)
if time_internals_bits == "32":
time_internals_unit = time_matches.group(2)
if time_internals_unit in ["s", "ms"]:
return pa.time32(time_internals_unit)
else:
raise ValueError(
f"{time_internals_unit} is not a valid unit for the pyarrow time32 type. Supported units: s (second) and ms (millisecond)."
)
elif time_internals_bits == "64":
time_internals_unit = time_matches.group(2)
if time_internals_unit in ["us", "ns"]:
return pa.time64(time_internals_unit)
else:
raise ValueError(
f"{time_internals_unit} is not a valid unit for the pyarrow time64 type. Supported units: us (microsecond) and ns (nanosecond)."
)
else:
raise ValueError(
_dtype_error_msg(
datasets_dtype,
"time",
examples=["time32[s]", "time64[us]"],
urls=[
"https://arrow.apache.org/docs/python/generated/pyarrow.time32.html",
"https://arrow.apache.org/docs/python/generated/pyarrow.time64.html",
],
)
)
decimal_matches = re.search(r"^decimal(.*)\((.*)\)$", datasets_dtype)
if decimal_matches:
decimal_internals_bits = decimal_matches.group(1)
if decimal_internals_bits == "128":
decimal_internals_precision_and_scale = re.search(r"^(\d+),\s*(-?\d+)$", decimal_matches.group(2))
if decimal_internals_precision_and_scale:
precision = decimal_internals_precision_and_scale.group(1)
scale = decimal_internals_precision_and_scale.group(2)
return pa.decimal128(int(precision), int(scale))
else:
raise ValueError(
_dtype_error_msg(
datasets_dtype,
"decimal128",
examples=["decimal128(10, 2)", "decimal128(4, -2)"],
urls=["https://arrow.apache.org/docs/python/generated/pyarrow.decimal128.html"],
)
)
elif decimal_internals_bits == "256":
decimal_internals_precision_and_scale = re.search(r"^(\d+),\s*(-?\d+)$", decimal_matches.group(2))
if decimal_internals_precision_and_scale:
precision = decimal_internals_precision_and_scale.group(1)
scale = decimal_internals_precision_and_scale.group(2)
return pa.decimal256(int(precision), int(scale))
else:
raise ValueError(
_dtype_error_msg(
datasets_dtype,
"decimal256",
examples=["decimal256(30, 2)", "decimal256(38, -4)"],
urls=["https://arrow.apache.org/docs/python/generated/pyarrow.decimal256.html"],
)
)
else:
raise ValueError(
_dtype_error_msg(
datasets_dtype,
"decimal",
examples=["decimal128(12, 3)", "decimal256(40, 6)"],
urls=[
"https://arrow.apache.org/docs/python/generated/pyarrow.decimal128.html",
"https://arrow.apache.org/docs/python/generated/pyarrow.decimal256.html",
],
)
)
raise ValueError(
f"Neither {datasets_dtype} nor {datasets_dtype + '_'} seems to be a pyarrow data type. "
f"Please make sure to use a correct data type, see: "
f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions"
)
def _cast_to_python_objects(obj: Any, only_1d_for_numpy: bool, optimize_list_casting: bool) -> Tuple[Any, bool]:
"""
Cast pytorch/tensorflow/pandas objects to python numpy array/lists.
It works recursively.
If `optimize_list_casting` is True, to avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be cast.
If the first element needs to be cast, then all the elements of the list will be cast, otherwise they'll stay the same.
This trick allows casting objects that contain tokenizer outputs without iterating over every single token, for example.
Args:
obj: the object (nested struct) to cast.
only_1d_for_numpy (bool): whether to keep the full multi-dim tensors as multi-dim numpy arrays, or convert them to
nested lists of 1-dimensional numpy arrays. This can be useful to keep only 1-d arrays to instantiate Arrow arrays.
Indeed Arrow only supports converting 1-dimensional array values.
optimize_list_casting (bool): whether to optimize list casting by checking the first non-null element to see if it needs to be cast
and if it doesn't, not checking the rest of the list elements.
Returns:
casted_obj: the casted object
has_changed (bool): True if the object has been changed, False if it is identical
"""
if config.TF_AVAILABLE and "tensorflow" in sys.modules:
import tensorflow as tf
if config.TORCH_AVAILABLE and "torch" in sys.modules:
import torch
if config.JAX_AVAILABLE and "jax" in sys.modules:
import jax.numpy as jnp
if config.PIL_AVAILABLE and "PIL" in sys.modules:
import PIL.Image
if isinstance(obj, np.ndarray):
if obj.ndim == 0:
return obj[()], True
elif not only_1d_for_numpy or obj.ndim == 1:
return obj, False
else:
return (
[
_cast_to_python_objects(
x, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0]
for x in obj
],
True,
)
elif config.TORCH_AVAILABLE and "torch" in sys.modules and isinstance(obj, torch.Tensor):
if obj.ndim == 0:
return obj.detach().cpu().numpy()[()], True
elif not only_1d_for_numpy or obj.ndim == 1:
return obj.detach().cpu().numpy(), True
else:
return (
[
_cast_to_python_objects(
x, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0]
for x in obj.detach().cpu().numpy()
],
True,
)
elif config.TF_AVAILABLE and "tensorflow" in sys.modules and isinstance(obj, tf.Tensor):
if obj.ndim == 0:
return obj.numpy()[()], True
elif not only_1d_for_numpy or obj.ndim == 1:
return obj.numpy(), True
else:
return (
[
_cast_to_python_objects(
x, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0]
for x in obj.numpy()
],
True,
)
elif config.JAX_AVAILABLE and "jax" in sys.modules and isinstance(obj, jnp.ndarray):
if obj.ndim == 0:
return np.asarray(obj)[()], True
elif not only_1d_for_numpy or obj.ndim == 1:
return np.asarray(obj), True
else:
return (
[
_cast_to_python_objects(
x, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0]
for x in np.asarray(obj)
],
True,
)
elif config.PIL_AVAILABLE and "PIL" in sys.modules and isinstance(obj, PIL.Image.Image):
return encode_pil_image(obj), True
elif isinstance(obj, pd.Series):
return (
_cast_to_python_objects(
obj.tolist(), only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0],
True,
)
elif isinstance(obj, pd.DataFrame):
return (
{
key: _cast_to_python_objects(
value, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0]
for key, value in obj.to_dict("series").items()
},
True,
)
elif isinstance(obj, pd.Timestamp):
return obj.to_pydatetime(), True
elif isinstance(obj, pd.Timedelta):
return obj.to_pytimedelta(), True
elif isinstance(obj, Mapping):
has_changed = not isinstance(obj, dict)
output = {}
for k, v in obj.items():
casted_v, has_changed_v = _cast_to_python_objects(
v, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)
has_changed |= has_changed_v
output[k] = casted_v
return output if has_changed else obj, has_changed
elif hasattr(obj, "__array__"):
return (
_cast_to_python_objects(
obj.__array__(), only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0],
True,
)
elif isinstance(obj, (list, tuple)):
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt):
break
casted_first_elmt, has_changed_first_elmt = _cast_to_python_objects(
first_elmt, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)
if has_changed_first_elmt or not optimize_list_casting:
return (
[
_cast_to_python_objects(
elmt, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0]
for elmt in obj
],
True,
)
else:
if isinstance(obj, (list, tuple)):
return obj, False
else:
return list(obj), True
else:
return obj, False
else:
return obj, False
def cast_to_python_objects(obj: Any, only_1d_for_numpy=False, optimize_list_casting=True) -> Any:
"""
Cast numpy/pytorch/tensorflow/pandas objects to python lists.
It works recursively.
If `optimize_list_casting` is True, to avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be cast.
If the first element needs to be cast, then all the elements of the list will be cast, otherwise they'll stay the same.
This trick allows casting objects that contain tokenizer outputs without iterating over every single token, for example.
Args:
obj: the object (nested struct) to cast
only_1d_for_numpy (bool, default ``False``): whether to keep the full multi-dim tensors as multi-dim numpy arrays, or convert them to
nested lists of 1-dimensional numpy arrays. This can be useful to keep only 1-d arrays to instantiate Arrow arrays.
Indeed Arrow only supports converting 1-dimensional array values.
optimize_list_casting (bool, default ``True``): whether to optimize list casting by checking the first non-null element to see if it needs to be cast
and if it doesn't, not checking the rest of the list elements.
Returns:
casted_obj: the casted object
"""
return _cast_to_python_objects(
obj, only_1d_for_numpy=only_1d_for_numpy, optimize_list_casting=optimize_list_casting
)[0]
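# Illustrative example: nested containers are converted recursively, pandas
# timestamps become datetimes, and 1-d numpy arrays are kept as-is by default:
# >>> cast_to_python_objects({"x": np.array([1, 2]), "t": pd.Timestamp("2020-01-01")})
# {'x': array([1, 2]), 't': datetime.datetime(2020, 1, 1, 0, 0)}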
@dataclass
class Value:
"""
The `Value` dtypes are as follows:
- `null`
- `bool`
- `int8`
- `int16`
- `int32`
- `int64`
- `uint8`
- `uint16`
- `uint32`
- `uint64`
- `float16`
- `float32` (alias float)
- `float64` (alias double)
- `time32[(s|ms)]`
- `time64[(us|ns)]`
- `timestamp[(s|ms|us|ns)]`
- `timestamp[(s|ms|us|ns), tz=(tzstring)]`
- `date32`
- `date64`
- `duration[(s|ms|us|ns)]`
- `decimal128(precision, scale)`
- `decimal256(precision, scale)`
- `binary`
- `large_binary`
- `string`
- `large_string`
Example:
```py
>>> from datasets import Features
>>> features = Features({'stars': Value(dtype='int32')})
>>> features
{'stars': Value(dtype='int32', id=None)}
```
"""
dtype: str
id: Optional[str] = None
# Automatically constructed
pa_type: ClassVar[Any] = None
_type: str = field(default="Value", init=False, repr=False)
def __post_init__(self):
if self.dtype == "double": # fix inferred type
self.dtype = "float64"
if self.dtype == "float": # fix inferred type
self.dtype = "float32"
self.pa_type = string_to_arrow(self.dtype)
def __call__(self):
return self.pa_type
def encode_example(self, value):
if pa.types.is_boolean(self.pa_type):
return bool(value)
elif pa.types.is_integer(self.pa_type):
return int(value)
elif pa.types.is_floating(self.pa_type):
return float(value)
elif pa.types.is_string(self.pa_type):
return str(value)
else:
return value
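# Illustrative example: encode_example coerces raw values to the declared dtype:
# >>> Value("int32").encode_example("42")
# 42
# >>> Value("string").encode_example(3.14)
# '3.14'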
class _ArrayXD:
def __post_init__(self):
self.shape = tuple(self.shape)
def __call__(self):
pa_type = globals()[self.__class__.__name__ + "ExtensionType"](self.shape, self.dtype)
return pa_type
def encode_example(self, value):
return value
@dataclass
class Array2D(_ArrayXD):
"""Create a two-dimensional array.
Args:
shape (`tuple`):
The size of each dimension.
dtype (`str`):
The value of the data type.
Example:
```py
>>> from datasets import Features
>>> features = Features({'x': Array2D(shape=(1, 3), dtype='int32')})
```
"""
shape: tuple
dtype: str
id: Optional[str] = None
# Automatically constructed
_type: str = field(default="Array2D", init=False, repr=False)
@dataclass
class Array3D(_ArrayXD):
"""Create a three-dimensional array.
Args:
shape (`tuple`):
The size of each dimension.
dtype (`str`):
The value of the data type.
Example:
```py
>>> from datasets import Features
>>> features = Features({'x': Array3D(shape=(1, 2, 3), dtype='int32')})
```
"""
shape: tuple
dtype: str
id: Optional[str] = None
# Automatically constructed
_type: str = field(default="Array3D", init=False, repr=False)
@dataclass
class Array4D(_ArrayXD):
"""Create a four-dimensional array.
Args:
shape (`tuple`):
The size of each dimension.
dtype (`str`):
The value of the data type.
Example:
```py
>>> from datasets import Features
>>> features = Features({'x': Array4D(shape=(1, 2, 2, 3), dtype='int32')})
```
"""
shape: tuple
dtype: str
id: Optional[str] = None
# Automatically constructed
_type: str = field(default="Array4D", init=False, repr=False)
@dataclass
class Array5D(_ArrayXD):
"""Create a five-dimensional array.
Args:
shape (`tuple`):
The size of each dimension.
dtype (`str`):
The value of the data type.
Example:
```py
>>> from datasets import Features
>>> features = Features({'x': Array5D(shape=(1, 2, 2, 3, 3), dtype='int32')})
```
"""
shape: tuple
dtype: str
id: Optional[str] = None
# Automatically constructed
_type: str = field(default="Array5D", init=False, repr=False)
class _ArrayXDExtensionType(pa.ExtensionType):
ndims: Optional[int] = None
def __init__(self, shape: tuple, dtype: str):
if self.ndims is None or self.ndims <= 1:
raise ValueError("You must instantiate an array type with a value for dim that is > 1")
if len(shape) != self.ndims:
raise ValueError(f"shape={shape} and ndims={self.ndims} don't match")
for dim in range(1, self.ndims):
if shape[dim] is None:
raise ValueError(f"Support only dynamic size on first dimension. Got: {shape}")
self.shape = tuple(shape)
self.value_type = dtype
self.storage_dtype = self._generate_dtype(self.value_type)
pa.ExtensionType.__init__(self, self.storage_dtype, f"{self.__class__.__module__}.{self.__class__.__name__}")
def __arrow_ext_serialize__(self):
return json.dumps((self.shape, self.value_type)).encode()
@classmethod
def __arrow_ext_deserialize__(cls, storage_type, serialized):
args = json.loads(serialized)
return cls(*args)
# This was added to pa.ExtensionType in pyarrow >= 13.0.0
def __reduce__(self):
return self.__arrow_ext_deserialize__, (self.storage_type, self.__arrow_ext_serialize__())
def __hash__(self):
return hash((self.__class__, self.shape, self.value_type))
def __arrow_ext_class__(self):
return ArrayExtensionArray
def _generate_dtype(self, dtype):
dtype = string_to_arrow(dtype)
for d in reversed(self.shape):
dtype = pa.list_(dtype)
# Don't specify the size of the list, since fixed length list arrays have issues
# being validated after slicing in pyarrow 0.17.1
return dtype
def to_pandas_dtype(self):
return PandasArrayExtensionDtype(self.value_type)
class Array2DExtensionType(_ArrayXDExtensionType):
ndims = 2
class Array3DExtensionType(_ArrayXDExtensionType):
ndims = 3
class Array4DExtensionType(_ArrayXDExtensionType):
ndims = 4
class Array5DExtensionType(_ArrayXDExtensionType):
ndims = 5
# Register the extension types for deserialization
pa.register_extension_type(Array2DExtensionType((1, 2), "int64"))
pa.register_extension_type(Array3DExtensionType((1, 2, 3), "int64"))
pa.register_extension_type(Array4DExtensionType((1, 2, 3, 4), "int64"))
pa.register_extension_type(Array5DExtensionType((1, 2, 3, 4, 5), "int64"))
def _is_zero_copy_only(pa_type: pa.DataType, unnest: bool = False) -> bool:
"""
When converting a pyarrow array to a numpy array, we must know whether this could be done in zero-copy or not.
This function returns the value of the ``zero_copy_only`` parameter to pass to ``.to_numpy()``, given the type of the pyarrow array.
# zero copy is available for all primitive types except booleans and temporal types (date, time, timestamp or duration)
# primitive types are types for which the physical representation in arrow and in numpy
# https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/types.pxi#L821
# see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy
# and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22
"""
def _unnest_pa_type(pa_type: pa.DataType) -> pa.DataType:
if pa.types.is_list(pa_type):
return _unnest_pa_type(pa_type.value_type)
return pa_type
if unnest:
pa_type = _unnest_pa_type(pa_type)
return pa.types.is_primitive(pa_type) and not (pa.types.is_boolean(pa_type) or pa.types.is_temporal(pa_type))
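# Illustrative examples:
# >>> _is_zero_copy_only(pa.int64())
# True
# >>> _is_zero_copy_only(pa.bool_())  # booleans are bit-packed in Arrow, so a copy is needed
# False
# >>> _is_zero_copy_only(pa.list_(pa.float32()), unnest=True)
# True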
class ArrayExtensionArray(pa.ExtensionArray):
def __array__(self):
zero_copy_only = _is_zero_copy_only(self.storage.type, unnest=True)
return self.to_numpy(zero_copy_only=zero_copy_only)
def __getitem__(self, i):
return self.storage[i]
def to_numpy(self, zero_copy_only=True):
storage: pa.ListArray = self.storage
null_mask = storage.is_null().to_numpy(zero_copy_only=False)
if self.type.shape[0] is not None:
size = 1
null_indices = np.arange(len(storage))[null_mask] - np.arange(np.sum(null_mask))
for i in range(self.type.ndims):
size *= self.type.shape[i]
storage = storage.flatten()
numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)
numpy_arr = numpy_arr.reshape(len(self) - len(null_indices), *self.type.shape)
if len(null_indices):
numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)
else:
shape = self.type.shape
ndims = self.type.ndims
arrays = []
first_dim_offsets = np.array([off.as_py() for off in storage.offsets])
for i, is_null in enumerate(null_mask):
if is_null:
arrays.append(np.nan)
else:
storage_el = storage[i : i + 1]
first_dim = first_dim_offsets[i + 1] - first_dim_offsets[i]
# flatten storage
for _ in range(ndims):
storage_el = storage_el.flatten()
numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)
arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))
if len(np.unique(np.diff(first_dim_offsets))) > 1:
# ragged
numpy_arr = np.empty(len(arrays), dtype=object)
numpy_arr[:] = arrays
else:
numpy_arr = np.array(arrays)
return numpy_arr
def to_pylist(self):
zero_copy_only = _is_zero_copy_only(self.storage.type, unnest=True)
numpy_arr = self.to_numpy(zero_copy_only=zero_copy_only)
if self.type.shape[0] is None and numpy_arr.dtype == object:
return [arr.tolist() for arr in numpy_arr.tolist()]
else:
return numpy_arr.tolist()
class PandasArrayExtensionDtype(PandasExtensionDtype):
_metadata = "value_type"
def __init__(self, value_type: Union["PandasArrayExtensionDtype", np.dtype]):
self._value_type = value_type
def __from_arrow__(self, array: Union[pa.Array, pa.ChunkedArray]):
if isinstance(array, pa.ChunkedArray):
array = array.type.wrap_array(pa.concat_arrays([chunk.storage for chunk in array.chunks]))
zero_copy_only = _is_zero_copy_only(array.storage.type, unnest=True)
numpy_arr = array.to_numpy(zero_copy_only=zero_copy_only)
return PandasArrayExtensionArray(numpy_arr)
@classmethod
def construct_array_type(cls):
return PandasArrayExtensionArray
@property
def type(self) -> type:
return np.ndarray
@property
def kind(self) -> str:
return "O"
@property
def name(self) -> str:
return f"array[{self.value_type}]"
@property
def value_type(self) -> np.dtype:
return self._value_type
class PandasArrayExtensionArray(PandasExtensionArray):
def __init__(self, data: np.ndarray, copy: bool = False):
self._data = data if not copy else np.array(data)
self._dtype = PandasArrayExtensionDtype(data.dtype)
def __array__(self, dtype=None):
"""
Convert to NumPy Array.
Note that Pandas expects a 1D array when dtype is set to object.
But for other dtypes, the returned shape is the same as the one of ``data``.
More info about pandas 1D requirement for PandasExtensionArray here:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray
"""
if dtype == object:
out = np.empty(len(self._data), dtype=object)
for i in range(len(self._data)):
out[i] = self._data[i]
return out
if dtype is None:
return self._data
else:
return self._data.astype(dtype)
def copy(self, deep: bool = False) -> "PandasArrayExtensionArray":
return PandasArrayExtensionArray(self._data, copy=True)
@classmethod
def _from_sequence(
cls, scalars, dtype: Optional[PandasArrayExtensionDtype] = None, copy: bool = False
) -> "PandasArrayExtensionArray":
if len(scalars) > 1 and all(
isinstance(x, np.ndarray) and x.shape == scalars[0].shape and x.dtype == scalars[0].dtype for x in scalars
):
data = np.array(scalars, dtype=dtype if dtype is None else dtype.value_type, copy=copy)
else:
data = np.empty(len(scalars), dtype=object)
data[:] = scalars
return cls(data, copy=copy)
@classmethod
def _concat_same_type(cls, to_concat: Sequence_["PandasArrayExtensionArray"]) -> "PandasArrayExtensionArray":
if len(to_concat) > 1 and all(
va._data.shape == to_concat[0]._data.shape and va._data.dtype == to_concat[0]._data.dtype
for va in to_concat
):
data = np.vstack([va._data for va in to_concat])
else:
data = np.empty(len(to_concat), dtype=object)
data[:] = [va._data for va in to_concat]
return cls(data, copy=False)
@property
def dtype(self) -> PandasArrayExtensionDtype:
return self._dtype
@property
def nbytes(self) -> int:
return self._data.nbytes
def isna(self) -> np.ndarray:
return np.array([pd.isna(arr).any() for arr in self._data])
def __setitem__(self, key: Union[int, slice, np.ndarray], value: Any) -> None:
raise NotImplementedError()
def __getitem__(self, item: Union[int, slice, np.ndarray]) -> Union[np.ndarray, "PandasArrayExtensionArray"]:
if isinstance(item, int):
return self._data[item]
return PandasArrayExtensionArray(self._data[item], copy=False)
def take(
self, indices: Sequence_[int], allow_fill: bool = False, fill_value: bool = None
) -> "PandasArrayExtensionArray":
indices: np.ndarray = np.asarray(indices, dtype=int)
if allow_fill:
fill_value = (
self.dtype.na_value if fill_value is None else np.asarray(fill_value, dtype=self.dtype.value_type)
)
mask = indices == -1
if (indices < -1).any():
raise ValueError("Invalid value in `indices`, must be all >= -1 for `allow_fill` is True")
elif len(self) > 0:
pass
elif not np.all(mask):
raise IndexError("Invalid take for empty PandasArrayExtensionArray, must be all -1.")
else:
data = np.array([fill_value] * len(indices), dtype=self.dtype.value_type)
return PandasArrayExtensionArray(data, copy=False)
took = self._data.take(indices, axis=0)
if allow_fill and mask.any():
took[mask] = [fill_value] * np.sum(mask)
return PandasArrayExtensionArray(took, copy=False)
def __len__(self) -> int:
return len(self._data)
def __eq__(self, other) -> np.ndarray:
if not isinstance(other, PandasArrayExtensionArray):
raise NotImplementedError(f"Invalid type to compare to: {type(other)}")
return (self._data == other._data).all()
def pandas_types_mapper(dtype):
if isinstance(dtype, _ArrayXDExtensionType):
return PandasArrayExtensionDtype(dtype.value_type)
@dataclass
class ClassLabel:
"""Feature type for integer class labels.
There are 3 ways to define a `ClassLabel`, which correspond to the 3 arguments:
* `num_classes`: Create 0 to (num_classes-1) labels.
* `names`: List of label strings.
* `names_file`: File containing the list of labels.
Under the hood the labels are stored as integers.
You can use negative integers to represent unknown/missing labels.
Args:
num_classes (`int`, *optional*):
Number of classes. All labels must be < `num_classes`.
names (`list` of `str`, *optional*):
String names for the integer classes.
The order in which the names are provided is kept.
names_file (`str`, *optional*):
Path to a file with names for the integer classes, one per line.
Example:
```py
>>> from datasets import Features
>>> features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
>>> features
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
```
"""
num_classes: InitVar[Optional[int]] = None # Pseudo-field: ignored by asdict/fields when converting to/from dict
names: List[str] = None
names_file: InitVar[Optional[str]] = None # Pseudo-field: ignored by asdict/fields when converting to/from dict
id: Optional[str] = None
# Automatically constructed
dtype: ClassVar[str] = "int64"
pa_type: ClassVar[Any] = pa.int64()
_str2int: ClassVar[Dict[str, int]] = None
_int2str: ClassVar[List[str]] = None
_type: str = field(default="ClassLabel", init=False, repr=False)
def __post_init__(self, num_classes, names_file):
self.num_classes = num_classes
self.names_file = names_file
if self.names_file is not None and self.names is not None:
raise ValueError("Please provide either names or names_file but not both.")
# Set self.names
if self.names is None:
if self.names_file is not None:
self.names = self._load_names_from_file(self.names_file)
elif self.num_classes is not None:
self.names = [str(i) for i in range(self.num_classes)]
else:
raise ValueError("Please provide either num_classes, names or names_file.")
elif not isinstance(self.names, SequenceABC):
raise TypeError(f"Please provide names as a list, is {type(self.names)}")
# Set self.num_classes
if self.num_classes is None:
self.num_classes = len(self.names)
elif self.num_classes != len(self.names):
raise ValueError(
"ClassLabel number of names do not match the defined num_classes. "
f"Got {len(self.names)} names VS {self.num_classes} num_classes"
)
# Prepare mappings
self._int2str = [str(name) for name in self.names]
self._str2int = {name: i for i, name in enumerate(self._int2str)}
if len(self._int2str) != len(self._str2int):
raise ValueError("Some label names are duplicated. Each label name should be unique.")
def __call__(self):
return self.pa_type
def str2int(self, values: Union[str, Iterable]) -> Union[int, Iterable]:
"""Conversion class name `string` => `integer`.
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train")
>>> ds.features["label"].str2int('neg')
0
```
"""
if not isinstance(values, str) and not isinstance(values, Iterable):
raise ValueError(
f"Values {values} should be a string or an Iterable (list, numpy array, pytorch, tensorflow tensors)"
)
return_list = True
if isinstance(values, str):
values = [values]
return_list = False
output = [self._strval2int(value) for value in values]
return output if return_list else output[0]
def _strval2int(self, value: str) -> int:
failed_parse = False
value = str(value)
# first attempt - raw string value
int_value = self._str2int.get(value)
if int_value is None:
# second attempt - strip whitespace
int_value = self._str2int.get(value.strip())
if int_value is None:
# third attempt - convert str to int
try:
int_value = int(value)
except ValueError:
failed_parse = True
else:
if int_value < -1 or int_value >= self.num_classes:
failed_parse = True
if failed_parse:
raise ValueError(f"Invalid string class label {value}")
return int_value
def int2str(self, values: Union[int, Iterable]) -> Union[str, Iterable]:
"""Conversion `integer` => class name `string`.
Regarding unknown/missing labels: passing negative integers raises `ValueError`.
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train")
>>> ds.features["label"].int2str(0)
'neg'
```
"""
if not isinstance(values, int) and not isinstance(values, Iterable):
raise ValueError(
f"Values {values} should be an integer or an Iterable (list, numpy array, pytorch, tensorflow tensors)"
)
return_list = True
if isinstance(values, int):
values = [values]
return_list = False
for v in values:
if not 0 <= v < self.num_classes:
raise ValueError(f"Invalid integer class label {v:d}")
output = [self._int2str[int(v)] for v in values]
return output if return_list else output[0]
def encode_example(self, example_data):
if self.num_classes is None:
raise ValueError(
"Trying to use ClassLabel feature with undefined number of class. "
"Please set ClassLabel.names or num_classes."
)
# If a string is given, convert to associated integer
if isinstance(example_data, str):
example_data = self.str2int(example_data)
# Allowing -1 to mean no label.
if not -1 <= example_data < self.num_classes:
raise ValueError(f"Class label {example_data:d} greater than configured num_classes {self.num_classes}")
return example_data
def cast_storage(self, storage: Union[pa.StringArray, pa.IntegerArray]) -> pa.Int64Array:
"""Cast an Arrow array to the `ClassLabel` arrow storage type.
The Arrow types that can be converted to the `ClassLabel` pyarrow storage type are:
- `pa.string()`
- `pa.int()`
Args:
storage (`Union[pa.StringArray, pa.IntegerArray]`):
PyArrow array to cast.
Returns:
`pa.Int64Array`: Array in the `ClassLabel` arrow storage type.
"""
if isinstance(storage, pa.IntegerArray) and len(storage) > 0:
min_max = pc.min_max(storage).as_py()
if min_max["max"] is not None and min_max["max"] >= self.num_classes:
raise ValueError(
f"Class label {min_max['max']} greater than configured num_classes {self.num_classes}"
)
elif isinstance(storage, pa.StringArray):
storage = pa.array(
[self._strval2int(label) if label is not None else None for label in storage.to_pylist()]
)
return array_cast(storage, self.pa_type)
@staticmethod
def _load_names_from_file(names_filepath):
with open(names_filepath, encoding="utf-8") as f:
return [name.strip() for name in f.read().split("\n") if name.strip()] # Filter empty names
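# Illustrative example of the label <-> integer mappings prepared in __post_init__:
# >>> labels = ClassLabel(names=["neg", "pos"])
# >>> labels.str2int("pos"), labels.int2str(0)
# (1, 'neg')
# >>> labels.encode_example("neg")
# 0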
@dataclass
class Sequence:
"""Construct a list of feature from a single type or a dict of types.
Mostly here for compatiblity with tfds.
Args:
feature:
A list of features of a single type or a dictionary of types.
length (`int`):
Length of the sequence.
Example:
```py
>>> from datasets import Features, Sequence, Value, ClassLabel
>>> features = Features({'post': Sequence(feature={'text': Value(dtype='string'), 'upvotes': Value(dtype='int32'), 'label': ClassLabel(num_classes=2, names=['hot', 'cold'])})})
>>> features
{'post': Sequence(feature={'text': Value(dtype='string', id=None), 'upvotes': Value(dtype='int32', id=None), 'label': ClassLabel(num_classes=2, names=['hot', 'cold'], id=None)}, length=-1, id=None)}
```
"""
feature: Any
length: int = -1
id: Optional[str] = None
# Automatically constructed
dtype: ClassVar[str] = "list"
pa_type: ClassVar[Any] = None
_type: str = field(default="Sequence", init=False, repr=False)
FeatureType = Union[
dict,
list,
tuple,
Value,
ClassLabel,
Translation,
TranslationVariableLanguages,
Sequence,
Array2D,
Array3D,
Array4D,
Array5D,
Audio,
Image,
]
def _check_non_null_non_empty_recursive(obj, schema: Optional[FeatureType] = None) -> bool:
"""
Check if the object is not None.
If the object is a list or a tuple, recursively check the first element of the sequence and stop if at any point the first element is not a sequence or is an empty sequence.
"""
if obj is None:
return False
elif isinstance(obj, (list, tuple)) and (schema is None or isinstance(schema, (list, tuple, Sequence))):
if len(obj) > 0:
if schema is None:
pass
elif isinstance(schema, (list, tuple)):
schema = schema[0]
else:
schema = schema.feature
return _check_non_null_non_empty_recursive(obj[0], schema)
else:
return False
else:
return True
def get_nested_type(schema: FeatureType) -> pa.DataType:
"""
get_nested_type() converts a datasets.FeatureType into a pyarrow.DataType, and acts as the inverse of
generate_from_arrow_type().
It performs double-duty as the implementation of Features.type and handles the conversion of
datasets.Feature->pa.struct
"""
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, Features):
return pa.struct(
{key: get_nested_type(schema[key]) for key in schema}
) # Features is subclass of dict, and dict order is deterministic since Python 3.6
elif isinstance(schema, dict):
return pa.struct(
{key: get_nested_type(schema[key]) for key in schema}
) # however don't sort on struct types since the order matters
elif isinstance(schema, (list, tuple)):
if len(schema) != 1:
raise ValueError("When defining list feature, you should just provide one example of the inner type")
value_type = get_nested_type(schema[0])
return pa.list_(value_type)
elif isinstance(schema, Sequence):
value_type = get_nested_type(schema.feature)
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict):
return pa.struct({f.name: pa.list_(f.type, schema.length) for f in value_type})
return pa.list_(value_type, schema.length)
# Other objects are callable which returns their data type (ClassLabel, Array2D, Translation, Arrow datatype creation methods)
return schema()
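# Illustrative example: a Sequence over a dict is flipped into a struct of lists,
# per the tfds-compatibility rule above (repr shown approximately):
# >>> get_nested_type(Sequence({"token": Value("string"), "id": Value("int32")}))
# StructType(struct<token: list<item: string>, id: list<item: int32>>)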
def encode_nested_example(schema, obj, level=0):
"""Encode a nested example.
This is used since some features (in particular ClassLabel) have some logic during encoding.
To avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be encoded.
If the first element needs to be encoded, then all the elements of the list will be encoded, otherwise they'll stay the same.
"""
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, dict):
if level == 0 and obj is None:
raise ValueError("Got None but expected a dictionary instead")
return (
{k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema}
if obj is not None
else None
)
elif isinstance(schema, (list, tuple)):
sub_schema = schema[0]
if obj is None:
return None
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, sub_schema):
break
if encode_nested_example(sub_schema, first_elmt, level=level + 1) != first_elmt:
return [encode_nested_example(sub_schema, o, level=level + 1) for o in obj]
return list(obj)
elif isinstance(schema, Sequence):
if obj is None:
return None
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict):
# dict of list to fill
list_dict = {}
if isinstance(obj, (list, tuple)):
# obj is a list of dict
for k in schema.feature:
list_dict[k] = [encode_nested_example(schema.feature[k], o.get(k), level=level + 1) for o in obj]
return list_dict
else:
# obj is a single dict
for k in schema.feature:
list_dict[k] = (
[encode_nested_example(schema.feature[k], o, level=level + 1) for o in obj[k]]
if k in obj
else None
)
return list_dict
# schema.feature is not a dict
if isinstance(obj, str): # don't interpret a string as a list
raise ValueError(f"Got a string but expected a list instead: '{obj}'")
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, schema.feature):
break
# be careful when comparing tensors here
if (
not isinstance(first_elmt, list)
or encode_nested_example(schema.feature, first_elmt, level=level + 1) != first_elmt
):
return [encode_nested_example(schema.feature, o, level=level + 1) for o in obj]
return list(obj)
# Object with special encoding:
# ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
elif isinstance(schema, (Audio, Image, ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
return schema.encode_example(obj) if obj is not None else None
# Other objects should be directly convertible to a native Arrow type (like Translation)
return obj
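# Illustrative example: ClassLabel strings inside a Sequence are encoded to ints:
# >>> encode_nested_example(Sequence(ClassLabel(names=["neg", "pos"])), ["pos", "neg"])
# [1, 0]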
def decode_nested_example(schema, obj, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
"""Decode a nested example.
This is used since some features (in particular Audio and Image) have some logic during decoding.
To avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be decoded.
If the first element needs to be decoded, then all the elements of the list will be decoded, otherwise they'll stay the same.
"""
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, dict):
return (
{k: decode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}
if obj is not None
else None
)
elif isinstance(schema, (list, tuple)):
sub_schema = schema[0]
if obj is None:
return None
else:
if len(obj) > 0:
for first_elmt in obj:
if _check_non_null_non_empty_recursive(first_elmt, sub_schema):
break
if decode_nested_example(sub_schema, first_elmt) != first_elmt:
return [decode_nested_example(sub_schema, o) for o in obj]
return list(obj)
elif isinstance(schema, Sequence):
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict):
return {k: decode_nested_example([schema.feature[k]], obj[k]) for k in schema.feature}
else:
return decode_nested_example([schema.feature], obj)
# Object with special decoding:
elif isinstance(schema, (Audio, Image)):
# we pass the token to read and decode files from private repositories in streaming mode
if obj is not None and schema.decode:
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
return obj
_FEATURE_TYPES: Dict[str, FeatureType] = {
Value.__name__: Value,
ClassLabel.__name__: ClassLabel,
Translation.__name__: Translation,
TranslationVariableLanguages.__name__: TranslationVariableLanguages,
Sequence.__name__: Sequence,
Array2D.__name__: Array2D,
Array3D.__name__: Array3D,
Array4D.__name__: Array4D,
Array5D.__name__: Array5D,
Audio.__name__: Audio,
Image.__name__: Image,
}
@experimental
def register_feature(
feature_cls: type,
feature_type: str,
):
"""
Register a Feature object using a name and class.
This function must be used on a Feature class.
"""
if feature_type in _FEATURE_TYPES:
logger.warning(
f"Overwriting feature type '{feature_type}' ({_FEATURE_TYPES[feature_type].__name__} -> {feature_cls.__name__})"
)
_FEATURE_TYPES[feature_type] = feature_cls
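# Illustrative sketch (MyFeature is a hypothetical, user-defined feature dataclass):
# >>> register_feature(MyFeature, "MyFeature")
# After registration, serialized dicts carrying {"_type": "MyFeature", ...} can be
# resolved back into MyFeature instances by generate_from_dict below.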
def generate_from_dict(obj: Any):
"""Regenerate the nested feature object from a deserialized dict.
We use the '_type' fields to get the dataclass name to load.
generate_from_dict is the recursive helper for Features.from_dict, and allows for a convenient constructor syntax
to define features from deserialized JSON dictionaries. This function is used in particular when deserializing
a :class:`DatasetInfo` that was dumped to a JSON object. This acts as an analogue to
:meth:`Features.from_arrow_schema` and handles the recursive field-by-field instantiation, but doesn't require any
mapping to/from pyarrow, except for the fact that it takes advantage of the mapping of pyarrow primitive dtypes
that :class:`Value` automatically performs.
"""
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(obj, list):
return [generate_from_dict(value) for value in obj]
# Otherwise we have a dict or a dataclass
if "_type" not in obj or isinstance(obj["_type"], dict):
return {key: generate_from_dict(value) for key, value in obj.items()}
obj = dict(obj)
_type = obj.pop("_type")
class_type = _FEATURE_TYPES.get(_type, None) or globals().get(_type, None)
if class_type is None:
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
if class_type == Sequence:
return Sequence(feature=generate_from_dict(obj["feature"]), length=obj.get("length", -1))
field_names = {f.name for f in fields(class_type)}
return class_type(**{k: v for k, v in obj.items() if k in field_names})
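# Illustrative example: rebuilding a feature from its serialized dict form:
# >>> generate_from_dict({"_type": "Value", "dtype": "int32", "id": None})
# Value(dtype='int32', id=None)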
def generate_from_arrow_type(pa_type: pa.DataType) -> FeatureType:
"""
generate_from_arrow_type accepts an arrow DataType and returns a datasets FeatureType to be used as the type for
a single field.
This is the high-level arrow->datasets type conversion and is inverted by get_nested_type().
This operates at the individual *field* level, whereas Features.from_arrow_schema() operates at the
full schema level and holds the methods that represent the bijection from Features<->pyarrow.Schema
"""
if isinstance(pa_type, pa.StructType):
return {field.name: generate_from_arrow_type(field.type) for field in pa_type}
elif isinstance(pa_type, pa.FixedSizeListType):
return Sequence(feature=generate_from_arrow_type(pa_type.value_type), length=pa_type.list_size)
elif isinstance(pa_type, pa.ListType):
feature = generate_from_arrow_type(pa_type.value_type)
if isinstance(feature, (dict, tuple, list)):
return [feature]
return Sequence(feature=feature)
elif isinstance(pa_type, _ArrayXDExtensionType):
array_feature = [None, None, Array2D, Array3D, Array4D, Array5D][pa_type.ndims]
return array_feature(shape=pa_type.shape, dtype=pa_type.value_type)
elif isinstance(pa_type, pa.DictionaryType):
raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
elif isinstance(pa_type, pa.DataType):
return Value(dtype=_arrow_to_datasets_dtype(pa_type))
else:
raise ValueError(f"Cannot convert {pa_type} to a Feature type.")
def numpy_to_pyarrow_listarray(arr: np.ndarray, type: pa.DataType = None) -> pa.ListArray:
"""Build a PyArrow ListArray from a multidimensional NumPy array"""
arr = np.array(arr)
values = pa.array(arr.flatten(), type=type)
for i in range(arr.ndim - 1):
n_offsets = reduce(mul, arr.shape[: arr.ndim - i - 1], 1)
step_offsets = arr.shape[arr.ndim - i - 1]
offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32())
values = pa.ListArray.from_arrays(offsets, values)
return values
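# Illustrative example: a (2, 3) array becomes a ListArray of two length-3 rows:
# >>> numpy_to_pyarrow_listarray(np.arange(6).reshape(2, 3)).to_pylist()
# [[0, 1, 2], [3, 4, 5]]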
def list_of_pa_arrays_to_pyarrow_listarray(l_arr: List[Optional[pa.Array]]) -> pa.ListArray:
null_mask = np.array([arr is None for arr in l_arr])
null_indices = np.arange(len(null_mask))[null_mask] - np.arange(np.sum(null_mask))
l_arr = [arr for arr in l_arr if arr is not None]
offsets = np.cumsum(
[0] + [len(arr) for arr in l_arr], dtype=object
) # convert to dtype object to allow None insertion
offsets = np.insert(offsets, null_indices, None)
offsets = pa.array(offsets, type=pa.int32())
values = pa.concat_arrays(l_arr)
return pa.ListArray.from_arrays(offsets, values)
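# Illustrative example: None entries become null list slots via null offsets:
# >>> list_of_pa_arrays_to_pyarrow_listarray([pa.array([1, 2]), None, pa.array([3])]).to_pylist()
# [[1, 2], None, [3]]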
def list_of_np_array_to_pyarrow_listarray(l_arr: List[np.ndarray], type: pa.DataType = None) -> pa.ListArray:
"""Build a PyArrow ListArray from a possibly nested list of NumPy arrays"""
if len(l_arr) > 0:
return list_of_pa_arrays_to_pyarrow_listarray(
[numpy_to_pyarrow_listarray(arr, type=type) if arr is not None else None for arr in l_arr]
)
else:
return pa.array([], type=type)
def contains_any_np_array(data: Any):
"""Return `True` if data is a NumPy ndarray or (recursively) if first non-null value in list is a NumPy ndarray.
Args:
data (Any): Data.
Returns:
bool
"""
if isinstance(data, np.ndarray):
return True
elif isinstance(data, list):
return contains_any_np_array(first_non_null_value(data)[1])
else:
return False
def any_np_array_to_pyarrow_listarray(data: Union[np.ndarray, List], type: pa.DataType = None) -> pa.ListArray:
"""Convert to PyArrow ListArray either a NumPy ndarray or (recursively) a list that may contain any NumPy ndarray.
Args:
data (Union[np.ndarray, List]): Data.
type (pa.DataType): Explicit PyArrow DataType passed to coerce the ListArray data type.
Returns:
pa.ListArray
"""
if isinstance(data, np.ndarray):
return numpy_to_pyarrow_listarray(data, type=type)
elif isinstance(data, list):
return list_of_pa_arrays_to_pyarrow_listarray([any_np_array_to_pyarrow_listarray(i, type=type) for i in data])
def to_pyarrow_listarray(data: Any, pa_type: _ArrayXDExtensionType) -> pa.Array:
"""Convert to PyArrow ListArray.
Args:
data (Any): Sequence, iterable, np.ndarray or pd.Series.
pa_type (_ArrayXDExtensionType): Any of the ArrayNDExtensionType.
Returns:
pyarrow.Array
"""
if contains_any_np_array(data):
return any_np_array_to_pyarrow_listarray(data, type=pa_type.value_type)
else:
return pa.array(data, pa_type.storage_dtype)
def _visit(feature: FeatureType, func: Callable[[FeatureType], Optional[FeatureType]]) -> FeatureType:
"""Visit a (possibly nested) feature.
Args:
feature (FeatureType): the feature type to be visited
func (Callable[[FeatureType], Optional[FeatureType]]): the function applied to each (possibly nested) feature; returning ``None`` keeps the feature unchanged
Returns:
visited feature (FeatureType)
"""
if isinstance(feature, dict):
out = func({k: _visit(f, func) for k, f in feature.items()})
elif isinstance(feature, (list, tuple)):
out = func([_visit(feature[0], func)])
elif isinstance(feature, Sequence):
out = func(Sequence(_visit(feature.feature, func), length=feature.length))
else:
out = func(feature)
return feature if out is None else out
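# Illustrative sketch (not part of the original module): since a callback that
# returns ``None`` leaves the structure untouched, `_visit` is typically used
# with a callback that mutates matching features in place, e.g. turning off
# decoding for every decodable feature such as Audio or Image:
#
#     def disable_decoding(feature):
#         if hasattr(feature, "decode"):
#             feature.decode = False
#
#     _visit(features, disable_decoding)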
def require_decoding(feature: FeatureType, ignore_decode_attribute: bool = False) -> bool:
"""Check if a (possibly nested) feature requires decoding.
Args:
feature (FeatureType): the feature type to be checked
ignore_decode_attribute (:obj:`bool`, default ``False``): Whether to ignore the current value
of the `decode` attribute of the decodable feature types.
Returns:
:obj:`bool`
"""
if isinstance(feature, dict):
return any(require_decoding(f) for f in feature.values())
elif isinstance(feature, (list, tuple)):
return require_decoding(feature[0])
elif isinstance(feature, Sequence):
return require_decoding(feature.feature)
else:
return hasattr(feature, "decode_example") and (feature.decode if not ignore_decode_attribute else True)
def require_storage_cast(feature: FeatureType) -> bool:
"""Check if a (possibly nested) feature requires storage casting.
Args:
feature (FeatureType): the feature type to be checked
Returns:
:obj:`bool`
"""
if isinstance(feature, dict):
return any(require_storage_cast(f) for f in feature.values())
elif isinstance(feature, (list, tuple)):
return require_storage_cast(feature[0])
elif isinstance(feature, Sequence):
return require_storage_cast(feature.feature)
else:
return hasattr(feature, "cast_storage")
def require_storage_embed(feature: FeatureType) -> bool:
"""Check if a (possibly nested) feature requires embedding data into storage.
Args:
feature (FeatureType): the feature type to be checked
Returns:
:obj:`bool`
"""
if isinstance(feature, dict):
return any(require_storage_cast(f) for f in feature.values())
elif isinstance(feature, (list, tuple)):
return require_storage_cast(feature[0])
elif isinstance(feature, Sequence):
return require_storage_cast(feature.feature)
else:
return hasattr(feature, "embed_storage")
def keep_features_dicts_synced(func):
"""
Wrapper to keep the secondary dictionary, which tracks whether keys are decodable, of the :class:`datasets.Features` object
in sync with the main dictionary.
"""
@wraps(func)
def wrapper(*args, **kwargs):
if args:
self: "Features" = args[0]
args = args[1:]
else:
self: "Features" = kwargs.pop("self")
out = func(self, *args, **kwargs)
assert hasattr(self, "_column_requires_decoding")
self._column_requires_decoding = {col: require_decoding(feature) for col, feature in self.items()}
return out
wrapper._decorator_name_ = "_keep_dicts_synced"
return wrapper
class Features(dict):
"""A special dictionary that defines the internal structure of a dataset.
Instantiated with a dictionary of type `dict[str, FieldType]`, where keys are the desired column names,
and values are the type of that column.
`FieldType` can be one of the following:
- a [`~datasets.Value`] feature specifies a single typed value, e.g. `int64` or `string`.
- a [`~datasets.ClassLabel`] feature specifies a field with a predefined set of classes which can have labels
associated to them and will be stored as integers in the dataset.
- a python `dict` which specifies that the field is a nested field containing a mapping of sub-fields to sub-fields
features. It's possible to have nested fields of nested fields in an arbitrary manner.
- a python `list` or a [`~datasets.Sequence`] specifies that the field contains a list of objects. The python
`list` or [`~datasets.Sequence`] should be provided with a single sub-feature as an example of the feature
type hosted in this list.
<Tip>
A [`~datasets.Sequence`] with an internal dictionary feature will be automatically converted into a dictionary of
lists. This behavior is implemented to have a compatibility layer with the TensorFlow Datasets library but may be
unwanted in some cases. If you don't want this behavior, you can use a python `list` instead of the
[`~datasets.Sequence`].
</Tip>
- a [`Array2D`], [`Array3D`], [`Array4D`] or [`Array5D`] feature for multidimensional arrays.
- an [`Audio`] feature to store the absolute path to an audio file or a dictionary with the relative path
to an audio file ("path" key) and its bytes content ("bytes" key). This feature extracts the audio data.
- an [`Image`] feature to store the absolute path to an image file, an `np.ndarray` object, a `PIL.Image.Image` object
or a dictionary with the relative path to an image file ("path" key) and its bytes content ("bytes" key). This feature extracts the image data.
- [`~datasets.Translation`] and [`~datasets.TranslationVariableLanguages`], the two features specific to Machine Translation.
"""
def __init__(*args, **kwargs):
# self not in the signature to allow passing self as a kwarg
if not args:
raise TypeError("descriptor '__init__' of 'Features' object needs an argument")
self, *args = args
super(Features, self).__init__(*args, **kwargs)
self._column_requires_decoding: Dict[str, bool] = {
col: require_decoding(feature) for col, feature in self.items()
}
__setitem__ = keep_features_dicts_synced(dict.__setitem__)
__delitem__ = keep_features_dicts_synced(dict.__delitem__)
update = keep_features_dicts_synced(dict.update)
setdefault = keep_features_dicts_synced(dict.setdefault)
pop = keep_features_dicts_synced(dict.pop)
popitem = keep_features_dicts_synced(dict.popitem)
clear = keep_features_dicts_synced(dict.clear)
def __reduce__(self):
return Features, (dict(self),)
@property
def type(self):
"""
Features field types.
Returns:
:obj:`pyarrow.DataType`
"""
return get_nested_type(self)
@property
def arrow_schema(self):
"""
Features schema.
Returns:
:obj:`pyarrow.Schema`
"""
hf_metadata = {"info": {"features": self.to_dict()}}
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
@classmethod
def from_arrow_schema(cls, pa_schema: pa.Schema) -> "Features":
"""
Construct [`Features`] from Arrow Schema.
It also checks the schema metadata for Hugging Face Datasets features.
Non-nullable fields are not supported and set to nullable.
Args:
pa_schema (`pyarrow.Schema`):
Arrow Schema.
Returns:
[`Features`]
"""
# try to load features from the arrow schema metadata
metadata_features = Features()
if pa_schema.metadata is not None and "huggingface".encode("utf-8") in pa_schema.metadata:
metadata = json.loads(pa_schema.metadata["huggingface".encode("utf-8")].decode())
if "info" in metadata and "features" in metadata["info"] and metadata["info"]["features"] is not None:
metadata_features = Features.from_dict(metadata["info"]["features"])
metadata_features_schema = metadata_features.arrow_schema
obj = {
field.name: (
metadata_features[field.name]
if field.name in metadata_features and metadata_features_schema.field(field.name) == field
else generate_from_arrow_type(field.type)
)
for field in pa_schema
}
return cls(**obj)
@classmethod
def from_dict(cls, dic) -> "Features":
"""
Construct [`Features`] from dict.
Regenerate the nested feature object from a deserialized dict.
We use the `_type` key to infer the dataclass name of the feature `FieldType`.
It allows for a convenient constructor syntax
to define features from deserialized JSON dictionaries. This function is used in particular when deserializing
a [`DatasetInfo`] that was dumped to a JSON object. This acts as an analogue to
[`Features.from_arrow_schema`] and handles the recursive field-by-field instantiation, but doesn't require
any mapping to/from pyarrow, except for the fact that it takes advantage of the mapping of pyarrow primitive
dtypes that [`Value`] automatically performs.
Args:
dic (`dict[str, Any]`):
Python dictionary.
Returns:
`Features`
Example::
>>> Features.from_dict({'_type': {'dtype': 'string', 'id': None, '_type': 'Value'}})
{'_type': Value(dtype='string', id=None)}
"""
obj = generate_from_dict(dic)
return cls(**obj)
def to_dict(self):
return asdict(self)
def _to_yaml_list(self) -> list:
# we compute the YAML list from the dict representation that is used for JSON dump
yaml_data = self.to_dict()
def simplify(feature: dict) -> dict:
if not isinstance(feature, dict):
raise TypeError(f"Expected a dict but got a {type(feature)}: {feature}")
#
# sequence: -> sequence: int32
# dtype: int32 ->
#
if isinstance(feature.get("sequence"), dict) and list(feature["sequence"]) == ["dtype"]:
feature["sequence"] = feature["sequence"]["dtype"]
#
# sequence: -> sequence:
# struct: -> - name: foo
# - name: foo -> dtype: int32
# dtype: int32 ->
#
if isinstance(feature.get("sequence"), dict) and list(feature["sequence"]) == ["struct"]:
feature["sequence"] = feature["sequence"]["struct"]
#
# list: -> list: int32
# dtype: int32 ->
#
if isinstance(feature.get("list"), dict) and list(feature["list"]) == ["dtype"]:
feature["list"] = feature["list"]["dtype"]
#
# list: -> list:
# struct: -> - name: foo
# - name: foo -> dtype: int32
# dtype: int32 ->
#
if isinstance(feature.get("list"), dict) and list(feature["list"]) == ["struct"]:
feature["list"] = feature["list"]["struct"]
#
# class_label: -> class_label:
# names: -> names:
# - negative -> '0': negative
# - positive -> '1': positive
#
if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), list):
# server-side requirement: keys must be strings
feature["class_label"]["names"] = {
str(label_id): label_name for label_id, label_name in enumerate(feature["class_label"]["names"])
}
return feature
def to_yaml_inner(obj: Union[dict, list]) -> dict:
if isinstance(obj, dict):
_type = obj.pop("_type", None)
if _type == "Sequence":
_feature = obj.pop("feature")
return simplify({"sequence": to_yaml_inner(_feature), **obj})
elif _type == "Value":
return obj
elif _type and not obj:
return {"dtype": camelcase_to_snakecase(_type)}
elif _type:
return {"dtype": simplify({camelcase_to_snakecase(_type): obj})}
else:
return {"struct": [{"name": name, **to_yaml_inner(_feature)} for name, _feature in obj.items()]}
elif isinstance(obj, list):
return simplify({"list": simplify(to_yaml_inner(obj[0]))})
elif isinstance(obj, tuple):
return to_yaml_inner(list(obj))
else:
raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
def to_yaml_types(obj: dict) -> dict:
if isinstance(obj, dict):
return {k: to_yaml_types(v) for k, v in obj.items()}
elif isinstance(obj, list):
return [to_yaml_types(v) for v in obj]
elif isinstance(obj, tuple):
return to_yaml_types(list(obj))
else:
return obj
return to_yaml_types(to_yaml_inner(yaml_data)["struct"])
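    # Illustrative sketch (not part of the original module): the YAML list is the
    # compact form used in dataset cards, assuming `asdict` drops default values:
    #
    #     Features({"tokens": Sequence(Value("string"))})._to_yaml_list()
    #     # -> [{"name": "tokens", "sequence": "string"}]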
@classmethod
def _from_yaml_list(cls, yaml_data: list) -> "Features":
yaml_data = copy.deepcopy(yaml_data)
# we convert the list obtained from YAML data into the dict representation that is used for JSON dump
def unsimplify(feature: dict) -> dict:
if not isinstance(feature, dict):
raise TypeError(f"Expected a dict but got a {type(feature)}: {feature}")
#
# sequence: int32 -> sequence:
# -> dtype: int32
#
if isinstance(feature.get("sequence"), str):
feature["sequence"] = {"dtype": feature["sequence"]}
#
# list: int32 -> list:
# -> dtype: int32
#
if isinstance(feature.get("list"), str):
feature["list"] = {"dtype": feature["list"]}
#
# class_label: -> class_label:
# names: -> names:
# '0': negative -> - negative
# '1': positive -> - positive
#
if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict):
label_ids = sorted(feature["class_label"]["names"], key=int)
if label_ids and [int(label_id) for label_id in label_ids] != list(range(int(label_ids[-1]) + 1)):
raise ValueError(
f"ClassLabel expected a value for all label ids [0:{int(label_ids[-1]) + 1}] but some ids are missing."
)
feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids]
return feature
def from_yaml_inner(obj: Union[dict, list]) -> Union[dict, list]:
if isinstance(obj, dict):
if not obj:
return {}
_type = next(iter(obj))
if _type == "sequence":
_feature = unsimplify(obj).pop(_type)
return {"feature": from_yaml_inner(_feature), **obj, "_type": "Sequence"}
if _type == "list":
return [from_yaml_inner(unsimplify(obj)[_type])]
if _type == "struct":
return from_yaml_inner(obj["struct"])
elif _type == "dtype":
if isinstance(obj["dtype"], str):
# e.g. int32, float64, string, audio, image
try:
Value(obj["dtype"])
return {**obj, "_type": "Value"}
except ValueError:
# e.g. Audio, Image, ArrayXD
return {"_type": snakecase_to_camelcase(obj["dtype"])}
else:
return from_yaml_inner(obj["dtype"])
else:
return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
elif isinstance(obj, list):
names = [_feature.pop("name") for _feature in obj]
return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
else:
raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
return cls.from_dict(from_yaml_inner(yaml_data))
def encode_example(self, example):
"""
Encode example into a format for Arrow.
Args:
example (`dict[str, Any]`):
Data in a Dataset row.
Returns:
`dict[str, Any]`
"""
example = cast_to_python_objects(example)
return encode_nested_example(self, example)
def encode_column(self, column, column_name: str):
"""
Encode column into a format for Arrow.
Args:
column (`list[Any]`):
Data in a Dataset column.
column_name (`str`):
Dataset column name.
Returns:
`list[Any]`
"""
column = cast_to_python_objects(column)
return [encode_nested_example(self[column_name], obj, level=1) for obj in column]
def encode_batch(self, batch):
"""
Encode batch into a format for Arrow.
Args:
batch (`dict[str, list[Any]]`):
Data in a Dataset batch.
Returns:
`dict[str, list[Any]]`
"""
encoded_batch = {}
if set(batch) != set(self):
raise ValueError(f"Column mismatch between batch {set(batch)} and features {set(self)}")
for key, column in batch.items():
column = cast_to_python_objects(column)
encoded_batch[key] = [encode_nested_example(self[key], obj, level=1) for obj in column]
return encoded_batch
def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
"""Decode example with custom feature decoding.
Args:
example (`dict[str, Any]`):
Dataset row data.
token_per_repo_id (`dict`, *optional*):
To access and decode audio or image files from private repositories on the Hub, you can pass
a dictionary `repo_id (str) -> token (bool or str)`.
Returns:
`dict[str, Any]`
"""
return {
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
if self._column_requires_decoding[column_name]
else value
for column_name, (feature, value) in zip_dict(
{key: value for key, value in self.items() if key in example}, example
)
}
def decode_column(self, column: list, column_name: str):
"""Decode column with custom feature decoding.
Args:
column (`list[Any]`):
Dataset column data.
column_name (`str`):
Dataset column name.
Returns:
`list[Any]`
"""
return (
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
if self._column_requires_decoding[column_name]
else column
)
def decode_batch(self, batch: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
"""Decode batch with custom feature decoding.
Args:
batch (`dict[str, list[Any]]`):
Dataset batch data.
token_per_repo_id (`dict`, *optional*):
To access and decode audio or image files from private repositories on the Hub, you can pass
a dictionary repo_id (str) -> token (bool or str)
Returns:
`dict[str, list[Any]]`
"""
decoded_batch = {}
for column_name, column in batch.items():
decoded_batch[column_name] = (
[
decode_nested_example(self[column_name], value, token_per_repo_id=token_per_repo_id)
if value is not None
else None
for value in column
]
if self._column_requires_decoding[column_name]
else column
)
return decoded_batch
def copy(self) -> "Features":
"""
Make a deep copy of [`Features`].
Returns:
[`Features`]
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train")
>>> copy_of_features = ds.features.copy()
>>> copy_of_features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
```
"""
return copy.deepcopy(self)
def reorder_fields_as(self, other: "Features") -> "Features":
"""
Reorder Features fields to match the field order of other [`Features`].
The order of the fields is important since it matters for the underlying arrow data.
Re-ordering the fields allows making the underlying arrow data types match.
Args:
other ([`Features`]):
The other [`Features`] to align with.
Returns:
[`Features`]
Example::
>>> from datasets import Features, Sequence, Value
>>> # let's say we have two features with a different order of nested fields (for a and b for example)
>>> f1 = Features({"root": Sequence({"a": Value("string"), "b": Value("string")})})
>>> f2 = Features({"root": {"b": Sequence(Value("string")), "a": Sequence(Value("string"))}})
>>> assert f1.type != f2.type
>>> # re-ordering keeps the base structure (here Sequence is defined at the root level), but makes the field order match
>>> f1.reorder_fields_as(f2)
{'root': Sequence(feature={'b': Value(dtype='string', id=None), 'a': Value(dtype='string', id=None)}, length=-1, id=None)}
>>> assert f1.reorder_fields_as(f2).type == f2.type
"""
def recursive_reorder(source, target, stack=""):
stack_position = " at " + stack[1:] if stack else ""
if isinstance(target, Sequence):
target = target.feature
if isinstance(target, dict):
target = {k: [v] for k, v in target.items()}
else:
target = [target]
if isinstance(source, Sequence):
source, id_, length = source.feature, source.id, source.length
if isinstance(source, dict):
source = {k: [v] for k, v in source.items()}
reordered = recursive_reorder(source, target, stack)
return Sequence({k: v[0] for k, v in reordered.items()}, id=id_, length=length)
else:
source = [source]
reordered = recursive_reorder(source, target, stack)
return Sequence(reordered[0], id=id_, length=length)
elif isinstance(source, dict):
if not isinstance(target, dict):
raise ValueError(f"Type mismatch: between {source} and {target}" + stack_position)
if sorted(source) != sorted(target):
message = (
f"Keys mismatch: between {source} (source) and {target} (target).\n"
f"{source.keys()-target.keys()} are missing from target "
f"and {target.keys()-source.keys()} are missing from source" + stack_position
)
raise ValueError(message)
return {key: recursive_reorder(source[key], target[key], stack + f".{key}") for key in target}
elif isinstance(source, list):
if not isinstance(target, list):
raise ValueError(f"Type mismatch: between {source} and {target}" + stack_position)
if len(source) != len(target):
raise ValueError(f"Length mismatch: between {source} and {target}" + stack_position)
return [recursive_reorder(source[i], target[i], stack + ".<list>") for i in range(len(target))]
else:
return source
return Features(recursive_reorder(self, other))
def flatten(self, max_depth=16) -> "Features":
"""Flatten the features. Every dictionary column is removed and is replaced by
all the subfields it contains. The new fields are named by concatenating the
name of the original column and the subfield name like this: `<original>.<subfield>`.
If a column contains nested dictionaries, then all the lower-level subfield names are
also concatenated to form new columns: `<original>.<subfield>.<subsubfield>`, etc.
Returns:
[`Features`]:
The flattened features.
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("squad", split="train")
>>> ds.features.flatten()
{'answers.answer_start': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
'answers.text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'context': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}
```
"""
for depth in range(1, max_depth):
no_change = True
flattened = self.copy()
for column_name, subfeature in self.items():
if isinstance(subfeature, dict):
no_change = False
flattened.update({f"{column_name}.{k}": v for k, v in subfeature.items()})
del flattened[column_name]
elif isinstance(subfeature, Sequence) and isinstance(subfeature.feature, dict):
no_change = False
flattened.update(
{
f"{column_name}.{k}": Sequence(v) if not isinstance(v, dict) else [v]
for k, v in subfeature.feature.items()
}
)
del flattened[column_name]
elif hasattr(subfeature, "flatten") and subfeature.flatten() != subfeature:
no_change = False
flattened.update({f"{column_name}.{k}": v for k, v in subfeature.flatten().items()})
del flattened[column_name]
self = flattened
if no_change:
break
return self
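# Illustrative sketch (not part of the original module): typical end-to-end
# usage of `Features`, assuming the feature classes above. ClassLabel string
# labels are encoded to integer ids, and dict columns flatten into dotted names:
#
#     features = Features({
#         "text": Value("string"),
#         "meta": {"score": Value("int32"), "label": ClassLabel(names=["neg", "pos"])},
#     })
#     encoded = features.encode_example({"text": "great", "meta": {"score": 0, "label": "pos"}})
#     assert encoded["meta"]["label"] == 1
#     assert sorted(features.flatten()) == ["meta.label", "meta.score", "text"]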
def _align_features(features_list: List[Features]) -> List[Features]:
"""Align dictionaries of features so that the keys that are found in multiple dictionaries share the same feature."""
name2feature = {}
for features in features_list:
for k, v in features.items():
if k in name2feature and isinstance(v, dict):
# Recursively align features.
name2feature[k] = _align_features([name2feature[k], v])[0]
elif k not in name2feature or (isinstance(name2feature[k], Value) and name2feature[k].dtype == "null"):
name2feature[k] = v
return [Features({k: name2feature[k] for k in features.keys()}) for features in features_list]
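# Illustrative sketch (not part of the original module): `Value("null")`
# placeholders are replaced by the concrete type found in another dict:
#
#     f1 = Features({"a": Value("null")})
#     f2 = Features({"a": Value("int32")})
#     assert _align_features([f1, f2]) == [Features({"a": Value("int32")})] * 2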
def _check_if_features_can_be_aligned(features_list: List[Features]):
"""Check if the dictionaries of features can be aligned.
Two dictionaries of features can be aligned if the keys they share have the same type, or if some of them are of type `Value("null")`.
"""
name2feature = {}
for features in features_list:
for k, v in features.items():
if k not in name2feature or (isinstance(name2feature[k], Value) and name2feature[k].dtype == "null"):
name2feature[k] = v
for features in features_list:
for k, v in features.items():
if isinstance(v, dict) and isinstance(name2feature[k], dict):
# Deep checks for structure.
_check_if_features_can_be_aligned([name2feature[k], v])
elif not (isinstance(v, Value) and v.dtype == "null") and name2feature[k] != v:
raise ValueError(
f'The features can\'t be aligned because the key {k} of features {features} has unexpected type - {v} (expected either {name2feature[k]} or Value("null")).'
)
|
datasets/src/datasets/features/features.py/0
|
{
"file_path": "datasets/src/datasets/features/features.py",
"repo_id": "datasets",
"token_count": 40149
}
| 78
|
import itertools
from dataclasses import dataclass
from typing import Optional
import pyarrow as pa
import datasets
from datasets.table import table_cast
logger = datasets.utils.logging.get_logger(__name__)
@dataclass
class ArrowConfig(datasets.BuilderConfig):
"""BuilderConfig for Arrow."""
features: Optional[datasets.Features] = None
class Arrow(datasets.ArrowBasedBuilder):
BUILDER_CONFIG_CLASS = ArrowConfig
def _info(self):
return datasets.DatasetInfo(features=self.config.features)
def _split_generators(self, dl_manager):
"""We handle string, list and dicts in datafiles"""
if not self.config.data_files:
raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
data_files = dl_manager.download_and_extract(self.config.data_files)
if isinstance(data_files, (str, list, tuple)):
files = data_files
if isinstance(files, str):
files = [files]
# Use `dl_manager.iter_files` to skip hidden files in an extracted archive
files = [dl_manager.iter_files(file) for file in files]
return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"files": files})]
splits = []
for split_name, files in data_files.items():
if isinstance(files, str):
files = [files]
# Use `dl_manager.iter_files` to skip hidden files in an extracted archive
files = [dl_manager.iter_files(file) for file in files]
# Infer features if they are stored in the arrow schema
if self.info.features is None:
for file in itertools.chain.from_iterable(files):
with open(file, "rb") as f:
self.info.features = datasets.Features.from_arrow_schema(pa.ipc.open_stream(f).schema)
break
splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files}))
return splits
def _cast_table(self, pa_table: pa.Table) -> pa.Table:
if self.info.features is not None:
# more expensive cast to support nested features with keys in a different order
# allows str <-> int/float or str to Audio for example
pa_table = table_cast(pa_table, self.info.features.arrow_schema)
return pa_table
def _generate_tables(self, files):
for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
with open(file, "rb") as f:
try:
for batch_idx, record_batch in enumerate(pa.ipc.open_stream(f)):
pa_table = pa.Table.from_batches([record_batch])
# Uncomment for debugging (will print the Arrow table size and elements)
# logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
# logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
except ValueError as e:
logger.error(f"Failed to read file '{file}' with error {type(e)}: {e}")
raise
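# Illustrative sketch (not part of the original module): this packaged builder
# is what `load_dataset` dispatches to for Arrow IPC stream files
# (the file path below is a placeholder):
#
#     from datasets import load_dataset
#     ds = load_dataset("arrow", data_files={"train": "path/to/data.arrow"})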
|
datasets/src/datasets/packaged_modules/arrow/arrow.py/0
|
{
"file_path": "datasets/src/datasets/packaged_modules/arrow/arrow.py",
"repo_id": "datasets",
"token_count": 1473
}
| 79
|
import itertools
import warnings
from dataclasses import dataclass
from typing import Optional
import pandas as pd
import pyarrow as pa
import datasets
from datasets.table import table_cast
@dataclass
class PandasConfig(datasets.BuilderConfig):
"""BuilderConfig for Pandas."""
features: Optional[datasets.Features] = None
class Pandas(datasets.ArrowBasedBuilder):
BUILDER_CONFIG_CLASS = PandasConfig
def _info(self):
warnings.warn(
"The Pandas builder is deprecated and will be removed in the next major version of datasets.",
FutureWarning,
)
return datasets.DatasetInfo(features=self.config.features)
def _split_generators(self, dl_manager):
"""We handle string, list and dicts in datafiles"""
if not self.config.data_files:
raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
data_files = dl_manager.download_and_extract(self.config.data_files)
if isinstance(data_files, (str, list, tuple)):
files = data_files
if isinstance(files, str):
files = [files]
# Use `dl_manager.iter_files` to skip hidden files in an extracted archive
files = [dl_manager.iter_files(file) for file in files]
return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"files": files})]
splits = []
for split_name, files in data_files.items():
if isinstance(files, str):
files = [files]
# Use `dl_manager.iter_files` to skip hidden files in an extracted archive
files = [dl_manager.iter_files(file) for file in files]
splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files}))
return splits
def _cast_table(self, pa_table: pa.Table) -> pa.Table:
if self.config.features is not None:
# more expensive cast to support nested features with keys in a different order
# allows str <-> int/float or str to Audio for example
pa_table = table_cast(pa_table, self.config.features.arrow_schema)
return pa_table
def _generate_tables(self, files):
for i, file in enumerate(itertools.chain.from_iterable(files)):
with open(file, "rb") as f:
pa_table = pa.Table.from_pandas(pd.read_pickle(f))
yield i, self._cast_table(pa_table)
|
datasets/src/datasets/packaged_modules/pandas/pandas.py/0
|
{
"file_path": "datasets/src/datasets/packaged_modules/pandas/pandas.py",
"repo_id": "datasets",
"token_count": 1011
}
| 80
|
import importlib
import inspect
from functools import wraps
from typing import TYPE_CHECKING, Optional
from .download.download_config import DownloadConfig
from .download.streaming_download_manager import (
xbasename,
xdirname,
xet_parse,
xexists,
xgetsize,
xglob,
xgzip_open,
xisdir,
xisfile,
xjoin,
xlistdir,
xnumpy_load,
xopen,
xpandas_read_csv,
xpandas_read_excel,
xPath,
xpyarrow_parquet_read_table,
xrelpath,
xsio_loadmat,
xsplit,
xsplitext,
xwalk,
xxml_dom_minidom_parse,
)
from .utils.logging import get_logger
from .utils.patching import patch_submodule
from .utils.py_utils import get_imports, lock_importable_file
logger = get_logger(__name__)
if TYPE_CHECKING:
from .builder import DatasetBuilder
def extend_module_for_streaming(module_path, download_config: Optional[DownloadConfig] = None):
"""Extend the module to support streaming.
We patch some functions in the module to use `fsspec` to support data streaming:
- We use `fsspec.open` to open and read remote files. We patch the module function:
- `open`
- We use the "::" hop separator to join paths and navigate remote compressed/archive files. We patch the module
functions:
- `os.path.join`
- `pathlib.Path.joinpath` and `pathlib.Path.__truediv__` (called when using the "/" operator)
The patched functions are replaced with custom functions defined to work with the
:class:`~download.streaming_download_manager.StreamingDownloadManager`.
Args:
module_path: Path to the module to be extended.
download_config: mainly uses use_auth_token or storage_options to support different platforms and auth types.
"""
module = importlib.import_module(module_path)
# TODO(QL): always update the module to add subsequent new authentication without removing old ones
if hasattr(module, "_patched_for_streaming") and module._patched_for_streaming:
if isinstance(module._patched_for_streaming, DownloadConfig):
module._patched_for_streaming.token = download_config.token
module._patched_for_streaming.storage_options = download_config.storage_options
return
def wrap_auth(function):
@wraps(function)
def wrapper(*args, **kwargs):
return function(*args, download_config=download_config, **kwargs)
wrapper._decorator_name_ = "wrap_auth"
return wrapper
# open files in a streaming fashion
patch_submodule(module, "open", wrap_auth(xopen)).start()
patch_submodule(module, "os.listdir", wrap_auth(xlistdir)).start()
patch_submodule(module, "os.walk", wrap_auth(xwalk)).start()
patch_submodule(module, "glob.glob", wrap_auth(xglob)).start()
# allow to navigate in remote zip files
patch_submodule(module, "os.path.join", xjoin).start()
patch_submodule(module, "os.path.dirname", xdirname).start()
patch_submodule(module, "os.path.basename", xbasename).start()
patch_submodule(module, "os.path.relpath", xrelpath).start()
patch_submodule(module, "os.path.split", xsplit).start()
patch_submodule(module, "os.path.splitext", xsplitext).start()
# allow checks on paths
patch_submodule(module, "os.path.exists", wrap_auth(xexists)).start()
patch_submodule(module, "os.path.isdir", wrap_auth(xisdir)).start()
patch_submodule(module, "os.path.isfile", wrap_auth(xisfile)).start()
patch_submodule(module, "os.path.getsize", wrap_auth(xgetsize)).start()
patch_submodule(module, "pathlib.Path", xPath).start()
# file readers
patch_submodule(module, "gzip.open", wrap_auth(xgzip_open)).start()
patch_submodule(module, "numpy.load", wrap_auth(xnumpy_load)).start()
patch_submodule(module, "pandas.read_csv", wrap_auth(xpandas_read_csv), attrs=["__version__"]).start()
patch_submodule(module, "pandas.read_excel", wrap_auth(xpandas_read_excel), attrs=["__version__"]).start()
patch_submodule(module, "scipy.io.loadmat", wrap_auth(xsio_loadmat), attrs=["__version__"]).start()
patch_submodule(module, "xml.etree.ElementTree.parse", wrap_auth(xet_parse)).start()
patch_submodule(module, "xml.dom.minidom.parse", wrap_auth(xxml_dom_minidom_parse)).start()
# pyarrow: do not patch pyarrow attribute in packaged modules
if not module.__name__.startswith("datasets.packaged_modules."):
patch_submodule(module, "pyarrow.parquet.read_table", wrap_auth(xpyarrow_parquet_read_table)).start()
module._patched_for_streaming = download_config
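# Illustrative sketch (not part of the original module): after patching, plain
# filesystem calls inside the extended module transparently accept remote URLs.
# `my_dataset_script` below is a hypothetical module name:
#
#     extend_module_for_streaming("my_dataset_script", download_config=DownloadConfig())
#     # inside my_dataset_script, `open("https://host/data.txt")` now resolves to
#     # `xopen`, which streams the remote file via fsspec instead of reading from
#     # the local filesystem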
def extend_dataset_builder_for_streaming(builder: "DatasetBuilder"):
"""Extend the dataset builder module and the modules imported by it to support streaming.
Args:
builder (:class:`DatasetBuilder`): Dataset builder instance.
"""
# this extends the open and os.path.join functions for data streaming
download_config = DownloadConfig(storage_options=builder.storage_options, token=builder.token)
extend_module_for_streaming(builder.__module__, download_config=download_config)
# if needed, we also have to extend additional internal imports (like wmt14 -> wmt_utils)
if not builder.__module__.startswith("datasets."): # check that it's not a packaged builder like csv
importable_file = inspect.getfile(builder.__class__)
with lock_importable_file(importable_file):
for imports in get_imports(importable_file):
if imports[0] == "internal":
internal_import_name = imports[1]
internal_module_name = ".".join(builder.__module__.split(".")[:-1] + [internal_import_name])
extend_module_for_streaming(internal_module_name, download_config=download_config)
# builders can inherit from other builders that might use streaming functionality
# (for example, ImageFolder and AudioFolder inherit from FolderBuilder which implements examples generation)
# but these parent builders are not patched automatically as they are not instantiated, so we patch them here
from .builder import DatasetBuilder
parent_builder_modules = [
cls.__module__
for cls in type(builder).__mro__[1:] # make sure it's not the same module we've already patched
if issubclass(cls, DatasetBuilder) and cls.__module__ != DatasetBuilder.__module__
] # check it's not a standard builder from datasets.builder
for module in parent_builder_modules:
extend_module_for_streaming(module, download_config=download_config)
|
datasets/src/datasets/streaming.py/0
|
{
"file_path": "datasets/src/datasets/streaming.py",
"repo_id": "datasets",
"token_count": 2364
}
| 81
|
import enum
import inspect
import warnings
from functools import wraps
from typing import Callable, Optional
from .logging import get_logger
_emitted_deprecation_warnings = set()
logger = get_logger(__name__)
def deprecated(help_message: Optional[str] = None):
"""Decorator to mark a class or a function as deprecated.
Args:
help_message (:obj:`str`, optional): An optional message to guide the user on how to
switch to non-deprecated usage of the library.
"""
def decorator(deprecated_class_or_function: Callable):
global _emitted_deprecation_warnings
if inspect.isclass(deprecated_class_or_function):
deprecated_function = deprecated_class_or_function.__init__
name = deprecated_class_or_function.__name__
else:
deprecated_function = deprecated_class_or_function
name = deprecated_function.__name__
# Support deprecating the __init__ method: use the class name instead
name = name if name != "__init__" else deprecated_function.__qualname__.split(".")[-2]
warning_msg = (
    f"{name} is deprecated and will be removed in the next major version of datasets."
    + (f" {help_message}" if help_message else "")
)
@wraps(deprecated_function)
def wrapper(*args, **kwargs):
func_hash = hash(deprecated_function)
if func_hash not in _emitted_deprecation_warnings:
warnings.warn(warning_msg, category=FutureWarning, stacklevel=2)
_emitted_deprecation_warnings.add(func_hash)
return deprecated_function(*args, **kwargs)
wrapper._decorator_name_ = "deprecated"
if inspect.isclass(deprecated_class_or_function):
deprecated_class_or_function.__init__ = wrapper
return deprecated_class_or_function
else:
return wrapper
return decorator
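# Illustrative sketch (not part of the original module): marking a function as
# deprecated; the FutureWarning is emitted only once per function thanks to
# `_emitted_deprecation_warnings` (`new_function` is a placeholder name):
#
#     @deprecated(help_message="Use `new_function` instead.")
#     def old_function():
#         ...
#
#     old_function()  # warns: "old_function is deprecated and will be removed ..."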
class OnAccess(enum.EnumMeta):
"""
Enum metaclass that calls a user-specified function whenever a member is accessed.
"""
def __getattribute__(cls, name):
obj = super().__getattribute__(name)
if isinstance(obj, enum.Enum) and obj._on_access:
obj._on_access()
return obj
def __getitem__(cls, name):
member = super().__getitem__(name)
if member._on_access:
member._on_access()
return member
def __call__(cls, value, names=None, *, module=None, qualname=None, type=None, start=1):
obj = super().__call__(value, names, module=module, qualname=qualname, type=type, start=start)
if isinstance(obj, enum.Enum) and obj._on_access:
obj._on_access()
return obj
class DeprecatedEnum(enum.Enum, metaclass=OnAccess):
"""
Enum class that calls `deprecate` method whenever a member is accessed.
"""
def __new__(cls, value):
member = object.__new__(cls)
member._value_ = value
member._on_access = member.deprecate
return member
@property
def help_message(self):
return ""
def deprecate(self):
help_message = f" {self.help_message}" if self.help_message else ""
warnings.warn(
f"'{self.__objclass__.__name__}' is deprecated and will be removed in the next major version of datasets."
+ help_message,
FutureWarning,
stacklevel=3,
)
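# Illustrative sketch (not part of the original module): members of a
# `DeprecatedEnum` subclass emit a FutureWarning naming the enum class when
# accessed:
#
#     class OldOptions(DeprecatedEnum):
#         FOO = "foo"
#
#     OldOptions.FOO  # warns: "'OldOptions' is deprecated and will be removed ..."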
|
datasets/src/datasets/utils/deprecation_utils.py/0
|
{
"file_path": "datasets/src/datasets/utils/deprecation_utils.py",
"repo_id": "datasets",
"token_count": 1426
}
| 82
|
{
"code": "Programming language (C++, Java, Javascript, Python, etc.)",
"aa": "Afar",
"aaa": "Ghotuo",
"aab": "Alumu-Tesu",
"aac": "Ari",
"aad": "Amal",
"aae": "Arbëreshë Albanian",
"aaf": "Aranadan",
"aag": "Ambrak",
"aah": "Abu' Arapesh",
"aai": "Arifama-Miniafia",
"aak": "Ankave",
"aal": "Afade",
"aan": "Anambé",
"aao": "Algerian Saharan Arabic",
"aap": "Pará Arára",
"aaq": "Eastern Abnaki",
"aas": "Aasáx",
"aat": "Arvanitika Albanian",
"aau": "Abau",
"aav": "Austro-Asiatic languages",
"aaw": "Solong",
"aax": "Mandobo Atas",
"aaz": "Amarasi",
"ab": "Abkhazian",
"aba": "Abé",
"abb": "Bankon",
"abc": "Ambala Ayta",
"abd": "Manide",
"abe": "Western Abnaki",
"abf": "Abai Sungai",
"abg": "Abaga",
"abh": "Tajiki Arabic",
"abi": "Abidji",
"abj": "Aka-Bea",
"abl": "Lampung Nyo",
"abm": "Abanyom",
"abn": "Abua",
"abo": "Abon",
"abp": "Abellen Ayta",
"abq": "Abaza",
"abr": "Abron",
"abs": "Ambonese Malay",
"abt": "Ambulas",
"abu": "Abure",
"abv": "Baharna Arabic",
"abw": "Pal",
"abx": "Inabaknon",
"aby": "Aneme Wake",
"abz": "Abui",
"aca": "Achagua",
"acb": "Áncá",
"acd": "Gikyode",
"ace": "Achinese",
"acf": "Saint Lucian Creole French",
"ach": "Acoli",
"aci": "Aka-Cari",
"ack": "Aka-Kora",
"acl": "Akar-Bale",
"acm": "Mesopotamian Arabic",
"acn": "Achang",
"acp": "Eastern Acipa",
"acq": "Ta'izzi-Adeni Arabic",
"acr": "Achi",
"acs": "Acroá",
"act": "Achterhoeks",
"acu": "Achuar-Shiwiar",
"acv": "Achumawi",
"acw": "Hijazi Arabic",
"acx": "Omani Arabic",
"acy": "Cypriot Arabic",
"acz": "Acheron",
"ada": "Adangme",
"adb": "Atauran",
"add": "Lidzonka; Dzodinka",
"ade": "Adele",
"adf": "Dhofari Arabic",
"adg": "Andegerebinha",
"adh": "Adhola",
"adi": "Adi",
"adj": "Adioukrou",
"adl": "Galo",
"adn": "Adang",
"ado": "Abu",
"adq": "Adangbe",
"adr": "Adonara",
"ads": "Adamorobe Sign Language",
"adt": "Adnyamathanha",
"adu": "Aduge",
"adw": "Amundava",
"adx": "Amdo Tibetan",
"ady": "Adyghe; Adygei",
"adz": "Adzera",
"ae": "Avestan",
"aea": "Areba",
"aeb": "Tunisian Arabic",
"aec": "Saidi Arabic",
"aed": "Argentine Sign Language",
"aee": "Northeast Pashai; Northeast Pashayi",
"aek": "Haeke",
"ael": "Ambele",
"aem": "Arem",
"aen": "Armenian Sign Language",
"aeq": "Aer",
"aer": "Eastern Arrernte",
"aes": "Alsea",
"aeu": "Akeu",
"aew": "Ambakich",
"aey": "Amele",
"aez": "Aeka",
"af": "Afrikaans",
"afa": "Afro-Asiatic languages",
"afb": "Gulf Arabic",
"afd": "Andai",
"afe": "Putukwam",
"afg": "Afghan Sign Language",
"afh": "Afrihili",
"afi": "Akrukay; Chini",
"afk": "Nanubae",
"afn": "Defaka",
"afo": "Eloyi",
"afp": "Tapei",
"afs": "Afro-Seminole Creole",
"aft": "Afitti",
"afu": "Awutu",
"afz": "Obokuitai",
"aga": "Aguano",
"agb": "Legbo",
"agc": "Agatu",
"agd": "Agarabi",
"age": "Angal",
"agf": "Arguni",
"agg": "Angor",
"agh": "Ngelima",
"agi": "Agariya",
"agj": "Argobba",
"agk": "Isarog Agta",
"agl": "Fembe",
"agm": "Angaataha",
"agn": "Agutaynen",
"ago": "Tainae",
"agq": "Aghem",
"agr": "Aguaruna",
"ags": "Esimbi",
"agt": "Central Cagayan Agta",
"agu": "Aguacateco",
"agv": "Remontado Dumagat",
"agw": "Kahua",
"agx": "Aghul",
"agy": "Southern Alta",
"agz": "Mt. Iriga Agta",
"aha": "Ahanta",
"ahb": "Axamb",
"ahg": "Qimant",
"ahh": "Aghu",
"ahi": "Tiagbamrin Aizi",
"ahk": "Akha",
"ahl": "Igo",
"ahm": "Mobumrin Aizi",
"ahn": "Àhàn",
"aho": "Ahom",
"ahp": "Aproumu Aizi",
"ahr": "Ahirani",
"ahs": "Ashe",
"aht": "Ahtena",
"aia": "Arosi",
"aib": "Ainu (China)",
"aic": "Ainbai",
"aid": "Alngith",
"aie": "Amara",
"aif": "Agi",
"aig": "Antigua and Barbuda Creole English",
"aih": "Ai-Cham",
"aii": "Assyrian Neo-Aramaic",
"aij": "Lishanid Noshan",
"aik": "Ake",
"ail": "Aimele",
"aim": "Aimol",
"ain": "Ainu (Japan)",
"aio": "Aiton",
"aip": "Burumakok",
"aiq": "Aimaq",
"air": "Airoran",
"ait": "Arikem",
"aiw": "Aari",
"aix": "Aighon",
"aiy": "Ali",
"aja": "Aja (South Sudan)",
"ajg": "Aja (Benin)",
"aji": "Ajië",
"ajn": "Andajin",
"ajp": "South Levantine Arabic",
"ajs": "Algerian Jewish Sign Language",
"aju": "Judeo-Moroccan Arabic",
"ajw": "Ajawa",
"ajz": "Amri Karbi",
"ak": "Akan",
"akb": "Batak Angkola",
"akc": "Mpur",
"akd": "Ukpet-Ehom",
"ake": "Akawaio",
"akf": "Akpa",
"akg": "Anakalangu",
"akh": "Angal Heneng",
"aki": "Aiome",
"akj": "Aka-Jeru",
"akk": "Akkadian",
"akl": "Aklanon",
"akm": "Aka-Bo",
"ako": "Akurio",
"akp": "Siwu",
"akq": "Ak",
"akr": "Araki",
"aks": "Akaselem",
"akt": "Akolet",
"aku": "Akum",
"akv": "Akhvakh",
"akw": "Akwa",
"akx": "Aka-Kede",
"aky": "Aka-Kol",
"akz": "Alabama",
"ala": "Alago",
"alc": "Qawasqar",
"ald": "Alladian",
"ale": "Aleut",
"alf": "Alege",
"alg": "Algonquian languages",
"alh": "Alawa",
"ali": "Amaimon",
"alj": "Alangan",
"alk": "Alak",
"all": "Allar",
"alm": "Amblong",
"aln": "Gheg Albanian",
"alo": "Larike-Wakasihu",
"alp": "Alune",
"alq": "Algonquin",
"alr": "Alutor",
"als": "Tosk Albanian",
"alt": "Southern Altai",
"alu": "'Are'are",
"alv": "Atlantic-Congo languages",
"alw": "Alaba-K’abeena; Wanbasana",
"alx": "Amol",
"aly": "Alyawarr",
"alz": "Alur",
"am": "Amharic",
"ama": "Amanayé",
"amb": "Ambo",
"amc": "Amahuaca",
"ame": "Yanesha'",
"amf": "Hamer-Banna",
"amg": "Amurdak",
"ami": "Amis",
"amj": "Amdang",
"amk": "Ambai",
"aml": "War-Jaintia",
"amm": "Ama (Papua New Guinea)",
"amn": "Amanab",
"amo": "Amo",
"amp": "Alamblak",
"amq": "Amahai",
"amr": "Amarakaeri",
"ams": "Southern Amami-Oshima",
"amt": "Amto",
"amu": "Guerrero Amuzgo",
"amv": "Ambelau",
"amw": "Western Neo-Aramaic",
"amx": "Anmatyerre",
"amy": "Ami",
"amz": "Atampaya",
"an": "Aragonese",
"ana": "Andaqui",
"anb": "Andoa",
"anc": "Ngas",
"and": "Ansus",
"ane": "Xârâcùù",
"anf": "Animere",
"ang": "Old English (ca. 450-1100)",
"anh": "Nend",
"ani": "Andi",
"anj": "Anor",
"ank": "Goemai",
"anl": "Anu-Hkongso Chin",
"anm": "Anal",
"ann": "Obolo",
"ano": "Andoque",
"anp": "Angika",
"anq": "Jarawa (India)",
"anr": "Andh",
"ans": "Anserma",
"ant": "Antakarinya; Antikarinya",
"anu": "Anuak",
"anv": "Denya",
"anw": "Anaang",
"anx": "Andra-Hus",
"any": "Anyin",
"anz": "Anem",
"aoa": "Angolar",
"aob": "Abom",
"aoc": "Pemon",
"aod": "Andarum",
"aoe": "Angal Enen",
"aof": "Bragat",
"aog": "Angoram",
"aoi": "Anindilyakwa",
"aoj": "Mufian",
"aok": "Arhö",
"aol": "Alor",
"aom": "Ömie",
"aon": "Bumbita Arapesh",
"aor": "Aore",
"aos": "Taikat",
"aot": "Atong (India); A'tong",
"aou": "A'ou",
"aox": "Atorada",
"aoz": "Uab Meto",
"apa": "Apache languages",
"apb": "Sa'a",
"apc": "North Levantine Arabic",
"apd": "Sudanese Arabic",
"ape": "Bukiyip",
"apf": "Pahanan Agta",
"apg": "Ampanang",
"aph": "Athpariya",
"api": "Apiaká",
"apj": "Jicarilla Apache",
"apk": "Kiowa Apache",
"apl": "Lipan Apache",
"apm": "Mescalero-Chiricahua Apache",
"apn": "Apinayé",
"apo": "Ambul",
"app": "Apma",
"apq": "A-Pucikwar",
"apr": "Arop-Lokep",
"aps": "Arop-Sissano",
"apt": "Apatani",
"apu": "Apurinã",
"apv": "Alapmunte",
"apw": "Western Apache",
"apx": "Aputai",
"apy": "Apalaí",
"apz": "Safeyoka",
"aqa": "Alacalufan languages",
"aqc": "Archi",
"aqd": "Ampari Dogon",
"aqg": "Arigidi",
"aqk": "Aninka",
"aql": "Algic languages",
"aqm": "Atohwaim",
"aqn": "Northern Alta",
"aqp": "Atakapa",
"aqr": "Arhâ",
"aqt": "Angaité",
"aqz": "Akuntsu",
"ar": "Arabic",
"arb": "Standard Arabic",
"arc": "Official Aramaic (700-300 BCE); Imperial Aramaic (700-300 BCE)",
"ard": "Arabana",
"are": "Western Arrarnta",
"arh": "Arhuaco",
"ari": "Arikara",
"arj": "Arapaso",
"ark": "Arikapú",
"arl": "Arabela",
"arn": "Mapudungun; Mapuche",
"aro": "Araona",
"arp": "Arapaho",
"arq": "Algerian Arabic",
"arr": "Karo (Brazil)",
"ars": "Najdi Arabic",
"art": "Artificial languages",
"aru": "Aruá (Amazonas State); Arawá",
"arv": "Arbore",
"arw": "Arawak",
"arx": "Aruá (Rodonia State)",
"ary": "Moroccan Arabic",
"arz": "Egyptian Arabic",
"as": "Assamese",
"asa": "Asu (Tanzania)",
"asb": "Assiniboine",
"asc": "Casuarina Coast Asmat",
"ase": "American Sign Language",
"asf": "Auslan; Australian Sign Language",
"asg": "Cishingini",
"ash": "Abishira",
"asi": "Buruwai",
"asj": "Sari",
"ask": "Ashkun",
"asl": "Asilulu",
"asn": "Xingú Asuriní",
"aso": "Dano",
"asp": "Algerian Sign Language",
"asq": "Austrian Sign Language",
"asr": "Asuri",
"ass": "Ipulo",
"ast": "Asturian; Asturleonese; Bable; Leonese",
"asu": "Tocantins Asurini",
"asv": "Asoa",
"asw": "Australian Aborigines Sign Language",
"asx": "Muratayak",
"asy": "Yaosakor Asmat",
"asz": "As",
"ata": "Pele-Ata",
"atb": "Zaiwa",
"atc": "Atsahuaca",
"atd": "Ata Manobo",
"ate": "Atemble",
"atg": "Ivbie North-Okpela-Arhe",
"ath": "Athapascan languages",
"ati": "Attié",
"atj": "Atikamekw",
"atk": "Ati",
"atl": "Mt. Iraya Agta",
"atm": "Ata",
"atn": "Ashtiani",
"ato": "Atong (Cameroon)",
"atp": "Pudtol Atta",
"atq": "Aralle-Tabulahan",
"atr": "Waimiri-Atroari",
"ats": "Gros Ventre",
"att": "Pamplona Atta",
"atu": "Reel",
"atv": "Northern Altai",
"atw": "Atsugewi",
"atx": "Arutani",
"aty": "Aneityum",
"atz": "Arta",
"aua": "Asumboa",
"aub": "Alugu",
"auc": "Waorani",
"aud": "Anuta",
"auf": "Arauan languages",
"aug": "Aguna",
"auh": "Aushi",
"aui": "Anuki",
"auj": "Awjilah",
"auk": "Heyo",
"aul": "Aulua",
"aum": "Asu (Nigeria)",
"aun": "Molmo One",
"auo": "Auyokawa",
"aup": "Makayam",
"auq": "Anus; Korur",
"aur": "Aruek",
"aus": "Australian languages",
"aut": "Austral",
"auu": "Auye",
"auw": "Awyi",
"aux": "Aurá",
"auy": "Awiyaana",
"auz": "Uzbeki Arabic",
"av": "Avaric",
"avb": "Avau",
"avd": "Alviri-Vidari",
"avi": "Avikam",
"avk": "Kotava",
"avl": "Eastern Egyptian Bedawi Arabic",
"avm": "Angkamuthi",
"avn": "Avatime",
"avo": "Agavotaguerra",
"avs": "Aushiri",
"avt": "Au",
"avu": "Avokaya",
"avv": "Avá-Canoeiro",
"awa": "Awadhi",
"awb": "Awa (Papua New Guinea)",
"awc": "Cicipu",
"awd": "Arawakan languages",
"awe": "Awetí",
"awg": "Anguthimri",
"awh": "Awbono",
"awi": "Aekyom",
"awk": "Awabakal",
"awm": "Arawum",
"awn": "Awngi",
"awo": "Awak",
"awr": "Awera",
"aws": "South Awyu",
"awt": "Araweté",
"awu": "Central Awyu",
"awv": "Jair Awyu",
"aww": "Awun",
"awx": "Awara",
"awy": "Edera Awyu",
"axb": "Abipon",
"axe": "Ayerrerenge",
"axg": "Mato Grosso Arára",
"axk": "Yaka (Central African Republic)",
"axl": "Lower Southern Aranda",
"axm": "Middle Armenian",
"axx": "Xârâgurè",
"ay": "Aymara",
"aya": "Awar",
"ayb": "Ayizo Gbe",
"ayc": "Southern Aymara",
"ayd": "Ayabadhu",
"aye": "Ayere",
"ayg": "Ginyanga",
"ayh": "Hadrami Arabic",
"ayi": "Leyigha",
"ayk": "Akuku",
"ayl": "Libyan Arabic",
"ayn": "Sanaani Arabic",
"ayo": "Ayoreo",
"ayp": "North Mesopotamian Arabic",
"ayq": "Ayi (Papua New Guinea)",
"ayr": "Central Aymara",
"ays": "Sorsogon Ayta",
"ayt": "Magbukun Ayta",
"ayu": "Ayu",
"ayz": "Mai Brat",
"az": "Azerbaijani",
"aza": "Azha",
"azb": "South Azerbaijani",
"azc": "Uto-Aztecan languages",
"azd": "Eastern Durango Nahuatl",
"azg": "San Pedro Amuzgos Amuzgo",
"azj": "North Azerbaijani",
"azm": "Ipalapa Amuzgo",
"azn": "Western Durango Nahuatl",
"azo": "Awing",
"azt": "Faire Atta",
"azz": "Highland Puebla Nahuatl",
"ba": "Bashkir",
"baa": "Babatana",
"bab": "Bainouk-Gunyuño",
"bac": "Badui",
"bad": "Banda languages",
"bae": "Baré",
"baf": "Nubaca",
"bag": "Tuki",
"bah": "Bahamas Creole English",
"bai": "Bamileke languages",
"baj": "Barakai",
"bal": "Baluchi",
"ban": "Balinese",
"bao": "Waimaha",
"bap": "Bantawa",
"bar": "Bavarian",
"bas": "Basa (Cameroon)",
"bat": "Baltic languages",
"bau": "Bada (Nigeria)",
"bav": "Vengo",
"baw": "Bambili-Bambui",
"bax": "Bamun",
"bay": "Batuley",
"bba": "Baatonum",
"bbb": "Barai",
"bbc": "Batak Toba",
"bbd": "Bau",
"bbe": "Bangba",
"bbf": "Baibai",
"bbg": "Barama",
"bbh": "Bugan",
"bbi": "Barombi",
"bbj": "Ghomálá'",
"bbk": "Babanki",
"bbl": "Bats",
"bbm": "Babango",
"bbn": "Uneapa",
"bbo": "Northern Bobo Madaré; Konabéré",
"bbp": "West Central Banda",
"bbq": "Bamali",
"bbr": "Girawa",
"bbs": "Bakpinka",
"bbt": "Mburku",
"bbu": "Kulung (Nigeria)",
"bbv": "Karnai",
"bbw": "Baba",
"bbx": "Bubia",
"bby": "Befang",
"bca": "Central Bai",
"bcb": "Bainouk-Samik",
"bcc": "Southern Balochi",
"bcd": "North Babar",
"bce": "Bamenyam",
"bcf": "Bamu",
"bcg": "Baga Pokur",
"bch": "Bariai",
"bci": "Baoulé",
"bcj": "Bardi",
"bck": "Bunuba",
"bcl": "Central Bikol",
"bcm": "Bannoni",
"bcn": "Bali (Nigeria)",
"bco": "Kaluli",
"bcp": "Bali (Democratic Republic of Congo)",
"bcq": "Bench",
"bcr": "Babine",
"bcs": "Kohumono",
"bct": "Bendi",
"bcu": "Awad Bing",
"bcv": "Shoo-Minda-Nye",
"bcw": "Bana",
"bcy": "Bacama",
"bcz": "Bainouk-Gunyaamolo",
"bda": "Bayot",
"bdb": "Basap",
"bdc": "Emberá-Baudó",
"bdd": "Bunama",
"bde": "Bade",
"bdf": "Biage",
"bdg": "Bonggi",
"bdh": "Baka (South Sudan)",
"bdi": "Burun",
"bdj": "Bai (South Sudan); Bai",
"bdk": "Budukh",
"bdl": "Indonesian Bajau",
"bdm": "Buduma",
"bdn": "Baldemu",
"bdo": "Morom",
"bdp": "Bende",
"bdq": "Bahnar",
"bdr": "West Coast Bajau",
"bds": "Burunge",
"bdt": "Bokoto",
"bdu": "Oroko",
"bdv": "Bodo Parja",
"bdw": "Baham",
"bdx": "Budong-Budong",
"bdy": "Bandjalang",
"bdz": "Badeshi",
"be": "Belarusian",
"bea": "Beaver",
"beb": "Bebele",
"bec": "Iceve-Maci",
"bed": "Bedoanas",
"bee": "Byangsi",
"bef": "Benabena",
"beg": "Belait",
"beh": "Biali",
"bei": "Bekati'",
"bej": "Beja; Bedawiyet",
"bek": "Bebeli",
"bem": "Bemba (Zambia)",
"beo": "Beami",
"bep": "Besoa",
"beq": "Beembe",
"ber": "Berber languages",
"bes": "Besme",
"bet": "Guiberoua Béte",
"beu": "Blagar",
"bev": "Daloa Bété",
"bew": "Betawi",
"bex": "Jur Modo",
"bey": "Beli (Papua New Guinea)",
"bez": "Bena (Tanzania)",
"bfa": "Bari",
"bfb": "Pauri Bareli",
"bfc": "Panyi Bai; Northern Bai",
"bfd": "Bafut",
"bfe": "Betaf; Tena",
"bff": "Bofi",
"bfg": "Busang Kayan",
"bfh": "Blafe",
"bfi": "British Sign Language",
"bfj": "Bafanji",
"bfk": "Ban Khor Sign Language",
"bfl": "Banda-Ndélé",
"bfm": "Mmen",
"bfn": "Bunak",
"bfo": "Malba Birifor",
"bfp": "Beba",
"bfq": "Badaga",
"bfr": "Bazigar",
"bfs": "Southern Bai",
"bft": "Balti",
"bfu": "Gahri",
"bfw": "Bondo",
"bfx": "Bantayanon",
"bfy": "Bagheli",
"bfz": "Mahasu Pahari",
"bg": "Bulgarian",
"bga": "Gwamhi-Wuri",
"bgb": "Bobongko",
"bgc": "Haryanvi",
"bgd": "Rathwi Bareli",
"bge": "Bauria",
"bgf": "Bangandu",
"bgg": "Bugun",
"bgi": "Giangan",
"bgj": "Bangolan",
"bgk": "Bit; Buxinhua",
"bgl": "Bo (Laos)",
"bgn": "Western Balochi",
"bgo": "Baga Koga",
"bgp": "Eastern Balochi",
"bgq": "Bagri",
"bgr": "Bawm Chin",
"bgs": "Tagabawa",
"bgt": "Bughotu",
"bgu": "Mbongno",
"bgv": "Warkay-Bipim",
"bgw": "Bhatri",
"bgx": "Balkan Gagauz Turkish",
"bgy": "Benggoi",
"bgz": "Banggai",
"bh": "Bihari languages",
"bha": "Bharia",
"bhb": "Bhili",
"bhc": "Biga",
"bhd": "Bhadrawahi",
"bhe": "Bhaya",
"bhf": "Odiai",
"bhg": "Binandere",
"bhh": "Bukharic",
"bhi": "Bhilali",
"bhj": "Bahing",
"bhl": "Bimin",
"bhm": "Bathari",
"bhn": "Bohtan Neo-Aramaic",
"bho": "Bhojpuri",
"bhp": "Bima",
"bhq": "Tukang Besi South",
"bhr": "Bara Malagasy",
"bhs": "Buwal",
"bht": "Bhattiyali",
"bhu": "Bhunjia",
"bhv": "Bahau",
"bhw": "Biak",
"bhx": "Bhalay",
"bhy": "Bhele",
"bhz": "Bada (Indonesia)",
"bi": "Bislama",
"bia": "Badimaya",
"bib": "Bissa; Bisa",
"bid": "Bidiyo",
"bie": "Bepour",
"bif": "Biafada",
"big": "Biangai",
"bik": "Bikol",
"bil": "Bile",
"bim": "Bimoba",
"bin": "Bini; Edo",
"bio": "Nai",
"bip": "Bila",
"biq": "Bipi",
"bir": "Bisorio",
"bit": "Berinomo",
"biu": "Biete",
"biv": "Southern Birifor",
"biw": "Kol (Cameroon)",
"bix": "Bijori",
"biy": "Birhor",
"biz": "Baloi",
"bja": "Budza",
"bjb": "Banggarla",
"bjc": "Bariji",
"bje": "Biao-Jiao Mien",
"bjf": "Barzani Jewish Neo-Aramaic",
"bjg": "Bidyogo",
"bjh": "Bahinemo",
"bji": "Burji",
"bjj": "Kanauji",
"bjk": "Barok",
"bjl": "Bulu (Papua New Guinea)",
"bjm": "Bajelani",
"bjn": "Banjar",
"bjo": "Mid-Southern Banda",
"bjp": "Fanamaket",
"bjr": "Binumarien",
"bjs": "Bajan",
"bjt": "Balanta-Ganja",
"bju": "Busuu",
"bjv": "Bedjond",
"bjw": "Bakwé",
"bjx": "Banao Itneg",
"bjy": "Bayali",
"bjz": "Baruga",
"bka": "Kyak",
"bkc": "Baka (Cameroon)",
"bkd": "Binukid; Talaandig",
"bkf": "Beeke",
"bkg": "Buraka",
"bkh": "Bakoko",
"bki": "Baki",
"bkj": "Pande",
"bkk": "Brokskat",
"bkl": "Berik",
"bkm": "Kom (Cameroon)",
"bkn": "Bukitan",
"bko": "Kwa'",
"bkp": "Boko (Democratic Republic of Congo)",
"bkq": "Bakairí",
"bkr": "Bakumpai",
"bks": "Northern Sorsoganon",
"bkt": "Boloki",
"bku": "Buhid",
"bkv": "Bekwarra",
"bkw": "Bekwel",
"bkx": "Baikeno",
"bky": "Bokyi",
"bkz": "Bungku",
"bla": "Siksika",
"blb": "Bilua",
"blc": "Bella Coola",
"bld": "Bolango",
"ble": "Balanta-Kentohe",
"blf": "Buol",
"blh": "Kuwaa",
"bli": "Bolia",
"blj": "Bolongan",
"blk": "Pa'o Karen; Pa'O",
"bll": "Biloxi",
"blm": "Beli (South Sudan)",
"bln": "Southern Catanduanes Bikol",
"blo": "Anii",
"blp": "Blablanga",
"blq": "Baluan-Pam",
"blr": "Blang",
"bls": "Balaesang",
"blt": "Tai Dam",
"blv": "Kibala; Bolo",
"blw": "Balangao",
"blx": "Mag-Indi Ayta",
"bly": "Notre",
"blz": "Balantak",
"bm": "Bambara",
"bma": "Lame",
"bmb": "Bembe",
"bmc": "Biem",
"bmd": "Baga Manduri",
"bme": "Limassa",
"bmf": "Bom-Kim",
"bmg": "Bamwe",
"bmh": "Kein",
"bmi": "Bagirmi",
"bmj": "Bote-Majhi",
"bmk": "Ghayavi",
"bml": "Bomboli",
"bmm": "Northern Betsimisaraka Malagasy",
"bmn": "Bina (Papua New Guinea)",
"bmo": "Bambalang",
"bmp": "Bulgebi",
"bmq": "Bomu",
"bmr": "Muinane",
"bms": "Bilma Kanuri",
"bmt": "Biao Mon",
"bmu": "Somba-Siawari",
"bmv": "Bum",
"bmw": "Bomwali",
"bmx": "Baimak",
"bmz": "Baramu",
"bn": "Bengali; Bangla",
"bna": "Bonerate",
"bnb": "Bookan",
"bnc": "Bontok",
"bnd": "Banda (Indonesia)",
"bne": "Bintauna",
"bnf": "Masiwang",
"bng": "Benga",
"bni": "Bangi",
"bnj": "Eastern Tawbuid",
"bnk": "Bierebo",
"bnl": "Boon",
"bnm": "Batanga",
"bnn": "Bunun",
"bno": "Bantoanon",
"bnp": "Bola",
"bnq": "Bantik",
"bnr": "Butmas-Tur",
"bns": "Bundeli",
"bnt": "Bantu languages",
"bnu": "Bentong",
"bnv": "Bonerif; Beneraf; Edwas",
"bnw": "Bisis",
"bnx": "Bangubangu",
"bny": "Bintulu",
"bnz": "Beezen",
"bo": "Tibetan",
"boa": "Bora",
"bob": "Aweer",
"boe": "Mundabli",
"bof": "Bolon",
"bog": "Bamako Sign Language",
"boh": "Boma",
"boi": "Barbareño",
"boj": "Anjam",
"bok": "Bonjo",
"bol": "Bole",
"bom": "Berom",
"bon": "Bine",
"boo": "Tiemacèwè Bozo",
"bop": "Bonkiman",
"boq": "Bogaya",
"bor": "Borôro",
"bot": "Bongo",
"bou": "Bondei",
"bov": "Tuwuli",
"bow": "Rema",
"box": "Buamu",
"boy": "Bodo (Central African Republic)",
"boz": "Tiéyaxo Bozo",
"bpa": "Daakaka",
"bpc": "Mbuk",
"bpd": "Banda-Banda",
"bpe": "Bauni",
"bpg": "Bonggo",
"bph": "Botlikh",
"bpi": "Bagupi",
"bpj": "Binji",
"bpk": "Orowe; 'Ôrôê",
"bpl": "Broome Pearling Lugger Pidgin",
"bpm": "Biyom",
"bpn": "Dzao Min",
"bpo": "Anasi",
"bpp": "Kaure",
"bpq": "Banda Malay",
"bpr": "Koronadal Blaan",
"bps": "Sarangani Blaan",
"bpt": "Barrow Point",
"bpu": "Bongu",
"bpv": "Bian Marind",
"bpw": "Bo (Papua New Guinea)",
"bpx": "Palya Bareli",
"bpy": "Bishnupriya",
"bpz": "Bilba",
"bqa": "Tchumbuli",
"bqb": "Bagusa",
"bqc": "Boko (Benin); Boo",
"bqd": "Bung",
"bqf": "Baga Kaloum",
"bqg": "Bago-Kusuntu",
"bqh": "Baima",
"bqi": "Bakhtiari",
"bqj": "Bandial",
"bqk": "Banda-Mbrès",
"bql": "Bilakura",
"bqm": "Wumboko",
"bqn": "Bulgarian Sign Language",
"bqo": "Balo",
"bqp": "Busa",
"bqq": "Biritai",
"bqr": "Burusu",
"bqs": "Bosngun",
"bqt": "Bamukumbit",
"bqu": "Boguru",
"bqv": "Koro Wachi; Begbere-Ejar",
"bqw": "Buru (Nigeria)",
"bqx": "Baangi",
"bqy": "Bengkala Sign Language",
"bqz": "Bakaka",
"br": "Breton",
"bra": "Braj",
"brb": "Brao; Lave",
"brc": "Berbice Creole Dutch",
"brd": "Baraamu",
"brf": "Bira",
"brg": "Baure",
"brh": "Brahui",
"bri": "Mokpwe",
"brj": "Bieria",
"brk": "Birked",
"brl": "Birwa",
"brm": "Barambu",
"brn": "Boruca",
"bro": "Brokkat",
"brp": "Barapasi",
"brq": "Breri",
"brr": "Birao",
"brs": "Baras",
"brt": "Bitare",
"bru": "Eastern Bru",
"brv": "Western Bru",
"brw": "Bellari",
"brx": "Bodo (India)",
"bry": "Burui",
"brz": "Bilbil",
"bs": "Bosnian",
"bsa": "Abinomn",
"bsb": "Brunei Bisaya",
"bsc": "Bassari; Oniyan",
"bse": "Wushi",
"bsf": "Bauchi",
"bsg": "Bashkardi",
"bsh": "Kati",
"bsi": "Bassossi",
"bsj": "Bangwinji",
"bsk": "Burushaski",
"bsl": "Basa-Gumna",
"bsm": "Busami",
"bsn": "Barasana-Eduria",
"bso": "Buso",
"bsp": "Baga Sitemu",
"bsq": "Bassa",
"bsr": "Bassa-Kontagora",
"bss": "Akoose",
"bst": "Basketo",
"bsu": "Bahonsuai",
"bsv": "Baga Sobané",
"bsw": "Baiso",
"bsx": "Yangkam",
"bsy": "Sabah Bisaya",
"bta": "Bata",
"btc": "Bati (Cameroon)",
"btd": "Batak Dairi",
"bte": "Gamo-Ningi",
"btf": "Birgit",
"btg": "Gagnoa Bété",
"bth": "Biatah Bidayuh",
"bti": "Burate",
"btj": "Bacanese Malay",
"btk": "Batak languages",
"btm": "Batak Mandailing",
"btn": "Ratagnon",
"bto": "Rinconada Bikol",
"btp": "Budibud",
"btq": "Batek",
"btr": "Baetora",
"bts": "Batak Simalungun",
"btt": "Bete-Bendi",
"btu": "Batu",
"btv": "Bateri",
"btw": "Butuanon",
"btx": "Batak Karo",
"bty": "Bobot",
"btz": "Batak Alas-Kluet",
"bua": "Buriat",
"bub": "Bua",
"buc": "Bushi",
"bud": "Ntcham",
"bue": "Beothuk",
"buf": "Bushoong",
"bug": "Buginese",
"buh": "Younuo Bunu",
"bui": "Bongili",
"buj": "Basa-Gurmana",
"buk": "Bugawac",
"bum": "Bulu (Cameroon)",
"bun": "Sherbro",
"buo": "Terei",
"bup": "Busoa",
"buq": "Brem",
"bus": "Bokobaru",
"but": "Bungain",
"buu": "Budu",
"buv": "Bun",
"buw": "Bubi",
"bux": "Boghom",
"buy": "Bullom So",
"buz": "Bukwen",
"bva": "Barein",
"bvb": "Bube",
"bvc": "Baelelea",
"bvd": "Baeggu",
"bve": "Berau Malay",
"bvf": "Boor",
"bvg": "Bonkeng",
"bvh": "Bure",
"bvi": "Belanda Viri",
"bvj": "Baan",
"bvk": "Bukat",
"bvl": "Bolivian Sign Language",
"bvm": "Bamunka",
"bvn": "Buna",
"bvo": "Bolgo",
"bvp": "Bumang",
"bvq": "Birri",
"bvr": "Burarra",
"bvt": "Bati (Indonesia)",
"bvu": "Bukit Malay",
"bvv": "Baniva",
"bvw": "Boga",
"bvx": "Dibole",
"bvy": "Baybayanon",
"bvz": "Bauzi",
"bwa": "Bwatoo",
"bwb": "Namosi-Naitasiri-Serua",
"bwc": "Bwile",
"bwd": "Bwaidoka",
"bwe": "Bwe Karen",
"bwf": "Boselewa",
"bwg": "Barwe",
"bwh": "Bishuo",
"bwi": "Baniwa",
"bwj": "Láá Láá Bwamu",
"bwk": "Bauwaki",
"bwl": "Bwela",
"bwm": "Biwat",
"bwn": "Wunai Bunu",
"bwo": "Boro (Ethiopia); Borna (Ethiopia)",
"bwp": "Mandobo Bawah",
"bwq": "Southern Bobo Madaré",
"bwr": "Bura-Pabir",
"bws": "Bomboma",
"bwt": "Bafaw-Balong",
"bwu": "Buli (Ghana)",
"bww": "Bwa",
"bwx": "Bu-Nao Bunu",
"bwy": "Cwi Bwamu",
"bwz": "Bwisi",
"bxa": "Tairaha",
"bxb": "Belanda Bor",
"bxc": "Molengue",
"bxd": "Pela",
"bxe": "Birale",
"bxf": "Bilur; Minigir",
"bxg": "Bangala",
"bxh": "Buhutu",
"bxi": "Pirlatapa",
"bxj": "Bayungu",
"bxk": "Bukusu; Lubukusu",
"bxl": "Jalkunan",
"bxm": "Mongolia Buriat",
"bxn": "Burduna",
"bxo": "Barikanchi",
"bxp": "Bebil",
"bxq": "Beele",
"bxr": "Russia Buriat",
"bxs": "Busam",
"bxu": "China Buriat",
"bxv": "Berakou",
"bxw": "Bankagooma",
"bxz": "Binahari",
"bya": "Batak",
"byb": "Bikya",
"byc": "Ubaghara",
"byd": "Benyadu'",
"bye": "Pouye",
"byf": "Bete",
"byg": "Baygo",
"byh": "Bhujel",
"byi": "Buyu",
"byj": "Bina (Nigeria)",
"byk": "Biao",
"byl": "Bayono",
"bym": "Bidjara",
"byn": "Bilin; Blin",
"byo": "Biyo",
"byp": "Bumaji",
"byq": "Basay",
"byr": "Baruya; Yipma",
"bys": "Burak",
"byt": "Berti",
"byv": "Medumba",
"byw": "Belhariya",
"byx": "Qaqet",
"byz": "Banaro",
"bza": "Bandi",
"bzb": "Andio",
"bzc": "Southern Betsimisaraka Malagasy",
"bzd": "Bribri",
"bze": "Jenaama Bozo",
"bzf": "Boikin",
"bzg": "Babuza",
"bzh": "Mapos Buang",
"bzi": "Bisu",
"bzj": "Belize Kriol English",
"bzk": "Nicaragua Creole English",
"bzl": "Boano (Sulawesi)",
"bzm": "Bolondo",
"bzn": "Boano (Maluku)",
"bzo": "Bozaba",
"bzp": "Kemberano",
"bzq": "Buli (Indonesia)",
"bzr": "Biri",
"bzs": "Brazilian Sign Language",
"bzt": "Brithenig",
"bzu": "Burmeso",
"bzv": "Naami",
"bzw": "Basa (Nigeria)",
"bzx": "Kɛlɛngaxo Bozo",
"bzy": "Obanliku",
"bzz": "Evant",
"ca": "Catalan; Valencian",
"caa": "Chortí",
"cab": "Garifuna",
"cac": "Chuj",
"cad": "Caddo",
"cae": "Lehar; Laalaa",
"caf": "Southern Carrier",
"cag": "Nivaclé",
"cah": "Cahuarano",
"cai": "Central American Indian languages",
"caj": "Chané",
"cak": "Kaqchikel; Cakchiquel",
"cal": "Carolinian",
"cam": "Cemuhî",
"can": "Chambri",
"cao": "Chácobo",
"cap": "Chipaya",
"caq": "Car Nicobarese",
"car": "Galibi Carib",
"cas": "Tsimané",
"cau": "Caucasian languages",
"cav": "Cavineña",
"caw": "Callawalla",
"cax": "Chiquitano",
"cay": "Cayuga",
"caz": "Canichana",
"cba": "Chibchan languages",
"cbb": "Cabiyarí",
"cbc": "Carapana",
"cbd": "Carijona",
"cbg": "Chimila",
"cbi": "Chachi",
"cbj": "Ede Cabe",
"cbk": "Chavacano",
"cbl": "Bualkhaw Chin",
"cbn": "Nyahkur",
"cbo": "Izora",
"cbq": "Tsucuba; Cuba",
"cbr": "Cashibo-Cacataibo",
"cbs": "Cashinahua",
"cbt": "Chayahuita",
"cbu": "Candoshi-Shapra",
"cbv": "Cacua",
"cbw": "Kinabalian",
"cby": "Carabayo",
"ccc": "Chamicuro",
"ccd": "Cafundo Creole",
"cce": "Chopi",
"ccg": "Samba Daka",
"cch": "Atsam",
"ccj": "Kasanga",
"ccl": "Cutchi-Swahili",
"ccm": "Malaccan Creole Malay",
"ccn": "North Caucasian languages",
"cco": "Comaltepec Chinantec",
"ccp": "Chakma",
"ccr": "Cacaopera",
"ccs": "South Caucasian languages",
"cda": "Choni",
"cdc": "Chadic languages",
"cdd": "Caddoan languages",
"cde": "Chenchu",
"cdf": "Chiru",
"cdh": "Chambeali",
"cdi": "Chodri",
"cdj": "Churahi",
"cdm": "Chepang",
"cdn": "Chaudangsi",
"cdo": "Min Dong Chinese",
"cdr": "Cinda-Regi-Tiyal",
"cds": "Chadian Sign Language",
"cdy": "Chadong",
"cdz": "Koda",
"ce": "Chechen",
"cea": "Lower Chehalis",
"ceb": "Cebuano",
"ceg": "Chamacoco",
"cek": "Eastern Khumi Chin",
"cel": "Celtic languages",
"cen": "Cen",
"cet": "Centúúm",
"cey": "Ekai Chin",
"cfa": "Dijim-Bwilim",
"cfd": "Cara",
"cfg": "Como Karim",
"cfm": "Falam Chin",
"cga": "Changriwa",
"cgc": "Kagayanen",
"cgg": "Chiga",
"cgk": "Chocangacakha",
"ch": "Chamorro",
"chb": "Chibcha",
"chc": "Catawba",
"chd": "Highland Oaxaca Chontal",
"chf": "Tabasco Chontal",
"chg": "Chagatai",
"chh": "Chinook",
"chj": "Ojitlán Chinantec",
"chk": "Chuukese",
"chl": "Cahuilla",
"chm": "Mari (Russia)",
"chn": "Chinook jargon",
"cho": "Choctaw",
"chp": "Chipewyan; Dene Suline",
"chq": "Quiotepec Chinantec",
"chr": "Cherokee",
"cht": "Cholón",
"chw": "Chuwabu",
"chx": "Chantyal",
"chy": "Cheyenne",
"chz": "Ozumacín Chinantec",
"cia": "Cia-Cia",
"cib": "Ci Gbe",
"cic": "Chickasaw",
"cid": "Chimariko",
"cie": "Cineni",
"cih": "Chinali",
"cik": "Chitkuli Kinnauri",
"cim": "Cimbrian",
"cin": "Cinta Larga",
"cip": "Chiapanec",
"cir": "Tiri; Haméa; Méa",
"ciw": "Chippewa",
"ciy": "Chaima",
"cja": "Western Cham",
"cje": "Chru",
"cjh": "Upper Chehalis",
"cji": "Chamalal",
"cjk": "Chokwe",
"cjm": "Eastern Cham",
"cjn": "Chenapian",
"cjo": "Ashéninka Pajonal",
"cjp": "Cabécar",
"cjs": "Shor",
"cjv": "Chuave",
"cjy": "Jinyu Chinese",
"ckb": "Central Kurdish",
"ckh": "Chak",
"ckl": "Cibak",
"ckm": "Chakavian",
"ckn": "Kaang Chin",
"cko": "Anufo",
"ckq": "Kajakse",
"ckr": "Kairak",
"cks": "Tayo",
"ckt": "Chukot",
"cku": "Koasati",
"ckv": "Kavalan",
"ckx": "Caka",
"cky": "Cakfem-Mushere",
"ckz": "Cakchiquel-Quiché Mixed Language",
"cla": "Ron",
"clc": "Chilcotin",
"cld": "Chaldean Neo-Aramaic",
"cle": "Lealao Chinantec",
"clh": "Chilisso",
"cli": "Chakali",
"clj": "Laitu Chin",
"clk": "Idu-Mishmi",
"cll": "Chala",
"clm": "Clallam",
"clo": "Lowland Oaxaca Chontal",
"clt": "Lautu Chin",
"clu": "Caluyanun",
"clw": "Chulym",
"cly": "Eastern Highland Chatino",
"cma": "Maa",
"cmc": "Chamic languages",
"cme": "Cerma",
"cmg": "Classical Mongolian",
"cmi": "Emberá-Chamí",
"cml": "Campalagian",
"cmm": "Michigamea",
"cmn": "Mandarin Chinese",
"cmo": "Central Mnong",
"cmr": "Mro-Khimi Chin",
"cms": "Messapic",
"cmt": "Camtho",
"cna": "Changthang",
"cnb": "Chinbon Chin",
"cnc": "Côông",
"cng": "Northern Qiang",
"cnh": "Hakha Chin; Haka Chin",
"cni": "Asháninka",
"cnk": "Khumi Chin",
"cnl": "Lalana Chinantec",
"cno": "Con",
"cnp": "Northern Ping Chinese; Northern Pinghua",
"cnq": "Chung",
"cnr": "Montenegrin",
"cns": "Central Asmat",
"cnt": "Tepetotutla Chinantec",
"cnu": "Chenoua",
"cnw": "Ngawn Chin",
"cnx": "Middle Cornish",
"co": "Corsican",
"coa": "Cocos Islands Malay",
"cob": "Chicomuceltec",
"coc": "Cocopa",
"cod": "Cocama-Cocamilla",
"coe": "Koreguaje",
"cof": "Colorado",
"cog": "Chong",
"coh": "Chonyi-Dzihana-Kauma; Chichonyi-Chidzihana-Chikauma",
"coj": "Cochimi",
"cok": "Santa Teresa Cora",
"col": "Columbia-Wenatchi",
"com": "Comanche",
"con": "Cofán",
"coo": "Comox",
"cop": "Coptic",
"coq": "Coquille",
"cot": "Caquinte",
"cou": "Wamey",
"cov": "Cao Miao",
"cow": "Cowlitz",
"cox": "Nanti",
"coz": "Chochotec",
"cpa": "Palantla Chinantec",
"cpb": "Ucayali-Yurúa Ashéninka",
"cpc": "Ajyíninka Apurucayali",
"cpe": "English-based creoles and pidgins",
"cpf": "French-based creoles and pidgins",
"cpg": "Cappadocian Greek",
"cpi": "Chinese Pidgin English",
"cpn": "Cherepon",
"cpo": "Kpeego",
"cpp": "Portuguese-based creoles and pidgins",
"cps": "Capiznon",
"cpu": "Pichis Ashéninka",
"cpx": "Pu-Xian Chinese",
"cpy": "South Ucayali Ashéninka",
"cqd": "Chuanqiandian Cluster Miao",
"cr": "Cree",
"cra": "Chara",
"crb": "Island Carib",
"crc": "Lonwolwol",
"crd": "Coeur d'Alene",
"crf": "Caramanta",
"crg": "Michif",
"crh": "Crimean Tatar; Crimean Turkish",
"cri": "Sãotomense",
"crj": "Southern East Cree",
"crk": "Plains Cree",
"crl": "Northern East Cree",
"crm": "Moose Cree",
"crn": "El Nayar Cora",
"cro": "Crow",
"crp": "Creoles and pidgins",
"crq": "Iyo'wujwa Chorote",
"crr": "Carolina Algonquian",
"crs": "Seselwa Creole French",
"crt": "Iyojwa'ja Chorote",
"crv": "Chaura",
"crw": "Chrau",
"crx": "Carrier",
"cry": "Cori",
"crz": "Cruzeño",
"cs": "Czech",
"csa": "Chiltepec Chinantec",
"csb": "Kashubian",
"csc": "Catalan Sign Language; Lengua de señas catalana; Llengua de Signes Catalana",
"csd": "Chiangmai Sign Language",
"cse": "Czech Sign Language",
"csf": "Cuba Sign Language",
"csg": "Chilean Sign Language",
"csh": "Asho Chin",
"csi": "Coast Miwok",
"csj": "Songlai Chin",
"csk": "Jola-Kasa",
"csl": "Chinese Sign Language",
"csm": "Central Sierra Miwok",
"csn": "Colombian Sign Language",
"cso": "Sochiapam Chinantec; Sochiapan Chinantec",
"csp": "Southern Ping Chinese; Southern Pinghua",
"csq": "Croatia Sign Language",
"csr": "Costa Rican Sign Language",
"css": "Southern Ohlone",
"cst": "Northern Ohlone",
"csu": "Central Sudanic languages",
"csv": "Sumtu Chin",
"csw": "Swampy Cree",
"csx": "Cambodian Sign Language",
"csy": "Siyin Chin",
"csz": "Coos",
"cta": "Tataltepec Chatino",
"ctc": "Chetco",
"ctd": "Tedim Chin",
"cte": "Tepinapa Chinantec",
"ctg": "Chittagonian",
"cth": "Thaiphum Chin",
"ctl": "Tlacoatzintepec Chinantec",
"ctm": "Chitimacha",
"ctn": "Chhintange",
"cto": "Emberá-Catío",
"ctp": "Western Highland Chatino",
"cts": "Northern Catanduanes Bikol",
"ctt": "Wayanad Chetti",
"ctu": "Chol",
"cty": "Moundadan Chetty",
"ctz": "Zacatepec Chatino",
"cu": "Church Slavic; Church Slavonic; Old Bulgarian; Old Church Slavonic; Old Slavonic",
"cua": "Cua",
"cub": "Cubeo",
"cuc": "Usila Chinantec",
"cuh": "Chuka; Gichuka",
"cui": "Cuiba",
"cuj": "Mashco Piro",
"cuk": "San Blas Kuna",
"cul": "Culina; Kulina",
"cuo": "Cumanagoto",
"cup": "Cupeño",
"cuq": "Cun",
"cur": "Chhulung",
"cus": "Cushitic languages",
"cut": "Teutila Cuicatec",
"cuu": "Tai Ya",
"cuv": "Cuvok",
"cuw": "Chukwa",
"cux": "Tepeuxila Cuicatec",
"cuy": "Cuitlatec",
"cv": "Chuvash",
"cvg": "Chug",
"cvn": "Valle Nacional Chinantec",
"cwa": "Kabwa",
"cwb": "Maindo",
"cwd": "Woods Cree",
"cwe": "Kwere",
"cwg": "Chewong; Cheq Wong",
"cwt": "Kuwaataay",
"cy": "Welsh",
"cya": "Nopala Chatino",
"cyb": "Cayubaba",
"cyo": "Cuyonon",
"czh": "Huizhou Chinese",
"czk": "Knaanic",
"czn": "Zenzontepec Chatino",
"czo": "Min Zhong Chinese",
"czt": "Zotung Chin",
"da": "Danish",
"daa": "Dangaléat",
"dac": "Dambi",
"dad": "Marik",
"dae": "Duupa",
"dag": "Dagbani",
"dah": "Gwahatike",
"dai": "Day",
"daj": "Dar Fur Daju",
"dak": "Dakota",
"dal": "Dahalo",
"dam": "Damakawa",
"dao": "Daai Chin",
"daq": "Dandami Maria",
"dar": "Dargwa",
"das": "Daho-Doo",
"dau": "Dar Sila Daju",
"dav": "Taita; Dawida",
"daw": "Davawenyo",
"dax": "Dayi",
"day": "Land Dayak languages",
"daz": "Dao",
"dba": "Bangime",
"dbb": "Deno",
"dbd": "Dadiya",
"dbe": "Dabe",
"dbf": "Edopi",
"dbg": "Dogul Dom Dogon",
"dbi": "Doka",
"dbj": "Ida'an",
"dbl": "Dyirbal",
"dbm": "Duguri",
"dbn": "Duriankere",
"dbo": "Dulbu",
"dbp": "Duwai",
"dbq": "Daba",
"dbr": "Dabarre",
"dbt": "Ben Tey Dogon",
"dbu": "Bondum Dom Dogon",
"dbv": "Dungu",
"dbw": "Bankan Tey Dogon",
"dby": "Dibiyaso",
"dcc": "Deccan",
"dcr": "Negerhollands",
"dda": "Dadi Dadi",
"ddd": "Dongotono",
"dde": "Doondo",
"ddg": "Fataluku",
"ddi": "West Goodenough",
"ddj": "Jaru",
"ddn": "Dendi (Benin)",
"ddo": "Dido",
"ddr": "Dhudhuroa",
"dds": "Donno So Dogon",
"ddw": "Dawera-Daweloor",
"de": "German",
"dec": "Dagik",
"ded": "Dedua",
"dee": "Dewoin",
"def": "Dezfuli",
"deg": "Degema",
"deh": "Dehwari",
"dei": "Demisa",
"dek": "Dek",
"del": "Delaware",
"dem": "Dem",
"den": "Slave (Athapascan)",
"dep": "Pidgin Delaware",
"deq": "Dendi (Central African Republic)",
"der": "Deori",
"des": "Desano",
"dev": "Domung",
"dez": "Dengese",
"dga": "Southern Dagaare",
"dgb": "Bunoge Dogon",
"dgc": "Casiguran Dumagat Agta",
"dgd": "Dagaari Dioula",
"dge": "Degenan",
"dgg": "Doga",
"dgh": "Dghwede",
"dgi": "Northern Dagara",
"dgk": "Dagba",
"dgl": "Andaandi; Dongolawi",
"dgn": "Dagoman",
"dgo": "Dogri (individual language)",
"dgr": "Dogrib; Tłı̨chǫ",
"dgs": "Dogoso",
"dgt": "Ndra'ngith",
"dgw": "Daungwurrung",
"dgx": "Doghoro",
"dgz": "Daga",
"dhd": "Dhundari",
"dhg": "Dhangu-Djangu; Dhangu; Djangu",
"dhi": "Dhimal",
"dhl": "Dhalandji",
"dhm": "Zemba",
"dhn": "Dhanki",
"dho": "Dhodia",
"dhr": "Dhargari",
"dhs": "Dhaiso",
"dhu": "Dhurga",
"dhv": "Dehu; Drehu",
"dhw": "Dhanwar (Nepal)",
"dhx": "Dhungaloo",
"dia": "Dia",
"dib": "South Central Dinka",
"dic": "Lakota Dida",
"did": "Didinga",
"dif": "Dieri; Diyari",
"dig": "Digo; Chidigo",
"dih": "Kumiai",
"dii": "Dimbong",
"dij": "Dai",
"dik": "Southwestern Dinka",
"dil": "Dilling",
"dim": "Dime",
"din": "Dinka",
"dio": "Dibo",
"dip": "Northeastern Dinka",
"diq": "Dimli (individual language)",
"dir": "Dirim",
"dis": "Dimasa",
"diu": "Diriku",
"diw": "Northwestern Dinka",
"dix": "Dixon Reef",
"diy": "Diuwe",
"diz": "Ding",
"dja": "Djadjawurrung",
"djb": "Djinba",
"djc": "Dar Daju Daju",
"djd": "Djamindjung; Ngaliwurru",
"dje": "Zarma",
"djf": "Djangun",
"dji": "Djinang",
"djj": "Djeebbana",
"djk": "Eastern Maroon Creole; Businenge Tongo; Nenge",
"djm": "Jamsay Dogon",
"djn": "Jawoyn; Djauan",
"djo": "Jangkang",
"djr": "Djambarrpuyngu",
"dju": "Kapriman",
"djw": "Djawi",
"dka": "Dakpakha",
"dkg": "Kadung",
"dkk": "Dakka",
"dkr": "Kuijau",
"dks": "Southeastern Dinka",
"dkx": "Mazagway",
"dlg": "Dolgan",
"dlk": "Dahalik",
"dlm": "Dalmatian",
"dln": "Darlong",
"dma": "Duma",
"dmb": "Mombo Dogon",
"dmc": "Gavak",
"dmd": "Madhi Madhi",
"dme": "Dugwor",
"dmf": "Medefaidrin",
"dmg": "Upper Kinabatangan",
"dmk": "Domaaki",
"dml": "Dameli",
"dmm": "Dama",
"dmn": "Mande languages",
"dmo": "Kemedzung",
"dmr": "East Damar",
"dms": "Dampelas",
"dmu": "Dubu; Tebi",
"dmv": "Dumpas",
"dmw": "Mudburra",
"dmx": "Dema",
"dmy": "Demta; Sowari",
"dna": "Upper Grand Valley Dani",
"dnd": "Daonda",
"dne": "Ndendeule",
"dng": "Dungan",
"dni": "Lower Grand Valley Dani",
"dnj": "Dan",
"dnk": "Dengka",
"dnn": "Dzùùngoo",
"dno": "Ndrulo; Northern Lendu",
"dnr": "Danaru",
"dnt": "Mid Grand Valley Dani",
"dnu": "Danau",
"dnv": "Danu",
"dnw": "Western Dani",
"dny": "Dení",
"doa": "Dom",
"dob": "Dobu",
"doc": "Northern Dong",
"doe": "Doe",
"dof": "Domu",
"doh": "Dong",
"doi": "Dogri (macrolanguage)",
"dok": "Dondo",
"dol": "Doso",
"don": "Toura (Papua New Guinea)",
"doo": "Dongo",
"dop": "Lukpa",
"doq": "Dominican Sign Language",
"dor": "Dori'o",
"dos": "Dogosé",
"dot": "Dass",
"dov": "Dombe",
"dow": "Doyayo",
"dox": "Bussa",
"doy": "Dompo",
"doz": "Dorze",
"dpp": "Papar",
"dra": "Dravidian languages",
"drb": "Dair",
"drc": "Minderico",
"drd": "Darmiya",
"dre": "Dolpo",
"drg": "Rungus",
"dri": "C'Lela",
"drl": "Paakantyi",
"drn": "West Damar",
"dro": "Daro-Matu Melanau",
"drq": "Dura",
"drs": "Gedeo",
"drt": "Drents",
"dru": "Rukai",
"dry": "Darai",
"dsb": "Lower Sorbian",
"dse": "Dutch Sign Language",
"dsh": "Daasanach",
"dsi": "Disa",
"dsl": "Danish Sign Language",
"dsn": "Dusner",
"dso": "Desiya",
"dsq": "Tadaksahak",
"dsz": "Mardin Sign Language",
"dta": "Daur",
"dtb": "Labuk-Kinabatangan Kadazan",
"dtd": "Ditidaht",
"dth": "Adithinngithigh",
"dti": "Ana Tinga Dogon",
"dtk": "Tene Kan Dogon",
"dtm": "Tomo Kan Dogon",
"dtn": "Daatsʼíin",
"dto": "Tommo So Dogon",
"dtp": "Kadazan Dusun; Central Dusun",
"dtr": "Lotud",
"dts": "Toro So Dogon",
"dtt": "Toro Tegu Dogon",
"dtu": "Tebul Ure Dogon",
"dty": "Dotyali",
"dua": "Duala",
"dub": "Dubli",
"duc": "Duna",
"due": "Umiray Dumaget Agta",
"duf": "Dumbea; Drubea",
"dug": "Duruma; Chiduruma",
"duh": "Dungra Bhil",
"dui": "Dumun",
"duk": "Uyajitaya",
"dul": "Alabat Island Agta",
"dum": "Middle Dutch (ca. 1050-1350)",
"dun": "Dusun Deyah",
"duo": "Dupaninan Agta",
"dup": "Duano",
"duq": "Dusun Malang",
"dur": "Dii",
"dus": "Dumi",
"duu": "Drung",
"duv": "Duvle",
"duw": "Dusun Witu",
"dux": "Duungooma",
"duy": "Dicamay Agta",
"duz": "Duli-Gey",
"dv": "Dhivehi; Divehi; Maldivian",
"dva": "Duau",
"dwa": "Diri",
"dwk": "Dawik Kui",
"dwr": "Dawro",
"dws": "Dutton World Speedwords",
"dwu": "Dhuwal",
"dww": "Dawawa",
"dwy": "Dhuwaya",
"dwz": "Dewas Rai",
"dya": "Dyan",
"dyb": "Dyaberdyaber",
"dyd": "Dyugun",
"dyg": "Villa Viciosa Agta",
"dyi": "Djimini Senoufo",
"dym": "Yanda Dom Dogon",
"dyn": "Dyangadi; Dhanggatti",
"dyo": "Jola-Fonyi",
"dyu": "Dyula",
"dyy": "Djabugay; Dyaabugay",
"dz": "Dzongkha",
"dza": "Tunzu",
"dze": "Djiwarli",
"dzg": "Dazaga",
"dzl": "Dzalakha",
"dzn": "Dzando",
"eaa": "Karenggapa",
"ebc": "Beginci",
"ebg": "Ebughu",
"ebk": "Eastern Bontok",
"ebo": "Teke-Ebo",
"ebr": "Ebrié",
"ebu": "Embu; Kiembu",
"ecr": "Eteocretan",
"ecs": "Ecuadorian Sign Language",
"ecy": "Eteocypriot",
"ee": "Ewe",
"eee": "E",
"efa": "Efai",
"efe": "Efe",
"efi": "Efik",
"ega": "Ega",
"egl": "Emilian",
"egm": "Benamanga",
"ego": "Eggon",
"egx": "Egyptian languages",
"egy": "Egyptian (Ancient)",
"ehs": "Miyakubo Sign Language",
"ehu": "Ehueun",
"eip": "Eipomek",
"eit": "Eitiep",
"eiv": "Askopan",
"eja": "Ejamat",
"eka": "Ekajuk",
"eke": "Ekit",
"ekg": "Ekari",
"eki": "Eki",
"ekk": "Standard Estonian",
"ekl": "Kol (Bangladesh); Kol",
"ekm": "Elip",
"eko": "Koti",
"ekp": "Ekpeye",
"ekr": "Yace",
"eky": "Eastern Kayah",
"el": "Modern Greek (1453-)",
"ele": "Elepi",
"elh": "El Hugeirat",
"eli": "Nding",
"elk": "Elkei",
"elm": "Eleme",
"elo": "El Molo",
"elu": "Elu",
"elx": "Elamite",
"ema": "Emai-Iuleha-Ora",
"emb": "Embaloh",
"eme": "Emerillon",
"emg": "Eastern Meohang",
"emi": "Mussau-Emira",
"emk": "Eastern Maninkakan",
"emm": "Mamulique",
"emn": "Eman",
"emp": "Northern Emberá",
"emq": "Eastern Minyag",
"ems": "Pacific Gulf Yupik",
"emu": "Eastern Muria",
"emw": "Emplawas",
"emx": "Erromintxela",
"emy": "Epigraphic Mayan",
"emz": "Mbessa",
"en": "English",
"ena": "Apali",
"enb": "Markweeta",
"enc": "En",
"end": "Ende",
"enf": "Forest Enets",
"enh": "Tundra Enets",
"enl": "Enlhet",
"enm": "Middle English (1100-1500)",
"enn": "Engenni",
"eno": "Enggano",
"enq": "Enga",
"enr": "Emumu; Emem",
"enu": "Enu",
"env": "Enwan (Edo State)",
"enw": "Enwan (Akwa Ibom State)",
"enx": "Enxet",
"eo": "Esperanto",
"eot": "Beti (Côte d'Ivoire)",
"epi": "Epie",
"era": "Eravallan",
"erg": "Sie",
"erh": "Eruwa",
"eri": "Ogea",
"erk": "South Efate",
"ero": "Horpa",
"err": "Erre",
"ers": "Ersu",
"ert": "Eritai",
"erw": "Erokwanas",
"es": "Spanish; Castilian",
"ese": "Ese Ejja",
"esg": "Aheri Gondi",
"esh": "Eshtehardi",
"esi": "North Alaskan Inupiatun",
"esk": "Northwest Alaska Inupiatun",
"esl": "Egypt Sign Language",
"esm": "Esuma",
"esn": "Salvadoran Sign Language",
"eso": "Estonian Sign Language",
"esq": "Esselen",
"ess": "Central Siberian Yupik",
"esu": "Central Yupik",
"esx": "Eskimo-Aleut languages",
"esy": "Eskayan",
"et": "Estonian",
"etb": "Etebi",
"etc": "Etchemin",
"eth": "Ethiopian Sign Language",
"etn": "Eton (Vanuatu)",
"eto": "Eton (Cameroon)",
"etr": "Edolo",
"ets": "Yekhee",
"ett": "Etruscan",
"etu": "Ejagham",
"etx": "Eten",
"etz": "Semimi",
"eu": "Basque",
"euq": "Basque (family)",
"eve": "Even",
"evh": "Uvbie",
"evn": "Evenki",
"ewo": "Ewondo",
"ext": "Extremaduran",
"eya": "Eyak",
"eyo": "Keiyo",
"eza": "Ezaa",
"eze": "Uzekwe",
"fa": "Persian",
"faa": "Fasu",
"fab": "Fa d'Ambu",
"fad": "Wagi",
"faf": "Fagani",
"fag": "Finongan",
"fah": "Baissa Fali",
"fai": "Faiwol",
"faj": "Faita",
"fak": "Fang (Cameroon)",
"fal": "South Fali",
"fam": "Fam",
"fan": "Fang (Equatorial Guinea)",
"fap": "Paloor",
"far": "Fataleka",
"fat": "Fanti",
"fau": "Fayu",
"fax": "Fala",
"fay": "Southwestern Fars",
"faz": "Northwestern Fars",
"fbl": "West Albay Bikol",
"fcs": "Quebec Sign Language",
"fer": "Feroge",
"ff": "Fulah",
"ffi": "Foia Foia",
"ffm": "Maasina Fulfulde",
"fgr": "Fongoro",
"fi": "Finnish",
"fia": "Nobiin",
"fie": "Fyer",
"fif": "Faifi",
"fil": "Filipino; Pilipino",
"fip": "Fipa",
"fir": "Firan",
"fit": "Tornedalen Finnish; Meänkieli",
"fiu": "Finno-Ugrian languages",
"fiw": "Fiwaga",
"fj": "Fijian",
"fkk": "Kirya-Konzəl",
"fkv": "Kven Finnish",
"fla": "Kalispel-Pend d'Oreille",
"flh": "Foau",
"fli": "Fali",
"fll": "North Fali",
"fln": "Flinders Island",
"flr": "Fuliiru",
"fly": "Flaaitaal; Tsotsitaal",
"fmp": "Fe'fe'",
"fmu": "Far Western Muria",
"fnb": "Fanbak",
"fng": "Fanagalo",
"fni": "Fania",
"fo": "Faroese",
"fod": "Foodo",
"foi": "Foi",
"fom": "Foma",
"fon": "Fon",
"for": "Fore",
"fos": "Siraya",
"fox": "Formosan languages",
"fpe": "Fernando Po Creole English",
"fqs": "Fas",
"fr": "French",
"frc": "Cajun French",
"frd": "Fordata",
"frk": "Frankish",
"frm": "Middle French (ca. 1400-1600)",
"fro": "Old French (842-ca. 1400)",
"frp": "Arpitan; Francoprovençal",
"frq": "Forak",
"frr": "Northern Frisian",
"frs": "Eastern Frisian",
"frt": "Fortsenal",
"fse": "Finnish Sign Language",
"fsl": "French Sign Language",
"fss": "Finland-Swedish Sign Language; finlandssvenskt teckenspråk; suomenruotsalainen viittomakieli",
"fub": "Adamawa Fulfulde",
"fuc": "Pulaar",
"fud": "East Futuna",
"fue": "Borgu Fulfulde",
"fuf": "Pular",
"fuh": "Western Niger Fulfulde",
"fui": "Bagirmi Fulfulde",
"fuj": "Ko",
"fum": "Fum",
"fun": "Fulniô",
"fuq": "Central-Eastern Niger Fulfulde",
"fur": "Friulian",
"fut": "Futuna-Aniwa",
"fuu": "Furu",
"fuv": "Nigerian Fulfulde",
"fuy": "Fuyug",
"fvr": "Fur",
"fwa": "Fwâi",
"fwe": "Fwe",
"fy": "Western Frisian",
"ga": "Irish",
"gaa": "Ga",
"gab": "Gabri",
"gac": "Mixed Great Andamanese",
"gad": "Gaddang",
"gae": "Guarequena",
"gaf": "Gende",
"gag": "Gagauz",
"gah": "Alekano",
"gai": "Borei",
"gaj": "Gadsup",
"gak": "Gamkonora",
"gal": "Galolen",
"gam": "Kandawo",
"gan": "Gan Chinese",
"gao": "Gants",
"gap": "Gal",
"gaq": "Gata'",
"gar": "Galeya",
"gas": "Adiwasi Garasia",
"gat": "Kenati",
"gau": "Mudhili Gadaba",
"gaw": "Nobonob",
"gax": "Borana-Arsi-Guji Oromo",
"gay": "Gayo",
"gaz": "West Central Oromo",
"gba": "Gbaya (Central African Republic)",
"gbb": "Kaytetye",
"gbd": "Karajarri",
"gbe": "Niksek",
"gbf": "Gaikundi",
"gbg": "Gbanziri",
"gbh": "Defi Gbe",
"gbi": "Galela",
"gbj": "Bodo Gadaba",
"gbk": "Gaddi",
"gbl": "Gamit",
"gbm": "Garhwali",
"gbn": "Mo'da",
"gbo": "Northern Grebo",
"gbp": "Gbaya-Bossangoa",
"gbq": "Gbaya-Bozoum",
"gbr": "Gbagyi",
"gbs": "Gbesi Gbe",
"gbu": "Gagadu",
"gbv": "Gbanu",
"gbw": "Gabi-Gabi",
"gbx": "Eastern Xwla Gbe",
"gby": "Gbari",
"gbz": "Zoroastrian Dari",
"gcc": "Mali",
"gcd": "Ganggalida",
"gce": "Galice",
"gcf": "Guadeloupean Creole French",
"gcl": "Grenadian Creole English",
"gcn": "Gaina",
"gcr": "Guianese Creole French",
"gct": "Colonia Tovar German",
"gd": "Scottish Gaelic; Gaelic",
"gda": "Gade Lohar",
"gdb": "Pottangi Ollar Gadaba",
"gdc": "Gugu Badhun",
"gdd": "Gedaged",
"gde": "Gude",
"gdf": "Guduf-Gava",
"gdg": "Ga'dang",
"gdh": "Gadjerawang; Gajirrabeng",
"gdi": "Gundi",
"gdj": "Gurdjar",
"gdk": "Gadang",
"gdl": "Dirasha",
"gdm": "Laal",
"gdn": "Umanakaina",
"gdo": "Ghodoberi",
"gdq": "Mehri",
"gdr": "Wipi",
"gds": "Ghandruk Sign Language",
"gdt": "Kungardutyi",
"gdu": "Gudu",
"gdx": "Godwari",
"gea": "Geruma",
"geb": "Kire",
"gec": "Gboloo Grebo",
"ged": "Gade",
"gef": "Gerai",
"geg": "Gengle",
"geh": "Hutterite German; Hutterisch",
"gei": "Gebe",
"gej": "Gen",
"gek": "Ywom",
"gel": "ut-Ma'in",
"gem": "Germanic languages",
"geq": "Geme",
"ges": "Geser-Gorom",
"gev": "Eviya",
"gew": "Gera",
"gex": "Garre",
"gey": "Enya",
"gez": "Geez",
"gfk": "Patpatar",
"gft": "Gafat",
"gga": "Gao",
"ggb": "Gbii",
"ggd": "Gugadj",
"gge": "Gurr-goni",
"ggg": "Gurgula",
"ggk": "Kungarakany",
"ggl": "Ganglau",
"ggt": "Gitua",
"ggu": "Gagu; Gban",
"ggw": "Gogodala",
"gha": "Ghadamès",
"ghc": "Hiberno-Scottish Gaelic",
"ghe": "Southern Ghale",
"ghh": "Northern Ghale",
"ghk": "Geko Karen",
"ghl": "Ghulfan",
"ghn": "Ghanongga",
"gho": "Ghomara",
"ghr": "Ghera",
"ghs": "Guhu-Samane",
"ght": "Kuke; Kutang Ghale",
"gia": "Kija",
"gib": "Gibanawa",
"gic": "Gail",
"gid": "Gidar",
"gie": "Gaɓogbo; Guébie",
"gig": "Goaria",
"gih": "Githabul",
"gii": "Girirra",
"gil": "Gilbertese",
"gim": "Gimi (Eastern Highlands)",
"gin": "Hinukh",
"gip": "Gimi (West New Britain)",
"giq": "Green Gelao",
"gir": "Red Gelao",
"gis": "North Giziga",
"git": "Gitxsan",
"giu": "Mulao",
"giw": "White Gelao",
"gix": "Gilima",
"giy": "Giyug",
"giz": "South Giziga",
"gjk": "Kachi Koli",
"gjm": "Gunditjmara",
"gjn": "Gonja",
"gjr": "Gurindji Kriol",
"gju": "Gujari",
"gka": "Guya",
"gkd": "Magɨ (Madang Province)",
"gke": "Ndai",
"gkn": "Gokana",
"gko": "Kok-Nar",
"gkp": "Guinea Kpelle",
"gku": "ǂUngkue",
"gl": "Galician",
"glb": "Belning",
"glc": "Bon Gula",
"gld": "Nanai",
"glh": "Northwest Pashai; Northwest Pashayi",
"glj": "Gula Iro",
"glk": "Gilaki",
"gll": "Garlali",
"glo": "Galambu",
"glr": "Glaro-Twabo",
"glu": "Gula (Chad)",
"glw": "Glavda",
"gly": "Gule",
"gma": "Gambera",
"gmb": "Gula'alaa",
"gmd": "Mághdì",
"gme": "East Germanic languages",
"gmg": "Magɨyi",
"gmh": "Middle High German (ca. 1050-1500)",
"gml": "Middle Low German",
"gmm": "Gbaya-Mbodomo",
"gmn": "Gimnime",
"gmq": "North Germanic languages",
"gmr": "Mirning; Mirniny",
"gmu": "Gumalu",
"gmv": "Gamo",
"gmw": "West Germanic languages",
"gmx": "Magoma",
"gmy": "Mycenaean Greek",
"gmz": "Mgbolizhia",
"gn": "Guarani",
"gna": "Kaansa",
"gnb": "Gangte",
"gnc": "Guanche",
"gnd": "Zulgo-Gemzek",
"gne": "Ganang",
"gng": "Ngangam",
"gnh": "Lere",
"gni": "Gooniyandi",
"gnj": "Ngen",
"gnk": "ǁGana",
"gnl": "Gangulu",
"gnm": "Ginuman",
"gnn": "Gumatj",
"gno": "Northern Gondi",
"gnq": "Gana",
"gnr": "Gureng Gureng",
"gnt": "Guntai",
"gnu": "Gnau",
"gnw": "Western Bolivian Guaraní",
"gnz": "Ganzi",
"goa": "Guro",
"gob": "Playero",
"goc": "Gorakor",
"god": "Godié",
"goe": "Gongduk",
"gof": "Gofa",
"gog": "Gogo",
"goh": "Old High German (ca. 750-1050)",
"goi": "Gobasi",
"goj": "Gowlan",
"gok": "Gowli",
"gol": "Gola",
"gom": "Goan Konkani",
"gon": "Gondi",
"goo": "Gone Dau",
"gop": "Yeretuar",
"goq": "Gorap",
"gor": "Gorontalo",
"gos": "Gronings",
"got": "Gothic",
"gou": "Gavar",
"gov": "Goo",
"gow": "Gorowa",
"gox": "Gobu",
"goy": "Goundo",
"goz": "Gozarkhani",
"gpa": "Gupa-Abawa",
"gpe": "Ghanaian Pidgin English",
"gpn": "Taiap",
"gqa": "Ga'anda",
"gqi": "Guiqiong",
"gqn": "Guana (Brazil)",
"gqr": "Gor",
"gqu": "Qau",
"gra": "Rajput Garasia",
"grb": "Grebo",
"grc": "Ancient Greek (to 1453)",
"grd": "Guruntum-Mbaaru",
"grg": "Madi",
"grh": "Gbiri-Niragu",
"gri": "Ghari",
"grj": "Southern Grebo",
"grk": "Greek languages",
"grm": "Kota Marudu Talantang",
"gro": "Groma",
"grq": "Gorovu",
"grr": "Taznatit",
"grs": "Gresi",
"grt": "Garo",
"gru": "Kistane",
"grv": "Central Grebo",
"grw": "Gweda",
"grx": "Guriaso",
"gry": "Barclayville Grebo",
"grz": "Guramalum",
"gse": "Ghanaian Sign Language",
"gsg": "German Sign Language",
"gsl": "Gusilay",
"gsm": "Guatemalan Sign Language",
"gsn": "Nema; Gusan",
"gso": "Southwest Gbaya",
"gsp": "Wasembo",
"gss": "Greek Sign Language",
"gsw": "Swiss German; Alemannic; Alsatian",
"gta": "Guató",
"gtu": "Aghu-Tharnggala",
"gu": "Gujarati",
"gua": "Shiki",
"gub": "Guajajára",
"guc": "Wayuu",
"gud": "Yocoboué Dida",
"gue": "Gurindji",
"guf": "Gupapuyngu",
"gug": "Paraguayan Guaraní",
"guh": "Guahibo",
"gui": "Eastern Bolivian Guaraní",
"guk": "Gumuz",
"gul": "Sea Island Creole English",
"gum": "Guambiano",
"gun": "Mbyá Guaraní",
"guo": "Guayabero",
"gup": "Gunwinggu",
"guq": "Aché",
"gur": "Farefare",
"gus": "Guinean Sign Language",
"gut": "Maléku Jaíka",
"guu": "Yanomamö",
"guw": "Gun",
"gux": "Gourmanchéma",
"guz": "Gusii; Ekegusii",
"gv": "Manx",
"gva": "Guana (Paraguay)",
"gvc": "Guanano",
"gve": "Duwet",
"gvf": "Golin",
"gvj": "Guajá",
"gvl": "Gulay",
"gvm": "Gurmana",
"gvn": "Kuku-Yalanji",
"gvo": "Gavião Do Jiparaná",
"gvp": "Pará Gavião",
"gvr": "Gurung",
"gvs": "Gumawana",
"gvy": "Guyani",
"gwa": "Mbato",
"gwb": "Gwa",
"gwc": "Gawri; Kalami",
"gwd": "Gawwada",
"gwe": "Gweno",
"gwf": "Gowro",
"gwg": "Moo",
"gwi": "Gwichʼin",
"gwj": "ǀGwi",
"gwm": "Awngthim",
"gwn": "Gwandara",
"gwr": "Gwere",
"gwt": "Gawar-Bati",
"gwu": "Guwamu",
"gww": "Kwini",
"gwx": "Gua",
"gxx": "Wè Southern",
"gya": "Northwest Gbaya",
"gyb": "Garus",
"gyd": "Kayardild",
"gye": "Gyem",
"gyf": "Gungabula",
"gyg": "Gbayi",
"gyi": "Gyele",
"gyl": "Gayil",
"gym": "Ngäbere",
"gyn": "Guyanese Creole English",
"gyo": "Gyalsumdo",
"gyr": "Guarayu",
"gyy": "Gunya",
"gyz": "Geji; Gyaazi",
"gza": "Ganza",
"gzi": "Gazi",
"gzn": "Gane",
"ha": "Hausa",
"haa": "Han",
"hab": "Hanoi Sign Language",
"hac": "Gurani",
"had": "Hatam",
"hae": "Eastern Oromo",
"haf": "Haiphong Sign Language",
"hag": "Hanga",
"hah": "Hahon",
"hai": "Haida",
"haj": "Hajong",
"hak": "Hakka Chinese",
"hal": "Halang",
"ham": "Hewa",
"han": "Hangaza",
"hao": "Hakö",
"hap": "Hupla",
"haq": "Ha",
"har": "Harari",
"has": "Haisla",
"hav": "Havu",
"haw": "Hawaiian",
"hax": "Southern Haida",
"hay": "Haya",
"haz": "Hazaragi",
"hba": "Hamba",
"hbb": "Huba",
"hbn": "Heiban",
"hbo": "Ancient Hebrew",
"hbu": "Habu",
"hca": "Andaman Creole Hindi",
"hch": "Huichol",
"hdn": "Northern Haida",
"hds": "Honduras Sign Language",
"hdy": "Hadiyya",
"he": "Hebrew",
"hea": "Northern Qiandong Miao",
"hed": "Herdé",
"heg": "Helong",
"heh": "Hehe",
"hei": "Heiltsuk",
"hem": "Hemba",
"hgm": "Haiǁom",
"hgw": "Haigwai",
"hhi": "Hoia Hoia",
"hhr": "Kerak",
"hhy": "Hoyahoya",
"hi": "Hindi",
"hia": "Lamang",
"hib": "Hibito",
"hid": "Hidatsa",
"hif": "Fiji Hindi",
"hig": "Kamwe",
"hih": "Pamosu",
"hii": "Hinduri",
"hij": "Hijuk",
"hik": "Seit-Kaitetu",
"hil": "Hiligaynon",
"him": "Himachali languages; Western Pahari languages",
"hio": "Tsoa",
"hir": "Himarimã",
"hit": "Hittite",
"hiw": "Hiw",
"hix": "Hixkaryána",
"hji": "Haji",
"hka": "Kahe",
"hke": "Hunde",
"hkh": "Khah; Poguli",
"hkk": "Hunjara-Kaina Ke",
"hkn": "Mel-Khaonh",
"hks": "Hong Kong Sign Language; Heung Kong Sau Yue",
"hla": "Halia",
"hlb": "Halbi",
"hld": "Halang Doan",
"hle": "Hlersu",
"hlt": "Matu Chin",
"hlu": "Hieroglyphic Luwian",
"hma": "Southern Mashan Hmong; Southern Mashan Miao",
"hmb": "Humburi Senni Songhay",
"hmc": "Central Huishui Hmong; Central Huishui Miao",
"hmd": "Large Flowery Miao; A-hmaos; Da-Hua Miao",
"hme": "Eastern Huishui Hmong; Eastern Huishui Miao",
"hmf": "Hmong Don",
"hmg": "Southwestern Guiyang Hmong",
"hmh": "Southwestern Huishui Hmong; Southwestern Huishui Miao",
"hmi": "Northern Huishui Hmong; Northern Huishui Miao",
"hmj": "Ge; Gejia",
"hmk": "Maek",
"hml": "Luopohe Hmong; Luopohe Miao",
"hmm": "Central Mashan Hmong; Central Mashan Miao",
"hmn": "Hmong; Mong",
"hmp": "Northern Mashan Hmong; Northern Mashan Miao",
"hmq": "Eastern Qiandong Miao",
"hmr": "Hmar",
"hms": "Southern Qiandong Miao",
"hmt": "Hamtai",
"hmu": "Hamap",
"hmv": "Hmong Dô",
"hmw": "Western Mashan Hmong; Western Mashan Miao",
"hmx": "Hmong-Mien languages",
"hmy": "Southern Guiyang Hmong; Southern Guiyang Miao",
"hmz": "Hmong Shua; Sinicized Miao",
"hna": "Mina (Cameroon)",
"hnd": "Southern Hindko",
"hne": "Chhattisgarhi",
"hng": "Hungu",
"hnh": "ǁAni",
"hni": "Hani",
"hnj": "Hmong Njua; Mong Leng; Mong Njua",
"hnn": "Hanunoo",
"hno": "Northern Hindko",
"hns": "Caribbean Hindustani",
"hnu": "Hung",
"ho": "Hiri Motu",
"hoa": "Hoava",
"hob": "Mari (Madang Province)",
"hoc": "Ho",
"hod": "Holma",
"hoe": "Horom",
"hoh": "Hobyót",
"hoi": "Holikachuk",
"hoj": "Hadothi; Haroti",
"hok": "Hokan languages",
"hol": "Holu",
"hom": "Homa",
"hoo": "Holoholo",
"hop": "Hopi",
"hor": "Horo",
"hos": "Ho Chi Minh City Sign Language",
"hot": "Hote; Malê",
"hov": "Hovongan",
"how": "Honi",
"hoy": "Holiya",
"hoz": "Hozo",
"hpo": "Hpon",
"hps": "Hawai'i Sign Language (HSL); Hawai'i Pidgin Sign Language",
"hr": "Croatian",
"hra": "Hrangkhol",
"hrc": "Niwer Mil",
"hre": "Hre",
"hrk": "Haruku",
"hrm": "Horned Miao",
"hro": "Haroi",
"hrp": "Nhirrpi",
"hrt": "Hértevin",
"hru": "Hruso",
"hrw": "Warwar Feni",
"hrx": "Hunsrik",
"hrz": "Harzani",
"hsb": "Upper Sorbian",
"hsh": "Hungarian Sign Language",
"hsl": "Hausa Sign Language",
"hsn": "Xiang Chinese",
"hss": "Harsusi",
"ht": "Haitian; Haitian Creole",
"hti": "Hoti",
"hto": "Minica Huitoto",
"hts": "Hadza",
"htu": "Hitu",
"htx": "Middle Hittite",
"hu": "Hungarian",
"hub": "Huambisa",
"huc": "ǂHua; ǂʼAmkhoe",
"hud": "Huaulu",
"hue": "San Francisco Del Mar Huave",
"huf": "Humene",
"hug": "Huachipaeri",
"huh": "Huilliche",
"hui": "Huli",
"huj": "Northern Guiyang Hmong; Northern Guiyang Miao",
"huk": "Hulung",
"hul": "Hula",
"hum": "Hungana",
"huo": "Hu",
"hup": "Hupa",
"huq": "Tsat",
"hur": "Halkomelem",
"hus": "Huastec",
"hut": "Humla",
"huu": "Murui Huitoto",
"huv": "San Mateo Del Mar Huave",
"huw": "Hukumina",
"hux": "Nüpode Huitoto",
"huy": "Hulaulá",
"huz": "Hunzib",
"hvc": "Haitian Vodoun Culture Language",
"hve": "San Dionisio Del Mar Huave",
"hvk": "Haveke",
"hvn": "Sabu",
"hvv": "Santa María Del Mar Huave",
"hwa": "Wané",
"hwc": "Hawai'i Creole English; Hawai'i Pidgin",
"hwo": "Hwana",
"hy": "Armenian",
"hya": "Hya",
"hyw": "Western Armenian",
"hyx": "Armenian (family)",
"hz": "Herero",
"ia": "Interlingua (International Auxiliary Language Association)",
"iai": "Iaai",
"ian": "Iatmul",
"iar": "Purari",
"iba": "Iban",
"ibb": "Ibibio",
"ibd": "Iwaidja",
"ibe": "Akpes",
"ibg": "Ibanag",
"ibh": "Bih",
"ibl": "Ibaloi",
"ibm": "Agoi",
"ibn": "Ibino",
"ibr": "Ibuoro",
"ibu": "Ibu",
"iby": "Ibani",
"ica": "Ede Ica",
"ich": "Etkywan",
"icl": "Icelandic Sign Language",
"icr": "Islander Creole English",
"id": "Indonesian",
"ida": "Idakho-Isukha-Tiriki; Luidakho-Luisukha-Lutirichi",
"idb": "Indo-Portuguese",
"idc": "Idon; Ajiya",
"idd": "Ede Idaca",
"ide": "Idere",
"idi": "Idi",
"idr": "Indri",
"ids": "Idesa",
"idt": "Idaté",
"idu": "Idoma",
"ie": "Interlingue; Occidental",
"ifa": "Amganad Ifugao",
"ifb": "Batad Ifugao; Ayangan Ifugao",
"ife": "Ifè",
"iff": "Ifo",
"ifk": "Tuwali Ifugao",
"ifm": "Teke-Fuumu",
"ifu": "Mayoyao Ifugao",
"ify": "Keley-I Kallahan",
"ig": "Igbo",
"igb": "Ebira",
"ige": "Igede",
"igg": "Igana",
"igl": "Igala",
"igm": "Kanggape",
"ign": "Ignaciano",
"igo": "Isebe",
"igs": "Interglossa",
"igw": "Igwe",
"ihb": "Iha Based Pidgin",
"ihi": "Ihievbe",
"ihp": "Iha",
"ihw": "Bidhawal",
"ii": "Sichuan Yi; Nuosu",
"iin": "Thiin",
"iir": "Indo-Iranian languages",
"ijc": "Izon",
"ije": "Biseni",
"ijj": "Ede Ije",
"ijn": "Kalabari",
"ijo": "Ijo languages",
"ijs": "Southeast Ijo",
"ik": "Inupiaq",
"ike": "Eastern Canadian Inuktitut",
"iki": "Iko",
"ikk": "Ika",
"ikl": "Ikulu",
"iko": "Olulumo-Ikom",
"ikp": "Ikpeshi",
"ikr": "Ikaranggal",
"iks": "Inuit Sign Language",
"ikt": "Inuinnaqtun; Western Canadian Inuktitut",
"ikv": "Iku-Gora-Ankwa",
"ikw": "Ikwere",
"ikx": "Ik",
"ikz": "Ikizu",
"ila": "Ile Ape",
"ilb": "Ila",
"ilg": "Garig-Ilgar",
"ili": "Ili Turki",
"ilk": "Ilongot",
"ilm": "Iranun (Malaysia)",
"ilo": "Iloko",
"ilp": "Iranun (Philippines)",
"ils": "International Sign",
"ilu": "Ili'uun",
"ilv": "Ilue",
"ima": "Mala Malasar",
"imi": "Anamgura",
"iml": "Miluk",
"imn": "Imonda",
"imo": "Imbongu",
"imr": "Imroing",
"ims": "Marsian",
"imt": "Imotong",
"imy": "Milyan",
"inb": "Inga",
"inc": "Indic languages",
"ine": "Indo-European languages",
"ing": "Degexit'an",
"inh": "Ingush",
"inj": "Jungle Inga",
"inl": "Indonesian Sign Language",
"inm": "Minaean",
"inn": "Isinai",
"ino": "Inoke-Yate",
"inp": "Iñapari",
"ins": "Indian Sign Language",
"int": "Intha",
"inz": "Ineseño",
"io": "Ido",
"ior": "Inor",
"iou": "Tuma-Irumu",
"iow": "Iowa-Oto",
"ipi": "Ipili",
"ipo": "Ipiko",
"iqu": "Iquito",
"iqw": "Ikwo",
"ira": "Iranian languages",
"ire": "Iresim",
"irh": "Irarutu",
"iri": "Rigwe; Irigwe",
"irk": "Iraqw",
"irn": "Irántxe",
"iro": "Iroquoian languages",
"irr": "Ir",
"iru": "Irula",
"irx": "Kamberau",
"iry": "Iraya",
"is": "Icelandic",
"isa": "Isabi",
"isc": "Isconahua",
"isd": "Isnag",
"ise": "Italian Sign Language",
"isg": "Irish Sign Language",
"ish": "Esan",
"isi": "Nkem-Nkum",
"isk": "Ishkashimi",
"ism": "Masimasi",
"isn": "Isanzu",
"iso": "Isoko",
"isr": "Israeli Sign Language",
"ist": "Istriot",
"isu": "Isu (Menchum Division)",
"it": "Italian",
"itb": "Binongan Itneg",
"itc": "Italic languages",
"itd": "Southern Tidung",
"ite": "Itene",
"iti": "Inlaod Itneg",
"itk": "Judeo-Italian",
"itl": "Itelmen",
"itm": "Itu Mbon Uzo",
"ito": "Itonama",
"itr": "Iteri",
"its": "Isekiri",
"itt": "Maeng Itneg",
"itv": "Itawit",
"itw": "Ito",
"itx": "Itik",
"ity": "Moyadan Itneg",
"itz": "Itzá",
"iu": "Inuktitut",
"ium": "Iu Mien",
"ivb": "Ibatan",
"ivv": "Ivatan",
"iwk": "I-Wak",
"iwm": "Iwam",
"iwo": "Iwur",
"iws": "Sepik Iwam",
"ixc": "Ixcatec",
"ixl": "Ixil",
"iya": "Iyayu",
"iyo": "Mesaka",
"iyx": "Yaka (Congo)",
"izh": "Ingrian",
"izr": "Izere",
"izz": "Izii",
"ja": "Japanese",
"jaa": "Jamamadí",
"jab": "Hyam",
"jac": "Popti'; Jakalteko",
"jad": "Jahanka",
"jae": "Yabem",
"jaf": "Jara",
"jah": "Jah Hut",
"jaj": "Zazao",
"jak": "Jakun",
"jal": "Yalahatan",
"jam": "Jamaican Creole English",
"jan": "Jandai",
"jao": "Yanyuwa",
"jaq": "Yaqay",
"jas": "New Caledonian Javanese",
"jat": "Jakati",
"jau": "Yaur",
"jax": "Jambi Malay",
"jay": "Yan-nhangu; Nhangu",
"jaz": "Jawe",
"jbe": "Judeo-Berber",
"jbi": "Badjiri",
"jbj": "Arandai",
"jbk": "Barikewa",
"jbm": "Bijim",
"jbn": "Nafusi",
"jbo": "Lojban",
"jbr": "Jofotek-Bromnya",
"jbt": "Jabutí",
"jbu": "Jukun Takum",
"jbw": "Yawijibaya",
"jcs": "Jamaican Country Sign Language",
"jct": "Krymchak",
"jda": "Jad",
"jdg": "Jadgali",
"jdt": "Judeo-Tat",
"jeb": "Jebero",
"jee": "Jerung",
"jeh": "Jeh",
"jei": "Yei",
"jek": "Jeri Kuo",
"jel": "Yelmek",
"jen": "Dza",
"jer": "Jere",
"jet": "Manem",
"jeu": "Jonkor Bourmataguil",
"jgb": "Ngbee",
"jge": "Judeo-Georgian",
"jgk": "Gwak",
"jgo": "Ngomba",
"jhi": "Jehai",
"jhs": "Jhankot Sign Language",
"jia": "Jina",
"jib": "Jibu",
"jic": "Tol",
"jid": "Bu (Kaduna State)",
"jie": "Jilbe",
"jig": "Jingulu; Djingili",
"jih": "sTodsde; Shangzhai",
"jii": "Jiiddu",
"jil": "Jilim",
"jim": "Jimi (Cameroon)",
"jio": "Jiamao",
"jiq": "Guanyinqiao; Lavrung",
"jit": "Jita",
"jiu": "Youle Jinuo",
"jiv": "Shuar",
"jiy": "Buyuan Jinuo",
"jje": "Jejueo",
"jjr": "Bankal",
"jka": "Kaera",
"jkm": "Mobwa Karen",
"jko": "Kubo",
"jkp": "Paku Karen",
"jkr": "Koro (India)",
"jks": "Amami Koniya Sign Language",
"jku": "Labir",
"jle": "Ngile",
"jls": "Jamaican Sign Language",
"jma": "Dima",
"jmb": "Zumbun",
"jmc": "Machame",
"jmd": "Yamdena",
"jmi": "Jimi (Nigeria)",
"jml": "Jumli",
"jmn": "Makuri Naga",
"jmr": "Kamara",
"jms": "Mashi (Nigeria)",
"jmw": "Mouwase",
"jmx": "Western Juxtlahuaca Mixtec",
"jna": "Jangshung",
"jnd": "Jandavra",
"jng": "Yangman",
"jni": "Janji",
"jnj": "Yemsa",
"jnl": "Rawat",
"jns": "Jaunsari",
"job": "Joba",
"jod": "Wojenaka",
"jog": "Jogi",
"jor": "Jorá",
"jos": "Jordanian Sign Language",
"jow": "Jowulu",
"jpa": "Jewish Palestinian Aramaic",
"jpr": "Judeo-Persian",
"jpx": "Japanese (family)",
"jqr": "Jaqaru",
"jra": "Jarai",
"jrb": "Judeo-Arabic",
"jrr": "Jiru",
"jrt": "Jakattoe",
"jru": "Japrería",
"jsl": "Japanese Sign Language",
"jua": "Júma",
"jub": "Wannu",
"juc": "Jurchen",
"jud": "Worodougou",
"juh": "Hõne",
"jui": "Ngadjuri",
"juk": "Wapan",
"jul": "Jirel",
"jum": "Jumjum",
"jun": "Juang",
"juo": "Jiba",
"jup": "Hupdë",
"jur": "Jurúna",
"jus": "Jumla Sign Language",
"jut": "Jutish",
"juu": "Ju",
"juw": "Wãpha",
"juy": "Juray",
"jv": "Javanese",
"jvd": "Javindo",
"jvn": "Caribbean Javanese",
"jwi": "Jwira-Pepesa",
"jya": "Jiarong",
"jye": "Judeo-Yemeni Arabic",
"jyy": "Jaya",
"ka": "Georgian",
"kaa": "Kara-Kalpak; Karakalpak",
"kab": "Kabyle",
"kac": "Kachin; Jingpho",
"kad": "Adara",
"kae": "Ketangalan",
"kaf": "Katso",
"kag": "Kajaman",
"kah": "Kara (Central African Republic)",
"kai": "Karekare",
"kaj": "Jju",
"kak": "Kalanguya; Kayapa Kallahan",
"kam": "Kamba (Kenya)",
"kao": "Xaasongaxango",
"kap": "Bezhta",
"kaq": "Capanahua",
"kar": "Karen languages",
"kav": "Katukína",
"kaw": "Kawi",
"kax": "Kao",
"kay": "Kamayurá",
"kba": "Kalarko",
"kbb": "Kaxuiâna",
"kbc": "Kadiwéu",
"kbd": "Kabardian",
"kbe": "Kanju",
"kbg": "Khamba",
"kbh": "Camsá",
"kbi": "Kaptiau",
"kbj": "Kari",
"kbk": "Grass Koiari",
"kbl": "Kanembu",
"kbm": "Iwal",
"kbn": "Kare (Central African Republic)",
"kbo": "Keliko",
"kbp": "Kabiyè",
"kbq": "Kamano",
"kbr": "Kafa",
"kbs": "Kande",
"kbt": "Abadi",
"kbu": "Kabutra",
"kbv": "Dera (Indonesia)",
"kbw": "Kaiep",
"kbx": "Ap Ma",
"kby": "Manga Kanuri",
"kbz": "Duhwa",
"kca": "Khanty",
"kcb": "Kawacha",
"kcc": "Lubila",
"kcd": "Ngkâlmpw Kanum",
"kce": "Kaivi",
"kcf": "Ukaan",
"kcg": "Tyap",
"kch": "Vono",
"kci": "Kamantan",
"kcj": "Kobiana",
"kck": "Kalanga",
"kcl": "Kela (Papua New Guinea); Kala",
"kcm": "Gula (Central African Republic)",
"kcn": "Nubi",
"kco": "Kinalakna",
"kcp": "Kanga",
"kcq": "Kamo",
"kcr": "Katla",
"kcs": "Koenoem",
"kct": "Kaian",
"kcu": "Kami (Tanzania)",
"kcv": "Kete",
"kcw": "Kabwari",
"kcx": "Kachama-Ganjule",
"kcy": "Korandje",
"kcz": "Konongo",
"kda": "Worimi",
"kdc": "Kutu",
"kdd": "Yankunytjatjara",
"kde": "Makonde",
"kdf": "Mamusi",
"kdg": "Seba",
"kdh": "Tem",
"kdi": "Kumam",
"kdj": "Karamojong",
"kdk": "Numèè; Kwényi",
"kdl": "Tsikimba",
"kdm": "Kagoma",
"kdn": "Kunda",
"kdo": "Kordofanian languages",
"kdp": "Kaningdon-Nindem",
"kdq": "Koch",
"kdr": "Karaim",
"kdt": "Kuy",
"kdu": "Kadaru",
"kdw": "Koneraw",
"kdx": "Kam",
"kdy": "Keder; Keijar",
"kdz": "Kwaja",
"kea": "Kabuverdianu",
"keb": "Kélé",
"kec": "Keiga",
"ked": "Kerewe",
"kee": "Eastern Keres",
"kef": "Kpessi",
"keg": "Tese",
"keh": "Keak",
"kei": "Kei",
"kej": "Kadar",
"kek": "Kekchí",
"kel": "Kela (Democratic Republic of Congo)",
"kem": "Kemak",
"ken": "Kenyang",
"keo": "Kakwa",
"kep": "Kaikadi",
"keq": "Kamar",
"ker": "Kera",
"kes": "Kugbo",
"ket": "Ket",
"keu": "Akebu",
"kev": "Kanikkaran",
"kew": "West Kewa",
"kex": "Kukna",
"key": "Kupia",
"kez": "Kukele",
"kfa": "Kodava",
"kfb": "Northwestern Kolami",
"kfc": "Konda-Dora",
"kfd": "Korra Koraga",
"kfe": "Kota (India)",
"kff": "Koya",
"kfg": "Kudiya",
"kfh": "Kurichiya",
"kfi": "Kannada Kurumba",
"kfj": "Kemiehua",
"kfk": "Kinnauri",
"kfl": "Kung",
"kfm": "Khunsari",
"kfn": "Kuk",
"kfo": "Koro (Côte d'Ivoire)",
"kfp": "Korwa",
"kfq": "Korku",
"kfr": "Kachhi; Kutchi",
"kfs": "Bilaspuri",
"kft": "Kanjari",
"kfu": "Katkari",
"kfv": "Kurmukar",
"kfw": "Kharam Naga",
"kfx": "Kullu Pahari",
"kfy": "Kumaoni",
"kfz": "Koromfé",
"kg": "Kongo",
"kga": "Koyaga",
"kgb": "Kawe",
"kge": "Komering",
"kgf": "Kube",
"kgg": "Kusunda",
"kgi": "Selangor Sign Language",
"kgj": "Gamale Kham",
"kgk": "Kaiwá",
"kgl": "Kunggari",
"kgm": "Karipúna",
"kgn": "Karingani",
"kgo": "Krongo",
"kgp": "Kaingang",
"kgq": "Kamoro",
"kgr": "Abun",
"kgs": "Kumbainggar",
"kgt": "Somyev",
"kgu": "Kobol",
"kgv": "Karas",
"kgw": "Karon Dori",
"kgx": "Kamaru",
"kgy": "Kyerung",
"kha": "Khasi",
"khb": "Lü",
"khc": "Tukang Besi North",
"khd": "Bädi Kanum",
"khe": "Korowai",
"khf": "Khuen",
"khg": "Khams Tibetan",
"khh": "Kehu",
"khi": "Khoisan languages",
"khj": "Kuturmi",
"khk": "Halh Mongolian",
"khl": "Lusi",
"khn": "Khandesi",
"kho": "Khotanese; Sakan",
"khp": "Kapori; Kapauri",
"khq": "Koyra Chiini Songhay",
"khr": "Kharia",
"khs": "Kasua",
"kht": "Khamti",
"khu": "Nkhumbi",
"khv": "Khvarshi",
"khw": "Khowar",
"khx": "Kanu",
"khy": "Kele (Democratic Republic of Congo)",
"khz": "Keapara",
"ki": "Kikuyu; Gikuyu",
"kia": "Kim",
"kib": "Koalib",
"kic": "Kickapoo",
"kid": "Koshin",
"kie": "Kibet",
"kif": "Eastern Parbate Kham",
"kig": "Kimaama; Kimaghima",
"kih": "Kilmeri",
"kii": "Kitsai",
"kij": "Kilivila",
"kil": "Kariya",
"kim": "Karagas",
"kio": "Kiowa",
"kip": "Sheshi Kham",
"kiq": "Kosadle; Kosare",
"kis": "Kis",
"kit": "Agob",
"kiu": "Kirmanjki (individual language)",
"kiv": "Kimbu",
"kiw": "Northeast Kiwai",
"kix": "Khiamniungan Naga",
"kiy": "Kirikiri",
"kiz": "Kisi",
"kj": "Kuanyama; Kwanyama",
"kja": "Mlap",
"kjb": "Q'anjob'al; Kanjobal",
"kjc": "Coastal Konjo",
"kjd": "Southern Kiwai",
"kje": "Kisar",
"kjg": "Khmu",
"kjh": "Khakas",
"kji": "Zabana",
"kjj": "Khinalugh",
"kjk": "Highland Konjo",
"kjl": "Western Parbate Kham",
"kjm": "Kháng",
"kjn": "Kunjen",
"kjo": "Harijan Kinnauri",
"kjp": "Pwo Eastern Karen",
"kjq": "Western Keres",
"kjr": "Kurudu",
"kjs": "East Kewa",
"kjt": "Phrae Pwo Karen",
"kju": "Kashaya",
"kjv": "Kaikavian Literary Language",
"kjx": "Ramopa",
"kjy": "Erave",
"kjz": "Bumthangkha",
"kk": "Kazakh",
"kka": "Kakanda",
"kkb": "Kwerisa",
"kkc": "Odoodee",
"kkd": "Kinuku",
"kke": "Kakabe",
"kkf": "Kalaktang Monpa",
"kkg": "Mabaka Valley Kalinga",
"kkh": "Khün",
"kki": "Kagulu",
"kkj": "Kako",
"kkk": "Kokota",
"kkl": "Kosarek Yale",
"kkm": "Kiong",
"kkn": "Kon Keu",
"kko": "Karko",
"kkp": "Gugubera; Koko-Bera",
"kkq": "Kaeku",
"kkr": "Kir-Balar",
"kks": "Giiwo",
"kkt": "Koi",
"kku": "Tumi",
"kkv": "Kangean",
"kkw": "Teke-Kukuya",
"kkx": "Kohin",
"kky": "Guugu Yimidhirr; Guguyimidjir",
"kkz": "Kaska",
"kl": "Kalaallisut; Greenlandic",
"kla": "Klamath-Modoc",
"klb": "Kiliwa",
"klc": "Kolbila",
"kld": "Gamilaraay",
"kle": "Kulung (Nepal)",
"klf": "Kendeje",
"klg": "Tagakaulo",
"klh": "Weliki",
"kli": "Kalumpang",
"klj": "Khalaj",
"klk": "Kono (Nigeria)",
"kll": "Kagan Kalagan",
"klm": "Migum",
"kln": "Kalenjin",
"klo": "Kapya",
"klp": "Kamasa",
"klq": "Rumu",
"klr": "Khaling",
"kls": "Kalasha",
"klt": "Nukna",
"klu": "Klao",
"klv": "Maskelynes",
"klw": "Tado; Lindu",
"klx": "Koluwawa",
"kly": "Kalao",
"klz": "Kabola",
"km": "Khmer; Central Khmer",
"kma": "Konni",
"kmb": "Kimbundu",
"kmc": "Southern Dong",
"kmd": "Majukayang Kalinga",
"kme": "Bakole",
"kmf": "Kare (Papua New Guinea)",
"kmg": "Kâte",
"kmh": "Kalam",
"kmi": "Kami (Nigeria)",
"kmj": "Kumarbhag Paharia",
"kmk": "Limos Kalinga",
"kml": "Tanudan Kalinga",
"kmm": "Kom (India)",
"kmn": "Awtuw",
"kmo": "Kwoma",
"kmp": "Gimme",
"kmq": "Kwama",
"kmr": "Northern Kurdish",
"kms": "Kamasau",
"kmt": "Kemtuik",
"kmu": "Kanite",
"kmv": "Karipúna Creole French",
"kmw": "Komo (Democratic Republic of Congo)",
"kmx": "Waboda",
"kmy": "Koma",
"kmz": "Khorasani Turkish",
"kn": "Kannada",
"kna": "Dera (Nigeria)",
"knb": "Lubuagan Kalinga",
"knc": "Central Kanuri",
"knd": "Konda",
"kne": "Kankanaey",
"knf": "Mankanya",
"kng": "Koongo",
"kni": "Kanufi",
"knj": "Western Kanjobal",
"knk": "Kuranko",
"knl": "Keninjal",
"knm": "Kanamarí",
"knn": "Konkani (individual language)",
"kno": "Kono (Sierra Leone)",
"knp": "Kwanja",
"knq": "Kintaq",
"knr": "Kaningra",
"kns": "Kensiu",
"knt": "Panoan Katukína",
"knu": "Kono (Guinea)",
"knv": "Tabo",
"knw": "Kung-Ekoka",
"knx": "Kendayan; Salako",
"kny": "Kanyok",
"knz": "Kalamsé",
"ko": "Korean",
"koa": "Konomala",
"koc": "Kpati",
"kod": "Kodi",
"koe": "Kacipo-Bale Suri",
"kof": "Kubi",
"kog": "Cogui; Kogi",
"koh": "Koyo",
"koi": "Komi-Permyak",
"kok": "Konkani (macrolanguage)",
"kol": "Kol (Papua New Guinea)",
"koo": "Konzo",
"kop": "Waube",
"koq": "Kota (Gabon)",
"kos": "Kosraean",
"kot": "Lagwan",
"kou": "Koke",
"kov": "Kudu-Camo",
"kow": "Kugama",
"koy": "Koyukon",
"koz": "Korak",
"kpa": "Kutto",
"kpb": "Mullu Kurumba",
"kpc": "Curripaco",
"kpd": "Koba",
"kpe": "Kpelle",
"kpf": "Komba",
"kpg": "Kapingamarangi",
"kph": "Kplang",
"kpi": "Kofei",
"kpj": "Karajá",
"kpk": "Kpan",
"kpl": "Kpala",
"kpm": "Koho",
"kpn": "Kepkiriwát",
"kpo": "Ikposo",
"kpq": "Korupun-Sela",
"kpr": "Korafe-Yegha",
"kps": "Tehit",
"kpt": "Karata",
"kpu": "Kafoa",
"kpv": "Komi-Zyrian",
"kpw": "Kobon",
"kpx": "Mountain Koiali",
"kpy": "Koryak",
"kpz": "Kupsabiny",
"kqa": "Mum",
"kqb": "Kovai",
"kqc": "Doromu-Koki",
"kqd": "Koy Sanjaq Surat",
"kqe": "Kalagan",
"kqf": "Kakabai",
"kqg": "Khe",
"kqh": "Kisankasa",
"kqi": "Koitabu",
"kqj": "Koromira",
"kqk": "Kotafon Gbe",
"kql": "Kyenele",
"kqm": "Khisa",
"kqn": "Kaonde",
"kqo": "Eastern Krahn",
"kqp": "Kimré",
"kqq": "Krenak",
"kqr": "Kimaragang",
"kqs": "Northern Kissi",
"kqt": "Klias River Kadazan",
"kqu": "Seroa",
"kqv": "Okolod",
"kqw": "Kandas",
"kqx": "Mser",
"kqy": "Koorete",
"kqz": "Korana",
"kr": "Kanuri",
"kra": "Kumhali",
"krb": "Karkin",
"krc": "Karachay-Balkar",
"krd": "Kairui-Midiki",
"kre": "Panará",
"krf": "Koro (Vanuatu)",
"krh": "Kurama",
"kri": "Krio",
"krj": "Kinaray-A",
"krk": "Kerek",
"krl": "Karelian",
"krn": "Sapo",
"kro": "Kru languages",
"krp": "Korop",
"krr": "Krung",
"krs": "Gbaya (Sudan)",
"krt": "Tumari Kanuri",
"kru": "Kurukh",
"krv": "Kavet",
"krw": "Western Krahn",
"krx": "Karon",
"kry": "Kryts",
"krz": "Sota Kanum",
"ks": "Kashmiri",
"ksa": "Shuwa-Zamani",
"ksb": "Shambala",
"ksc": "Southern Kalinga",
"ksd": "Kuanua",
"kse": "Kuni",
"ksf": "Bafia",
"ksg": "Kusaghe",
"ksh": "Kölsch",
"ksi": "Krisa; I'saka",
"ksj": "Uare",
"ksk": "Kansa",
"ksl": "Kumalu",
"ksm": "Kumba",
"ksn": "Kasiguranin",
"kso": "Kofa",
"ksp": "Kaba",
"ksq": "Kwaami",
"ksr": "Borong",
"kss": "Southern Kisi",
"kst": "Winyé",
"ksu": "Khamyang",
"ksv": "Kusu",
"ksw": "S'gaw Karen",
"ksx": "Kedang",
"ksy": "Kharia Thar",
"ksz": "Kodaku",
"kta": "Katua",
"ktb": "Kambaata",
"ktc": "Kholok",
"ktd": "Kokata; Kukatha",
"kte": "Nubri",
"ktf": "Kwami",
"ktg": "Kalkutung",
"kth": "Karanga",
"kti": "North Muyu",
"ktj": "Plapo Krumen",
"ktk": "Kaniet",
"ktl": "Koroshi",
"ktm": "Kurti",
"ktn": "Karitiâna",
"kto": "Kuot",
"ktp": "Kaduo",
"ktq": "Katabaga",
"kts": "South Muyu",
"ktt": "Ketum",
"ktu": "Kituba (Democratic Republic of Congo)",
"ktv": "Eastern Katu",
"ktw": "Kato",
"ktx": "Kaxararí",
"kty": "Kango (Bas-Uélé District)",
"ktz": "Juǀʼhoan; Juǀʼhoansi",
"ku": "Kurdish",
"kub": "Kutep",
"kuc": "Kwinsu",
"kud": "'Auhelawa",
"kue": "Kuman (Papua New Guinea)",
"kuf": "Western Katu",
"kug": "Kupa",
"kuh": "Kushi",
"kui": "Kuikúro-Kalapálo; Kalapalo",
"kuj": "Kuria",
"kuk": "Kepo'",
"kul": "Kulere",
"kum": "Kumyk",
"kun": "Kunama",
"kuo": "Kumukio",
"kup": "Kunimaipa",
"kuq": "Karipuna",
"kus": "Kusaal",
"kut": "Kutenai",
"kuu": "Upper Kuskokwim",
"kuv": "Kur",
"kuw": "Kpagua",
"kux": "Kukatja",
"kuy": "Kuuku-Ya'u",
"kuz": "Kunza",
"kv": "Komi",
"kva": "Bagvalal",
"kvb": "Kubu",
"kvc": "Kove",
"kvd": "Kui (Indonesia)",
"kve": "Kalabakan",
"kvf": "Kabalai",
"kvg": "Kuni-Boazi",
"kvh": "Komodo",
"kvi": "Kwang",
"kvj": "Psikye",
"kvk": "Korean Sign Language",
"kvl": "Kayaw",
"kvm": "Kendem",
"kvn": "Border Kuna",
"kvo": "Dobel",
"kvp": "Kompane",
"kvq": "Geba Karen",
"kvr": "Kerinci",
"kvt": "Lahta Karen; Lahta",
"kvu": "Yinbaw Karen",
"kvv": "Kola",
"kvw": "Wersing",
"kvx": "Parkari Koli",
"kvy": "Yintale Karen; Yintale",
"kvz": "Tsakwambo; Tsaukambo",
"kw": "Cornish",
"kwa": "Dâw",
"kwb": "Kwa",
"kwc": "Likwala",
"kwd": "Kwaio",
"kwe": "Kwerba",
"kwf": "Kwara'ae",
"kwg": "Sara Kaba Deme",
"kwh": "Kowiai",
"kwi": "Awa-Cuaiquer",
"kwj": "Kwanga",
"kwk": "Kwakiutl",
"kwl": "Kofyar",
"kwm": "Kwambi",
"kwn": "Kwangali",
"kwo": "Kwomtari",
"kwp": "Kodia",
"kwr": "Kwer",
"kws": "Kwese",
"kwt": "Kwesten",
"kwu": "Kwakum",
"kwv": "Sara Kaba Náà",
"kww": "Kwinti",
"kwx": "Khirwar",
"kwy": "San Salvador Kongo",
"kwz": "Kwadi",
"kxa": "Kairiru",
"kxb": "Krobu",
"kxc": "Konso; Khonso",
"kxd": "Brunei",
"kxf": "Manumanaw Karen; Manumanaw",
"kxh": "Karo (Ethiopia)",
"kxi": "Keningau Murut",
"kxj": "Kulfa",
"kxk": "Zayein Karen",
"kxm": "Northern Khmer",
"kxn": "Kanowit-Tanjong Melanau",
"kxo": "Kanoé",
"kxp": "Wadiyara Koli",
"kxq": "Smärky Kanum",
"kxr": "Koro (Papua New Guinea)",
"kxs": "Kangjia",
"kxt": "Koiwat",
"kxv": "Kuvi",
"kxw": "Konai",
"kxx": "Likuba",
"kxy": "Kayong",
"kxz": "Kerewo",
"ky": "Kirghiz; Kyrgyz",
"kya": "Kwaya",
"kyb": "Butbut Kalinga",
"kyc": "Kyaka",
"kyd": "Karey",
"kye": "Krache",
"kyf": "Kouya",
"kyg": "Keyagana",
"kyh": "Karok",
"kyi": "Kiput",
"kyj": "Karao",
"kyk": "Kamayo",
"kyl": "Kalapuya",
"kym": "Kpatili",
"kyn": "Northern Binukidnon",
"kyo": "Kelon",
"kyp": "Kang",
"kyq": "Kenga",
"kyr": "Kuruáya",
"kys": "Baram Kayan",
"kyt": "Kayagar",
"kyu": "Western Kayah",
"kyv": "Kayort",
"kyw": "Kudmali",
"kyx": "Rapoisi",
"kyy": "Kambaira",
"kyz": "Kayabí",
"kza": "Western Karaboro",
"kzb": "Kaibobo",
"kzc": "Bondoukou Kulango",
"kzd": "Kadai",
"kze": "Kosena",
"kzf": "Da'a Kaili",
"kzg": "Kikai",
"kzi": "Kelabit",
"kzk": "Kazukuru",
"kzl": "Kayeli",
"kzm": "Kais",
"kzn": "Kokola",
"kzo": "Kaningi",
"kzp": "Kaidipang",
"kzq": "Kaike",
"kzr": "Karang",
"kzs": "Sugut Dusun",
"kzu": "Kayupulau",
"kzv": "Komyandaret",
"kzw": "Karirí-Xocó",
"kzx": "Kamarian",
"kzy": "Kango (Tshopo District)",
"kzz": "Kalabra",
"la": "Latin",
"laa": "Southern Subanen",
"lab": "Linear A",
"lac": "Lacandon",
"lad": "Ladino",
"lae": "Pattani",
"laf": "Lafofa",
"lag": "Langi",
"lah": "Lahnda",
"lai": "Lambya",
"laj": "Lango (Uganda)",
"lal": "Lalia",
"lam": "Lamba",
"lan": "Laru",
"lap": "Laka (Chad)",
"laq": "Qabiao",
"lar": "Larteh",
"las": "Lama (Togo)",
"lau": "Laba",
"law": "Lauje",
"lax": "Tiwa",
"lay": "Lama Bai",
"laz": "Aribwatsa",
"lb": "Luxembourgish; Letzeburgesch",
"lbb": "Label",
"lbc": "Lakkia",
"lbe": "Lak",
"lbf": "Tinani",
"lbg": "Laopang",
"lbi": "La'bi",
"lbj": "Ladakhi",
"lbk": "Central Bontok",
"lbl": "Libon Bikol",
"lbm": "Lodhi",
"lbn": "Rmeet",
"lbo": "Laven",
"lbq": "Wampar",
"lbr": "Lohorung",
"lbs": "Libyan Sign Language",
"lbt": "Lachi",
"lbu": "Labu",
"lbv": "Lavatbura-Lamusong",
"lbw": "Tolaki",
"lbx": "Lawangan",
"lby": "Lamalama; Lamu-Lamu",
"lbz": "Lardil",
"lcc": "Legenyem",
"lcd": "Lola",
"lce": "Loncong; Sekak",
"lcf": "Lubu",
"lch": "Luchazi",
"lcl": "Lisela",
"lcm": "Tungag",
"lcp": "Western Lawa",
"lcq": "Luhu",
"lcs": "Lisabata-Nuniali",
"lda": "Kla-Dan",
"ldb": "Dũya",
"ldd": "Luri",
"ldg": "Lenyima",
"ldh": "Lamja-Dengsa-Tola",
"ldi": "Laari",
"ldj": "Lemoro",
"ldk": "Leelau",
"ldl": "Kaan",
"ldm": "Landoma",
"ldn": "Láadan",
"ldo": "Loo",
"ldp": "Tso",
"ldq": "Lufu",
"lea": "Lega-Shabunda",
"leb": "Lala-Bisa",
"lec": "Leco",
"led": "Lendu",
"lee": "Lyélé",
"lef": "Lelemi",
"leh": "Lenje",
"lei": "Lemio",
"lej": "Lengola",
"lek": "Leipon",
"lel": "Lele (Democratic Republic of Congo)",
"lem": "Nomaande",
"len": "Lenca",
"leo": "Leti (Cameroon)",
"lep": "Lepcha",
"leq": "Lembena",
"ler": "Lenkau",
"les": "Lese",
"let": "Lesing-Gelimi; Amio-Gelimi",
"leu": "Kara (Papua New Guinea)",
"lev": "Lamma",
"lew": "Ledo Kaili",
"lex": "Luang",
"ley": "Lemolang",
"lez": "Lezghian",
"lfa": "Lefa",
"lfn": "Lingua Franca Nova",
"lg": "Ganda; Luganda",
"lga": "Lungga",
"lgb": "Laghu",
"lgg": "Lugbara",
"lgh": "Laghuu",
"lgi": "Lengilu",
"lgk": "Lingarak; Neverver",
"lgl": "Wala",
"lgm": "Lega-Mwenga",
"lgn": "T'apo; Opuuo",
"lgo": "Lango (South Sudan)",
"lgq": "Logba",
"lgr": "Lengo",
"lgt": "Pahi",
"lgu": "Longgu",
"lgz": "Ligenza",
"lha": "Laha (Viet Nam)",
"lhh": "Laha (Indonesia)",
"lhi": "Lahu Shi",
"lhl": "Lahul Lohar",
"lhm": "Lhomi",
"lhn": "Lahanan",
"lhp": "Lhokpu",
"lhs": "Mlahsö",
"lht": "Lo-Toga",
"lhu": "Lahu",
"li": "Limburgan; Limburger; Limburgish",
"lia": "West-Central Limba",
"lib": "Likum",
"lic": "Hlai",
"lid": "Nyindrou",
"lie": "Likila",
"lif": "Limbu",
"lig": "Ligbi",
"lih": "Lihir",
"lij": "Ligurian",
"lik": "Lika",
"lil": "Lillooet",
"lio": "Liki",
"lip": "Sekpele",
"liq": "Libido",
"lir": "Liberian English",
"lis": "Lisu",
"liu": "Logorik",
"liv": "Liv",
"liw": "Col",
"lix": "Liabuku",
"liy": "Banda-Bambari",
"liz": "Libinza",
"lja": "Golpa",
"lje": "Rampi",
"lji": "Laiyolo",
"ljl": "Li'o",
"ljp": "Lampung Api",
"ljw": "Yirandali",
"ljx": "Yuru",
"lka": "Lakalei",
"lkb": "Kabras; Lukabaras",
"lkc": "Kucong",
"lkd": "Lakondê",
"lke": "Kenyi",
"lkh": "Lakha",
"lki": "Laki",
"lkj": "Remun",
"lkl": "Laeko-Libuat",
"lkm": "Kalaamaya",
"lkn": "Lakon; Vure",
"lko": "Khayo; Olukhayo",
"lkr": "Päri",
"lks": "Kisa; Olushisa",
"lkt": "Lakota",
"lku": "Kungkari",
"lky": "Lokoya",
"lla": "Lala-Roba",
"llb": "Lolo",
"llc": "Lele (Guinea)",
"lld": "Ladin",
"lle": "Lele (Papua New Guinea)",
"llf": "Hermit",
"llg": "Lole",
"llh": "Lamu",
"lli": "Teke-Laali",
"llj": "Ladji Ladji",
"llk": "Lelak",
"lll": "Lilau",
"llm": "Lasalimu",
"lln": "Lele (Chad)",
"llp": "North Efate",
"llq": "Lolak",
"lls": "Lithuanian Sign Language",
"llu": "Lau",
"llx": "Lauan",
"lma": "East Limba",
"lmb": "Merei",
"lmc": "Limilngan",
"lmd": "Lumun",
"lme": "Pévé",
"lmf": "South Lembata",
"lmg": "Lamogai",
"lmh": "Lambichhong",
"lmi": "Lombi",
"lmj": "West Lembata",
"lmk": "Lamkang",
"lml": "Hano",
"lmn": "Lambadi",
"lmo": "Lombard",
"lmp": "Limbum",
"lmq": "Lamatuka",
"lmr": "Lamalera",
"lmu": "Lamenu",
"lmv": "Lomaiviti",
"lmw": "Lake Miwok",
"lmx": "Laimbue",
"lmy": "Lamboya",
"ln": "Lingala",
"lna": "Langbashe",
"lnb": "Mbalanhu",
"lnd": "Lundayeh; Lun Bawang",
"lng": "Langobardic",
"lnh": "Lanoh",
"lni": "Daantanai'",
"lnj": "Leningitij",
"lnl": "South Central Banda",
"lnm": "Langam",
"lnn": "Lorediakarkar",
"lns": "Lamnso'",
"lnu": "Longuda",
"lnw": "Lanima",
"lnz": "Lonzo",
"lo": "Lao",
"loa": "Loloda",
"lob": "Lobi",
"loc": "Inonhan",
"loe": "Saluan",
"lof": "Logol",
"log": "Logo",
"loh": "Narim",
"loi": "Loma (Côte d'Ivoire)",
"loj": "Lou",
"lok": "Loko",
"lol": "Mongo",
"lom": "Loma (Liberia)",
"lon": "Malawi Lomwe",
"loo": "Lombo",
"lop": "Lopa",
"loq": "Lobala",
"lor": "Téén",
"los": "Loniu",
"lot": "Otuho",
"lou": "Louisiana Creole",
"lov": "Lopi",
"low": "Tampias Lobu",
"lox": "Loun",
"loy": "Loke",
"loz": "Lozi",
"lpa": "Lelepa",
"lpe": "Lepki",
"lpn": "Long Phuri Naga",
"lpo": "Lipo",
"lpx": "Lopit",
"lqr": "Logir",
"lra": "Rara Bakati'",
"lrc": "Northern Luri",
"lre": "Laurentian",
"lrg": "Laragia",
"lri": "Marachi; Olumarachi",
"lrk": "Loarki",
"lrl": "Lari",
"lrm": "Marama; Olumarama",
"lrn": "Lorang",
"lro": "Laro",
"lrr": "Southern Yamphu",
"lrt": "Larantuka Malay",
"lrv": "Larevat",
"lrz": "Lemerig",
"lsa": "Lasgerdi",
"lsb": "Burundian Sign Language; Langue des Signes Burundaise",
"lsc": "Albarradas Sign Language; Lengua de señas Albarradas",
"lsd": "Lishana Deni",
"lse": "Lusengo",
"lsh": "Lish",
"lsi": "Lashi",
"lsl": "Latvian Sign Language",
"lsm": "Saamia; Olusamia",
"lsn": "Tibetan Sign Language",
"lso": "Laos Sign Language",
"lsp": "Panamanian Sign Language; Lengua de Señas Panameñas",
"lsr": "Aruop",
"lss": "Lasi",
"lst": "Trinidad and Tobago Sign Language",
"lsv": "Sivia Sign Language",
"lsw": "Seychelles Sign Language; Lalang Siny Seselwa; Langue des Signes Seychelloise",
"lsy": "Mauritian Sign Language",
"lt": "Lithuanian",
"ltc": "Late Middle Chinese",
"ltg": "Latgalian",
"lth": "Thur",
"lti": "Leti (Indonesia)",
"ltn": "Latundê",
"lto": "Tsotso; Olutsotso",
"lts": "Tachoni; Lutachoni",
"ltu": "Latu",
"lu": "Luba-Katanga",
"lua": "Luba-Lulua",
"luc": "Aringa",
"lud": "Ludian",
"lue": "Luvale",
"luf": "Laua",
"lui": "Luiseno",
"luj": "Luna",
"luk": "Lunanakha",
"lul": "Olu'bo",
"lum": "Luimbi",
"lun": "Lunda",
"luo": "Luo (Kenya and Tanzania); Dholuo",
"lup": "Lumbu",
"luq": "Lucumi",
"lur": "Laura",
"lus": "Lushai",
"lut": "Lushootseed",
"luu": "Lumba-Yakkha",
"luv": "Luwati",
"luw": "Luo (Cameroon)",
"luy": "Luyia; Oluluyia",
"luz": "Southern Luri",
"lv": "Latvian",
"lva": "Maku'a",
"lvi": "Lavi",
"lvk": "Lavukaleve",
"lvs": "Standard Latvian",
"lvu": "Levuka",
"lwa": "Lwalu",
"lwe": "Lewo Eleng",
"lwg": "Wanga; Oluwanga",
"lwh": "White Lachi",
"lwl": "Eastern Lawa",
"lwm": "Laomian",
"lwo": "Luwo",
"lws": "Malawian Sign Language",
"lwt": "Lewotobi",
"lwu": "Lawu",
"lww": "Lewo",
"lxm": "Lakurumau",
"lya": "Layakha",
"lyg": "Lyngngam",
"lyn": "Luyana",
"lzh": "Literary Chinese",
"lzl": "Litzlitz",
"lzn": "Leinong Naga",
"lzz": "Laz",
"maa": "San Jerónimo Tecóatl Mazatec",
"mab": "Yutanduchi Mixtec",
"mad": "Madurese",
"mae": "Bo-Rukul",
"maf": "Mafa",
"mag": "Magahi",
"mai": "Maithili",
"maj": "Jalapa De Díaz Mazatec",
"mak": "Makasar",
"mam": "Mam",
"man": "Mandingo; Manding",
"map": "Austronesian languages",
"maq": "Chiquihuitlán Mazatec",
"mas": "Masai",
"mat": "San Francisco Matlatzinca",
"mau": "Huautla Mazatec",
"mav": "Sateré-Mawé",
"maw": "Mampruli",
"max": "North Moluccan Malay",
"maz": "Central Mazahua",
"mba": "Higaonon",
"mbb": "Western Bukidnon Manobo",
"mbc": "Macushi",
"mbd": "Dibabawon Manobo",
"mbe": "Molale",
"mbf": "Baba Malay",
"mbh": "Mangseng",
"mbi": "Ilianen Manobo",
"mbj": "Nadëb",
"mbk": "Malol",
"mbl": "Maxakalí",
"mbm": "Ombamba",
"mbn": "Macaguán",
"mbo": "Mbo (Cameroon)",
"mbp": "Malayo",
"mbq": "Maisin",
"mbr": "Nukak Makú",
"mbs": "Sarangani Manobo",
"mbt": "Matigsalug Manobo",
"mbu": "Mbula-Bwazza",
"mbv": "Mbulungish",
"mbw": "Maring",
"mbx": "Mari (East Sepik Province)",
"mby": "Memoni",
"mbz": "Amoltepec Mixtec",
"mca": "Maca",
"mcb": "Machiguenga",
"mcc": "Bitur",
"mcd": "Sharanahua",
"mce": "Itundujia Mixtec",
"mcf": "Matsés",
"mcg": "Mapoyo",
"mch": "Maquiritari",
"mci": "Mese",
"mcj": "Mvanip",
"mck": "Mbunda",
"mcl": "Macaguaje",
"mcm": "Malaccan Creole Portuguese",
"mcn": "Masana",
"mco": "Coatlán Mixe",
"mcp": "Makaa",
"mcq": "Ese",
"mcr": "Menya",
"mcs": "Mambai",
"mct": "Mengisa",
"mcu": "Cameroon Mambila",
"mcv": "Minanibai",
"mcw": "Mawa (Chad)",
"mcx": "Mpiemo",
"mcy": "South Watut",
"mcz": "Mawan",
"mda": "Mada (Nigeria)",
"mdb": "Morigi",
"mdc": "Male (Papua New Guinea)",
"mdd": "Mbum",
"mde": "Maba (Chad)",
"mdf": "Moksha",
"mdg": "Massalat",
"mdh": "Maguindanaon",
"mdi": "Mamvu",
"mdj": "Mangbetu",
"mdk": "Mangbutu",
"mdl": "Maltese Sign Language",
"mdm": "Mayogo",
"mdn": "Mbati",
"mdp": "Mbala",
"mdq": "Mbole",
"mdr": "Mandar",
"mds": "Maria (Papua New Guinea)",
"mdt": "Mbere",
"mdu": "Mboko",
"mdv": "Santa Lucía Monteverde Mixtec",
"mdw": "Mbosi",
"mdx": "Dizin",
"mdy": "Male (Ethiopia)",
"mdz": "Suruí Do Pará",
"mea": "Menka",
"meb": "Ikobi",
"mec": "Marra",
"med": "Melpa",
"mee": "Mengen",
"mef": "Megam",
"meh": "Southwestern Tlaxiaco Mixtec",
"mei": "Midob",
"mej": "Meyah",
"mek": "Mekeo",
"mel": "Central Melanau",
"mem": "Mangala",
"men": "Mende (Sierra Leone)",
"meo": "Kedah Malay",
"mep": "Miriwoong",
"meq": "Merey",
"mer": "Meru",
"mes": "Masmaje",
"met": "Mato",
"meu": "Motu",
"mev": "Mano",
"mew": "Maaka",
"mey": "Hassaniyya",
"mez": "Menominee",
"mfa": "Pattani Malay",
"mfb": "Bangka",
"mfc": "Mba",
"mfd": "Mendankwe-Nkwen",
"mfe": "Morisyen",
"mff": "Naki",
"mfg": "Mogofin",
"mfh": "Matal",
"mfi": "Wandala",
"mfj": "Mefele",
"mfk": "North Mofu",
"mfl": "Putai",
"mfm": "Marghi South",
"mfn": "Cross River Mbembe",
"mfo": "Mbe",
"mfp": "Makassar Malay",
"mfq": "Moba",
"mfr": "Marrithiyel",
"mfs": "Mexican Sign Language",
"mft": "Mokerang",
"mfu": "Mbwela",
"mfv": "Mandjak",
"mfw": "Mulaha",
"mfx": "Melo",
"mfy": "Mayo",
"mfz": "Mabaan",
"mg": "Malagasy",
"mga": "Middle Irish (900-1200)",
"mgb": "Mararit",
"mgc": "Morokodo",
"mgd": "Moru",
"mge": "Mango",
"mgf": "Maklew",
"mgg": "Mpumpong",
"mgh": "Makhuwa-Meetto",
"mgi": "Lijili",
"mgj": "Abureni",
"mgk": "Mawes",
"mgl": "Maleu-Kilenge",
"mgm": "Mambae",
"mgn": "Mbangi",
"mgo": "Meta'",
"mgp": "Eastern Magar",
"mgq": "Malila",
"mgr": "Mambwe-Lungu",
"mgs": "Manda (Tanzania)",
"mgt": "Mongol",
"mgu": "Mailu",
"mgv": "Matengo",
"mgw": "Matumbi",
"mgy": "Mbunga",
"mgz": "Mbugwe",
"mh": "Marshallese",
"mha": "Manda (India)",
"mhb": "Mahongwe",
"mhc": "Mocho",
"mhd": "Mbugu",
"mhe": "Besisi; Mah Meri",
"mhf": "Mamaa",
"mhg": "Margu",
"mhi": "Ma'di",
"mhj": "Mogholi",
"mhk": "Mungaka",
"mhl": "Mauwake",
"mhm": "Makhuwa-Moniga",
"mhn": "Mócheno",
"mho": "Mashi (Zambia)",
"mhp": "Balinese Malay",
"mhq": "Mandan",
"mhr": "Eastern Mari",
"mhs": "Buru (Indonesia)",
"mht": "Mandahuaca",
"mhu": "Digaro-Mishmi; Darang Deng",
"mhw": "Mbukushu",
"mhx": "Maru; Lhaovo",
"mhy": "Ma'anyan",
"mhz": "Mor (Mor Islands)",
"mi": "Maori",
"mia": "Miami",
"mib": "Atatláhuca Mixtec",
"mic": "Mi'kmaq; Micmac",
"mid": "Mandaic",
"mie": "Ocotepec Mixtec",
"mif": "Mofu-Gudur",
"mig": "San Miguel El Grande Mixtec",
"mih": "Chayuco Mixtec",
"mii": "Chigmecatitlán Mixtec",
"mij": "Abar; Mungbam",
"mik": "Mikasuki",
"mil": "Peñoles Mixtec",
"mim": "Alacatlatzala Mixtec",
"min": "Minangkabau",
"mio": "Pinotepa Nacional Mixtec",
"mip": "Apasco-Apoala Mixtec",
"miq": "Mískito",
"mir": "Isthmus Mixe",
"mit": "Southern Puebla Mixtec",
"miu": "Cacaloxtepec Mixtec",
"miw": "Akoye",
"mix": "Mixtepec Mixtec",
"miy": "Ayutla Mixtec",
"miz": "Coatzospan Mixtec",
"mjb": "Makalero",
"mjc": "San Juan Colorado Mixtec",
"mjd": "Northwest Maidu",
"mje": "Muskum",
"mjg": "Tu",
"mjh": "Mwera (Nyasa)",
"mji": "Kim Mun",
"mjj": "Mawak",
"mjk": "Matukar",
"mjl": "Mandeali",
"mjm": "Medebur",
"mjn": "Ma (Papua New Guinea)",
"mjo": "Malankuravan",
"mjp": "Malapandaram",
"mjq": "Malaryan",
"mjr": "Malavedan",
"mjs": "Miship",
"mjt": "Sauria Paharia",
"mju": "Manna-Dora",
"mjv": "Mannan",
"mjw": "Karbi",
"mjx": "Mahali",
"mjy": "Mahican",
"mjz": "Majhi",
"mk": "Macedonian",
"mka": "Mbre",
"mkb": "Mal Paharia",
"mkc": "Siliput",
"mke": "Mawchi",
"mkf": "Miya",
"mkg": "Mak (China)",
"mkh": "Mon-Khmer languages",
"mki": "Dhatki",
"mkj": "Mokilese",
"mkk": "Byep",
"mkl": "Mokole",
"mkm": "Moklen",
"mkn": "Kupang Malay",
"mko": "Mingang Doso",
"mkp": "Moikodi",
"mkq": "Bay Miwok",
"mkr": "Malas",
"mks": "Silacayoapan Mixtec",
"mkt": "Vamale",
"mku": "Konyanka Maninka",
"mkv": "Mafea",
"mkw": "Kituba (Congo)",
"mkx": "Kinamiging Manobo",
"mky": "East Makian",
"mkz": "Makasae",
"ml": "Malayalam",
"mla": "Malo",
"mlb": "Mbule",
"mlc": "Cao Lan",
"mle": "Manambu",
"mlf": "Mal",
"mlh": "Mape",
"mli": "Malimpung",
"mlj": "Miltu",
"mlk": "Ilwana; Kiwilwana",
"mll": "Malua Bay",
"mlm": "Mulam",
"mln": "Malango",
"mlo": "Mlomp",
"mlp": "Bargam",
"mlq": "Western Maninkakan",
"mlr": "Vame",
"mls": "Masalit",
"mlu": "To'abaita",
"mlv": "Motlav; Mwotlap",
"mlw": "Moloko",
"mlx": "Malfaxal; Naha'ai",
"mlz": "Malaynon",
"mma": "Mama",
"mmb": "Momina",
"mmc": "Michoacán Mazahua",
"mmd": "Maonan",
"mme": "Mae",
"mmf": "Mundat",
"mmg": "North Ambrym",
"mmh": "Mehináku",
"mmi": "Musar",
"mmj": "Majhwar",
"mmk": "Mukha-Dora",
"mml": "Man Met",
"mmm": "Maii",
"mmn": "Mamanwa",
"mmo": "Mangga Buang",
"mmp": "Siawi",
"mmq": "Musak",
"mmr": "Western Xiangxi Miao",
"mmt": "Malalamai",
"mmu": "Mmaala",
"mmv": "Miriti",
"mmw": "Emae",
"mmx": "Madak",
"mmy": "Migaama",
"mmz": "Mabaale",
"mn": "Mongolian",
"mna": "Mbula",
"mnb": "Muna",
"mnc": "Manchu",
"mnd": "Mondé",
"mne": "Naba",
"mnf": "Mundani",
"mng": "Eastern Mnong",
"mnh": "Mono (Democratic Republic of Congo)",
"mni": "Manipuri",
"mnj": "Munji",
"mnk": "Mandinka",
"mnl": "Tiale",
"mnm": "Mapena",
"mnn": "Southern Mnong",
"mno": "Manobo languages",
"mnp": "Min Bei Chinese",
"mnq": "Minriq",
"mnr": "Mono (USA)",
"mns": "Mansi",
"mnu": "Mer",
"mnv": "Rennell-Bellona",
"mnw": "Mon",
"mnx": "Manikion",
"mny": "Manyawa",
"mnz": "Moni",
"moa": "Mwan",
"moc": "Mocoví",
"mod": "Mobilian",
"moe": "Innu; Montagnais",
"mog": "Mongondow",
"moh": "Mohawk",
"moi": "Mboi",
"moj": "Monzombo",
"mok": "Morori",
"mom": "Mangue",
"moo": "Monom",
"mop": "Mopán Maya",
"moq": "Mor (Bomberai Peninsula)",
"mor": "Moro",
"mos": "Mossi",
"mot": "Barí",
"mou": "Mogum",
"mov": "Mohave",
"mow": "Moi (Congo)",
"mox": "Molima",
"moy": "Shekkacho",
"moz": "Mukulu; Gergiko",
"mpa": "Mpoto",
"mpb": "Malak Malak; Mullukmulluk",
"mpc": "Mangarrayi",
"mpd": "Machinere",
"mpe": "Majang",
"mpg": "Marba",
"mph": "Maung",
"mpi": "Mpade",
"mpj": "Martu Wangka; Wangkajunga",
"mpk": "Mbara (Chad)",
"mpl": "Middle Watut",
"mpm": "Yosondúa Mixtec",
"mpn": "Mindiri",
"mpo": "Miu",
"mpp": "Migabac",
"mpq": "Matís",
"mpr": "Vangunu",
"mps": "Dadibi",
"mpt": "Mian",
"mpu": "Makuráp",
"mpv": "Mungkip",
"mpw": "Mapidian",
"mpx": "Misima-Panaeati",
"mpy": "Mapia",
"mpz": "Mpi",
"mqa": "Maba (Indonesia)",
"mqb": "Mbuko",
"mqc": "Mangole",
"mqe": "Matepi",
"mqf": "Momuna",
"mqg": "Kota Bangun Kutai Malay",
"mqh": "Tlazoyaltepec Mixtec",
"mqi": "Mariri",
"mqj": "Mamasa",
"mqk": "Rajah Kabunsuwan Manobo",
"mql": "Mbelime",
"mqm": "South Marquesan",
"mqn": "Moronene",
"mqo": "Modole",
"mqp": "Manipa",
"mqq": "Minokok",
"mqr": "Mander",
"mqs": "West Makian",
"mqt": "Mok",
"mqu": "Mandari",
"mqv": "Mosimo",
"mqw": "Murupi",
"mqx": "Mamuju",
"mqy": "Manggarai",
"mqz": "Pano",
"mr": "Marathi",
"mra": "Mlabri",
"mrb": "Marino",
"mrc": "Maricopa",
"mrd": "Western Magar",
"mre": "Martha's Vineyard Sign Language",
"mrf": "Elseng",
"mrg": "Mising",
"mrh": "Mara Chin",
"mrj": "Western Mari",
"mrk": "Hmwaveke",
"mrl": "Mortlockese",
"mrm": "Merlav; Mwerlap",
"mrn": "Cheke Holo",
"mro": "Mru",
"mrp": "Morouas",
"mrq": "North Marquesan",
"mrr": "Maria (India)",
"mrs": "Maragus",
"mrt": "Marghi Central",
"mru": "Mono (Cameroon)",
"mrv": "Mangareva",
"mrw": "Maranao",
"mrx": "Maremgi; Dineor",
"mry": "Mandaya",
"mrz": "Marind",
"ms": "Malay (macrolanguage)",
"msb": "Masbatenyo",
"msc": "Sankaran Maninka",
"msd": "Yucatec Maya Sign Language",
"mse": "Musey",
"msf": "Mekwei",
"msg": "Moraid",
"msh": "Masikoro Malagasy",
"msi": "Sabah Malay",
"msj": "Ma (Democratic Republic of Congo)",
"msk": "Mansaka",
"msl": "Molof; Poule",
"msm": "Agusan Manobo",
"msn": "Vurës",
"mso": "Mombum",
"msp": "Maritsauá",
"msq": "Caac",
"msr": "Mongolian Sign Language",
"mss": "West Masela",
"msu": "Musom",
"msv": "Maslam",
"msw": "Mansoanka",
"msx": "Moresada",
"msy": "Aruamu",
"msz": "Momare",
"mt": "Maltese",
"mta": "Cotabato Manobo",
"mtb": "Anyin Morofo",
"mtc": "Munit",
"mtd": "Mualang",
"mte": "Mono (Solomon Islands)",
"mtf": "Murik (Papua New Guinea)",
"mtg": "Una",
"mth": "Munggui",
"mti": "Maiwa (Papua New Guinea)",
"mtj": "Moskona",
"mtk": "Mbe'",
"mtl": "Montol",
"mtm": "Mator",
"mtn": "Matagalpa",
"mto": "Totontepec Mixe",
"mtp": "Wichí Lhamtés Nocten",
"mtq": "Muong",
"mtr": "Mewari",
"mts": "Yora",
"mtt": "Mota",
"mtu": "Tututepec Mixtec",
"mtv": "Asaro'o",
"mtw": "Southern Binukidnon",
"mtx": "Tidaá Mixtec",
"mty": "Nabi",
"mua": "Mundang",
"mub": "Mubi",
"muc": "Ajumbu",
"mud": "Mednyj Aleut",
"mue": "Media Lengua",
"mug": "Musgu",
"muh": "Mündü",
"mui": "Musi",
"muj": "Mabire",
"muk": "Mugom",
"mum": "Maiwala",
"mun": "Munda languages",
"muo": "Nyong",
"mup": "Malvi",
"muq": "Eastern Xiangxi Miao",
"mur": "Murle",
"mus": "Creek",
"mut": "Western Muria",
"muu": "Yaaku",
"muv": "Muthuvan",
"mux": "Bo-Ung",
"muy": "Muyang",
"muz": "Mursi",
"mva": "Manam",
"mvb": "Mattole",
"mvd": "Mamboru",
"mve": "Marwari (Pakistan)",
"mvf": "Peripheral Mongolian",
"mvg": "Yucuañe Mixtec",
"mvh": "Mulgi",
"mvi": "Miyako",
"mvk": "Mekmek",
"mvl": "Mbara (Australia)",
"mvn": "Minaveha",
"mvo": "Marovo",
"mvp": "Duri",
"mvq": "Moere",
"mvr": "Marau",
"mvs": "Massep",
"mvt": "Mpotovoro",
"mvu": "Marfa",
"mvv": "Tagal Murut",
"mvw": "Machinga",
"mvx": "Meoswar",
"mvy": "Indus Kohistani",
"mvz": "Mesqan",
"mwa": "Mwatebu",
"mwb": "Juwal",
"mwc": "Are",
"mwe": "Mwera (Chimwera)",
"mwf": "Murrinh-Patha",
"mwg": "Aiklep",
"mwh": "Mouk-Aria",
"mwi": "Labo; Ninde",
"mwk": "Kita Maninkakan",
"mwl": "Mirandese",
"mwm": "Sar",
"mwn": "Nyamwanga",
"mwo": "Central Maewo",
"mwp": "Kala Lagaw Ya",
"mwq": "Mün Chin",
"mwr": "Marwari",
"mws": "Mwimbi-Muthambi",
"mwt": "Moken",
"mwu": "Mittu",
"mwv": "Mentawai",
"mww": "Hmong Daw",
"mwz": "Moingi",
"mxa": "Northwest Oaxaca Mixtec",
"mxb": "Tezoatlán Mixtec",
"mxc": "Manyika",
"mxd": "Modang",
"mxe": "Mele-Fila",
"mxf": "Malgbe",
"mxg": "Mbangala",
"mxh": "Mvuba",
"mxi": "Mozarabic",
"mxj": "Miju-Mishmi; Geman Deng",
"mxk": "Monumbo",
"mxl": "Maxi Gbe",
"mxm": "Meramera",
"mxn": "Moi (Indonesia)",
"mxo": "Mbowe",
"mxp": "Tlahuitoltepec Mixe",
"mxq": "Juquila Mixe",
"mxr": "Murik (Malaysia)",
"mxs": "Huitepec Mixtec",
"mxt": "Jamiltepec Mixtec",
"mxu": "Mada (Cameroon)",
"mxv": "Metlatónoc Mixtec",
"mxw": "Namo",
"mxx": "Mahou; Mawukakan",
"mxy": "Southeastern Nochixtlán Mixtec",
"mxz": "Central Masela",
"my": "Burmese",
"myb": "Mbay",
"myc": "Mayeka",
"mye": "Myene",
"myf": "Bambassi",
"myg": "Manta",
"myh": "Makah",
"myj": "Mangayat",
"myk": "Mamara Senoufo",
"myl": "Moma",
"mym": "Me'en",
"myn": "Mayan languages",
"myo": "Anfillo",
"myp": "Pirahã",
"myr": "Muniche",
"mys": "Mesmes",
"myu": "Mundurukú",
"myv": "Erzya",
"myw": "Muyuw",
"myx": "Masaaba",
"myy": "Macuna",
"myz": "Classical Mandaic",
"mza": "Santa María Zacatepec Mixtec",
"mzb": "Tumzabt",
"mzc": "Madagascar Sign Language",
"mzd": "Malimba",
"mze": "Morawa",
"mzg": "Monastic Sign Language",
"mzh": "Wichí Lhamtés Güisnay",
"mzi": "Ixcatlán Mazatec",
"mzj": "Manya",
"mzk": "Nigeria Mambila",
"mzl": "Mazatlán Mixe",
"mzm": "Mumuye",
"mzn": "Mazanderani",
"mzo": "Matipuhy",
"mzp": "Movima",
"mzq": "Mori Atas",
"mzr": "Marúbo",
"mzs": "Macanese",
"mzt": "Mintil",
"mzu": "Inapang",
"mzv": "Manza",
"mzw": "Deg",
"mzx": "Mawayana",
"mzy": "Mozambican Sign Language",
"mzz": "Maiadomu",
"na": "Nauru",
"naa": "Namla",
"nab": "Southern Nambikuára",
"nac": "Narak",
"nae": "Naka'ela",
"naf": "Nabak",
"nag": "Naga Pidgin",
"nah": "Nahuatl languages",
"nai": "North American Indian languages",
"naj": "Nalu",
"nak": "Nakanai",
"nal": "Nalik",
"nam": "Ngan'gityemerri",
"nan": "Min Nan Chinese",
"nao": "Naaba",
"nap": "Neapolitan",
"naq": "Khoekhoe; Nama (Namibia)",
"nar": "Iguta",
"nas": "Naasioi",
"nat": "Ca̱hungwa̱rya̱; Hungworo",
"naw": "Nawuri",
"nax": "Nakwi",
"nay": "Ngarrindjeri",
"naz": "Coatepec Nahuatl",
"nb": "Norwegian Bokmål",
"nba": "Nyemba",
"nbb": "Ndoe",
"nbc": "Chang Naga",
"nbd": "Ngbinda",
"nbe": "Konyak Naga",
"nbg": "Nagarchal",
"nbh": "Ngamo",
"nbi": "Mao Naga",
"nbj": "Ngarinyman",
"nbk": "Nake",
"nbm": "Ngbaka Ma'bo",
"nbn": "Kuri",
"nbo": "Nkukoli",
"nbp": "Nnam",
"nbq": "Nggem",
"nbr": "Numana",
"nbs": "Namibian Sign Language",
"nbt": "Na",
"nbu": "Rongmei Naga",
"nbv": "Ngamambo",
"nbw": "Southern Ngbandi",
"nby": "Ningera",
"nca": "Iyo",
"ncb": "Central Nicobarese",
"ncc": "Ponam",
"ncd": "Nachering",
"nce": "Yale",
"ncf": "Notsi",
"ncg": "Nisga'a",
"nch": "Central Huasteca Nahuatl",
"nci": "Classical Nahuatl",
"ncj": "Northern Puebla Nahuatl",
"nck": "Na-kara",
"ncl": "Michoacán Nahuatl",
"ncm": "Nambo",
"ncn": "Nauna",
"nco": "Sibe",
"ncq": "Northern Katang",
"ncr": "Ncane",
"ncs": "Nicaraguan Sign Language",
"nct": "Chothe Naga",
"ncu": "Chumburung",
"ncx": "Central Puebla Nahuatl",
"ncz": "Natchez",
"nd": "North Ndebele",
"nda": "Ndasa",
"ndb": "Kenswei Nsei",
"ndc": "Ndau",
"ndd": "Nde-Nsele-Nta",
"ndf": "Nadruvian",
"ndg": "Ndengereko",
"ndh": "Ndali",
"ndi": "Samba Leko",
"ndj": "Ndamba",
"ndk": "Ndaka",
"ndl": "Ndolo",
"ndm": "Ndam",
"ndn": "Ngundi",
"ndp": "Ndo",
"ndq": "Ndombe",
"ndr": "Ndoola",
"nds": "Low German; Low Saxon",
"ndt": "Ndunga",
"ndu": "Dugun",
"ndv": "Ndut",
"ndw": "Ndobo",
"ndx": "Nduga",
"ndy": "Lutos",
"ndz": "Ndogo",
"ne": "Nepali (macrolanguage)",
"nea": "Eastern Ngad'a",
"neb": "Toura (Côte d'Ivoire)",
"nec": "Nedebang",
"ned": "Nde-Gbite",
"nee": "Nêlêmwa-Nixumwak",
"nef": "Nefamese",
"neg": "Negidal",
"neh": "Nyenkha",
"nei": "Neo-Hittite",
"nej": "Neko",
"nek": "Neku",
"nem": "Nemi",
"nen": "Nengone",
"neo": "Ná-Meo",
"neq": "North Central Mixe",
"ner": "Yahadian",
"nes": "Bhoti Kinnauri",
"net": "Nete",
"neu": "Neo",
"nev": "Nyaheun",
"new": "Newari; Nepal Bhasa",
"nex": "Neme",
"ney": "Neyo",
"nez": "Nez Perce",
"nfa": "Dhao",
"nfd": "Ahwai",
"nfl": "Ayiwo; Äiwoo",
"nfr": "Nafaanra",
"nfu": "Mfumte",
"ng": "Ndonga",
"nga": "Ngbaka",
"ngb": "Northern Ngbandi",
"ngc": "Ngombe (Democratic Republic of Congo)",
"ngd": "Ngando (Central African Republic)",
"nge": "Ngemba",
"ngf": "Trans-New Guinea languages",
"ngg": "Ngbaka Manza",
"ngh": "Nǁng",
"ngi": "Ngizim",
"ngj": "Ngie",
"ngk": "Dalabon",
"ngl": "Lomwe",
"ngm": "Ngatik Men's Creole",
"ngn": "Ngwo",
"ngp": "Ngulu",
"ngq": "Ngurimi; Ngoreme",
"ngr": "Engdewu",
"ngs": "Gvoko",
"ngt": "Kriang; Ngeq",
"ngu": "Guerrero Nahuatl",
"ngv": "Nagumi",
"ngw": "Ngwaba",
"ngx": "Nggwahyi",
"ngy": "Tibea",
"ngz": "Ngungwel",
"nha": "Nhanda",
"nhb": "Beng",
"nhc": "Tabasco Nahuatl",
"nhd": "Chiripá; Ava Guaraní",
"nhe": "Eastern Huasteca Nahuatl",
"nhf": "Nhuwala",
"nhg": "Tetelcingo Nahuatl",
"nhh": "Nahari",
"nhi": "Zacatlán-Ahuacatlán-Tepetzintla Nahuatl",
"nhk": "Isthmus-Cosoleacaque Nahuatl",
"nhm": "Morelos Nahuatl",
"nhn": "Central Nahuatl",
"nho": "Takuu",
"nhp": "Isthmus-Pajapan Nahuatl",
"nhq": "Huaxcaleca Nahuatl",
"nhr": "Naro",
"nht": "Ometepec Nahuatl",
"nhu": "Noone",
"nhv": "Temascaltepec Nahuatl",
"nhw": "Western Huasteca Nahuatl",
"nhx": "Isthmus-Mecayapan Nahuatl",
"nhy": "Northern Oaxaca Nahuatl",
"nhz": "Santa María La Alta Nahuatl",
"nia": "Nias",
"nib": "Nakame",
"nic": "Niger-Kordofanian languages",
"nid": "Ngandi",
"nie": "Niellim",
"nif": "Nek",
"nig": "Ngalakgan",
"nih": "Nyiha (Tanzania)",
"nii": "Nii",
"nij": "Ngaju",
"nik": "Southern Nicobarese",
"nil": "Nila",
"nim": "Nilamba",
"nin": "Ninzo",
"nio": "Nganasan",
"niq": "Nandi",
"nir": "Nimboran",
"nis": "Nimi",
"nit": "Southeastern Kolami",
"niu": "Niuean",
"niv": "Gilyak",
"niw": "Nimo",
"nix": "Hema",
"niy": "Ngiti",
"niz": "Ningil",
"nja": "Nzanyi",
"njb": "Nocte Naga",
"njd": "Ndonde Hamba",
"njh": "Lotha Naga",
"nji": "Gudanji",
"njj": "Njen",
"njl": "Njalgulgule",
"njm": "Angami Naga",
"njn": "Liangmai Naga",
"njo": "Ao Naga",
"njr": "Njerep",
"njs": "Nisa",
"njt": "Ndyuka-Trio Pidgin",
"nju": "Ngadjunmaya",
"njx": "Kunyi",
"njy": "Njyem",
"njz": "Nyishi",
"nka": "Nkoya",
"nkb": "Khoibu Naga",
"nkc": "Nkongho",
"nkd": "Koireng",
"nke": "Duke",
"nkf": "Inpui Naga",
"nkg": "Nekgini",
"nkh": "Khezha Naga",
"nki": "Thangal Naga",
"nkj": "Nakai",
"nkk": "Nokuku",
"nkm": "Namat",
"nkn": "Nkangala",
"nko": "Nkonya",
"nkp": "Niuatoputapu",
"nkq": "Nkami",
"nkr": "Nukuoro",
"nks": "North Asmat",
"nkt": "Nyika (Tanzania)",
"nku": "Bouna Kulango",
"nkv": "Nyika (Malawi and Zambia)",
"nkw": "Nkutu",
"nkx": "Nkoroo",
"nkz": "Nkari",
"nl": "Dutch; Flemish",
"nla": "Ngombale",
"nlc": "Nalca",
"nle": "East Nyala",
"nlg": "Gela",
"nli": "Grangali",
"nlj": "Nyali",
"nlk": "Ninia Yali",
"nll": "Nihali",
"nlm": "Mankiyali",
"nlo": "Ngul",
"nlq": "Lao Naga",
"nlu": "Nchumbulu",
"nlv": "Orizaba Nahuatl",
"nlw": "Walangama",
"nlx": "Nahali",
"nly": "Nyamal",
"nlz": "Nalögo",
"nma": "Maram Naga",
"nmb": "Big Nambas; V'ënen Taut",
"nmc": "Ngam",
"nmd": "Ndumu",
"nme": "Mzieme Naga",
"nmf": "Tangkhul Naga (India)",
"nmg": "Kwasio",
"nmh": "Monsang Naga",
"nmi": "Nyam",
"nmj": "Ngombe (Central African Republic)",
"nmk": "Namakura",
"nml": "Ndemli",
"nmm": "Manangba",
"nmn": "ǃXóõ",
"nmo": "Moyon Naga",
"nmp": "Nimanbur",
"nmq": "Nambya",
"nmr": "Nimbari",
"nms": "Letemboi",
"nmt": "Namonuito",
"nmu": "Northeast Maidu",
"nmv": "Ngamini",
"nmw": "Nimoa; Rifao",
"nmx": "Nama (Papua New Guinea)",
"nmy": "Namuyi",
"nmz": "Nawdm",
"nn": "Norwegian Nynorsk",
"nna": "Nyangumarta",
"nnb": "Nande",
"nnc": "Nancere",
"nnd": "West Ambae",
"nne": "Ngandyera",
"nnf": "Ngaing",
"nng": "Maring Naga",
"nnh": "Ngiemboon",
"nni": "North Nuaulu",
"nnj": "Nyangatom",
"nnk": "Nankina",
"nnl": "Northern Rengma Naga",
"nnm": "Namia",
"nnn": "Ngete",
"nnp": "Wancho Naga",
"nnq": "Ngindo",
"nnr": "Narungga",
"nnt": "Nanticoke",
"nnu": "Dwang",
"nnv": "Nugunu (Australia)",
"nnw": "Southern Nuni",
"nny": "Nyangga",
"nnz": "Nda'nda'",
"no": "Norwegian",
"noa": "Woun Meu",
"noc": "Nuk",
"nod": "Northern Thai",
"noe": "Nimadi",
"nof": "Nomane",
"nog": "Nogai",
"noh": "Nomu",
"noi": "Noiri",
"noj": "Nonuya",
"nok": "Nooksack",
"nol": "Nomlaki",
"nom": "Nocamán",
"non": "Old Norse",
"nop": "Numanggang",
"noq": "Ngongo",
"nos": "Eastern Nisu",
"not": "Nomatsiguenga",
"nou": "Ewage-Notu",
"nov": "Novial",
"now": "Nyambo",
"noy": "Noy",
"noz": "Nayi",
"npa": "Nar Phu",
"npb": "Nupbikha",
"npg": "Ponyo-Gongwang Naga",
"nph": "Phom Naga",
"npi": "Nepali (individual language)",
"npl": "Southeastern Puebla Nahuatl",
"npn": "Mondropolon",
"npo": "Pochuri Naga",
"nps": "Nipsan",
"npu": "Puimei Naga",
"npx": "Noipx",
"npy": "Napu",
"nqg": "Southern Nago",
"nqk": "Kura Ede Nago",
"nql": "Ngendelengo",
"nqm": "Ndom",
"nqn": "Nen",
"nqo": "N'Ko; N’Ko",
"nqq": "Kyan-Karyaw Naga",
"nqt": "Nteng",
"nqy": "Akyaung Ari Naga",
"nr": "South Ndebele",
"nra": "Ngom",
"nrb": "Nara",
"nrc": "Noric",
"nre": "Southern Rengma Naga",
"nrf": "Jèrriais; Guernésiais",
"nrg": "Narango",
"nri": "Chokri Naga",
"nrk": "Ngarla",
"nrl": "Ngarluma",
"nrm": "Narom",
"nrn": "Norn",
"nrp": "North Picene",
"nrr": "Norra; Nora",
"nrt": "Northern Kalapuya",
"nru": "Narua",
"nrx": "Ngurmbur",
"nrz": "Lala",
"nsa": "Sangtam Naga",
"nsb": "Lower Nossob",
"nsc": "Nshi",
"nsd": "Southern Nisu",
"nse": "Nsenga",
"nsf": "Northwestern Nisu",
"nsg": "Ngasa",
"nsh": "Ngoshie",
"nsi": "Nigerian Sign Language",
"nsk": "Naskapi",
"nsl": "Norwegian Sign Language",
"nsm": "Sumi Naga",
"nsn": "Nehan",
"nso": "Pedi; Northern Sotho; Sepedi",
"nsp": "Nepalese Sign Language",
"nsq": "Northern Sierra Miwok",
"nsr": "Maritime Sign Language",
"nss": "Nali",
"nst": "Tase Naga",
"nsu": "Sierra Negra Nahuatl",
"nsv": "Southwestern Nisu",
"nsw": "Navut",
"nsx": "Nsongo",
"nsy": "Nasal",
"nsz": "Nisenan",
"ntd": "Northern Tidung",
"nte": "Nathembo",
"ntg": "Ngantangarra",
"nti": "Natioro",
"ntj": "Ngaanyatjarra",
"ntk": "Ikoma-Nata-Isenye",
"ntm": "Nateni",
"nto": "Ntomba",
"ntp": "Northern Tepehuan",
"ntr": "Delo",
"ntu": "Natügu",
"ntw": "Nottoway",
"ntx": "Tangkhul Naga (Myanmar)",
"nty": "Mantsi",
"ntz": "Natanzi",
"nua": "Yuanga",
"nub": "Nubian languages",
"nuc": "Nukuini",
"nud": "Ngala",
"nue": "Ngundu",
"nuf": "Nusu",
"nug": "Nungali",
"nuh": "Ndunda",
"nui": "Ngumbi",
"nuj": "Nyole",
"nuk": "Nuu-chah-nulth; Nuuchahnulth",
"nul": "Nusa Laut",
"num": "Niuafo'ou",
"nun": "Anong",
"nuo": "Nguôn",
"nup": "Nupe-Nupe-Tako",
"nuq": "Nukumanu",
"nur": "Nukuria",
"nus": "Nuer",
"nut": "Nung (Viet Nam)",
"nuu": "Ngbundu",
"nuv": "Northern Nuni",
"nuw": "Nguluwan",
"nux": "Mehek",
"nuy": "Nunggubuyu",
"nuz": "Tlamacazapa Nahuatl",
"nv": "Navajo; Navaho",
"nvh": "Nasarian",
"nvm": "Namiae",
"nvo": "Nyokon",
"nwa": "Nawathinehena",
"nwb": "Nyabwa",
"nwc": "Classical Newari; Classical Nepal Bhasa; Old Newari",
"nwe": "Ngwe",
"nwg": "Ngayawung",
"nwi": "Southwest Tanna",
"nwm": "Nyamusa-Molo",
"nwo": "Nauo",
"nwr": "Nawaru",
"nww": "Ndwewe",
"nwx": "Middle Newar",
"nwy": "Nottoway-Meherrin",
"nxa": "Nauete",
"nxd": "Ngando (Democratic Republic of Congo)",
"nxe": "Nage",
"nxg": "Ngad'a",
"nxi": "Nindi",
"nxk": "Koki Naga",
"nxl": "South Nuaulu",
"nxm": "Numidian",
"nxn": "Ngawun",
"nxo": "Ndambomo",
"nxq": "Naxi",
"nxr": "Ninggerum",
"nxx": "Nafri",
"ny": "Nyanja; Chewa; Chichewa",
"nyb": "Nyangbo",
"nyc": "Nyanga-li",
"nyd": "Nyore; Olunyole",
"nye": "Nyengo",
"nyf": "Giryama; Kigiryama",
"nyg": "Nyindu",
"nyh": "Nyikina",
"nyi": "Ama (Sudan)",
"nyj": "Nyanga",
"nyk": "Nyaneka",
"nyl": "Nyeu",
"nym": "Nyamwezi",
"nyn": "Nyankole",
"nyo": "Nyoro",
"nyp": "Nyang'i",
"nyq": "Nayini",
"nyr": "Nyiha (Malawi)",
"nys": "Nyungar",
"nyt": "Nyawaygi",
"nyu": "Nyungwe",
"nyv": "Nyulnyul",
"nyw": "Nyaw",
"nyx": "Nganyaywana",
"nyy": "Nyakyusa-Ngonde",
"nza": "Tigon Mbembe",
"nzb": "Njebi",
"nzd": "Nzadi",
"nzi": "Nzima",
"nzk": "Nzakara",
"nzm": "Zeme Naga",
"nzs": "New Zealand Sign Language",
"nzu": "Teke-Nzikou",
"nzy": "Nzakambay",
"nzz": "Nanga Dama Dogon",
"oaa": "Orok",
"oac": "Oroch",
"oar": "Old Aramaic (up to 700 BCE); Ancient Aramaic (up to 700 BCE)",
"oav": "Old Avar",
"obi": "Obispeño",
"obk": "Southern Bontok",
"obl": "Oblo",
"obm": "Moabite",
"obo": "Obo Manobo",
"obr": "Old Burmese",
"obt": "Old Breton",
"obu": "Obulom",
"oc": "Occitan (post 1500)",
"oca": "Ocaina",
"och": "Old Chinese",
"ocm": "Old Cham",
"oco": "Old Cornish",
"ocu": "Atzingo Matlatzinca",
"oda": "Odut",
"odk": "Od",
"odt": "Old Dutch",
"odu": "Odual",
"ofo": "Ofo",
"ofs": "Old Frisian",
"ofu": "Efutop",
"ogb": "Ogbia",
"ogc": "Ogbah",
"oge": "Old Georgian",
"ogg": "Ogbogolo",
"ogo": "Khana",
"ogu": "Ogbronuagum",
"oht": "Old Hittite",
"ohu": "Old Hungarian",
"oia": "Oirata",
"oie": "Okolie",
"oin": "Inebu One",
"oj": "Ojibwa",
"ojb": "Northwestern Ojibwa",
"ojc": "Central Ojibwa",
"ojg": "Eastern Ojibwa",
"ojp": "Old Japanese",
"ojs": "Severn Ojibwa",
"ojv": "Ontong Java",
"ojw": "Western Ojibwa",
"oka": "Okanagan",
"okb": "Okobo",
"okc": "Kobo",
"okd": "Okodia",
"oke": "Okpe (Southwestern Edo)",
"okg": "Koko Babangk",
"okh": "Koresh-e Rostam",
"oki": "Okiek",
"okj": "Oko-Juwoi",
"okk": "Kwamtim One",
"okl": "Old Kentish Sign Language",
"okm": "Middle Korean (10th-16th cent.)",
"okn": "Oki-No-Erabu",
"oko": "Old Korean (3rd-9th cent.)",
"okr": "Kirike",
"oks": "Oko-Eni-Osayen",
"oku": "Oku",
"okv": "Orokaiva",
"okx": "Okpe (Northwestern Edo)",
"okz": "Old Khmer",
"ola": "Walungge",
"old": "Mochi",
"ole": "Olekha",
"olk": "Olkol",
"olm": "Oloma",
"olo": "Livvi",
"olr": "Olrat",
"olt": "Old Lithuanian",
"olu": "Kuvale",
"om": "Oromo",
"oma": "Omaha-Ponca",
"omb": "East Ambae",
"omc": "Mochica",
"omg": "Omagua",
"omi": "Omi",
"omk": "Omok",
"oml": "Ombo",
"omn": "Minoan",
"omo": "Utarmbung",
"omp": "Old Manipuri",
"omq": "Oto-Manguean languages",
"omr": "Old Marathi",
"omt": "Omotik",
"omu": "Omurano",
"omv": "Omotic languages",
"omw": "South Tairora",
"omx": "Old Mon",
"omy": "Old Malay",
"ona": "Ona",
"onb": "Lingao",
"one": "Oneida",
"ong": "Olo",
"oni": "Onin",
"onj": "Onjob",
"onk": "Kabore One",
"onn": "Onobasulu",
"ono": "Onondaga",
"onp": "Sartang",
"onr": "Northern One",
"ons": "Ono",
"ont": "Ontenu",
"onu": "Unua",
"onw": "Old Nubian",
"onx": "Onin Based Pidgin",
"ood": "Tohono O'odham",
"oog": "Ong",
"oon": "Önge",
"oor": "Oorlams",
"oos": "Old Ossetic",
"opa": "Okpamheri",
"opk": "Kopkaka",
"opm": "Oksapmin",
"opo": "Opao",
"opt": "Opata",
"opy": "Ofayé",
"or": "Oriya (macrolanguage); Odia (macrolanguage)",
"ora": "Oroha",
"orc": "Orma",
"ore": "Orejón",
"org": "Oring",
"orh": "Oroqen",
"orn": "Orang Kanaq",
"oro": "Orokolo",
"orr": "Oruma",
"ors": "Orang Seletar",
"ort": "Adivasi Oriya",
"oru": "Ormuri",
"orv": "Old Russian",
"orw": "Oro Win",
"orx": "Oro",
"ory": "Odia (individual language); Oriya (individual language)",
"orz": "Ormu",
"os": "Ossetian; Ossetic",
"osa": "Osage",
"osc": "Oscan",
"osi": "Osing",
"osn": "Old Sundanese",
"oso": "Ososo",
"osp": "Old Spanish",
"ost": "Osatu",
"osu": "Southern One",
"osx": "Old Saxon",
"ota": "Ottoman Turkish (1500-1928)",
"otb": "Old Tibetan",
"otd": "Ot Danum",
"ote": "Mezquital Otomi",
"oti": "Oti",
"otk": "Old Turkish",
"otl": "Tilapa Otomi",
"otm": "Eastern Highland Otomi",
"otn": "Tenango Otomi",
"oto": "Otomian languages",
"otq": "Querétaro Otomi",
"otr": "Otoro",
"ots": "Estado de México Otomi",
"ott": "Temoaya Otomi",
"otu": "Otuke",
"otw": "Ottawa",
"otx": "Texcatepec Otomi",
"oty": "Old Tamil",
"otz": "Ixtenco Otomi",
"oua": "Tagargrent",
"oub": "Glio-Oubi",
"oue": "Oune",
"oui": "Old Uighur",
"oum": "Ouma",
"ovd": "Elfdalian; Övdalian",
"owi": "Owiniga",
"owl": "Old Welsh",
"oyb": "Oy",
"oyd": "Oyda",
"oym": "Wayampi",
"oyy": "Oya'oya",
"ozm": "Koonzime",
"pa": "Panjabi; Punjabi",
"paa": "Papuan languages",
"pab": "Parecís",
"pac": "Pacoh",
"pad": "Paumarí",
"pae": "Pagibete",
"paf": "Paranawát",
"pag": "Pangasinan",
"pah": "Tenharim",
"pai": "Pe",
"pak": "Parakanã",
"pal": "Pahlavi",
"pam": "Pampanga; Kapampangan",
"pao": "Northern Paiute",
"pap": "Papiamento",
"paq": "Parya",
"par": "Panamint; Timbisha",
"pas": "Papasena",
"pau": "Palauan",
"pav": "Pakaásnovos",
"paw": "Pawnee",
"pax": "Pankararé",
"pay": "Pech",
"paz": "Pankararú",
"pbb": "Páez",
"pbc": "Patamona",
"pbe": "Mezontla Popoloca",
"pbf": "Coyotepec Popoloca",
"pbg": "Paraujano",
"pbh": "E'ñapa Woromaipu",
"pbi": "Parkwa",
"pbl": "Mak (Nigeria)",
"pbm": "Puebla Mazatec",
"pbn": "Kpasam",
"pbo": "Papel",
"pbp": "Badyara",
"pbr": "Pangwa",
"pbs": "Central Pame",
"pbt": "Southern Pashto",
"pbu": "Northern Pashto",
"pbv": "Pnar",
"pby": "Pyu (Papua New Guinea)",
"pca": "Santa Inés Ahuatempan Popoloca",
"pcb": "Pear",
"pcc": "Bouyei",
"pcd": "Picard",
"pce": "Ruching Palaung",
"pcf": "Paliyan",
"pcg": "Paniya",
"pch": "Pardhan",
"pci": "Duruwa",
"pcj": "Parenga",
"pck": "Paite Chin",
"pcl": "Pardhi",
"pcm": "Nigerian Pidgin",
"pcn": "Piti",
"pcp": "Pacahuara",
"pcw": "Pyapun",
"pda": "Anam",
"pdc": "Pennsylvania German",
"pdi": "Pa Di",
"pdn": "Podena; Fedan",
"pdo": "Padoe",
"pdt": "Plautdietsch",
"pdu": "Kayan",
"pea": "Peranakan Indonesian",
"peb": "Eastern Pomo",
"ped": "Mala (Papua New Guinea)",
"pee": "Taje",
"pef": "Northeastern Pomo",
"peg": "Pengo",
"peh": "Bonan",
"pei": "Chichimeca-Jonaz",
"pej": "Northern Pomo",
"pek": "Penchal",
"pel": "Pekal",
"pem": "Phende",
"peo": "Old Persian (ca. 600-400 B.C.)",
"pep": "Kunja",
"peq": "Southern Pomo",
"pes": "Iranian Persian",
"pev": "Pémono",
"pex": "Petats",
"pey": "Petjo",
"pez": "Eastern Penan",
"pfa": "Pááfang",
"pfe": "Pere",
"pfl": "Pfaelzisch",
"pga": "Sudanese Creole Arabic",
"pgd": "Gāndhārī",
"pgg": "Pangwali",
"pgi": "Pagi",
"pgk": "Rerep",
"pgl": "Primitive Irish",
"pgn": "Paelignian",
"pgs": "Pangseng",
"pgu": "Pagu",
"pgz": "Papua New Guinean Sign Language",
"pha": "Pa-Hng",
"phd": "Phudagi",
"phg": "Phuong",
"phh": "Phukha",
"phi": "Philippine languages",
"phj": "Pahari",
"phk": "Phake",
"phl": "Phalura; Palula",
"phm": "Phimbi",
"phn": "Phoenician",
"pho": "Phunoi",
"phq": "Phana'",
"phr": "Pahari-Potwari",
"pht": "Phu Thai",
"phu": "Phuan",
"phv": "Pahlavani",
"phw": "Phangduwali",
"pi": "Pali",
"pia": "Pima Bajo",
"pib": "Yine",
"pic": "Pinji",
"pid": "Piaroa",
"pie": "Piro",
"pif": "Pingelapese",
"pig": "Pisabo",
"pih": "Pitcairn-Norfolk",
"pij": "Pijao",
"pil": "Yom",
"pim": "Powhatan",
"pin": "Piame",
"pio": "Piapoco",
"pip": "Pero",
"pir": "Piratapuyo",
"pis": "Pijin",
"pit": "Pitta Pitta",
"piu": "Pintupi-Luritja",
"piv": "Pileni; Vaeakau-Taumako",
"piw": "Pimbwe",
"pix": "Piu",
"piy": "Piya-Kwonci",
"piz": "Pije",
"pjt": "Pitjantjatjara",
"pka": "Ardhamāgadhī Prākrit",
"pkb": "Pokomo; Kipfokomo",
"pkc": "Paekche",
"pkg": "Pak-Tong",
"pkh": "Pankhu",
"pkn": "Pakanha",
"pko": "Pökoot",
"pkp": "Pukapuka",
"pkr": "Attapady Kurumba",
"pks": "Pakistan Sign Language",
"pkt": "Maleng",
"pku": "Paku",
"pl": "Polish",
"pla": "Miani",
"plb": "Polonombauk",
"plc": "Central Palawano",
"pld": "Polari",
"ple": "Palu'e",
"plf": "Central Malayo-Polynesian languages",
"plg": "Pilagá",
"plh": "Paulohi",
"plj": "Polci",
"plk": "Kohistani Shina",
"pll": "Shwe Palaung",
"pln": "Palenquero",
"plo": "Oluta Popoluca",
"plq": "Palaic",
"plr": "Palaka Senoufo",
"pls": "San Marcos Tlacoyalco Popoloca; San Marcos Tlalcoyalco Popoloca",
"plt": "Plateau Malagasy",
"plu": "Palikúr",
"plv": "Southwest Palawano",
"plw": "Brooke's Point Palawano",
"ply": "Bolyu",
"plz": "Paluan",
"pma": "Paama",
"pmb": "Pambia",
"pmd": "Pallanganmiddang",
"pme": "Pwaamei",
"pmf": "Pamona",
"pmh": "Māhārāṣṭri Prākrit",
"pmi": "Northern Pumi",
"pmj": "Southern Pumi",
"pmk": "Pamlico",
"pml": "Lingua Franca",
"pmm": "Pomo",
"pmn": "Pam",
"pmo": "Pom",
"pmq": "Northern Pame",
"pmr": "Paynamar",
"pms": "Piemontese",
"pmt": "Tuamotuan",
"pmw": "Plains Miwok",
"pmx": "Poumei Naga",
"pmy": "Papuan Malay",
"pmz": "Southern Pame",
"pna": "Punan Bah-Biau",
"pnb": "Western Panjabi",
"pnc": "Pannei",
"pnd": "Mpinda",
"pne": "Western Penan",
"png": "Pangu; Pongu",
"pnh": "Penrhyn",
"pni": "Aoheng",
"pnj": "Pinjarup",
"pnk": "Paunaka",
"pnl": "Paleni",
"pnm": "Punan Batu 1",
"pnn": "Pinai-Hagahai",
"pno": "Panobo",
"pnp": "Pancana",
"pnq": "Pana (Burkina Faso)",
"pnr": "Panim",
"pns": "Ponosakan",
"pnt": "Pontic",
"pnu": "Jiongnai Bunu",
"pnv": "Pinigura",
"pnw": "Banyjima; Panytyima",
"pnx": "Phong-Kniang",
"pny": "Pinyin",
"pnz": "Pana (Central African Republic)",
"poc": "Poqomam",
"poe": "San Juan Atzingo Popoloca",
"pof": "Poke",
"pog": "Potiguára",
"poh": "Poqomchi'",
"poi": "Highland Popoluca",
"pok": "Pokangá",
"pom": "Southeastern Pomo",
"pon": "Pohnpeian",
"poo": "Central Pomo",
"pop": "Pwapwâ",
"poq": "Texistepec Popoluca",
"pos": "Sayula Popoluca",
"pot": "Potawatomi",
"pov": "Upper Guinea Crioulo",
"pow": "San Felipe Otlaltepec Popoloca",
"pox": "Polabian",
"poy": "Pogolo",
"poz": "Malayo-Polynesian languages",
"ppe": "Papi",
"ppi": "Paipai",
"ppk": "Uma",
"ppl": "Pipil; Nicarao",
"ppm": "Papuma",
"ppn": "Papapana",
"ppo": "Folopa",
"ppp": "Pelende",
"ppq": "Pei",
"pps": "San Luís Temalacayuca Popoloca",
"ppt": "Pare",
"ppu": "Papora",
"pqa": "Pa'a",
"pqe": "Eastern Malayo-Polynesian languages",
"pqm": "Malecite-Passamaquoddy",
"pqw": "Western Malayo-Polynesian languages",
"pra": "Prakrit languages",
"prc": "Parachi",
"prd": "Parsi-Dari",
"pre": "Principense",
"prf": "Paranan",
"prg": "Prussian",
"prh": "Porohanon",
"pri": "Paicî",
"prk": "Parauk",
"prl": "Peruvian Sign Language",
"prm": "Kibiri",
"prn": "Prasuni",
"pro": "Old Provençal (to 1500); Old Occitan (to 1500)",
"prp": "Parsi",
"prq": "Ashéninka Perené",
"prr": "Puri",
"prs": "Dari; Afghan Persian",
"prt": "Phai",
"pru": "Puragi",
"prw": "Parawen",
"prx": "Purik",
"prz": "Providencia Sign Language",
"ps": "Pushto; Pashto",
"psa": "Asue Awyu",
"psc": "Iranian Sign Language; Persian Sign Language",
"psd": "Plains Indian Sign Language",
"pse": "Central Malay",
"psg": "Penang Sign Language",
"psh": "Southwest Pashai; Southwest Pashayi",
"psi": "Southeast Pashai; Southeast Pashayi",
"psl": "Puerto Rican Sign Language",
"psm": "Pauserna",
"psn": "Panasuan",
"pso": "Polish Sign Language",
"psp": "Philippine Sign Language",
"psq": "Pasi",
"psr": "Portuguese Sign Language",
"pss": "Kaulong",
"pst": "Central Pashto",
"psu": "Sauraseni Prākrit",
"psw": "Port Sandwich",
"psy": "Piscataway",
"pt": "Portuguese",
"pta": "Pai Tavytera",
"pth": "Pataxó Hã-Ha-Hãe",
"pti": "Pindiini; Wangkatha",
"ptn": "Patani",
"pto": "Zo'é",
"ptp": "Patep",
"ptq": "Pattapu",
"ptr": "Piamatsina",
"ptt": "Enrekang",
"ptu": "Bambam",
"ptv": "Port Vato",
"ptw": "Pentlatch",
"pty": "Pathiya",
"pua": "Western Highland Purepecha",
"pub": "Purum",
"puc": "Punan Merap",
"pud": "Punan Aput",
"pue": "Puelche",
"puf": "Punan Merah",
"pug": "Phuie",
"pui": "Puinave",
"puj": "Punan Tubu",
"pum": "Puma",
"puo": "Puoc",
"pup": "Pulabu",
"puq": "Puquina",
"pur": "Puruborá",
"put": "Putoh",
"puu": "Punu",
"puw": "Puluwatese",
"pux": "Puare",
"puy": "Purisimeño",
"pwa": "Pawaia",
"pwb": "Panawa",
"pwg": "Gapapaiwa",
"pwi": "Patwin",
"pwm": "Molbog",
"pwn": "Paiwan",
"pwo": "Pwo Western Karen",
"pwr": "Powari",
"pww": "Pwo Northern Karen",
"pxm": "Quetzaltepec Mixe",
"pye": "Pye Krumen",
"pym": "Fyam",
"pyn": "Poyanáwa",
"pys": "Paraguayan Sign Language; Lengua de Señas del Paraguay",
"pyu": "Puyuma",
"pyx": "Pyu (Myanmar)",
"pyy": "Pyen",
"pzh": "Pazeh",
"pzn": "Jejara Naga; Para Naga",
"qu": "Quechua",
"qua": "Quapaw",
"qub": "Huallaga Huánuco Quechua",
"quc": "K'iche'; Quiché",
"qud": "Calderón Highland Quichua",
"quf": "Lambayeque Quechua",
"qug": "Chimborazo Highland Quichua",
"quh": "South Bolivian Quechua",
"qui": "Quileute",
"quk": "Chachapoyas Quechua",
"qul": "North Bolivian Quechua",
"qum": "Sipacapense",
"qun": "Quinault",
"qup": "Southern Pastaza Quechua",
"quq": "Quinqui",
"qur": "Yanahuanca Pasco Quechua",
"qus": "Santiago del Estero Quichua",
"quv": "Sacapulteco",
"quw": "Tena Lowland Quichua",
"qux": "Yauyos Quechua",
"quy": "Ayacucho Quechua",
"quz": "Cusco Quechua",
"qva": "Ambo-Pasco Quechua",
"qvc": "Cajamarca Quechua",
"qve": "Eastern Apurímac Quechua",
"qvh": "Huamalíes-Dos de Mayo Huánuco Quechua",
"qvi": "Imbabura Highland Quichua",
"qvj": "Loja Highland Quichua",
"qvl": "Cajatambo North Lima Quechua",
"qvm": "Margos-Yarowilca-Lauricocha Quechua",
"qvn": "North Junín Quechua",
"qvo": "Napo Lowland Quechua",
"qvp": "Pacaraos Quechua",
"qvs": "San Martín Quechua",
"qvw": "Huaylla Wanca Quechua",
"qvy": "Queyu",
"qvz": "Northern Pastaza Quichua",
"qwa": "Corongo Ancash Quechua",
"qwc": "Classical Quechua",
"qwe": "Quechuan (family)",
"qwh": "Huaylas Ancash Quechua",
"qwm": "Kuman (Russia)",
"qws": "Sihuas Ancash Quechua",
"qwt": "Kwalhioqua-Tlatskanai",
"qxa": "Chiquián Ancash Quechua",
"qxc": "Chincha Quechua",
"qxh": "Panao Huánuco Quechua",
"qxl": "Salasaca Highland Quichua",
"qxn": "Northern Conchucos Ancash Quechua",
"qxo": "Southern Conchucos Ancash Quechua",
"qxp": "Puno Quechua",
"qxq": "Qashqa'i",
"qxr": "Cañar Highland Quichua",
"qxs": "Southern Qiang",
"qxt": "Santa Ana de Tusi Pasco Quechua",
"qxu": "Arequipa-La Unión Quechua",
"qxw": "Jauja Wanca Quechua",
"qya": "Quenya",
"qyp": "Quiripi",
"raa": "Dungmali",
"rab": "Camling",
"rac": "Rasawa",
"rad": "Rade",
"raf": "Western Meohang",
"rag": "Logooli; Lulogooli",
"rah": "Rabha",
"rai": "Ramoaaina",
"raj": "Rajasthani",
"rak": "Tulu-Bohuai",
"ral": "Ralte",
"ram": "Canela",
"ran": "Riantana",
"rao": "Rao",
"rap": "Rapanui",
"raq": "Saam",
"rar": "Rarotongan; Cook Islands Maori",
"ras": "Tegali",
"rat": "Razajerdi",
"rau": "Raute",
"rav": "Sampang",
"raw": "Rawang",
"rax": "Rang",
"ray": "Rapa",
"raz": "Rahambuu",
"rbb": "Rumai Palaung",
"rbk": "Northern Bontok",
"rbl": "Miraya Bikol",
"rbp": "Barababaraba",
"rcf": "Réunion Creole French",
"rdb": "Rudbari",
"rea": "Rerau",
"reb": "Rembong",
"ree": "Rejang Kayan",
"reg": "Kara (Tanzania)",
"rei": "Reli",
"rej": "Rejang",
"rel": "Rendille",
"rem": "Remo",
"ren": "Rengao",
"rer": "Rer Bare",
"res": "Reshe",
"ret": "Retta",
"rey": "Reyesano",
"rga": "Roria",
"rge": "Romano-Greek",
"rgk": "Rangkas",
"rgn": "Romagnol",
"rgr": "Resígaro",
"rgs": "Southern Roglai",
"rgu": "Ringgou",
"rhg": "Rohingya",
"rhp": "Yahang",
"ria": "Riang (India)",
"rib": "Bribri Sign Language",
"rif": "Tarifit",
"ril": "Riang Lang; Riang (Myanmar)",
"rim": "Nyaturu",
"rin": "Nungu",
"rir": "Ribun",
"rit": "Ritharrngu",
"riu": "Riung",
"rjg": "Rajong",
"rji": "Raji",
"rjs": "Rajbanshi",
"rka": "Kraol",
"rkb": "Rikbaktsa",
"rkh": "Rakahanga-Manihiki",
"rki": "Rakhine",
"rkm": "Marka",
"rkt": "Rangpuri; Kamta",
"rkw": "Arakwal",
"rm": "Romansh",
"rma": "Rama",
"rmb": "Rembarrnga",
"rmc": "Carpathian Romani",
"rmd": "Traveller Danish",
"rme": "Angloromani",
"rmf": "Kalo Finnish Romani",
"rmg": "Traveller Norwegian",
"rmh": "Murkim",
"rmi": "Lomavren",
"rmk": "Romkun",
"rml": "Baltic Romani",
"rmm": "Roma",
"rmn": "Balkan Romani",
"rmo": "Sinte Romani",
"rmp": "Rempi",
"rmq": "Caló",
"rms": "Romanian Sign Language",
"rmt": "Domari",
"rmu": "Tavringer Romani",
"rmv": "Romanova",
"rmw": "Welsh Romani",
"rmx": "Romam",
"rmy": "Vlax Romani",
"rmz": "Marma",
"rn": "Rundi",
"rnb": "Brunca Sign Language",
"rnd": "Ruund",
"rng": "Ronga",
"rnl": "Ranglong",
"rnn": "Roon",
"rnp": "Rongpo",
"rnr": "Nari Nari",
"rnw": "Rungwa",
"ro": "Romanian; Moldavian; Moldovan",
"roa": "Romance languages",
"rob": "Tae'",
"roc": "Cacgia Roglai",
"rod": "Rogo",
"roe": "Ronji",
"rof": "Rombo",
"rog": "Northern Roglai",
"rol": "Romblomanon",
"rom": "Romany",
"roo": "Rotokas",
"rop": "Kriol",
"ror": "Rongga",
"rou": "Runga",
"row": "Dela-Oenale",
"rpn": "Repanbitip",
"rpt": "Rapting",
"rri": "Ririo",
"rro": "Waima",
"rrt": "Arritinngithigh",
"rsb": "Romano-Serbian",
"rsk": "Ruthenian; Rusyn",
"rsl": "Russian Sign Language",
"rsm": "Miriwoong Sign Language",
"rsn": "Rwandan Sign Language",
"rtc": "Rungtu Chin",
"rth": "Ratahan",
"rtm": "Rotuman",
"rts": "Yurats",
"rtw": "Rathawi",
"ru": "Russian",
"rub": "Gungu",
"ruc": "Ruuli",
"rue": "Rusyn",
"ruf": "Luguru",
"rug": "Roviana",
"ruh": "Ruga",
"rui": "Rufiji",
"ruk": "Che",
"ruo": "Istro Romanian",
"rup": "Macedo-Romanian; Aromanian; Arumanian",
"ruq": "Megleno Romanian",
"rut": "Rutul",
"ruu": "Lanas Lobu",
"ruy": "Mala (Nigeria)",
"ruz": "Ruma",
"rw": "Kinyarwanda",
"rwa": "Rawo",
"rwk": "Rwa",
"rwl": "Ruwila",
"rwm": "Amba (Uganda)",
"rwo": "Rawa",
"rwr": "Marwari (India)",
"rxd": "Ngardi",
"rxw": "Karuwali; Garuwali",
"ryn": "Northern Amami-Oshima",
"rys": "Yaeyama",
"ryu": "Central Okinawan",
"rzh": "Rāziḥī",
"sa": "Sanskrit",
"saa": "Saba",
"sab": "Buglere",
"sac": "Meskwaki",
"sad": "Sandawe",
"sae": "Sabanê",
"saf": "Safaliba",
"sah": "Yakut",
"sai": "South American Indian languages",
"saj": "Sahu",
"sak": "Sake",
"sal": "Salishan languages",
"sam": "Samaritan Aramaic",
"sao": "Sause",
"saq": "Samburu",
"sar": "Saraveca",
"sas": "Sasak",
"sat": "Santali",
"sau": "Saleman",
"sav": "Saafi-Saafi",
"saw": "Sawi",
"sax": "Sa",
"say": "Saya",
"saz": "Saurashtra",
"sba": "Ngambay",
"sbb": "Simbo",
"sbc": "Kele (Papua New Guinea)",
"sbd": "Southern Samo",
"sbe": "Saliba",
"sbf": "Chabu; Shabo",
"sbg": "Seget",
"sbh": "Sori-Harengan",
"sbi": "Seti",
"sbj": "Surbakhal",
"sbk": "Safwa",
"sbl": "Botolan Sambal",
"sbm": "Sagala",
"sbn": "Sindhi Bhil",
"sbo": "Sabüm",
"sbp": "Sangu (Tanzania)",
"sbq": "Sileibi",
"sbr": "Sembakung Murut",
"sbs": "Subiya",
"sbt": "Kimki",
"sbu": "Stod Bhoti",
"sbv": "Sabine",
"sbw": "Simba",
"sbx": "Seberuang",
"sby": "Soli",
"sbz": "Sara Kaba",
"sc": "Sardinian",
"scb": "Chut",
"sce": "Dongxiang",
"scf": "San Miguel Creole French",
"scg": "Sanggau",
"sch": "Sakachep",
"sci": "Sri Lankan Creole Malay",
"sck": "Sadri",
"scl": "Shina",
"scn": "Sicilian",
"sco": "Scots",
"scp": "Hyolmo; Helambu Sherpa",
"scq": "Sa'och",
"scs": "North Slavey",
"sct": "Southern Katang",
"scu": "Shumcho",
"scv": "Sheni",
"scw": "Sha",
"scx": "Sicel",
"sd": "Sindhi",
"sda": "Toraja-Sa'dan",
"sdb": "Shabak",
"sdc": "Sassarese Sardinian",
"sde": "Surubu",
"sdf": "Sarli",
"sdg": "Savi",
"sdh": "Southern Kurdish",
"sdj": "Suundi",
"sdk": "Sos Kundi",
"sdl": "Saudi Arabian Sign Language",
"sdn": "Gallurese Sardinian",
"sdo": "Bukar-Sadung Bidayuh",
"sdp": "Sherdukpen",
"sdq": "Semandang",
"sdr": "Oraon Sadri",
"sds": "Sened",
"sdt": "Shuadit",
"sdu": "Sarudu",
"sdv": "Eastern Sudanic languages",
"sdx": "Sibu Melanau",
"sdz": "Sallands",
"se": "Northern Sami",
"sea": "Semai",
"seb": "Shempire Senoufo",
"sec": "Sechelt",
"sed": "Sedang",
"see": "Seneca",
"sef": "Cebaara Senoufo",
"seg": "Segeju",
"seh": "Sena",
"sei": "Seri",
"sej": "Sene",
"sek": "Sekani",
"sel": "Selkup",
"sem": "Semitic languages",
"sen": "Nanerigé Sénoufo",
"seo": "Suarmin",
"sep": "Sìcìté Sénoufo",
"seq": "Senara Sénoufo",
"ser": "Serrano",
"ses": "Koyraboro Senni Songhai",
"set": "Sentani",
"seu": "Serui-Laut",
"sev": "Nyarafolo Senoufo",
"sew": "Sewa Bay",
"sey": "Secoya",
"sez": "Senthang Chin",
"sfb": "Langue des signes de Belgique Francophone; French Belgian Sign Language",
"sfe": "Eastern Subanen",
"sfm": "Small Flowery Miao",
"sfs": "South African Sign Language",
"sfw": "Sehwi",
"sg": "Sango",
"sga": "Old Irish (to 900)",
"sgb": "Mag-antsi Ayta",
"sgc": "Kipsigis",
"sgd": "Surigaonon",
"sge": "Segai",
"sgg": "Swiss-German Sign Language",
"sgh": "Shughni",
"sgi": "Suga",
"sgj": "Surgujia",
"sgk": "Sangkong",
"sgm": "Singa",
"sgn": "Sign languages",
"sgp": "Singpho",
"sgr": "Sangisari",
"sgs": "Samogitian",
"sgt": "Brokpake",
"sgu": "Salas",
"sgw": "Sebat Bet Gurage",
"sgx": "Sierra Leone Sign Language",
"sgy": "Sanglechi",
"sgz": "Sursurunga",
"sh": "Serbo-Croatian",
"sha": "Shall-Zwall",
"shb": "Ninam",
"shc": "Sonde",
"shd": "Kundal Shahi",
"she": "Sheko",
"shg": "Shua",
"shh": "Shoshoni",
"shi": "Tachelhit",
"shj": "Shatt",
"shk": "Shilluk",
"shl": "Shendu",
"shm": "Shahrudi",
"shn": "Shan",
"sho": "Shanga",
"shp": "Shipibo-Conibo",
"shq": "Sala",
"shr": "Shi",
"shs": "Shuswap",
"sht": "Shasta",
"shu": "Chadian Arabic",
"shv": "Shehri",
"shw": "Shwai",
"shx": "She",
"shy": "Tachawit",
"shz": "Syenara Senoufo",
"si": "Sinhala; Sinhalese",
"sia": "Akkala Sami",
"sib": "Sebop",
"sid": "Sidamo",
"sie": "Simaa",
"sif": "Siamou",
"sig": "Paasaal",
"sih": "Zire; Sîshëë",
"sii": "Shom Peng",
"sij": "Numbami",
"sik": "Sikiana",
"sil": "Tumulung Sisaala",
"sim": "Mende (Papua New Guinea)",
"sio": "Siouan languages",
"sip": "Sikkimese",
"siq": "Sonia",
"sir": "Siri",
"sis": "Siuslaw",
"sit": "Sino-Tibetan languages",
"siu": "Sinagen",
"siv": "Sumariup",
"siw": "Siwai",
"six": "Sumau",
"siy": "Sivandi",
"siz": "Siwi",
"sja": "Epena",
"sjb": "Sajau Basap",
"sjd": "Kildin Sami",
"sje": "Pite Sami",
"sjg": "Assangori",
"sjk": "Kemi Sami",
"sjl": "Sajalong; Miji",
"sjm": "Mapun",
"sjn": "Sindarin",
"sjo": "Xibe",
"sjp": "Surjapuri",
"sjr": "Siar-Lak",
"sjs": "Senhaja De Srair",
"sjt": "Ter Sami",
"sju": "Ume Sami",
"sjw": "Shawnee",
"sk": "Slovak",
"ska": "Skagit",
"skb": "Saek",
"skc": "Ma Manda",
"skd": "Southern Sierra Miwok",
"ske": "Seke (Vanuatu)",
"skf": "Sakirabiá",
"skg": "Sakalava Malagasy",
"skh": "Sikule",
"ski": "Sika",
"skj": "Seke (Nepal)",
"skm": "Kutong",
"skn": "Kolibugan Subanon",
"sko": "Seko Tengah",
"skp": "Sekapan",
"skq": "Sininkere",
"skr": "Saraiki; Seraiki",
"sks": "Maia",
"skt": "Sakata",
"sku": "Sakao",
"skv": "Skou",
"skw": "Skepi Creole Dutch",
"skx": "Seko Padang",
"sky": "Sikaiana",
"skz": "Sekar",
"sl": "Slovenian",
"sla": "Slavic languages",
"slc": "Sáliba",
"sld": "Sissala",
"sle": "Sholaga",
"slf": "Swiss-Italian Sign Language",
"slg": "Selungai Murut",
"slh": "Southern Puget Sound Salish",
"sli": "Lower Silesian",
"slj": "Salumá",
"sll": "Salt-Yui",
"slm": "Pangutaran Sama",
"sln": "Salinan",
"slp": "Lamaholot",
"slq": "Salchuq",
"slr": "Salar",
"sls": "Singapore Sign Language",
"slt": "Sila",
"slu": "Selaru",
"slw": "Sialum",
"slx": "Salampasu",
"sly": "Selayar",
"slz": "Ma'ya",
"sm": "Samoan",
"sma": "Southern Sami",
"smb": "Simbari",
"smc": "Som",
"smf": "Auwe",
"smg": "Simbali",
"smh": "Samei",
"smi": "Sami languages",
"smj": "Lule Sami",
"smk": "Bolinao",
"sml": "Central Sama",
"smm": "Musasa",
"smn": "Inari Sami",
"smp": "Samaritan",
"smq": "Samo",
"smr": "Simeulue",
"sms": "Skolt Sami",
"smt": "Simte",
"smu": "Somray",
"smv": "Samvedi",
"smw": "Sumbawa",
"smx": "Samba",
"smy": "Semnani",
"smz": "Simeku",
"sn": "Shona",
"snc": "Sinaugoro",
"sne": "Bau Bidayuh",
"snf": "Noon",
"sng": "Sanga (Democratic Republic of Congo)",
"sni": "Sensi",
"snj": "Riverain Sango",
"snk": "Soninke",
"snl": "Sangil",
"snm": "Southern Ma'di",
"snn": "Siona",
"sno": "Snohomish",
"snp": "Siane",
"snq": "Sangu (Gabon)",
"snr": "Sihan",
"sns": "South West Bay; Nahavaq",
"snu": "Senggi; Viid",
"snv": "Sa'ban",
"snw": "Selee",
"snx": "Sam",
"sny": "Saniyo-Hiyewe",
"snz": "Kou",
"so": "Somali",
"soa": "Thai Song",
"sob": "Sobei",
"soc": "So (Democratic Republic of Congo)",
"sod": "Songoora",
"soe": "Songomeno",
"sog": "Sogdian",
"soh": "Aka",
"soi": "Sonha",
"soj": "Soi",
"sok": "Sokoro",
"sol": "Solos",
"son": "Songhai languages",
"soo": "Songo",
"sop": "Songe",
"soq": "Kanasi",
"sor": "Somrai",
"sos": "Seeku",
"sou": "Southern Thai",
"sov": "Sonsorol",
"sow": "Sowanda",
"sox": "Swo",
"soy": "Miyobe",
"soz": "Temi",
"spb": "Sepa (Indonesia)",
"spc": "Sapé",
"spd": "Saep",
"spe": "Sepa (Papua New Guinea)",
"spg": "Sian",
"spi": "Saponi",
"spk": "Sengo",
"spl": "Selepet",
"spm": "Akukem",
"spn": "Sanapaná",
"spo": "Spokane",
"spp": "Supyire Senoufo",
"spq": "Loreto-Ucayali Spanish",
"spr": "Saparua",
"sps": "Saposa",
"spt": "Spiti Bhoti",
"spu": "Sapuan",
"spv": "Sambalpuri; Kosli",
"spx": "South Picene",
"spy": "Sabaot",
"sq": "Albanian",
"sqa": "Shama-Sambuga",
"sqh": "Shau",
"sqj": "Albanian languages",
"sqk": "Albanian Sign Language",
"sqm": "Suma",
"sqn": "Susquehannock",
"sqo": "Sorkhei",
"sqq": "Sou",
"sqr": "Siculo Arabic",
"sqs": "Sri Lankan Sign Language",
"sqt": "Soqotri",
"squ": "Squamish",
"sqx": "Kufr Qassem Sign Language (KQSL)",
"sr": "Serbian",
"sra": "Saruga",
"srb": "Sora",
"src": "Logudorese Sardinian",
"sre": "Sara",
"srf": "Nafi",
"srg": "Sulod",
"srh": "Sarikoli",
"sri": "Siriano",
"srk": "Serudung Murut",
"srl": "Isirawa",
"srm": "Saramaccan",
"srn": "Sranan Tongo",
"sro": "Campidanese Sardinian",
"srq": "Sirionó",
"srr": "Serer",
"srs": "Sarsi",
"srt": "Sauri",
"sru": "Suruí",
"srv": "Southern Sorsoganon",
"srw": "Serua",
"srx": "Sirmauri",
"sry": "Sera",
"srz": "Shahmirzadi",
"ss": "Swati",
"ssa": "Nilo-Saharan languages",
"ssb": "Southern Sama",
"ssc": "Suba-Simbiti",
"ssd": "Siroi",
"sse": "Balangingi; Bangingih Sama",
"ssf": "Thao",
"ssg": "Seimat",
"ssh": "Shihhi Arabic",
"ssi": "Sansi",
"ssj": "Sausi",
"ssk": "Sunam",
"ssl": "Western Sisaala",
"ssm": "Semnam",
"ssn": "Waata",
"sso": "Sissano",
"ssp": "Spanish Sign Language",
"ssq": "So'a",
"ssr": "Swiss-French Sign Language",
"sss": "Sô",
"sst": "Sinasina",
"ssu": "Susuami",
"ssv": "Shark Bay",
"ssx": "Samberigi",
"ssy": "Saho",
"ssz": "Sengseng",
"st": "Southern Sotho",
"sta": "Settla",
"stb": "Northern Subanen",
"std": "Sentinel",
"ste": "Liana-Seti",
"stf": "Seta",
"stg": "Trieng",
"sth": "Shelta",
"sti": "Bulo Stieng",
"stj": "Matya Samo",
"stk": "Arammba",
"stl": "Stellingwerfs",
"stm": "Setaman",
"stn": "Owa",
"sto": "Stoney",
"stp": "Southeastern Tepehuan",
"stq": "Saterfriesisch",
"str": "Straits Salish",
"sts": "Shumashti",
"stt": "Budeh Stieng",
"stu": "Samtao",
"stv": "Silt'e",
"stw": "Satawalese",
"sty": "Siberian Tatar",
"su": "Sundanese",
"sua": "Sulka",
"sub": "Suku",
"suc": "Western Subanon",
"sue": "Suena",
"sug": "Suganga",
"sui": "Suki",
"suj": "Shubi",
"suk": "Sukuma",
"suo": "Bouni",
"suq": "Tirmaga-Chai Suri; Suri",
"sur": "Mwaghavul",
"sus": "Susu",
"sut": "Subtiaba",
"suv": "Puroik",
"suw": "Sumbwa",
"sux": "Sumerian",
"suy": "Suyá",
"suz": "Sunwar",
"sv": "Swedish",
"sva": "Svan",
"svb": "Ulau-Suain",
"svc": "Vincentian Creole English",
"sve": "Serili",
"svk": "Slovakian Sign Language",
"svm": "Slavomolisano",
"svs": "Savosavo",
"svx": "Skalvian",
"sw": "Swahili (macrolanguage)",
"swb": "Maore Comorian",
"swc": "Congo Swahili",
"swf": "Sere",
"swg": "Swabian",
"swh": "Swahili (individual language); Kiswahili",
"swi": "Sui",
"swj": "Sira",
"swk": "Malawi Sena",
"swl": "Swedish Sign Language",
"swm": "Samosa",
"swn": "Sawknah",
"swo": "Shanenawa",
"swp": "Suau",
"swq": "Sharwa",
"swr": "Saweru",
"sws": "Seluwasan",
"swt": "Sawila",
"swu": "Suwawa",
"swv": "Shekhawati",
"sww": "Sowa",
"swx": "Suruahá",
"swy": "Sarua",
"sxb": "Suba",
"sxc": "Sicanian",
"sxe": "Sighu",
"sxg": "Shuhi; Shixing",
"sxk": "Southern Kalapuya",
"sxl": "Selian",
"sxm": "Samre",
"sxn": "Sangir",
"sxo": "Sorothaptic",
"sxr": "Saaroa",
"sxs": "Sasaru",
"sxu": "Upper Saxon",
"sxw": "Saxwe Gbe",
"sya": "Siang",
"syb": "Central Subanen",
"syc": "Classical Syriac",
"syd": "Samoyedic languages",
"syi": "Seki",
"syk": "Sukur",
"syl": "Sylheti",
"sym": "Maya Samo",
"syn": "Senaya",
"syo": "Suoy",
"syr": "Syriac",
"sys": "Sinyar",
"syw": "Kagate",
"syx": "Samay",
"syy": "Al-Sayyid Bedouin Sign Language",
"sza": "Semelai",
"szb": "Ngalum",
"szc": "Semaq Beri",
"szd": "Seru",
"sze": "Seze",
"szg": "Sengele",
"szl": "Silesian",
"szn": "Sula",
"szp": "Suabo",
"szs": "Solomon Islands Sign Language",
"szv": "Isu (Fako Division)",
"szw": "Sawai",
"szy": "Sakizaya",
"ta": "Tamil",
"taa": "Lower Tanana",
"tab": "Tabassaran",
"tac": "Lowland Tarahumara",
"tad": "Tause",
"tae": "Tariana",
"taf": "Tapirapé",
"tag": "Tagoi",
"tai": "Tai languages",
"taj": "Eastern Tamang",
"tak": "Tala",
"tal": "Tal",
"tan": "Tangale",
"tao": "Yami",
"tap": "Taabwa",
"taq": "Tamasheq",
"tar": "Central Tarahumara",
"tas": "Tay Boi",
"tau": "Upper Tanana",
"tav": "Tatuyo",
"taw": "Tai",
"tax": "Tamki",
"tay": "Atayal",
"taz": "Tocho",
"tba": "Aikanã",
"tbc": "Takia",
"tbd": "Kaki Ae",
"tbe": "Tanimbili",
"tbf": "Mandara",
"tbg": "North Tairora",
"tbh": "Dharawal; Thurawal",
"tbi": "Gaam",
"tbj": "Tiang",
"tbk": "Calamian Tagbanwa",
"tbl": "Tboli",
"tbm": "Tagbu",
"tbn": "Barro Negro Tunebo",
"tbo": "Tawala",
"tbp": "Taworta; Diebroud",
"tbq": "Tibeto-Burman languages",
"tbr": "Tumtum",
"tbs": "Tanguat",
"tbt": "Tembo (Kitembo)",
"tbu": "Tubar",
"tbv": "Tobo",
"tbw": "Tagbanwa",
"tbx": "Kapin",
"tby": "Tabaru",
"tbz": "Ditammari",
"tca": "Ticuna",
"tcb": "Tanacross",
"tcc": "Datooga",
"tcd": "Tafi",
"tce": "Southern Tutchone",
"tcf": "Malinaltepec Me'phaa; Malinaltepec Tlapanec",
"tcg": "Tamagario",
"tch": "Turks And Caicos Creole English",
"tci": "Wára",
"tck": "Tchitchege",
"tcl": "Taman (Myanmar)",
"tcm": "Tanahmerah",
"tcn": "Tichurong",
"tco": "Taungyo",
"tcp": "Tawr Chin",
"tcq": "Kaiy",
"tcs": "Torres Strait Creole; Yumplatok",
"tct": "T'en",
"tcu": "Southeastern Tarahumara",
"tcw": "Tecpatlán Totonac",
"tcx": "Toda",
"tcy": "Tulu",
"tcz": "Thado Chin",
"tda": "Tagdal",
"tdb": "Panchpargania",
"tdc": "Emberá-Tadó",
"tdd": "Tai Nüa",
"tde": "Tiranige Diga Dogon",
"tdf": "Talieng",
"tdg": "Western Tamang",
"tdh": "Thulung",
"tdi": "Tomadino",
"tdj": "Tajio",
"tdk": "Tambas",
"tdl": "Sur",
"tdm": "Taruma",
"tdn": "Tondano",
"tdo": "Teme",
"tdq": "Tita",
"tdr": "Todrah",
"tds": "Doutai",
"tdt": "Tetun Dili",
"tdv": "Toro",
"tdx": "Tandroy-Mahafaly Malagasy",
"tdy": "Tadyawan",
"te": "Telugu",
"tea": "Temiar",
"teb": "Tetete",
"tec": "Terik",
"ted": "Tepo Krumen",
"tee": "Huehuetla Tepehua",
"tef": "Teressa",
"teg": "Teke-Tege",
"teh": "Tehuelche",
"tei": "Torricelli",
"tek": "Ibali Teke",
"tem": "Timne",
"ten": "Tama (Colombia)",
"teo": "Teso",
"tep": "Tepecano",
"teq": "Temein",
"ter": "Tereno",
"tes": "Tengger",
"tet": "Tetum",
"teu": "Soo",
"tev": "Teor",
"tew": "Tewa (USA)",
"tex": "Tennet",
"tey": "Tulishi",
"tez": "Tetserret",
"tfi": "Tofin Gbe",
"tfn": "Tanaina",
"tfo": "Tefaro",
"tfr": "Teribe",
"tft": "Ternate",
"tg": "Tajik",
"tga": "Sagalla",
"tgb": "Tobilung",
"tgc": "Tigak",
"tgd": "Ciwogai",
"tge": "Eastern Gorkha Tamang",
"tgf": "Chalikha",
"tgh": "Tobagonian Creole English",
"tgi": "Lawunuia",
"tgj": "Tagin",
"tgn": "Tandaganon",
"tgo": "Sudest",
"tgp": "Tangoa",
"tgq": "Tring",
"tgr": "Tareng",
"tgs": "Nume",
"tgt": "Central Tagbanwa",
"tgu": "Tanggu",
"tgv": "Tingui-Boto",
"tgw": "Tagwana Senoufo",
"tgx": "Tagish",
"tgy": "Togoyo",
"tgz": "Tagalaka",
"th": "Thai",
"thd": "Kuuk Thaayorre; Thayore",
"the": "Chitwania Tharu",
"thf": "Thangmi",
"thh": "Northern Tarahumara",
"thi": "Tai Long",
"thk": "Tharaka; Kitharaka",
"thl": "Dangaura Tharu",
"thm": "Aheu",
"thn": "Thachanadan",
"thp": "Thompson",
"thq": "Kochila Tharu",
"thr": "Rana Tharu",
"ths": "Thakali",
"tht": "Tahltan",
"thu": "Thuri",
"thv": "Tahaggart Tamahaq",
"thy": "Tha",
"thz": "Tayart Tamajeq",
"ti": "Tigrinya",
"tia": "Tidikelt Tamazight",
"tic": "Tira",
"tif": "Tifal",
"tig": "Tigre",
"tih": "Timugon Murut",
"tii": "Tiene",
"tij": "Tilung",
"tik": "Tikar",
"til": "Tillamook",
"tim": "Timbe",
"tin": "Tindi",
"tio": "Teop",
"tip": "Trimuris",
"tiq": "Tiéfo",
"tis": "Masadiit Itneg",
"tit": "Tinigua",
"tiu": "Adasen",
"tiv": "Tiv",
"tiw": "Tiwi",
"tix": "Southern Tiwa",
"tiy": "Tiruray",
"tiz": "Tai Hongjin",
"tja": "Tajuasohn",
"tjg": "Tunjung",
"tji": "Northern Tujia",
"tjj": "Tjungundji",
"tjl": "Tai Laing",
"tjm": "Timucua",
"tjn": "Tonjon",
"tjo": "Temacine Tamazight",
"tjp": "Tjupany",
"tjs": "Southern Tujia",
"tju": "Tjurruru",
"tjw": "Djabwurrung",
"tk": "Turkmen",
"tka": "Truká",
"tkb": "Buksa",
"tkd": "Tukudede",
"tke": "Takwane",
"tkf": "Tukumanféd",
"tkg": "Tesaka Malagasy",
"tkl": "Tokelau",
"tkm": "Takelma",
"tkn": "Toku-No-Shima",
"tkp": "Tikopia",
"tkq": "Tee",
"tkr": "Tsakhur",
"tks": "Takestani",
"tkt": "Kathoriya Tharu",
"tku": "Upper Necaxa Totonac",
"tkv": "Mur Pano",
"tkw": "Teanu",
"tkx": "Tangko",
"tkz": "Takua",
"tl": "Tagalog",
"tla": "Southwestern Tepehuan",
"tlb": "Tobelo",
"tlc": "Yecuatla Totonac",
"tld": "Talaud",
"tlf": "Telefol",
"tlg": "Tofanma",
"tlh": "Klingon; tlhIngan Hol",
"tli": "Tlingit",
"tlj": "Talinga-Bwisi",
"tlk": "Taloki",
"tll": "Tetela",
"tlm": "Tolomako",
"tln": "Talondo'",
"tlo": "Talodi",
"tlp": "Filomena Mata-Coahuitlán Totonac",
"tlq": "Tai Loi",
"tlr": "Talise",
"tls": "Tambotalo",
"tlt": "Sou Nama; Teluti",
"tlu": "Tulehu",
"tlv": "Taliabu",
"tlx": "Khehek",
"tly": "Talysh",
"tma": "Tama (Chad)",
"tmb": "Katbol; Avava",
"tmc": "Tumak",
"tmd": "Haruai",
"tme": "Tremembé",
"tmf": "Toba-Maskoy",
"tmg": "Ternateño",
"tmh": "Tamashek",
"tmi": "Tutuba",
"tmj": "Samarokena",
"tmk": "Northwestern Tamang",
"tml": "Tamnim Citak",
"tmm": "Tai Thanh",
"tmn": "Taman (Indonesia)",
"tmo": "Temoq",
"tmq": "Tumleo",
"tmr": "Jewish Babylonian Aramaic (ca. 200-1200 CE)",
"tms": "Tima",
"tmt": "Tasmate",
"tmu": "Iau",
"tmv": "Tembo (Motembo)",
"tmw": "Temuan",
"tmy": "Tami",
"tmz": "Tamanaku",
"tn": "Tswana",
"tna": "Tacana",
"tnb": "Western Tunebo",
"tnc": "Tanimuca-Retuarã",
"tnd": "Angosturas Tunebo",
"tng": "Tobanga",
"tnh": "Maiani",
"tni": "Tandia",
"tnk": "Kwamera",
"tnl": "Lenakel",
"tnm": "Tabla",
"tnn": "North Tanna",
"tno": "Toromono",
"tnp": "Whitesands",
"tnq": "Taino",
"tnr": "Ménik",
"tns": "Tenis",
"tnt": "Tontemboan",
"tnu": "Tay Khang",
"tnv": "Tangchangya",
"tnw": "Tonsawang",
"tnx": "Tanema",
"tny": "Tongwe",
"tnz": "Ten'edn",
"to": "Tonga (Tonga Islands)",
"tob": "Toba",
"toc": "Coyutla Totonac",
"tod": "Toma",
"tof": "Gizrra",
"tog": "Tonga (Nyasa)",
"toh": "Gitonga",
"toi": "Tonga (Zambia)",
"toj": "Tojolabal",
"tok": "Toki Pona",
"tol": "Tolowa",
"tom": "Tombulu",
"too": "Xicotepec De Juárez Totonac",
"top": "Papantla Totonac",
"toq": "Toposa",
"tor": "Togbo-Vara Banda",
"tos": "Highland Totonac",
"tou": "Tho",
"tov": "Upper Taromi",
"tow": "Jemez",
"tox": "Tobian",
"toy": "Topoiyo",
"toz": "To",
"tpa": "Taupota",
"tpc": "Azoyú Me'phaa; Azoyú Tlapanec",
"tpe": "Tippera",
"tpf": "Tarpia",
"tpg": "Kula",
"tpi": "Tok Pisin",
"tpj": "Tapieté",
"tpk": "Tupinikin",
"tpl": "Tlacoapa Me'phaa; Tlacoapa Tlapanec",
"tpm": "Tampulma",
"tpn": "Tupinambá",
"tpo": "Tai Pao",
"tpp": "Pisaflores Tepehua",
"tpq": "Tukpa",
"tpr": "Tuparí",
"tpt": "Tlachichilco Tepehua",
"tpu": "Tampuan",
"tpv": "Tanapag",
"tpw": "Tupí",
"tpx": "Acatepec Me'phaa; Acatepec Tlapanec",
"tpy": "Trumai",
"tpz": "Tinputz",
"tqb": "Tembé",
"tql": "Lehali",
"tqm": "Turumsa",
"tqn": "Tenino",
"tqo": "Toaripi",
"tqp": "Tomoip",
"tqq": "Tunni",
"tqr": "Torona",
"tqt": "Western Totonac",
"tqu": "Touo",
"tqw": "Tonkawa",
"tr": "Turkish",
"tra": "Tirahi",
"trb": "Terebu",
"trc": "Copala Triqui",
"trd": "Turi",
"tre": "East Tarangan",
"trf": "Trinidadian Creole English",
"trg": "Lishán Didán",
"trh": "Turaka",
"tri": "Trió",
"trj": "Toram",
"trk": "Turkic languages",
"trl": "Traveller Scottish",
"trm": "Tregami",
"trn": "Trinitario",
"tro": "Tarao Naga",
"trp": "Kok Borok",
"trq": "San Martín Itunyoso Triqui",
"trr": "Taushiro",
"trs": "Chicahuaxtla Triqui",
"trt": "Tunggare",
"tru": "Turoyo; Surayt",
"trv": "Sediq; Seediq; Taroko",
"trw": "Torwali",
"trx": "Tringgus-Sembaan Bidayuh",
"try": "Turung",
"trz": "Torá",
"ts": "Tsonga",
"tsa": "Tsaangi",
"tsb": "Tsamai",
"tsc": "Tswa",
"tsd": "Tsakonian",
"tse": "Tunisian Sign Language",
"tsg": "Tausug",
"tsh": "Tsuvan",
"tsi": "Tsimshian",
"tsj": "Tshangla",
"tsk": "Tseku",
"tsl": "Ts'ün-Lao",
"tsm": "Turkish Sign Language; Türk İşaret Dili",
"tsp": "Northern Toussian",
"tsq": "Thai Sign Language",
"tsr": "Akei",
"tss": "Taiwan Sign Language",
"tst": "Tondi Songway Kiini",
"tsu": "Tsou",
"tsv": "Tsogo",
"tsw": "Tsishingini",
"tsx": "Mubami",
"tsy": "Tebul Sign Language",
"tsz": "Purepecha",
"tt": "Tatar",
"tta": "Tutelo",
"ttb": "Gaa",
"ttc": "Tektiteko",
"ttd": "Tauade",
"tte": "Bwanabwana",
"ttf": "Tuotomb",
"ttg": "Tutong",
"tth": "Upper Ta'oih",
"tti": "Tobati",
"ttj": "Tooro",
"ttk": "Totoro",
"ttl": "Totela",
"ttm": "Northern Tutchone",
"ttn": "Towei",
"tto": "Lower Ta'oih",
"ttp": "Tombelala",
"ttq": "Tawallammat Tamajaq",
"ttr": "Tera",
"tts": "Northeastern Thai",
"ttt": "Muslim Tat",
"ttu": "Torau",
"ttv": "Titan",
"ttw": "Long Wat",
"tty": "Sikaritai",
"ttz": "Tsum",
"tua": "Wiarumus",
"tub": "Tübatulabal",
"tuc": "Mutu",
"tud": "Tuxá",
"tue": "Tuyuca",
"tuf": "Central Tunebo",
"tug": "Tunia",
"tuh": "Taulil",
"tui": "Tupuri",
"tuj": "Tugutil",
"tul": "Tula",
"tum": "Tumbuka",
"tun": "Tunica",
"tuo": "Tucano",
"tup": "Tupi languages",
"tuq": "Tedaga",
"tus": "Tuscarora",
"tut": "Altaic languages",
"tuu": "Tututni",
"tuv": "Turkana",
"tuw": "Tungus languages",
"tux": "Tuxináwa",
"tuy": "Tugen",
"tuz": "Turka",
"tva": "Vaghua",
"tvd": "Tsuvadi",
"tve": "Te'un",
"tvk": "Southeast Ambrym",
"tvl": "Tuvalu",
"tvm": "Tela-Masbuar",
"tvn": "Tavoyan",
"tvo": "Tidore",
"tvs": "Taveta",
"tvt": "Tutsa Naga",
"tvu": "Tunen",
"tvw": "Sedoa",
"tvx": "Taivoan",
"tvy": "Timor Pidgin",
"tw": "Twi",
"twa": "Twana",
"twb": "Western Tawbuid",
"twc": "Teshenawa",
"twd": "Twents",
"twe": "Tewa (Indonesia)",
"twf": "Northern Tiwa",
"twg": "Tereweng",
"twh": "Tai Dón",
"twl": "Tawara",
"twm": "Tawang Monpa",
"twn": "Twendi",
"two": "Tswapong",
"twp": "Ere",
"twq": "Tasawaq",
"twr": "Southwestern Tarahumara",
"twt": "Turiwára",
"twu": "Termanu",
"tww": "Tuwari",
"twx": "Tewe",
"twy": "Tawoyan",
"txa": "Tombonuo",
"txb": "Tokharian B",
"txc": "Tsetsaut",
"txe": "Totoli",
"txg": "Tangut",
"txh": "Thracian",
"txi": "Ikpeng",
"txj": "Tarjumo",
"txm": "Tomini",
"txn": "West Tarangan",
"txo": "Toto",
"txq": "Tii",
"txr": "Tartessian",
"txs": "Tonsea",
"txt": "Citak",
"txu": "Kayapó",
"txx": "Tatana",
"txy": "Tanosy Malagasy",
"ty": "Tahitian",
"tya": "Tauya",
"tye": "Kyanga",
"tyh": "O'du",
"tyi": "Teke-Tsaayi",
"tyj": "Tai Do; Tai Yo",
"tyl": "Thu Lao",
"tyn": "Kombai",
"typ": "Thaypan",
"tyr": "Tai Daeng",
"tys": "Tày Sa Pa",
"tyt": "Tày Tac",
"tyu": "Kua",
"tyv": "Tuvinian",
"tyx": "Teke-Tyee",
"tyy": "Tiyaa",
"tyz": "Tày",
"tza": "Tanzanian Sign Language",
"tzh": "Tzeltal",
"tzj": "Tz'utujil",
"tzl": "Talossan",
"tzm": "Central Atlas Tamazight",
"tzn": "Tugun",
"tzo": "Tzotzil",
"tzx": "Tabriak",
"uam": "Uamué",
"uan": "Kuan",
"uar": "Tairuma",
"uba": "Ubang",
"ubi": "Ubi",
"ubl": "Buhi'non Bikol",
"ubr": "Ubir",
"ubu": "Umbu-Ungu",
"uby": "Ubykh",
"uda": "Uda",
"ude": "Udihe",
"udg": "Muduga",
"udi": "Udi",
"udj": "Ujir",
"udl": "Wuzlam",
"udm": "Udmurt",
"udu": "Uduk",
"ues": "Kioko",
"ufi": "Ufim",
"ug": "Uighur; Uyghur",
"uga": "Ugaritic",
"ugb": "Kuku-Ugbanh",
"uge": "Ughele",
"ugh": "Kubachi",
"ugn": "Ugandan Sign Language",
"ugo": "Ugong",
"ugy": "Uruguayan Sign Language",
"uha": "Uhami",
"uhn": "Damal",
"uis": "Uisai",
"uiv": "Iyive",
"uji": "Tanjijili",
"uk": "Ukrainian",
"uka": "Kaburi",
"ukg": "Ukuriguma",
"ukh": "Ukhwejo",
"uki": "Kui (India)",
"ukk": "Muak Sa-aak",
"ukl": "Ukrainian Sign Language",
"ukp": "Ukpe-Bayobiri",
"ukq": "Ukwa",
"uks": "Urubú-Kaapor Sign Language; Kaapor Sign Language",
"uku": "Ukue",
"ukv": "Kuku",
"ukw": "Ukwuani-Aboh-Ndoni",
"uky": "Kuuk-Yak",
"ula": "Fungwa",
"ulb": "Ulukwumi",
"ulc": "Ulch",
"ule": "Lule",
"ulf": "Usku; Afra",
"uli": "Ulithian",
"ulk": "Meriam Mir",
"ull": "Ullatan",
"ulm": "Ulumanda'",
"uln": "Unserdeutsch",
"ulu": "Uma' Lung",
"ulw": "Ulwa",
"uma": "Umatilla",
"umb": "Umbundu",
"umc": "Marrucinian",
"umd": "Umbindhamu",
"umg": "Morrobalama; Umbuygamu",
"umi": "Ukit",
"umm": "Umon",
"umn": "Makyan Naga",
"umo": "Umotína",
"ump": "Umpila",
"umr": "Umbugarla",
"ums": "Pendau",
"umu": "Munsee",
"una": "North Watut",
"und": "Undetermined",
"une": "Uneme",
"ung": "Ngarinyin",
"uni": "Uni",
"unk": "Enawené-Nawé",
"unm": "Unami",
"unn": "Kurnai",
"unr": "Mundari",
"unu": "Unubahe",
"unx": "Munda",
"unz": "Unde Kaili",
"uon": "Kulon",
"upi": "Umeda",
"upv": "Uripiv-Wala-Rano-Atchin",
"ur": "Urdu",
"ura": "Urarina",
"urb": "Urubú-Kaapor; Kaapor",
"urc": "Urningangg",
"ure": "Uru",
"urf": "Uradhi",
"urg": "Urigina",
"urh": "Urhobo",
"uri": "Urim",
"urj": "Uralic languages",
"urk": "Urak Lawoi'",
"url": "Urali",
"urm": "Urapmin",
"urn": "Uruangnirin",
"uro": "Ura (Papua New Guinea)",
"urp": "Uru-Pa-In",
"urr": "Lehalurup; Löyöp",
"urt": "Urat",
"uru": "Urumi",
"urv": "Uruava",
"urw": "Sop",
"urx": "Urimo",
"ury": "Orya",
"urz": "Uru-Eu-Wau-Wau",
"usa": "Usarufa",
"ush": "Ushojo",
"usi": "Usui",
"usk": "Usaghade",
"usp": "Uspanteco",
"uss": "us-Saare",
"usu": "Uya",
"uta": "Otank",
"ute": "Ute-Southern Paiute",
"uth": "ut-Hun",
"utp": "Amba (Solomon Islands)",
"utr": "Etulo",
"utu": "Utu",
"uum": "Urum",
"uur": "Ura (Vanuatu)",
"uuu": "U",
"uve": "West Uvean; Fagauvea",
"uvh": "Uri",
"uvl": "Lote",
"uwa": "Kuku-Uwanh",
"uya": "Doko-Uyanga",
"uz": "Uzbek",
"uzn": "Northern Uzbek",
"uzs": "Southern Uzbek",
"vaa": "Vaagri Booli",
"vae": "Vale",
"vaf": "Vafsi",
"vag": "Vagla",
"vah": "Varhadi-Nagpuri",
"vai": "Vai",
"vaj": "Sekele; Northwestern ǃKung; Vasekele",
"val": "Vehes",
"vam": "Vanimo",
"van": "Valman",
"vao": "Vao",
"vap": "Vaiphei",
"var": "Huarijio",
"vas": "Vasavi",
"vau": "Vanuma",
"vav": "Varli",
"vay": "Wayu",
"vbb": "Southeast Babar",
"vbk": "Southwestern Bontok",
"ve": "Venda",
"vec": "Venetian",
"ved": "Veddah",
"vel": "Veluws",
"vem": "Vemgo-Mabas",
"veo": "Ventureño",
"vep": "Veps",
"ver": "Mom Jango",
"vgr": "Vaghri",
"vgt": "Vlaamse Gebarentaal; Flemish Sign Language",
"vi": "Vietnamese",
"vic": "Virgin Islands Creole English",
"vid": "Vidunda",
"vif": "Vili",
"vig": "Viemo",
"vil": "Vilela",
"vin": "Vinza",
"vis": "Vishavan",
"vit": "Viti",
"viv": "Iduna",
"vka": "Kariyarra",
"vkj": "Kujarge",
"vkk": "Kaur",
"vkl": "Kulisusu",
"vkm": "Kamakan",
"vkn": "Koro Nulu",
"vko": "Kodeoha",
"vkp": "Korlai Creole Portuguese",
"vkt": "Tenggarong Kutai Malay",
"vku": "Kurrama",
"vkz": "Koro Zuba",
"vlp": "Valpei",
"vls": "Vlaams",
"vma": "Martuyhunira",
"vmb": "Barbaram",
"vmc": "Juxtlahuaca Mixtec",
"vmd": "Mudu Koraga",
"vme": "East Masela",
"vmf": "Mainfränkisch",
"vmg": "Lungalunga",
"vmh": "Maraghei",
"vmi": "Miwa",
"vmj": "Ixtayutla Mixtec",
"vmk": "Makhuwa-Shirima",
"vml": "Malgana",
"vmm": "Mitlatongo Mixtec",
"vmp": "Soyaltepec Mazatec",
"vmq": "Soyaltepec Mixtec",
"vmr": "Marenje",
"vms": "Moksela",
"vmu": "Muluridyi",
"vmv": "Valley Maidu",
"vmw": "Makhuwa",
"vmx": "Tamazola Mixtec",
"vmy": "Ayautla Mazatec",
"vmz": "Mazatlán Mazatec",
"vnk": "Vano; Lovono",
"vnm": "Vinmavis; Neve'ei",
"vnp": "Vunapu",
"vo": "Volapük",
"vor": "Voro",
"vot": "Votic",
"vra": "Vera'a",
"vro": "Võro",
"vrs": "Varisi",
"vrt": "Burmbar; Banam Bay",
"vsi": "Moldova Sign Language",
"vsl": "Venezuelan Sign Language",
"vsv": "Valencian Sign Language; Llengua de signes valenciana",
"vto": "Vitou",
"vum": "Vumbu",
"vun": "Vunjo",
"vut": "Vute",
"vwa": "Awa (China)",
"wa": "Walloon",
"waa": "Walla Walla",
"wab": "Wab",
"wac": "Wasco-Wishram",
"wad": "Wamesa; Wondama",
"wae": "Walser",
"waf": "Wakoná",
"wag": "Wa'ema",
"wah": "Watubela",
"wai": "Wares",
"waj": "Waffa",
"wak": "Wakashan languages",
"wal": "Wolaytta; Wolaitta",
"wam": "Wampanoag",
"wan": "Wan",
"wao": "Wappo",
"wap": "Wapishana",
"waq": "Wagiman",
"war": "Waray (Philippines)",
"was": "Washo",
"wat": "Kaninuwa",
"wau": "Waurá",
"wav": "Waka",
"waw": "Waiwai",
"wax": "Watam; Marangis",
"way": "Wayana",
"waz": "Wampur",
"wba": "Warao",
"wbb": "Wabo",
"wbe": "Waritai",
"wbf": "Wara",
"wbh": "Wanda",
"wbi": "Vwanji",
"wbj": "Alagwa",
"wbk": "Waigali",
"wbl": "Wakhi",
"wbm": "Wa",
"wbp": "Warlpiri",
"wbq": "Waddar",
"wbr": "Wagdi",
"wbs": "West Bengal Sign Language",
"wbt": "Warnman",
"wbv": "Wajarri",
"wbw": "Woi",
"wca": "Yanomámi",
"wci": "Waci Gbe",
"wdd": "Wandji",
"wdg": "Wadaginam",
"wdj": "Wadjiginy",
"wdk": "Wadikali",
"wdt": "Wendat",
"wdu": "Wadjigu",
"wdy": "Wadjabangayi",
"wea": "Wewaw",
"wec": "Wè Western",
"wed": "Wedau",
"weg": "Wergaia",
"weh": "Weh",
"wei": "Kiunum",
"wem": "Weme Gbe",
"wen": "Sorbian languages",
"weo": "Wemale",
"wep": "Westphalien",
"wer": "Weri",
"wes": "Cameroon Pidgin",
"wet": "Perai",
"weu": "Rawngtu Chin",
"wew": "Wejewa",
"wfg": "Yafi; Zorop",
"wga": "Wagaya",
"wgb": "Wagawaga",
"wgg": "Wangkangurru; Wangganguru",
"wgi": "Wahgi",
"wgo": "Waigeo",
"wgu": "Wirangu",
"wgy": "Warrgamay",
"wha": "Sou Upaa; Manusela",
"whg": "North Wahgi",
"whk": "Wahau Kenyah",
"whu": "Wahau Kayan",
"wib": "Southern Toussian",
"wic": "Wichita",
"wie": "Wik-Epa",
"wif": "Wik-Keyangan",
"wig": "Wik Ngathan",
"wih": "Wik-Me'anha",
"wii": "Minidien",
"wij": "Wik-Iiyanh",
"wik": "Wikalkan",
"wil": "Wilawila",
"wim": "Wik-Mungkan",
"win": "Ho-Chunk",
"wir": "Wiraféd",
"wiu": "Wiru",
"wiv": "Vitu",
"wiy": "Wiyot",
"wja": "Waja",
"wji": "Warji",
"wka": "Kw'adza",
"wkb": "Kumbaran",
"wkd": "Wakde; Mo",
"wkl": "Kalanadi",
"wkr": "Keerray-Woorroong",
"wku": "Kunduvadi",
"wkw": "Wakawaka",
"wky": "Wangkayutyuru",
"wla": "Walio",
"wlc": "Mwali Comorian",
"wle": "Wolane",
"wlg": "Kunbarlang",
"wlh": "Welaun",
"wli": "Waioli",
"wlk": "Wailaki",
"wll": "Wali (Sudan)",
"wlm": "Middle Welsh",
"wlo": "Wolio",
"wlr": "Wailapa",
"wls": "Wallisian",
"wlu": "Wuliwuli",
"wlv": "Wichí Lhamtés Vejoz",
"wlw": "Walak",
"wlx": "Wali (Ghana)",
"wly": "Waling",
"wma": "Mawa (Nigeria)",
"wmb": "Wambaya",
"wmc": "Wamas",
"wmd": "Mamaindé",
"wme": "Wambule",
"wmg": "Western Minyag",
"wmh": "Waima'a",
"wmi": "Wamin",
"wmm": "Maiwa (Indonesia)",
"wmn": "Waamwang",
"wmo": "Wom (Papua New Guinea)",
"wms": "Wambon",
"wmt": "Walmajarri",
"wmw": "Mwani",
"wmx": "Womo",
"wnb": "Wanambre",
"wnc": "Wantoat",
"wnd": "Wandarang",
"wne": "Waneci",
"wng": "Wanggom",
"wni": "Ndzwani Comorian",
"wnk": "Wanukaka",
"wnm": "Wanggamala",
"wnn": "Wunumara",
"wno": "Wano",
"wnp": "Wanap",
"wnu": "Usan",
"wnw": "Wintu",
"wny": "Wanyi; Waanyi",
"wo": "Wolof",
"woa": "Kuwema; Tyaraity",
"wob": "Wè Northern",
"woc": "Wogeo",
"wod": "Wolani",
"woe": "Woleaian",
"wof": "Gambian Wolof",
"wog": "Wogamusin",
"woi": "Kamang",
"wok": "Longto",
"wom": "Wom (Nigeria)",
"won": "Wongo",
"woo": "Manombai",
"wor": "Woria",
"wos": "Hanga Hundi",
"wow": "Wawonii",
"woy": "Weyto",
"wpc": "Maco",
"wrb": "Waluwarra; Warluwara",
"wrg": "Warungu; Gudjal",
"wrh": "Wiradjuri",
"wri": "Wariyangga",
"wrk": "Garrwa",
"wrl": "Warlmanpa",
"wrm": "Warumungu",
"wrn": "Warnang",
"wro": "Worrorra",
"wrp": "Waropen",
"wrr": "Wardaman",
"wrs": "Waris",
"wru": "Waru",
"wrv": "Waruna",
"wrw": "Gugu Warra",
"wrx": "Wae Rana",
"wry": "Merwari",
"wrz": "Waray (Australia)",
"wsa": "Warembori",
"wsg": "Adilabad Gondi",
"wsi": "Wusi",
"wsk": "Waskia",
"wsr": "Owenia",
"wss": "Wasa",
"wsu": "Wasu",
"wsv": "Wotapuri-Katarqalai",
"wtf": "Watiwa",
"wth": "Wathawurrung",
"wti": "Berta",
"wtk": "Watakataui",
"wtm": "Mewati",
"wtw": "Wotu",
"wua": "Wikngenchera",
"wub": "Wunambal",
"wud": "Wudu",
"wuh": "Wutunhua",
"wul": "Silimo",
"wum": "Wumbvu",
"wun": "Bungu",
"wur": "Wurrugu",
"wut": "Wutung",
"wuu": "Wu Chinese",
"wuv": "Wuvulu-Aua",
"wux": "Wulna",
"wuy": "Wauyai",
"wwa": "Waama",
"wwb": "Wakabunga",
"wwo": "Wetamut; Dorig",
"wwr": "Warrwa",
"www": "Wawa",
"wxa": "Waxianghua",
"wxw": "Wardandi",
"wyb": "Wangaaybuwan-Ngiyambaa",
"wyi": "Woiwurrung",
"wym": "Wymysorys",
"wyn": "Wyandot",
"wyr": "Wayoró",
"wyy": "Western Fijian",
"xaa": "Andalusian Arabic",
"xab": "Sambe",
"xac": "Kachari",
"xad": "Adai",
"xae": "Aequian",
"xag": "Aghwan",
"xai": "Kaimbé",
"xaj": "Ararandewára",
"xak": "Máku",
"xal": "Kalmyk; Oirat",
"xam": "ǀXam",
"xan": "Xamtanga",
"xao": "Khao",
"xap": "Apalachee",
"xaq": "Aquitanian",
"xar": "Karami",
"xas": "Kamas",
"xat": "Katawixi",
"xau": "Kauwera",
"xav": "Xavánte",
"xaw": "Kawaiisu",
"xay": "Kayan Mahakam",
"xbb": "Lower Burdekin",
"xbc": "Bactrian",
"xbd": "Bindal",
"xbe": "Bigambal",
"xbg": "Bunganditj",
"xbi": "Kombio",
"xbj": "Birrpayi",
"xbm": "Middle Breton",
"xbn": "Kenaboi",
"xbo": "Bolgarian",
"xbp": "Bibbulman",
"xbr": "Kambera",
"xbw": "Kambiwá",
"xby": "Batjala; Batyala",
"xcb": "Cumbric",
"xcc": "Camunic",
"xce": "Celtiberian",
"xcg": "Cisalpine Gaulish",
"xch": "Chemakum; Chimakum",
"xcl": "Classical Armenian",
"xcm": "Comecrudo",
"xcn": "Cotoname",
"xco": "Chorasmian",
"xcr": "Carian",
"xct": "Classical Tibetan",
"xcu": "Curonian",
"xcv": "Chuvantsy",
"xcw": "Coahuilteco",
"xcy": "Cayuse",
"xda": "Darkinyung",
"xdc": "Dacian",
"xdk": "Dharuk",
"xdm": "Edomite",
"xdo": "Kwandu",
"xdq": "Kaitag",
"xdy": "Malayic Dayak",
"xeb": "Eblan",
"xed": "Hdi",
"xeg": "ǁXegwi",
"xel": "Kelo",
"xem": "Kembayan",
"xep": "Epi-Olmec",
"xer": "Xerénte",
"xes": "Kesawai",
"xet": "Xetá",
"xeu": "Keoru-Ahia",
"xfa": "Faliscan",
"xga": "Galatian",
"xgb": "Gbin",
"xgd": "Gudang",
"xgf": "Gabrielino-Fernandeño",
"xgg": "Goreng",
"xgi": "Garingbal",
"xgl": "Galindan",
"xgm": "Dharumbal; Guwinmal",
"xgn": "Mongolian languages",
"xgr": "Garza",
"xgu": "Unggumi",
"xgw": "Guwa",
"xh": "Xhosa",
"xha": "Harami",
"xhc": "Hunnic",
"xhd": "Hadrami",
"xhe": "Khetrani",
"xhm": "Middle Khmer (1400 to 1850 CE)",
"xhr": "Hernican",
"xht": "Hattic",
"xhu": "Hurrian",
"xhv": "Khua",
"xib": "Iberian",
"xii": "Xiri",
"xil": "Illyrian",
"xin": "Xinca",
"xir": "Xiriâna",
"xis": "Kisan",
"xiv": "Indus Valley Language",
"xiy": "Xipaya",
"xjb": "Minjungbal",
"xjt": "Jaitmatang",
"xka": "Kalkoti",
"xkb": "Northern Nago",
"xkc": "Kho'ini",
"xkd": "Mendalam Kayan",
"xke": "Kereho",
"xkf": "Khengkha",
"xkg": "Kagoro",
"xki": "Kenyan Sign Language",
"xkj": "Kajali",
"xkk": "Kachok; Kaco'",
"xkl": "Mainstream Kenyah",
"xkn": "Kayan River Kayan",
"xko": "Kiorr",
"xkp": "Kabatei",
"xkq": "Koroni",
"xkr": "Xakriabá",
"xks": "Kumbewaha",
"xkt": "Kantosi",
"xku": "Kaamba",
"xkv": "Kgalagadi",
"xkw": "Kembra",
"xkx": "Karore",
"xky": "Uma' Lasan",
"xkz": "Kurtokha",
"xla": "Kamula",
"xlb": "Loup B",
"xlc": "Lycian",
"xld": "Lydian",
"xle": "Lemnian",
"xlg": "Ligurian (Ancient)",
"xli": "Liburnian",
"xln": "Alanic",
"xlo": "Loup A",
"xlp": "Lepontic",
"xls": "Lusitanian",
"xlu": "Cuneiform Luwian",
"xly": "Elymian",
"xma": "Mushungulu",
"xmb": "Mbonga",
"xmc": "Makhuwa-Marrevone",
"xmd": "Mbudum",
"xme": "Median",
"xmf": "Mingrelian",
"xmg": "Mengaka",
"xmh": "Kugu-Muminh",
"xmj": "Majera",
"xmk": "Ancient Macedonian",
"xml": "Malaysian Sign Language",
"xmm": "Manado Malay",
"xmn": "Manichaean Middle Persian",
"xmo": "Morerebi",
"xmp": "Kuku-Mu'inh",
"xmq": "Kuku-Mangk",
"xmr": "Meroitic",
"xms": "Moroccan Sign Language",
"xmt": "Matbat",
"xmu": "Kamu",
"xmv": "Antankarana Malagasy; Tankarana Malagasy",
"xmw": "Tsimihety Malagasy",
"xmx": "Salawati; Maden",
"xmy": "Mayaguduna",
"xmz": "Mori Bawah",
"xna": "Ancient North Arabian",
"xnb": "Kanakanabu",
"xnd": "Na-Dene languages",
"xng": "Middle Mongolian",
"xnh": "Kuanhua",
"xni": "Ngarigu",
"xnj": "Ngoni (Tanzania)",
"xnk": "Nganakarti",
"xnm": "Ngumbarl",
"xnn": "Northern Kankanay",
"xno": "Anglo-Norman",
"xnq": "Ngoni (Mozambique)",
"xnr": "Kangri",
"xns": "Kanashi",
"xnt": "Narragansett",
"xnu": "Nukunul",
"xny": "Nyiyaparli",
"xnz": "Kenzi; Mattoki",
"xoc": "O'chi'chi'",
"xod": "Kokoda",
"xog": "Soga",
"xoi": "Kominimung",
"xok": "Xokleng",
"xom": "Komo (Sudan)",
"xon": "Konkomba",
"xoo": "Xukurú",
"xop": "Kopar",
"xor": "Korubo",
"xow": "Kowaki",
"xpa": "Pirriya",
"xpb": "Northeastern Tasmanian; Pyemmairrener",
"xpc": "Pecheneg",
"xpd": "Oyster Bay Tasmanian",
"xpe": "Liberia Kpelle",
"xpf": "Southeast Tasmanian; Nuenonne",
"xpg": "Phrygian",
"xph": "North Midlands Tasmanian; Tyerrenoterpanner",
"xpi": "Pictish",
"xpj": "Mpalitjanh",
"xpk": "Kulina Pano",
"xpl": "Port Sorell Tasmanian",
"xpm": "Pumpokol",
"xpn": "Kapinawá",
"xpo": "Pochutec",
"xpp": "Puyo-Paekche",
"xpq": "Mohegan-Pequot",
"xpr": "Parthian",
"xps": "Pisidian",
"xpt": "Punthamara",
"xpu": "Punic",
"xpv": "Northern Tasmanian; Tommeginne",
"xpw": "Northwestern Tasmanian; Peerapper",
"xpx": "Southwestern Tasmanian; Toogee",
"xpy": "Puyo",
"xpz": "Bruny Island Tasmanian",
"xqa": "Karakhanid",
"xqt": "Qatabanian",
"xra": "Krahô",
"xrb": "Eastern Karaboro",
"xrd": "Gundungurra",
"xre": "Kreye",
"xrg": "Minang",
"xri": "Krikati-Timbira",
"xrm": "Armazic",
"xrn": "Arin",
"xrr": "Raetic",
"xrt": "Aranama-Tamique",
"xru": "Marriammu",
"xrw": "Karawa",
"xsa": "Sabaean",
"xsb": "Sambal",
"xsc": "Scythian",
"xsd": "Sidetic",
"xse": "Sempan",
"xsh": "Shamang",
"xsi": "Sio",
"xsj": "Subi",
"xsl": "South Slavey",
"xsm": "Kasem",
"xsn": "Sanga (Nigeria)",
"xso": "Solano",
"xsp": "Silopi",
"xsq": "Makhuwa-Saka",
"xsr": "Sherpa",
"xss": "Assan",
"xsu": "Sanumá",
"xsv": "Sudovian",
"xsy": "Saisiyat",
"xta": "Alcozauca Mixtec",
"xtb": "Chazumba Mixtec",
"xtc": "Katcha-Kadugli-Miri",
"xtd": "Diuxi-Tilantongo Mixtec",
"xte": "Ketengban",
"xtg": "Transalpine Gaulish",
"xth": "Yitha Yitha",
"xti": "Sinicahua Mixtec",
"xtj": "San Juan Teita Mixtec",
"xtl": "Tijaltepec Mixtec",
"xtm": "Magdalena Peñasco Mixtec",
"xtn": "Northern Tlaxiaco Mixtec",
"xto": "Tokharian A",
"xtp": "San Miguel Piedras Mixtec",
"xtq": "Tumshuqese",
"xtr": "Early Tripuri",
"xts": "Sindihui Mixtec",
"xtt": "Tacahua Mixtec",
"xtu": "Cuyamecalco Mixtec",
"xtv": "Thawa",
"xtw": "Tawandê",
"xty": "Yoloxochitl Mixtec",
"xua": "Alu Kurumba",
"xub": "Betta Kurumba",
"xud": "Umiida",
"xug": "Kunigami",
"xuj": "Jennu Kurumba",
"xul": "Ngunawal; Nunukul",
"xum": "Umbrian",
"xun": "Unggaranggu",
"xuo": "Kuo",
"xup": "Upper Umpqua",
"xur": "Urartian",
"xut": "Kuthant",
"xuu": "Kxoe; Khwedam",
"xve": "Venetic",
"xvi": "Kamviri",
"xvn": "Vandalic",
"xvo": "Volscian",
"xvs": "Vestinian",
"xwa": "Kwaza",
"xwc": "Woccon",
"xwd": "Wadi Wadi",
"xwe": "Xwela Gbe",
"xwg": "Kwegu",
"xwj": "Wajuk",
"xwk": "Wangkumara",
"xwl": "Western Xwla Gbe",
"xwo": "Written Oirat",
"xwr": "Kwerba Mamberamo",
"xwt": "Wotjobaluk",
"xww": "Wemba Wemba",
"xxb": "Boro (Ghana)",
"xxk": "Ke'o",
"xxm": "Minkin",
"xxr": "Koropó",
"xxt": "Tambora",
"xya": "Yaygir",
"xyb": "Yandjibara",
"xyj": "Mayi-Yapi",
"xyk": "Mayi-Kulan",
"xyl": "Yalakalore",
"xyt": "Mayi-Thakurti",
"xyy": "Yorta Yorta",
"xzh": "Zhang-Zhung",
"xzm": "Zemgalian",
"xzp": "Ancient Zapotec",
"yaa": "Yaminahua",
"yab": "Yuhup",
"yac": "Pass Valley Yali",
"yad": "Yagua",
"yae": "Pumé",
"yaf": "Yaka (Democratic Republic of Congo)",
"yag": "Yámana",
"yah": "Yazgulyam",
"yai": "Yagnobi",
"yaj": "Banda-Yangere",
"yak": "Yakama",
"yal": "Yalunka",
"yam": "Yamba",
"yan": "Mayangna",
"yao": "Yao",
"yap": "Yapese",
"yaq": "Yaqui",
"yar": "Yabarana",
"yas": "Nugunu (Cameroon)",
"yat": "Yambeta",
"yau": "Yuwana",
"yav": "Yangben",
"yaw": "Yawalapití",
"yax": "Yauma",
"yay": "Agwagwune",
"yaz": "Lokaa",
"yba": "Yala",
"ybb": "Yemba",
"ybe": "West Yugur",
"ybh": "Yakha",
"ybi": "Yamphu",
"ybj": "Hasha",
"ybk": "Bokha",
"ybl": "Yukuben",
"ybm": "Yaben",
"ybn": "Yabaâna",
"ybo": "Yabong",
"ybx": "Yawiyo",
"yby": "Yaweyuha",
"ych": "Chesu",
"ycl": "Lolopo",
"ycn": "Yucuna",
"ycp": "Chepya",
"yda": "Yanda",
"ydd": "Eastern Yiddish",
"yde": "Yangum Dey",
"ydg": "Yidgha",
"ydk": "Yoidik",
"yea": "Ravula",
"yec": "Yeniche",
"yee": "Yimas",
"yei": "Yeni",
"yej": "Yevanic",
"yel": "Yela",
"yer": "Tarok",
"yes": "Nyankpa",
"yet": "Yetfa",
"yeu": "Yerukula",
"yev": "Yapunda",
"yey": "Yeyi",
"yga": "Malyangapa",
"ygi": "Yiningayi",
"ygl": "Yangum Gel",
"ygm": "Yagomi",
"ygp": "Gepo",
"ygr": "Yagaria",
"ygs": "Yolŋu Sign Language",
"ygu": "Yugul",
"ygw": "Yagwoia",
"yha": "Baha Buyang",
"yhd": "Judeo-Iraqi Arabic",
"yhl": "Hlepho Phowa",
"yhs": "Yan-nhaŋu Sign Language",
"yi": "Yiddish",
"yia": "Yinggarda",
"yif": "Ache",
"yig": "Wusa Nasu",
"yih": "Western Yiddish",
"yii": "Yidiny",
"yij": "Yindjibarndi",
"yik": "Dongshanba Lalo",
"yil": "Yindjilandji",
"yim": "Yimchungru Naga",
"yin": "Riang Lai; Yinchia",
"yip": "Pholo",
"yiq": "Miqie",
"yir": "North Awyu",
"yis": "Yis",
"yit": "Eastern Lalu",
"yiu": "Awu",
"yiv": "Northern Nisu",
"yix": "Axi Yi",
"yiz": "Azhe",
"yka": "Yakan",
"ykg": "Northern Yukaghir",
"yki": "Yoke",
"ykk": "Yakaikeke",
"ykl": "Khlula",
"ykm": "Kap",
"ykn": "Kua-nsi",
"yko": "Yasa",
"ykr": "Yekora",
"ykt": "Kathu",
"yku": "Kuamasi",
"yky": "Yakoma",
"yla": "Yaul",
"ylb": "Yaleba",
"yle": "Yele",
"ylg": "Yelogu",
"yli": "Angguruk Yali",
"yll": "Yil",
"ylm": "Limi",
"yln": "Langnian Buyang",
"ylo": "Naluo Yi",
"ylr": "Yalarnnga",
"ylu": "Aribwaung",
"yly": "Nyâlayu; Nyelâyu",
"ymb": "Yambes",
"ymc": "Southern Muji",
"ymd": "Muda",
"yme": "Yameo",
"ymg": "Yamongeri",
"ymh": "Mili",
"ymi": "Moji",
"ymk": "Makwe",
"yml": "Iamalele",
"ymm": "Maay",
"ymn": "Yamna; Sunum",
"ymo": "Yangum Mon",
"ymp": "Yamap",
"ymq": "Qila Muji",
"ymr": "Malasar",
"yms": "Mysian",
"ymx": "Northern Muji",
"ymz": "Muzi",
"yna": "Aluo",
"ynd": "Yandruwandha",
"yne": "Lang'e",
"yng": "Yango",
"ynk": "Naukan Yupik",
"ynl": "Yangulam",
"ynn": "Yana",
"yno": "Yong",
"ynq": "Yendang",
"yns": "Yansi",
"ynu": "Yahuna",
"yo": "Yoruba",
"yob": "Yoba",
"yog": "Yogad",
"yoi": "Yonaguni",
"yok": "Yokuts",
"yol": "Yola",
"yom": "Yombe",
"yon": "Yongkom",
"yot": "Yotti",
"yox": "Yoron",
"yoy": "Yoy",
"ypa": "Phala",
"ypb": "Labo Phowa",
"ypg": "Phola",
"yph": "Phupha",
"ypk": "Yupik languages",
"ypm": "Phuma",
"ypn": "Ani Phowa",
"ypo": "Alo Phola",
"ypp": "Phupa",
"ypz": "Phuza",
"yra": "Yerakai",
"yrb": "Yareba",
"yre": "Yaouré",
"yrk": "Nenets",
"yrl": "Nhengatu",
"yrm": "Yirrk-Mel",
"yrn": "Yerong",
"yro": "Yaroamë",
"yrs": "Yarsun",
"yrw": "Yarawata",
"yry": "Yarluyandi",
"ysc": "Yassic",
"ysd": "Samatao",
"ysg": "Sonaga",
"ysl": "Yugoslavian Sign Language",
"ysm": "Myanmar Sign Language",
"ysn": "Sani",
"yso": "Nisi (China)",
"ysp": "Southern Lolopo",
"ysr": "Sirenik Yupik",
"yss": "Yessan-Mayo",
"ysy": "Sanie",
"yta": "Talu",
"ytl": "Tanglang",
"ytp": "Thopho",
"ytw": "Yout Wam",
"yty": "Yatay",
"yua": "Yucateco; Yucatec Maya",
"yub": "Yugambal",
"yuc": "Yuchi",
"yud": "Judeo-Tripolitanian Arabic",
"yue": "Yue Chinese; Cantonese",
"yuf": "Havasupai-Walapai-Yavapai",
"yug": "Yug",
"yui": "Yurutí",
"yuj": "Karkar-Yuri",
"yuk": "Yuki",
"yul": "Yulu",
"yum": "Quechan",
"yun": "Bena (Nigeria)",
"yup": "Yukpa",
"yuq": "Yuqui",
"yur": "Yurok",
"yut": "Yopno",
"yuw": "Yau (Morobe Province)",
"yux": "Southern Yukaghir",
"yuy": "East Yugur",
"yuz": "Yuracare",
"yva": "Yawa",
"yvt": "Yavitero",
"ywa": "Kalou",
"ywg": "Yinhawangka",
"ywl": "Western Lalu",
"ywn": "Yawanawa",
"ywq": "Wuding-Luquan Yi",
"ywr": "Yawuru",
"ywt": "Xishanba Lalo; Central Lalo",
"ywu": "Wumeng Nasu",
"yww": "Yawarawarga",
"yxa": "Mayawali",
"yxg": "Yagara",
"yxl": "Yardliyawarra",
"yxm": "Yinwum",
"yxu": "Yuyu",
"yxy": "Yabula Yabula",
"yyr": "Yir Yoront",
"yyu": "Yau (Sandaun Province)",
"yyz": "Ayizi",
"yzg": "E'ma Buyang",
"yzk": "Zokhuo",
"za": "Zhuang; Chuang",
"zaa": "Sierra de Juárez Zapotec",
"zab": "Western Tlacolula Valley Zapotec; San Juan Guelavía Zapotec",
"zac": "Ocotlán Zapotec",
"zad": "Cajonos Zapotec",
"zae": "Yareni Zapotec",
"zaf": "Ayoquesco Zapotec",
"zag": "Zaghawa",
"zah": "Zangwal",
"zai": "Isthmus Zapotec",
"zaj": "Zaramo",
"zak": "Zanaki",
"zal": "Zauzou",
"zam": "Miahuatlán Zapotec",
"zao": "Ozolotepec Zapotec",
"zap": "Zapotec",
"zaq": "Aloápam Zapotec",
"zar": "Rincón Zapotec",
"zas": "Santo Domingo Albarradas Zapotec",
"zat": "Tabaa Zapotec",
"zau": "Zangskari",
"zav": "Yatzachi Zapotec",
"zaw": "Mitla Zapotec",
"zax": "Xadani Zapotec",
"zay": "Zayse-Zergulla; Zaysete",
"zaz": "Zari",
"zba": "Balaibalan",
"zbc": "Central Berawan",
"zbe": "East Berawan",
"zbl": "Blissymbols; Bliss; Blissymbolics",
"zbt": "Batui",
"zbu": "Bu (Bauchi State)",
"zbw": "West Berawan",
"zca": "Coatecas Altas Zapotec",
"zcd": "Las Delicias Zapotec",
"zch": "Central Hongshuihe Zhuang",
"zdj": "Ngazidja Comorian",
"zea": "Zeeuws",
"zeg": "Zenag",
"zeh": "Eastern Hongshuihe Zhuang",
"zen": "Zenaga",
"zga": "Kinga",
"zgb": "Guibei Zhuang",
"zgh": "Standard Moroccan Tamazight",
"zgm": "Minz Zhuang",
"zgn": "Guibian Zhuang",
"zgr": "Magori",
"zh": "Chinese",
"zhb": "Zhaba",
"zhd": "Dai Zhuang",
"zhi": "Zhire",
"zhn": "Nong Zhuang",
"zhw": "Zhoa",
"zhx": "Chinese (family)",
"zia": "Zia",
"zib": "Zimbabwe Sign Language",
"zik": "Zimakani",
"zil": "Zialo",
"zim": "Mesme",
"zin": "Zinza",
"ziw": "Zigula",
"ziz": "Zizilivakan",
"zka": "Kaimbulawa",
"zkb": "Koibal",
"zkd": "Kadu",
"zkg": "Koguryo",
"zkh": "Khorezmian",
"zkk": "Karankawa",
"zkn": "Kanan",
"zko": "Kott",
"zkp": "São Paulo Kaingáng",
"zkr": "Zakhring",
"zkt": "Kitan",
"zku": "Kaurna",
"zkv": "Krevinian",
"zkz": "Khazar",
"zla": "Zula",
"zle": "East Slavic languages",
"zlj": "Liujiang Zhuang",
"zlm": "Malay (individual language)",
"zln": "Lianshan Zhuang",
"zlq": "Liuqian Zhuang",
"zls": "South Slavic languages",
"zlw": "West Slavic languages",
"zma": "Manda (Australia)",
"zmb": "Zimba",
"zmc": "Margany",
"zmd": "Maridan",
"zme": "Mangerr",
"zmf": "Mfinu",
"zmg": "Marti Ke",
"zmh": "Makolkol",
"zmi": "Negeri Sembilan Malay",
"zmj": "Maridjabin",
"zmk": "Mandandanyi",
"zml": "Matngala",
"zmm": "Marimanindji; Marramaninyshi",
"zmn": "Mbangwe",
"zmo": "Molo",
"zmp": "Mpuono",
"zmq": "Mituku",
"zmr": "Maranunggu",
"zms": "Mbesa",
"zmt": "Maringarr",
"zmu": "Muruwari",
"zmv": "Mbariman-Gudhinma",
"zmw": "Mbo (Democratic Republic of Congo)",
"zmx": "Bomitaba",
"zmy": "Mariyedi",
"zmz": "Mbandja",
"zna": "Zan Gula",
"znd": "Zande languages",
"zne": "Zande (individual language)",
"zng": "Mang",
"znk": "Manangkari",
"zns": "Mangas",
"zoc": "Copainalá Zoque",
"zoh": "Chimalapa Zoque",
"zom": "Zou",
"zoo": "Asunción Mixtepec Zapotec",
"zoq": "Tabasco Zoque",
"zor": "Rayón Zoque",
"zos": "Francisco León Zoque",
"zpa": "Lachiguiri Zapotec",
"zpb": "Yautepec Zapotec",
"zpc": "Choapan Zapotec",
"zpd": "Southeastern Ixtlán Zapotec",
"zpe": "Petapa Zapotec",
"zpf": "San Pedro Quiatoni Zapotec",
"zpg": "Guevea De Humboldt Zapotec",
"zph": "Totomachapan Zapotec",
"zpi": "Santa María Quiegolani Zapotec",
"zpj": "Quiavicuzas Zapotec",
"zpk": "Tlacolulita Zapotec",
"zpl": "Lachixío Zapotec",
"zpm": "Mixtepec Zapotec",
"zpn": "Santa Inés Yatzechi Zapotec",
"zpo": "Amatlán Zapotec",
"zpp": "El Alto Zapotec",
"zpq": "Zoogocho Zapotec",
"zpr": "Santiago Xanica Zapotec",
"zps": "Coatlán Zapotec",
"zpt": "San Vicente Coatlán Zapotec",
"zpu": "Yalálag Zapotec",
"zpv": "Chichicapan Zapotec",
"zpw": "Zaniza Zapotec",
"zpx": "San Baltazar Loxicha Zapotec",
"zpy": "Mazaltepec Zapotec",
"zpz": "Texmelucan Zapotec",
"zqe": "Qiubei Zhuang",
"zra": "Kara (Korea)",
"zrg": "Mirgan",
"zrn": "Zerenkel",
"zro": "Záparo",
"zrp": "Zarphatic",
"zrs": "Mairasi",
"zsa": "Sarasira",
"zsk": "Kaskean",
"zsl": "Zambian Sign Language",
"zsm": "Standard Malay",
"zsr": "Southern Rincon Zapotec",
"zsu": "Sukurum",
"zte": "Elotepec Zapotec",
"ztg": "Xanaguía Zapotec",
"ztl": "Lapaguía-Guivini Zapotec",
"ztm": "San Agustín Mixtepec Zapotec",
"ztn": "Santa Catarina Albarradas Zapotec",
"ztp": "Loxicha Zapotec",
"ztq": "Quioquitani-Quierí Zapotec",
"zts": "Tilquiapan Zapotec",
"ztt": "Tejalapan Zapotec",
"ztu": "Güilá Zapotec",
"ztx": "Zaachila Zapotec",
"zty": "Yatee Zapotec",
"zu": "Zulu",
"zua": "Zeem",
"zuh": "Tokano",
"zum": "Kumzari",
"zun": "Zuni",
"zuy": "Zumaya",
"zwa": "Zay",
"zyb": "Yongbei Zhuang",
"zyg": "Yang Zhuang",
"zyj": "Youjiang Zhuang",
"zyn": "Yongnan Zhuang",
"zyp": "Zyphe Chin",
"zza": "Zaza; Dimili; Dimli (macrolanguage); Kirdki; Kirmanjki (macrolanguage); Zazaki",
"zzj": "Zuojiang Zhuang"
}
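
# Tests for the `datasets` Audio feature: encoding, decoding, resampling,
# and integration with Dataset objects.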
import os
import tarfile
import pyarrow as pa
import pytest
from datasets import Dataset, concatenate_datasets, load_dataset
from datasets.features import Audio, Features, Sequence, Value
from ..utils import (
require_sndfile,
)
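
# The fixtures below pack a sample WAV/MP3 file into a tar archive so tests
# can exercise decoding audio from in-memory bytes rather than a file path.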
@pytest.fixture()
def tar_wav_path(shared_datadir, tmp_path_factory):
audio_path = str(shared_datadir / "test_audio_44100.wav")
path = tmp_path_factory.mktemp("data") / "audio_data.wav.tar"
with tarfile.TarFile(path, "w") as f:
f.add(audio_path, arcname=os.path.basename(audio_path))
return path
@pytest.fixture()
def tar_mp3_path(shared_datadir, tmp_path_factory):
audio_path = str(shared_datadir / "test_audio_44100.mp3")
path = tmp_path_factory.mktemp("data") / "audio_data.mp3.tar"
with tarfile.TarFile(path, "w") as f:
f.add(audio_path, arcname=os.path.basename(audio_path))
return path
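
# Yield (member name, file object) pairs for every entry in a tar archive.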
def iter_archive(archive_path):
with tarfile.open(archive_path) as tar:
for tarinfo in tar:
file_path = tarinfo.name
file_obj = tar.extractfile(tarinfo)
yield file_path, file_obj
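
# A freshly constructed Audio feature should expose its default attributes
# and map to an Arrow struct of {bytes, path}.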
def test_audio_instantiation():
audio = Audio()
assert audio.sampling_rate is None
assert audio.mono is True
assert audio.id is None
assert audio.dtype == "dict"
assert audio.pa_type == pa.struct({"bytes": pa.binary(), "path": pa.string()})
assert audio._type == "Audio"
def test_audio_feature_type_to_arrow():
features = Features({"audio": Audio()})
assert features.arrow_schema == pa.schema({"audio": Audio().pa_type})
features = Features({"struct_containing_an_audio": {"audio": Audio()}})
assert features.arrow_schema == pa.schema({"struct_containing_an_audio": pa.struct({"audio": Audio().pa_type})})
features = Features({"sequence_of_audios": Sequence(Audio())})
assert features.arrow_schema == pa.schema({"sequence_of_audios": pa.list_(Audio().pa_type)})
@pytest.mark.parametrize(
"build_example",
[
lambda audio_path: audio_path,
lambda audio_path: open(audio_path, "rb").read(),
lambda audio_path: {"path": audio_path},
lambda audio_path: {"path": audio_path, "bytes": None},
lambda audio_path: {"path": audio_path, "bytes": open(audio_path, "rb").read()},
lambda audio_path: {"path": None, "bytes": open(audio_path, "rb").read()},
lambda audio_path: {"bytes": open(audio_path, "rb").read()},
lambda audio_path: {"array": [0.1, 0.2, 0.3], "sampling_rate": 16_000},
],
)
def test_audio_feature_encode_example(shared_datadir, build_example):
audio_path = str(shared_datadir / "test_audio_44100.wav")
audio = Audio()
encoded_example = audio.encode_example(build_example(audio_path))
assert isinstance(encoded_example, dict)
assert encoded_example.keys() == {"bytes", "path"}
assert encoded_example["bytes"] is not None or encoded_example["path"] is not None
decoded_example = audio.decode_example(encoded_example)
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
@pytest.mark.parametrize(
"build_example",
[
lambda audio_path: {"path": audio_path, "sampling_rate": 16_000},
lambda audio_path: {"path": audio_path, "bytes": None, "sampling_rate": 16_000},
lambda audio_path: {"path": audio_path, "bytes": open(audio_path, "rb").read(), "sampling_rate": 16_000},
lambda audio_path: {"array": [0.1, 0.2, 0.3], "sampling_rate": 16_000},
],
)
def test_audio_feature_encode_example_pcm(shared_datadir, build_example):
audio_path = str(shared_datadir / "test_audio_16000.pcm")
audio = Audio(sampling_rate=16_000)
encoded_example = audio.encode_example(build_example(audio_path))
assert isinstance(encoded_example, dict)
assert encoded_example.keys() == {"bytes", "path"}
assert encoded_example["bytes"] is not None or encoded_example["path"] is not None
decoded_example = audio.decode_example(encoded_example)
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
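
# Decoding should expose the source path, the samples as an array, and the
# sampling rate, and must fail when the feature was built with decode=False.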
@require_sndfile
def test_audio_decode_example(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
audio = Audio()
decoded_example = audio.decode_example(audio.encode_example(audio_path))
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
assert decoded_example["path"] == audio_path
assert decoded_example["array"].shape == (202311,)
assert decoded_example["sampling_rate"] == 44100
with pytest.raises(RuntimeError):
Audio(decode=False).decode_example(audio_path)
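
# Requesting a different sampling rate at construction time should resample
# the decoded array accordingly.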
@require_sndfile
def test_audio_resampling(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
audio = Audio(sampling_rate=16000)
decoded_example = audio.decode_example(audio.encode_example(audio_path))
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
assert decoded_example["path"] == audio_path
assert decoded_example["array"].shape == (73401,)
assert decoded_example["sampling_rate"] == 16000
@require_sndfile
def test_audio_decode_example_mp3(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.mp3")
audio = Audio()
decoded_example = audio.decode_example(audio.encode_example(audio_path))
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
assert decoded_example["path"] == audio_path
assert decoded_example["array"].shape == (110592,)
assert decoded_example["sampling_rate"] == 44100
@require_sndfile
def test_audio_decode_example_opus(shared_datadir):
audio_path = str(shared_datadir / "test_audio_48000.opus")
audio = Audio()
decoded_example = audio.decode_example(audio.encode_example(audio_path))
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
assert decoded_example["path"] == audio_path
assert decoded_example["array"].shape == (48000,)
assert decoded_example["sampling_rate"] == 48000
@pytest.mark.parametrize("sampling_rate", [16_000, 48_000])
def test_audio_decode_example_pcm(shared_datadir, sampling_rate):
audio_path = str(shared_datadir / "test_audio_16000.pcm")
audio_input = {"path": audio_path, "sampling_rate": 16_000}
audio = Audio(sampling_rate=sampling_rate)
decoded_example = audio.decode_example(audio.encode_example(audio_input))
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
assert decoded_example["path"] is None
assert decoded_example["array"].shape == (16208 * sampling_rate // 16_000,)
assert decoded_example["sampling_rate"] == sampling_rate
@require_sndfile
def test_audio_resampling_mp3_different_sampling_rates(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.mp3")
audio_path2 = str(shared_datadir / "test_audio_16000.mp3")
audio = Audio(sampling_rate=48000)
decoded_example = audio.decode_example(audio.encode_example(audio_path))
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
assert decoded_example["path"] == audio_path
assert decoded_example["array"].shape == (120373,)
assert decoded_example["sampling_rate"] == 48000
decoded_example = audio.decode_example(audio.encode_example(audio_path2))
assert decoded_example.keys() == {"path", "array", "sampling_rate"}
assert decoded_example["path"] == audio_path2
assert decoded_example["array"].shape == (122688,)
assert decoded_example["sampling_rate"] == 48000
@require_sndfile
def test_dataset_with_audio_feature(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path]}
features = Features({"audio": Audio()})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_path
assert item["audio"]["array"].shape == (202311,)
assert item["audio"]["sampling_rate"] == 44100
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_path
assert batch["audio"][0]["array"].shape == (202311,)
assert batch["audio"][0]["sampling_rate"] == 44100
column = dset["audio"]
assert len(column) == 1
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_path
assert column[0]["array"].shape == (202311,)
assert column[0]["sampling_rate"] == 44100
@require_sndfile
def test_dataset_with_audio_feature_tar_wav(tar_wav_path):
audio_filename = "test_audio_44100.wav"
data = {"audio": []}
for file_path, file_obj in iter_archive(tar_wav_path):
data["audio"].append({"path": file_path, "bytes": file_obj.read()})
break
features = Features({"audio": Audio()})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_filename
assert item["audio"]["array"].shape == (202311,)
assert item["audio"]["sampling_rate"] == 44100
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_filename
assert batch["audio"][0]["array"].shape == (202311,)
assert batch["audio"][0]["sampling_rate"] == 44100
column = dset["audio"]
assert len(column) == 1
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_filename
assert column[0]["array"].shape == (202311,)
assert column[0]["sampling_rate"] == 44100
@require_sndfile
def test_dataset_with_audio_feature_tar_mp3(tar_mp3_path):
audio_filename = "test_audio_44100.mp3"
data = {"audio": []}
for file_path, file_obj in iter_archive(tar_mp3_path):
data["audio"].append({"path": file_path, "bytes": file_obj.read()})
break
features = Features({"audio": Audio()})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_filename
assert item["audio"]["array"].shape == (110592,)
assert item["audio"]["sampling_rate"] == 44100
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_filename
assert batch["audio"][0]["array"].shape == (110592,)
assert batch["audio"][0]["sampling_rate"] == 44100
column = dset["audio"]
assert len(column) == 1
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_filename
assert column[0]["array"].shape == (110592,)
assert column[0]["sampling_rate"] == 44100
@require_sndfile
def test_dataset_with_audio_feature_with_none():
data = {"audio": [None]}
features = Features({"audio": Audio()})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"] is None
batch = dset[:1]
assert len(batch) == 1
assert batch.keys() == {"audio"}
assert isinstance(batch["audio"], list) and all(item is None for item in batch["audio"])
column = dset["audio"]
assert len(column) == 1
assert isinstance(column, list) and all(item is None for item in column)
    # the same None handling should hold for nested (Sequence and struct) audio columns
data = {"audio": [[None]]}
features = Features({"audio": Sequence(Audio())})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"audio"}
assert all(i is None for i in item["audio"])
data = {"nested": [{"audio": None}]}
features = Features({"nested": {"audio": Audio()}})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"nested"}
assert item["nested"].keys() == {"audio"}
assert item["nested"]["audio"] is None
@require_sndfile
def test_resampling_at_loading_dataset_with_audio_feature(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path]}
features = Features({"audio": Audio(sampling_rate=16000)})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_path
assert item["audio"]["array"].shape == (73401,)
assert item["audio"]["sampling_rate"] == 16000
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_path
assert batch["audio"][0]["array"].shape == (73401,)
assert batch["audio"][0]["sampling_rate"] == 16000
column = dset["audio"]
assert len(column) == 1
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_path
assert column[0]["array"].shape == (73401,)
assert column[0]["sampling_rate"] == 16000
@require_sndfile
def test_resampling_at_loading_dataset_with_audio_feature_mp3(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.mp3")
data = {"audio": [audio_path]}
features = Features({"audio": Audio(sampling_rate=16000)})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_path
assert item["audio"]["array"].shape == (40125,)
assert item["audio"]["sampling_rate"] == 16000
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_path
assert batch["audio"][0]["array"].shape == (40125,)
assert batch["audio"][0]["sampling_rate"] == 16000
column = dset["audio"]
assert len(column) == 1
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_path
assert column[0]["array"].shape == (40125,)
assert column[0]["sampling_rate"] == 16000
@require_sndfile
def test_resampling_after_loading_dataset_with_audio_feature(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path]}
features = Features({"audio": Audio()})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item["audio"]["sampling_rate"] == 44100
dset = dset.cast_column("audio", Audio(sampling_rate=16000))
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_path
assert item["audio"]["array"].shape == (73401,)
assert item["audio"]["sampling_rate"] == 16000
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_path
assert batch["audio"][0]["array"].shape == (73401,)
assert batch["audio"][0]["sampling_rate"] == 16000
column = dset["audio"]
assert len(column) == 1
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_path
assert column[0]["array"].shape == (73401,)
assert column[0]["sampling_rate"] == 16000
@require_sndfile
def test_resampling_after_loading_dataset_with_audio_feature_mp3(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.mp3")
data = {"audio": [audio_path]}
features = Features({"audio": Audio()})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item["audio"]["sampling_rate"] == 44100
dset = dset.cast_column("audio", Audio(sampling_rate=16000))
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_path
assert item["audio"]["array"].shape == (40125,)
assert item["audio"]["sampling_rate"] == 16000
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_path
assert batch["audio"][0]["array"].shape == (40125,)
assert batch["audio"][0]["sampling_rate"] == 16000
column = dset["audio"]
assert len(column) == 1
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_path
assert column[0]["array"].shape == (40125,)
assert column[0]["sampling_rate"] == 16000
@pytest.mark.parametrize(
"build_data",
[
lambda audio_path: {"audio": [audio_path]},
lambda audio_path: {"audio": [open(audio_path, "rb").read()]},
lambda audio_path: {"audio": [{"path": audio_path}]},
lambda audio_path: {"audio": [{"path": audio_path, "bytes": None}]},
lambda audio_path: {"audio": [{"path": audio_path, "bytes": open(audio_path, "rb").read()}]},
lambda audio_path: {"audio": [{"path": None, "bytes": open(audio_path, "rb").read()}]},
lambda audio_path: {"audio": [{"bytes": open(audio_path, "rb").read()}]},
],
)
def test_dataset_cast_to_audio_features(shared_datadir, build_data):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = build_data(audio_path)
dset = Dataset.from_dict(data)
item = dset.cast(Features({"audio": Audio()}))[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
item = dset.cast_column("audio", Audio())[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
def test_dataset_concatenate_audio_features(shared_datadir):
    # dset1 stores audio as a path and dset2 as raw bytes, to make sure the two storage forms are compatible with each other
audio_path = str(shared_datadir / "test_audio_44100.wav")
data1 = {"audio": [audio_path]}
dset1 = Dataset.from_dict(data1, features=Features({"audio": Audio()}))
data2 = {"audio": [{"bytes": open(audio_path, "rb").read()}]}
dset2 = Dataset.from_dict(data2, features=Features({"audio": Audio()}))
concatenated_dataset = concatenate_datasets([dset1, dset2])
assert len(concatenated_dataset) == len(dset1) + len(dset2)
assert concatenated_dataset[0]["audio"]["array"].shape == dset1[0]["audio"]["array"].shape
assert concatenated_dataset[1]["audio"]["array"].shape == dset2[0]["audio"]["array"].shape
def test_dataset_concatenate_nested_audio_features(shared_datadir):
    # dset1 stores audio as a path and dset2 as raw bytes, to make sure the two storage forms are compatible with each other
audio_path = str(shared_datadir / "test_audio_44100.wav")
features = Features({"list_of_structs_of_audios": [{"audio": Audio()}]})
data1 = {"list_of_structs_of_audios": [[{"audio": audio_path}]]}
dset1 = Dataset.from_dict(data1, features=features)
data2 = {"list_of_structs_of_audios": [[{"audio": {"bytes": open(audio_path, "rb").read()}}]]}
dset2 = Dataset.from_dict(data2, features=features)
concatenated_dataset = concatenate_datasets([dset1, dset2])
assert len(concatenated_dataset) == len(dset1) + len(dset2)
assert (
concatenated_dataset[0]["list_of_structs_of_audios"][0]["audio"]["array"].shape
== dset1[0]["list_of_structs_of_audios"][0]["audio"]["array"].shape
)
assert (
concatenated_dataset[1]["list_of_structs_of_audios"][0]["audio"]["array"].shape
== dset2[0]["list_of_structs_of_audios"][0]["audio"]["array"].shape
)
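
# map() should leave audio untouched when the mapped function ignores it, and
# decode it when the function reads the "audio" field.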
@require_sndfile
def test_dataset_with_audio_feature_map_is_not_decoded(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path], "text": ["Hello"]}
features = Features({"audio": Audio(), "text": Value("string")})
dset = Dataset.from_dict(data, features=features)
expected_audio = features.encode_batch(data)["audio"][0]
for item in dset.cast_column("audio", Audio(decode=False)):
assert item.keys() == {"audio", "text"}
assert item == {"audio": expected_audio, "text": "Hello"}
def process_text(example):
example["text"] = example["text"] + " World!"
return example
processed_dset = dset.map(process_text)
for item in processed_dset.cast_column("audio", Audio(decode=False)):
assert item.keys() == {"audio", "text"}
assert item == {"audio": expected_audio, "text": "Hello World!"}
@require_sndfile
def test_dataset_with_audio_feature_map_is_decoded(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path], "text": ["Hello"]}
features = Features({"audio": Audio(), "text": Value("string")})
dset = Dataset.from_dict(data, features=features)
def process_audio_sampling_rate_by_example(example):
example["double_sampling_rate"] = 2 * example["audio"]["sampling_rate"]
return example
decoded_dset = dset.map(process_audio_sampling_rate_by_example)
for item in decoded_dset.cast_column("audio", Audio(decode=False)):
assert item.keys() == {"audio", "text", "double_sampling_rate"}
assert item["double_sampling_rate"] == 88200
def process_audio_sampling_rate_by_batch(batch):
double_sampling_rates = []
for audio in batch["audio"]:
double_sampling_rates.append(2 * audio["sampling_rate"])
batch["double_sampling_rate"] = double_sampling_rates
return batch
decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)
for item in decoded_dset.cast_column("audio", Audio(decode=False)):
assert item.keys() == {"audio", "text", "double_sampling_rate"}
assert item["double_sampling_rate"] == 88200
@require_sndfile
def test_formatted_dataset_with_audio_feature(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path, audio_path]}
features = Features({"audio": Audio()})
dset = Dataset.from_dict(data, features=features)
with dset.formatted_as("numpy"):
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_path
assert item["audio"]["array"].shape == (202311,)
assert item["audio"]["sampling_rate"] == 44100
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_path
assert batch["audio"][0]["array"].shape == (202311,)
assert batch["audio"][0]["sampling_rate"] == 44100
column = dset["audio"]
assert len(column) == 2
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_path
assert column[0]["array"].shape == (202311,)
assert column[0]["sampling_rate"] == 44100
with dset.formatted_as("pandas"):
item = dset[0]
assert item.shape == (1, 1)
assert item.columns == ["audio"]
assert item["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert item["audio"][0]["path"] == audio_path
assert item["audio"][0]["array"].shape == (202311,)
assert item["audio"][0]["sampling_rate"] == 44100
batch = dset[:1]
assert batch.shape == (1, 1)
assert batch.columns == ["audio"]
assert batch["audio"][0].keys() == {"path", "array", "sampling_rate"}
assert batch["audio"][0]["path"] == audio_path
assert batch["audio"][0]["array"].shape == (202311,)
assert batch["audio"][0]["sampling_rate"] == 44100
column = dset["audio"]
assert len(column) == 2
assert column[0].keys() == {"path", "array", "sampling_rate"}
assert column[0]["path"] == audio_path
assert column[0]["array"].shape == (202311,)
assert column[0]["sampling_rate"] == 44100
@pytest.fixture
def jsonl_audio_dataset_path(shared_datadir, tmp_path_factory):
import json
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = [{"audio": audio_path, "text": "Hello world!"}]
path = str(tmp_path_factory.mktemp("data") / "audio_dataset.jsonl")
with open(path, "w") as f:
for item in data:
f.write(json.dumps(item) + "\n")
return path
@require_sndfile
@pytest.mark.parametrize("streaming", [False, True])
def test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data_files = jsonl_audio_dataset_path
features = Features({"audio": Audio(), "text": Value("string")})
dset = load_dataset("json", split="train", data_files=data_files, features=features, streaming=streaming)
item = dset[0] if not streaming else next(iter(dset))
assert item.keys() == {"audio", "text"}
assert item["audio"].keys() == {"path", "array", "sampling_rate"}
assert item["audio"]["path"] == audio_path
assert item["audio"]["array"].shape == (202311,)
assert item["audio"]["sampling_rate"] == 44100
@require_sndfile
@pytest.mark.integration
def test_dataset_with_audio_feature_loaded_from_cache():
# load first time
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
assert isinstance(ds, Dataset)
def test_dataset_with_audio_feature_undecoded(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path]}
features = Features({"audio": Audio(decode=False)})
dset = Dataset.from_dict(data, features=features)
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"] == {"path": audio_path, "bytes": None}
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0] == {"path": audio_path, "bytes": None}
column = dset["audio"]
assert len(column) == 1
assert column[0] == {"path": audio_path, "bytes": None}
def test_formatted_dataset_with_audio_feature_undecoded(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path]}
features = Features({"audio": Audio(decode=False)})
dset = Dataset.from_dict(data, features=features)
with dset.formatted_as("numpy"):
item = dset[0]
assert item.keys() == {"audio"}
assert item["audio"] == {"path": audio_path, "bytes": None}
batch = dset[:1]
assert batch.keys() == {"audio"}
assert len(batch["audio"]) == 1
assert batch["audio"][0] == {"path": audio_path, "bytes": None}
column = dset["audio"]
assert len(column) == 1
assert column[0] == {"path": audio_path, "bytes": None}
with dset.formatted_as("pandas"):
item = dset[0]
assert item.shape == (1, 1)
assert item.columns == ["audio"]
assert item["audio"][0] == {"path": audio_path, "bytes": None}
batch = dset[:1]
assert batch.shape == (1, 1)
assert batch.columns == ["audio"]
assert batch["audio"][0] == {"path": audio_path, "bytes": None}
column = dset["audio"]
assert len(column) == 1
assert column[0] == {"path": audio_path, "bytes": None}
def test_dataset_with_audio_feature_map_undecoded(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path]}
features = Features({"audio": Audio(decode=False)})
dset = Dataset.from_dict(data, features=features)
def assert_audio_example_undecoded(example):
assert example["audio"] == {"path": audio_path, "bytes": None}
dset.map(assert_audio_example_undecoded)
def assert_audio_batch_undecoded(batch):
for audio in batch["audio"]:
assert audio == {"path": audio_path, "bytes": None}
dset.map(assert_audio_batch_undecoded, batched=True)
def test_audio_embed_storage(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
example = {"bytes": None, "path": audio_path}
storage = pa.array([example], type=pa.struct({"bytes": pa.binary(), "path": pa.string()}))
embedded_storage = Audio().embed_storage(storage)
embedded_example = embedded_storage.to_pylist()[0]
with open(audio_path, "rb") as f:
    expected_bytes = f.read()
assert embedded_example == {"bytes": expected_bytes, "path": "test_audio_44100.wav"}
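# A hedged sketch of what embed_storage does conceptually for one example (illustrative
# only, not the library implementation): read the local file's bytes and keep only the
# bare filename as the path, which is what the assertion above checks.
def _embed_example_sketch(example):
    import os

    with open(example["path"], "rb") as f:
        data = f.read()
    return {"bytes": data, "path": os.path.basename(example["path"])}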
|
datasets/tests/features/test_audio.py/0
|
{
"file_path": "datasets/tests/features/test_audio.py",
"repo_id": "datasets",
"token_count": 11528
}
| 84
|
import os
import tempfile
from unittest import TestCase
import numpy as np
import pandas as pd
import pytest
from datasets import load_from_disk
from datasets.arrow_dataset import Dataset
from datasets.dataset_dict import DatasetDict, IterableDatasetDict
from datasets.features import ClassLabel, Features, Sequence, Value
from datasets.iterable_dataset import IterableDataset
from datasets.splits import NamedSplit
from .utils import (
assert_arrow_memory_doesnt_increase,
assert_arrow_memory_increases,
require_polars,
require_tf,
require_torch,
)
class DatasetDictTest(TestCase):
def _create_dummy_dataset(self, multiple_columns=False):
if multiple_columns:
data = {"col_1": [3, 2, 1, 0], "col_2": ["a", "b", "c", "d"]}
dset = Dataset.from_dict(data)
else:
dset = Dataset.from_dict(
{"filename": ["my_name-train" + "_" + f"{x:03d}" for x in np.arange(30).tolist()]}
)
return dset
def _create_dummy_dataset_dict(self, multiple_columns=False) -> DatasetDict:
return DatasetDict(
{
"train": self._create_dummy_dataset(multiple_columns=multiple_columns),
"test": self._create_dummy_dataset(multiple_columns=multiple_columns),
}
)
def _create_dummy_iterable_dataset(self, multiple_columns=False) -> IterableDataset:
def gen():
if multiple_columns:
data = {"col_1": [3, 2, 1, 0], "col_2": ["a", "b", "c", "d"]}
for v1, v2 in zip(data["col_1"], data["col_2"]):
yield {"col_1": v1, "col_2": v2}
else:
for x in range(30):
yield {"filename": "my_name-train" + "_" + f"{x:03d}"}
return IterableDataset.from_generator(gen)
def _create_dummy_iterable_dataset_dict(self, multiple_columns=False) -> IterableDatasetDict:
return IterableDatasetDict(
{
"train": self._create_dummy_iterable_dataset(multiple_columns=multiple_columns),
"test": self._create_dummy_iterable_dataset(multiple_columns=multiple_columns),
}
)
def test_flatten(self):
dset_split = Dataset.from_dict(
{"a": [{"b": {"c": ["text"]}}] * 10, "foo": [1] * 10},
features=Features({"a": {"b": Sequence({"c": Value("string")})}, "foo": Value("int64")}),
)
dset = DatasetDict({"train": dset_split, "test": dset_split})
dset = dset.flatten()
self.assertDictEqual(dset.column_names, {"train": ["a.b.c", "foo"], "test": ["a.b.c", "foo"]})
self.assertListEqual(sorted(dset["train"].features.keys()), ["a.b.c", "foo"])
self.assertDictEqual(
dset["train"].features, Features({"a.b.c": Sequence(Value("string")), "foo": Value("int64")})
)
del dset
def test_set_format_numpy(self):
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset.set_format(type="numpy", columns=["col_1"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 1)
self.assertIsInstance(dset_split[0]["col_1"], np.int64)
self.assertEqual(dset_split[0]["col_1"].item(), 3)
dset.reset_format()
with dset.formatted_as(type="numpy", columns=["col_1"]):
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 1)
self.assertIsInstance(dset_split[0]["col_1"], np.int64)
self.assertEqual(dset_split[0]["col_1"].item(), 3)
for dset_split in dset.values():
self.assertEqual(dset_split.format["type"], None)
self.assertEqual(dset_split.format["format_kwargs"], {})
self.assertEqual(dset_split.format["columns"], dset_split.column_names)
self.assertEqual(dset_split.format["output_all_columns"], False)
dset.set_format(type="numpy", columns=["col_1"], output_all_columns=True)
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 2)
self.assertIsInstance(dset_split[0]["col_2"], str)
self.assertEqual(dset_split[0]["col_2"], "a")
dset.set_format(type="numpy", columns=["col_1", "col_2"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 2)
self.assertIsInstance(dset_split[0]["col_2"], np.str_)
self.assertEqual(dset_split[0]["col_2"].item(), "a")
del dset
@require_torch
def test_set_format_torch(self):
import torch
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset.set_format(type="torch", columns=["col_1"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 1)
self.assertIsInstance(dset_split[0]["col_1"], torch.Tensor)
self.assertListEqual(list(dset_split[0]["col_1"].shape), [])
self.assertEqual(dset_split[0]["col_1"].item(), 3)
dset.set_format(type="torch", columns=["col_1"], output_all_columns=True)
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 2)
self.assertIsInstance(dset_split[0]["col_2"], str)
self.assertEqual(dset_split[0]["col_2"], "a")
dset.set_format(type="torch")
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 2)
self.assertIsInstance(dset_split[0]["col_1"], torch.Tensor)
self.assertListEqual(list(dset_split[0]["col_1"].shape), [])
self.assertEqual(dset_split[0]["col_1"].item(), 3)
self.assertIsInstance(dset_split[0]["col_2"], str)
self.assertEqual(dset_split[0]["col_2"], "a")
del dset
@require_tf
def test_set_format_tf(self):
import tensorflow as tf
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset.set_format(type="tensorflow", columns=["col_1"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 1)
self.assertIsInstance(dset_split[0]["col_1"], tf.Tensor)
self.assertListEqual(list(dset_split[0]["col_1"].shape), [])
self.assertEqual(dset_split[0]["col_1"].numpy().item(), 3)
dset.set_format(type="tensorflow", columns=["col_1"], output_all_columns=True)
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 2)
self.assertIsInstance(dset_split[0]["col_2"], str)
self.assertEqual(dset_split[0]["col_2"], "a")
dset.set_format(type="tensorflow", columns=["col_1", "col_2"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0]), 2)
self.assertEqual(dset_split[0]["col_2"].numpy().decode("utf-8"), "a")
del dset
def test_set_format_pandas(self):
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset.set_format(type="pandas", columns=["col_1"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0].columns), 1)
self.assertIsInstance(dset_split[0], pd.DataFrame)
self.assertListEqual(list(dset_split[0].shape), [1, 1])
self.assertEqual(dset_split[0]["col_1"].item(), 3)
dset.set_format(type="pandas", columns=["col_1", "col_2"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0].columns), 2)
self.assertEqual(dset_split[0]["col_2"].item(), "a")
del dset
@require_polars
def test_set_format_polars(self):
import polars as pl
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset.set_format(type="polars", columns=["col_1"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0].columns), 1)
self.assertIsInstance(dset_split[0], pl.DataFrame)
self.assertEqual(dset_split[0].shape, (1, 1))
self.assertEqual(dset_split[0]["col_1"].item(), 3)
dset.set_format(type="polars", columns=["col_1", "col_2"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0].columns), 2)
self.assertEqual(dset_split[0]["col_2"].item(), "a")
del dset
def test_set_transform(self):
def transform(batch):
return {k: [str(i).upper() for i in v] for k, v in batch.items()}
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset.set_transform(transform=transform, columns=["col_1"])
for dset_split in dset.values():
self.assertEqual(dset_split.format["type"], "custom")
self.assertEqual(len(dset_split[0].keys()), 1)
self.assertEqual(dset_split[0]["col_1"], "3")
self.assertEqual(dset_split[:2]["col_1"], ["3", "2"])
self.assertEqual(dset_split["col_1"][:2], ["3", "2"])
prev_format = dset[list(dset.keys())[0]].format
for dset_split in dset.values():
dset_split.set_format(**dset_split.format)
self.assertEqual(prev_format, dset_split.format)
dset.set_transform(transform=transform, columns=["col_1", "col_2"])
for dset_split in dset.values():
self.assertEqual(len(dset_split[0].keys()), 2)
self.assertEqual(dset_split[0]["col_2"], "A")
del dset
def test_with_format(self):
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset2 = dset.with_format("numpy", columns=["col_1"])
dset.set_format("numpy", columns=["col_1"])
for dset_split, dset_split2 in zip(dset.values(), dset2.values()):
self.assertDictEqual(dset_split.format, dset_split2.format)
del dset, dset2
def test_with_transform(self):
def transform(batch):
return {k: [str(i).upper() for i in v] for k, v in batch.items()}
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset2 = dset.with_transform(transform, columns=["col_1"])
dset.set_transform(transform, columns=["col_1"])
for dset_split, dset_split2 in zip(dset.values(), dset2.values()):
self.assertDictEqual(dset_split.format, dset_split2.format)
del dset, dset2
def test_cast(self):
dset = self._create_dummy_dataset_dict(multiple_columns=True)
features = dset["train"].features
features["col_1"] = Value("float64")
dset = dset.cast(features)
for dset_split in dset.values():
self.assertEqual(dset_split.num_columns, 2)
self.assertEqual(dset_split.features["col_1"], Value("float64"))
self.assertIsInstance(dset_split[0]["col_1"], float)
del dset
def test_remove_columns(self):
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset = dset.remove_columns(column_names="col_1")
for dset_split in dset.values():
self.assertEqual(dset_split.num_columns, 1)
self.assertListEqual(list(dset_split.column_names), ["col_2"])
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset = dset.remove_columns(column_names=["col_1", "col_2"])
for dset_split in dset.values():
self.assertEqual(dset_split.num_columns, 0)
dset = self._create_dummy_dataset_dict(multiple_columns=True)
for dset_split in dset.values():
dset_split._format_columns = ["col_1", "col_2"]
dset = dset.remove_columns(column_names=["col_1"])
for dset_split in dset.values():
self.assertListEqual(dset_split._format_columns, ["col_2"])
self.assertEqual(dset_split.num_columns, 1)
self.assertListEqual(list(dset_split.column_names), ["col_2"])
del dset
def test_rename_column(self):
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset = dset.rename_column(original_column_name="col_1", new_column_name="new_name")
for dset_split in dset.values():
self.assertEqual(dset_split.num_columns, 2)
self.assertListEqual(list(dset_split.column_names), ["new_name", "col_2"])
del dset
def test_select_columns(self):
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset = dset.select_columns(column_names=[])
for dset_split in dset.values():
self.assertEqual(dset_split.num_columns, 0)
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset = dset.select_columns(column_names="col_1")
for dset_split in dset.values():
self.assertEqual(dset_split.num_columns, 1)
self.assertListEqual(list(dset_split.column_names), ["col_1"])
dset = self._create_dummy_dataset_dict(multiple_columns=True)
dset = dset.select_columns(column_names=["col_1", "col_2"])
for dset_split in dset.values():
self.assertEqual(dset_split.num_columns, 2)
dset = self._create_dummy_dataset_dict(multiple_columns=True)
for dset_split in dset.values():
dset_split._format_columns = ["col_1", "col_2"]
dset = dset.select_columns(column_names=["col_1"])
for dset_split in dset.values():
self.assertEqual(dset_split.num_columns, 1)
self.assertListEqual(list(dset_split.column_names), ["col_1"])
self.assertListEqual(dset_split._format_columns, ["col_1"])
def test_map(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dsets = self._create_dummy_dataset_dict()
mapped_dsets_1: DatasetDict = dsets.map(lambda ex: {"foo": ["bar"] * len(ex["filename"])}, batched=True)
self.assertListEqual(list(dsets.keys()), list(mapped_dsets_1.keys()))
self.assertListEqual(mapped_dsets_1["train"].column_names, ["filename", "foo"])
cache_file_names = {
"train": os.path.join(tmp_dir, "train.arrow"),
"test": os.path.join(tmp_dir, "test.arrow"),
}
mapped_dsets_2: DatasetDict = mapped_dsets_1.map(
lambda ex: {"bar": ["foo"] * len(ex["filename"])}, batched=True, cache_file_names=cache_file_names
)
self.assertListEqual(list(dsets.keys()), list(mapped_dsets_2.keys()))
self.assertListEqual(sorted(mapped_dsets_2["train"].column_names), sorted(["filename", "foo", "bar"]))
del dsets, mapped_dsets_1, mapped_dsets_2
def test_iterable_map(self):
dsets = self._create_dummy_iterable_dataset_dict()
fn_kwargs = {"n": 3}
mapped_dsets: IterableDatasetDict = dsets.map(
lambda x, n: {"foo": [n] * len(x["filename"])},
batched=True,
fn_kwargs=fn_kwargs,
)
mapped_example = next(iter(mapped_dsets["train"]))
self.assertListEqual(sorted(mapped_example.keys()), sorted(["filename", "foo"]))
self.assertLessEqual(mapped_example["foo"], 3)
del dsets, mapped_dsets
def test_filter(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dsets = self._create_dummy_dataset_dict()
filtered_dsets_1: DatasetDict = dsets.filter(lambda ex: int(ex["filename"].split("_")[-1]) < 10)
self.assertListEqual(list(dsets.keys()), list(filtered_dsets_1.keys()))
self.assertEqual(len(filtered_dsets_1["train"]), 10)
cache_file_names = {
"train": os.path.join(tmp_dir, "train.arrow"),
"test": os.path.join(tmp_dir, "test.arrow"),
}
filtered_dsets_2: DatasetDict = filtered_dsets_1.filter(
lambda ex: int(ex["filename"].split("_")[-1]) < 5, cache_file_names=cache_file_names
)
self.assertListEqual(list(dsets.keys()), list(filtered_dsets_2.keys()))
self.assertEqual(len(filtered_dsets_2["train"]), 5)
filtered_dsets_3: DatasetDict = dsets.filter(
lambda examples: [int(ex.split("_")[-1]) < 10 for ex in examples["filename"]], batched=True
)
self.assertListEqual(list(dsets.keys()), list(filtered_dsets_3.keys()))
self.assertEqual(len(filtered_dsets_3["train"]), 10)
del dsets, filtered_dsets_1, filtered_dsets_2, filtered_dsets_3
def test_iterable_filter(self):
dsets = self._create_dummy_iterable_dataset_dict()
example = next(iter(dsets["train"]))
fn_kwargs = {"n": 3}
filtered_dsets: IterableDatasetDict = dsets.filter(
lambda ex, n: n < int(ex["filename"].split("_")[-1]), fn_kwargs=fn_kwargs
)
filtered_example = next(iter(filtered_dsets["train"]))
self.assertListEqual(list(example.keys()), list(filtered_example.keys()))
self.assertEqual(int(filtered_example["filename"].split("_")[-1]), 4)  # the filter keeps ids > 3, so the first kept id is 4
del dsets, filtered_dsets
def test_sort(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dsets = self._create_dummy_dataset_dict()
sorted_dsets_1: DatasetDict = dsets.sort("filename")
self.assertListEqual(list(dsets.keys()), list(sorted_dsets_1.keys()))
self.assertListEqual(
[f.split("_")[-1] for f in sorted_dsets_1["train"]["filename"]],
sorted(f"{x:03d}" for x in range(30)),
)
indices_cache_file_names = {
"train": os.path.join(tmp_dir, "train.arrow"),
"test": os.path.join(tmp_dir, "test.arrow"),
}
sorted_dsets_2: DatasetDict = sorted_dsets_1.sort(
"filename", indices_cache_file_names=indices_cache_file_names, reverse=True
)
self.assertListEqual(list(dsets.keys()), list(sorted_dsets_2.keys()))
self.assertListEqual(
[f.split("_")[-1] for f in sorted_dsets_2["train"]["filename"]],
sorted((f"{x:03d}" for x in range(30)), reverse=True),
)
del dsets, sorted_dsets_1, sorted_dsets_2
def test_shuffle(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dsets = self._create_dummy_dataset_dict()
indices_cache_file_names = {
"train": os.path.join(tmp_dir, "train.arrow"),
"test": os.path.join(tmp_dir, "test.arrow"),
}
seeds = {
"train": 1234,
"test": 1234,
}
dsets_shuffled = dsets.shuffle(
seeds=seeds, indices_cache_file_names=indices_cache_file_names, load_from_cache_file=False
)
self.assertListEqual(dsets_shuffled["train"]["filename"], dsets_shuffled["test"]["filename"])
self.assertEqual(len(dsets_shuffled["train"]), 30)
self.assertEqual(dsets_shuffled["train"][0]["filename"], "my_name-train_028")
self.assertEqual(dsets_shuffled["train"][2]["filename"], "my_name-train_010")
self.assertDictEqual(dsets["train"].features, Features({"filename": Value("string")}))
self.assertDictEqual(dsets_shuffled["train"].features, Features({"filename": Value("string")}))
# Reproducibility
indices_cache_file_names_2 = {
"train": os.path.join(tmp_dir, "train_2.arrow"),
"test": os.path.join(tmp_dir, "test_2.arrow"),
}
dsets_shuffled_2 = dsets.shuffle(
seeds=seeds, indices_cache_file_names=indices_cache_file_names_2, load_from_cache_file=False
)
self.assertListEqual(dsets_shuffled["train"]["filename"], dsets_shuffled_2["train"]["filename"])
seeds = {
"train": 1234,
"test": 1,
}
indices_cache_file_names_3 = {
"train": os.path.join(tmp_dir, "train_3.arrow"),
"test": os.path.join(tmp_dir, "test_3.arrow"),
}
dsets_shuffled_3 = dsets.shuffle(
seeds=seeds, indices_cache_file_names=indices_cache_file_names_3, load_from_cache_file=False
)
self.assertNotEqual(dsets_shuffled_3["train"]["filename"], dsets_shuffled_3["test"]["filename"])
# other input types
dsets_shuffled_int = dsets.shuffle(42)
dsets_shuffled_alias = dsets.shuffle(seed=42)
dsets_shuffled_none = dsets.shuffle()
self.assertEqual(len(dsets_shuffled_int["train"]), 30)
self.assertEqual(len(dsets_shuffled_alias["train"]), 30)
self.assertEqual(len(dsets_shuffled_none["train"]), 30)
del dsets, dsets_shuffled, dsets_shuffled_2, dsets_shuffled_3
del dsets_shuffled_int, dsets_shuffled_alias, dsets_shuffled_none
def test_flatten_indices(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dsets = self._create_dummy_dataset_dict()
indices_cache_file_names = {
"train": os.path.join(tmp_dir, "train.arrow"),
"test": os.path.join(tmp_dir, "test.arrow"),
}
dsets_shuffled = dsets.shuffle(
seed=42, indices_cache_file_names=indices_cache_file_names, load_from_cache_file=False
)
self.assertIsNotNone(dsets_shuffled["train"]._indices)
self.assertIsNotNone(dsets_shuffled["test"]._indices)
dsets_flat = dsets_shuffled.flatten_indices()
self.assertIsNone(dsets_flat["train"]._indices)
self.assertIsNone(dsets_flat["test"]._indices)
del dsets, dsets_shuffled, dsets_flat
def test_check_values_type(self):
dsets = self._create_dummy_dataset_dict()
dsets["bad_split"] = None
self.assertRaises(TypeError, dsets.map, lambda x: x)
self.assertRaises(TypeError, dsets.filter, lambda x: True)
self.assertRaises(TypeError, dsets.shuffle)
self.assertRaises(TypeError, dsets.sort, "filename")
del dsets
def test_serialization(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dsets = self._create_dummy_dataset_dict()
dsets.save_to_disk(tmp_dir)
reloaded_dsets = DatasetDict.load_from_disk(tmp_dir)
self.assertListEqual(sorted(reloaded_dsets), ["test", "train"])
self.assertEqual(len(reloaded_dsets["train"]), 30)
self.assertListEqual(reloaded_dsets["train"].column_names, ["filename"])
self.assertEqual(len(reloaded_dsets["test"]), 30)
self.assertListEqual(reloaded_dsets["test"].column_names, ["filename"])
del reloaded_dsets
del dsets["test"]
dsets.save_to_disk(tmp_dir)
reloaded_dsets = DatasetDict.load_from_disk(tmp_dir)
self.assertListEqual(sorted(reloaded_dsets), ["train"])
self.assertEqual(len(reloaded_dsets["train"]), 30)
self.assertListEqual(reloaded_dsets["train"].column_names, ["filename"])
del dsets, reloaded_dsets
dsets = self._create_dummy_dataset_dict()
dsets.save_to_disk(tmp_dir, num_shards={"train": 3, "test": 2})
reloaded_dsets = DatasetDict.load_from_disk(tmp_dir)
self.assertListEqual(sorted(reloaded_dsets), ["test", "train"])
self.assertEqual(len(reloaded_dsets["train"]), 30)
self.assertListEqual(reloaded_dsets["train"].column_names, ["filename"])
self.assertEqual(len(reloaded_dsets["train"].cache_files), 3)
self.assertEqual(len(reloaded_dsets["test"]), 30)
self.assertListEqual(reloaded_dsets["test"].column_names, ["filename"])
self.assertEqual(len(reloaded_dsets["test"].cache_files), 2)
del reloaded_dsets
dsets = self._create_dummy_dataset_dict()
dsets.save_to_disk(tmp_dir, num_proc=2)
reloaded_dsets = DatasetDict.load_from_disk(tmp_dir)
self.assertListEqual(sorted(reloaded_dsets), ["test", "train"])
self.assertEqual(len(reloaded_dsets["train"]), 30)
self.assertListEqual(reloaded_dsets["train"].column_names, ["filename"])
self.assertEqual(len(reloaded_dsets["train"].cache_files), 2)
self.assertEqual(len(reloaded_dsets["test"]), 30)
self.assertListEqual(reloaded_dsets["test"].column_names, ["filename"])
self.assertEqual(len(reloaded_dsets["test"].cache_files), 2)
del reloaded_dsets
def test_load_from_disk(self):
with tempfile.TemporaryDirectory() as tmp_dir:
dsets = self._create_dummy_dataset_dict()
dsets.save_to_disk(tmp_dir)
del dsets
dsets = load_from_disk(tmp_dir)
self.assertListEqual(sorted(dsets), ["test", "train"])
self.assertEqual(len(dsets["train"]), 30)
self.assertListEqual(dsets["train"].column_names, ["filename"])
self.assertEqual(len(dsets["test"]), 30)
self.assertListEqual(dsets["test"].column_names, ["filename"])
del dsets
def test_align_labels_with_mapping(self):
train_features = Features(
{
"input_text": Value("string"),
"input_labels": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
}
)
test_features = Features(
{
"input_text": Value("string"),
"input_labels": ClassLabel(num_classes=3, names=["entailment", "contradiction", "neutral"]),
}
)
train_data = {"input_text": ["a", "a", "b", "b", "c", "c"], "input_labels": [0, 0, 1, 1, 2, 2]}
test_data = {"input_text": ["a", "a", "c", "c", "b", "b"], "input_labels": [0, 0, 1, 1, 2, 2]}
label2id = {"CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1}
id2label = {v: k for k, v in label2id.items()}
train_expected_labels = [2, 2, 1, 1, 0, 0]
test_expected_labels = [2, 2, 0, 0, 1, 1]
train_expected_label_names = [id2label[idx] for idx in train_expected_labels]
test_expected_label_names = [id2label[idx] for idx in test_expected_labels]
dsets = DatasetDict(
{
"train": Dataset.from_dict(train_data, features=train_features),
"test": Dataset.from_dict(test_data, features=test_features),
}
)
dsets = dsets.align_labels_with_mapping(label2id, "input_labels")
self.assertListEqual(train_expected_labels, dsets["train"]["input_labels"])
self.assertListEqual(test_expected_labels, dsets["test"]["input_labels"])
train_aligned_label_names = [
dsets["train"].features["input_labels"].int2str(idx) for idx in dsets["train"]["input_labels"]
]
test_aligned_label_names = [
dsets["test"].features["input_labels"].int2str(idx) for idx in dsets["test"]["input_labels"]
]
self.assertListEqual(train_expected_label_names, train_aligned_label_names)
self.assertListEqual(test_expected_label_names, test_aligned_label_names)
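# Minimal sketch of the label alignment exercised above (illustrative, not the library
# implementation): map each stored class id to its name, then to the id given by label2id.
_names = ["entailment", "neutral", "contradiction"]
_label2id = {"CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1}
_aligned = [_label2id[_names[i].upper()] for i in [0, 0, 1, 1, 2, 2]]
assert _aligned == [2, 2, 1, 1, 0, 0]  # matches train_expected_labels above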
def test_dummy_datasetdict_serialize_fs(mockfs):
dataset_dict = DatasetDict(
{
"train": Dataset.from_dict({"a": range(30)}),
"test": Dataset.from_dict({"a": range(10)}),
}
)
dataset_path = "mock://my_dataset"
dataset_dict.save_to_disk(dataset_path, storage_options=mockfs.storage_options)
assert mockfs.isdir(dataset_path)
assert mockfs.glob(dataset_path + "/*")
reloaded = dataset_dict.load_from_disk(dataset_path, storage_options=mockfs.storage_options)
assert list(reloaded) == list(dataset_dict)
for k in dataset_dict:
assert reloaded[k].features == dataset_dict[k].features
assert reloaded[k].to_dict() == dataset_dict[k].to_dict()
def _check_csv_datasetdict(dataset_dict, expected_features, splits=("train",)):
assert isinstance(dataset_dict, DatasetDict)
for split in splits:
dataset = dataset_dict[split]
assert dataset.num_rows == 4
assert dataset.num_columns == 3
assert dataset.column_names == ["col_1", "col_2", "col_3"]
for feature, expected_dtype in expected_features.items():
assert dataset.features[feature].dtype == expected_dtype
@pytest.mark.parametrize("keep_in_memory", [False, True])
def test_datasetdict_from_csv_keep_in_memory(keep_in_memory, csv_path, tmp_path):
cache_dir = tmp_path / "cache"
expected_features = {"col_1": "int64", "col_2": "int64", "col_3": "float64"}
with assert_arrow_memory_increases() if keep_in_memory else assert_arrow_memory_doesnt_increase():
dataset = DatasetDict.from_csv({"train": csv_path}, cache_dir=cache_dir, keep_in_memory=keep_in_memory)
_check_csv_datasetdict(dataset, expected_features)
@pytest.mark.parametrize(
"features",
[
None,
{"col_1": "string", "col_2": "int64", "col_3": "float64"},
{"col_1": "string", "col_2": "string", "col_3": "string"},
{"col_1": "int32", "col_2": "int32", "col_3": "int32"},
{"col_1": "float32", "col_2": "float32", "col_3": "float32"},
],
)
def test_datasetdict_from_csv_features(features, csv_path, tmp_path):
cache_dir = tmp_path / "cache"
# The CSV format loses col_1's string dtype information: the default inferred dtype is "int64" instead of "string"
default_expected_features = {"col_1": "int64", "col_2": "int64", "col_3": "float64"}
expected_features = features.copy() if features else default_expected_features
features = (
Features({feature: Value(dtype) for feature, dtype in features.items()}) if features is not None else None
)
dataset = DatasetDict.from_csv({"train": csv_path}, features=features, cache_dir=cache_dir)
_check_csv_datasetdict(dataset, expected_features)
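# Hypothetical mini-check of the CSV dtype remark above: a value written to CSV carries
# no type information, so "1" reads back as int64 under pandas' default inference.
import io

_df = pd.read_csv(io.StringIO("col_1\n1\n"))
assert _df["col_1"].dtype == "int64"  # the original string dtype is lost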
@pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"])
def test_datasetdict_from_csv_split(split, csv_path, tmp_path):
if split:
path = {split: csv_path}
else:
split = "train"
path = {"train": csv_path, "test": csv_path}
cache_dir = tmp_path / "cache"
expected_features = {"col_1": "int64", "col_2": "int64", "col_3": "float64"}
dataset = DatasetDict.from_csv(path, cache_dir=cache_dir)
_check_csv_datasetdict(dataset, expected_features, splits=list(path.keys()))
assert all(dataset[split].split == split for split in path.keys())
def _check_json_datasetdict(dataset_dict, expected_features, splits=("train",)):
assert isinstance(dataset_dict, DatasetDict)
for split in splits:
dataset = dataset_dict[split]
assert dataset.num_rows == 4
assert dataset.num_columns == 3
assert dataset.column_names == ["col_1", "col_2", "col_3"]
for feature, expected_dtype in expected_features.items():
assert dataset.features[feature].dtype == expected_dtype
@pytest.mark.parametrize("keep_in_memory", [False, True])
def test_datasetdict_from_json_keep_in_memory(keep_in_memory, jsonl_path, tmp_path):
cache_dir = tmp_path / "cache"
expected_features = {"col_1": "string", "col_2": "int64", "col_3": "float64"}
with assert_arrow_memory_increases() if keep_in_memory else assert_arrow_memory_doesnt_increase():
dataset = DatasetDict.from_json({"train": jsonl_path}, cache_dir=cache_dir, keep_in_memory=keep_in_memory)
_check_json_datasetdict(dataset, expected_features)
@pytest.mark.parametrize(
"features",
[
None,
{"col_1": "string", "col_2": "int64", "col_3": "float64"},
{"col_1": "string", "col_2": "string", "col_3": "string"},
{"col_1": "int32", "col_2": "int32", "col_3": "int32"},
{"col_1": "float32", "col_2": "float32", "col_3": "float32"},
],
)
def test_datasetdict_from_json_features(features, jsonl_path, tmp_path):
cache_dir = tmp_path / "cache"
default_expected_features = {"col_1": "string", "col_2": "int64", "col_3": "float64"}
expected_features = features.copy() if features else default_expected_features
features = (
Features({feature: Value(dtype) for feature, dtype in features.items()}) if features is not None else None
)
dataset = DatasetDict.from_json({"train": jsonl_path}, features=features, cache_dir=cache_dir)
_check_json_datasetdict(dataset, expected_features)
@pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"])
def test_datasetdict_from_json_splits(split, jsonl_path, tmp_path):
if split:
path = {split: jsonl_path}
else:
split = "train"
path = {"train": jsonl_path, "test": jsonl_path}
cache_dir = tmp_path / "cache"
expected_features = {"col_1": "string", "col_2": "int64", "col_3": "float64"}
dataset = DatasetDict.from_json(path, cache_dir=cache_dir)
_check_json_datasetdict(dataset, expected_features, splits=list(path.keys()))
assert all(dataset[split].split == split for split in path.keys())
def _check_parquet_datasetdict(dataset_dict, expected_features, splits=("train",)):
assert isinstance(dataset_dict, DatasetDict)
for split in splits:
dataset = dataset_dict[split]
assert dataset.num_rows == 4
assert dataset.num_columns == 3
assert dataset.column_names == ["col_1", "col_2", "col_3"]
for feature, expected_dtype in expected_features.items():
assert dataset.features[feature].dtype == expected_dtype
@pytest.mark.parametrize("keep_in_memory", [False, True])
def test_datasetdict_from_parquet_keep_in_memory(keep_in_memory, parquet_path, tmp_path):
cache_dir = tmp_path / "cache"
expected_features = {"col_1": "string", "col_2": "int64", "col_3": "float64"}
with assert_arrow_memory_increases() if keep_in_memory else assert_arrow_memory_doesnt_increase():
dataset = DatasetDict.from_parquet({"train": parquet_path}, cache_dir=cache_dir, keep_in_memory=keep_in_memory)
_check_parquet_datasetdict(dataset, expected_features)
@pytest.mark.parametrize(
"features",
[
None,
{"col_1": "string", "col_2": "int64", "col_3": "float64"},
{"col_1": "string", "col_2": "string", "col_3": "string"},
{"col_1": "int32", "col_2": "int32", "col_3": "int32"},
{"col_1": "float32", "col_2": "float32", "col_3": "float32"},
],
)
def test_datasetdict_from_parquet_features(features, parquet_path, tmp_path):
cache_dir = tmp_path / "cache"
default_expected_features = {"col_1": "string", "col_2": "int64", "col_3": "float64"}
expected_features = features.copy() if features else default_expected_features
features = (
Features({feature: Value(dtype) for feature, dtype in features.items()}) if features is not None else None
)
dataset = DatasetDict.from_parquet({"train": parquet_path}, features=features, cache_dir=cache_dir)
_check_parquet_datasetdict(dataset, expected_features)
@pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"])
def test_datasetdict_from_parquet_split(split, parquet_path, tmp_path):
if split:
path = {split: parquet_path}
else:
split = "train"
path = {"train": parquet_path, "test": parquet_path}
cache_dir = tmp_path / "cache"
expected_features = {"col_1": "string", "col_2": "int64", "col_3": "float64"}
dataset = DatasetDict.from_parquet(path, cache_dir=cache_dir)
_check_parquet_datasetdict(dataset, expected_features, splits=list(path.keys()))
assert all(dataset[split].split == split for split in path.keys())
def _check_text_datasetdict(dataset_dict, expected_features, splits=("train",)):
assert isinstance(dataset_dict, DatasetDict)
for split in splits:
dataset = dataset_dict[split]
assert dataset.num_rows == 4
assert dataset.num_columns == 1
assert dataset.column_names == ["text"]
for feature, expected_dtype in expected_features.items():
assert dataset.features[feature].dtype == expected_dtype
@pytest.mark.parametrize("keep_in_memory", [False, True])
def test_datasetdict_from_text_keep_in_memory(keep_in_memory, text_path, tmp_path):
cache_dir = tmp_path / "cache"
expected_features = {"text": "string"}
with assert_arrow_memory_increases() if keep_in_memory else assert_arrow_memory_doesnt_increase():
dataset = DatasetDict.from_text({"train": text_path}, cache_dir=cache_dir, keep_in_memory=keep_in_memory)
_check_text_datasetdict(dataset, expected_features)
@pytest.mark.parametrize(
"features",
[
None,
{"text": "string"},
{"text": "int32"},
{"text": "float32"},
],
)
def test_datasetdict_from_text_features(features, text_path, tmp_path):
cache_dir = tmp_path / "cache"
default_expected_features = {"text": "string"}
expected_features = features.copy() if features else default_expected_features
features = (
Features({feature: Value(dtype) for feature, dtype in features.items()}) if features is not None else None
)
dataset = DatasetDict.from_text({"train": text_path}, features=features, cache_dir=cache_dir)
_check_text_datasetdict(dataset, expected_features)
@pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"])
def test_datasetdict_from_text_split(split, text_path, tmp_path):
if split:
path = {split: text_path}
else:
split = "train"
path = {"train": text_path, "test": text_path}
cache_dir = tmp_path / "cache"
expected_features = {"text": "string"}
dataset = DatasetDict.from_text(path, cache_dir=cache_dir)
_check_text_datasetdict(dataset, expected_features, splits=list(path.keys()))
assert all(dataset[split].split == split for split in path.keys())
|
datasets/tests/test_dataset_dict.py/0
|
{
"file_path": "datasets/tests/test_dataset_dict.py",
"repo_id": "datasets",
"token_count": 17807
}
| 85
|
import pickle
from copy import deepcopy
from itertools import chain, islice
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.compute as pc
import pytest
from datasets import Dataset, load_dataset
from datasets.combine import concatenate_datasets, interleave_datasets
from datasets.features import (
ClassLabel,
Features,
Image,
Value,
)
from datasets.formatting import get_format_type_from_alias
from datasets.info import DatasetInfo
from datasets.iterable_dataset import (
ArrowExamplesIterable,
BufferShuffledExamplesIterable,
CyclingMultiSourcesExamplesIterable,
ExamplesIterable,
FilteredExamplesIterable,
FormattingConfig,
HorizontallyConcatenatedMultiSourcesExamplesIterable,
IterableDataset,
MappedExamplesIterable,
RandomlyCyclingMultiSourcesExamplesIterable,
SelectColumnsIterable,
ShuffledDataSourcesArrowExamplesIterable,
ShuffledDataSourcesExamplesIterable,
ShufflingConfig,
SkipExamplesIterable,
StepExamplesIterable,
TakeExamplesIterable,
TypedExamplesIterable,
VerticallyConcatenatedMultiSourcesExamplesIterable,
_BaseExamplesIterable,
_batch_arrow_tables,
_batch_to_examples,
_convert_to_arrow,
_examples_to_batch,
)
from .utils import (
assert_arrow_memory_doesnt_increase,
is_rng_equal,
require_dill_gt_0_3_2,
require_not_windows,
require_pyspark,
require_tf,
require_torch,
)
DEFAULT_N_EXAMPLES = 20
DEFAULT_BATCH_SIZE = 4
DEFAULT_FILEPATH = "file.txt"
SAMPLE_DATASET_IDENTIFIER = "hf-internal-testing/dataset_with_script" # has dataset script
def generate_examples_fn(**kwargs):
kwargs = kwargs.copy()
n = kwargs.pop("n", DEFAULT_N_EXAMPLES)
filepaths = kwargs.pop("filepaths", None)
for filepath in filepaths or [DEFAULT_FILEPATH]:
if filepaths is not None:
kwargs["filepath"] = filepath
for i in range(n):
yield f"{filepath}_{i}", {"id": i, **kwargs}
def generate_tables_fn(**kwargs):
kwargs = kwargs.copy()
n = kwargs.pop("n", DEFAULT_N_EXAMPLES)
batch_size = kwargs.pop("batch_size", DEFAULT_BATCH_SIZE)
filepaths = kwargs.pop("filepaths", None)
for filepath in filepaths or [DEFAULT_FILEPATH]:
buffer = []
batch_idx = 0
if filepaths is not None:
kwargs["filepath"] = filepath
for i in range(n):
buffer.append({"id": i, **kwargs})
if len(buffer) == batch_size:
yield f"{filepath}_{batch_idx}", pa.Table.from_pylist(buffer)
buffer = []
batch_idx += 1
yield batch_idx, pa.Table.from_pylist(buffer)  # flush the remaining (possibly empty) buffer
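# Quick sanity sketch of the generator above, under its defaults (n=20, batch_size=4):
# it yields 5 full 4-row tables, then flushes the (here empty) trailing buffer.
_tables = list(generate_tables_fn())
assert len(_tables) == 6
assert all(len(table) == 4 for _, table in _tables[:5])
assert len(_tables[-1][1]) == 0  # the final flush is empty because 20 % 4 == 0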
@pytest.fixture
def dataset():
ex_iterable = ExamplesIterable(generate_examples_fn, {})
return IterableDataset(ex_iterable, info=DatasetInfo(description="dummy"), split="train")
@pytest.fixture
def dataset_with_several_columns():
ex_iterable = ExamplesIterable(
generate_examples_fn,
{"filepath": ["data0.txt", "data1.txt", "data2.txt"], "metadata": {"sources": ["https://foo.bar"]}},
)
return IterableDataset(ex_iterable, info=DatasetInfo(description="dummy"), split="train")
@pytest.fixture
def arrow_file(tmp_path_factory, dataset: IterableDataset):
filename = str(tmp_path_factory.mktemp("data") / "file.arrow")
Dataset.from_generator(dataset.__iter__).map(cache_file_name=filename)
return filename
################################
#
# Utilities tests
#
################################
@pytest.mark.parametrize("batch_size", [1, 2, 3, 9, 10, 11, 20])
@pytest.mark.parametrize("drop_last_batch", [False, True])
def test_convert_to_arrow(batch_size, drop_last_batch):
examples = [{"foo": i} for i in range(10)]
full_table = pa.Table.from_pylist(examples)
num_rows = len(full_table) if not drop_last_batch else len(full_table) // batch_size * batch_size
num_batches = (num_rows // batch_size) + 1 if num_rows % batch_size else num_rows // batch_size
subtables = list(
_convert_to_arrow(
list(enumerate(examples)),
batch_size=batch_size,
drop_last_batch=drop_last_batch,
)
)
assert len(subtables) == num_batches
if drop_last_batch:
assert all(len(subtable) == batch_size for _, subtable in subtables)
else:
assert all(len(subtable) == batch_size for _, subtable in subtables[:-1])
assert len(subtables[-1][1]) <= batch_size
if num_rows > 0:
reloaded = pa.concat_tables([subtable for _, subtable in subtables])
assert full_table.slice(0, num_rows).to_pydict() == reloaded.to_pydict()
@pytest.mark.parametrize(
"tables",
[
[pa.table({"foo": range(10)})],
[pa.table({"foo": range(0, 5)}), pa.table({"foo": range(5, 10)})],
[pa.table({"foo": [i]}) for i in range(10)],
],
)
@pytest.mark.parametrize("batch_size", [1, 2, 3, 9, 10, 11, 20])
@pytest.mark.parametrize("drop_last_batch", [False, True])
def test_batch_arrow_tables(tables, batch_size, drop_last_batch):
full_table = pa.concat_tables(tables)
num_rows = len(full_table) if not drop_last_batch else len(full_table) // batch_size * batch_size
num_batches = (num_rows // batch_size) + 1 if num_rows % batch_size else num_rows // batch_size
subtables = list(
_batch_arrow_tables(list(enumerate(tables)), batch_size=batch_size, drop_last_batch=drop_last_batch)
)
assert len(subtables) == num_batches
if drop_last_batch:
assert all(len(subtable) == batch_size for _, subtable in subtables)
else:
assert all(len(subtable) == batch_size for _, subtable in subtables[:-1])
assert len(subtables[-1][1]) <= batch_size
if num_rows > 0:
reloaded = pa.concat_tables([subtable for _, subtable in subtables])
assert full_table.slice(0, num_rows).to_pydict() == reloaded.to_pydict()
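# Worked check of the batch-count arithmetic shared by the two tests above (this mirrors
# the tests' own formula; it is not a library helper).
def _expected_num_batches(num_rows, batch_size, drop_last_batch):
    kept_rows = num_rows if not drop_last_batch else num_rows // batch_size * batch_size
    return kept_rows // batch_size + (1 if kept_rows % batch_size else 0)

assert _expected_num_batches(10, 3, drop_last_batch=False) == 4  # batch sizes 3, 3, 3, 1
assert _expected_num_batches(10, 3, drop_last_batch=True) == 3  # the partial batch is dropped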
################################
#
# _BaseExampleIterable tests
#
################################
def test_examples_iterable():
ex_iterable = ExamplesIterable(generate_examples_fn, {})
expected = list(generate_examples_fn())
assert next(iter(ex_iterable)) == expected[0]
assert list(ex_iterable) == expected
assert ex_iterable.iter_arrow is None
def test_examples_iterable_with_kwargs():
ex_iterable = ExamplesIterable(generate_examples_fn, {"filepaths": ["0.txt", "1.txt"], "split": "train"})
expected = list(generate_examples_fn(filepaths=["0.txt", "1.txt"], split="train"))
assert list(ex_iterable) == expected
assert all("split" in ex for _, ex in ex_iterable)
assert sorted({ex["filepath"] for _, ex in ex_iterable}) == ["0.txt", "1.txt"]
def test_examples_iterable_shuffle_data_sources():
ex_iterable = ExamplesIterable(generate_examples_fn, {"filepaths": ["0.txt", "1.txt"]})
ex_iterable = ex_iterable.shuffle_data_sources(np.random.default_rng(40))
expected = list(generate_examples_fn(filepaths=["1.txt", "0.txt"])) # shuffle the filepaths
assert list(ex_iterable) == expected
def test_examples_iterable_shuffle_shards_and_metadata():
def gen(filepaths, all_metadata):
for i, (filepath, metadata) in enumerate(zip(filepaths, all_metadata)):
yield i, {"filepath": filepath, "metadata": metadata}
ex_iterable = ExamplesIterable(
gen,
{
"filepaths": [f"{i}.txt" for i in range(100)],
"all_metadata": [{"id": str(i)} for i in range(100)],
},
)
ex_iterable = ex_iterable.shuffle_data_sources(np.random.default_rng(42))
out = list(ex_iterable)
filepaths_ids = [x["filepath"].split(".")[0] for _, x in out]
metadata_ids = [x["metadata"]["id"] for _, x in out]
assert filepaths_ids == metadata_ids, "entangled lists of shards/metadata should be shuffled the same way"
def test_arrow_examples_iterable():
ex_iterable = ArrowExamplesIterable(generate_tables_fn, {})
expected = sum([pa_table.to_pylist() for _, pa_table in generate_tables_fn()], [])
assert next(iter(ex_iterable))[1] == expected[0]
assert [example for _, example in ex_iterable] == expected
expected = list(generate_tables_fn())
assert list(ex_iterable.iter_arrow()) == expected
def test_arrow_examples_iterable_with_kwargs():
ex_iterable = ArrowExamplesIterable(generate_tables_fn, {"filepaths": ["0.txt", "1.txt"], "split": "train"})
expected = sum(
[pa_table.to_pylist() for _, pa_table in generate_tables_fn(filepaths=["0.txt", "1.txt"], split="train")], []
)
assert [example for _, example in ex_iterable] == expected
assert all("split" in ex for _, ex in ex_iterable)
assert sorted({ex["filepath"] for _, ex in ex_iterable}) == ["0.txt", "1.txt"]
expected = list(generate_tables_fn(filepaths=["0.txt", "1.txt"], split="train"))
assert list(ex_iterable.iter_arrow()) == expected
def test_arrow_examples_iterable_shuffle_data_sources():
ex_iterable = ArrowExamplesIterable(generate_tables_fn, {"filepaths": ["0.txt", "1.txt"]})
ex_iterable = ex_iterable.shuffle_data_sources(np.random.default_rng(40))
expected = sum(
[pa_table.to_pylist() for _, pa_table in generate_tables_fn(filepaths=["1.txt", "0.txt"])], []
) # shuffle the filepaths
assert [example for _, example in ex_iterable] == expected
expected = list(generate_tables_fn(filepaths=["1.txt", "0.txt"]))
assert list(ex_iterable.iter_arrow()) == expected
@pytest.mark.parametrize("seed", [42, 1337, 101010, 123456])
def test_buffer_shuffled_examples_iterable(seed):
n, buffer_size = 100, 30
generator = np.random.default_rng(seed)
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = BufferShuffledExamplesIterable(base_ex_iterable, buffer_size=buffer_size, generator=generator)
rng = deepcopy(generator)
expected_indices_used_for_shuffling = list(
islice(BufferShuffledExamplesIterable._iter_random_indices(rng, buffer_size=buffer_size), n - buffer_size)
)
# indices to pick in the shuffle buffer should all be in the right range
assert all(0 <= index_to_pick < buffer_size for index_to_pick in expected_indices_used_for_shuffling)
# it should be random indices
assert expected_indices_used_for_shuffling != list(range(buffer_size))
# The final order of examples is the result of a shuffle buffer.
all_examples = list(generate_examples_fn(n=n))
# We create a buffer and we pick random examples from it.
buffer, rest = all_examples[:buffer_size], all_examples[buffer_size:]
expected = []
for i, index_to_pick in enumerate(expected_indices_used_for_shuffling):
expected.append(buffer[index_to_pick])
# The picked examples are directly replaced by the next examples from the iterable.
buffer[index_to_pick] = rest.pop(0)
# Once we have reached the end of the iterable, we shuffle the buffer and return the remaining examples.
rng.shuffle(buffer)
expected += buffer
assert next(iter(ex_iterable)) == expected[0]
assert list(ex_iterable) == expected
assert sorted(ex_iterable) == sorted(all_examples)
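# Standalone sketch of the shuffle-buffer idea replayed step by step above (illustrative
# only): keep a fixed-size buffer, emit a random slot, refill it with the next incoming
# item, and flush the buffer in random order once the source is exhausted.
def _shuffle_buffer_sketch(iterable, buffer_size, rng):
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            index = rng.integers(0, buffer_size)  # pick a random slot to emit
            yield buffer[index]
            buffer[index] = item  # replace the emitted item with the incoming one
    rng.shuffle(buffer)  # source exhausted: flush the remainder in random order
    yield from buffer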
def test_cycling_multi_sources_examples_iterable():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"text": "foo"})
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"text": "bar"})
ex_iterable = CyclingMultiSourcesExamplesIterable([ex_iterable1, ex_iterable2])
expected = list(chain(*zip(generate_examples_fn(text="foo"), generate_examples_fn(text="bar"))))
# The cycling stops as soon as one iterable is out of examples (here ex_iterable1), so the last sample from ex_iterable2 is unnecessary
expected = expected[:-1]
assert next(iter(ex_iterable)) == expected[0]
assert list(ex_iterable) == expected
assert all((x["id"], x["text"]) == (i // 2, "bar" if i % 2 else "foo") for i, (_, x) in enumerate(ex_iterable))
@pytest.mark.parametrize("probabilities", [None, (0.5, 0.5), (0.9, 0.1)])
def test_randomly_cycling_multi_sources_examples_iterable(probabilities):
seed = 42
generator = np.random.default_rng(seed)
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"text": "foo"})
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"text": "bar"})
ex_iterable = RandomlyCyclingMultiSourcesExamplesIterable(
[ex_iterable1, ex_iterable2], generator=generator, probabilities=probabilities
)
# The source is picked at random for each example. Iteration stops as soon as one of the iterators is exhausted.
rng = deepcopy(generator)
iterators = (generate_examples_fn(text="foo"), generate_examples_fn(text="bar"))
indices_iterator = RandomlyCyclingMultiSourcesExamplesIterable._iter_random_indices(
rng, len(iterators), p=probabilities
)
expected = []
lengths = [len(list(ex_iterable1)), len(list(ex_iterable2))]
for i in indices_iterator:
if lengths[0] == 0 or lengths[1] == 0:
break
for key, example in iterators[i]:
expected.append((key, example))
lengths[i] -= 1
break
else:
break
assert next(iter(ex_iterable)) == expected[0]
assert list(ex_iterable) == expected
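# Hypothetical helper mirroring the random cycling tested above (a simplified version:
# the reference loop in the test stops as soon as either source is empty): draw a source
# index per example (uniform when p is None) and stop when the drawn source is exhausted.
def _random_cycle_sketch(sources, rng, p=None):
    iterators = [iter(source) for source in sources]
    while True:
        i = rng.choice(len(iterators), p=p)
        try:
            yield next(iterators[i])
        except StopIteration:
            return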
@pytest.mark.parametrize(
"n, func, batched, batch_size",
[
(3, lambda x: {"id+1": x["id"] + 1}, False, None), # just add 1 to the id
(3, lambda x: {"id+1": [x["id"][0] + 1]}, True, 1), # same with bs=1
(5, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, 10), # same with bs=10
(25, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, 10), # same with bs=10
(5, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, None), # same with bs=None
(5, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, -1), # same with bs<=0
(3, lambda x: {k: v * 2 for k, v in x.items()}, True, 1), # make a duplicate of each example
],
)
def test_mapped_examples_iterable(n, func, batched, batch_size):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(base_ex_iterable, func, batched=batched, batch_size=batch_size)
all_examples = [x for _, x in generate_examples_fn(n=n)]
if batched is False:
expected = [{**x, **func(x)} for x in all_examples]
else:
# For batched map we have to format the examples as a batch (i.e. in one single dictionary) to pass the batch to the function
all_transformed_examples = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = _examples_to_batch(examples)
transformed_batch = func(batch)
all_transformed_examples.extend(_batch_to_examples(transformed_batch))
expected = _examples_to_batch(all_examples)
expected.update(_examples_to_batch(all_transformed_examples))
expected = list(_batch_to_examples(expected))
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
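# Tiny round-trip sketch of the batch helpers used throughout these tests (assuming the
# columnar convention: a batch is one dict mapping each column to an equal-length list).
_example_rows = [{"id": 0, "text": "a"}, {"id": 1, "text": "b"}]
_columnar_batch = {"id": [0, 1], "text": ["a", "b"]}
assert _examples_to_batch(_example_rows) == _columnar_batch
assert list(_batch_to_examples(_columnar_batch)) == _example_rows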
@pytest.mark.parametrize(
"n, func, batched, batch_size",
[
(3, lambda x: {"id+1": x["id"] + 1}, False, None), # just add 1 to the id
(3, lambda x: {"id+1": [x["id"][0] + 1]}, True, 1), # same with bs=1
(5, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, 10), # same with bs=10
(25, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, 10), # same with bs=10
(5, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, None), # same with bs=None
(5, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, -1), # same with bs<=0
(3, lambda x: {k: v * 2 for k, v in x.items()}, True, 1), # make a duplicate of each example
],
)
def test_mapped_examples_iterable_drop_last_batch(n, func, batched, batch_size):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable, func, batched=batched, batch_size=batch_size, drop_last_batch=True
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
is_empty = False
if batched is False:
# `drop_last_batch` has no effect here
expected = [{**x, **func(x)} for x in all_examples]
else:
# For batched map we have to format the examples as a batch (i.e. in one single dictionary) to pass the batch to the function
all_transformed_examples = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
if len(examples) < batch_size: # ignore last batch
break
batch = _examples_to_batch(examples)
transformed_batch = func(batch)
all_transformed_examples.extend(_batch_to_examples(transformed_batch))
all_examples = all_examples if n % batch_size == 0 else all_examples[: n // batch_size * batch_size]
if all_examples:
expected = _examples_to_batch(all_examples)
expected.update(_examples_to_batch(all_transformed_examples))
expected = list(_batch_to_examples(expected))
else:
is_empty = True
if not is_empty:
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
else:
with pytest.raises(StopIteration):
next(iter(ex_iterable))
@pytest.mark.parametrize(
"n, func, batched, batch_size",
[
(3, lambda x, index: {"id+idx": x["id"] + index}, False, None), # add the index to the id
(
25,
lambda x, indices: {"id+idx": [i + j for i, j in zip(x["id"], indices)]},
True,
10,
), # add the index to the id
(5, lambda x, indices: {"id+idx": [i + j for i, j in zip(x["id"], indices)]}, True, None), # same with bs=None
(5, lambda x, indices: {"id+idx": [i + j for i, j in zip(x["id"], indices)]}, True, -1), # same with bs<=0
],
)
def test_mapped_examples_iterable_with_indices(n, func, batched, batch_size):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable, func, batched=batched, batch_size=batch_size, with_indices=True
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
if batched is False:
expected = [{**x, **func(x, idx)} for idx, x in enumerate(all_examples)]
else:
# For batched map we have to format the examples as a batch (i.e. in one single dictionary) to pass the batch to the function
all_transformed_examples = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = _examples_to_batch(examples)
indices = list(range(batch_offset, batch_offset + len(examples)))
transformed_batch = func(batch, indices)
all_transformed_examples.extend(_batch_to_examples(transformed_batch))
expected = _examples_to_batch(all_examples)
expected.update(_examples_to_batch(all_transformed_examples))
expected = list(_batch_to_examples(expected))
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
@pytest.mark.parametrize(
"n, func, batched, batch_size, remove_columns",
[
(3, lambda x: {"id+1": x["id"] + 1}, False, None, ["extra_column"]), # just add 1 to the id
(25, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, 10, ["extra_column"]), # same with bs=10
(
50,
lambda x: {"foo": ["bar"] * np.random.default_rng(x["id"][0]).integers(0, 10)},
True,
8,
["extra_column", "id"],
), # vary the number of rows in the output batch
(5, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, None, ["extra_column"]), # same with bs=None
(5, lambda x: {"id+1": [i + 1 for i in x["id"]]}, True, -1, ["extra_column"]), # same with bs<=0
],
)
def test_mapped_examples_iterable_remove_columns(n, func, batched, batch_size, remove_columns):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n, "extra_column": "foo"})
ex_iterable = MappedExamplesIterable(
base_ex_iterable, func, batched=batched, batch_size=batch_size, remove_columns=remove_columns
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
columns_to_remove = remove_columns if isinstance(remove_columns, list) else [remove_columns]
if batched is False:
expected = [{**{k: v for k, v in x.items() if k not in columns_to_remove}, **func(x)} for x in all_examples]
else:
# For batched map we have to format the examples as a batch (i.e. in one single dictionary) to pass the batch to the function
all_transformed_examples = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = _examples_to_batch(examples)
transformed_batch = func(batch)
all_transformed_examples.extend(_batch_to_examples(transformed_batch))
expected = {k: v for k, v in _examples_to_batch(all_examples).items() if k not in columns_to_remove}
expected.update(_examples_to_batch(all_transformed_examples))
expected = list(_batch_to_examples(expected))
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
@pytest.mark.parametrize(
"n, func, batched, batch_size, fn_kwargs",
[
(3, lambda x, y=0: {"id+y": x["id"] + y}, False, None, None),
(3, lambda x, y=0: {"id+y": x["id"] + y}, False, None, {"y": 3}),
(25, lambda x, y=0: {"id+y": [i + y for i in x["id"]]}, True, 10, {"y": 3}),
(5, lambda x, y=0: {"id+y": [i + y for i in x["id"]]}, True, None, {"y": 3}), # same with bs=None
(5, lambda x, y=0: {"id+y": [i + y for i in x["id"]]}, True, -1, {"y": 3}), # same with bs<=0
],
)
def test_mapped_examples_iterable_fn_kwargs(n, func, batched, batch_size, fn_kwargs):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable, func, batched=batched, batch_size=batch_size, fn_kwargs=fn_kwargs
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
if fn_kwargs is None:
fn_kwargs = {}
if batched is False:
expected = [{**x, **func(x, **fn_kwargs)} for x in all_examples]
else:
        # For batched map we have to format the examples as a batch (i.e. as a single dictionary of lists) to pass to the function
all_transformed_examples = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = _examples_to_batch(examples)
transformed_batch = func(batch, **fn_kwargs)
all_transformed_examples.extend(_batch_to_examples(transformed_batch))
expected = _examples_to_batch(all_examples)
expected.update(_examples_to_batch(all_transformed_examples))
expected = list(_batch_to_examples(expected))
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
@pytest.mark.parametrize(
"n, func, batched, batch_size, input_columns",
[
(3, lambda id_: {"id+1": id_ + 1}, False, None, ["id"]), # just add 1 to the id
(25, lambda ids_: {"id+1": [i + 1 for i in ids_]}, True, 10, ["id"]), # same with bs=10
(5, lambda ids_: {"id+1": [i + 1 for i in ids_]}, True, None, ["id"]), # same with bs=None
(5, lambda ids_: {"id+1": [i + 1 for i in ids_]}, True, -1, ["id"]), # same with bs<=0
],
)
def test_mapped_examples_iterable_input_columns(n, func, batched, batch_size, input_columns):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable, func, batched=batched, batch_size=batch_size, input_columns=input_columns
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
columns_to_input = input_columns if isinstance(input_columns, list) else [input_columns]
if batched is False:
expected = [{**x, **func(*[x[col] for col in columns_to_input])} for x in all_examples]
else:
        # For batched map we have to format the examples as a batch (i.e. as a single dictionary of lists) to pass to the function
all_transformed_examples = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = _examples_to_batch(examples)
transformed_batch = func(*[batch[col] for col in columns_to_input])
all_transformed_examples.extend(_batch_to_examples(transformed_batch))
expected = _examples_to_batch(all_examples)
expected.update(_examples_to_batch(all_transformed_examples))
expected = list(_batch_to_examples(expected))
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
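# Note on the input_columns convention exercised above (as the reference loops
# assume): the mapped function receives the selected column values positionally
# instead of the full example, e.g. for input_columns=["id"]:
#   func(x["id"])       # unbatched: a single value
#   func(batch["id"])   # batched: a list of values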
@pytest.mark.parametrize(
"n, func, batched, batch_size",
[
(3, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), False, None), # just add 1 to the id
(3, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, 1), # same with bs=1
(5, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, 10), # same with bs=10
(25, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, 10), # same with bs=10
(5, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, None), # same with bs=None
(5, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, -1), # same with bs<=0
(3, lambda t: pa.concat_tables([t] * 2), True, 1), # make a duplicate of each example
],
)
def test_mapped_examples_iterable_arrow_format(n, func, batched, batch_size):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable,
func,
batched=batched,
batch_size=batch_size,
formatting=FormattingConfig(format_type="arrow"),
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
if batched is False:
expected = [func(pa.Table.from_pylist([x])).to_pylist()[0] for x in all_examples]
else:
expected = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = pa.Table.from_pylist(examples)
expected.extend(func(batch).to_pylist())
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
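# Hedged sketch of the arrow formatting contract assumed above: with
# FormattingConfig(format_type="arrow") the mapped function receives a
# pyarrow.Table and must return one. For instance, the "add 1 to the id" case
# can be reproduced outside the iterable like this:
def _sketch_arrow_map_add_one(examples):
    batch = pa.Table.from_pylist(examples)  # rows -> pyarrow.Table
    out = batch.append_column("id+1", pc.add(batch["id"], 1))
    return out.to_pylist()  # Table -> rows, now with the "id+1" column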
@pytest.mark.parametrize(
"n, func, batched, batch_size",
[
(3, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), False, None), # just add 1 to the id
(3, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, 1), # same with bs=1
(5, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, 10), # same with bs=10
(25, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, 10), # same with bs=10
(5, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, None), # same with bs=None
(5, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, -1), # same with bs<=0
(3, lambda t: pa.concat_tables([t] * 2), True, 1), # make a duplicate of each example
],
)
def test_mapped_examples_iterable_drop_last_batch_and_arrow_format(n, func, batched, batch_size):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable,
func,
batched=batched,
batch_size=batch_size,
drop_last_batch=True,
formatting=FormattingConfig(format_type="arrow"),
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
is_empty = False
if batched is False:
# `drop_last_batch` has no effect here
expected = [func(pa.Table.from_pylist([x])).to_pylist()[0] for x in all_examples]
else:
all_transformed_examples = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
if len(examples) < batch_size: # ignore last batch
break
batch = pa.Table.from_pylist(examples)
out = func(batch)
all_transformed_examples.extend(
out.to_pylist()
) # we don't merge with input since they're arrow tables and not dictionaries
all_examples = all_examples if n % batch_size == 0 else all_examples[: n // batch_size * batch_size]
if all_examples:
expected = all_transformed_examples
else:
is_empty = True
if not is_empty:
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
else:
with pytest.raises(StopIteration):
next(iter(ex_iterable))
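# Worked example for the truncation above: with n=25 and batch_size=10, only
# the first n // batch_size * batch_size = 20 examples survive, since the final
# partial batch of 5 is dropped; with n=5 and batch_size=10 every example is
# dropped and iterating raises StopIteration immediately.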
@pytest.mark.parametrize(
"n, func, batched, batch_size",
[
(
3,
lambda t, index: t.append_column("id+idx", pc.add(t["id"], index)),
False,
None,
), # add the index to the id
(
25,
lambda t, indices: t.append_column("id+idx", pc.add(t["id"], indices)),
True,
10,
), # add the index to the id
(5, lambda t, indices: t.append_column("id+idx", pc.add(t["id"], indices)), True, None), # same with bs=None
(5, lambda t, indices: t.append_column("id+idx", pc.add(t["id"], indices)), True, -1), # same with bs<=0
],
)
def test_mapped_examples_iterable_with_indices_and_arrow_format(n, func, batched, batch_size):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable,
func,
batched=batched,
batch_size=batch_size,
with_indices=True,
formatting=FormattingConfig(format_type="arrow"),
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
if batched is False:
expected = [func(pa.Table.from_pylist([x]), i).to_pylist()[0] for i, x in enumerate(all_examples)]
else:
expected = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = pa.Table.from_pylist(examples)
expected.extend(func(batch, list(range(batch_offset, batch_offset + len(batch)))).to_pylist())
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
@pytest.mark.parametrize(
"n, func, batched, batch_size, remove_columns",
[
(
3,
lambda t: t.append_column("id+1", pc.add(t["id"], 1)),
False,
None,
["extra_column"],
), # just add 1 to the id
(25, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, 10, ["extra_column"]), # same with bs=10
(
50,
lambda t: pa.table({"foo": ["bar"] * np.random.default_rng(t["id"][0].as_py()).integers(0, 10)}),
True,
8,
["extra_column", "id"],
        ), # output a random number of rows per batch
(5, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, None, ["extra_column"]), # same with bs=None
(5, lambda t: t.append_column("id+1", pc.add(t["id"], 1)), True, -1, ["extra_column"]), # same with bs<=0
],
)
def test_mapped_examples_iterable_remove_columns_arrow_format(n, func, batched, batch_size, remove_columns):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n, "extra_column": "foo"})
ex_iterable = MappedExamplesIterable(
base_ex_iterable,
func,
batched=batched,
batch_size=batch_size,
remove_columns=remove_columns,
formatting=FormattingConfig(format_type="arrow"),
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
columns_to_remove = remove_columns if isinstance(remove_columns, list) else [remove_columns]
if batched is False:
        expected = [
            {k: v for k, v in func(pa.Table.from_pylist([x])).to_pylist()[0].items() if k not in columns_to_remove}
            for x in all_examples
        ]
else:
expected = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = pa.Table.from_pylist(examples)
expected.extend(
[{k: v for k, v in x.items() if k not in columns_to_remove} for x in func(batch).to_pylist()]
)
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
@pytest.mark.parametrize(
"n, func, batched, batch_size, fn_kwargs",
[
        (3, lambda t, y=0: t.append_column("id+y", pc.add(t["id"], y)), False, None, None),
        (3, lambda t, y=0: t.append_column("id+y", pc.add(t["id"], y)), False, None, {"y": 3}),
        (25, lambda t, y=0: t.append_column("id+y", pc.add(t["id"], y)), True, 10, {"y": 3}),
        (5, lambda t, y=0: t.append_column("id+y", pc.add(t["id"], y)), True, None, {"y": 3}), # same with bs=None
        (5, lambda t, y=0: t.append_column("id+y", pc.add(t["id"], y)), True, -1, {"y": 3}), # same with bs<=0
],
)
def test_mapped_examples_iterable_fn_kwargs_and_arrow_format(n, func, batched, batch_size, fn_kwargs):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable,
func,
batched=batched,
batch_size=batch_size,
fn_kwargs=fn_kwargs,
formatting=FormattingConfig(format_type="arrow"),
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
if fn_kwargs is None:
fn_kwargs = {}
if batched is False:
expected = [func(pa.Table.from_pylist([x]), **fn_kwargs).to_pylist()[0] for x in all_examples]
else:
expected = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = pa.Table.from_pylist(examples)
expected.extend(func(batch, **fn_kwargs).to_pylist())
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
@pytest.mark.parametrize(
"n, func, batched, batch_size, input_columns",
[
(3, lambda id_: pa.table({"id+1": pc.add(id_, 1)}), False, None, ["id"]), # just add 1 to the id
(25, lambda ids_: pa.table({"id+1": pc.add(ids_, 1)}), True, 10, ["id"]), # same with bs=10
(5, lambda ids_: pa.table({"id+1": pc.add(ids_, 1)}), True, None, ["id"]), # same with bs=None
(5, lambda ids_: pa.table({"id+1": pc.add(ids_, 1)}), True, -1, ["id"]), # same with bs<=0
],
)
def test_mapped_examples_iterable_input_columns_and_arrow_format(n, func, batched, batch_size, input_columns):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = MappedExamplesIterable(
base_ex_iterable,
func,
batched=batched,
batch_size=batch_size,
input_columns=input_columns,
formatting=FormattingConfig(format_type="arrow"),
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
columns_to_input = input_columns if isinstance(input_columns, list) else [input_columns]
if batched is False:
expected = [
func(*[pa.Table.from_pylist([x])[col] for col in columns_to_input]).to_pylist()[0] for x in all_examples
]
else:
expected = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = pa.Table.from_pylist(examples)
expected.extend(func(*[batch[col] for col in columns_to_input]).to_pylist())
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
@pytest.mark.parametrize(
"n, func, batched, batch_size",
[
        (3, lambda x: x["id"] % 2 == 0, False, None), # keep even ids
(3, lambda x: [x["id"][0] % 2 == 0], True, 1), # same with bs=1
(25, lambda x: [i % 2 == 0 for i in x["id"]], True, 10), # same with bs=10
(5, lambda x: [i % 2 == 0 for i in x["id"]], True, None), # same with bs=None
(5, lambda x: [i % 2 == 0 for i in x["id"]], True, -1), # same with bs<=0
(3, lambda x: False, False, None), # return 0 examples
(3, lambda x: [False] * len(x["id"]), True, 10), # same with bs=10
],
)
def test_filtered_examples_iterable(n, func, batched, batch_size):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = FilteredExamplesIterable(base_ex_iterable, func, batched=batched, batch_size=batch_size)
all_examples = [x for _, x in generate_examples_fn(n=n)]
if batched is False:
expected = [x for x in all_examples if func(x)]
else:
        # For batched filter we have to format the examples as a batch (i.e. as a single dictionary of lists) to pass to the function
expected = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = _examples_to_batch(examples)
mask = func(batch)
expected.extend([x for x, to_keep in zip(examples, mask) if to_keep])
if expected:
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
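# Hedged sketch of the batched filter contract checked above: the function
# returns one boolean per row of the batch, and rows with a truthy mask entry
# are kept.
def _sketch_batched_filter(examples, mask_func):
    mask = mask_func(_examples_to_batch(examples))
    return [example for example, keep in zip(examples, mask) if keep]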
@pytest.mark.parametrize(
"n, func, batched, batch_size",
[
        (3, lambda x, index: index % 2 == 0, False, None), # keep even indices
(25, lambda x, indices: [idx % 2 == 0 for idx in indices], True, 10), # same with bs=10
(5, lambda x, indices: [idx % 2 == 0 for idx in indices], True, None), # same with bs=None
(5, lambda x, indices: [idx % 2 == 0 for idx in indices], True, -1), # same with bs<=0
],
)
def test_filtered_examples_iterable_with_indices(n, func, batched, batch_size):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = FilteredExamplesIterable(
base_ex_iterable, func, batched=batched, batch_size=batch_size, with_indices=True
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
if batched is False:
expected = [x for idx, x in enumerate(all_examples) if func(x, idx)]
else:
        # For batched filter we have to format the examples as a batch (i.e. as a single dictionary of lists) to pass to the function
expected = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = _examples_to_batch(examples)
indices = list(range(batch_offset, batch_offset + len(examples)))
mask = func(batch, indices)
expected.extend([x for x, to_keep in zip(examples, mask) if to_keep])
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
@pytest.mark.parametrize(
"n, func, batched, batch_size, input_columns",
[
        (3, lambda id_: id_ % 2 == 0, False, None, ["id"]), # keep even ids
        (25, lambda ids_: [i % 2 == 0 for i in ids_], True, 10, ["id"]), # same with bs=10
        (3, lambda ids_: [i % 2 == 0 for i in ids_], True, None, ["id"]), # same with bs=None
        (3, lambda ids_: [i % 2 == 0 for i in ids_], True, -1, ["id"]), # same with bs<=0
],
)
def test_filtered_examples_iterable_input_columns(n, func, batched, batch_size, input_columns):
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n})
ex_iterable = FilteredExamplesIterable(
base_ex_iterable, func, batched=batched, batch_size=batch_size, input_columns=input_columns
)
all_examples = [x for _, x in generate_examples_fn(n=n)]
columns_to_input = input_columns if isinstance(input_columns, list) else [input_columns]
if batched is False:
expected = [x for x in all_examples if func(*[x[col] for col in columns_to_input])]
else:
        # For batched filter we have to format the examples as a batch (i.e. as a single dictionary of lists) to pass to the function
expected = []
# If batch_size is None or <=0, we use the whole dataset as a single batch
if batch_size is None or batch_size <= 0:
batch_size = len(all_examples)
for batch_offset in range(0, len(all_examples), batch_size):
examples = all_examples[batch_offset : batch_offset + batch_size]
batch = _examples_to_batch(examples)
mask = func(*[batch[col] for col in columns_to_input])
expected.extend([x for x, to_keep in zip(examples, mask) if to_keep])
assert next(iter(ex_iterable))[1] == expected[0]
assert [x for _, x in ex_iterable] == expected
def test_skip_examples_iterable():
total, count = 10, 2
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": total})
skip_ex_iterable = SkipExamplesIterable(base_ex_iterable, n=count)
expected = list(generate_examples_fn(n=total))[count:]
assert list(skip_ex_iterable) == expected
assert (
skip_ex_iterable.shuffle_data_sources(np.random.default_rng(42)) is skip_ex_iterable
), "skip examples makes the shards order fixed"
def test_take_examples_iterable():
total, count = 10, 2
base_ex_iterable = ExamplesIterable(generate_examples_fn, {"n": total})
take_ex_iterable = TakeExamplesIterable(base_ex_iterable, n=count)
expected = list(generate_examples_fn(n=total))[:count]
assert list(take_ex_iterable) == expected
    assert (
        take_ex_iterable.shuffle_data_sources(np.random.default_rng(42)) is take_ex_iterable
    ), "take examples makes the shards order fixed"
def test_vertically_concatenated_examples_iterable():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label": 10})
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"label": 5})
concatenated_ex_iterable = VerticallyConcatenatedMultiSourcesExamplesIterable([ex_iterable1, ex_iterable2])
expected = [x for _, x in ex_iterable1] + [x for _, x in ex_iterable2]
assert [x for _, x in concatenated_ex_iterable] == expected
def test_vertically_concatenated_examples_iterable_with_different_columns():
    # having different columns is supported, though iterable datasets
    # fill the missing data with nulls
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label": 10})
ex_iterable2 = ExamplesIterable(generate_examples_fn, {})
concatenated_ex_iterable = VerticallyConcatenatedMultiSourcesExamplesIterable([ex_iterable1, ex_iterable2])
expected = [x for _, x in ex_iterable1] + [x for _, x in ex_iterable2]
assert [x for _, x in concatenated_ex_iterable] == expected
def test_vertically_concatenated_examples_iterable_shuffle_data_sources():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label": 10})
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"label": 5})
concatenated_ex_iterable = VerticallyConcatenatedMultiSourcesExamplesIterable([ex_iterable1, ex_iterable2])
rng = np.random.default_rng(42)
shuffled_ex_iterable = concatenated_ex_iterable.shuffle_data_sources(rng)
# make sure the list of examples iterables is shuffled, and each examples iterable is shuffled
expected = [x for _, x in ex_iterable2.shuffle_data_sources(rng)] + [
x for _, x in ex_iterable1.shuffle_data_sources(rng)
]
assert [x for _, x in shuffled_ex_iterable] == expected
def test_horizontally_concatenated_examples_iterable():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label1": 10})
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"label2": 5})
concatenated_ex_iterable = HorizontallyConcatenatedMultiSourcesExamplesIterable([ex_iterable1, ex_iterable2])
with pytest.raises(ValueError): # column "id" is duplicated -> raise an error
list(concatenated_ex_iterable)
ex_iterable2 = MappedExamplesIterable(ex_iterable2, lambda x: x, remove_columns=["id"])
concatenated_ex_iterable = HorizontallyConcatenatedMultiSourcesExamplesIterable([ex_iterable1, ex_iterable2])
expected = [{**x, **y} for (_, x), (_, y) in zip(ex_iterable1, ex_iterable2)]
assert [x for _, x in concatenated_ex_iterable] == expected
assert (
concatenated_ex_iterable.shuffle_data_sources(np.random.default_rng(42)) is concatenated_ex_iterable
), "horizontally concatenated examples makes the shards order fixed"
@pytest.mark.parametrize(
"ex_iterable",
[
ExamplesIterable(generate_examples_fn, {}),
ShuffledDataSourcesExamplesIterable(generate_examples_fn, {}, np.random.default_rng(42)),
SelectColumnsIterable(ExamplesIterable(generate_examples_fn, {}), ["id"]),
StepExamplesIterable(ExamplesIterable(generate_examples_fn, {}), 2, 0),
CyclingMultiSourcesExamplesIterable([ExamplesIterable(generate_examples_fn, {})]),
VerticallyConcatenatedMultiSourcesExamplesIterable([ExamplesIterable(generate_examples_fn, {})]),
HorizontallyConcatenatedMultiSourcesExamplesIterable([ExamplesIterable(generate_examples_fn, {})]),
RandomlyCyclingMultiSourcesExamplesIterable(
[ExamplesIterable(generate_examples_fn, {})], np.random.default_rng(42)
),
MappedExamplesIterable(ExamplesIterable(generate_examples_fn, {}), lambda x: x),
MappedExamplesIterable(ArrowExamplesIterable(generate_tables_fn, {}), lambda x: x),
FilteredExamplesIterable(ExamplesIterable(generate_examples_fn, {}), lambda x: True),
FilteredExamplesIterable(ArrowExamplesIterable(generate_tables_fn, {}), lambda x: True),
BufferShuffledExamplesIterable(ExamplesIterable(generate_examples_fn, {}), 10, np.random.default_rng(42)),
SkipExamplesIterable(ExamplesIterable(generate_examples_fn, {}), 10),
TakeExamplesIterable(ExamplesIterable(generate_examples_fn, {}), 10),
TypedExamplesIterable(
ExamplesIterable(generate_examples_fn, {}), Features({"id": Value("int32")}), token_per_repo_id={}
),
],
)
def test_no_iter_arrow(ex_iterable: _BaseExamplesIterable):
assert ex_iterable.iter_arrow is None
@pytest.mark.parametrize(
"ex_iterable",
[
ArrowExamplesIterable(generate_tables_fn, {}),
ShuffledDataSourcesArrowExamplesIterable(generate_tables_fn, {}, np.random.default_rng(42)),
SelectColumnsIterable(ArrowExamplesIterable(generate_tables_fn, {}), ["id"]),
# StepExamplesIterable(ArrowExamplesIterable(generate_tables_fn, {}), 2, 0), # not implemented
# CyclingMultiSourcesExamplesIterable([ArrowExamplesIterable(generate_tables_fn, {})]), # not implemented
VerticallyConcatenatedMultiSourcesExamplesIterable([ArrowExamplesIterable(generate_tables_fn, {})]),
# HorizontallyConcatenatedMultiSourcesExamplesIterable([ArrowExamplesIterable(generate_tables_fn, {})]), # not implemented
# RandomlyCyclingMultiSourcesExamplesIterable([ArrowExamplesIterable(generate_tables_fn, {})], np.random.default_rng(42)), # not implemented
MappedExamplesIterable(
ExamplesIterable(generate_examples_fn, {}), lambda t: t, formatting=FormattingConfig(format_type="arrow")
),
MappedExamplesIterable(
ArrowExamplesIterable(generate_tables_fn, {}),
lambda t: t,
formatting=FormattingConfig(format_type="arrow"),
),
FilteredExamplesIterable(
ExamplesIterable(generate_examples_fn, {}),
lambda t: True,
formatting=FormattingConfig(format_type="arrow"),
),
FilteredExamplesIterable(
ArrowExamplesIterable(generate_tables_fn, {}),
lambda t: True,
formatting=FormattingConfig(format_type="arrow"),
),
# BufferShuffledExamplesIterable(ArrowExamplesIterable(generate_tables_fn, {}), 10, np.random.default_rng(42)), # not implemented
# SkipExamplesIterable(ArrowExamplesIterable(generate_tables_fn, {}), 10), # not implemented
# TakeExamplesIterable(ArrowExamplesIterable(generate_tables_fn, {}), 10), # not implemented
TypedExamplesIterable(
ArrowExamplesIterable(generate_tables_fn, {}), Features({"id": Value("int32")}), token_per_repo_id={}
),
],
)
def test_iter_arrow(ex_iterable: _BaseExamplesIterable):
assert ex_iterable.iter_arrow is not None
key, pa_table = next(ex_iterable.iter_arrow())
assert isinstance(pa_table, pa.Table)
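# Hedged summary of the protocol covered by the two tests above: `iter_arrow`
# is None for iterables that can only yield python rows, and otherwise yields
# (key, pyarrow.Table) pairs so downstream steps can keep working on Arrow
# tables directly.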
############################
#
# IterableDataset tests
#
############################
def test_iterable_dataset():
dataset = IterableDataset(ExamplesIterable(generate_examples_fn, {}))
expected = [x for _, x in generate_examples_fn()]
assert next(iter(dataset)) == expected[0]
assert list(dataset) == expected
def test_iterable_dataset_from_generator():
data = [
{"col_1": "0", "col_2": 0, "col_3": 0.0},
{"col_1": "1", "col_2": 1, "col_3": 1.0},
{"col_1": "2", "col_2": 2, "col_3": 2.0},
{"col_1": "3", "col_2": 3, "col_3": 3.0},
]
def gen():
yield from data
dataset = IterableDataset.from_generator(gen)
assert isinstance(dataset, IterableDataset)
assert list(dataset) == data
def test_iterable_dataset_from_generator_with_shards():
def gen(shard_names):
for shard_name in shard_names:
for i in range(10):
yield {"shard_name": shard_name, "i": i}
shard_names = [f"data{shard_idx}.txt" for shard_idx in range(4)]
dataset = IterableDataset.from_generator(gen, gen_kwargs={"shard_names": shard_names})
assert isinstance(dataset, IterableDataset)
assert dataset.n_shards == len(shard_names)
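# Hedged note on the sharding convention assumed above: list-valued gen_kwargs
# (like `shard_names`) act as shards, so n_shards follows the list length, e.g.:
#   ds = IterableDataset.from_generator(gen, gen_kwargs={"shard_names": ["a.txt", "b.txt"]})
#   ds.n_shards  # -> 2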
def test_iterable_dataset_from_file(dataset: IterableDataset, arrow_file: str):
with assert_arrow_memory_doesnt_increase():
dataset_from_file = IterableDataset.from_file(arrow_file)
expected_features = dataset._resolve_features().features
assert dataset_from_file.features.type == expected_features.type
assert dataset_from_file.features == expected_features
assert isinstance(dataset_from_file, IterableDataset)
assert list(dataset_from_file) == list(dataset)
@require_not_windows
@require_dill_gt_0_3_2
@require_pyspark
def test_from_spark_streaming():
import pyspark
spark = pyspark.sql.SparkSession.builder.master("local[*]").appName("pyspark").getOrCreate()
data = [
("0", 0, 0.0),
("1", 1, 1.0),
("2", 2, 2.0),
("3", 3, 3.0),
]
df = spark.createDataFrame(data, "col_1: string, col_2: int, col_3: float")
dataset = IterableDataset.from_spark(df)
assert isinstance(dataset, IterableDataset)
results = []
for ex in dataset:
results.append(ex)
assert results == [
{"col_1": "0", "col_2": 0, "col_3": 0.0},
{"col_1": "1", "col_2": 1, "col_3": 1.0},
{"col_1": "2", "col_2": 2, "col_3": 2.0},
{"col_1": "3", "col_2": 3, "col_3": 3.0},
]
@require_not_windows
@require_dill_gt_0_3_2
@require_pyspark
def test_from_spark_streaming_features():
import PIL.Image
import pyspark
spark = pyspark.sql.SparkSession.builder.master("local[*]").appName("pyspark").getOrCreate()
data = [(0, np.arange(4 * 4 * 3).reshape(4, 4, 3).tolist())]
df = spark.createDataFrame(data, "idx: int, image: array<array<array<int>>>")
features = Features({"idx": Value("int64"), "image": Image()})
dataset = IterableDataset.from_spark(
df,
features=features,
)
assert isinstance(dataset, IterableDataset)
results = []
for ex in dataset:
results.append(ex)
assert len(results) == 1
    assert isinstance(results[0]["image"], PIL.Image.Image)
@require_torch
def test_iterable_dataset_torch_integration():
ex_iterable = ExamplesIterable(generate_examples_fn, {})
dataset = IterableDataset(ex_iterable)
import torch.utils.data
assert isinstance(dataset, torch.utils.data.IterableDataset)
assert isinstance(dataset, IterableDataset)
assert dataset._ex_iterable is ex_iterable
@require_torch
def test_iterable_dataset_torch_picklable():
import pickle
ex_iterable = ExamplesIterable(generate_examples_fn, {})
dataset = IterableDataset(ex_iterable, formatting=FormattingConfig(format_type="torch"))
reloaded_dataset = pickle.loads(pickle.dumps(dataset))
import torch.utils.data
assert isinstance(reloaded_dataset, IterableDataset)
assert isinstance(reloaded_dataset, torch.utils.data.IterableDataset)
assert reloaded_dataset._formatting.format_type == "torch"
assert len(list(dataset)) == len(list(reloaded_dataset))
@require_torch
def test_iterable_dataset_with_format_torch():
ex_iterable = ExamplesIterable(generate_examples_fn, {})
dataset = IterableDataset(ex_iterable)
from torch.utils.data import DataLoader
dataloader = DataLoader(dataset)
assert len(list(dataloader)) == len(list(ex_iterable))
@require_torch
def test_iterable_dataset_torch_dataloader_parallel():
from torch.utils.data import DataLoader
ex_iterable = ExamplesIterable(generate_examples_fn, {})
dataset = IterableDataset(ex_iterable)
dataloader = DataLoader(dataset, num_workers=2, batch_size=None)
result = list(dataloader)
expected = [example for _, example in ex_iterable]
assert len(result) == len(expected)
assert {str(x) for x in result} == {str(x) for x in expected}
@require_torch
@pytest.mark.filterwarnings("ignore:This DataLoader will create:UserWarning")
@pytest.mark.parametrize("n_shards, num_workers", [(2, 1), (2, 2), (3, 2), (2, 3)])
def test_sharded_iterable_dataset_torch_dataloader_parallel(n_shards, num_workers):
from torch.utils.data import DataLoader
ex_iterable = ExamplesIterable(generate_examples_fn, {"filepaths": [f"{i}.txt" for i in range(n_shards)]})
dataset = IterableDataset(ex_iterable)
dataloader = DataLoader(dataset, batch_size=None, num_workers=num_workers)
result = list(dataloader)
expected = [example for _, example in ex_iterable]
assert len(result) == len(expected)
assert {str(x) for x in result} == {str(x) for x in expected}
@require_torch
@pytest.mark.integration
@pytest.mark.parametrize("num_workers", [1, 2])
def test_iterable_dataset_from_hub_torch_dataloader_parallel(num_workers, tmp_path):
from torch.utils.data import DataLoader
dataset = load_dataset(SAMPLE_DATASET_IDENTIFIER, cache_dir=str(tmp_path), streaming=True, split="train")
dataloader = DataLoader(dataset, batch_size=None, num_workers=num_workers)
result = list(dataloader)
assert len(result) == 2
@pytest.mark.parametrize("batch_size", [4, 5])
@pytest.mark.parametrize("drop_last_batch", [False, True])
def test_iterable_dataset_iter_batch(batch_size, drop_last_batch):
n = 25
dataset = IterableDataset(ExamplesIterable(generate_examples_fn, {"n": n}))
all_examples = [ex for _, ex in generate_examples_fn(n=n)]
expected = []
for i in range(0, len(all_examples), batch_size):
if len(all_examples[i : i + batch_size]) < batch_size and drop_last_batch:
continue
expected.append(_examples_to_batch(all_examples[i : i + batch_size]))
assert next(iter(dataset.iter(batch_size, drop_last_batch=drop_last_batch))) == expected[0]
assert list(dataset.iter(batch_size, drop_last_batch=drop_last_batch)) == expected
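# Worked example for the expectation above (n=25): with batch_size=4 there are
# 6 full batches plus a final batch of size 1, which drop_last_batch=True
# removes; with batch_size=5 all 5 batches are full, so drop_last_batch is a
# no-op.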
def test_iterable_dataset_info():
info = DatasetInfo(description="desc", citation="@article{}", size_in_bytes=42)
ex_iterable = ExamplesIterable(generate_examples_fn, {})
dataset = IterableDataset(ex_iterable, info=info)
assert dataset.info == info
assert dataset.description == info.description
assert dataset.citation == info.citation
assert dataset.size_in_bytes == info.size_in_bytes
def test_iterable_dataset_set_epoch(dataset: IterableDataset):
assert dataset._epoch == 0
dataset.set_epoch(42)
assert dataset._epoch == 42
@pytest.mark.parametrize("seed", [None, 42, 1337])
@pytest.mark.parametrize("epoch", [None, 0, 1, 10])
def test_iterable_dataset_set_epoch_of_shuffled_dataset(dataset: IterableDataset, seed, epoch):
buffer_size = 10
shuffled_dataset = dataset.shuffle(seed, buffer_size=buffer_size)
base_generator = shuffled_dataset._shuffling.generator
if epoch is not None:
shuffled_dataset.set_epoch(epoch)
effective_generator = shuffled_dataset._effective_generator()
assert effective_generator is not None
if epoch is None or epoch == 0:
assert is_rng_equal(base_generator, shuffled_dataset._effective_generator())
else:
assert not is_rng_equal(base_generator, shuffled_dataset._effective_generator())
effective_seed = deepcopy(base_generator).integers(0, 1 << 63) - epoch
assert is_rng_equal(np.random.default_rng(effective_seed), shuffled_dataset._effective_generator())
def test_iterable_dataset_map(
dataset: IterableDataset,
):
func = lambda x: {"id+1": x["id"] + 1} # noqa: E731
mapped_dataset = dataset.map(func)
assert isinstance(mapped_dataset._ex_iterable, MappedExamplesIterable)
assert mapped_dataset._ex_iterable.function is func
assert mapped_dataset._ex_iterable.batched is False
assert next(iter(mapped_dataset)) == {**next(iter(dataset)), **func(next(iter(generate_examples_fn()))[1])}
def test_iterable_dataset_map_batched(
dataset: IterableDataset,
):
func = lambda x: {"id+1": [i + 1 for i in x["id"]]} # noqa: E731
batch_size = 3
dataset = dataset.map(func, batched=True, batch_size=batch_size)
assert isinstance(dataset._ex_iterable, MappedExamplesIterable)
assert dataset._ex_iterable.function is func
assert dataset._ex_iterable.batch_size == batch_size
assert next(iter(dataset)) == {"id": 0, "id+1": 1}
def test_iterable_dataset_map_complex_features(
dataset: IterableDataset,
):
# https://github.com/huggingface/datasets/issues/3505
ex_iterable = ExamplesIterable(generate_examples_fn, {"label": "positive"})
features = Features(
{
"id": Value("int64"),
"label": Value("string"),
}
)
dataset = IterableDataset(ex_iterable, info=DatasetInfo(features=features))
dataset = dataset.cast_column("label", ClassLabel(names=["negative", "positive"]))
dataset = dataset.map(lambda x: {"id+1": x["id"] + 1, **x})
assert isinstance(dataset._ex_iterable, MappedExamplesIterable)
features["label"] = ClassLabel(names=["negative", "positive"])
assert [{k: v for k, v in ex.items() if k != "id+1"} for ex in dataset] == [
features.encode_example(ex) for _, ex in ex_iterable
]
def test_iterable_dataset_map_with_features(dataset: IterableDataset) -> None:
# https://github.com/huggingface/datasets/issues/3888
ex_iterable = ExamplesIterable(generate_examples_fn, {"label": "positive"})
features_before_map = Features(
{
"id": Value("int64"),
"label": Value("string"),
}
)
dataset = IterableDataset(ex_iterable, info=DatasetInfo(features=features_before_map))
assert dataset.info.features is not None
assert dataset.info.features == features_before_map
features_after_map = Features(
{
"id": Value("int64"),
"label": Value("string"),
"target": Value("string"),
}
)
dataset = dataset.map(lambda x: {"target": x["label"]}, features=features_after_map)
assert dataset.info.features is not None
assert dataset.info.features == features_after_map
def test_iterable_dataset_map_with_fn_kwargs(dataset: IterableDataset) -> None:
fn_kwargs = {"y": 1}
mapped_dataset = dataset.map(lambda x, y: {"id+y": x["id"] + y}, fn_kwargs=fn_kwargs)
assert mapped_dataset._ex_iterable.batched is False
assert next(iter(mapped_dataset)) == {"id": 0, "id+y": 1}
batch_size = 3
mapped_dataset = dataset.map(
lambda x, y: {"id+y": [i + y for i in x["id"]]}, batched=True, batch_size=batch_size, fn_kwargs=fn_kwargs
)
assert isinstance(mapped_dataset._ex_iterable, MappedExamplesIterable)
assert mapped_dataset._ex_iterable.batch_size == batch_size
assert next(iter(mapped_dataset)) == {"id": 0, "id+y": 1}
def test_iterable_dataset_filter(dataset: IterableDataset) -> None:
fn_kwargs = {"y": 1}
filtered_dataset = dataset.filter(lambda x, y: x["id"] == y, fn_kwargs=fn_kwargs)
assert filtered_dataset._ex_iterable.batched is False
assert next(iter(filtered_dataset)) == {"id": 1}
@pytest.mark.parametrize("seed", [42, 1337, 101010, 123456])
@pytest.mark.parametrize("epoch", [None, 0, 1])
def test_iterable_dataset_shuffle(dataset: IterableDataset, seed, epoch):
buffer_size = 3
dataset = deepcopy(dataset)
dataset._ex_iterable.kwargs["filepaths"] = ["0.txt", "1.txt"]
dataset = dataset.shuffle(seed, buffer_size=buffer_size)
assert isinstance(dataset._shuffling, ShufflingConfig)
assert isinstance(dataset._shuffling.generator, np.random.Generator)
assert is_rng_equal(dataset._shuffling.generator, np.random.default_rng(seed))
    # The effective seed combines seed and epoch: a seed drawn from the base generator, minus the epoch
if epoch is None or epoch == 0:
effective_seed = seed
else:
dataset.set_epoch(epoch)
effective_seed = np.random.default_rng(seed).integers(0, 1 << 63) - epoch
# Shuffling adds a shuffle buffer
expected_first_example_index = next(
iter(BufferShuffledExamplesIterable._iter_random_indices(np.random.default_rng(effective_seed), buffer_size))
)
assert isinstance(dataset._ex_iterable, BufferShuffledExamplesIterable)
# It also shuffles the underlying examples iterable
expected_ex_iterable = ExamplesIterable(
generate_examples_fn, {"filepaths": ["0.txt", "1.txt"]}
).shuffle_data_sources(np.random.default_rng(effective_seed))
assert isinstance(dataset._ex_iterable.ex_iterable, ExamplesIterable)
assert next(iter(dataset)) == list(islice(expected_ex_iterable, expected_first_example_index + 1))[-1][1]
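# Hedged description of the buffer shuffling checked above: roughly, a buffer
# of `buffer_size` examples is filled first, then each incoming example is
# swapped with a randomly chosen slot, so the first yielded example is fully
# determined by the first index drawn from the effective generator.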
@pytest.mark.parametrize(
"features",
[
None,
Features(
{
"id": Value("int64"),
"label": Value("int64"),
}
),
Features(
{
"id": Value("int64"),
"label": ClassLabel(names=["negative", "positive"]),
}
),
],
)
def test_iterable_dataset_features(features):
ex_iterable = ExamplesIterable(generate_examples_fn, {"label": 0})
dataset = IterableDataset(ex_iterable, info=DatasetInfo(features=features))
if features:
expected = [features.encode_example(x) for _, x in ex_iterable]
else:
expected = [x for _, x in ex_iterable]
assert list(dataset) == expected
def test_iterable_dataset_features_cast_to_python():
ex_iterable = ExamplesIterable(
generate_examples_fn, {"timestamp": pd.Timestamp(2020, 1, 1), "array": np.ones(5), "n": 1}
)
features = Features(
{
"id": Value("int64"),
"timestamp": Value("timestamp[us]"),
"array": [Value("int64")],
}
)
dataset = IterableDataset(ex_iterable, info=DatasetInfo(features=features))
assert list(dataset) == [{"timestamp": pd.Timestamp(2020, 1, 1).to_pydatetime(), "array": [1] * 5, "id": 0}]
@pytest.mark.parametrize("format_type", [None, "torch", "python", "tf", "tensorflow", "np", "numpy", "jax"])
def test_iterable_dataset_with_format(dataset: IterableDataset, format_type):
formatted_dataset = dataset.with_format(format_type)
assert formatted_dataset._formatting.format_type == get_format_type_from_alias(format_type)
@require_torch
def test_iterable_dataset_is_torch_iterable_dataset(dataset: IterableDataset):
from torch.utils.data import DataLoader, _DatasetKind
dataloader = DataLoader(dataset)
assert dataloader._dataset_kind == _DatasetKind.Iterable
out = list(dataloader)
assert len(out) == DEFAULT_N_EXAMPLES
@pytest.mark.parametrize("n", [0, 2, int(1e10)])
def test_iterable_dataset_skip(dataset: IterableDataset, n):
skip_dataset = dataset.skip(n)
assert isinstance(skip_dataset._ex_iterable, SkipExamplesIterable)
assert skip_dataset._ex_iterable.n == n
assert list(skip_dataset) == list(dataset)[n:]
@pytest.mark.parametrize("n", [0, 2, int(1e10)])
def test_iterable_dataset_take(dataset: IterableDataset, n):
take_dataset = dataset.take(n)
assert isinstance(take_dataset._ex_iterable, TakeExamplesIterable)
assert take_dataset._ex_iterable.n == n
assert list(take_dataset) == list(dataset)[:n]
@pytest.mark.parametrize("method", ["skip", "take"])
def test_iterable_dataset_shuffle_after_skip_or_take(method):
seed = 42
n, n_shards = 3, 10
count = 7
ex_iterable = ExamplesIterable(generate_examples_fn, {"n": n, "filepaths": [f"{i}.txt" for i in range(n_shards)]})
dataset = IterableDataset(ex_iterable)
dataset = dataset.skip(n) if method == "skip" else dataset.take(count)
shuffled_dataset = dataset.shuffle(seed, buffer_size=DEFAULT_N_EXAMPLES)
    # shuffling a skip/take dataset should keep the same examples and not shuffle the shards
key = lambda x: f"{x['filepath']}_{x['id']}" # noqa: E731
assert sorted(dataset, key=key) == sorted(shuffled_dataset, key=key)
def test_iterable_dataset_add_column(dataset_with_several_columns):
new_column = list(range(DEFAULT_N_EXAMPLES))
new_dataset = dataset_with_several_columns.add_column("new_column", new_column)
assert list(new_dataset) == [
{**example, "new_column": idx} for idx, example in enumerate(dataset_with_several_columns)
]
new_dataset = new_dataset._resolve_features()
assert "new_column" in new_dataset.column_names
def test_iterable_dataset_rename_column(dataset_with_several_columns):
new_dataset = dataset_with_several_columns.rename_column("id", "new_id")
assert list(new_dataset) == [
{("new_id" if k == "id" else k): v for k, v in example.items()} for example in dataset_with_several_columns
]
assert new_dataset.features is None
assert new_dataset.column_names is None
# rename the column if ds.features was not None
new_dataset = dataset_with_several_columns._resolve_features().rename_column("id", "new_id")
assert new_dataset.features is not None
assert new_dataset.column_names is not None
assert "id" not in new_dataset.column_names
assert "new_id" in new_dataset.column_names
def test_iterable_dataset_rename_columns(dataset_with_several_columns):
column_mapping = {"id": "new_id", "filepath": "filename"}
new_dataset = dataset_with_several_columns.rename_columns(column_mapping)
assert list(new_dataset) == [
{column_mapping.get(k, k): v for k, v in example.items()} for example in dataset_with_several_columns
]
assert new_dataset.features is None
assert new_dataset.column_names is None
# rename the columns if ds.features was not None
new_dataset = dataset_with_several_columns._resolve_features().rename_columns(column_mapping)
assert new_dataset.features is not None
assert new_dataset.column_names is not None
assert all(c not in new_dataset.column_names for c in ["id", "filepath"])
assert all(c in new_dataset.column_names for c in ["new_id", "filename"])
def test_iterable_dataset_remove_columns(dataset_with_several_columns):
new_dataset = dataset_with_several_columns.remove_columns("id")
assert list(new_dataset) == [
{k: v for k, v in example.items() if k != "id"} for example in dataset_with_several_columns
]
assert new_dataset.features is None
new_dataset = dataset_with_several_columns.remove_columns(["id", "filepath"])
assert list(new_dataset) == [
{k: v for k, v in example.items() if k != "id" and k != "filepath"} for example in dataset_with_several_columns
]
assert new_dataset.features is None
assert new_dataset.column_names is None
# remove the columns if ds.features was not None
new_dataset = dataset_with_several_columns._resolve_features().remove_columns(["id", "filepath"])
assert new_dataset.features is not None
assert new_dataset.column_names is not None
assert all(c not in new_dataset.features for c in ["id", "filepath"])
assert all(c not in new_dataset.column_names for c in ["id", "filepath"])
def test_iterable_dataset_select_columns(dataset_with_several_columns):
new_dataset = dataset_with_several_columns.select_columns("id")
assert list(new_dataset) == [
{k: v for k, v in example.items() if k == "id"} for example in dataset_with_several_columns
]
assert new_dataset.features is None
new_dataset = dataset_with_several_columns.select_columns(["id", "filepath"])
assert list(new_dataset) == [
{k: v for k, v in example.items() if k in ("id", "filepath")} for example in dataset_with_several_columns
]
assert new_dataset.features is None
# select the columns if ds.features was not None
new_dataset = dataset_with_several_columns._resolve_features().select_columns(["id", "filepath"])
assert new_dataset.features is not None
assert new_dataset.column_names is not None
assert all(c in new_dataset.features for c in ["id", "filepath"])
assert all(c in new_dataset.column_names for c in ["id", "filepath"])
def test_iterable_dataset_cast_column():
ex_iterable = ExamplesIterable(generate_examples_fn, {"label": 10})
features = Features({"id": Value("int64"), "label": Value("int64")})
dataset = IterableDataset(ex_iterable, info=DatasetInfo(features=features))
casted_dataset = dataset.cast_column("label", Value("bool"))
casted_features = features.copy()
casted_features["label"] = Value("bool")
assert list(casted_dataset) == [casted_features.encode_example(ex) for _, ex in ex_iterable]
def test_iterable_dataset_cast():
ex_iterable = ExamplesIterable(generate_examples_fn, {"label": 10})
features = Features({"id": Value("int64"), "label": Value("int64")})
dataset = IterableDataset(ex_iterable, info=DatasetInfo(features=features))
new_features = Features({"id": Value("int64"), "label": Value("bool")})
casted_dataset = dataset.cast(new_features)
assert list(casted_dataset) == [new_features.encode_example(ex) for _, ex in ex_iterable]
def test_iterable_dataset_resolve_features():
ex_iterable = ExamplesIterable(generate_examples_fn, {})
dataset = IterableDataset(ex_iterable)
assert dataset.features is None
assert dataset.column_names is None
dataset = dataset._resolve_features()
assert dataset.features == Features(
{
"id": Value("int64"),
}
)
assert dataset.column_names == ["id"]
def test_iterable_dataset_resolve_features_keep_order():
def gen():
yield from zip(range(3), [{"a": 1}, {"c": 1}, {"b": 1}])
ex_iterable = ExamplesIterable(gen, {})
dataset = IterableDataset(ex_iterable)._resolve_features()
# columns appear in order of appearance in the dataset
assert list(dataset.features) == ["a", "c", "b"]
assert dataset.column_names == ["a", "c", "b"]
def test_iterable_dataset_with_features_fill_with_none():
def gen():
yield from zip(range(2), [{"a": 1}, {"b": 1}])
ex_iterable = ExamplesIterable(gen, {})
info = DatasetInfo(features=Features({"a": Value("int32"), "b": Value("int32")}))
dataset = IterableDataset(ex_iterable, info=info)
assert list(dataset) == [{"a": 1, "b": None}, {"b": 1, "a": None}]
def test_concatenate_datasets():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label": 10})
dataset1 = IterableDataset(ex_iterable1)
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"label": 5})
dataset2 = IterableDataset(ex_iterable2)
concatenated_dataset = concatenate_datasets([dataset1, dataset2])
assert list(concatenated_dataset) == list(dataset1) + list(dataset2)
def test_concatenate_datasets_resolves_features():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label": 10})
dataset1 = IterableDataset(ex_iterable1)
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"label": 5})
dataset2 = IterableDataset(ex_iterable2)
concatenated_dataset = concatenate_datasets([dataset1, dataset2])
assert concatenated_dataset.features is not None
assert sorted(concatenated_dataset.features) == ["id", "label"]
def test_concatenate_datasets_with_different_columns():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label": 10})
dataset1 = IterableDataset(ex_iterable1)
ex_iterable2 = ExamplesIterable(generate_examples_fn, {})
dataset2 = IterableDataset(ex_iterable2)
# missing column "label" -> it should be replaced with nulls
extended_dataset2_list = [{"label": None, **x} for x in dataset2]
concatenated_dataset = concatenate_datasets([dataset1, dataset2])
assert list(concatenated_dataset) == list(dataset1) + extended_dataset2_list
# change order
concatenated_dataset = concatenate_datasets([dataset2, dataset1])
assert list(concatenated_dataset) == extended_dataset2_list + list(dataset1)
def test_concatenate_datasets_axis_1():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label1": 10})
dataset1 = IterableDataset(ex_iterable1)
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"label2": 5})
dataset2 = IterableDataset(ex_iterable2)
with pytest.raises(ValueError): # column "id" is duplicated -> raise an error
concatenate_datasets([dataset1, dataset2], axis=1)
concatenated_dataset = concatenate_datasets([dataset1, dataset2.remove_columns("id")], axis=1)
assert list(concatenated_dataset) == [{**x, **y} for x, y in zip(dataset1, dataset2)]
def test_concatenate_datasets_axis_1_resolves_features():
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label1": 10})
dataset1 = IterableDataset(ex_iterable1)
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"label2": 5})
dataset2 = IterableDataset(ex_iterable2).remove_columns("id")
concatenated_dataset = concatenate_datasets([dataset1, dataset2], axis=1)
assert concatenated_dataset.features is not None
assert sorted(concatenated_dataset.features) == ["id", "label1", "label2"]
def test_concatenate_datasets_axis_1_with_different_lengths():
n1 = 10
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"label1": 10, "n": n1})
dataset1 = IterableDataset(ex_iterable1)
n2 = 5
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"label2": 5, "n": n2})
dataset2 = IterableDataset(ex_iterable2).remove_columns("id")
# missing rows -> they should be replaced with nulls
extended_dataset2_list = list(dataset2) + [{"label2": None}] * (n1 - n2)
concatenated_dataset = concatenate_datasets([dataset1, dataset2], axis=1)
assert list(concatenated_dataset) == [{**x, **y} for x, y in zip(dataset1, extended_dataset2_list)]
# change order
concatenated_dataset = concatenate_datasets([dataset2, dataset1], axis=1)
assert list(concatenated_dataset) == [{**x, **y} for x, y in zip(extended_dataset2_list, dataset1)]
@pytest.mark.parametrize(
"probas, seed, expected_length, stopping_strategy",
[
(None, None, 3 * (DEFAULT_N_EXAMPLES - 1) + 1, "first_exhausted"),
([1, 0, 0], None, DEFAULT_N_EXAMPLES, "first_exhausted"),
([0, 1, 0], None, DEFAULT_N_EXAMPLES, "first_exhausted"),
([0.2, 0.5, 0.3], 42, None, "first_exhausted"),
([0.1, 0.1, 0.8], 1337, None, "first_exhausted"),
([0.5, 0.2, 0.3], 101010, None, "first_exhausted"),
(None, None, 3 * DEFAULT_N_EXAMPLES, "all_exhausted"),
([0.2, 0.5, 0.3], 42, None, "all_exhausted"),
([0.1, 0.1, 0.8], 1337, None, "all_exhausted"),
([0.5, 0.2, 0.3], 101010, None, "all_exhausted"),
],
)
def test_interleave_datasets(dataset: IterableDataset, probas, seed, expected_length, stopping_strategy):
d1 = dataset
d2 = dataset.map(lambda x: {"id+1": x["id"] + 1, **x})
d3 = dataset.with_format("python")
datasets = [d1, d2, d3]
merged_dataset = interleave_datasets(
datasets, probabilities=probas, seed=seed, stopping_strategy=stopping_strategy
)
def fill_default(example):
return {"id": None, "id+1": None, **example}
# Check the examples iterable
assert isinstance(
merged_dataset._ex_iterable, (CyclingMultiSourcesExamplesIterable, RandomlyCyclingMultiSourcesExamplesIterable)
)
# Check that it is deterministic
if seed is not None:
merged_dataset2 = interleave_datasets(
[d1, d2, d3], probabilities=probas, seed=seed, stopping_strategy=stopping_strategy
)
assert list(merged_dataset) == list(merged_dataset2)
# Check features
assert merged_dataset.features == Features({"id": Value("int64"), "id+1": Value("int64")})
# Check first example
if seed is not None:
rng = np.random.default_rng(seed)
i = next(iter(RandomlyCyclingMultiSourcesExamplesIterable._iter_random_indices(rng, len(datasets), p=probas)))
assert next(iter(merged_dataset)) == fill_default(next(iter(datasets[i])))
else:
assert any(next(iter(merged_dataset)) == fill_default(next(iter(dataset))) for dataset in datasets)
    # Compute the length in case it's random
if expected_length is None:
expected_length = 0
counts = np.array([len(list(d)) for d in datasets])
bool_strategy_func = np.all if stopping_strategy == "all_exhausted" else np.any
rng = np.random.default_rng(seed)
for i in RandomlyCyclingMultiSourcesExamplesIterable._iter_random_indices(rng, len(datasets), p=probas):
counts[i] -= 1
expected_length += 1
if bool_strategy_func(counts <= 0):
break
# Check length
assert len(list(merged_dataset)) == expected_length
def test_interleave_datasets_with_features(
dataset: IterableDataset,
):
features = Features(
{
"id": Value("int64"),
"label": ClassLabel(names=["negative", "positive"]),
}
)
ex_iterable = ExamplesIterable(generate_examples_fn, {"label": 0})
dataset_with_features = IterableDataset(ex_iterable, info=DatasetInfo(features=features))
merged_dataset = interleave_datasets([dataset, dataset_with_features])
assert merged_dataset.features == features
def test_interleave_datasets_with_oversampling():
# Test hardcoded results
d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [0, 1, 2]])), {}))
d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [10, 11, 12, 13]])), {}))
d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [20, 21, 22, 23, 24]])), {}))
expected_values = [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]
# Check oversampling strategy without probabilities
assert [x["a"] for x in interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")] == expected_values
# Check oversampling strategy with probabilities
expected_values = [20, 0, 21, 10, 1, 22, 23, 24, 2, 0, 1, 20, 11, 21, 2, 0, 12, 1, 22, 13]
values = [
x["a"]
for x in interleave_datasets(
[d1, d2, d3], probabilities=[0.5, 0.2, 0.3], seed=42, stopping_strategy="all_exhausted"
)
]
assert values == expected_values
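# Hedged note on the two stopping strategies exercised above: "first_exhausted"
# stops as soon as any source runs out (undersampling the longer ones), while
# "all_exhausted" keeps cycling exhausted sources from their start until every
# source has been fully seen at least once (oversampling the shorter ones).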
@require_torch
def test_with_format_torch(dataset_with_several_columns: IterableDataset):
import torch
dset = dataset_with_several_columns.with_format(type="torch")
example = next(iter(dset))
batch = next(iter(dset.iter(batch_size=3)))
assert len(example) == 3
assert isinstance(example["id"], torch.Tensor)
assert list(example["id"].shape) == []
assert example["id"].item() == 0
assert isinstance(batch["id"], torch.Tensor)
assert isinstance(example["filepath"], list)
assert isinstance(example["filepath"][0], str)
assert example["filepath"][0] == "data0.txt"
assert isinstance(batch["filepath"], list)
assert isinstance(example["metadata"], dict)
assert isinstance(example["metadata"]["sources"], list)
assert isinstance(example["metadata"]["sources"][0], str)
assert isinstance(batch["metadata"], list)
@require_tf
def test_with_format_tf(dataset_with_several_columns: IterableDataset):
import tensorflow as tf
dset = dataset_with_several_columns.with_format(type="tensorflow")
example = next(iter(dset))
batch = next(iter(dset.iter(batch_size=3)))
assert isinstance(example["id"], tf.Tensor)
assert list(example["id"].shape) == []
assert example["id"].numpy().item() == 0
assert isinstance(batch["id"], tf.Tensor)
assert isinstance(example["filepath"], tf.Tensor)
assert example["filepath"][0] == b"data0.txt"
assert isinstance(batch["filepath"], tf.Tensor)
assert isinstance(example["metadata"], dict)
assert isinstance(example["metadata"]["sources"], tf.Tensor)
assert isinstance(batch["metadata"], list)
def test_map_array_are_not_converted_back_to_lists(dataset: IterableDataset):
def func(example):
return {"array": np.array([1, 2, 3])}
dset_test = dataset.map(func)
example = next(iter(dset_test))
# not aligned with Dataset.map because we don't convert back to lists after map()
assert isinstance(example["array"], np.ndarray)
def test_formatted_map(dataset: IterableDataset):
dataset = dataset.with_format("np")
assert isinstance(next(dataset.iter(batch_size=3))["id"], np.ndarray)
dataset = dataset.with_format(None)
assert isinstance(next(dataset.iter(batch_size=3))["id"], list)
def add_one_numpy(example):
assert isinstance(example["id"], np.ndarray)
return {"id": example["id"] + 1}
dataset = dataset.with_format("np")
dataset = dataset.map(add_one_numpy, batched=True)
assert isinstance(next(dataset.iter(batch_size=3))["id"], np.ndarray)
dataset = dataset.with_format(None)
assert isinstance(next(dataset.iter(batch_size=3))["id"], list)
@pytest.mark.parametrize("n_shards1, n_shards2, num_workers", [(2, 1, 1), (2, 2, 2), (1, 3, 1), (4, 3, 3)])
def test_interleave_dataset_with_sharding(n_shards1, n_shards2, num_workers):
from torch.utils.data import DataLoader
ex_iterable1 = ExamplesIterable(generate_examples_fn, {"filepaths": [f"{i}-1.txt" for i in range(n_shards1)]})
dataset1 = IterableDataset(ex_iterable1).with_format("torch")
ex_iterable2 = ExamplesIterable(generate_examples_fn, {"filepaths": [f"{i}-2.txt" for i in range(n_shards2)]})
dataset2 = IterableDataset(ex_iterable2).with_format("torch")
dataset_merged = interleave_datasets([dataset1, dataset2], stopping_strategy="first_exhausted")
assert dataset_merged.n_shards == min(n_shards1, n_shards2)
dataloader = DataLoader(dataset_merged, batch_size=None, num_workers=num_workers)
result = list(dataloader)
expected_length = 2 * min(
len([example for _, example in ex_iterable1]), len([example for _, example in ex_iterable2])
)
# some samples may be missing because the stopping strategy is applied per process
assert expected_length - num_workers <= len(result) <= expected_length
assert len(result) == len({str(x) for x in result})
def filter_func(batch):
return batch["id"] == 4
def map_func(batch):
batch["id"] *= 2
return batch
def test_pickle_after_many_transforms(dataset_with_several_columns):
dataset = dataset_with_several_columns
dataset = dataset.remove_columns(["filepath"])
dataset = dataset.take(5)
dataset = dataset.map(map_func)
dataset = dataset.shuffle()
dataset = dataset.skip(1)
dataset = dataset.filter(filter_func)
dataset = dataset.add_column("additional_col", ["something"])
dataset = dataset.rename_column("metadata", "metadata1")
dataset = dataset.rename_columns({"id": "id1", "metadata1": "metadata2"})
dataset = dataset.select_columns(["id1", "additional_col"])
unpickled_dataset = pickle.loads(pickle.dumps(dataset))
assert list(unpickled_dataset) == list(dataset)
|
datasets/tests/test_iterable_dataset.py/0
|
{
"file_path": "datasets/tests/test_iterable_dataset.py",
"repo_id": "datasets",
"token_count": 36328
}
| 86
|
import unittest
from unittest.mock import patch
import pytest
from pytest import CaptureFixture
from datasets.utils import (
are_progress_bars_disabled,
disable_progress_bars,
enable_progress_bars,
tqdm,
)
class TestTqdmUtils(unittest.TestCase):
@pytest.fixture(autouse=True)
def capsys(self, capsys: CaptureFixture) -> None:
"""Workaround to make capsys work in unittest framework.
Capsys is a convenient pytest fixture to capture stdout.
See https://waylonwalker.com/pytest-capsys/.
Taken from https://github.com/pytest-dev/pytest/issues/2504#issuecomment-309475790.
"""
self.capsys = capsys
def setUp(self) -> None:
"""Get verbosity to set it back after the tests."""
self._previous_are_progress_bars_disabled = are_progress_bars_disabled()
return super().setUp()
def tearDown(self) -> None:
"""Set back progress bars verbosity as before testing."""
if self._previous_are_progress_bars_disabled:
disable_progress_bars()
else:
enable_progress_bars()
@patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None)
def test_tqdm_helpers(self) -> None:
"""Test helpers to enable/disable progress bars."""
disable_progress_bars()
self.assertTrue(are_progress_bars_disabled())
enable_progress_bars()
self.assertFalse(are_progress_bars_disabled())
@patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", True)
def test_cannot_enable_tqdm_when_env_variable_is_set(self) -> None:
"""
Test helpers cannot enable/disable progress bars when
`HF_DATASETS_DISABLE_PROGRESS_BARS` is set.
"""
disable_progress_bars()
self.assertTrue(are_progress_bars_disabled())
with self.assertWarns(UserWarning):
enable_progress_bars()
self.assertTrue(are_progress_bars_disabled()) # Still disabled !
@patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", False)
def test_cannot_disable_tqdm_when_env_variable_is_set(self) -> None:
"""
Test helpers cannot enable/disable progress bars when
`HF_DATASETS_DISABLE_PROGRESS_BARS` is set.
"""
enable_progress_bars()
self.assertFalse(are_progress_bars_disabled())
with self.assertWarns(UserWarning):
disable_progress_bars()
self.assertFalse(are_progress_bars_disabled()) # Still enabled !
@patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None)
def test_tqdm_disabled(self) -> None:
"""Test TQDM not outputting anything when globally disabled."""
disable_progress_bars()
for _ in tqdm(range(10)):
pass
captured = self.capsys.readouterr()
self.assertEqual(captured.out, "")
self.assertEqual(captured.err, "")
@patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None)
def test_tqdm_disabled_cannot_be_forced(self) -> None:
"""Test TQDM cannot be forced when globally disabled."""
disable_progress_bars()
for _ in tqdm(range(10), disable=False):
pass
captured = self.capsys.readouterr()
self.assertEqual(captured.out, "")
self.assertEqual(captured.err, "")
@patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None)
def test_tqdm_can_be_disabled_when_globally_enabled(self) -> None:
"""Test TQDM can still be locally disabled even when globally enabled."""
enable_progress_bars()
for _ in tqdm(range(10), disable=True):
pass
captured = self.capsys.readouterr()
self.assertEqual(captured.out, "")
self.assertEqual(captured.err, "")
@patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None)
def test_tqdm_enabled(self) -> None:
"""Test TQDM work normally when globally enabled."""
enable_progress_bars()
for _ in tqdm(range(10)):
pass
captured = self.capsys.readouterr()
self.assertEqual(captured.out, "")
self.assertIn("10/10", captured.err) # tqdm log
|
datasets/tests/test_tqdm.py/0
|
{
"file_path": "datasets/tests/test_tqdm.py",
"repo_id": "datasets",
"token_count": 1804
}
| 87
|
<jupyter_start><jupyter_text>Unit 4: Code your first Deep Reinforcement Learning Algorithm with PyTorch: Reinforce. And test its robustness 💪In this notebook, you'll code your first Deep Reinforcement Learning algorithm from scratch: Reinforce (also called Monte Carlo Policy Gradient).Reinforce is a *Policy-based method*: a Deep Reinforcement Learning algorithm that tries **to optimize the policy directly without using an action-value function**.More precisely, Reinforce is a *Policy-gradient method*, a subclass of *Policy-based methods* that aims **to optimize the policy directly by estimating the weights of the optimal policy using gradient ascent**.To test its robustness, we're going to train it in 2 different simple environments:- CartPole-v1- PixelcopterEnv⬇️ Here is an example of what **you will achieve at the end of this notebook.** ⬇️ 🎮 Environments: - [CartPole-v1](https://www.gymlibrary.dev/environments/classic_control/cart_pole/)- [PixelCopter](https://pygame-learning-environment.readthedocs.io/en/latest/user/games/pixelcopter.html) 📚 RL-Library: - Python- PyTorchWe're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues). Objectives of this notebook 🏆At the end of the notebook, you will:- Be able to **code from scratch a Reinforce algorithm using PyTorch.**- Be able to **test the robustness of your agent using simple environments.**- Be able to **push your trained agent to the Hub** with a nice video replay and an evaluation score 🔥. This notebook is from the Deep Reinforcement Learning Course In this free course, you will:- 📖 Study Deep Reinforcement Learning in **theory and practice**.- 🧑💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.- 🤖 Train **agents in unique environments** And more! Check 📚 the syllabus 👉 https://simoninithomas.github.io/deep-rl-course Don’t forget to **sign up to the course** (we are collecting your email to be able to **send you the links when each Unit is published and give you information about the challenges and updates).**The best way to keep in touch is to join our discord server to exchange with the community and with us 👉🏻 https://discord.gg/ydHrjt3WP5 Prerequisites 🏗️Before diving into the notebook, you need to:🔲 📚 [Study Policy Gradients by reading Unit 4](https://huggingface.co/deep-rl-course/unit4/introduction) Let's code the Reinforce algorithm from scratch 🔥To validate this hands-on for the certification process, you need to push your trained models to the Hub.- Get a result of >= 350 for `CartPole-v1`.- Get a result of >= 5 for `PixelCopter`.To find your result, go to the leaderboard and find your model, **the result = mean_reward - std of reward**. **If you don't see your model on the leaderboard, go to the bottom of the leaderboard page and click on the refresh button**.For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process Some advice 💡It's better to run this Colab in a copy on your Google Drive, so that **if it times out** you still have the saved notebook on your Google Drive and do not need to start everything from scratch.To do that you can either do `Ctrl + S` or `File > Save a copy in Google Drive.` Set the GPU 💪- To **accelerate the agent's training, we'll use a GPU**. 
To do that, go to `Runtime > Change Runtime type` - `Hardware Accelerator > GPU` Create a virtual display 🖥 During the notebook, we'll need to generate a replay video. To do so, with colab, **we need to have a virtual screen to be able to render the environment** (and thus record the frames). Hence the following cell will install the libraries and create and run a virtual screen 🖥<jupyter_code>%%capture
!apt install python-opengl
!apt install ffmpeg
!apt install xvfb
!pip install pyvirtualdisplay
!pip install pyglet==1.5.1
# Virtual display
from pyvirtualdisplay import Display
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()<jupyter_output><empty_output><jupyter_text>Install the dependencies 🔽The first step is to install the dependencies. We’ll install multiple ones:- `gym`- `gym-games`: Extra gym environments made with PyGame.- `huggingface_hub`: 🤗 works as a central place where anyone can share and explore models and datasets. It has versioning, metrics, visualizations, and other features that will allow you to easily collaborate with others.You may be wondering why we install gym and not gymnasium, a more recent version of gym? **Because the gym-games we are using are not updated yet with gymnasium**. The differences you'll encounter here are:- In `gym` we don't have `terminated` and `truncated` but only `done`.- In `gym` using `env.step()` returns `state, reward, done, info`You can learn more about the differences between Gym and Gymnasium here 👉 https://gymnasium.farama.org/content/migration-guide/You can see here all the Reinforce models available 👉 https://huggingface.co/models?other=reinforceAnd you can find all the Deep Reinforcement Learning models here 👉 https://huggingface.co/models?pipeline_tag=reinforcement-learning<jupyter_code>!pip install -r https://raw.githubusercontent.com/huggingface/deep-rl-class/main/notebooks/unit4/requirements-unit4.txt<jupyter_output><empty_output><jupyter_text>Import the packages 📦In addition to importing the installed libraries, we also import:- `imageio`: A library that will help us to generate a replay video<jupyter_code>import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
# PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
# Gym
import gym
import gym_pygame
# Hugging Face Hub
from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub.
import imageio<jupyter_output><empty_output><jupyter_text>Check if we have a GPU- Let's check if we have a GPU- If that's the case, you should see `device:cuda0`<jupyter_code>device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)<jupyter_output><empty_output><jupyter_text>We're now ready to implement our Reinforce algorithm 🔥 First agent: Playing CartPole-v1 🤖 Create the CartPole environment and understand how it works [The environment 🎮](https://www.gymlibrary.dev/environments/classic_control/cart_pole/) Why do we use a simple environment like CartPole-v1?As explained in [Reinforcement Learning Tips and Tricks](https://stable-baselines3.readthedocs.io/en/master/guide/rl_tips.html), when you implement your agent from scratch you need **to be sure that it works correctly and find bugs with easy environments before going deeper**, since finding bugs will be much easier in simple environments.> Try to have some “sign of life” on toy problems> Validate the implementation by making it run on harder and harder envs (you can compare results against the RL zoo). You usually need to run hyperparameter optimization for that step.___ The CartPole-v1 environment> A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pendulum is placed upright on the cart and the goal is to balance the pole by applying forces in the left and right direction on the cart.So, we start with CartPole-v1. The goal is to push the cart left or right **so that the pole stays in equilibrium.**The episode ends if:- The pole Angle is greater than ±12°- Cart Position is greater than ±2.4- Episode length is greater than 500We get a reward 💰 of +1 every timestep the pole stays in equilibrium.<jupyter_code>env_id = "CartPole-v1"
# Create the env
env = gym.make(env_id)
# Create the evaluation env
eval_env = gym.make(env_id)
# Get the state space and action space
s_size = env.observation_space.shape[0]
a_size = env.action_space.n
print("_____OBSERVATION SPACE_____ \n")
print("The State Space is: ", s_size)
print("Sample observation", env.observation_space.sample()) # Get a random observation
print("\n _____ACTION SPACE_____ \n")
print("The Action Space is: ", a_size)
print("Action Space Sample", env.action_space.sample()) # Take a random action<jupyter_output><empty_output><jupyter_text>Let's build the Reinforce ArchitectureThis implementation is based on two implementations:- [PyTorch official Reinforcement Learning example](https://github.com/pytorch/examples/blob/main/reinforcement_learning/reinforce.py)- [Udacity Reinforce](https://github.com/udacity/deep-reinforcement-learning/blob/master/reinforce/REINFORCE.ipynb)- [Improvement of the integration by Chris1nexus](https://github.com/huggingface/deep-rl-class/pull/95) So we want:- Two fully connected layers (fc1 and fc2).- Using ReLU as activation function of fc1- Using Softmax to output a probability distribution over actions<jupyter_code>class Policy(nn.Module):
def __init__(self, s_size, a_size, h_size):
super(Policy, self).__init__()
# Create two fully connected layers
def forward(self, x):
# Define the forward pass
# state goes to fc1 then we apply ReLU activation function
# fc1 outputs goes to fc2
# We output the softmax
def act(self, state):
"""
Given a state, take action
"""
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu()
m = Categorical(probs)
action = np.argmax(m)
return action.item(), m.log_prob(action)<jupyter_output><empty_output><jupyter_text>Solution<jupyter_code>class Policy(nn.Module):
def __init__(self, s_size, a_size, h_size):
super(Policy, self).__init__()
self.fc1 = nn.Linear(s_size, h_size)
self.fc2 = nn.Linear(h_size, a_size)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.softmax(x, dim=1)
def act(self, state):
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu()
m = Categorical(probs)
action = np.argmax(m)
    return action.item(), m.log_prob(action)<jupyter_output><empty_output><jupyter_text>I made a mistake, can you guess where?- To find out, let's make a forward pass:<jupyter_code>debug_policy = Policy(s_size, a_size, 64).to(device)
debug_policy.act(env.reset())<jupyter_output><empty_output><jupyter_text>- Here we see that the error says `ValueError: The value argument to log_prob must be a Tensor`- It means that `action` in `m.log_prob(action)` must be a Tensor **but it's not.**- Do you know why? Check the act function and try to see why it does not work. Advice 💡: Something is wrong in this implementation. Remember that in the act function **we want to sample an action from the probability distribution over actions**. (Real) Solution<jupyter_code>class Policy(nn.Module):
def __init__(self, s_size, a_size, h_size):
super(Policy, self).__init__()
self.fc1 = nn.Linear(s_size, h_size)
self.fc2 = nn.Linear(h_size, a_size)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.softmax(x, dim=1)
def act(self, state):
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu()
m = Categorical(probs)
action = m.sample()
    return action.item(), m.log_prob(action)<jupyter_output><empty_output><jupyter_text>By using CartPole, it was easier to debug since **we know that the bug comes from our integration and not from our simple environment**. - Since **we want to sample an action from the probability distribution over actions**, we can't use `action = np.argmax(m)` since it will always output the action that has the highest probability.- We need to replace it with `action = m.sample()`, which will sample an action from the probability distribution P(.|s) Let's build the Reinforce Training AlgorithmThis is the Reinforce algorithm pseudocode: - When we calculate the return Gt (line 6) we see that we calculate the sum of discounted rewards **starting at timestep t**.- Why? Because our policy should only **reinforce actions on the basis of the consequences**: so rewards obtained before taking an action are useless (since they were not because of the action), **only the ones that come after the action matter**.- Before coding this you should read this section [don't let the past distract you](https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html#don-t-let-the-past-distract-you) that explains why we use reward-to-go policy gradient.We use an interesting technique coded by [Chris1nexus](https://github.com/Chris1nexus) to **compute the return at each timestep efficiently**. The comments explain the procedure. Also don't hesitate [to check the PR explanation](https://github.com/huggingface/deep-rl-class/pull/95)But overall the idea is to **compute the return at each timestep efficiently**. The second question you may ask is **why do we minimize the loss**? Didn't we talk about Gradient Ascent, not Gradient Descent?- We want to maximize our utility function $J(\theta)$ but in PyTorch, as in TensorFlow, it's better to **minimize an objective function.** - So let's say we want to reinforce action 3 at a certain timestep. Before training, this action's probability P is 0.25. - So we want to modify $\theta$ such that $\pi_\theta(a_3|s; \theta) > 0.25$ - Because all P must sum to 1, maximizing $\pi_\theta(a_3|s; \theta)$ will **minimize the other actions' probabilities.** - So we should tell PyTorch **to minimize $1 - \pi_\theta(a_3|s; \theta)$.** - This loss function approaches 0 as $\pi_\theta(a_3|s; \theta)$ nears 1. - So we are encouraging the gradient to maximize $\pi_\theta(a_3|s; \theta)$<jupyter_code>def reinforce(policy, optimizer, n_training_episodes, max_t, gamma, print_every):
# Help us to calculate the score during the training
scores_deque = deque(maxlen=100)
scores = []
# Line 3 of pseudocode
for i_episode in range(1, n_training_episodes+1):
saved_log_probs = []
rewards = []
state = # TODO: reset the environment
# Line 4 of pseudocode
for t in range(max_t):
action, log_prob = # TODO get the action
saved_log_probs.append(log_prob)
state, reward, done, _ = # TODO: take an env step
rewards.append(reward)
if done:
break
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
# Line 6 of pseudocode: calculate the return
returns = deque(maxlen=max_t)
n_steps = len(rewards)
# Compute the discounted returns at each timestep,
# as the sum of the gamma-discounted return at time t (G_t) + the reward at time t
# In O(N) time, where N is the number of time steps
# (this definition of the discounted return G_t follows the definition of this quantity
# shown at page 44 of Sutton&Barto 2017 2nd draft)
# G_t = r_(t+1) + r_(t+2) + ...
# Given this formulation, the returns at each timestep t can be computed
# by re-using the computed future returns G_(t+1) to compute the current return G_t
# G_t = r_(t+1) + gamma*G_(t+1)
# G_(t-1) = r_t + gamma* G_t
# (this follows a dynamic programming approach, with which we memorize solutions in order
# to avoid computing them multiple times)
# This is correct since the above is equivalent to (see also page 46 of Sutton&Barto 2017 2nd draft)
# G_(t-1) = r_t + gamma*r_(t+1) + gamma*gamma*r_(t+2) + ...
## Given the above, we calculate the returns at timestep t as:
# gamma[t] * return[t] + reward[t]
#
## We compute this starting from the last timestep to the first, in order
## to employ the formula presented above and avoid redundant computations that would be needed
## if we were to do it from first to last.
## Hence, the queue "returns" will hold the returns in chronological order, from t=0 to t=n_steps
## thanks to the appendleft() function which allows to append to the position 0 in constant time O(1)
## a normal python list would instead require O(N) to do this.
for t in range(n_steps)[::-1]:
disc_return_t = (returns[0] if len(returns)>0 else 0)
returns.appendleft( ) # TODO: complete here
## standardization of the returns is employed to make training more stable
eps = np.finfo(np.float32).eps.item()
## eps is the smallest representable float, which is
# added to the standard deviation of the returns to avoid numerical instabilities
returns = torch.tensor(returns)
returns = (returns - returns.mean()) / (returns.std() + eps)
# Line 7:
policy_loss = []
for log_prob, disc_return in zip(saved_log_probs, returns):
policy_loss.append(-log_prob * disc_return)
policy_loss = torch.cat(policy_loss).sum()
# Line 8: PyTorch prefers gradient descent
optimizer.zero_grad()
policy_loss.backward()
optimizer.step()
if i_episode % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
return scores<jupyter_output><empty_output><jupyter_text>Solution<jupyter_code>def reinforce(policy, optimizer, n_training_episodes, max_t, gamma, print_every):
# Help us to calculate the score during the training
scores_deque = deque(maxlen=100)
scores = []
# Line 3 of pseudocode
for i_episode in range(1, n_training_episodes+1):
saved_log_probs = []
rewards = []
state = env.reset()
# Line 4 of pseudocode
for t in range(max_t):
action, log_prob = policy.act(state)
saved_log_probs.append(log_prob)
state, reward, done, _ = env.step(action)
rewards.append(reward)
if done:
break
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
# Line 6 of pseudocode: calculate the return
returns = deque(maxlen=max_t)
n_steps = len(rewards)
# Compute the discounted returns at each timestep,
# as
# the sum of the gamma-discounted return at time t (G_t) + the reward at time t
#
# In O(N) time, where N is the number of time steps
# (this definition of the discounted return G_t follows the definition of this quantity
# shown at page 44 of Sutton&Barto 2017 2nd draft)
# G_t = r_(t+1) + r_(t+2) + ...
# Given this formulation, the returns at each timestep t can be computed
# by re-using the computed future returns G_(t+1) to compute the current return G_t
# G_t = r_(t+1) + gamma*G_(t+1)
# G_(t-1) = r_t + gamma* G_t
# (this follows a dynamic programming approach, with which we memorize solutions in order
# to avoid computing them multiple times)
# This is correct since the above is equivalent to (see also page 46 of Sutton&Barto 2017 2nd draft)
# G_(t-1) = r_t + gamma*r_(t+1) + gamma*gamma*r_(t+2) + ...
## Given the above, we calculate the returns at timestep t as:
# gamma[t] * return[t] + reward[t]
#
## We compute this starting from the last timestep to the first, in order
## to employ the formula presented above and avoid redundant computations that would be needed
## if we were to do it from first to last.
## Hence, the queue "returns" will hold the returns in chronological order, from t=0 to t=n_steps
## thanks to the appendleft() function which allows to append to the position 0 in constant time O(1)
## a normal python list would instead require O(N) to do this.
for t in range(n_steps)[::-1]:
disc_return_t = (returns[0] if len(returns)>0 else 0)
returns.appendleft( gamma*disc_return_t + rewards[t] )
## standardization of the returns is employed to make training more stable
eps = np.finfo(np.float32).eps.item()
## eps is the smallest representable float, which is
# added to the standard deviation of the returns to avoid numerical instabilities
returns = torch.tensor(returns)
returns = (returns - returns.mean()) / (returns.std() + eps)
# Line 7:
policy_loss = []
for log_prob, disc_return in zip(saved_log_probs, returns):
policy_loss.append(-log_prob * disc_return)
policy_loss = torch.cat(policy_loss).sum()
# Line 8: PyTorch prefers gradient descent
optimizer.zero_grad()
policy_loss.backward()
optimizer.step()
if i_episode % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
return scores<jupyter_output><empty_output><jupyter_text>Train it- We're now ready to train our agent.- But first, we define a variable containing all the training hyperparameters.- You can change the training parameters (and should 😉)<jupyter_code>cartpole_hyperparameters = {
"h_size": 16,
"n_training_episodes": 1000,
"n_evaluation_episodes": 10,
"max_t": 1000,
"gamma": 1.0,
"lr": 1e-2,
"env_id": env_id,
"state_space": s_size,
"action_space": a_size,
}
# Create policy and place it to the device
cartpole_policy = Policy(cartpole_hyperparameters["state_space"], cartpole_hyperparameters["action_space"], cartpole_hyperparameters["h_size"]).to(device)
cartpole_optimizer = optim.Adam(cartpole_policy.parameters(), lr=cartpole_hyperparameters["lr"])
scores = reinforce(cartpole_policy,
cartpole_optimizer,
cartpole_hyperparameters["n_training_episodes"],
cartpole_hyperparameters["max_t"],
cartpole_hyperparameters["gamma"],
100)<jupyter_output><empty_output><jupyter_text>Define evaluation method 📝- Here we define the evaluation method that we're going to use to test our Reinforce agent.<jupyter_code>def evaluate_agent(env, max_steps, n_eval_episodes, policy):
"""
Evaluate the agent for ``n_eval_episodes`` episodes and return the average reward and std of reward.
:param env: The evaluation environment
:param max_steps: Maximum number of steps per episode
:param n_eval_episodes: Number of episodes to evaluate the agent
:param policy: The Reinforce agent
"""
episode_rewards = []
for episode in range(n_eval_episodes):
state = env.reset()
step = 0
done = False
total_rewards_ep = 0
for step in range(max_steps):
action, _ = policy.act(state)
new_state, reward, done, info = env.step(action)
total_rewards_ep += reward
if done:
break
state = new_state
episode_rewards.append(total_rewards_ep)
mean_reward = np.mean(episode_rewards)
std_reward = np.std(episode_rewards)
return mean_reward, std_reward<jupyter_output><empty_output><jupyter_text>Evaluate our agent 📈<jupyter_code>evaluate_agent(eval_env,
cartpole_hyperparameters["max_t"],
cartpole_hyperparameters["n_evaluation_episodes"],
    cartpole_policy)<jupyter_output><empty_output><jupyter_text>Publish our trained model on the Hub 🔥Now that we've seen good results after training, we can publish our trained model on the Hub 🤗 with one line of code.Here's an example of a Model Card: Push to the Hub Do not modify this code<jupyter_code>from huggingface_hub import HfApi, snapshot_download
from huggingface_hub.repocard import metadata_eval_result, metadata_save
from pathlib import Path
import datetime
import json
import imageio
import tempfile
import os
def record_video(env, policy, out_directory, fps=30):
"""
Generate a replay video of the agent
:param env
:param policy: the policy of our agent
:param out_directory
:param fps: how many frames per second (with taxi-v3 and frozenlake-v1 we use 1)
"""
images = []
done = False
state = env.reset()
img = env.render(mode='rgb_array')
images.append(img)
while not done:
# Sample the action from the policy given that state
action, _ = policy.act(state)
state, reward, done, info = env.step(action) # We directly put next_state = state for recording logic
img = env.render(mode='rgb_array')
images.append(img)
imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)
def push_to_hub(repo_id,
model,
hyperparameters,
eval_env,
video_fps=30
):
"""
Evaluate, Generate a video and Upload a model to Hugging Face Hub.
This method does the complete pipeline:
- It evaluates the model
- It generates the model card
- It generates a replay video of the agent
- It pushes everything to the Hub
:param repo_id: repo_id: id of the model repository from the Hugging Face Hub
:param model: the pytorch model we want to save
:param hyperparameters: training hyperparameters
:param eval_env: evaluation environment
:param video_fps: how many frames per second to record our video replay
"""
_, repo_name = repo_id.split("/")
api = HfApi()
# Step 1: Create the repo
repo_url = api.create_repo(
repo_id=repo_id,
exist_ok=True,
)
with tempfile.TemporaryDirectory() as tmpdirname:
local_directory = Path(tmpdirname)
# Step 2: Save the model
torch.save(model, local_directory / "model.pt")
# Step 3: Save the hyperparameters to JSON
with open(local_directory / "hyperparameters.json", "w") as outfile:
json.dump(hyperparameters, outfile)
# Step 4: Evaluate the model and build JSON
mean_reward, std_reward = evaluate_agent(eval_env,
hyperparameters["max_t"],
hyperparameters["n_evaluation_episodes"],
model)
# Get datetime
eval_datetime = datetime.datetime.now()
eval_form_datetime = eval_datetime.isoformat()
evaluate_data = {
"env_id": hyperparameters["env_id"],
"mean_reward": mean_reward,
"n_evaluation_episodes": hyperparameters["n_evaluation_episodes"],
"eval_datetime": eval_form_datetime,
}
# Write a JSON file
with open(local_directory / "results.json", "w") as outfile:
json.dump(evaluate_data, outfile)
# Step 5: Create the model card
env_name = hyperparameters["env_id"]
metadata = {}
metadata["tags"] = [
env_name,
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class"
]
# Add metrics
eval = metadata_eval_result(
model_pretty_name=repo_name,
task_pretty_name="reinforcement-learning",
task_id="reinforcement-learning",
metrics_pretty_name="mean_reward",
metrics_id="mean_reward",
metrics_value=f"{mean_reward:.2f} +/- {std_reward:.2f}",
dataset_pretty_name=env_name,
dataset_id=env_name,
)
# Merges both dictionaries
metadata = {**metadata, **eval}
model_card = f"""
# **Reinforce** Agent playing **{env_name}**
This is a trained model of a **Reinforce** agent playing **{env_name}**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
"""
readme_path = local_directory / "README.md"
readme = ""
if readme_path.exists():
with readme_path.open("r", encoding="utf8") as f:
readme = f.read()
else:
readme = model_card
with readme_path.open("w", encoding="utf-8") as f:
f.write(readme)
# Save our metrics to Readme metadata
metadata_save(readme_path, metadata)
# Step 6: Record a video
video_path = local_directory / "replay.mp4"
record_video(eval_env, model, video_path, video_fps)
# Step 7. Push everything to the Hub
api.upload_folder(
repo_id=repo_id,
folder_path=local_directory,
path_in_repo=".",
)
print(f"Your model is pushed to the Hub. You can view your model here: {repo_url}")<jupyter_output><empty_output><jupyter_text>.By using `push_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the Hub**.This way:- You can **showcase our work** 🔥- You can **visualize your agent playing** 👀- You can **share with the community an agent that others can use** 💾- You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard To be able to share your model with the community there are three more steps to follow:1️⃣ (If it's not already done) create an account to HF ➡ https://huggingface.co/join2️⃣ Sign in and then, you need to store your authentication token from the Hugging Face website.- Create a new token (https://huggingface.co/settings/tokens) **with write role**<jupyter_code>notebook_login()<jupyter_output><empty_output><jupyter_text>If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login` (or `login`) 3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥 using `package_to_hub()` function<jupyter_code>repo_id = "" #TODO Define your repo id {username/Reinforce-{model-id}}
push_to_hub(repo_id,
cartpole_policy, # The model we want to save
cartpole_hyperparameters, # Hyperparameters
eval_env, # Evaluation environment
video_fps=30
)<jupyter_output><empty_output><jupyter_text>Now that we've tested the robustness of our implementation, let's try a more complex environment: PixelCopter 🚁 Second agent: PixelCopter 🚁 Study the PixelCopter environment 👀- [The Environment documentation](https://pygame-learning-environment.readthedocs.io/en/latest/user/games/pixelcopter.html)<jupyter_code>env_id = "Pixelcopter-PLE-v0"
env = gym.make(env_id)
eval_env = gym.make(env_id)
s_size = env.observation_space.shape[0]
a_size = env.action_space.n
print("_____OBSERVATION SPACE_____ \n")
print("The State Space is: ", s_size)
print("Sample observation", env.observation_space.sample()) # Get a random observation
print("\n _____ACTION SPACE_____ \n")
print("The Action Space is: ", a_size)
print("Action Space Sample", env.action_space.sample()) # Take a random action<jupyter_output><empty_output><jupyter_text>The observation space (7) 👀:- player y position- player velocity- player distance to floor- player distance to ceiling- next block x distance to player- next blocks top y location- next blocks bottom y locationThe action space(2) 🎮:- Up (press accelerator) - Do nothing (don't press accelerator) The reward function 💰: - For each vertical block it passes through it gains a positive reward of +1. Each time a terminal state reached it receives a negative reward of -1. Define the new Policy 🧠- We need to have a deeper neural network since the environment is more complex<jupyter_code>class Policy(nn.Module):
def __init__(self, s_size, a_size, h_size):
super(Policy, self).__init__()
# Define the three layers here
def forward(self, x):
# Define the forward process here
return F.softmax(x, dim=1)
def act(self, state):
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu()
m = Categorical(probs)
action = m.sample()
return action.item(), m.log_prob(action)<jupyter_output><empty_output><jupyter_text>Solution<jupyter_code>class Policy(nn.Module):
def __init__(self, s_size, a_size, h_size):
super(Policy, self).__init__()
self.fc1 = nn.Linear(s_size, h_size)
self.fc2 = nn.Linear(h_size, h_size*2)
self.fc3 = nn.Linear(h_size*2, a_size)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return F.softmax(x, dim=1)
def act(self, state):
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu()
m = Categorical(probs)
action = m.sample()
    return action.item(), m.log_prob(action)<jupyter_output><empty_output><jupyter_text>Define the hyperparameters ⚙️- Because this environment is more complex, we need more neurons, especially for the hidden size.<jupyter_code>pixelcopter_hyperparameters = {
"h_size": 64,
"n_training_episodes": 50000,
"n_evaluation_episodes": 10,
"max_t": 10000,
"gamma": 0.99,
"lr": 1e-4,
"env_id": env_id,
"state_space": s_size,
"action_space": a_size,
}<jupyter_output><empty_output><jupyter_text>Train it- We're now ready to train our agent 🔥.<jupyter_code># Create policy and place it to the device
# torch.manual_seed(50)
pixelcopter_policy = Policy(pixelcopter_hyperparameters["state_space"], pixelcopter_hyperparameters["action_space"], pixelcopter_hyperparameters["h_size"]).to(device)
pixelcopter_optimizer = optim.Adam(pixelcopter_policy.parameters(), lr=pixelcopter_hyperparameters["lr"])
scores = reinforce(pixelcopter_policy,
pixelcopter_optimizer,
pixelcopter_hyperparameters["n_training_episodes"],
pixelcopter_hyperparameters["max_t"],
pixelcopter_hyperparameters["gamma"],
1000)<jupyter_output><empty_output><jupyter_text>Publish our trained model on the Hub 🔥<jupyter_code>repo_id = "" #TODO Define your repo id {username/Reinforce-{model-id}}
push_to_hub(repo_id,
pixelcopter_policy, # The model we want to save
pixelcopter_hyperparameters, # Hyperparameters
eval_env, # Evaluation environment
video_fps=30
)<jupyter_output><empty_output>
|
deep-rl-class/notebooks/unit4/unit4.ipynb/0
|
{
"file_path": "deep-rl-class/notebooks/unit4/unit4.ipynb",
"repo_id": "deep-rl-class",
"token_count": 12740
}
| 88
|
# The Exploration/Exploitation trade-off [[exp-exp-tradeoff]]
Finally, before looking at the different methods to solve Reinforcement Learning problems, we must cover one more very important topic: *the exploration/exploitation trade-off.*
- *Exploration* is exploring the environment by trying random actions in order to **find more information about the environment.**
- *Exploitation* is **exploiting known information to maximize the reward.**
Remember, the goal of our RL agent is to maximize the expected cumulative reward. However, **we can fall into a common trap**.
Let’s take an example:
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/exp_1.jpg" alt="Exploration" width="100%">
In this game, our mouse can have an **infinite amount of small cheese** (+1 each). But at the top of the maze, there is a gigantic sum of cheese (+1000).
However, if we only focus on exploitation, our agent will never reach the gigantic sum of cheese. Instead, it will only exploit **the nearest source of rewards,** even if this source is small (exploitation).
But if our agent does a little bit of exploration, it can **discover the big reward** (the pile of big cheese).
This is what we call the exploration/exploitation trade-off. We need to balance how much we **explore the environment** and how much we **exploit what we know about the environment.**
Therefore, we must **define a rule that helps to handle this trade-off**. We’ll see the different ways to handle it in the future units.
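One simple rule you'll meet later in the course is *epsilon-greedy*: with probability epsilon we explore (take a random action), otherwise we exploit (take the best known action). Here is a minimal illustrative sketch in Python; the function and variable names are assumptions made for this example:

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon, explore (random action);
    otherwise, exploit (action with the highest estimated value)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # exploration
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploitation
```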
If it’s still confusing, **think of a real problem: the choice of picking a restaurant:**
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/exp_2.jpg" alt="Exploration">
  <figcaption>Source: <a href="https://inst.eecs.berkeley.edu/~cs188/sp20/assets/lecture/lec15_6up.pdf"> Berkeley AI Course</a>
</figcaption>
</figure>
- *Exploitation*: You go to the same one that you know is good every day and **risk missing another, better restaurant.**
- *Exploration*: Try restaurants you've never been to before, with the risk of having a bad experience **but the possible opportunity of a fantastic experience.**
To recap:
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/expexpltradeoff.jpg" alt="Exploration Exploitation Tradeoff" width="100%">
|
deep-rl-class/units/en/unit1/exp-exp-tradeoff.mdx/0
|
{
"file_path": "deep-rl-class/units/en/unit1/exp-exp-tradeoff.mdx",
"repo_id": "deep-rl-class",
"token_count": 699
}
| 89
|
# Monte Carlo vs Temporal Difference Learning [[mc-vs-td]]
The last thing we need to discuss before diving into Q-Learning is the two learning strategies.
Remember that an RL agent **learns by interacting with its environment.** The idea is that **given the experience and the received reward, the agent will update its value function or policy.**
Monte Carlo and Temporal Difference Learning are two different **strategies on how to train our value function or our policy function.** Both of them **use experience to solve the RL problem.**
On one hand, Monte Carlo uses **an entire episode of experience before learning.** On the other hand, Temporal Difference uses **only a step ( \\(S_t, A_t, R_{t+1}, S_{t+1}\\) ) to learn.**
We'll explain both of them **using a value-based method example.**
## Monte Carlo: learning at the end of the episode [[monte-carlo]]
Monte Carlo waits until the end of the episode, calculates \\(G_t\\) (return) and uses it as **a target for updating \\(V(S_t)\\).**
So it requires a **complete episode of interaction before updating our value function.**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/monte-carlo-approach.jpg" alt="Monte Carlo"/>
If we take an example:
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/MC-2.jpg" alt="Monte Carlo"/>
- We always start the episode **at the same starting point.**
- **The agent takes actions using the policy**. For instance, using an Epsilon Greedy Strategy, a policy that alternates between exploration (random actions) and exploitation.
- We get **the reward and the next state.**
- We terminate the episode if the cat eats the mouse or if the mouse moves > 10 steps.
- At the end of the episode, **we have a list of State, Actions, Rewards, and Next States tuples**
For instance [[State tile 3 bottom, Go Left, +1, State tile 2 bottom], [State tile 2 bottom, Go Left, +0, State tile 1 bottom]...]
- **The agent will sum the total rewards \\(G_t\\)** (to see how well it did).
- It will then **update \\(V(s_t)\\) based on the formula**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/MC-3.jpg" alt="Monte Carlo"/>
- Then **start a new game with this new knowledge**
By running more and more episodes, **the agent will learn to play better and better.**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/MC-3p.jpg" alt="Monte Carlo"/>
For instance, if we train a state-value function using Monte Carlo:
- We initialize our value function **so that it returns 0 value for each state**
- Our learning rate (lr) is 0.1 and our discount rate is 1 (= no discount)
- Our mouse **explores the environment and takes random actions**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/MC-4.jpg" alt="Monte Carlo"/>
- The mouse took more than 10 steps, so the episode ends.
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/MC-4p.jpg" alt="Monte Carlo"/>
- We have a list of state, action, rewards, next_state; **we need to calculate the return \\(G_{t=0}\\)**
\\(G_t = R_{t+1} + R_{t+2} + R_{t+3} ...\\) (for simplicity, we don't discount the rewards)
\\(G_0 = R_{1} + R_{2} + R_{3}…\\)
\\(G_0 = 1 + 0 + 0 + 0 + 0 + 0 + 1 + 1 + 0 + 0\\)
\\(G_0 = 3\\)
- We can now compute the **new** \\(V(S_0)\\):
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/MC-5.jpg" alt="Monte Carlo"/>
\\(V(S_0) = V(S_0) + lr * [G_0 - V(S_0)]\\)
\\(V(S_0) = 0 + 0.1 * [3 - 0]\\)
\\(V(S_0) = 0.3\\)
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/MC-5p.jpg" alt="Monte Carlo"/>
## Temporal Difference Learning: learning at each step [[td-learning]]
**Temporal Difference, on the other hand, waits for only one interaction (one step) \\(S_{t+1}\\)** to form a TD target and update \\(V(S_t)\\) using \\(R_{t+1}\\) and \\( \gamma * V(S_{t+1})\\).
The idea with **TD is to update the \\(V(S_t)\\) at each step.**
But because we didn't experience an entire episode, we don't have \\(G_t\\) (expected return). Instead, **we estimate \\(G_t\\) by adding \\(R_{t+1}\\) and the discounted value of the next state.**
This is called bootstrapping. It's called this **because TD bases its update in part on an existing estimate \\(V(S_{t+1})\\) and not a complete sample \\(G_t\\).**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/TD-1.jpg" alt="Temporal Difference"/>
This method is called TD(0) or **one-step TD (update the value function after any individual step).**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/TD-1p.jpg" alt="Temporal Difference"/>
If we take the same example,
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/TD-2.jpg" alt="Temporal Difference"/>
- We initialize our value function so that it returns 0 value for each state.
- Our learning rate (lr) is 0.1, and our discount rate is 1 (no discount).
- Our mouse begins to explore the environment and takes a random action: **going to the left**
- It gets a reward \\(R_{t+1} = 1\\) since **it eats a piece of cheese**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/TD-2p.jpg" alt="Temporal Difference"/>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/TD-3.jpg" alt="Temporal Difference"/>
We can now update \\(V(S_0)\\):
New \\(V(S_0) = V(S_0) + lr * [R_1 + \gamma * V(S_1) - V(S_0)]\\)
New \\(V(S_0) = 0 + 0.1 * [1 + 1 * 0 - 0]\\)
New \\(V(S_0) = 0.1\\)
So we just updated our value function for State 0.
Now we **continue to interact with this environment with our updated value function.**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/TD-3p.jpg" alt="Temporal Difference"/>
To summarize:
- With *Monte Carlo*, we update the value function from a complete episode, and so we **use the actual accurate discounted return of this episode.**
- With *TD Learning*, we update the value function from a step, and we replace \\(G_t\\), which we don't know, with **an estimated return called the TD target.**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/Summary.jpg" alt="Summary"/>
|
deep-rl-class/units/en/unit2/mc-vs-td.mdx/0
|
{
"file_path": "deep-rl-class/units/en/unit2/mc-vs-td.mdx",
"repo_id": "deep-rl-class",
"token_count": 2316
}
| 90
|
# Deep Q-Learning [[deep-q-learning]]
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/thumbnail.jpg" alt="Unit 3 thumbnail" width="100%">
In the last unit, we learned our first reinforcement learning algorithm: Q-Learning, **implemented it from scratch**, and trained it in two environments, FrozenLake-v1 ☃️ and Taxi-v3 🚕.
We got excellent results with this simple algorithm, but these environments were relatively simple because the **state space was discrete and small** (16 different states for FrozenLake-v1 and 500 for Taxi-v3). For comparison, the state space in Atari games can **contain \\(10^{9}\\) to \\(10^{11}\\) states**.
But as we'll see, producing and updating a **Q-table can become ineffective in large state space environments.**
So in this unit, **we'll study our first Deep Reinforcement Learning agent**: Deep Q-Learning. Instead of using a Q-table, Deep Q-Learning uses a Neural Network that takes a state and approximates Q-values for each action based on that state.
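As a preview, such a network can be sketched in a few lines of PyTorch. The fully connected architecture and layer sizes below are illustrative assumptions, not the network used in this unit (which works on Atari frames):

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per action."""
    def __init__(self, state_size, action_size, hidden_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, action_size),  # one output per action
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork(state_size=4, action_size=2)
q_values = q_net(torch.zeros(1, 4))  # Q-value estimates for each action
```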
And **we'll train it to play Space Invaders and other Atari environments using [RL-Zoo](https://github.com/DLR-RM/rl-baselines3-zoo)**, a training framework for RL using Stable-Baselines that provides scripts for training, evaluating agents, tuning hyperparameters, plotting results, and recording videos.
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/atari-envs.gif" alt="Environments"/>
So let’s get started! 🚀
|
deep-rl-class/units/en/unit3/introduction.mdx/0
|
{
"file_path": "deep-rl-class/units/en/unit3/introduction.mdx",
"repo_id": "deep-rl-class",
"token_count": 437
}
| 91
|
# How do Unity ML-Agents work? [[how-mlagents-works]]
Before training our agent, we need to understand **what ML-Agents is and how it works**.
## What is Unity ML-Agents? [[what-is-mlagents]]
[Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents) is a toolkit for the game engine Unity that **allows us to create environments using Unity or use pre-made environments to train our agents**.
It’s developed by [Unity Technologies](https://unity.com/), the developers of Unity, one of the most famous Game Engines used by the creators of Firewatch, Cuphead, and Cities: Skylines.
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/firewatch.jpeg" alt="Firewatch"/>
<figcaption>Firewatch was made with Unity</figcaption>
</figure>
## The six components [[six-components]]
With Unity ML-Agents, you have six essential components:
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/mlagents-1.png" alt="MLAgents"/>
<figcaption>Source: <a href="https://unity-technologies.github.io/ml-agents/">Unity ML-Agents Documentation</a> </figcaption>
</figure>
- The first is the *Learning Environment*, which contains **the Unity scene (the environment) and the environment elements** (game characters).
- The second is the *Python Low-level API*, which contains **the low-level Python interface for interacting and manipulating the environment**. It’s the API we use to launch the training.
- Then, we have the *External Communicator* that **connects the Learning Environment (made with C#) with the low level Python API (Python)**.
- The *Python trainers*: the **Reinforcement Learning algorithms made with PyTorch (PPO, SAC…)**.
- The *Gym wrapper*: to encapsulate the RL environment in a gym wrapper.
- The *PettingZoo wrapper*: PettingZoo is the multi-agent version of the gym wrapper.
## Inside the Learning Component [[inside-learning-component]]
Inside the Learning Component, we have **two important elements**:
- The first is the *agent component*, the actor of the scene. We’ll **train the agent by optimizing its policy** (which will tell us what action to take in each state). The policy is called the *Brain*.
- The second is the *Academy*. This component **orchestrates agents and their decision-making processes**. Think of this Academy as a teacher who handles Python API requests.
To better understand its role, let’s remember the RL process. This can be modeled as a loop that works like this:
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/RL_process.jpg" alt="The RL process" width="100%">
<figcaption>The RL Process: a loop of state, action, reward and next state</figcaption>
<figcaption>Source: <a href="http://incompleteideas.net/book/RLbook2020.pdf">Reinforcement Learning: An Introduction, Richard Sutton and Andrew G. Barto</a></figcaption>
</figure>
Now, let’s imagine an agent learning to play a platform game. The RL process looks like this:
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/RL_process_game.jpg" alt="The RL process" width="100%">
- Our Agent receives **state \\(S_0\\)** from the **Environment** — we receive the first frame of our game (Environment).
- Based on that **state \\(S_0\\),** the Agent takes **action \\(A_0\\)** — our Agent will move to the right.
- The environment goes to a **new** **state \\(S_1\\)** — new frame.
- The environment gives some **reward \\(R_1\\)** to the Agent — we’re not dead *(Positive Reward +1)*.
This RL loop outputs a sequence of **state, action, reward and next state.** The goal of the agent is to **maximize the expected cumulative reward**.
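In Python, using the classic gym API that this course relies on (the environment id below is an illustrative assumption; ML-Agents exposes the same loop through its Python API and the gym wrapper mentioned above), this loop can be sketched as:

```python
import gym

env = gym.make("CartPole-v1")        # any environment exposing the standard API
state = env.reset()                  # receive the first state S_0
done = False
while not done:
    action = env.action_space.sample()            # a trained policy would act here
    state, reward, done, info = env.step(action)  # next state, reward, termination
env.close()
```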
The Academy will be the one that will **send the order to our Agents and ensure that agents are in sync**:
- Collect Observations
- Select your action using your policy
- Take the Action
- Reset if you reached the max step or if you’re done.
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/academy.png" alt="The MLAgents Academy" width="100%">
Now that we understand how ML-Agents works, **we’re ready to train our agents.**
|
deep-rl-class/units/en/unit5/how-mlagents-works.mdx/0
|
{
"file_path": "deep-rl-class/units/en/unit5/how-mlagents-works.mdx",
"repo_id": "deep-rl-class",
"token_count": 1276
}
| 92
|
# Introduction [[introduction]]
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit0/thumbnail.png" alt="Thumbnail"/>
Since the beginning of this course, we learned to train agents in a *single-agent system* where our agent was alone in its environment: it was **not cooperating or collaborating with other agents**.
This worked great, and the single-agent system is useful for many applications.
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/patchwork.jpg" alt="Patchwork"/>
<figcaption>
A patchwork of all the environments you’ve trained your agents on since the beginning of the course
</figcaption>
</figure>
But, as humans, **we live in a multi-agent world**. Our intelligence comes from interaction with other agents. And so, our **goal is to create agents that can interact with other humans and other agents**.
Consequently, we must study how to train deep reinforcement learning agents in a *multi-agent system* to build robust agents that can adapt, collaborate, or compete.
So today we’re going to **learn the basics of the fascinating topic of multi-agent reinforcement learning (MARL)**.
And the most exciting part is that, during this unit, you’re going to train your first agents in a multi-agent system: **a 2vs2 soccer team that needs to beat the opponent team**.
And you’re going to participate in the **AI vs. AI challenge** where your trained agent will compete against other classmates’ agents every day and be ranked on a [new leaderboard](https://huggingface.co/spaces/huggingface-projects/AIvsAI-SoccerTwos).
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/soccertwos.gif" alt="SoccerTwos"/>
<figcaption>This environment was made by the <a href="https://github.com/Unity-Technologies/ml-agents">Unity MLAgents Team</a></figcaption>
</figure>
So let’s get started!
|
deep-rl-class/units/en/unit7/introduction.mdx/0
|
{
"file_path": "deep-rl-class/units/en/unit7/introduction.mdx",
"repo_id": "deep-rl-class",
"token_count": 574
}
| 93
|
# Introduction [[introduction]]
In this bonus unit, we'll reinforce what we learned in the first unit by teaching Huggy the Dog to fetch the stick and then [play with him directly in your browser](https://huggingface.co/spaces/ThomasSimonini/Huggy) 🐶
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit2/thumbnail.png" alt="Unit bonus 1 thumbnail" width="100%">
So let's get started 🚀
|
deep-rl-class/units/en/unitbonus1/introduction.mdx/0
|
{
"file_path": "deep-rl-class/units/en/unitbonus1/introduction.mdx",
"repo_id": "deep-rl-class",
"token_count": 138
}
| 94
|
# Brief introduction to RL documentation
In this advanced topic, we address the question: **how should we monitor and keep track of powerful reinforcement learning agents that we are training in the real world and
interfacing with humans?**
As machine learning systems have increasingly impacted modern life, the **call for the documentation of these systems has grown**.
Such documentation can cover aspects such as the training data used — where it is stored, when it was collected, who was involved, etc.
— or the model optimization framework — the architecture, evaluation metrics, relevant papers, etc. — and more.
Today, model cards and datasheets are becoming increasingly available. For example, on the Hub
(see documentation [here](https://huggingface.co/docs/hub/model-cards)).
If you click on a [popular model on the Hub](https://huggingface.co/models), you can learn about its creation process.
These model and data specific logs are designed to be completed when the model or dataset is created, but they tend to go un-updated when these models are built into evolving systems in the future.
## Motivating Reward Reports
Reinforcement learning systems are fundamentally designed to optimize based on measurements of reward and time.
While the notion of a reward function can be mapped nicely to many well-understood fields of supervised learning (via a loss function),
understanding of how machine learning systems evolve over time is limited.
To that end, the authors introduce [*Reward Reports for Reinforcement Learning*](https://www.notion.so/Brief-introduction-to-RL-documentation-b8cbda5a6f5242338e0756e6bef72af4) (the pithy naming is designed to mirror the popular papers *Model Cards for Model Reporting* and *Datasheets for Datasets*).
The goal is to propose a type of documentation focused on the **human factors of reward** and **time-varying feedback systems**.
Building on the documentation frameworks for [model cards](https://arxiv.org/abs/1810.03993) and [datasheets](https://arxiv.org/abs/1803.09010) proposed by Mitchell et al. and Gebru et al., they argue the need for Reward Reports for AI systems.
**Reward Reports** are living documents for proposed RL deployments that demarcate design choices.
However, many questions remain about the applicability of this framework to different RL applications, roadblocks to system interpretability,
and the resonances between deployed supervised machine learning systems and the sequential decision-making utilized in RL.
At a minimum, Reward Reports are an opportunity for RL practitioners to deliberate on these questions and begin the work of deciding how to resolve them in practice.
## Capturing temporal behavior with documentation
The core piece of documentation specific to RL and feedback-driven ML systems is a *change-log*. The change-log records updates
from the designer (changed training parameters, data, etc.) alongside changes noticed by users (harmful behavior, unexpected responses, etc.).
The change-log is accompanied by update triggers that encourage monitoring for these effects.
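To make this concrete, a purely illustrative change-log entry might record fields like the following; the schema here is hypothetical and not part of the Reward Reports specification:
```python
# A hypothetical change-log entry for a deployed RL system; all field names are illustrative
changelog_entry = {
    "date": "2023-01-15",
    "designer_changes": "increased exploration bonus from 0.01 to 0.05",
    "observed_changes": "users report more diverse but occasionally off-topic recommendations",
    "update_trigger": "scheduled monthly review of engagement metrics",
}
```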
## Contributing
Some of the most impactful RL-driven systems are multi-stakeholder in nature and behind the closed doors of private corporations.
These corporations are largely without regulation, so the burden of documentation falls on the public.
If you are interested in contributing, we are building Reward Reports for popular machine learning systems on a public
record on [GitHub](https://github.com/RewardReports/reward-reports).
For further reading, you can visit the Reward Reports [paper](https://arxiv.org/abs/2204.10817)
or look at [an example report](https://github.com/RewardReports/reward-reports/tree/main/examples).
## Author
This section was written by <a href="https://twitter.com/natolambert"> Nathan Lambert </a>
|
deep-rl-class/units/en/unitbonus3/rl-documentation.mdx/0
|
{
"file_path": "deep-rl-class/units/en/unitbonus3/rl-documentation.mdx",
"repo_id": "deep-rl-class",
"token_count": 886
}
| 95
|
<!---
Copyright 2022 - The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<br>
<img src="https://raw.githubusercontent.com/huggingface/diffusers/main/docs/source/en/imgs/diffusers_library.jpg" width="400"/>
<br>
</p>
<p align="center">
<a href="https://github.com/huggingface/diffusers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/datasets.svg?color=blue">
</a>
<a href="https://github.com/huggingface/diffusers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg">
</a>
<a href="https://pepy.tech/project/diffusers">
<img alt="GitHub release" src="https://static.pepy.tech/badge/diffusers/month">
</a>
<a href="CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg">
</a>
<a href="https://twitter.com/diffuserslib">
<img alt="X account" src="https://img.shields.io/twitter/url/https/twitter.com/diffuserslib.svg?style=social&label=Follow%20%40diffuserslib">
</a>
</p>
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](https://huggingface.co/docs/diffusers/conceptual/philosophy#usability-over-performance), [simple over easy](https://huggingface.co/docs/diffusers/conceptual/philosophy#simple-over-easy), and [customizability over abstractions](https://huggingface.co/docs/diffusers/conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
🤗 Diffusers offers three core components:
- State-of-the-art [diffusion pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) that can be run in inference with just a few lines of code.
- Interchangeable noise [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview) for different diffusion speeds and output quality (see the short sketch after this list).
- Pretrained [models](https://huggingface.co/docs/diffusers/api/models/overview) that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
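For instance, here is a minimal sketch of swapping in a different scheduler without touching the rest of a pipeline; the Stable Diffusion checkpoint is just one example:
```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# Schedulers share a config interface, so one can be swapped for another in a single line
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
```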
## Installation
We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/) and [Flax](https://flax.readthedocs.io/en/latest/#installation), please refer to their official documentation.
### PyTorch
With `pip` (official package):
```bash
pip install --upgrade diffusers[torch]
```
With `conda` (maintained by the community):
```bash
conda install -c conda-forge diffusers
```
### Flax
With `pip` (official package):
```bash
pip install --upgrade diffusers[flax]
```
### Apple Silicon (M1/M2) support
Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggingface.co/docs/diffusers/optimization/mps) guide.
## Quickstart
Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 22000+ checkpoints):
```python
from diffusers import DiffusionPipeline
import torch

# Load a pretrained text-to-image pipeline in half precision and move it to the GPU
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline.to("cuda")

# Generate an image from a text prompt; .images is a list of PIL images
pipeline("An image of a squirrel in Picasso style").images[0]
```
You can also dig into the models and schedulers toolbox to build your own diffusion system:
```python
from diffusers import DDPMScheduler, UNet2DModel
from PIL import Image
import torch

# Load a pretrained denoising model and its matching noise scheduler
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
scheduler.set_timesteps(50)

# Start from pure Gaussian noise at the model's expected sample size
sample_size = model.config.sample_size
sample = torch.randn((1, 3, sample_size, sample_size), device="cuda")

# Iteratively denoise: predict the noise residual, then let the scheduler
# compute the slightly less noisy sample for the previous timestep
for t in scheduler.timesteps:
    with torch.no_grad():
        noisy_residual = model(sample, t).sample
    sample = scheduler.step(noisy_residual, t, sample).prev_sample

# Rescale from [-1, 1] to [0, 255] and convert the tensor to a PIL image
image = (sample / 2 + 0.5).clamp(0, 1)
image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
image = Image.fromarray((image * 255).round().astype("uint8"))
image
```
Check out the [Quickstart](https://huggingface.co/docs/diffusers/quicktour) to launch your diffusion journey today!
## How to navigate the documentation
| **Documentation** | **What can I learn?** |
|---------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Tutorial](https://huggingface.co/docs/diffusers/tutorials/tutorial_overview) | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. |
| [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading_overview) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
| [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/pipeline_overview) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
| [Optimization](https://huggingface.co/docs/diffusers/optimization/opt_overview) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
| [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. |
## Contribution
We ❤️ contributions from the open-source community!
If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md).
You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library.
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute
- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) to contribute new diffusion schedulers
Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions, personal projects or just hang out ☕.
## Popular Tasks & Pipelines
<table>
<tr>
<th>Task</th>
<th>Pipeline</th>
<th>🤗 Hub</th>
</tr>
<tr style="border-top: 2px solid black">
<td>Unconditional Image Generation</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/ddpm"> DDPM </a></td>
<td><a href="https://huggingface.co/google/ddpm-ema-church-256"> google/ddpm-ema-church-256 </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img">Stable Diffusion Text-to-Image</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"> runwayml/stable-diffusion-v1-5 </a></td>
</tr>
<tr>
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/unclip">unCLIP</a></td>
<td><a href="https://huggingface.co/kakaobrain/karlo-v1-alpha"> kakaobrain/karlo-v1-alpha </a></td>
</tr>
<tr>
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/deepfloyd_if">DeepFloyd IF</a></td>
<td><a href="https://huggingface.co/DeepFloyd/IF-I-XL-v1.0"> DeepFloyd/IF-I-XL-v1.0 </a></td>
</tr>
<tr>
<td>Text-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/kandinsky">Kandinsky</a></td>
<td><a href="https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder"> kandinsky-community/kandinsky-2-2-decoder </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Text-guided Image-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/controlnet">ControlNet</a></td>
<td><a href="https://huggingface.co/lllyasviel/sd-controlnet-canny"> lllyasviel/sd-controlnet-canny </a></td>
</tr>
<tr>
<td>Text-guided Image-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/pix2pix">InstructPix2Pix</a></td>
<td><a href="https://huggingface.co/timbrooks/instruct-pix2pix"> timbrooks/instruct-pix2pix </a></td>
</tr>
<tr>
<td>Text-guided Image-to-Image</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/img2img">Stable Diffusion Image-to-Image</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"> runwayml/stable-diffusion-v1-5 </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Text-guided Image Inpainting</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/inpaint">Stable Diffusion Inpainting</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-inpainting"> runwayml/stable-diffusion-inpainting </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Image Variation</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/image_variation">Stable Diffusion Image Variation</a></td>
<td><a href="https://huggingface.co/lambdalabs/sd-image-variations-diffusers"> lambdalabs/sd-image-variations-diffusers </a></td>
</tr>
<tr style="border-top: 2px solid black">
<td>Super Resolution</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/upscale">Stable Diffusion Upscale</a></td>
<td><a href="https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler"> stabilityai/stable-diffusion-x4-upscaler </a></td>
</tr>
<tr>
<td>Super Resolution</td>
<td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/latent_upscale">Stable Diffusion Latent Upscale</a></td>
<td><a href="https://huggingface.co/stabilityai/sd-x2-latent-upscaler"> stabilityai/sd-x2-latent-upscaler </a></td>
</tr>
</table>
## Popular libraries using 🧨 Diffusers
- https://github.com/microsoft/TaskMatrix
- https://github.com/invoke-ai/InvokeAI
- https://github.com/apple/ml-stable-diffusion
- https://github.com/Sanster/lama-cleaner
- https://github.com/IDEA-Research/Grounded-Segment-Anything
- https://github.com/ashawkey/stable-dreamfusion
- https://github.com/deep-floyd/IF
- https://github.com/bentoml/BentoML
- https://github.com/bmaltais/kohya_ss
- +9000 other amazing GitHub repositories 💪
Thank you for using us ❤️.
## Credits
This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations which have helped us in our development and without which the API could not have been as polished today:
- @CompVis' latent diffusion models library, available [here](https://github.com/CompVis/latent-diffusion)
- @hojonathanho's original DDPM implementation, available [here](https://github.com/hojonathanho/diffusion), as well as the extremely useful translation into PyTorch by @pesser, available [here](https://github.com/pesser/pytorch_diffusion)
- @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim)
- @yang-song's Score-VE and Score-VP implementations, available [here](https://github.com/yang-song/score_sde_pytorch)
We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models) as well as @crowsonkb and @rromb for useful discussions and insights.
## Citation
```bibtex
@misc{von-platen-etal-2022-diffusers,
author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Dhruv Nair and Sayak Paul and William Berman and Yiyi Xu and Steven Liu and Thomas Wolf},
title = {Diffusers: State-of-the-art diffusion models},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/diffusers}}
}
```
|
diffusers/README.md/0
|
{
"file_path": "diffusers/README.md",
"repo_id": "diffusers",
"token_count": 5416
}
| 96
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# PEFT
Diffusers supports loading adapters such as [LoRA](../../using-diffusers/loading_adapters) with the [PEFT](https://huggingface.co/docs/peft/index) library, using the [`~loaders.peft.PeftAdapterMixin`] class. This allows modeling classes in Diffusers, like [`UNet2DConditionModel`], to load an adapter.
<Tip>
Refer to the [Inference with PEFT](../../tutorials/using_peft_for_inference) tutorial for an overview of how to use PEFT in Diffusers for inference.
</Tip>
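As a rough sketch of what this enables, loading and activating a LoRA adapter might look like the following; the repository id `your-username/sd-lora` and the adapter name `my_style` are hypothetical placeholders:
```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights through the PEFT integration; the repo id is a placeholder
pipeline.load_lora_weights("your-username/sd-lora", adapter_name="my_style")

# Activate the adapter (multiple adapters can be combined with per-adapter weights)
pipeline.set_adapters("my_style")

image = pipeline("a cat in watercolor style").images[0]
```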
## PeftAdapterMixin
[[autodoc]] loaders.peft.PeftAdapterMixin
|
diffusers/docs/source/en/api/loaders/peft.md/0
|
{
"file_path": "diffusers/docs/source/en/api/loaders/peft.md",
"repo_id": "diffusers",
"token_count": 320
}
| 97
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Paint by Example
[Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://huggingface.co/papers/2211.13227) is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen.
The abstract from the paper is:
*Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.*
The original codebase can be found at [Fantasy-Studio/Paint-by-Example](https://github.com/Fantasy-Studio/Paint-by-Example), and you can try it out in a [demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example).
## Tips
Paint by Example is supported by the official [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) checkpoint. The checkpoint is warm-started from [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) to inpaint partly masked images conditioned on example and reference images.
<Tip>
Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
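As a brief sketch of typical usage, the pipeline takes the image to edit, a mask marking the region to inpaint, and the exemplar image; the image URLs below are placeholders you would replace with your own:
```python
import torch
from diffusers import PaintByExamplePipeline
from diffusers.utils import load_image

pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
).to("cuda")

# init_image: the scene to edit; mask_image: white where to inpaint; example_image: the exemplar
init_image = load_image("https://example.com/scene.png")        # placeholder URL
mask_image = load_image("https://example.com/mask.png")         # placeholder URL
example_image = load_image("https://example.com/exemplar.png")  # placeholder URL

image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
```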
## PaintByExamplePipeline
[[autodoc]] PaintByExamplePipeline
- all
- __call__
## StableDiffusionPipelineOutput
[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
|
diffusers/docs/source/en/api/pipelines/paint_by_example.md/0
|
{
"file_path": "diffusers/docs/source/en/api/pipelines/paint_by_example.md",
"repo_id": "diffusers",
"token_count": 762
}
| 98
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Latent Consistency Model Multistep Scheduler
## Overview
The multistep and one-step scheduler (Algorithm 3) was introduced alongside latent consistency models in the paper [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao.
This scheduler should be able to generate good samples from [`LatentConsistencyModelPipeline`] in 1-8 steps.
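For example, a minimal sketch of pairing this scheduler with a pipeline, assuming the community LCM checkpoint `SimianLuo/LCM_Dreamshaper_v7`:
```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# SimianLuo/LCM_Dreamshaper_v7 is a community LCM checkpoint that pairs with this scheduler
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Latent consistency models need very few inference steps
image = pipe("a photo of a cat", num_inference_steps=4, guidance_scale=8.0).images[0]
```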
## LCMScheduler
[[autodoc]] LCMScheduler
|
diffusers/docs/source/en/api/schedulers/lcm.md/0
|
{
"file_path": "diffusers/docs/source/en/api/schedulers/lcm.md",
"repo_id": "diffusers",
"token_count": 291
}
| 99