pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Metric Card for IndicGLUE

## Metric description

This metric is used to compute the evaluation scores for the [IndicGLUE dataset](https://huggingface.co/datasets/indic_glue).

IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide variety of tasks and covers 11 major Indian languages: Assamese (`as`), Bengali (`bn`), Gujarati (`gu`), Hindi (`hi`), Kannada (`kn`), Malayalam (`ml`), Marathi (`mr`), Oriya (`or`), Panjabi (`pa`), Tamil (`ta`) and Telugu (`te`).

## How to use

There are two steps: (1) loading the IndicGLUE metric relevant to the subset of the dataset being used for evaluation; and (2) calculating the metric.

1. **Loading the relevant IndicGLUE metric**: the subsets of IndicGLUE are the following: `wnli`, `copa`, `sna`, `csqa`, `wstp`, `inltkh`, `bbca`, `cvit-mkb-clsr`, `iitp-mr`, `iitp-pr`, `actsa-sc`, `md`, and `wiki-ner`.

More information about the different subsets of the IndicGLUE dataset can be found on the [IndicGLUE dataset page](https://indicnlp.ai4bharat.org/indic-glue/).

2. **Calculating the metric**: the metric takes two inputs: a list of model predictions and a list of references. For all subsets except `cvit-mkb-clsr`, predictions and references are class labels; for `cvit-mkb-clsr`, each prediction and reference is a vector of floats.

```python
import evaluate

indic_glue_metric = evaluate.load('indic_glue', 'wnli')
references = [0, 1]
predictions = [0, 1]
results = indic_glue_metric.compute(predictions=predictions, references=references)
```

## Output values

The output of the metric depends on the IndicGLUE subset chosen: it is a dictionary that contains one or several of the following metrics:

`accuracy`: the proportion of correct predictions among the total number of cases processed, with a range between 0 and 1 (see [accuracy](https://huggingface.co/metrics/accuracy) for more information).
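
As a concrete illustration of this definition, a minimal sketch (not the library's internal implementation):

```python
# Illustrative sketch of the accuracy definition above: the share of
# predictions that match their references. The metric itself is
# computed internally by `evaluate`.
def accuracy(predictions, references):
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

accuracy([0, 1, 1, 0], [0, 1, 0, 0])  # 3 of 4 correct -> 0.75
```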

`f1`: the harmonic mean of the precision and recall (see [F1 score](https://huggingface.co/metrics/f1) for more information). Its range is 0-1 -- its lowest possible value is 0, if either the precision or the recall is 0, and its highest possible value is 1.0, which means perfect precision and recall.
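
The harmonic-mean relationship can be sketched directly (an illustrative helper, not the library's implementation; treating `1` as the positive class is an assumption here):

```python
# Illustrative sketch of binary F1 as the harmonic mean of precision
# and recall; `positive` marks the assumed positive class label.
def f1_binary(predictions, references, positive=1):
    pairs = list(zip(predictions, references))
    tp = sum(1 for p, r in pairs if p == positive and r == positive)
    fp = sum(1 for p, r in pairs if p == positive and r != positive)
    fn = sum(1 for p, r in pairs if p != positive and r == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0  # lowest possible value: precision or recall is 0
    return 2 * precision * recall / (precision + recall)

f1_binary([1, 1, 0, 1], [1, 0, 0, 1])  # precision 2/3, recall 1 -> 0.8
```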

`precision@10`: the fraction of the true examples among the top 10 predicted examples, with a range between 0 and 1 (see [precision](https://huggingface.co/metrics/precision) for more information).
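
For the retrieval setting of `cvit-mkb-clsr`, this definition can be sketched as follows. This is an illustrative reading only: cosine similarity and same-index matching are assumptions, not necessarily the library's exact procedure.

```python
import numpy as np

# Illustrative sketch: a query counts as a hit when its true match
# (assumed to share the same index) ranks in its top 10 by cosine similarity.
def precision_at_10(predictions, references):
    preds = np.asarray(predictions, dtype=float)
    refs = np.asarray(references, dtype=float)
    preds = preds / np.linalg.norm(preds, axis=1, keepdims=True)
    refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    sim = preds @ refs.T                      # pairwise cosine similarities
    top10 = np.argsort(-sim, axis=1)[:, :10]  # top-10 reference indices per query
    hits = sum(i in top10[i] for i in range(len(preds)))
    return hits / len(preds)
```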

The `cvit-mkb-clsr` subset returns `precision@10`, the `wiki-ner` subset returns `accuracy` and `f1`, and all other subsets of IndicGLUE return only `accuracy`.
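
The mapping just described can be written out as a small lookup (the dictionary below is an illustrative summary of this card, not an object exposed by the library):

```python
# Which keys to expect in the result dictionary, per this card's description.
SUBSET_METRICS = {
    "cvit-mkb-clsr": ["precision@10"],
    "wiki-ner": ["accuracy", "f1"],
}

def expected_keys(subset):
    # every other IndicGLUE subset returns only accuracy
    return SUBSET_METRICS.get(subset, ["accuracy"])

expected_keys("wnli")  # -> ['accuracy']
```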

### Values from popular papers

The [original IndicGLUE paper](https://aclanthology.org/2020.findings-emnlp.445.pdf) reported an average accuracy of 0.766 on the dataset; per-subset scores vary.

## Examples

Maximal values for the WNLI subset (which outputs `accuracy`):

```python
>>> indic_glue_metric = evaluate.load('indic_glue', 'wnli')
>>> references = [0, 1]
>>> predictions = [0, 1]
>>> results = indic_glue_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'accuracy': 1.0}
```

Minimal values for the Wiki-NER subset (which outputs `accuracy` and `f1`):

```python
>>> indic_glue_metric = evaluate.load('indic_glue', 'wiki-ner')
>>> references = [0, 1]
>>> predictions = [1, 0]
>>> results = indic_glue_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'accuracy': 0.0, 'f1': 0.0}
```

Maximal values for the CVIT-Mann Ki Baat subset (which outputs `precision@10`):

```python
>>> indic_glue_metric = evaluate.load('indic_glue', 'cvit-mkb-clsr')
>>> references = [[0.5, 0.5, 0.5], [0.1, 0.2, 0.3]]
>>> predictions = [[0.5, 0.5, 0.5], [0.1, 0.2, 0.3]]
>>> results = indic_glue_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'precision@10': 1.0}
```

## Limitations and bias

This metric works only with datasets that have the same format as the [IndicGLUE dataset](https://huggingface.co/datasets/indic_glue).

## Citation

```bibtex
@inproceedings{kakwani2020indicnlpsuite,
  title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}},
  author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar},
  year={2020},
  booktitle={Findings of EMNLP},
}
```

## Further References

- [IndicNLP website](https://indicnlp.ai4bharat.org/home/)