Datasets: embedding-data/WikiAnswers

Tasks: Sentence Similarity
Modalities: Text
Formats: json
Sub-tasks: semantic-similarity-classification
Languages: English
Size: 1M - 10M
License:

Commit: aa3d54a
Parent(s): ad8fd84
Update README.md

README.md CHANGED

@@ -4,6 +4,11 @@ language:
 - en
 paperswithcode_id: embedding-data/WikiAnswers
 pretty_name: WikiAnswers
+task_categories:
+- sentence-similarity
+- paraphrase-mining
+task_ids:
+- semantic-similarity-classification
 ---

 # Dataset Card for "WikiAnswers"

@@ -43,15 +48,38 @@ pretty_name: WikiAnswers
 The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases.
 Each cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.

-### Supported Tasks
-
-[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
-
+### Supported Tasks
+- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
 ### Languages
-
-[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
-
+- English.
 ## Dataset Structure
+Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value".
+```
+{"set": [sentence_1, sentence_2, ..., sentence_25]}
+{"set": [sentence_1, sentence_2, ..., sentence_25]}
+...
+{"set": [sentence_1, sentence_2, ..., sentence_25]}
+```
+This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
+### Usage Example
+Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
+```python
+from datasets import load_dataset
+dataset = load_dataset("embedding-data/WikiAnswers")
+```
+The dataset is loaded as a `DatasetDict` and has the format for `N` examples:
+```python
+DatasetDict({
+    train: Dataset({
+        features: ['set'],
+        num_rows: N
+    })
+})
+```
+Review an example `i` with:
+```python
+dataset["train"][i]["set"]
+```

 ### Data Instances

@@ -59,19 +87,6 @@ Each cluster optionally contains an answer provided by WikiAnswers users. There

 ### Data Splits

-The data can be downloaded from: [http://knowitall.cs.washington.edu/oqa/data/wikianswers/](http://knowitall.cs.washington.edu/oqa/data/wikianswers/).
-The corpus is split into 40 gzip-compressed files. The total compressed filesize is 8GB; the total decompressed filesize is 40GB.
-Each file contains one cluster per line. Each cluster is a tab-separated list of questions and answers.
-Questions are prefixed by q: and answers are prefixed by a:. Here is an example cluster (tabs replaced with newlines):
-
-```
-q:How many muslims make up indias 1 billion population?
-q:How many of india's population are muslim?
-q:How many populations of muslims in india?
-q:What is population of muslims in india?
-a:Over 160 million Muslims per Pew Forum Study as of October 2009.
-
-```

 ## Dataset Creation

@@ -142,5 +157,4 @@ a:Over 160 million Muslims per Pew Forum Study as of October 2009.

 ### Contributions

-Thanks to [Anthony Fader](https://dl.acm.org/profile/81324489111), [Luke Zettlemoyer](https://dl.acm.org/profile/81100527621), [Oren Etzioni](https://dl.acm.org/profile/99658633129) for adding this dataset.

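
The updated card describes each example as a set of 25 equivalent sentences and recommends the dataset for training [Sentence Transformers](https://huggingface.co/sentence-transformers) models, but it does not show a training loop. Below is a minimal sketch of one way to use the `set` format with the `sentence-transformers` library; the pairing strategy (first sentence of each set against the rest), the 1,000-example slice, the batch size, and the `all-MiniLM-L6-v2` starting checkpoint are illustrative assumptions, not something the card specifies.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Load the paraphrase sets from the Hub.
dataset = load_dataset("embedding-data/WikiAnswers", split="train")

# Build positive pairs: any two sentences in the same set are paraphrases,
# so pair the first sentence of each set with the remaining ones.
train_examples = []
for row in dataset.select(range(1000)):  # small slice to keep the sketch quick
    sentences = row["set"]
    anchor = sentences[0]
    for paraphrase in sentences[1:]:
        train_examples.append(InputExample(texts=[anchor, paraphrase]))

model = SentenceTransformer("all-MiniLM-L6-v2")
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# MultipleNegativesRankingLoss treats the other pairs in a batch as negatives,
# which suits a corpus that only provides positive (paraphrase) pairs.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```

One caveat of this setup: if two pairs drawn from the same cluster land in one batch, the in-batch "negative" is actually a paraphrase; shuffling across the full corpus keeps such collisions rare.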
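The card tags the dataset for sentence similarity and semantic search. As a quick way to see what one of these clusters looks like to an embedding model, the sketch below scores the first sentence of a set against its other paraphrases; the pretrained `all-MiniLM-L6-v2` checkpoint and the choice of example `0` are arbitrary and for illustration only.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

dataset = load_dataset("embedding-data/WikiAnswers", split="train")
sentences = dataset[0]["set"]  # 25 paraphrases of the same question

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity of the first sentence against the rest of its cluster.
scores = util.cos_sim(embeddings[0], embeddings[1:])
for sentence, score in zip(sentences[1:], scores[0].tolist()):
    print(f"{score:.3f}  {sentence}")
```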
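The Data Splits passage removed in this commit documented the original distribution of the corpus: 40 gzip-compressed files, one cluster per line, with tab-separated fields prefixed by `q:` for questions and `a:` for answers. For anyone still working from those raw files rather than the Hub export, here is a small parsing sketch under that assumed format; the file name is a placeholder.

```python
import gzip

def parse_cluster(line: str) -> tuple[list[str], list[str]]:
    """Split one tab-separated cluster line into its questions and answers."""
    questions, answers = [], []
    for field in line.rstrip("\n").split("\t"):
        if field.startswith("q:"):
            questions.append(field[2:])
        elif field.startswith("a:"):
            answers.append(field[2:])
    return questions, answers

# Placeholder file name; the corpus was shipped as 40 files of this form.
with gzip.open("wikianswers-part-00.gz", "rt", encoding="utf-8") as handle:
    for line in handle:
        questions, answers = parse_cluster(line)
        # `questions` is one paraphrase cluster; `answers` is empty for ~89% of clusters.
```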