dkhati56 committed · Commit 15303cc · Parent(s): 3a6271c

Update Readme for Text-to-code codesearchnet

Files changed (1): README.md (+91, -0)
README.md CHANGED

---
license: mit
tags:
- CodeSearchNet
- CodeXGLUE
size_categories:
- n<1K
---

### This dataset is imported from CodeXGLUE and pre-processed using their script.

# Where to find in Semeru:

The dataset can be found in Semeru at /nfs/semeru/semeru_datasets/code_xglue/text-to-code/codesearchnet/python.
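
If you want to work with the splits programmatically, below is a minimal sketch that loads the pre-processed JSONL files with the Hugging Face `datasets` library. The directory is the Semeru path above, and the file names (`train.jsonl`, `valid.jsonl`, `test.jsonl`) are those listed in the Data Format section below; both are assumptions you may need to adjust for your local copy.

```python
# Minimal sketch: load the pre-processed splits with the `datasets` library.
# The directory and file names are assumptions based on this card; adjust as needed.
from datasets import load_dataset

data_dir = "/nfs/semeru/semeru_datasets/code_xglue/text-to-code/codesearchnet/python"
splits = load_dataset(
    "json",
    data_files={
        "train": f"{data_dir}/train.jsonl",
        "validation": f"{data_dir}/valid.jsonl",
        "test": f"{data_dir}/test.jsonl",
    },
)
print(splits)  # DatasetDict with train/validation/test splits
```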

# CodeXGLUE -- Code Search (AdvTest)

## Task Definition

Given a natural language query, the task is to search for source code that matches the query. To test the generalization ability of a model, function names and variables in the test set are replaced by special tokens.

## Dataset

The dataset we use comes from [CodeSearchNet](https://arxiv.org/pdf/1909.09436.pdf), and we filter it as follows (a rough sketch of these filters appears after the list):

- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documents have fewer than 3 or more than 256 tokens.
- Remove examples whose documents contain special tokens (e.g. `<img ...>` or `https:...`).
- Remove examples whose documents are not in English.

Besides, to test the generalization ability of a model, function names and variables in the test set are replaced by special tokens.
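
The snippet below is a rough sketch of how the code- and document-level filters above could be approximated in Python. It is not the official CodeXGLUE preprocessing script; the whitespace tokenization and the special-token regex are illustrative assumptions, and the English-language check is omitted.

```python
# Rough sketch of the filters described above (NOT the official CodeXGLUE script).
# The tokenization and special-token heuristics are illustrative assumptions.
import ast
import re

SPECIAL_TOKEN_RE = re.compile(r"<img\s|https?:")  # e.g. <img ...> or https:...

def keep_example(code: str, docstring: str) -> bool:
    # 1. The code must parse into an abstract syntax tree.
    try:
        ast.parse(code)
    except SyntaxError:
        return False
    # 2. The document must have between 3 and 256 tokens (whitespace split here).
    n_tokens = len(docstring.split())
    if n_tokens < 3 or n_tokens > 256:
        return False
    # 3. The document must not contain special tokens such as <img ...> or URLs.
    if SPECIAL_TOKEN_RE.search(docstring):
        return False
    # (The not-English filter would need a language detector and is omitted here.)
    return True

print(keep_example("def add(a, b):\n    return a + b",
                   "Add two numbers and return the sum."))  # True
```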

### Data Format

After preprocessing the dataset, you obtain three .jsonl files, i.e. train.jsonl, valid.jsonl, and test.jsonl.

For each file, each line in the uncompressed file represents one function, with the following fields (a minimal reading example follows the list):

- **repo:** the owner/repo
- **path:** the full path to the original file
- **func_name:** the function or method name
- **original_string:** the raw string before tokenization or parsing
- **language:** the programming language
- **code/function:** the part of `original_string` that is code
- **code_tokens/function_tokens:** the tokenized version of `code`
- **docstring:** the top-level comment or docstring, if it exists in the original string
- **docstring_tokens:** the tokenized version of `docstring`
- **url:** the url for the example (identifies the natural language query)
- **idx:** the index of the code (identifies the code)
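
As a quick sanity check, here is a minimal sketch that reads the first record of one split with the standard library and prints a few of the fields listed above; the file path is an assumption.

```python
# Minimal sketch: inspect one record of a pre-processed split.
# The path is an assumption; point it at your local copy of the file.
import json

with open("test.jsonl", encoding="utf-8") as f:
    example = json.loads(next(f))  # one JSON object per line

for key in ("repo", "func_name", "docstring", "url", "idx"):
    print(key, "->", example.get(key))
```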

### Data Statistics

Data statistics of the dataset are shown in the table below:

| Split | #Examples |
| ----- | :-------: |
| Train |  251,820  |
| Dev   |   9,604   |
| Test  |  19,210   |

### Example

Given a text-code file evaluator/test.jsonl:

```json
{"url": "url0", "docstring": "doc0", "function": "fun0", "idx": 10}
{"url": "url1", "docstring": "doc1", "function": "fun1", "idx": 11}
{"url": "url2", "docstring": "doc2", "function": "fun2", "idx": 12}
{"url": "url3", "docstring": "doc3", "function": "fun3", "idx": 13}
{"url": "url4", "docstring": "doc4", "function": "fun4", "idx": 14}
```

### Input Predictions

For each natural-language url, sort the candidate codes in descending order of relevance and return their idx values in that order. For example:

```json
{"url": "url0", "answers": [10,11,12,13,14]}
{"url": "url1", "answers": [10,12,11,13,14]}
{"url": "url2", "answers": [13,11,12,10,14]}
{"url": "url3", "answers": [10,14,12,13,11]}
{"url": "url4", "answers": [10,11,12,13,14]}
```
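
The sketch below shows one way to produce such a prediction file from evaluator/test.jsonl. The scoring function is a placeholder (a simple token-overlap count between the query and a candidate function), not a real retrieval model, and the output file name `predictions.jsonl` is an assumption.

```python
# Minimal sketch: rank every candidate function for every query and write the
# answers in the format shown above. The scoring function is a placeholder;
# a real system would use a trained retrieval model. Output file name is assumed.
import json

def score(docstring: str, function: str) -> float:
    # Placeholder relevance score: size of the token overlap.
    return len(set(docstring.lower().split()) & set(function.lower().split()))

with open("evaluator/test.jsonl", encoding="utf-8") as f:
    examples = [json.loads(line) for line in f]

with open("predictions.jsonl", "w", encoding="utf-8") as out:
    for query in examples:
        ranked = sorted(
            examples,
            key=lambda cand: score(query["docstring"], cand["function"]),
            reverse=True,  # descending sort: best match first
        )
        record = {"url": query["url"], "answers": [cand["idx"] for cand in ranked]}
        out.write(json.dumps(record) + "\n")
```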

## Reference

<pre><code>@article{husain2019codesearchnet,
  title={CodeSearchNet Challenge: Evaluating the State of Semantic Code Search},
  author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  journal={arXiv preprint arXiv:1909.09436},
  year={2019}
}</code></pre>