We fine-tuned the following models and evaluated them on the dev set of JGLUE.
We tuned the learning rate and the number of training epochs for each model and task following [the JGLUE paper](https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_pdf/-char/ja).

| Model                         | MARC-ja/acc | JSTS/spearman | JNLI/acc | JSQuAD/EM | JSQuAD/F1 | JComQA/acc |
|-------------------------------|-------------|---------------|----------|-----------|-----------|------------|
| Waseda RoBERTa base           | 0.965       | 0.876         | 0.905    | 0.853     | 0.916     | 0.853      |
| Waseda RoBERTa large (seq512) | 0.969       | 0.890         | 0.928    | 0.910     | 0.955     | 0.900      |
| LUKE Japanese base\*          | 0.965       | 0.877         | 0.912    | -         | -         | 0.842      |
| LUKE Japanese large\*         | 0.965       | 0.902         | 0.927    | -         | -         | 0.893      |
| DeBERTaV2 base                | 0.970       | 0.886         | 0.922    | 0.899     | 0.951     | 0.873      |
| DeBERTaV2 large               | 0.968       | 0.892         | 0.924    | 0.912     | 0.959     | 0.890      |

\*The scores of LUKE are from [the official repository](https://github.com/studio-ousia/luke).
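The JSTS column reports Spearman rank correlation between predicted and gold sentence-similarity scores (the other columns are accuracy or EM/F1). For reference, here is a minimal pure-Python sketch of that metric — illustrative only, not the evaluation code used to produce the numbers above:

```python
def spearman(xs, ys):
    """Spearman rank correlation: the Pearson correlation of the ranks.

    Ties receive the average of the 1-based ranks they span
    (fractional ranking), matching the standard definition.
    """
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            # Find the block of tied values starting at position i.
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank over the tied block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Perfectly monotone predictions score 1.0 regardless of scale, which is why Spearman (rather than raw accuracy) is the conventional metric for the STS-style regression task.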

## Acknowledgments