readme: add initial version of model card (#1)
README.md ADDED

---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous
    ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop .
    ( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES
    . 31 décembre .
---

# Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset

This Flair model was fine-tuned on the
[French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar)
NER Dataset using hmBERT Tiny as backbone LM.

The ICDAR-Europeana NER Dataset is a preprocessed variant of the
[Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French.

The following NEs were annotated: `PER`, `LOC` and `ORG`.
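
A minimal usage sketch with Flair is shown below. The repository ID is an assumption: this card does not name the released model, so the snippet points at the best-performing run linked in the Results section, which is assumed to be a loadable Flair checkpoint. Replace it with the actual model ID if it differs.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Assumed model ID (best dev-score run from the Results table) -- replace with
# the actual repository name of this model if it differs.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# Example taken from the widget text above (noisy OCR from historic newspapers).
sentence = Sentence("Des avions ennemis lancent dix-sept bombes sur Dunkerque .")

# Predict PER, LOC and ORG spans and print them.
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```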

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`

We report the micro F1-score on the development set:

| Configuration     | Seed 1       | Seed 2       | Seed 3       | Seed 4          | Seed 5       | Average         |
|-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6013][1]  | [0.5273][2]  | [0.6086][3]  | [**0.6208**][4] | [0.5731][5]  | 0.5862 ± 0.0373 |
| `bs8-e10-lr5e-05` | [0.6186][6]  | [0.4917][7]  | [0.6056][8]  | [0.5972][9]     | [0.4881][10] | 0.5602 ± 0.0647 |
| `bs4-e10-lr3e-05` | [0.6034][11] | [0.4735][12] | [0.5837][13] | [0.578][14]     | [0.4716][15] | 0.542 ± 0.0641  |
| `bs8-e10-lr3e-05` | [0.5743][16] | [0.4119][17] | [0.551][18]  | [0.5261][19]    | [0.4408][20] | 0.5008 ± 0.0708 |

[1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
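
The Average column is the mean of the five seed scores together with their sample standard deviation. A short sketch that reproduces the first row of the table:

```python
from statistics import mean, stdev

# Development-set micro F1-scores of the five seeds for `bs4-e10-lr5e-05`.
scores = [0.6013, 0.5273, 0.6086, 0.6208, 0.5731]

# stdev() is the sample standard deviation (n - 1 in the denominator).
print(f"{mean(scores):.4f} ± {stdev(scores):.4f}")  # -> 0.5862 ± 0.0373
```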

The [training log](training.log) and TensorBoard logs (the latter not available for the hmBERT Base model) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
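
For orientation, a minimal Flair fine-tuning sketch is shown below. It is an assumption-laden illustration rather than the exact hmBench setup: the corpus folder and output path are placeholders, and only one of the searched configurations (batch size 4, learning rate 5e-05, 10 epochs, first-subtoken pooling, last layer, no CRF) is spelled out.

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Placeholder folder containing the French ICDAR-Europeana splits in CoNLL column format.
corpus = ColumnCorpus("data/icdar-europeana-fr", column_format={0: "text", 1: "ner"})
label_dict = corpus.make_label_dictionary(label_type="ner")

# hmBERT Tiny backbone, fine-tuned end-to-end (last layer, first-subtoken pooling).
embeddings = TransformerWordEmbeddings(
    model="dbmdz/bert-tiny-historic-multilingual-cased",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

# Plain linear tagging head: no RNN, no CRF.
tagger = SequenceTagger(
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
)

# One of the searched configurations: batch size 4, learning rate 5e-05, 10 epochs.
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/icdar-fr-hmbert-tiny",
    learning_rate=5e-05,
    mini_batch_size=4,
    max_epochs=10,
)
```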

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️

