Commit 5472d34
Parent(s): b2d671f
init
- .DS_Store +0 -0
- .gitattributes +3 -0
- .gitignore +1 -0
- LICENSE +21 -0
- README.md +48 -0
- annotations.toml +44 -0
- assets/.gitattributes +6 -0
- assets/annotated_data.jsonl +3 -0
- assets/annotated_data_spans.jsonl +3 -0
- assets/dev.jsonl +3 -0
- assets/test.jsonl +3 -0
- assets/train.jsonl +3 -0
- configs/base_config_lg.cfg +110 -0
- configs/base_config_md.cfg +110 -0
- configs/base_config_sm.cfg +111 -0
- configs/base_config_trf.cfg +105 -0
- configs/config.cfg +151 -0
- configs/config_lg.cfg +151 -0
- configs/config_md.cfg +151 -0
- configs/config_sm.cfg +151 -0
- configs/config_trf.cfg +153 -0
- corpus/.DS_Store +0 -0
- corpus/dev.spacy +3 -0
- corpus/test.spacy +3 -0
- corpus/train.spacy +3 -0
- gold-training-data/christine_0020_0040_annotated.jsonl +3 -0
- gold-training-data/christine_0040_0060.jsonl +3 -0
- gold-training-data/greg_0000_0020.jsonl +3 -0
- gold-training-data/greg_60_80.jsonl +3 -0
- model_comparison.md +47 -0
- model_comparison.tex +17 -0
- notebooks/prepare-training.ipynb +246 -0
- notebooks/testing.ipynb +1010 -0
- project.lock +174 -0
- project.yml +268 -0
- requirements.txt +3 -0
- scripts/build-table.py +68 -0
- scripts/convert.py +41 -0
- scripts/convert_sents.py +58 -0
- scripts/readme.py +82 -0
- scripts/split.py +54 -0
- spacy-project.md +53 -0
.DS_Store
ADDED
Binary file (6.15 kB)
.gitattributes
CHANGED
@@ -1,3 +1,5 @@
+*.jsonl filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
 *.7z filter=lfs diff=lfs merge=lfs -text
 *.arrow filter=lfs diff=lfs merge=lfs -text
 *.bin filter=lfs diff=lfs merge=lfs -text
@@ -53,3 +55,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+*.spacy filter=lfs diff=lfs merge=lfs -text
.gitignore
ADDED
@@ -0,0 +1 @@
+training/
LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 William Mattingly
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
README.md
CHANGED
@@ -1,3 +1,51 @@
 ---
 license: mit
 ---
+
+# Overall Model Performance
+| Model | Precision | Recall | F-Score |
+|:------------|------------:|---------:|----------:|
+| Small | 94.1 | 89.2 | 91.6 |
+| Medium | 94 | 90.5 | 92.2 |
+| Large | 94.1 | 91.7 | 92.9 |
+| Transformer | 93.6 | 91.6 | 92.6 |
+
+# Performance per Label
+| Model | Label | Precision | Recall | F-Score |
+|:------------|:----------------|------------:|---------:|----------:|
+| Small | BUILDING | 94.7 | 90.2 | 92.4 |
+| Medium | BUILDING | 95.2 | 92.8 | 94 |
+| Large | BUILDING | 94.8 | 93.2 | 94 |
+| Transformer | BUILDING | 94.3 | 94.2 | 94.3 |
+| Small | COUNTRY | 97.6 | 94.6 | 96.1 |
+| Medium | COUNTRY | 96.5 | 96.3 | 96.4 |
+| Large | COUNTRY | 97.7 | 96.8 | 97.2 |
+| Transformer | COUNTRY | 96.6 | 96.8 | 96.7 |
+| Small | DLF | 92.4 | 86.4 | 89.3 |
+| Medium | DLF | 95 | 84.1 | 89.2 |
+| Large | DLF | 93.5 | 88.4 | 90.9 |
+| Transformer | DLF | 94.1 | 90.4 | 92.2 |
+| Small | ENV_FEATURES | 86.6 | 81.2 | 83.8 |
+| Medium | ENV_FEATURES | 86.3 | 79.1 | 82.5 |
+| Large | ENV_FEATURES | 77.5 | 90.1 | 83.3 |
+| Transformer | ENV_FEATURES | 85.1 | 86.9 | 86 |
+| Small | INT_SPACE | 93.8 | 85.9 | 89.6 |
+| Medium | INT_SPACE | 93.9 | 91.3 | 92.6 |
+| Large | INT_SPACE | 92.4 | 93.8 | 93.1 |
+| Transformer | INT_SPACE | 94.6 | 91.8 | 93.2 |
+| Small | NPIP | 92.7 | 86.4 | 89.4 |
+| Medium | NPIP | 94.5 | 82.4 | 88 |
+| Large | NPIP | 92.7 | 86.6 | 89.6 |
+| Transformer | NPIP | 94.8 | 83 | 88.5 |
+| Small | POPULATED_PLACE | 94 | 90.6 | 92.3 |
+| Medium | POPULATED_PLACE | 93 | 91.2 | 92.1 |
+| Large | POPULATED_PLACE | 95.2 | 90.4 | 92.7 |
+| Transformer | POPULATED_PLACE | 92.1 | 91.3 | 91.7 |
+| Small | REGION | 84.4 | 68.4 | 75.6 |
+| Medium | REGION | 81.4 | 75.8 | 78.5 |
+| Large | REGION | 83 | 76.8 | 79.8 |
+| Transformer | REGION | 81.2 | 68.4 | 74.3 |
+| Small | SPATIAL_OBJ | 96 | 90 | 92.9 |
+| Medium | SPATIAL_OBJ | 95.2 | 93.8 | 94.5 |
+| Large | SPATIAL_OBJ | 95.3 | 95.5 | 95.4 |
+| Transformer | SPATIAL_OBJ | 96.3 | 92.8 | 94.5 |
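The F-Score column in the added README tables is the harmonic mean of precision and recall. A minimal sketch (not part of the commit) that re-derives the overall scores from the table above:

```python
# Check that the reported F-scores follow F = 2PR / (P + R);
# the (P, R) pairs are copied from the README's overall table.
rows = {"Small": (94.1, 89.2), "Medium": (94.0, 90.5),
        "Large": (94.1, 91.7), "Transformer": (93.6, 91.6)}
for model, (p, r) in rows.items():
    f = 2 * p * r / (p + r)
    print(f"{model:<11} F = {f:.1f}")  # 91.6, 92.2, 92.9, 92.6
```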
annotations.toml
ADDED
@@ -0,0 +1,44 @@
+[BUILDING]
+Train = 8162
+Dev = 1811
+Test = 1540
+
+[COUNTRY]
+Train = 4227
+Dev = 849
+Test = 893
+
+[DLF]
+Train = 4305
+Dev = 998
+Test = 888
+
+[ENV_FEATURES]
+Train = 1139
+Dev = 225
+Test = 191
+
+[INT_SPACE]
+Train = 2309
+Dev = 550
+Test = 403
+
+[NPIP]
+Train = 1799
+Dev = 422
+Test = 352
+
+[POPULATED_PLACE]
+Train = 10505
+Dev = 2427
+Test = 2290
+
+[REGION]
+Train = 969
+Dev = 249
+Test = 190
+
+[SPATIAL_OBJ]
+Train = 3992
+Dev = 908
+Test = 780
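The added annotations.toml records how many spans of each label fall in the train, dev, and test splits. A minimal sketch for summarizing it, assuming Python 3.11+ so the standard-library tomllib is available:

```python
# Summarize per-label span counts from annotations.toml (assumes Python 3.11+).
import tomllib

with open("annotations.toml", "rb") as f:
    counts = tomllib.load(f)

for label, splits in counts.items():
    total = sum(splits.values())  # Train + Dev + Test
    print(f"{label:<16} train={splits['Train']:>6}  dev={splits['Dev']:>5}  "
          f"test={splits['Test']:>5}  total={total:>6}")
```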
assets/.gitattributes
ADDED
@@ -0,0 +1,6 @@
+# This is needed to ensure that text-based assets included with project
+# templates and cloned via Git end up with consistent line endings and
+# the same checksums. It will prevent Git from converting line endings.
+# Otherwise, a user cloning assets on Windows may end up with a different
+# checksum due to different line endings.
+* -text
assets/annotated_data.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:655a46331231574542658bf468c3716b2c76459ba7ee7d87ebc7239a16d4c535
+size 125899652
assets/annotated_data_spans.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea9e834cb3a87b06915c12e36759ff9cbeb48dbd3abdebc0c559b754105e4940
+size 112457147
assets/dev.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae88de1d832df838613b975381734ae8a744ef08de0bde105f14acdfe2671ca8
+size 18859471
assets/test.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d42e779b0f1bad2aaed924a75777bb2280e6d58047f898d66afaa5fd7549d21b
+size 15674151
assets/train.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:908cb3932321710a3f5be506eca92de804c08d63254d8f269c6ff77c33112f32
+size 77923525
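The assets/*.jsonl entries above are Git LFS pointer files, so the annotation data itself is not part of this diff and has to be fetched (for example with git lfs pull) before use. A minimal sketch for peeking at one record once the data is present; the record schema is not shown in this commit, so no field names are assumed:

```python
# Peek at the first record of the training data (assumes the LFS objects
# have been pulled, so assets/train.jsonl is real JSONL, not a pointer file).
import json

with open("assets/train.jsonl", encoding="utf-8") as f:
    first = json.loads(next(f))

print(sorted(first))  # list the top-level keys of one record
```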
configs/base_config_lg.cfg
ADDED
@@ -0,0 +1,110 @@
+# This is an auto-generated partial config. To use it with 'spacy train'
+# you can run spacy init fill-config to auto-fill all default settings:
+# python -m spacy init fill-config ./base_config.cfg ./config.cfg
+[paths]
+train = null
+dev = null
+vectors = "en_core_web_lg"
+[system]
+gpu_allocator = null
+
+[nlp]
+lang = "en"
+pipeline = ["tok2vec","spancat"]
+batch_size = 1000
+
+[components]
+
+[components.tok2vec]
+factory = "tok2vec"
+
+[components.tok2vec.model]
+@architectures = "spacy.Tok2Vec.v2"
+
+[components.tok2vec.model.embed]
+@architectures = "spacy.MultiHashEmbed.v2"
+width = ${components.tok2vec.model.encode.width}
+attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE"]
+rows = [5000, 1000, 2500, 2500]
+include_static_vectors = true
+
+[components.tok2vec.model.encode]
+@architectures = "spacy.MaxoutWindowEncoder.v2"
+width = 256
+depth = 8
+window_size = 1
+maxout_pieces = 3
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy.Tok2VecListener.v1"
+width = ${components.tok2vec.model.encode.width}
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[corpora]
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 20000
+max_epochs = 10
+max_steps = 0
+eval_frequency = 200
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.batcher]
+@batchers = "spacy.batch_by_words.v1"
+discard_oversize = false
+tolerance = 0.2
+get_length = null
+
+[training.batcher.size]
+@schedules = "compounding.v1"
+start = 1000
+stop = 10000
+compound = 1.001
+t = 0.0
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+
+[initialize]
+vectors = ${paths.vectors}
configs/base_config_md.cfg
ADDED
@@ -0,0 +1,110 @@
+# This is an auto-generated partial config. To use it with 'spacy train'
+# you can run spacy init fill-config to auto-fill all default settings:
+# python -m spacy init fill-config ./base_config.cfg ./config.cfg
+[paths]
+train = null
+dev = null
+vectors = "en_core_web_md"
+[system]
+gpu_allocator = null
+
+[nlp]
+lang = "en"
+pipeline = ["tok2vec","spancat"]
+batch_size = 1000
+
+[components]
+
+[components.tok2vec]
+factory = "tok2vec"
+
+[components.tok2vec.model]
+@architectures = "spacy.Tok2Vec.v2"
+
+[components.tok2vec.model.embed]
+@architectures = "spacy.MultiHashEmbed.v2"
+width = ${components.tok2vec.model.encode.width}
+attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE"]
+rows = [5000, 1000, 2500, 2500]
+include_static_vectors = true
+
+[components.tok2vec.model.encode]
+@architectures = "spacy.MaxoutWindowEncoder.v2"
+width = 256
+depth = 8
+window_size = 1
+maxout_pieces = 3
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy.Tok2VecListener.v1"
+width = ${components.tok2vec.model.encode.width}
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[corpora]
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 20000
+max_epochs = 10
+max_steps = 0
+eval_frequency = 200
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.batcher]
+@batchers = "spacy.batch_by_words.v1"
+discard_oversize = false
+tolerance = 0.2
+get_length = null
+
+[training.batcher.size]
+@schedules = "compounding.v1"
+start = 1000
+stop = 10000
+compound = 1.001
+t = 0.0
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+
+[initialize]
+vectors = ${paths.vectors}
configs/base_config_sm.cfg
ADDED
@@ -0,0 +1,111 @@
+# This is an auto-generated partial config. To use it with 'spacy train'
+# you can run spacy init fill-config to auto-fill all default settings:
+# python -m spacy init fill-config ./base_config.cfg ./config.cfg
+[paths]
+train = null
+dev = null
+vectors = null
+[system]
+gpu_allocator = null
+
+[nlp]
+lang = "en"
+pipeline = ["tok2vec","spancat"]
+batch_size = 10000
+
+[components]
+
+[components.tok2vec]
+factory = "tok2vec"
+
+[components.tok2vec.model]
+@architectures = "spacy.Tok2Vec.v2"
+
+[components.tok2vec.model.embed]
+@architectures = "spacy.MultiHashEmbed.v2"
+width = ${components.tok2vec.model.encode.width}
+attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE"]
+rows = [5000, 1000, 2500, 2500]
+include_static_vectors = false
+
+[components.tok2vec.model.encode]
+@architectures = "spacy.MaxoutWindowEncoder.v2"
+width = 96
+depth = 4
+window_size = 1
+maxout_pieces = 3
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy.Tok2VecListener.v1"
+width = ${components.tok2vec.model.encode.width}
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[corpora]
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 20000
+max_epochs = 0
+max_steps = 0
+eval_frequency = 300
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.checkpoints]
+save_n_steps = 1000 # Save the model every 1000 steps
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+
+[training.batcher]
+@batchers = "spacy.batch_by_words.v1"
+discard_oversize = false
+tolerance = 0.2
+
+[training.batcher.size]
+@schedules = "compounding.v1"
+start = 100
+stop = 1000
+compound = 1.001
+
+[initialize]
+vectors = ${paths.vectors}
configs/base_config_trf.cfg
ADDED
@@ -0,0 +1,105 @@
+# This is an auto-generated partial config. To use it with 'spacy train'
+# you can run spacy init fill-config to auto-fill all default settings:
+# python -m spacy init fill-config ./base_config.cfg ./config.cfg
+[paths]
+train = null
+dev = null
+vectors = null
+[system]
+gpu_allocator = "pytorch"
+
+[nlp]
+lang = "en"
+pipeline = ["transformer","spancat"]
+batch_size = 128
+
+[components]
+
+[components.transformer]
+factory = "transformer"
+
+[components.transformer.model]
+@architectures = "spacy-transformers.TransformerModel.v3"
+name = "roberta-base"
+tokenizer_config = {"use_fast": true}
+
+[components.transformer.model.get_spans]
+@span_getters = "spacy-transformers.strided_spans.v1"
+window = 128
+stride = 96
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy-transformers.TransformerListener.v1"
+grad_factor = 1.0
+
+[components.spancat.model.tok2vec.pooling]
+@layers = "reduce_mean.v1"
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[corpora]
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 20000
+max_epochs = 10
+max_steps = 0
+eval_frequency = 200
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+
+[training.optimizer.learn_rate]
+@schedules = "warmup_linear.v1"
+warmup_steps = 250
+total_steps = 20000
+initial_rate = 5e-5
+
+[training.batcher]
+@batchers = "spacy.batch_by_padded.v1"
+discard_oversize = true
+size = 2000
+buffer = 256
+
+[initialize]
+vectors = ${paths.vectors}
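The header comments of the four partial configs above point to spacy init fill-config, which is presumably how the filled config*.cfg files that follow were produced. A minimal sketch for inspecting one of the filled configs, assuming spaCy v3.x is installed and the code is run from the repository root:

```python
# Load a filled config and print the settings that differ between the
# small/medium/large/transformer variants (vectors, pipeline, spans key).
from spacy.util import load_config

cfg = load_config("configs/config_lg.cfg")
print(cfg["nlp"]["pipeline"])                      # ['tok2vec', 'spancat']
print(cfg["paths"]["vectors"])                     # 'en_core_web_lg'
print(cfg["components"]["spancat"]["spans_key"])   # 'sc'
```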
configs/config.cfg
ADDED
@@ -0,0 +1,151 @@
+[paths]
+train = null
+dev = null
+vectors = null
+init_tok2vec = null
+
+[system]
+gpu_allocator = null
+seed = 0
+
+[nlp]
+lang = "en"
+pipeline = ["tok2vec","spancat"]
+batch_size = 10000
+disabled = []
+before_creation = null
+after_creation = null
+after_pipeline_creation = null
+tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
+vectors = {"@vectors":"spacy.Vectors.v1"}
+
+[components]
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy.Tok2VecListener.v1"
+width = ${components.tok2vec.model.encode.width}
+upstream = "*"
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[components.tok2vec]
+factory = "tok2vec"
+
+[components.tok2vec.model]
+@architectures = "spacy.Tok2Vec.v2"
+
+[components.tok2vec.model.embed]
+@architectures = "spacy.MultiHashEmbed.v2"
+width = ${components.tok2vec.model.encode.width}
+attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
+rows = [5000,1000,2500,2500]
+include_static_vectors = false
+
+[components.tok2vec.model.encode]
+@architectures = "spacy.MaxoutWindowEncoder.v2"
+width = 96
+depth = 4
+window_size = 1
+maxout_pieces = 3
+
+[corpora]
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 20000
+max_epochs = 10
+max_steps = 0
+eval_frequency = 200
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.batcher]
+@batchers = "spacy.batch_by_words.v1"
+discard_oversize = false
+tolerance = 0.2
+get_length = null
+
+[training.batcher.size]
+@schedules = "compounding.v1"
+start = 1000
+stop = 10000
+compound = 1.001
+t = 0.0
+
+[training.logger]
+@loggers = "spacy.ConsoleLogger.v1"
+progress_bar = false
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+beta1 = 0.9
+beta2 = 0.999
+L2_is_weight_decay = true
+L2 = 0.01
+grad_clip = 1.0
+use_averages = false
+eps = 0.00000001
+learn_rate = 0.001
+
+[training.score_weights]
+spans_sc_f = 1.0
+spans_sc_p = 0.0
+spans_sc_r = 0.0
+
+[pretraining]
+
+[initialize]
+vectors = ${paths.vectors}
+init_tok2vec = ${paths.init_tok2vec}
+vocab_data = null
+lookups = null
+before_init = null
+after_init = null
+
+[initialize.components]
+
+[initialize.tokenizer]
configs/config_lg.cfg
ADDED
@@ -0,0 +1,151 @@
+[paths]
+train = null
+dev = null
+vectors = "en_core_web_lg"
+init_tok2vec = null
+
+[system]
+gpu_allocator = null
+seed = 0
+
+[nlp]
+lang = "en"
+pipeline = ["tok2vec","spancat"]
+batch_size = 1000
+disabled = []
+before_creation = null
+after_creation = null
+after_pipeline_creation = null
+tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
+vectors = {"@vectors":"spacy.Vectors.v1"}
+
+[components]
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy.Tok2VecListener.v1"
+width = ${components.tok2vec.model.encode.width}
+upstream = "*"
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[components.tok2vec]
+factory = "tok2vec"
+
+[components.tok2vec.model]
+@architectures = "spacy.Tok2Vec.v2"
+
+[components.tok2vec.model.embed]
+@architectures = "spacy.MultiHashEmbed.v2"
+width = ${components.tok2vec.model.encode.width}
+attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
+rows = [5000,1000,2500,2500]
+include_static_vectors = true
+
+[components.tok2vec.model.encode]
+@architectures = "spacy.MaxoutWindowEncoder.v2"
+width = 256
+depth = 8
+window_size = 1
+maxout_pieces = 3
+
+[corpora]
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 20000
+max_epochs = 10
+max_steps = 0
+eval_frequency = 200
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.batcher]
+@batchers = "spacy.batch_by_words.v1"
+discard_oversize = false
+tolerance = 0.2
+get_length = null
+
+[training.batcher.size]
+@schedules = "compounding.v1"
+start = 1000
+stop = 10000
+compound = 1.001
+t = 0.0
+
+[training.logger]
+@loggers = "spacy.ConsoleLogger.v1"
+progress_bar = false
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+beta1 = 0.9
+beta2 = 0.999
+L2_is_weight_decay = true
+L2 = 0.01
+grad_clip = 1.0
+use_averages = false
+eps = 0.00000001
+learn_rate = 0.001
+
+[training.score_weights]
+spans_sc_f = 1.0
+spans_sc_p = 0.0
+spans_sc_r = 0.0
+
+[pretraining]
+
+[initialize]
+vectors = ${paths.vectors}
+init_tok2vec = ${paths.init_tok2vec}
+vocab_data = null
+lookups = null
+before_init = null
+after_init = null
+
+[initialize.components]
+
+[initialize.tokenizer]
configs/config_md.cfg
ADDED
@@ -0,0 +1,151 @@
+[paths]
+train = null
+dev = null
+vectors = "en_core_web_md"
+init_tok2vec = null
+
+[system]
+gpu_allocator = null
+seed = 0
+
+[nlp]
+lang = "en"
+pipeline = ["tok2vec","spancat"]
+batch_size = 1000
+disabled = []
+before_creation = null
+after_creation = null
+after_pipeline_creation = null
+tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
+vectors = {"@vectors":"spacy.Vectors.v1"}
+
+[components]
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy.Tok2VecListener.v1"
+width = ${components.tok2vec.model.encode.width}
+upstream = "*"
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[components.tok2vec]
+factory = "tok2vec"
+
+[components.tok2vec.model]
+@architectures = "spacy.Tok2Vec.v2"
+
+[components.tok2vec.model.embed]
+@architectures = "spacy.MultiHashEmbed.v2"
+width = ${components.tok2vec.model.encode.width}
+attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
+rows = [5000,1000,2500,2500]
+include_static_vectors = true
+
+[components.tok2vec.model.encode]
+@architectures = "spacy.MaxoutWindowEncoder.v2"
+width = 256
+depth = 8
+window_size = 1
+maxout_pieces = 3
+
+[corpora]
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 20000
+max_epochs = 10
+max_steps = 0
+eval_frequency = 200
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.batcher]
+@batchers = "spacy.batch_by_words.v1"
+discard_oversize = false
+tolerance = 0.2
+get_length = null
+
+[training.batcher.size]
+@schedules = "compounding.v1"
+start = 1000
+stop = 10000
+compound = 1.001
+t = 0.0
+
+[training.logger]
+@loggers = "spacy.ConsoleLogger.v1"
+progress_bar = false
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+beta1 = 0.9
+beta2 = 0.999
+L2_is_weight_decay = true
+L2 = 0.01
+grad_clip = 1.0
+use_averages = false
+eps = 0.00000001
+learn_rate = 0.001
+
+[training.score_weights]
+spans_sc_f = 1.0
+spans_sc_p = 0.0
+spans_sc_r = 0.0
+
+[pretraining]
+
+[initialize]
+vectors = ${paths.vectors}
+init_tok2vec = ${paths.init_tok2vec}
+vocab_data = null
+lookups = null
+before_init = null
+after_init = null
+
+[initialize.components]
+
+[initialize.tokenizer]
configs/config_sm.cfg
ADDED
@@ -0,0 +1,151 @@
+[paths]
+train = null
+dev = null
+vectors = null
+init_tok2vec = null
+
+[system]
+gpu_allocator = null
+seed = 0
+
+[nlp]
+lang = "en"
+pipeline = ["tok2vec","spancat"]
+batch_size = 1000
+disabled = []
+before_creation = null
+after_creation = null
+after_pipeline_creation = null
+tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
+vectors = {"@vectors":"spacy.Vectors.v1"}
+
+[components]
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy.Tok2VecListener.v1"
+width = ${components.tok2vec.model.encode.width}
+upstream = "*"
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[components.tok2vec]
+factory = "tok2vec"
+
+[components.tok2vec.model]
+@architectures = "spacy.Tok2Vec.v2"
+
+[components.tok2vec.model.embed]
+@architectures = "spacy.MultiHashEmbed.v2"
+width = ${components.tok2vec.model.encode.width}
+attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
+rows = [5000,1000,2500,2500]
+include_static_vectors = false
+
+[components.tok2vec.model.encode]
+@architectures = "spacy.MaxoutWindowEncoder.v2"
+width = 96
+depth = 4
+window_size = 1
+maxout_pieces = 3
+
+[corpora]
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 200
+max_epochs = 10
+max_steps = 0
+eval_frequency = 20
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.batcher]
+@batchers = "spacy.batch_by_words.v1"
+discard_oversize = false
+tolerance = 0.2
+get_length = null
+
+[training.batcher.size]
+@schedules = "compounding.v1"
+start = 1000
+stop = 10000
+compound = 1.001
+t = 0.0
+
+[training.logger]
+@loggers = "spacy.ConsoleLogger.v1"
+progress_bar = false
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+beta1 = 0.9
+beta2 = 0.999
+L2_is_weight_decay = true
+L2 = 0.01
+grad_clip = 1.0
+use_averages = false
+eps = 0.00000001
+learn_rate = 0.001
+
+[training.score_weights]
+spans_sc_f = 1.0
+spans_sc_p = 0.0
+spans_sc_r = 0.0
+
+[pretraining]
+
+[initialize]
+vectors = ${paths.vectors}
+init_tok2vec = ${paths.init_tok2vec}
+vocab_data = null
+lookups = null
+before_init = null
+after_init = null
+
+[initialize.components]
+
+[initialize.tokenizer]
configs/config_trf.cfg
ADDED
@@ -0,0 +1,153 @@
+[paths]
+train = null
+dev = null
+vectors = null
+init_tok2vec = null
+
+[system]
+gpu_allocator = "pytorch"
+seed = 0
+
+[nlp]
+lang = "en"
+pipeline = ["transformer","spancat"]
+batch_size = 5000
+disabled = []
+before_creation = null
+after_creation = null
+after_pipeline_creation = null
+tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
+vectors = {"@vectors":"spacy.Vectors.v1"}
+
+[components]
+
+[components.spancat]
+factory = "spancat"
+max_positive = null
+scorer = {"@scorers":"spacy.spancat_scorer.v1"}
+spans_key = "sc"
+threshold = 0.5
+
+[components.spancat.model]
+@architectures = "spacy.SpanCategorizer.v1"
+
+[components.spancat.model.reducer]
+@layers = "spacy.mean_max_reducer.v1"
+hidden_size = 128
+
+[components.spancat.model.scorer]
+@layers = "spacy.LinearLogistic.v1"
+nO = null
+nI = null
+
+[components.spancat.model.tok2vec]
+@architectures = "spacy-transformers.TransformerListener.v1"
+grad_factor = 1.0
+pooling = {"@layers":"reduce_mean.v1"}
+upstream = "*"
+
+[components.spancat.suggester]
+@misc = "spacy.ngram_suggester.v1"
+sizes = [1,2,3]
+
+[components.transformer]
+factory = "transformer"
+max_batch_items = 4096
+set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}
+
+[components.transformer.model]
+@architectures = "spacy-transformers.TransformerModel.v3"
+name = "roberta-base"
+mixed_precision = false
+
+[components.transformer.model.get_spans]
+@span_getters = "spacy-transformers.strided_spans.v1"
+window = 128
+stride = 96
+
+[components.transformer.model.grad_scaler_config]
+
+[components.transformer.model.tokenizer_config]
+use_fast = true
+
+[components.transformer.model.transformer_config]
+
+[corpora]
+
+[corpora.dev]
+@readers = "spacy.Corpus.v1"
+path = ${paths.dev}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[corpora.train]
+@readers = "spacy.Corpus.v1"
+path = ${paths.train}
+max_length = 0
+gold_preproc = false
+limit = 0
+augmenter = null
+
+[training]
+dev_corpus = "corpora.dev"
+train_corpus = "corpora.train"
+seed = ${system.seed}
+gpu_allocator = ${system.gpu_allocator}
+dropout = 0.1
+accumulate_gradient = 1
+patience = 20000
+max_epochs = 10
+max_steps = 0
+eval_frequency = 200
+frozen_components = []
+annotating_components = []
+before_to_disk = null
+before_update = null
+
+[training.batcher]
+@batchers = "spacy.batch_by_padded.v1"
+discard_oversize = true
+size = 2000
+buffer = 256
+get_length = null
+
+[training.logger]
+@loggers = "spacy.ConsoleLogger.v1"
+progress_bar = false
+
+[training.optimizer]
+@optimizers = "Adam.v1"
+beta1 = 0.9
+beta2 = 0.999
+L2_is_weight_decay = true
+L2 = 0.01
+grad_clip = 1.0
+use_averages = false
+eps = 0.00000001
+
+[training.optimizer.learn_rate]
+@schedules = "warmup_linear.v1"
+warmup_steps = 250
+total_steps = 20000
+initial_rate = 0.00005
+
+[training.score_weights]
+spans_sc_f = 1.0
+spans_sc_p = 0.0
+spans_sc_r = 0.0
+
+[pretraining]
+
+[initialize]
+vectors = ${paths.vectors}
+init_tok2vec = ${paths.init_tok2vec}
+vocab_data = null
+lookups = null
+before_init = null
+after_init = null
+
+[initialize.components]
+
+[initialize.tokenizer]
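All of the configs above set spans_key = "sc", so a pipeline trained with any of them stores its predictions in doc.spans["sc"]. A minimal usage sketch; the training/model-best path is spaCy's default output location and is only an assumption here, since the training/ directory is git-ignored in this commit:

```python
# Load a trained spancat pipeline and print its predicted spans.
# "training/model-best" is assumed; it is not part of this commit.
import spacy

nlp = spacy.load("training/model-best")
doc = nlp("The cathedral overlooked the river valley north of the old town.")
for span in doc.spans["sc"]:
    print(span.text, span.label_)
```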
corpus/.DS_Store
ADDED
Binary file (6.15 kB)
corpus/dev.spacy
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1560591d4e8a0064c6c359a322d89b679935c6cc530f5fdcfbe553b8e06f21c3
+size 918183
corpus/test.spacy
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe99e1460aa508a681da92b37c134841509de398933a6a990eb479dd1de87c94
+size 787612
corpus/train.spacy
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d06e421c115d51d076d660262478e9cf5ca18b01c249b4bf93c26d1be3aeec30
+size 3869194
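The corpus/*.spacy files above are serialized spaCy DocBin collections, also stored through LFS. A minimal sketch for inspecting the dev split after the LFS objects have been pulled, assuming spaCy v3.x:

```python
# Count the documents and annotated spans in the dev corpus.
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
db = DocBin().from_disk("corpus/dev.spacy")
docs = list(db.get_docs(nlp.vocab))
n_spans = sum(len(doc.spans["sc"]) for doc in docs if "sc" in doc.spans)
print(f"{len(docs)} docs, {n_spans} spans under the 'sc' key")
```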
gold-training-data/christine_0020_0040_annotated.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9866f542e3e0a52ed20dc101448bd9a03254e714bf8e8ba6b164523356e06303
+size 31070680
gold-training-data/christine_0040_0060.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:edd79c66e3b4fbe5eeb4492e0aa1259e5a3ee879df75e2bac707be7056cb8537
+size 49669585
gold-training-data/greg_0000_0020.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84a946d422975a14022c16efdb8d7f05d5ef60c935c47e7eb57ca97ad7292bb7
+size 22953073
gold-training-data/greg_60_80.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed22a3c20a95fdd2afd900350bae6e0f46a93dc698ea60f21da29b79d92c67aa
+size 22206314
model_comparison.md
ADDED
@@ -0,0 +1,47 @@
# Overall Model Performance
| Model | Precision | Recall | F-Score |
|:------------|------------:|---------:|----------:|
| Small | 94.1 | 89.2 | 91.6 |
| Medium | 94 | 90.5 | 92.2 |
| Large | 94.1 | 91.7 | 92.9 |
| Transformer | 93.6 | 91.6 | 92.6 |

# Performance per Label
| Model | Label | Precision | Recall | F-Score |
|:------------|:----------------|------------:|---------:|----------:|
| Small | BUILDING | 94.7 | 90.2 | 92.4 |
| Medium | BUILDING | 95.2 | 92.8 | 94 |
| Large | BUILDING | 94.8 | 93.2 | 94 |
| Transformer | BUILDING | 94.3 | 94.2 | 94.3 |
| Small | COUNTRY | 97.6 | 94.6 | 96.1 |
| Medium | COUNTRY | 96.5 | 96.3 | 96.4 |
| Large | COUNTRY | 97.7 | 96.8 | 97.2 |
| Transformer | COUNTRY | 96.6 | 96.8 | 96.7 |
| Small | DLF | 92.4 | 86.4 | 89.3 |
| Medium | DLF | 95 | 84.1 | 89.2 |
| Large | DLF | 93.5 | 88.4 | 90.9 |
| Transformer | DLF | 94.1 | 90.4 | 92.2 |
| Small | ENV_FEATURES | 86.6 | 81.2 | 83.8 |
| Medium | ENV_FEATURES | 86.3 | 79.1 | 82.5 |
| Large | ENV_FEATURES | 77.5 | 90.1 | 83.3 |
| Transformer | ENV_FEATURES | 85.1 | 86.9 | 86 |
| Small | INT_SPACE | 93.8 | 85.9 | 89.6 |
| Medium | INT_SPACE | 93.9 | 91.3 | 92.6 |
| Large | INT_SPACE | 92.4 | 93.8 | 93.1 |
| Transformer | INT_SPACE | 94.6 | 91.8 | 93.2 |
| Small | NPIP | 92.7 | 86.4 | 89.4 |
| Medium | NPIP | 94.5 | 82.4 | 88 |
| Large | NPIP | 92.7 | 86.6 | 89.6 |
| Transformer | NPIP | 94.8 | 83 | 88.5 |
| Small | POPULATED_PLACE | 94 | 90.6 | 92.3 |
| Medium | POPULATED_PLACE | 93 | 91.2 | 92.1 |
| Large | POPULATED_PLACE | 95.2 | 90.4 | 92.7 |
| Transformer | POPULATED_PLACE | 92.1 | 91.3 | 91.7 |
| Small | REGION | 84.4 | 68.4 | 75.6 |
| Medium | REGION | 81.4 | 75.8 | 78.5 |
| Large | REGION | 83 | 76.8 | 79.8 |
| Transformer | REGION | 81.2 | 68.4 | 74.3 |
| Small | SPATIAL_OBJ | 96 | 90 | 92.9 |
| Medium | SPATIAL_OBJ | 95.2 | 93.8 | 94.5 |
| Large | SPATIAL_OBJ | 95.3 | 95.5 | 95.4 |
| Transformer | SPATIAL_OBJ | 96.3 | 92.8 | 94.5 |
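The per-label precision/recall/F-scores above are the kind of numbers spaCy's evaluate command reports for a spancat pipeline. A hedged sketch of regenerating them for one model; the training/md/model-best path matches the model loaded in notebooks/testing.ipynb, and the metrics output filename is an assumption:

import json
import subprocess

# Score the medium pipeline on the held-out test corpus and dump the metrics.
subprocess.run(
    [
        "python", "-m", "spacy", "evaluate",
        "training/md/model-best", "corpus/test.spacy",
        "--output", "metrics_md.json",
    ],
    check=True,
)

# Overall span scores; per-label scores are reported in the same JSON.
metrics = json.load(open("metrics_md.json"))
print(metrics.get("spans_sc_f"))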
model_comparison.tex
ADDED
@@ -0,0 +1,17 @@
\begin{tabular}{l|cccc|cccc|cccc}
\toprule
{} & \multicolumn{4}{c}{Precision} & \multicolumn{4}{c}{Recall} & \multicolumn{4}{c}{F-Score} \\
\textbf{Model} & Large & Medium & Small & Transformer & Large & Medium & Small & Transformer & Large & Medium & Small & Transformer \\
\textbf{Label} & & & & & & & & & & & & \\
\midrule
\textbf{BUILDING} & 94.8 & 95.2 & 94.7 & 94.3 & 93.2 & 92.8 & 90.2 & 94.2 & 94.0 & 94.0 & 92.4 & 94.3 \\
\textbf{COUNTRY} & 97.7 & 96.5 & 97.6 & 96.6 & 96.8 & 96.3 & 94.6 & 96.8 & 97.2 & 96.4 & 96.1 & 96.7 \\
\textbf{DLF} & 93.5 & 95.0 & 92.4 & 94.1 & 88.4 & 84.1 & 86.4 & 90.4 & 90.9 & 89.2 & 89.3 & 92.2 \\
\textbf{ENV\_FEATURES} & 77.5 & 86.3 & 86.6 & 85.1 & 90.1 & 79.1 & 81.2 & 86.9 & 83.3 & 82.5 & 83.8 & 86.0 \\
\textbf{INT\_SPACE} & 92.4 & 93.9 & 93.8 & 94.6 & 93.8 & 91.3 & 85.9 & 91.8 & 93.1 & 92.6 & 89.6 & 93.2 \\
\textbf{NPIP} & 92.7 & 94.5 & 92.7 & 94.8 & 86.6 & 82.4 & 86.4 & 83.0 & 89.6 & 88.0 & 89.4 & 88.5 \\
\textbf{POPULATED\_PLACE} & 95.2 & 93.0 & 94.0 & 92.1 & 90.4 & 91.2 & 90.6 & 91.3 & 92.7 & 92.1 & 92.3 & 91.7 \\
\textbf{REGION} & 83.0 & 81.4 & 84.4 & 81.2 & 76.8 & 75.8 & 68.4 & 68.4 & 79.8 & 78.5 & 75.6 & 74.3 \\
\textbf{SPATIAL\_OBJ} & 95.3 & 95.2 & 96.0 & 96.3 & 95.5 & 93.8 & 90.0 & 92.8 & 95.4 & 94.5 & 92.9 & 94.5 \\
\bottomrule
\end{tabular}
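The tabular above reads like pandas to_latex output pivoted to one column group per metric. A hedged sketch of how such a table can be produced; the two rows shown reuse the REGION scores from model_comparison.md, everything else is illustrative:

import pandas as pd

# Long-format per-label scores (subset of the values in model_comparison.md).
rows = [
    {"Model": "Large", "Label": "REGION", "Precision": 83.0, "Recall": 76.8, "F-Score": 79.8},
    {"Model": "Small", "Label": "REGION", "Precision": 84.4, "Recall": 68.4, "F-Score": 75.6},
]
df = pd.DataFrame(rows)

# Pivot to one block of columns per metric, models side by side, then emit LaTeX.
wide = df.pivot(index="Label", columns="Model", values=["Precision", "Recall", "F-Score"])
print(wide.to_latex(bold_rows=True, multicolumn=True, column_format="l|cc|cc|cc"))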
notebooks/prepare-training.ipynb
ADDED
@@ -0,0 +1,246 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import srsly\n",
    "import glob\n",
    "from collections import Counter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "4"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "files = glob.glob(\"./gold-training-data/*.jsonl\")\n",
    "len(files)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "all_data = []\n",
    "for filename in files:\n",
    "    data = list(srsly.read_jsonl(filename))\n",
    "    for item in data:\n",
    "        if len(item[\"spans\"]) > 0:\n",
    "            all_data.append(item)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "7868"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(all_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'text': 'I was born in a small town called , and I was born May 5, 1928.',\n",
       " 'spans': [{'start': 22,\n",
       "   'end': 26,\n",
       "   'token_start': 6,\n",
       "   'token_end': 6,\n",
       "   'label': 'POPULATED_PLACE'}],\n",
       " '_input_hash': 1949719959,\n",
       " '_task_hash': 335893137,\n",
       " 'tokens': [{'text': 'I', 'start': 0, 'end': 1, 'id': 0, 'ws': True},\n",
       "  {'text': 'was', 'start': 2, 'end': 5, 'id': 1, 'ws': True},\n",
       "  {'text': 'born', 'start': 6, 'end': 10, 'id': 2, 'ws': True},\n",
       "  {'text': 'in', 'start': 11, 'end': 13, 'id': 3, 'ws': True},\n",
       "  {'text': 'a', 'start': 14, 'end': 15, 'id': 4, 'ws': True},\n",
       "  {'text': 'small', 'start': 16, 'end': 21, 'id': 5, 'ws': True},\n",
       "  {'text': 'town', 'start': 22, 'end': 26, 'id': 6, 'ws': True},\n",
       "  {'text': 'called', 'start': 27, 'end': 33, 'id': 7, 'ws': True},\n",
       "  {'text': ',', 'start': 34, 'end': 35, 'id': 8, 'ws': True},\n",
       "  {'text': 'and', 'start': 36, 'end': 39, 'id': 9, 'ws': True},\n",
       "  {'text': 'I', 'start': 40, 'end': 41, 'id': 10, 'ws': True},\n",
       "  {'text': 'was', 'start': 42, 'end': 45, 'id': 11, 'ws': True},\n",
       "  {'text': 'born', 'start': 46, 'end': 50, 'id': 12, 'ws': True},\n",
       "  {'text': 'May', 'start': 51, 'end': 54, 'id': 13, 'ws': True},\n",
       "  {'text': '5', 'start': 55, 'end': 56, 'id': 14, 'ws': False},\n",
       "  {'text': ',', 'start': 56, 'end': 57, 'id': 15, 'ws': True},\n",
       "  {'text': '1928', 'start': 58, 'end': 62, 'id': 16, 'ws': False},\n",
       "  {'text': '.', 'start': 62, 'end': 63, 'id': 17, 'ws': False}],\n",
       " '_view_id': 'spans_manual',\n",
       " 'answer': 'accept',\n",
       " '_timestamp': 1669136567}"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "all_data[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "POPULATED_PLACE: 15222\n",
      "BUILDING: 11513\n",
      "COUNTRY: 5969\n",
      "SPATIAL_OBJ: 5680\n",
      "DLF: 6191\n",
      "INT_SPACE: 3262\n",
      "ENV_FEATURES: 1555\n",
      "REGION: 1408\n",
      "NPIP: 2573\n"
     ]
    }
   ],
   "source": [
    "def merge_and_deduplicate_spans(all_data):\n",
    "    # Mapping of labels to be merged\n",
    "    label_mapping = {\n",
    "        'INTERIOR_SPACE': 'INT_SPACE',\n",
    "        'RIVER': 'ENV_FEATURES',\n",
    "        'FOREST': 'ENV_FEATURES',\n",
    "        'GHETTO': 'POPULATED_PLACE'\n",
    "    }\n",
    "\n",
    "    # Process each annotation in the dataset\n",
    "    for annotation in all_data:\n",
    "        new_spans = []  # List to hold updated and unique spans\n",
    "\n",
    "        # Process each span\n",
    "        for span in annotation['spans']:\n",
    "            # Skip spans with the label \"CONTINENT\"\n",
    "            if span[\"label\"] == \"CONTINENT\":\n",
    "                continue\n",
    "\n",
    "            # Update label if it's in the mapping\n",
    "            if span['label'] in label_mapping:\n",
    "                span['label'] = label_mapping[span['label']]\n",
    "\n",
    "            # Check for duplicates\n",
    "            if span not in new_spans:\n",
    "                new_spans.append(span)\n",
    "\n",
    "        # Replace old spans with new_spans\n",
    "        annotation['spans'] = new_spans\n",
    "    return all_data\n",
    "\n",
    "\n",
    "all_data = merge_and_deduplicate_spans(all_data)\n",
    "\n",
    "srsly.write_jsonl(\"assets/annotated_data_spans.jsonl\", all_data)\n",
    "\n",
    "\n",
    "# Create a Counter object for counting labels\n",
    "label_counter = Counter()\n",
    "\n",
    "# Iterate over each annotation in the dataset\n",
    "for annotation in all_data:\n",
    "    # Extract labels from each 'span' in the 'spans' list and add to the counter\n",
    "    labels = [span['label'] for span in annotation['spans']]\n",
    "    label_counter.update(labels)\n",
    "\n",
    "# Print out the counts\n",
    "for label, count in label_counter.items():\n",
    "    print(f\"{label}: {count}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "7868"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(all_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "holocaust",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
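The notebook above only cleans the annotations and writes them back to assets/annotated_data_spans.jsonl; producing the corpus/*.spacy DocBin files is handled elsewhere in the project (scripts/convert.py in the file list, not shown here). A hedged sketch of that conversion, assuming the spancat spans key "sc" and the inclusive token_end offsets seen in the sample record:

import spacy
import srsly
from spacy.tokens import Doc, DocBin, Span

nlp = spacy.blank("en")
doc_bin = DocBin()
for record in srsly.read_jsonl("assets/annotated_data_spans.jsonl"):
    words = [t["text"] for t in record["tokens"]]
    spaces = [t["ws"] for t in record["tokens"]]
    doc = Doc(nlp.vocab, words=words, spaces=spaces)
    # token_end is inclusive in the annotation format; Span end is exclusive.
    doc.spans["sc"] = [
        Span(doc, s["token_start"], s["token_end"] + 1, label=s["label"])
        for s in record["spans"]
    ]
    doc_bin.add(doc)
doc_bin.to_disk("corpus/sample.spacy")  # illustrative output path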
notebooks/testing.ipynb
ADDED
@@ -0,0 +1,1010 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import spacy\n",
    "from spacy import displacy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "text = \"\"\"\n",
    "ANSWER: None QUESTION: Please tell us your full name.\n",
    "\n",
    "ANSWER: Marie Zosnika Schwartzman.\n",
    "\n",
    "QUESTION: And where were you born?\n",
    "\n",
    "ANSWER: In Paris . Day, August 21st, 1925.\n",
    "\n",
    "QUESTION: Let's talk about your family. What was your father's name?\n",
    "\n",
    "ANSWER: Avram.\n",
    "\n",
    "QUESTION: And what did he do?\n",
    "\n",
    "ANSWER: He was like a clothes designer.\n",
    "\n",
    "QUESTION: And where -- where did he come from?\n",
    "\n",
    "ANSWER: Poland .\n",
    "\n",
    "QUESTION: What -- what city in Poland ?\n",
    "\n",
    "ANSWER: Ciechanow . Don't ask me to spell it.\n",
    "\n",
    "QUESTION: Okay. And you -- ANSWER: But I think it's in the thing there.\n",
    "\n",
    "QUESTION: And what was your mother's name?\n",
    "\n",
    "ANSWER: Tovah.\n",
    "\n",
    "QUESTION: And her maiden name? Do you know that?\n",
    "\n",
    "ANSWER: Mifda (phonetic). Q: And where was she from originally?\n",
    "\n",
    "ANSWER: The same town , Ciechanow .\n",
    "\n",
    "QUESTION: Uh-huh. And was your family a very religious family?\n",
    "\n",
    "ANSWER: I don't think so. Not my father anyway. He was not religious. I think my mother was a little bit more religious. But we were not raised religiously. Just we kept to traditions and that's it.\n",
    "\n",
    "QUESTION: Now, you said you were born in Paris .\n",
    "\n",
    "ANSWER: Yes.\n",
    "\n",
    "QUESTION: When did your family come to Paris ?\n",
    "\n",
    "ANSWER: In 1922.\n",
    "\n",
    "QUESTION: What brought them there?\n",
    "\n",
    "ANSWER: My father came much earlier. And my family -- my mother joined with my oldest sister in Paris -- to Paris .\n",
    "\n",
    "QUESTION: What brought your father to Paris ?\n",
    "\n",
    "ANSWER: I think the Army, to a certain extent, and a job, a good job.\n",
    "\n",
    "QUESTION: How large was your family?\n",
    "\n",
    "ANSWER: Well, I had four sisters, two brothers, and my mother and father. I had a lot of family in Poland ; my grandparents, my aunts, my uncles, my cousins.\n",
    "\n",
    "QUESTION: But you were born in Paris . Did you go back to Poland to visit?\n",
    "\n",
    "ANSWER: Yes, my grandfather, we went to visit.\n",
    "\n",
    "QUESTION: And you stayed with large extended family there?\n",
    "\n",
    "ANSWER: Yes, my grandfather didn't make my mother go home , go back.\n",
    "\n",
    "QUESTION: He wanted her to stay?\n",
    "\n",
    "ANSWER: Yes.\n",
    "\n",
    "QUESTION: Yeah. And so where were you in the order of children? Were you the second?\n",
    "\n",
    "ANSWER: Second.\n",
    "\n",
    "QUESTION: You were the second child?\n",
    "\n",
    "ANSWER: Right.\n",
    "\n",
    "QUESTION: Yeah. And what kind of neighborhood did you live in in Paris ?\n",
    "\n",
    "ANSWER: We lived in a mixed neighborhood .\n",
    "\n",
    "QUESTION: Non-Jews and -- ANSWER: Non-Jews and Jews.\n",
    "\n",
    "QUESTION: Uh-huh. And did you have non-Jewish friends?\n",
    "\n",
    "ANSWER: Yes. Actually, I was -- when I came back from the camp , I lived with my friend I went to school with when we was six and a half years old.\n",
    "\n",
    "QUESTION: Uh-huh.\n",
    "\n",
    "ANSWER: And she was Catholic.\n",
    "\n",
    "QUESTION: Uh-huh.\n",
    "\n",
    "ANSWER: My Jewish family didn't have room for me.\n",
    "\n",
    "QUESTION: Uh-huh. Let's talk a little bit about your schooling. When did you begin school ? How old were you?\n",
    "\n",
    "ANSWER: We begin school early in France . We go to first the maternelle, which was maybe about three years old. And the -- QUESTION: Is this a public school ?\n",
    "\n",
    "ANSWER: Yes.\n",
    "\n",
    "QUESTION: So there were Jews and non-Jews in your class?\n",
    "\n",
    "ANSWER: Yes, Jews and non-Jews. But mostly there were non-Jews in my school . And then first grade, we start -- I think I was six, six and a half. And when I was arrested, I was in lycee . And didn't finish my education, therefore. Q: Yeah. Yeah.\n",
    "\n",
    "ANSWER: Six years.\n",
    "\n",
    "QUESTION: Yeah. We'll get to that in a few minutes. So you -- you went to a regular public school .\n",
    "\n",
    "ANSWER: Uh-huh.\n",
    "\n",
    "QUESTION: That's now what we call elementary school .\n",
    "\n",
    "ANSWER: Yes.\n",
    "\n",
    "QUESTION: Was there any problems there? You said 1t was mixed.\n",
    "\n",
    "ANSWER: Yeah. No problem.\n",
    "\n",
    "QUESTION: No problems?\n",
    "\n",
    "ANSWER: No.\n",
    "\n",
    "QUESTION: You were accepted?\n",
    "\n",
    "ANSWER: Yes. No problem. On the contrary, our teachers liked us very much. We were five girls in the same school , because we had girls' schools and boys' schools . It was not mixed schools . And the -- my principal, when we were arrested, came every single day to see if there were news from us. So it was no problem at all.\n",
    "\n",
    "QUESTION: Uh-huh. What kind of neighborhood did you live in? Was it upper class or middle class neighborhood?\n",
    "\n",
    "ANSWER: Probably middle class, I would say.\n",
    "\n",
    "QUESTION: Uh-huh. Did you live in a house or an apartment ?\n",
    "\n",
    "ANSWER: A house .\n",
    "\n",
    "QUESTION: And was it in the center of the city or the outskirts?\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "nlp = spacy.load(\"training/md/model-best/\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "doc = nlp(text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       [displacy span-visualization HTML output omitted: the rendered markup repeats the transcript with POPULATED_PLACE, COUNTRY, BUILDING, and INT_SPACE labels drawn over the detected spans]
|
| 813 |
+
"</span>\n",
|
| 814 |
+
"\n",
|
| 815 |
+
" \n",
|
| 816 |
+
"</span>\n",
|
| 817 |
+
"\n",
|
| 818 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 819 |
+
" schools\n",
|
| 820 |
+
" \n",
|
| 821 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 822 |
+
"</span>\n",
|
| 823 |
+
"\n",
|
| 824 |
+
" \n",
|
| 825 |
+
"</span>\n",
|
| 826 |
+
"and \n",
|
| 827 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 828 |
+
" boys\n",
|
| 829 |
+
" \n",
|
| 830 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 831 |
+
"</span>\n",
|
| 832 |
+
"\n",
|
| 833 |
+
" \n",
|
| 834 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; border-top-left-radius: 3px; border-bottom-left-radius: 3px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 835 |
+
" <span style=\"background: #ddd; z-index: 10; color: #000; top: -0.5em; padding: 2px 3px; position: absolute; font-size: 0.6em; font-weight: bold; line-height: 1; border-radius: 3px\">\n",
|
| 836 |
+
" BUILDING\n",
|
| 837 |
+
" </span>\n",
|
| 838 |
+
"</span>\n",
|
| 839 |
+
"\n",
|
| 840 |
+
"\n",
|
| 841 |
+
"</span>\n",
|
| 842 |
+
"\n",
|
| 843 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 844 |
+
" '\n",
|
| 845 |
+
" \n",
|
| 846 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 847 |
+
"</span>\n",
|
| 848 |
+
"\n",
|
| 849 |
+
" \n",
|
| 850 |
+
"</span>\n",
|
| 851 |
+
"\n",
|
| 852 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 853 |
+
" schools\n",
|
| 854 |
+
" \n",
|
| 855 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 856 |
+
"</span>\n",
|
| 857 |
+
"\n",
|
| 858 |
+
" \n",
|
| 859 |
+
"</span>\n",
|
| 860 |
+
". It was not mixed \n",
|
| 861 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 862 |
+
" schools\n",
|
| 863 |
+
" \n",
|
| 864 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 865 |
+
"</span>\n",
|
| 866 |
+
"\n",
|
| 867 |
+
" \n",
|
| 868 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; border-top-left-radius: 3px; border-bottom-left-radius: 3px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 869 |
+
" <span style=\"background: #ddd; z-index: 10; color: #000; top: -0.5em; padding: 2px 3px; position: absolute; font-size: 0.6em; font-weight: bold; line-height: 1; border-radius: 3px\">\n",
|
| 870 |
+
" BUILDING\n",
|
| 871 |
+
" </span>\n",
|
| 872 |
+
"</span>\n",
|
| 873 |
+
"\n",
|
| 874 |
+
"\n",
|
| 875 |
+
"</span>\n",
|
| 876 |
+
". And the -- my principal , when we were arrested , came every single day to see if there were news from us . So it was no problem at all . \n",
|
| 877 |
+
"\n",
|
| 878 |
+
" QUESTION : Uh - huh . What kind of \n",
|
| 879 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 880 |
+
" neighborhood\n",
|
| 881 |
+
" \n",
|
| 882 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 883 |
+
"</span>\n",
|
| 884 |
+
"\n",
|
| 885 |
+
" \n",
|
| 886 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; border-top-left-radius: 3px; border-bottom-left-radius: 3px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 887 |
+
" <span style=\"background: #ddd; z-index: 10; color: #000; top: -0.5em; padding: 2px 3px; position: absolute; font-size: 0.6em; font-weight: bold; line-height: 1; border-radius: 3px\">\n",
|
| 888 |
+
" POPULATED_PLACE\n",
|
| 889 |
+
" </span>\n",
|
| 890 |
+
"</span>\n",
|
| 891 |
+
"\n",
|
| 892 |
+
"\n",
|
| 893 |
+
"</span>\n",
|
| 894 |
+
"did you live in ? Was it upper class or middle class neighborhood ? \n",
|
| 895 |
+
"\n",
|
| 896 |
+
" ANSWER : Probably middle class , I would say . \n",
|
| 897 |
+
"\n",
|
| 898 |
+
" QUESTION : Uh - huh . Did you live in a \n",
|
| 899 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 900 |
+
" house\n",
|
| 901 |
+
" \n",
|
| 902 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 903 |
+
"</span>\n",
|
| 904 |
+
"\n",
|
| 905 |
+
" \n",
|
| 906 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; border-top-left-radius: 3px; border-bottom-left-radius: 3px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 907 |
+
" <span style=\"background: #ddd; z-index: 10; color: #000; top: -0.5em; padding: 2px 3px; position: absolute; font-size: 0.6em; font-weight: bold; line-height: 1; border-radius: 3px\">\n",
|
| 908 |
+
" BUILDING\n",
|
| 909 |
+
" </span>\n",
|
| 910 |
+
"</span>\n",
|
| 911 |
+
"\n",
|
| 912 |
+
"\n",
|
| 913 |
+
"</span>\n",
|
| 914 |
+
"or an \n",
|
| 915 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 916 |
+
" apartment\n",
|
| 917 |
+
" \n",
|
| 918 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 919 |
+
"</span>\n",
|
| 920 |
+
"\n",
|
| 921 |
+
" \n",
|
| 922 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; border-top-left-radius: 3px; border-bottom-left-radius: 3px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 923 |
+
" <span style=\"background: #ddd; z-index: 10; color: #000; top: -0.5em; padding: 2px 3px; position: absolute; font-size: 0.6em; font-weight: bold; line-height: 1; border-radius: 3px\">\n",
|
| 924 |
+
" INT_SPACE\n",
|
| 925 |
+
" </span>\n",
|
| 926 |
+
"</span>\n",
|
| 927 |
+
"\n",
|
| 928 |
+
"\n",
|
| 929 |
+
"</span>\n",
|
| 930 |
+
"? \n",
|
| 931 |
+
"\n",
|
| 932 |
+
" ANSWER : A \n",
|
| 933 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 934 |
+
" house\n",
|
| 935 |
+
" \n",
|
| 936 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 937 |
+
"</span>\n",
|
| 938 |
+
"\n",
|
| 939 |
+
" \n",
|
| 940 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; border-top-left-radius: 3px; border-bottom-left-radius: 3px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 941 |
+
" <span style=\"background: #ddd; z-index: 10; color: #000; top: -0.5em; padding: 2px 3px; position: absolute; font-size: 0.6em; font-weight: bold; line-height: 1; border-radius: 3px\">\n",
|
| 942 |
+
" BUILDING\n",
|
| 943 |
+
" </span>\n",
|
| 944 |
+
"</span>\n",
|
| 945 |
+
"\n",
|
| 946 |
+
"\n",
|
| 947 |
+
"</span>\n",
|
| 948 |
+
". \n",
|
| 949 |
+
"\n",
|
| 950 |
+
" QUESTION : And was it in the center of the \n",
|
| 951 |
+
"<span style=\"font-weight: bold; display: inline-block; position: relative; height: 60px;\">\n",
|
| 952 |
+
" city\n",
|
| 953 |
+
" \n",
|
| 954 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 955 |
+
"</span>\n",
|
| 956 |
+
"\n",
|
| 957 |
+
" \n",
|
| 958 |
+
"<span style=\"background: #ddd; top: 40px; height: 4px; border-top-left-radius: 3px; border-bottom-left-radius: 3px; left: -1px; width: calc(100% + 2px); position: absolute;\">\n",
|
| 959 |
+
" <span style=\"background: #ddd; z-index: 10; color: #000; top: -0.5em; padding: 2px 3px; position: absolute; font-size: 0.6em; font-weight: bold; line-height: 1; border-radius: 3px\">\n",
|
| 960 |
+
" POPULATED_PLACE\n",
|
| 961 |
+
" </span>\n",
|
| 962 |
+
"</span>\n",
|
| 963 |
+
"\n",
|
| 964 |
+
"\n",
|
| 965 |
+
"</span>\n",
|
| 966 |
+
"or the outskirts ? \n",
|
| 967 |
+
" </div></span>"
|
| 968 |
+
],
|
| 969 |
+
"text/plain": [
|
| 970 |
+
"<IPython.core.display.HTML object>"
|
| 971 |
+
]
|
| 972 |
+
},
|
| 973 |
+
"metadata": {},
|
| 974 |
+
"output_type": "display_data"
|
| 975 |
+
}
|
| 976 |
+
],
|
| 977 |
+
"source": [
|
| 978 |
+
"displacy.render(doc, style=\"span\")"
|
| 979 |
+
]
|
| 980 |
+
},
|
| 981 |
+
{
|
| 982 |
+
"cell_type": "code",
|
| 983 |
+
"execution_count": null,
|
| 984 |
+
"metadata": {},
|
| 985 |
+
"outputs": [],
|
| 986 |
+
"source": []
|
| 987 |
+
}
|
| 988 |
+
],
|
| 989 |
+
"metadata": {
|
| 990 |
+
"kernelspec": {
|
| 991 |
+
"display_name": "holocaust",
|
| 992 |
+
"language": "python",
|
| 993 |
+
"name": "python3"
|
| 994 |
+
},
|
| 995 |
+
"language_info": {
|
| 996 |
+
"codemirror_mode": {
|
| 997 |
+
"name": "ipython",
|
| 998 |
+
"version": 3
|
| 999 |
+
},
|
| 1000 |
+
"file_extension": ".py",
|
| 1001 |
+
"mimetype": "text/x-python",
|
| 1002 |
+
"name": "python",
|
| 1003 |
+
"nbconvert_exporter": "python",
|
| 1004 |
+
"pygments_lexer": "ipython3",
|
| 1005 |
+
"version": "3.10.13"
|
| 1006 |
+
}
|
| 1007 |
+
},
|
| 1008 |
+
"nbformat": 4,
|
| 1009 |
+
"nbformat_minor": 2
|
| 1010 |
+
}
|
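The final cell of `notebooks/testing.ipynb` renders the model's predicted spans with `displacy.render(doc, style="span")`. A minimal sketch of how such a `doc` is typically produced and rendered outside a notebook is shown below; the model path `training/sm/model-best`, the sample sentence, and the output filename are illustrative assumptions, while the `sc` spans key matches the key used in `scripts/convert.py`.

```python
# Sketch only: assumes a trained spancat model at training/sm/model-best and the
# default "sc" spans key (the key used when building the corpus in scripts/convert.py).
import spacy
from spacy import displacy

nlp = spacy.load("training/sm/model-best")             # hypothetical model path
doc = nlp("Did you live in a house or an apartment?")  # placeholder transcript snippet

# style="span" visualizes doc.spans, which is what the spancat component fills in
html = displacy.render(doc, style="span", options={"spans_key": "sc"}, jupyter=False)
with open("spans.html", "w", encoding="utf-8") as f:
    f.write(html)
```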
project.lock
ADDED
|
@@ -0,0 +1,174 @@
| 1 |
+
split:
|
| 2 |
+
cmd: python -m weasel run split
|
| 3 |
+
script:
|
| 4 |
+
- python scripts/split.py assets/annotated_data_spans.jsonl
|
| 5 |
+
deps:
|
| 6 |
+
- path: scripts/split.py
|
| 7 |
+
md5: 9983293e4982a7e989ba51456bfc992c
|
| 8 |
+
outs:
|
| 9 |
+
- path: assets/train.jsonl
|
| 10 |
+
md5: 584673b1949bfaaf9166c2212401f728
|
| 11 |
+
- path: assets/dev.jsonl
|
| 12 |
+
md5: 8dab61f2033edcfb8bbeb20a1454283d
|
| 13 |
+
- path: assets/test.jsonl
|
| 14 |
+
md5: 75ef0134ef3aa68ce76230fc6cc80dd4
|
| 15 |
+
convert:
|
| 16 |
+
cmd: python -m weasel run convert
|
| 17 |
+
script:
|
| 18 |
+
- python scripts/convert.py en assets/train.jsonl corpus
|
| 19 |
+
- python scripts/convert.py en assets/dev.jsonl corpus
|
| 20 |
+
- python scripts/convert.py en assets/test.jsonl corpus
|
| 21 |
+
deps:
|
| 22 |
+
- path: assets/train.jsonl
|
| 23 |
+
md5: 584673b1949bfaaf9166c2212401f728
|
| 24 |
+
- path: assets/dev.jsonl
|
| 25 |
+
md5: 8dab61f2033edcfb8bbeb20a1454283d
|
| 26 |
+
- path: assets/test.jsonl
|
| 27 |
+
md5: 75ef0134ef3aa68ce76230fc6cc80dd4
|
| 28 |
+
- path: scripts/convert.py
|
| 29 |
+
md5: 8e1f918fad38cf8867e50977d21c0408
|
| 30 |
+
outs:
|
| 31 |
+
- path: corpus/train.spacy
|
| 32 |
+
md5: 90c3e968d70b05835fc8042d534b15e0
|
| 33 |
+
- path: corpus/dev.spacy
|
| 34 |
+
md5: 174b964498a65cbe84993e303c7c4793
|
| 35 |
+
- path: corpus/test.spacy
|
| 36 |
+
md5: b9e5adef7f8b78fe804037a8e59a4cf0
|
| 37 |
+
train:
|
| 38 |
+
cmd: python -m weasel run train
|
| 39 |
+
script:
|
| 40 |
+
- python -m spacy train configs/config_trf.cfg --output training/ --paths.train
|
| 41 |
+
corpus/train.spacy --paths.dev corpus/dev.spacy --training.eval_frequency 10 --training.patience
|
| 42 |
+
100 --gpu-id -1 --system.seed 0
|
| 43 |
+
deps:
|
| 44 |
+
- path: configs/config.cfg
|
| 45 |
+
md5: 479f0a665e528cc7f32ca449a55cb3d1
|
| 46 |
+
- path: corpus/train.spacy
|
| 47 |
+
md5: 90c3e968d70b05835fc8042d534b15e0
|
| 48 |
+
- path: corpus/dev.spacy
|
| 49 |
+
md5: 174b964498a65cbe84993e303c7c4793
|
| 50 |
+
outs:
|
| 51 |
+
- path: training/model-best
|
| 52 |
+
md5: 8d1843c1715af55bcd109ee6e7200afd
|
| 53 |
+
evaluate:
|
| 54 |
+
cmd: python -m weasel run evaluate
|
| 55 |
+
script:
|
| 56 |
+
- python -m spacy evaluate training/model-best corpus/test.spacy --output training/metrics.json
|
| 57 |
+
deps:
|
| 58 |
+
- path: corpus/test.spacy
|
| 59 |
+
md5: b9e5adef7f8b78fe804037a8e59a4cf0
|
| 60 |
+
- path: training/model-best
|
| 61 |
+
md5: 8d1843c1715af55bcd109ee6e7200afd
|
| 62 |
+
outs:
|
| 63 |
+
- path: training/metrics.json
|
| 64 |
+
md5: 24152be0160e2a20e88a7577473b208b
|
| 65 |
+
train-sm:
|
| 66 |
+
cmd: python -m weasel run train-sm
|
| 67 |
+
script:
|
| 68 |
+
- python -m spacy train configs/config_sm.cfg --output training/sm/ --paths.train
|
| 69 |
+
corpus/train.spacy --paths.dev corpus/dev.spacy --training.patience 100 --gpu-id
|
| 70 |
+
-1 --system.seed 0
|
| 71 |
+
deps:
|
| 72 |
+
- path: configs/config.cfg
|
| 73 |
+
md5: 479f0a665e528cc7f32ca449a55cb3d1
|
| 74 |
+
- path: corpus/train.spacy
|
| 75 |
+
md5: 90c3e968d70b05835fc8042d534b15e0
|
| 76 |
+
- path: corpus/dev.spacy
|
| 77 |
+
md5: 174b964498a65cbe84993e303c7c4793
|
| 78 |
+
outs:
|
| 79 |
+
- path: training/model-best
|
| 80 |
+
md5: null
|
| 81 |
+
evaluate-sm:
|
| 82 |
+
cmd: python -m weasel run evaluate-sm
|
| 83 |
+
script:
|
| 84 |
+
- python -m spacy evaluate training/sm/model-best corpus/test.spacy --output training/sm/metrics.json
|
| 85 |
+
deps:
|
| 86 |
+
- path: corpus/test.spacy
|
| 87 |
+
md5: b9e5adef7f8b78fe804037a8e59a4cf0
|
| 88 |
+
- path: training/sm/model-best
|
| 89 |
+
md5: 11cd6594ac09a0def1f01c51343576b0
|
| 90 |
+
outs:
|
| 91 |
+
- path: training/sm/metrics.json
|
| 92 |
+
md5: 6a8ffabb350c4f0b6262c6e9c45005af
|
| 93 |
+
download-lg:
|
| 94 |
+
cmd: python -m weasel run download-lg
|
| 95 |
+
script:
|
| 96 |
+
- python -m spacy download en_core_web_lg
|
| 97 |
+
deps: []
|
| 98 |
+
outs: []
|
| 99 |
+
train-lg:
|
| 100 |
+
cmd: python -m weasel run train-lg
|
| 101 |
+
script:
|
| 102 |
+
- python -m spacy train configs/config_lg.cfg --output training/lg/ --paths.train
|
| 103 |
+
corpus/train.spacy --paths.dev corpus/dev.spacy --training.eval_frequency 50 --training.patience
|
| 104 |
+
0 --gpu-id -1 --initialize.vectors en_core_web_lg --system.seed 0 --components.tok2vec.model.embed.include_static_vectors
|
| 105 |
+
true
|
| 106 |
+
deps:
|
| 107 |
+
- path: configs/config_lg.cfg
|
| 108 |
+
md5: 283a7f5e530c92c812e7666d52e539a6
|
| 109 |
+
- path: corpus/train.spacy
|
| 110 |
+
md5: 90c3e968d70b05835fc8042d534b15e0
|
| 111 |
+
- path: corpus/dev.spacy
|
| 112 |
+
md5: 174b964498a65cbe84993e303c7c4793
|
| 113 |
+
outs:
|
| 114 |
+
- path: training/model-best
|
| 115 |
+
md5: null
|
| 116 |
+
evaluate-lg:
|
| 117 |
+
cmd: python -m weasel run evaluate-lg
|
| 118 |
+
script:
|
| 119 |
+
- python -m spacy evaluate training/lg/model-best corpus/test.spacy --output training/lg/metrics.json
|
| 120 |
+
deps:
|
| 121 |
+
- path: corpus/test.spacy
|
| 122 |
+
md5: b9e5adef7f8b78fe804037a8e59a4cf0
|
| 123 |
+
- path: training/lg/model-best
|
| 124 |
+
md5: 8e296e41d1ab48052ec122eed9154887
|
| 125 |
+
outs:
|
| 126 |
+
- path: training/lg/metrics.json
|
| 127 |
+
md5: 4efed860818658f1533fe204da7c169e
|
| 128 |
+
download-md:
|
| 129 |
+
cmd: python -m weasel run download-md
|
| 130 |
+
script:
|
| 131 |
+
- python -m spacy download en_core_web_md
|
| 132 |
+
deps: []
|
| 133 |
+
outs: []
|
| 134 |
+
evaluate-md:
|
| 135 |
+
cmd: python -m weasel run evaluate-md
|
| 136 |
+
script:
|
| 137 |
+
- python -m spacy evaluate training/md/model-best corpus/test.spacy --output training/md/metrics.json
|
| 138 |
+
deps:
|
| 139 |
+
- path: corpus/test.spacy
|
| 140 |
+
md5: b9e5adef7f8b78fe804037a8e59a4cf0
|
| 141 |
+
- path: training/md/model-best
|
| 142 |
+
md5: 2725d583e50534609145708af72679bc
|
| 143 |
+
outs:
|
| 144 |
+
- path: training/md/metrics.json
|
| 145 |
+
md5: b92630a9072590b0b9aef4393ca19106
|
| 146 |
+
train-md:
|
| 147 |
+
cmd: python -m weasel run train-md
|
| 148 |
+
script:
|
| 149 |
+
- python -m spacy train configs/config_md.cfg --output training/md/ --paths.train
|
| 150 |
+
corpus/train.spacy --paths.dev corpus/dev.spacy --training.eval_frequency 50 --training.patience
|
| 151 |
+
0 --gpu-id -1 --initialize.vectors en_core_web_md --system.seed 0 --components.tok2vec.model.embed.include_static_vectors
|
| 152 |
+
true
|
| 153 |
+
deps:
|
| 154 |
+
- path: configs/config_md.cfg
|
| 155 |
+
md5: b641ff9a6ba2162be49f940eeade2004
|
| 156 |
+
- path: corpus/train.spacy
|
| 157 |
+
md5: 90c3e968d70b05835fc8042d534b15e0
|
| 158 |
+
- path: corpus/dev.spacy
|
| 159 |
+
md5: 174b964498a65cbe84993e303c7c4793
|
| 160 |
+
outs:
|
| 161 |
+
- path: training/model-best
|
| 162 |
+
md5: null
|
| 163 |
+
build-table:
|
| 164 |
+
cmd: python -m weasel run build-table
|
| 165 |
+
script:
|
| 166 |
+
- python scripts/build-table.py
|
| 167 |
+
deps: []
|
| 168 |
+
outs: []
|
| 169 |
+
readme:
|
| 170 |
+
cmd: python -m weasel run readme
|
| 171 |
+
script:
|
| 172 |
+
- python scripts/readme.py
|
| 173 |
+
deps: []
|
| 174 |
+
outs: []
|
project.yml
ADDED
|
@@ -0,0 +1,268 @@
| 1 |
+
title: "Demo spancat in a new pipeline (Span Categorization)"
|
| 2 |
+
description: "A minimal demo spancat project for spaCy v3"
|
| 3 |
+
|
| 4 |
+
# Variables can be referenced across the project.yml using ${vars.var_name}
|
| 5 |
+
vars:
|
| 6 |
+
name: "placing_holocaust"
|
| 7 |
+
lang: "en"
|
| 8 |
+
annotations_file: "annotated_data_spans.jsonl"
|
| 9 |
+
train: "train"
|
| 10 |
+
dev: "dev"
|
| 11 |
+
test: "test"
|
| 12 |
+
version: "0.0.1"
|
| 13 |
+
# Set a random seed
|
| 14 |
+
seed: 0
|
| 15 |
+
# Set your GPU ID, -1 is CPU
|
| 16 |
+
gpu_id: -1
|
| 17 |
+
vectors_model_md: "en_core_web_md"
|
| 18 |
+
vectors_model_lg: "en_core_web_lg"
|
| 19 |
+
|
| 20 |
+
# These are the directories that the project needs. The project CLI will make
|
| 21 |
+
# sure that they always exist.
|
| 22 |
+
directories: ["assets", "corpus", "configs", "training", "scripts", "packages"]
|
| 23 |
+
|
| 24 |
+
# Assets that should be downloaded or available in the directory. We're shipping
|
| 25 |
+
# them with the project, so they won't have to be downloaded.
|
| 26 |
+
assets:
|
| 27 |
+
- dest: "assets/train.json"
|
| 28 |
+
description: "Demo training data adapted from the `ner_demo` project"
|
| 29 |
+
- dest: "assets/dev.json"
|
| 30 |
+
description: "Demo development data"
|
| 31 |
+
|
| 32 |
+
# Workflows are sequences of commands (see below) executed in order. You can
|
| 33 |
+
# run them via "spacy project run [workflow]". If a commands's inputs/outputs
|
| 34 |
+
# haven't changed, it won't be re-run.
|
| 35 |
+
workflows:
|
| 36 |
+
all-sm-sents:
|
| 37 |
+
    - split
    - convert-sents
|
| 39 |
+
- create-config-sm
|
| 40 |
+
- train-sm
|
| 41 |
+
- evaluate-sm
|
| 42 |
+
# all-trf:
|
| 43 |
+
# - download
|
| 44 |
+
# - convert
|
| 45 |
+
# - create-config
|
| 46 |
+
# - train-with-vectors
|
| 47 |
+
# - evaluate
|
| 48 |
+
|
| 49 |
+
# Project commands, specified in a style similar to CI config files (e.g. Azure
|
| 50 |
+
# pipelines). The name is the command name that lets you trigger the command
|
| 51 |
+
# via "spacy project run [command] [path]". The help message is optional and
|
| 52 |
+
# shown when executing "spacy project run [optional command] [path] --help".
|
| 53 |
+
commands:
|
| 54 |
+
|
| 55 |
+
#### DOWNLOADING VECTORS #####
|
| 56 |
+
- name: "download-lg"
|
| 57 |
+
help: "Download a spaCy model with pretrained vectors"
|
| 58 |
+
script:
|
| 59 |
+
- "python -m spacy download ${vars.vectors_model_lg}"
|
| 60 |
+
|
| 61 |
+
- name: "download-md"
|
| 62 |
+
help: "Download a spaCy model with pretrained vectors"
|
| 63 |
+
script:
|
| 64 |
+
- "python -m spacy download ${vars.vectors_model_md}"
|
| 65 |
+
|
| 66 |
+
#### PREPROCESSING #####
|
| 67 |
+
- name: "convert"
|
| 68 |
+
help: "Convert the data to spaCy's binary format"
|
| 69 |
+
script:
|
| 70 |
+
- "python scripts/convert.py ${vars.lang} assets/${vars.train}.jsonl corpus"
|
| 71 |
+
- "python scripts/convert.py ${vars.lang} assets/${vars.dev}.jsonl corpus"
|
| 72 |
+
- "python scripts/convert.py ${vars.lang} assets/${vars.test}.jsonl corpus"
|
| 73 |
+
deps:
|
| 74 |
+
- "assets/${vars.train}.jsonl"
|
| 75 |
+
- "assets/${vars.dev}.jsonl"
|
| 76 |
+
- "assets/${vars.test}.jsonl"
|
| 77 |
+
- "scripts/convert.py"
|
| 78 |
+
outputs:
|
| 79 |
+
- "corpus/train.spacy"
|
| 80 |
+
- "corpus/dev.spacy"
|
| 81 |
+
- "corpus/test.spacy"
|
| 82 |
+
|
| 83 |
+
- name: "convert-sents"
|
| 84 |
+
help: "Convert the data to to sentences before converting to spaCy's binary format"
|
| 85 |
+
script:
|
| 86 |
+
- "python scripts/convert_sents.py ${vars.lang} assets/${vars.train}.jsonl corpus"
|
| 87 |
+
- "python scripts/convert_sents.py ${vars.lang} assets/${vars.dev}.jsonl corpus"
|
| 88 |
+
- "python scripts/convert_sents.py ${vars.lang} assets/${vars.test}.jsonl corpus"
|
| 89 |
+
deps:
|
| 90 |
+
- "assets/${vars.train}.jsonl"
|
| 91 |
+
- "assets/${vars.dev}.jsonl"
|
| 92 |
+
- "assets/${vars.test}.jsonl"
|
| 93 |
+
- "scripts/convert.py"
|
| 94 |
+
outputs:
|
| 95 |
+
- "corpus/train.spacy"
|
| 96 |
+
- "corpus/dev.spacy"
|
| 97 |
+
- "corpus/test.spacy"
|
| 98 |
+
|
| 99 |
+
- name: "split"
|
| 100 |
+
help: "Split data into train/dev/test sets"
|
| 101 |
+
script:
|
| 102 |
+
- "python scripts/split.py assets/${vars.annotations_file}"
|
| 103 |
+
deps:
|
| 104 |
+
- "scripts/split.py"
|
| 105 |
+
outputs:
|
| 106 |
+
- "assets/train.jsonl"
|
| 107 |
+
- "assets/dev.jsonl"
|
| 108 |
+
- "assets/test.jsonl"
|
| 109 |
+
|
| 110 |
+
|
| 111 |
+
|
| 112 |
+
#### CONFIG CREATIONS #####
|
| 113 |
+
|
| 114 |
+
- name: "create-config-sm"
|
| 115 |
+
help: "Create a new config with a spancat pipeline component"
|
| 116 |
+
script:
|
| 117 |
+
- "python -m spacy init fill-config configs/base_config_sm.cfg configs/config_sm.cfg"
|
| 118 |
+
deps:
|
| 119 |
+
- configs/base_config_sm.cfg
|
| 120 |
+
outputs:
|
| 121 |
+
- "configs/config.cfg"
|
| 122 |
+
|
| 123 |
+
|
| 124 |
+
#### TRAINING #####
|
| 125 |
+
|
| 126 |
+
### small ###
|
| 127 |
+
- name: "train-sm"
|
| 128 |
+
help: "Train the spancat model"
|
| 129 |
+
script:
|
| 130 |
+
- >-
|
| 131 |
+
python -m spacy train configs/config_sm.cfg --output training/sm/
|
| 132 |
+
--paths.train corpus/train.spacy --paths.dev corpus/dev.spacy
|
| 133 |
+
--training.eval_frequency 50
|
| 134 |
+
--training.patience 0
|
| 135 |
+
--gpu-id ${vars.gpu_id}
|
| 136 |
+
--system.seed ${vars.seed}
|
| 137 |
+
deps:
|
| 138 |
+
- "configs/config_lg.cfg"
|
| 139 |
+
- "corpus/train.spacy"
|
| 140 |
+
- "corpus/dev.spacy"
|
| 141 |
+
outputs:
|
| 142 |
+
- "training/model-best"
|
| 143 |
+
|
| 144 |
+
|
| 145 |
+
### medium ###
|
| 146 |
+
- name: "train-md"
|
| 147 |
+
help: "Train the spancat model with vectors"
|
| 148 |
+
script:
|
| 149 |
+
- >-
|
| 150 |
+
python -m spacy train configs/config_md.cfg --output training/md/
|
| 151 |
+
--paths.train corpus/train.spacy --paths.dev corpus/dev.spacy
|
| 152 |
+
--training.eval_frequency 50
|
| 153 |
+
--training.patience 0
|
| 154 |
+
--gpu-id ${vars.gpu_id}
|
| 155 |
+
--initialize.vectors ${vars.vectors_model_md}
|
| 156 |
+
--system.seed ${vars.seed}
|
| 157 |
+
--components.tok2vec.model.embed.include_static_vectors true
|
| 158 |
+
deps:
|
| 159 |
+
- "configs/config_md.cfg"
|
| 160 |
+
- "corpus/train.spacy"
|
| 161 |
+
- "corpus/dev.spacy"
|
| 162 |
+
outputs:
|
| 163 |
+
- "training/model-best"
|
| 164 |
+
|
| 165 |
+
|
| 166 |
+
### large ###
|
| 167 |
+
- name: "train-lg"
|
| 168 |
+
help: "Train the spancat model with vectors"
|
| 169 |
+
script:
|
| 170 |
+
- >-
|
| 171 |
+
python -m spacy train configs/config_lg.cfg --output training/lg/
|
| 172 |
+
--paths.train corpus/train.spacy --paths.dev corpus/dev.spacy
|
| 173 |
+
--training.eval_frequency 50
|
| 174 |
+
--training.patience 0
|
| 175 |
+
--gpu-id ${vars.gpu_id}
|
| 176 |
+
--initialize.vectors ${vars.vectors_model_lg}
|
| 177 |
+
--system.seed ${vars.seed}
|
| 178 |
+
--components.tok2vec.model.embed.include_static_vectors true
|
| 179 |
+
deps:
|
| 180 |
+
- "configs/config_lg.cfg"
|
| 181 |
+
- "corpus/train.spacy"
|
| 182 |
+
- "corpus/dev.spacy"
|
| 183 |
+
outputs:
|
| 184 |
+
- "training/model-best"
|
| 185 |
+
|
| 186 |
+
|
| 187 |
+
### transformer ###
|
| 188 |
+
- name: "train-trf"
|
| 189 |
+
help: "Train the spancat model"
|
| 190 |
+
script:
|
| 191 |
+
- >-
|
| 192 |
+
python -m spacy train configs/config_trf.cfg --output training/trf/
|
| 193 |
+
--paths.train corpus/train.spacy --paths.dev corpus/dev.spacy
|
| 194 |
+
--training.patience 100
|
| 195 |
+
--gpu-id ${vars.gpu_id}
|
| 196 |
+
--system.seed ${vars.seed}
|
| 197 |
+
deps:
|
| 198 |
+
- "configs/config.cfg"
|
| 199 |
+
- "corpus/train.spacy"
|
| 200 |
+
- "corpus/dev.spacy"
|
| 201 |
+
outputs:
|
| 202 |
+
- "training/model-best"
|
| 203 |
+
|
| 204 |
+
|
| 205 |
+
#### EVALUATION #####
|
| 206 |
+
|
| 207 |
+
### small ###
|
| 208 |
+
- name: "evaluate-sm"
|
| 209 |
+
help: "Evaluate the model and export metrics"
|
| 210 |
+
script:
|
| 211 |
+
- "python -m spacy evaluate training/sm/model-best corpus/test.spacy --output training/sm/metrics.json"
|
| 212 |
+
deps:
|
| 213 |
+
- "corpus/test.spacy"
|
| 214 |
+
- "training/sm/model-best"
|
| 215 |
+
outputs:
|
| 216 |
+
- "training/sm/metrics.json"
|
| 217 |
+
|
| 218 |
+
### medium ###
|
| 219 |
+
|
| 220 |
+
- name: "evaluate-md"
|
| 221 |
+
help: "Evaluate the model and export metrics"
|
| 222 |
+
script:
|
| 223 |
+
- "python -m spacy evaluate training/md/model-best corpus/test.spacy --output training/md/metrics.json"
|
| 224 |
+
deps:
|
| 225 |
+
- "corpus/test.spacy"
|
| 226 |
+
- "training/md/model-best"
|
| 227 |
+
outputs:
|
| 228 |
+
- "training/md/metrics.json"
|
| 229 |
+
|
| 230 |
+
### large ###
|
| 231 |
+
- name: "evaluate-lg"
|
| 232 |
+
help: "Evaluate the model and export metrics"
|
| 233 |
+
script:
|
| 234 |
+
- "python -m spacy evaluate training/lg/model-best corpus/test.spacy --output training/lg/metrics.json"
|
| 235 |
+
deps:
|
| 236 |
+
- "corpus/test.spacy"
|
| 237 |
+
- "training/lg/model-best"
|
| 238 |
+
outputs:
|
| 239 |
+
- "training/lg/metrics.json"
|
| 240 |
+
|
| 241 |
+
|
| 242 |
+
#### PACKAGING #####
|
| 243 |
+
|
| 244 |
+
- name: "build-table"
|
| 245 |
+
help: "builds a nice table from the metrics for README.md"
|
| 246 |
+
script:
|
| 247 |
+
- "python scripts/build-table.py"
|
| 248 |
+
|
| 249 |
+
- name: "readme"
|
| 250 |
+
help: "builds a nice table from the metrics for README.md"
|
| 251 |
+
script:
|
| 252 |
+
- "python scripts/readme.py"
|
| 253 |
+
|
| 254 |
+
- name: package
|
| 255 |
+
help: "Package the trained model as a pip package"
|
| 256 |
+
script:
|
| 257 |
+
- "python -m spacy package training/model-best packages --name ${vars.name} --version ${vars.version} --force"
|
| 258 |
+
deps:
|
| 259 |
+
- "training/model-best"
|
| 260 |
+
outputs_no_cache:
|
| 261 |
+
- "packages/${vars.lang}_${vars.name}-${vars.version}/dist/${vars.lang}_${vars.name}-${vars.version}.tar.gz"
|
| 262 |
+
|
| 263 |
+
- name: clean
|
| 264 |
+
help: "Remove intermediary directories"
|
| 265 |
+
script:
|
| 266 |
+
- "rm -rf corpus/*"
|
| 267 |
+
- "rm -rf training/*"
|
| 268 |
+
- "rm -rf metrics/*"
|
requirements.txt
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
scikit-learn
|
| 2 |
+
tabulate
|
| 3 |
+
pandas
toml
|
scripts/build-table.py
ADDED
|
@@ -0,0 +1,68 @@
| 1 |
+
import os
|
| 2 |
+
import json
|
| 3 |
+
import pandas as pd
|
| 4 |
+
from tabulate import tabulate
|
| 5 |
+
import typer
|
| 6 |
+
|
| 7 |
+
def generate_detailed_markdown_chart():
|
| 8 |
+
# Directories for each model type in the desired order
|
| 9 |
+
model_dirs = [
|
| 10 |
+
('small', 'training/sm'),
|
| 11 |
+
('medium', 'training/md'),
|
| 12 |
+
('large', 'training/lg'),
|
| 13 |
+
('transformer', 'training/trf')
|
| 14 |
+
]
|
| 15 |
+
|
| 16 |
+
# DataFrame to hold the overall data
|
| 17 |
+
overall_df = pd.DataFrame(columns=['Model', 'Precision', 'Recall', 'F-Score'])
|
| 18 |
+
|
| 19 |
+
# DataFrame to hold the per-type data
|
| 20 |
+
per_type_df = pd.DataFrame(columns=['Model', 'Label', 'Precision', 'Recall', 'F-Score'])
|
| 21 |
+
|
| 22 |
+
for model_name, dir_path in model_dirs:
|
| 23 |
+
metrics_file = os.path.join(dir_path, 'metrics.json')
|
| 24 |
+
|
| 25 |
+
# Check if the file exists
|
| 26 |
+
if os.path.exists(metrics_file):
|
| 27 |
+
with open(metrics_file, 'r') as file:
|
| 28 |
+
metrics = json.load(file)
|
| 29 |
+
# Extract overall metrics
|
| 30 |
+
                # DataFrame.append was removed in pandas 2.0, so build a one-row frame and concat
                overall_df = pd.concat([overall_df, pd.DataFrame([{
                    'Model': model_name.capitalize(),
                    'Precision': round(metrics['spans_sc_p'] * 100, 1),
                    'Recall': round(metrics['spans_sc_r'] * 100, 1),
                    'F-Score': round(metrics['spans_sc_f'] * 100, 1)
                }])], ignore_index=True)
|
| 36 |
+
|
| 37 |
+
# Extract per-type metrics
|
| 38 |
+
for label, scores in metrics.get('spans_sc_per_type', {}).items():
|
| 39 |
+
                    per_type_df = pd.concat([per_type_df, pd.DataFrame([{
                        'Model': model_name.capitalize(),
                        'Label': label,
                        'Precision': round(scores['p'] * 100, 1),
                        'Recall': round(scores['r'] * 100, 1),
                        'F-Score': round(scores['f'] * 100, 1)
                    }])], ignore_index=True)
|
| 46 |
+
|
| 47 |
+
# Define the order for models
|
| 48 |
+
model_order = ['Small', 'Medium', 'Large', 'Transformer']
|
| 49 |
+
per_type_df['Model'] = pd.Categorical(per_type_df['Model'], categories=model_order, ordered=True)
|
| 50 |
+
|
| 51 |
+
# Sort the per_type_df first by Label, then by Model
|
| 52 |
+
per_type_df.sort_values(by=['Label', 'Model'], inplace=True)
|
| 53 |
+
|
| 54 |
+
# Convert the DataFrames to Markdown
|
| 55 |
+
overall_markdown = tabulate(overall_df, headers='keys', tablefmt='pipe', showindex=False)
|
| 56 |
+
per_type_markdown = tabulate(per_type_df, headers='keys', tablefmt='pipe', showindex=False)
|
| 57 |
+
|
| 58 |
+
# Write the Markdown tables to a file
|
| 59 |
+
with open('model_comparison.md', 'w') as md_file:
|
| 60 |
+
md_file.write("# Overall Model Performance\n")
|
| 61 |
+
md_file.write(overall_markdown)
|
| 62 |
+
md_file.write("\n\n# Performance per Label\n")
|
| 63 |
+
md_file.write(per_type_markdown)
|
| 64 |
+
|
| 65 |
+
print("Markdown chart created as 'model_comparison.md'")
|
| 66 |
+
|
| 67 |
+
if __name__ == "__main__":
|
| 68 |
+
typer.run(generate_detailed_markdown_chart)
|
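`scripts/build-table.py` reads the `metrics.json` files written by `spacy evaluate` and expects the span-categorization keys `spans_sc_p`, `spans_sc_r`, `spans_sc_f`, and `spans_sc_per_type`. A short sketch that prints the same numbers for a single model; the `training/sm` path assumes the small model has already been trained and evaluated:

```python
# Sketch: print the overall and per-label span scores from one metrics file.
import json

with open("training/sm/metrics.json") as f:
    metrics = json.load(f)

print("precision:", round(metrics["spans_sc_p"] * 100, 1))
print("recall:   ", round(metrics["spans_sc_r"] * 100, 1))
print("f-score:  ", round(metrics["spans_sc_f"] * 100, 1))
for label, scores in metrics.get("spans_sc_per_type", {}).items():
    print(label, round(scores["f"] * 100, 1))
```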
scripts/convert.py
ADDED
|
@@ -0,0 +1,41 @@
| 1 |
+
"""Convert entity annotation from spaCy v2 TRAIN_DATA format to spaCy v3 .spacy format."""
|
| 2 |
+
import srsly
|
| 3 |
+
import typer
|
| 4 |
+
import warnings
|
| 5 |
+
from pathlib import Path
|
| 6 |
+
import spacy
|
| 7 |
+
from spacy.tokens import DocBin
|
| 8 |
+
|
| 9 |
+
def convert(lang: str, input_paths: list[Path], output_dir: Path, spans_key: str = "sc"):
|
| 10 |
+
nlp = spacy.blank(lang)
|
| 11 |
+
nlp.add_pipe("sentencizer")
|
| 12 |
+
|
| 13 |
+
# Ensure output directory exists
|
| 14 |
+
output_dir.mkdir(parents=True, exist_ok=True)
|
| 15 |
+
|
| 16 |
+
# Process each input file
|
| 17 |
+
for input_path in input_paths:
|
| 18 |
+
print(input_path)
|
| 19 |
+
doc_bin = DocBin()
|
| 20 |
+
for annotation in srsly.read_jsonl(input_path):
|
| 21 |
+
text = annotation["text"]
|
| 22 |
+
doc = nlp.make_doc(text)
|
| 23 |
+
spans = []
|
| 24 |
+
for item in annotation["spans"]:
|
| 25 |
+
start = item["start"]
|
| 26 |
+
end = item["end"]
|
| 27 |
+
label = item["label"]
|
| 28 |
+
span = doc.char_span(start, end, label=label)
|
| 29 |
+
if span is None:
|
| 30 |
+
msg = f"Skipping entity [{start}, {end}, {label}] in the following text because the character span '{doc.text[start:end]}' does not align with token boundaries."
|
| 31 |
+
warnings.warn(msg)
|
| 32 |
+
else:
|
| 33 |
+
spans.append(span)
|
| 34 |
+
doc.spans[spans_key] = spans
|
| 35 |
+
doc_bin.add(doc)
|
| 36 |
+
# Write to output file in the specified directory
|
| 37 |
+
output_file = output_dir / f"{input_path.stem}.spacy"
|
| 38 |
+
doc_bin.to_disk(output_file)
|
| 39 |
+
|
| 40 |
+
if __name__ == "__main__":
|
| 41 |
+
typer.run(convert)
|
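The `.spacy` files that `scripts/convert.py` writes can be loaded back with `DocBin` to verify that the spans survived conversion. A minimal sketch, assuming the default `sc` spans key and a blank English pipeline for the vocab:

```python
# Sketch: load corpus/train.spacy and list the stored spans (assumes the "sc" spans key).
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
doc_bin = DocBin().from_disk("corpus/train.spacy")

for doc in doc_bin.get_docs(nlp.vocab):
    for span in doc.spans.get("sc", []):
        print(span.label_, span.text)
```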
scripts/convert_sents.py
ADDED
|
@@ -0,0 +1,58 @@
| 1 |
+
import srsly
|
| 2 |
+
import typer
|
| 3 |
+
import warnings
|
| 4 |
+
from pathlib import Path
|
| 5 |
+
import spacy
|
| 6 |
+
from spacy.tokens import DocBin
|
| 7 |
+
|
| 8 |
+
def convert(lang: str, input_paths: list[Path], output_dir: Path, spans_key: str = "sc"):
|
| 9 |
+
nlp = spacy.blank(lang)
|
| 10 |
+
nlp.add_pipe("sentencizer")
|
| 11 |
+
|
| 12 |
+
# Ensure output directory exists
|
| 13 |
+
output_dir.mkdir(parents=True, exist_ok=True)
|
| 14 |
+
|
| 15 |
+
total_sentences = 0
|
| 16 |
+
|
| 17 |
+
# Process each input file
|
| 18 |
+
for input_path in input_paths:
|
| 19 |
+
print(f"Processing file: {input_path}")
|
| 20 |
+
doc_bin = DocBin()
|
| 21 |
+
|
| 22 |
+
for annotation in srsly.read_jsonl(input_path):
|
| 23 |
+
text = annotation["text"]
|
| 24 |
+
doc = nlp(text) # Process the document to split into sentences
|
| 25 |
+
|
| 26 |
+
for sent in doc.sents:
|
| 27 |
+
# Create a new Doc for the sentence
|
| 28 |
+
sent_doc = nlp.make_doc(sent.text)
|
| 29 |
+
spans = []
|
| 30 |
+
for item in annotation["spans"]:
|
| 31 |
+
# Adjust span start and end for the sentence
|
| 32 |
+
start = item["start"] - sent.start_char
|
| 33 |
+
end = item["end"] - sent.start_char
|
| 34 |
+
label = item["label"]
|
| 35 |
+
|
| 36 |
+
# Only consider spans that are within the sentence
|
| 37 |
+
if start >= 0 and end <= len(sent.text):
|
| 38 |
+
span = sent_doc.char_span(start, end, label=label, alignment_mode="contract")
|
| 39 |
+
if span is None:
|
| 40 |
+
msg = f"Skipping entity [{start}, {end}, {label}] in the following text because the character span '{sent.text[start:end]}' does not align with token boundaries."
|
| 41 |
+
warnings.warn(msg)
|
| 42 |
+
else:
|
| 43 |
+
spans.append(span)
|
| 44 |
+
|
| 45 |
+
# Add sentence to DocBin only if it contains spans
|
| 46 |
+
if spans:
|
| 47 |
+
sent_doc.spans[spans_key] = spans
|
| 48 |
+
doc_bin.add(sent_doc)
|
| 49 |
+
total_sentences += 1
|
| 50 |
+
|
| 51 |
+
# Write to output file in the specified directory
|
| 52 |
+
output_file = output_dir / f"{input_path.stem}.spacy"
|
| 53 |
+
doc_bin.to_disk(output_file)
|
| 54 |
+
|
| 55 |
+
print(f"Total sentences with spans: {total_sentences}")
|
| 56 |
+
|
| 57 |
+
if __name__ == "__main__":
|
| 58 |
+
typer.run(convert)
|
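`scripts/convert_sents.py` re-anchors each character span to its sentence by subtracting `sent.start_char`, then relies on `char_span(..., alignment_mode="contract")` to snap the offsets to token boundaries. A toy illustration of that arithmetic; the text and offsets are made up for the example:

```python
# Toy sketch of the re-anchoring done in convert_sents.py (offsets are illustrative).
import spacy

nlp = spacy.blank("en")
text = "We lived in Paris. The house was small."

# Suppose an annotation marks "house" at characters 23-28 of the full text,
# and the enclosing sentence starts at character 19.
start, end, sent_start_char = 23, 28, 19

sent_doc = nlp.make_doc(text[sent_start_char:])
span = sent_doc.char_span(start - sent_start_char, end - sent_start_char,
                          label="BUILDING", alignment_mode="contract")
print(span.text, span.label_)  # -> house BUILDING
```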
scripts/readme.py
ADDED
|
@@ -0,0 +1,82 @@
| 1 |
+
import os
|
| 2 |
+
import json
|
| 3 |
+
import pandas as pd
|
| 4 |
+
from tabulate import tabulate
|
| 5 |
+
import typer
|
| 6 |
+
|
| 7 |
+
def create_readme_for_model(model_dir: str, project_url: str):
|
| 8 |
+
# Path to the metrics and meta files
|
| 9 |
+
metrics_file = os.path.join(model_dir, 'metrics.json')
|
| 10 |
+
meta_file = os.path.join(model_dir, 'model-best', 'meta.json')
|
| 11 |
+
|
| 12 |
+
# DataFrame for the model's overall performance metrics
|
| 13 |
+
overall_df = pd.DataFrame(columns=['Metric', 'Value'])
|
| 14 |
+
|
| 15 |
+
# DataFrame for the model's per-label performance metrics
|
| 16 |
+
per_label_df = pd.DataFrame(columns=['Label', 'Precision', 'Recall', 'F-Score'])
|
| 17 |
+
|
| 18 |
+
# Read and add metrics data
|
| 19 |
+
if os.path.exists(metrics_file):
|
| 20 |
+
with open(metrics_file, 'r') as file:
|
| 21 |
+
metrics = json.load(file)
|
| 22 |
+
            # DataFrame.append was removed in pandas 2.0, so build the rows and concat
            overall_df = pd.concat([overall_df, pd.DataFrame([
                {'Metric': 'Precision', 'Value': round(metrics['spans_sc_p'] * 100, 1)},
                {'Metric': 'Recall', 'Value': round(metrics['spans_sc_r'] * 100, 1)},
                {'Metric': 'F-Score', 'Value': round(metrics['spans_sc_f'] * 100, 1)}
            ])], ignore_index=True)
|
| 25 |
+
|
| 26 |
+
# Extract and add per-type metrics
|
| 27 |
+
for label, scores in metrics.get('spans_sc_per_type', {}).items():
|
| 28 |
+
                per_label_df = pd.concat([per_label_df, pd.DataFrame([{
                    'Label': label,
                    'Precision': round(scores['p'] * 100, 1),
                    'Recall': round(scores['r'] * 100, 1),
                    'F-Score': round(scores['f'] * 100, 1)
                }])], ignore_index=True)
|
| 34 |
+
|
| 35 |
+
# Sort the per_label_df by Label
|
| 36 |
+
per_label_df.sort_values(by='Label', inplace=True)
|
| 37 |
+
|
| 38 |
+
# Convert the DataFrames to Markdown tables
|
| 39 |
+
overall_markdown = tabulate(overall_df, headers='keys', tablefmt='pipe', showindex=False)
|
| 40 |
+
per_label_markdown = tabulate(per_label_df, headers='keys', tablefmt='pipe', showindex=False)
|
| 41 |
+
|
| 42 |
+
# Read meta.json file
|
| 43 |
+
meta_info = ""
|
| 44 |
+
if os.path.exists(meta_file):
|
| 45 |
+
with open(meta_file, 'r') as file:
|
| 46 |
+
meta_data = json.load(file)
|
| 47 |
+
for key, value in meta_data.items():
|
| 48 |
+
meta_info += f"- **{key}**: {value}\n"
|
| 49 |
+
|
| 50 |
+
# README content
|
| 51 |
+
readme_content = f"""
|
| 52 |
+
# Placing the Holocaust spaCy Model - {os.path.basename(model_dir).capitalize()}
|
| 53 |
+
|
| 54 |
+
This is a spaCy model trained as part of the placingholocaust spaCy project. Training and evaluation code, along with the dataset, can be found at the following URL: [Placingholocaust SpaCy Project]({project_url})
|
| 55 |
+
|
| 56 |
+
## Model Performance
|
| 57 |
+
{overall_markdown}
|
| 58 |
+
|
| 59 |
+
## Performance per Label
|
| 60 |
+
{per_label_markdown}
|
| 61 |
+
|
| 62 |
+
## Meta Information
|
| 63 |
+
{meta_info}
|
| 64 |
+
"""
|
| 65 |
+
|
| 66 |
+
# Write the README content to a file
|
| 67 |
+
readme_file = os.path.join(model_dir, 'README.md')
|
| 68 |
+
with open(readme_file, 'w') as file:
|
| 69 |
+
file.write(readme_content)
|
| 70 |
+
|
| 71 |
+
print(f"README created in {model_dir}")
|
| 72 |
+
|
| 73 |
+
def create_all_readmes(project_url: str):
|
| 74 |
+
# Directories for each model type
|
| 75 |
+
model_dirs = ['training/sm', 'training/md', 'training/lg', 'training/trf']
|
| 76 |
+
|
| 77 |
+
for dir in model_dirs:
|
| 78 |
+
create_readme_for_model(dir, project_url)
|
| 79 |
+
|
| 80 |
+
if __name__ == "__main__":
|
| 81 |
+
project_url = "https://huggingface.co/datasets/placingholocaust/spacy-project"
|
| 82 |
+
typer.run(lambda: create_all_readmes(project_url))
|
scripts/split.py
ADDED
|
@@ -0,0 +1,54 @@
| 1 |
+
import toml
|
| 2 |
+
from collections import Counter
|
| 3 |
+
import srsly
|
| 4 |
+
import typer
|
| 5 |
+
from sklearn.model_selection import train_test_split
|
| 6 |
+
from pathlib import Path
|
| 7 |
+
|
| 8 |
+
def count_labels(data):
|
| 9 |
+
label_counter = Counter()
|
| 10 |
+
for annotation in data:
|
| 11 |
+
labels = [span['label'] for span in annotation['spans']]
|
| 12 |
+
label_counter.update(labels)
|
| 13 |
+
return label_counter
|
| 14 |
+
|
| 15 |
+
def split_data(input_file: Path, train_ratio: float = 0.7, dev_ratio: float = 0.15,
|
| 16 |
+
train_output: Path = Path("assets/train.jsonl"),
|
| 17 |
+
dev_output: Path = Path("assets/dev.jsonl"),
|
| 18 |
+
test_output: Path = Path("assets/test.jsonl"),
|
| 19 |
+
random_state: int = 1):
|
| 20 |
+
# Read data from JSONL
|
| 21 |
+
data = list(srsly.read_jsonl(input_file))
|
| 22 |
+
|
| 23 |
+
# Split data
|
| 24 |
+
test_ratio = 1 - train_ratio - dev_ratio
|
| 25 |
+
train_data, temp_data = train_test_split(data, test_size=(dev_ratio + test_ratio), random_state=random_state)
|
| 26 |
+
dev_data, test_data = train_test_split(temp_data, test_size=test_ratio/(dev_ratio + test_ratio), random_state=random_state)
|
| 27 |
+
|
| 28 |
+
# Count labels in each dataset
|
| 29 |
+
train_labels = count_labels(train_data)
|
| 30 |
+
dev_labels = count_labels(dev_data)
|
| 31 |
+
test_labels = count_labels(test_data)
|
| 32 |
+
|
| 33 |
+
# Write split data to JSONL files
|
| 34 |
+
srsly.write_jsonl(train_output, train_data)
|
| 35 |
+
srsly.write_jsonl(dev_output, dev_data)
|
| 36 |
+
srsly.write_jsonl(test_output, test_data)
|
| 37 |
+
|
| 38 |
+
# Combine label counts into a dictionary
|
| 39 |
+
all_labels = sorted(set(train_labels.keys()) | set(dev_labels.keys()) | set(test_labels.keys()))
|
| 40 |
+
annotations_data = {label: {'Train': train_labels.get(label, 0), 'Dev': dev_labels.get(label, 0), 'Test': test_labels.get(label, 0)} for label in all_labels}
|
| 41 |
+
|
| 42 |
+
# Print the table
|
| 43 |
+
print(f"{'Label':<20}{'Train':<10}{'Dev':<10}{'Test':<10}")
|
| 44 |
+
for label in all_labels:
|
| 45 |
+
print(f"{label:<20}{train_labels.get(label, 0):<10}{dev_labels.get(label, 0):<10}{test_labels.get(label, 0):<10}")
|
| 46 |
+
|
| 47 |
+
# Save the table in annotations.toml
|
| 48 |
+
with open("annotations.toml", "w") as toml_file:
|
| 49 |
+
toml.dump(annotations_data, toml_file)
|
| 50 |
+
|
| 51 |
+
print("Annotations data saved in annotations.toml")
|
| 52 |
+
|
| 53 |
+
if __name__ == "__main__":
|
| 54 |
+
typer.run(split_data)
|
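`scripts/split.py` also dumps the per-split label counts to `annotations.toml`. A small sketch for reading those counts back:

```python
# Sketch: reload the label counts that split.py saves to annotations.toml.
import toml

counts = toml.load("annotations.toml")
for label, splits in counts.items():
    print(f"{label:<20}{splits['Train']:<8}{splits['Dev']:<8}{splits['Test']:<8}")
```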
spacy-project.md
ADDED
|
@@ -0,0 +1,53 @@
| 1 |
+
<!-- WEASEL: AUTO-GENERATED DOCS START (do not remove) -->
|
| 2 |
+
|
| 3 |
+
# 🪐 Weasel Project: Placing the Holocaust spancat pipeline (Span Categorization)
|
| 4 |
+
|
| 5 |
+
A spaCy v3 spancat project for tagging place references (such as BUILDING, POPULATED_PLACE, and INT_SPACE) in Holocaust survivor testimony
|
| 6 |
+
|
| 7 |
+
## 📋 project.yml
|
| 8 |
+
|
| 9 |
+
The [`project.yml`](project.yml) defines the data assets required by the
|
| 10 |
+
project, as well as the available commands and workflows. For details, see the
|
| 11 |
+
[Weasel documentation](https://github.com/explosion/weasel).
|
| 12 |
+
|
| 13 |
+
### ⏯ Commands
|
| 14 |
+
|
| 15 |
+
The following commands are defined by the project. They
|
| 16 |
+
can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run).
|
| 17 |
+
Commands are only re-run if their inputs have changed.
|
| 18 |
+
|
| 19 |
+
| Command | Description |
|
| 20 |
+
| --- | --- |
|
| 21 |
+
| `download-lg` | Download a spaCy model with pretrained vectors |
| `download-md` | Download a spaCy model with pretrained vectors |
| `convert` | Convert the data to spaCy's binary format |
| `convert-sents` | Convert the data to sentences before converting to spaCy's binary format |
| `split` | Split data into train/dev/test sets |
| `create-config-sm` | Create a new config with a spancat pipeline component |
| `train-sm` | Train the spancat model |
| `train-md` | Train the spancat model with vectors |
| `train-lg` | Train the spancat model with vectors |
| `train-trf` | Train the spancat model |
| `evaluate-sm` | Evaluate the model and export metrics |
| `evaluate-md` | Evaluate the model and export metrics |
| `evaluate-lg` | Evaluate the model and export metrics |
| `build-table` | builds a nice table from the metrics for README.md |
| `readme` | builds a README.md for each trained model from its metrics |
| `package` | Package the trained model as a pip package |
| `clean` | Remove intermediary directories |
|
| 29 |
+
|
| 30 |
+
### ⏭ Workflows
|
| 31 |
+
|
| 32 |
+
The following workflows are defined by the project. They
|
| 33 |
+
can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run)
|
| 34 |
+
and will run the specified commands in order. Commands are only re-run if their
|
| 35 |
+
inputs have changed.
|
| 36 |
+
|
| 37 |
+
| Workflow | Steps |
|
| 38 |
+
| --- | --- |
|
| 39 |
+
| `all-sm-sents` | `split` → `convert-sents` → `create-config-sm` → `train-sm` → `evaluate-sm` |
|
| 41 |
+
|
| 42 |
+
### 🗂 Assets
|
| 43 |
+
|
| 44 |
+
The following assets are defined by the project. They can
|
| 45 |
+
be fetched by running [`weasel assets`](https://github.com/explosion/weasel/tree/main/docs/cli.md#open_file_folder-assets)
|
| 46 |
+
in the project directory.
|
| 47 |
+
|
| 48 |
+
| File | Source | Description |
|
| 49 |
+
| --- | --- | --- |
|
| 50 |
+
| [`assets/train.jsonl`](assets/train.jsonl) | Local | Training split of the annotated span data |
| [`assets/dev.jsonl`](assets/dev.jsonl) | Local | Development split of the annotated span data |
|
| 52 |
+
|
| 53 |
+
<!-- WEASEL: AUTO-GENERATED DOCS END (do not remove) -->
|