Jeniya Tabassum, Sydney Lee, Wei Xu, Alan Ritter
This paper presents the results of the wet lab information extraction shared task at WNUT 2020. The task consisted of two subtasks: (1) a Named Entity Recognition (NER) task with 13 participants and (2) a Relation Extraction (RE) task with 2 participants. We describe the task, the data annotation process, and the corpus statistics, and provide a high-level overview of the participating systems for each subtask.
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Named Entity Recognition (NER) | WNUT 2020 | F1 | 65.73 | Baseline |
| Named Entity Recognition (NER) | WNUT 2020 | Precision | 70.06 | Baseline |
| Named Entity Recognition (NER) | WNUT 2020 | Recall | 61.91 | Baseline |
| Relation Extraction | WNUT 2020 | F1 | 72.50 | Baseline |
| Relation Extraction | WNUT 2020 | Precision | 80.10 | Baseline |
| Relation Extraction | WNUT 2020 | Recall | 66.21 | Baseline |
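As a sanity check on the baseline numbers above, F1 is the harmonic mean of precision and recall, so each F1 value should be recoverable from its corresponding precision/recall pair. A minimal sketch (the `f1` helper is illustrative, not from the shared task code):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (all values in percent)."""
    return 2 * precision * recall / (precision + recall)

# Baseline scores from the results table (precision, recall).
re_f1 = f1(80.1, 66.21)    # Relation Extraction
ner_f1 = f1(70.06, 61.91)  # Named Entity Recognition

print(f"RE F1:  {re_f1:.2f}")   # matches the reported 72.5
print(f"NER F1: {ner_f1:.2f}")  # matches the reported 65.73
```

Both computed values agree with the reported F1 scores, confirming the table is internally consistent.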