31 to 40 of 84 Results
Nov 13, 2023 - Neural Techniques for German Dependency Parsing
Do, Bich-Ngoc; Rehbein, Ines, 2023, "Neural Rerankers for Dependency Parsing", https://doi.org/10.11588/data/NNGPQZ, heiDATA, V1
This resource contains code for different types of neural rerankers (RCNN, RCNN-shared and GCN) from the paper: Do and Rehbein (2020). "Neural Reranking for Dependency Parsing: An Evaluation". We also include in this resource the pre-trained models of different rerankers on 3 lan...
Nov 13, 2023 - Neural Techniques for German Dependency Parsing
Do, Bich-Ngoc; Rehbein, Ines, 2023, "Neural PP Attachment Disambiguation Systems", https://doi.org/10.11588/data/DKWKGJ, heiDATA, V1
This resource contains code for different types of neural PP attachment disambiguation systems: A disambiguation system inspired by de Kok et al. (2017) but with a ranking loss function. A disambiguation system with biaffine attention similar to the neural dependency parser in...
Nov 13, 2023 - Neural Techniques for German Dependency Parsing
Do, Bich-Ngoc; Rehbein, Ines, 2023, "Neural Dependency Parser with Biaffine Attention and BERT Embeddings", https://doi.org/10.11588/data/0U6IWL, heiDATA, V1
This resource contains the code of the dependency parser used in the paper: Do and Rehbein (2020). "Parsers Know Best: German PP Attachment Revisited". The parser is a re-implementation of the neural dependency parser from Dozat and Manning (2017) and is extended to use the BERT...
Nov 13, 2023 - Neural Techniques for German Dependency Parsing
Fankhauser, Peter; Do, Bich-Ngoc; Kupietz, Marc, 2023, "Neural Dependency Parser with Biaffine Attention", https://doi.org/10.11588/data/DZ9MUS, heiDATA, V1
This resource contains the code of the dependency parser used in the paper: Fankhauser et al. (2020). "Evaluating a Dependency Parser on DeReKo". The parser is a re-implementation of the neural dependency parser from Dozat and Manning (2017). In addition, we include two pre-trai...
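Several entries above re-implement the biaffine attention scorer of Dozat and Manning (2017). As a minimal sketch of the core scoring step (toy dimensions and random vectors are illustrative assumptions, not taken from these resources): each token gets a dependent and a head representation, and a bilinear map with a bias column scores every head candidate for every dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8  # toy sentence length and hidden size

H_dep = rng.normal(size=(n, d))   # dependent representations, one row per token
H_head = rng.normal(size=(n, d))  # head representations, one row per token

# Biaffine scoring in the style of Dozat & Manning (2017):
# append a bias column to the dependent side so U also learns head priors.
U = rng.normal(size=(d + 1, d))
H_dep_1 = np.concatenate([H_dep, np.ones((n, 1))], axis=1)  # (n, d+1)

# scores[i, j] = score of token j as the head of token i
scores = H_dep_1 @ U @ H_head.T  # (n, n)

# Greedy head prediction: pick the best-scoring head per dependent
# (the full parser would decode a tree instead).
pred_heads = scores.argmax(axis=1)  # shape (n,)
```

In the actual parser, `H_dep` and `H_head` come from MLPs on top of a BiLSTM (or BERT) encoder, and decoding enforces a well-formed tree.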
Aug 19, 2019 - Empirical Linguistics and Computational Language Modeling (LiMo)
Kotnis, Bhushan, 2019, "Negative Sampling for Learning Knowledge Graph Embeddings", https://doi.org/10.11588/data/YYULL2, heiDATA, V1
Reimplementation of four KG factorization methods and six negative sampling methods. Abstract: Knowledge graphs are large, useful, but incomplete knowledge repositories. They encode knowledge through entities and relations which define each other through the connective structure o...
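The simplest of the negative sampling schemes this resource covers is uniform corruption: for each true triple, replace the head or tail with a random entity and reject accidental true triples. A minimal sketch under that assumption (the toy entities and triples are illustrative, not the resource's data):

```python
import random

# Toy knowledge graph: (head, relation, tail) triples over a small entity set.
entities = ["berlin", "germany", "paris", "france"]
triples = {("berlin", "capital_of", "germany"),
           ("paris", "capital_of", "france")}

def corrupt(triple, k, rng):
    """Uniform negative sampling: replace the head or tail with a random
    entity, rejecting corruptions that happen to be true triples."""
    h, r, t = triple
    negatives = []
    while len(negatives) < k:
        if rng.random() < 0.5:
            cand = (rng.choice(entities), r, t)  # corrupt the head
        else:
            cand = (h, r, rng.choice(entities))  # corrupt the tail
        if cand not in triples:
            negatives.append(cand)
    return negatives

rng = random.Random(0)
negs = corrupt(("berlin", "capital_of", "germany"), 3, rng)
```

More informed samplers (e.g. those drawing corruptions from nearby or embedding-similar entities) differ only in how `cand` is chosen.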
Oct 7, 2019 - Empirical Linguistics and Computational Language Modeling (LiMo)
Marasović, Ana, 2019, "Multilingual Modal Sense Classification using a Convolutional Neural Network [Source Code]", https://doi.org/10.11588/data/ERDJDI, heiDATA, V1
Abstract: Modal sense classification (MSC) is a special WSD task that depends on the meaning of the proposition in the modal's scope. We explore a CNN architecture for classifying modal sense in English and German. We show that CNNs are superior to manually designed feature-based cl...
Nov 17, 2021 - Medical Informatics
Benning, Nils-Hendrik; Knaup, Petra; Rupp, Rüdiger, 2021, "Measurement Performance of Activity Measurements with Newer Generation of Apple Watch in Wheelchair Users with Spinal Cord Injury: Manually and Device-Counted Pushes [Data]", https://doi.org/10.11588/data/P1HEGO, heiDATA, V1, UNF:6:br0+tP0XWfzu+FGO7V4qLw== [fileUNF]
This dataset contains the results (manually counted pushes and pushes counted by Apple Watch Series 4) of the study presented in the paper "Accuracy of Activity Measurements with Newer Generations of Apple Watch in Wheelchair Users with Spinal Cord Injury"
Nov 2, 2016 - Perspektive Bibliothek
Boiger, Wolfgang, 2016, "MARC21-MARCXML-Konverter", https://doi.org/10.11588/data/10091, heiDATA, V1
Source code for a Perl implementation of a MARC21-to-MARCXML converter.
Aug 30, 2023 - Propylaeum@heiDATA
Mara, Hubert; Homburg, Timo, 2023, "MaiCuBeDa Hilprecht - Mainz Cuneiform Benchmark Dataset for the Hilprecht Collection", https://doi.org/10.11588/data/QSNIQ2, heiDATA, V1, UNF:6:NXlfO+rwTQYYtmBeze9QUw== [fileUNF]
The Mainz Cuneiform Benchmark Dataset (MaiCuBeDa) contains images of cuneiform signs, words composed of cuneiform signs, lines of cuneiform signs, and annotated individual wedges, based on the HeiCuBeDa Hilprecht dataset: https://doi.org/10.11588/data/IE8CCN. The anno...
Mar 26, 2020 - Empirical Linguistics and Computational Language Modeling (LiMo)
Rehbein, Ines; Ruppenhofer, Josef, 2020, "MACE-AL-TREE", https://doi.org/10.11588/data/THPEBR, heiDATA, V1
A method for detecting noise in automatically annotated dependency parse trees, combining MACE (Hovy et al. 2013) with Active Learning.