Mar 26, 2020 - Empirical Linguistics and Computational Language Modeling (LiMo)
Rehbein, Ines; Ruppenhofer, Josef, 2020, "MACE-AL-TREE", https://doi.org/10.11588/data/THPEBR, heiDATA, V1
A method for detecting noise in automatically annotated dependency parse trees, combining MACE (Hovy et al. 2013) with active learning.
Aug 30, 2023 - Propylaeum@heiDATA
Mara, Hubert; Homburg, Timo, 2023, "MaiCuBeDa Hilprecht - Mainz Cuneiform Benchmark Dataset for the Hilprecht Collection", https://doi.org/10.11588/data/QSNIQ2, heiDATA, V1, UNF:6:NXlfO+rwTQYYtmBeze9QUw== [fileUNF]
The Mainz Cuneiform Benchmark Dataset (MaiCuBeDa) contains images of cuneiform signs, words composed of cuneiform signs, lines of cuneiform signs, and annotated individual wedges, based on the HeiCuBeDa Hilprecht dataset: https://doi.org/10.11588/data/IE8CCN. The anno...
Nov 2, 2016 - Perspektive Bibliothek
Boiger, Wolfgang, 2016, "MARC21-MARCXML-Konverter", https://doi.org/10.11588/data/10091, heiDATA, V1
Source code for a Perl implementation of a MARC21-to-MARCXML converter.
Nov 17, 2021 - Medical Informatics
Benning, Nils-Hendrik; Knaup, Petra; Rupp, Rüdiger, 2021, "Measurement Performance of Activity Measurements with Newer Generation of Apple Watch in Wheelchair Users with Spinal Cord Injury: Manually and Device-Counted Pushes [Data]", https://doi.org/10.11588/data/P1HEGO, heiDATA, V1, UNF:6:br0+tP0XWfzu+FGO7V4qLw== [fileUNF]
This dataset contains the results (manually counted pushes and pushes counted by an Apple Watch Series 4) of the study presented in the paper "Accuracy of Activity Measurements with Newer Generations of Apple Watch in Wheelchair Users with Spinal Cord Injury".
Sep 27, 2019
Data publications from the Institute of Medical Informatics
Oct 7, 2019 - Empirical Linguistics and Computational Language Modeling (LiMo)
Marasović, Ana, 2019, "Multilingual Modal Sense Classification using a Convolutional Neural Network [Source Code]", https://doi.org/10.11588/data/ERDJDI, heiDATA, V1
Abstract Modal sense classification (MSC) is a special WSD task that depends on the meaning of the proposition in the modal's scope. We explore a CNN architecture for classifying modal sense in English and German. We show that CNNs are superior to manually designed feature-based cl...
Jan 17, 2024
The main purpose of language is to encode and communicate information of all sorts. Our research focuses on semantics — the study of meaning — and how a machine can assign meaning to utterances: words, sentences and texts, as humans can do. Our work is linguistically informed and... |
Aug 19, 2019 - Empirical Linguistics and Computational Language Modeling (LiMo)
Kotnis, Bhushan, 2019, "Negative Sampling for Learning Knowledge Graph Embeddings", https://doi.org/10.11588/data/YYULL2, heiDATA, V1
Reimplementation of four KG factorization methods and six negative sampling methods. Abstract Knowledge graphs are large, useful, but incomplete knowledge repositories. They encode knowledge through entities and relations which define each other through the connective structure o...
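The entry above concerns negative sampling for knowledge-graph embeddings. The dataset's own code is not reproduced here, but the core idea of corrupting triples can be sketched as follows (a minimal illustration; all names and data are hypothetical, not taken from the resource):

```python
import random

def corrupt_triple(triple, entities, n_samples=5):
    """Generate negative examples by replacing the head or tail
    of a (head, relation, tail) triple with a random entity."""
    h, r, t = triple
    negatives = []
    while len(negatives) < n_samples:
        e = random.choice(entities)
        # corrupt the head or the tail with equal probability
        neg = (e, r, t) if random.random() < 0.5 else (h, r, e)
        if neg != triple:  # skip accidental positives
            negatives.append(neg)
    return negatives

# toy usage
entities = ["Berlin", "Germany", "Paris", "France"]
negs = corrupt_triple(("Berlin", "capital_of", "Germany"), entities)
```

Real negative samplers differ mainly in how the replacement entity is chosen (uniform, frequency-based, adversarial, etc.); the uniform scheme above is only the simplest variant.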
Nov 13, 2023 - Neural Techniques for German Dependency Parsing
Fankhauser, Peter; Do, Bich-Ngoc; Kupietz, Marc, 2023, "Neural Dependency Parser with Biaffine Attention", https://doi.org/10.11588/data/DZ9MUS, heiDATA, V1
This resource contains the code of the dependency parser used in the paper: Fankhauser et al. (2020). "Evaluating a Dependency Parser on DeReKo". The parser is a re-implementation of the neural dependency parser from Dozat and Manning (2017). In addition, we include two pre-trai...
Nov 13, 2023 - Neural Techniques for German Dependency Parsing
Do, Bich-Ngoc; Rehbein, Ines, 2023, "Neural Dependency Parser with Biaffine Attention and BERT Embeddings", https://doi.org/10.11588/data/0U6IWL, heiDATA, V1
This resource contains the code of the dependency parser used in the paper: Do and Rehbein (2020). "Parsers Know Best: German PP Attachment Revisited". The parser is a re-implementation of the neural dependency parser from Dozat and Manning (2017) and is extended to use the BERT... |
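Both parser entries above re-implement the biaffine attention scorer of Dozat and Manning (2017). As a rough illustration of the arc-scoring step only (not the datasets' actual code; all dimensions and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8  # number of tokens, hidden size

# per-token "dependent" and "head" representations (after separate MLPs)
H_dep = rng.normal(size=(n, d))
H_head = rng.normal(size=(n, d))

# biaffine parameters: a bilinear matrix U and a head-bias vector u
U = rng.normal(size=(d, d))
u = rng.normal(size=(d,))

# scores[i, j] = score of token j being the syntactic head of token i:
# a bilinear term plus a linear head-only term
scores = H_dep @ U @ H_head.T + (H_head @ u)[None, :]

# greedy head prediction: one head per token
pred_heads = scores.argmax(axis=1)
```

In the actual parsers, a tree-constrained decoder replaces the greedy `argmax`, and a second biaffine classifier scores dependency labels; the sketch shows only the arc scorer.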