At Exaia, we're not just embracing the future of AI; we're shaping it, with pioneering Human-Centered and Explainable AI (HC-XAI) research at our core.
A rigorous, multidisciplinary approach
Exaia's team is firmly committed to conducting cutting-edge research at the intersection of Natural Language Processing, Machine Learning, Speech Recognition, and Computational Social Sciences, with a strong focus on HC-XAI for real-world applications. This commitment is reflected in our extensive portfolio of research publications. The team's AI-powered models have achieved outstanding results on international benchmark datasets, setting the state of the art in personality prediction, emotion recognition, mental illness detection, and more. These findings and insights drive the development of Exaia's AI-powered SaaS products.
Publications
Frontiers in Psychiatry
Toward explainable AI (XAI) for mental health detection based on language behavior
All Publications
December 7, 2023-Elma Kerz, Sourabh Zanwar, Yu Qiao, Daniel Wiechmann
This work represents significant progress in bridging the gap between advanced AI methods and the critical need for transparency and interpretability in psychiatric diagnosis and prediction.
Keywords
explainable ai (xai), ai for health, precision medicine, digital health, psychiatric diagnosis and prediction, machine learning, deep learning, digital biomarkers, digital phenotyping
July 9, 2023-Sourabh Zanwar, Xiaofei Li, Daniel Wiechmann, Yu Qiao, Elma Kerz
Extensive experiments explore deep learning fusion strategies, including feature-level, model, and task fusion, with evaluation across two datasets covering five mental health conditions (attention deficit hyperactivity disorder, anxiety, bipolar disorder, depression and psychological stress).
Keywords
deep learning, digital biomarkers, nlp, ml/dl, information fusion models, digital psychiatry, llms, bilstm
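To make the feature-level fusion strategy concrete, here is a minimal PyTorch sketch (a hedged illustration; dimensions, names, and the classifier head are assumptions, not the paper's code). Engineered psycholinguistic features are concatenated with a pooled transformer embedding before classification; model fusion and task fusion would instead combine model predictions or share training across conditions.

```python
# Illustrative sketch of feature-level fusion (not the authors' code):
# concatenate a pooled transformer embedding with engineered
# psycholinguistic features, then classify the joint representation.
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    def __init__(self, text_dim=768, psy_dim=300, hidden=256, n_classes=5):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + psy_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, text_emb, psy_feats):
        # Feature-level fusion: join the two views before the classifier.
        fused = torch.cat([text_emb, psy_feats], dim=-1)
        return self.classifier(fused)

model = FeatureLevelFusion()
logits = model(torch.randn(8, 768), torch.randn(8, 300))  # dummy batch of 8
print(logits.shape)  # torch.Size([8, 5])
```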
May 2, 2023-Sourabh Zanwar, Daniel Wiechmann, Yu Qiao, Elma Kerz
We present SMHD-GER, a carefully constructed German dataset, alongside benchmark models incorporating (psycho-)linguistic features and BERT-German. This resource facilitates advanced research into digital biomarkers derived from verbal behavior.
Keywords
precision medicine, digital health, psychiatric diagnosis and prediction, machine learning, deep learning, digital biomarkers, hybrid models
December 5, 2023-Sourabh Zanwar, Daniel Wiechmann, Yu Qiao, Elma Kerz
We present our system for the SMM4H23 Shared Task on binary classification of English Reddit posts self-reporting social anxiety disorder. We contrast hybrid and ensemble models employing domain-adapted transformers and BiLSTM neural networks, achieving 89.31% F1 on the validation set and 83.76% F1 on the test set.
Keywords
precision medicine, digital health, digital biomarkers, hybrid models, ensemble models, llms, bilstm, social anxiety disorder, nlp, ml/dl
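As a toy illustration of the ensemble side of this comparison, the sketch below soft-votes the class probabilities of two systems; the equal weighting is a hypothetical choice, not the submitted configuration.

```python
# Toy soft-voting ensemble: weighted average of class posteriors from a
# domain-adapted transformer and a feature-based BiLSTM (illustrative).
import numpy as np

def soft_vote(p_transformer: np.ndarray, p_bilstm: np.ndarray,
              w: float = 0.5) -> np.ndarray:
    """Blend the two systems' class probabilities with weight w."""
    return w * p_transformer + (1.0 - w) * p_bilstm

p1 = np.array([[0.8, 0.2], [0.4, 0.6]])  # transformer posteriors
p2 = np.array([[0.6, 0.4], [0.3, 0.7]])  # BiLSTM posteriors
print(soft_vote(p1, p2).argmax(axis=1))  # ensemble predictions: [0 1]
```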
December 8, 2023-Yu Qiao, Xiaofei Li, Daniel Wiechmann, Elma Kerz
Our work advances text simplification (TS) by enhancing explainable complexity prediction through psycholinguistic features and pre-trained language models, while also extending a state-of-the-art Seq2Seq TS model to enable explicit control of ten attributes. Experimental results demonstrate improved complexity prediction and significant performance gains across different settings.
Keywords
text simplification, explainable ai, llms, seq2seq ts model, transformer models, engineered features, attribute control
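A common way to realize explicit attribute control in a Seq2Seq simplifier is to prepend control tokens encoding target attribute values to the source sentence. The sketch below illustrates the idea with two hypothetical tokens; the paper controls ten attributes, and the token names here are invented for illustration.

```python
# Illustrative control-token prefixing for controllable simplification.
# Token names (<LEN_x>, <LEX_x>) are hypothetical; a real system defines
# its own control-token vocabulary and trains the model to honor it.
def add_control_tokens(source: str, length_ratio: float,
                       lexical_complexity: float) -> str:
    return f"<LEN_{length_ratio:.1f}> <LEX_{lexical_complexity:.1f}> {source}"

src = "The committee deliberated extensively before reaching a verdict."
print(add_control_tokens(src, 0.8, 0.4))
# <LEN_0.8> <LEX_0.4> The committee deliberated extensively before ...
```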
December 8, 2022-Xiaofei Li, Daniel Wiechmann, Yu Qiao, Elma Kerz
We present our contribution to the TSAR-2022 Shared Task on Lexical Simplification, which enhances the LSBert system with a RoBERTa transformer model for candidate selection and introduces a new feature weighting scheme for substitution ranking. Achieving a 5.9% accuracy boost, our system secures the second position among 33 ranked solutions.
Keywords
explainable ai, unsupervised lexical simplification, pretrained encoders, text complexity, transformer model, readability, controllable text simplification
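The candidate-selection step can be pictured with a masked language model: mask the complex word and take the model's top proposals, which a full system then re-ranks with additional features. A minimal sketch using Hugging Face transformers, as an illustration of the general technique rather than the submitted system:

```python
# Generate substitution candidates for a complex word by masking it and
# querying a RoBERTa masked language model (illustrative sketch).
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
sentence = "The verdict was a <mask> outcome for the defendant."
# Each candidate carries an LM score; a full lexical-simplification system
# would re-rank these with features such as frequency and context fit.
for cand in fill(sentence, top_k=5):
    print(f"{cand['token_str'].strip():>12}  {cand['score']:.3f}")
```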
November 7, 2022-Sourabh Zanwar, Daniel Wiechmann, Yu Qiao, Elma Kerz
Recent efforts to predict emotions from text encounter challenges in real-world applications due to poor generalizability across domains. We introduce a novel approach combining transformer models with Bidirectional Long Short-Term Memory networks trained on psycholinguistic features. The proposed hybrid models exhibit improved out-of-domain robustness compared to standard transformer-based approaches and competitive performance on in-domain data.
Keywords
generalizability, out-of-domain robustness, feature engineering, bilstm, emotion detection, ml/dl, transformer models, hybrid models, explainable ai (xai)
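The out-of-domain robustness claim boils down to an evaluation protocol like the one sketched below: train on one domain and test on held-out domains. The data is synthetic and the domain names are stand-ins, not the paper's corpora.

```python
# Toy out-of-domain evaluation: fit on one domain, test on shifted domains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_domain(shift: float, n: int = 400):
    """Synthetic domain whose decision boundary moves with `shift`."""
    X = rng.normal(loc=shift, size=(n, 8))
    y = (X[:, 0] + 0.3 * rng.normal(size=n) > shift).astype(int)
    return X, y

domains = {"tweets": make_domain(0.0), "blogs": make_domain(0.4),
           "dialogs": make_domain(0.8)}
clf = LogisticRegression(max_iter=1000).fit(*domains["tweets"])
for name, (X_te, y_te) in domains.items():
    tag = "in-domain " if name == "tweets" else "out-domain"
    print(f"{tag} {name:<8} accuracy {clf.score(X_te, y_te):.3f}")
```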
October 1, 2022-Elma Kerz, Stella Neumann, Paula Niemietz
We aim to uncover how individuals adapt their language use to different communication environments via quantitative analysis of linguistic complexity measures across writing tasks.
Keywords
linguistic adaptation, second language learners, complexity measures, socio-cognitive factors, personality traits
October 12, 2022-Sourabh Zanwar, Daniel Wiechmann, Yu Qiao, Elma Kerz
We detect six mental disorders by leveraging BiLSTM networks trained on psycholinguistic features for explainable detection, combining them with Transformers to enhance prediction accuracy, and uncovering nuanced linguistic markers of specific conditions.
Keywords
nlp, ml/dl, mental disorders, feature engineering, mental health prediction, digital health, social media data, explainable ai
October 12, 2022-Sourabh Zanwar, Daniel Wiechmann, Yu Qiao, Elma Kerz
Our submission to the SMM4H22 focuses on detecting self-reported chronic stress on Twitter. Combining a pre-trained transformer model (RoBERTa) with a Bidirectional Long Short-Term Memory (BiLSTM) network trained on diverse psycholinguistic features, we address the class imbalance issue by augmenting the training dataset with another dataset used for stress classification in social media.
Keywords
nlp, ml/dl, mental disorders, feature engineering, mental health prediction, digital health, social media mining for health, data augmentation
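A minimal sketch of the rebalancing idea, with hypothetical column names: top up the minority class with positive examples from an external stress-classification corpus until the two classes are roughly even.

```python
# Illustrative class rebalancing by merging external positive examples.
import pandas as pd

def augment_minority(train: pd.DataFrame, external: pd.DataFrame,
                     label_col: str = "label", minority: int = 1) -> pd.DataFrame:
    n_major = int((train[label_col] != minority).sum())
    n_minor = int((train[label_col] == minority).sum())
    need = max(0, n_major - n_minor)  # extra positives required for balance
    extra = external[external[label_col] == minority].head(need)
    return pd.concat([train, extra], ignore_index=True)

train = pd.DataFrame({"text": ["a", "b", "c"], "label": [0, 0, 1]})
ext = pd.DataFrame({"text": ["d", "e"], "label": [1, 1]})
print(augment_minority(train, ext))
```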
June 20, 2022-Elma Kerz, Yu Qiao, Sourabh Zanwar, Daniel Wiechmann
We introduce SPADE, the first dataset featuring continuous samples of argumentative speech labeled with the Big Five personality traits and enriched with socio-demographic data. Leveraging 436 engineered features and transformer models, our benchmark models aim to facilitate research in automatic personality detection, with feature ablation experiments shedding light on the predictive power of different types of features for individual personality traits.
Keywords
nlp, ml/dl, personality assessment, xai techniques, feature ablation, feature engineering, explainable ai, data resource
May 26, 2022-Elma Kerz, Yu Qiao, Sourabh Zanwar, Daniel Wiechmann
We introduce two key advancements in personality prediction from verbal behavior: a comprehensive set of psycholinguistic features and hybrid models combining BERT with BiLSTM networks trained on the within-text distributions of these features. Our models, evaluated on benchmark datasets for Big Five traits and MBTI types, outperform existing approaches, with ablation experiments assessing feature impact.
Keywords
nlp, ml/dl, personality prediction, big five personality traits, myers-briggs type indicator (mbti), hybrid models, transformer lm, bilstm, benchmark datasets, explainable ai (xai), feature ablation
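A minimal PyTorch sketch of such a hybrid, with all shapes and names as illustrative assumptions: a BiLSTM reads the within-text sequence of psycholinguistic feature measurements, and its summary vector is fused with a pooled BERT embedding before the prediction head.

```python
# Illustrative hybrid: BiLSTM over within-text feature distributions,
# fused with a pooled transformer embedding (not the authors' code).
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, feat_dim=64, lstm_hidden=128, text_dim=768, n_out=5):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * lstm_hidden + text_dim, n_out)

    def forward(self, feat_seq, text_emb):
        _, (h, _) = self.bilstm(feat_seq)            # h: (2, batch, hidden)
        lstm_repr = torch.cat([h[0], h[1]], dim=-1)  # both directions
        return self.head(torch.cat([lstm_repr, text_emb], dim=-1))

model = HybridClassifier()
out = model(torch.randn(4, 20, 64), torch.randn(4, 768))  # dummy inputs
print(out.shape)  # torch.Size([4, 5])
```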
May 22, 2022-Daniel Wiechmann, Yu Qiao, Elma Kerz, Justus Mattern
We investigate the use of NLP and ML to predict gaze patterns in naturalistic reading, with a focus on the impact of text characteristics on transformer-based language models (BERT and GPT-2). Through experiments, we determine how feature selection and model architecture influence eye-tracking prediction, while employing SP-LIME to assess the relative importance of different feature groups.
Keywords
naturalistic reading, behavioral measures, eye-movement, nlp, ml/dl, llms, benchmark datasets, explainable ai (xai), feature ablation
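SP-LIME picks a small, diverse set of instance-level LIME explanations whose aggregated weights indicate which features matter overall. The sketch below shows the mechanics on toy tabular data with the `lime` package; the model, labels, and feature names are stand-ins, not the study's setup.

```python
# Toy SP-LIME run: explain a classifier and pick representative explanations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
from lime import submodular_pick

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy fixation-duration labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = LimeTabularExplainer(
    X, feature_names=["surprisal", "word_length", "frequency", "position"],
    class_names=["short_fixation", "long_fixation"])
sp = submodular_pick.SubmodularPick(explainer, X, clf.predict_proba,
                                    sample_size=100, num_features=4,
                                    num_exps_desired=3)
for exp in sp.sp_explanations:  # a diverse, representative explanation set
    print(exp.as_list())
```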
November 30, 2021-Yu Qiao, Sourabh Zanwar, Rishab Bhattacharyya, Daniel Wiechmann, Wei Zhou, Elma Kerz, Ralf Schlüter
We predict TED talk-style affective ratings using a crowdsourced dataset of argumentative speech we created, elicited through debating prompts. Employing fine-tuning on a pre-trained model, we combine fluency features from automatic speech recognition with linguistic features. We employ SP-LIME to assess feature importance.
Keywords
affective computing, argumentative speech, nlp, ml/dl, classification task, pre-trained model, human raters, automatic speech recognition (asr), xai, feature ablation, sp-lime
November 10, 2021-Justus Mattern, Yu Qiao, Daniel Wiechmann, Elma Kerz, Markus Strohmaier
We address the critical need for benchmark datasets to combat COVID-19-related disinformation, particularly in the German language context. Introducing FANG-COVID, our dataset comprises real and fake German news articles, complemented by Twitter propagation data. Additionally, we propose an explainable model for fake news detection, comparing its performance to black-box models and conducting feature ablation to evaluate human-interpretable features.
Keywords
deception detection, fake news detection, twitter, explainable ai (xai), feature ablation, interpretable model, benchmark dataset
August 30, 2021-Yu Qiao, Xuefeng Yin, Daniel Wiechmann, Elma Kerz
Alzheimer's disease (AD) is a neurodegenerative disease of rapidly increasing prevalence, making automated detection methods crucial given the high cost of diagnosis. In our work, linguistic features and pre-trained models are used to achieve 83.1% accuracy in detecting AD, with an ensemble model showing robust performance.
Keywords
alzheimer dementia, precision diagnostics, digital health, digital biomarkers, hybrid models, ensemble models, ml/dl, nlp, text features, speech fluency features
August 30, 2021-Yu Qiao, Wei Zhou, Elma Kerz, Ralf Schlüter
We evaluate the impacts of state-of-the-art Automatic Speech Recognition (ASR) systems on downstream text analytics systems in challenging scenarios, particularly focusing on spontaneously produced speech by second language learners. We assess 30 measures that capture the complexity, diversity, and sophistication of language production, revealing distinct effects of ASR on specific complexity measures while also accounting for variations in task types.
Keywords
text analytics, speech recognition, second language, asr-text analytics interface, hybrid hidden markov model
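As a toy illustration of why ASR output can distort downstream text analytics, compare a simple complexity measure on a reference transcript and on a hypothetical ASR hypothesis of the same utterance (both strings are invented examples).

```python
# Toy comparison of a lexical-diversity measure (type-token ratio) on a
# reference transcript vs. a hypothetical ASR hypothesis.
def ttr(text: str) -> float:
    toks = text.lower().split()
    return len(set(toks)) / len(toks)

reference = ("I think studying abroad broadens your perspective "
             "and I think it builds confidence")
asr_hyp = ("I think studying a broad broadens your perspective "
           "and I think it builds confidence")
print(f"reference TTR: {ttr(reference):.2f}")  # 0.85
print(f"ASR TTR:       {ttr(asr_hyp):.2f}")    # 0.86
```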
April 20, 2021-Elma Kerz, Daniel Wiechmann, Yu Qiao, Emma Tseng, Marcus Ströbel
We predict second language learner proficiency using machine learning. Introducing complexity contours from a sliding window method, we employ recurrent neural network (RNN) classifiers to capture sequential information. Results show RNN classifiers trained on complexity contours outperform traditional methods, and sensitivity-based pruning highlights feature importance, validating CEFR levels.
Keywords
cefr-level assessment, ml/dl, nlp, sliding window, feature engineering, xai, feature importance
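Sketch of the contour-classification idea: each text becomes a sequence of sliding-window complexity measurements, and a recurrent classifier maps that sequence to a proficiency level. Shapes, hyperparameters, and the GRU choice are illustrative, not the paper's configuration.

```python
# Illustrative RNN classifier over complexity contours.
import torch
import torch.nn as nn

class ContourRNN(nn.Module):
    def __init__(self, n_measures=32, hidden=64, n_levels=6):  # e.g. CEFR A1-C2
        super().__init__()
        self.rnn = nn.GRU(n_measures, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_levels)

    def forward(self, contours):       # (batch, n_windows, n_measures)
        _, h = self.rnn(contours)
        return self.out(h[-1])         # classify from the final hidden state

model = ContourRNN()
scores = model(torch.randn(8, 40, 32))  # 8 texts, 40 windows, 32 measures
print(scores.shape)  # torch.Size([8, 6])
```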
April 19, 2021-Elma Kerz, Yu Qiao, Daniel Wiechmann
Effective communication is paramount to personal, academic and professional success as it fosters social relationships and facilitates knowledge sharing. Prior research has addressed its verbal and nonverbal aspects, including auditory measures and nonverbal cues. Our work extends this research by employing a comprehensive set of psycholinguistic features in a multi-label classification task to predict affective ratings of online viewers across 14 categories.
Keywords
affective computing, public speaking, multi-label classification, ted talks, feature engineering, xai, feature ablation
December 12, 2020-Elma Kerz, Yu Qiao, Daniel Wiechmann
The widespread issue of fake news involves false or misleading information presented as legitimate news, undermining fact-based reporting and hindering accurate perceptions for political actors, authorities, media, and citizens. While existing language-based methods for detecting fake news often rely on opaque models with high accuracy, there is a growing need to adopt transparent (explainable) models, particularly in critical sectors like healthcare, finance, the military, and news. This study employs interpretable features derived from multi-disciplinary language approaches to train bidirectional recurrent neural network classification models. Applied to benchmark datasets, our approach yields results comparable to top-performing black-box models, highlighting its potential. We perform ablation experiments to assess the significance of human-interpretable features in distinguishing fake news from genuine news.
Keywords
deception detection, fake news detection, black-box models, white-box models, explainable ai (xai), feature ablation, human-interpretable features, disinformation
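The ablation logic can be shown in a few lines: drop one feature group at a time, retrain, and compare against the full model. The data is synthetic and the group names are stand-ins for the paper's feature sets.

```python
# Toy feature-group ablation: measure the accuracy drop when each group
# of features is removed before training.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
groups = {"syntactic": slice(0, 10), "lexical": slice(10, 20),
          "register": slice(20, 30)}
X = rng.normal(size=(400, 30))
y = (X[:, :10].sum(axis=1) > 0).astype(int)  # make "syntactic" informative

full = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"full model accuracy {full:.3f}")
for name, cols in groups.items():
    keep = np.delete(np.arange(X.shape[1]), np.arange(cols.start, cols.stop))
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X[:, keep], y, cv=5).mean()
    print(f"without {name:<9} accuracy {score:.3f}")
```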
October 09, 2020-Elma Kerz, Daniel Wiechmann, Felicity Frinsel, Morten H. Christiansen
Research spanning two decades highlights statistical learning mechanisms aiding language processing in children and adults, though often utilizing simplified artificial languages. We investigate the sensitivity of both native and non-native adult speakers to authentic language statistics across various English registers, revealing their ability to adapt to multiple distributional statistics during online processing.
Keywords
statistical learning, human language processing, distributional statistics, adaptation
April 09, 2020-Marcus Ströbel, Elma Kerz, Daniel Wiechmann
Recent studies show significant differences in first language (L1) acquisition across different language components and life stages, which has led to an investigation of their impact on second language (L2) acquisition. Previous research has focused mainly on L1-L2 reading comprehension; here we extend it to the domain of L1-L2 writing. The results indicate strong correlations between L1 and L2 complexity on various linguistic measures and emphasize the persistent influence of the L1 on L2 ability.
Keywords
language attainment, inter-individual differences, sliding window, mixed-effects modeling, text analytics, feature engineering
July 10, 2020-Elma Kerz, Yu Qiao, Daniel Wiechmann, Marcus Ströbel
We present a novel methodology for investigating the developmental trajectory of writing skills in English and German schoolchildren across various grade levels, utilizing classification tasks. Using two benchmark corpora, we leverage "complexity contours," sequences of measurements capturing the evolution of linguistic complexity within written texts. Integral to our approach is the application of Recurrent Neural Network (RNN) classifiers, adept at processing the sequential information inherent in these complexity contours.
Keywords
writing development, ml, sensitivity-based pruning, feature importance, xai, sliding window, rnn, benchmark datasets
May 20, 2020-Elma Kerz, Fabio Pruneri, Daniel Wiechmann, Yu Qiao, Marcus Ströbel
The purpose of our work is twofold: (1) to introduce, to our knowledge, the largest available resource of keystroke logging (KSL) data generated with Etherpad (https://etherpad.org/), an open-source, web-based collaborative real-time editor, capturing the dynamics of second language (L2) production; and (2) to relate the behavioral KSL data to indices of syntactic and lexical complexity of the produced texts, obtained from a tool that implements a sliding-window approach to capture the progression of complexity within a text.
Keywords
keystroke logging, nlp, text analytics, sliding window, feature engineering
April 10, 2020-Elma Kerz, Daniel Wiechmann
We investigate the relationship between verbal working memory (vWM), second language (L2) experience, and L2 sentence comprehension in L2 learners. Results indicate positive correlations between vWM and L2 experience, with vWM significantly influencing L2 sentence comprehension across different sentence types, underscoring their interconnection.
Keywords
verbal working memory, human language processing, inter-individual differences, amount of linguistic experience, second language
December 17, 2020-Elma Kerz, Daniel Wiechmann
We provide an overview of the experience-related, cognitive, and affective factors influencing individual differences in both native and non-native language attainment. We address recent topics in cognitive science research and introduce methodological paradigms and statistical techniques to model variability in language performance data and connect it with individual differences factors.
Keywords
inter-individual differences, human language processing, statistical modeling, cognitive psychology
August 15, 2020-Elma Kerz, Daniel Wiechmann, Tandis Silkens
Evidence indicates a strong connection between individual differences (ID) in statistical learning (SL) ability and language performance in native speakers, with potential implications for second language (L2) learners. We address the interaction between SL, personality traits, and L2 experience, revealing a nuanced relationship in which SL's impact on language comprehension is moderated by personality traits, highlighting the complexity of ID factors in L2 comprehension.
Keywords
inter-individual differences, human language processing, statistical modeling, cognitive psychology, personality traits, L2 experience
September 8, 2020-Elma Kerz, Andreas Burgdorf, Daniel Wiechmann, Stefan Meeger, Yu Qiao, Christian Kohlschein, Tobias Meisen
Learning analytics and educational data mining have garnered increased attention as valuable tools for understanding human learning processes. This paper presents an adaptive language learning system aimed at monitoring and enhancing academic vocabulary skills, resulting in comprehensive longitudinal data on individual vocabulary development trajectories. We pursue a dual objective: firstly, to investigate the pace and pattern of vocabulary growth trajectories, and secondly, to elucidate the influence of various individual differences factors on the variability observed in these growth trajectories.
Keywords
edtech, elearning, educational data mining, inter-individual differences, growth numbers, longitudinal data, personality, verbal working memory, attention
August 5, 2020-Elma Kerz, Arndt Heilmann, Stella Neumann
We investigate whether intermediate-advanced L2 speakers of English develop sensitivity to the frequencies of multiword sequences (MWS), akin to native speakers. Using eye-tracking, we replicated MWS frequency effects found in native speakers, revealing faster processing of sentences containing MWS compared to control items, indicating similar language processing mechanisms between native and non-native speakers.
Keywords
human language processing, statistical learning, eye tracking
September 15, 2019-Elma Kerz, Daniel Wiechmann
Recent research has focused primarily on demonstrating that both native and non-native speakers learn the statistical properties of multiword sequences (MWS), but there is also increasing emphasis on accounting for individual differences (IDs) in language processing. We investigate the relationship between individual variability in online processing of MWS and statistical learning (SL) ability using a within-subject design and a series of SL tasks in different modalities.
Keywords
human language processing, statistical learning, inter-individual differences
July 13, 2018-Marcus Ströbel, Elma Kerz, Daniel Wiechmann, Yu Qiao
We present a novel method that evaluates text complexity locally as "complexity contours" rather than using global summary statistics. The approach proves effective in automatically categorizing texts from different text genres.
Keywords
text genre classification, linguistic complexity, sliding window
April 03, 2017-Elma Kerz, Daniel Wiechmann, Florian B. Riedel
We investigate the potential for implicit learning of novel form-meaning mappings outside controlled laboratory settings, utilizing crowdsourcing experiments, revealing that awareness at both noticing and understanding levels facilitated learning outcomes.
Keywords
implicit learning, crowdsourcing experiments, noticing, awareness, human language processing
October 26, 2017-Elma Kerz, Daniel Wiechmann
We investigate the role of statistical multi-word sequences in native and second language learning and explore how individual differences in cognitive and affective factors influence the processing of these sequences.
Keywords
inter-individual differences, human language processing, working memory, personality, multiword statistics
December 11, 2016-Marcus Ströbel, Elma Kerz, Daniel Wiechmann, Stella Neumann
We introduce a novel method for automatically assessing text complexity, employing a sliding-window technique to monitor complexity distribution within a text. This distribution is represented by "complexity contours," derived from a series of measurements for a chosen linguistic complexity metric. Our approach is realized in the form of CoCoGen – Complexity Contour Generator, a computational tool currently supporting 32 indices of linguistic complexity. This paper aims to (1) outline the design of our computational tool based on the sliding-window technique and (2) demonstrate its application in the domain of second language (L2) learning, particularly in L2 writing assessment.
Keywords
text analytics, sliding-window, complexity contours
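A minimal sketch of the sliding-window idea (an illustration, not CoCoGen itself): compute one complexity metric, here a windowed type-token ratio, at each position as the window moves through the text; the resulting series is the text's complexity contour.

```python
# Illustrative complexity contour: windowed type-token ratio.
def complexity_contour(text: str, window: int = 10):
    tokens = text.lower().split()
    return [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]

sample = ("the cat sat on the mat and then the cat ran after "
          "a small mouse that lived under the old wooden floor")
print([round(v, 2) for v in complexity_contour(sample)])
```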