Accepted Presentations
- Silje Susanne Alvestad, Nele Põldvere, Asbjørn Følstad and Petter Bae Brandtzæg – Fakespeak in the Age of Large Language Models: A Comparative Study of Persuasion in AI-Generated and Human-Written Propaganda Narratives
- Zoltán Bánréti and László Hunyadi – Challenging AI: How Does an Artificial Intelligence Learn an Artificial Language?
- Tony Berber-Sardinha, Anderson Avila, Claudia Nunes Delfino, Hazem Amamou, Rogerio Yamada and Ru-Bing Chen – A Corpus-Based Approach to How Language Models React to Competing Ideologies
- Zétény Bujka, András Lukács, Péter Vedres and Anna Babarczy – Do Large Language Models Possess a Theory of Mind? A Comparative Evaluation Using the Strange Stories Paradigm
- Nicholas Catasso – Grammaticality, Acceptability and Variation in Human and AI Judgments
- Réka Dodé, Gábor Madarász, Mátyás Osváth, Kristóf Varga and Enikő Héja – Opportunities and Challenges in Classifying Hungarian Scientific Texts by Field of Science
- Cecilia Domingo, Paul Piwek, Svetlana Stoyanchev and Michel Wermelinger – Reference Processing in Pair-Programming Dialogue
- Dániel Golden – Large Language Models and the Philosophy of Language Games
- Christian Lang, Marco Gierke and Ngoc Duyen Tanja Tu – Orthographic Diversity in Large Language Models – A Case Study of Foreign Word Spelling in German
- Alexey Matyushin – LLMs as Tools for Drafting Ad Hoc Pharmaceutical Glossaries
- Bálint Levente Mórász and László János Laki – The Impact of Example-Selection Metrics on LLM-based Machine Translation
- Natalia Moskvina, Raquel Montero, Masaya Yoshida, Ferdy Hubers, Paolo Morosi, Walid Irhaymi, Jin Yan, Elena Pagliarini, Fritz Günther and Evelina Leivada – Language Comprehension in LLMs and Humans Across Languages
- Vasile Păiș, Maria Mitrofan, Verginica Barbu Mititelu and Dan Tufiș – Large Language Models as Multiword Expressions Annotators
- Noémi Prótár and Dávid Márk Nemeskey – Bridging the Gap Between Qualitative and Quantitative – How Linguistic Analysis Can Help Automatic Text Simplification Evaluation
- Irene Russo and Paola Vernillo – Seeing the Unsaid: Visualizing English Idioms with Text-to-Image Generation
- Tommaso Sgrizzi, Asya Zanollo and Cristiano Chesi – Syntactic Maps or Surface Hacks? Testing Restructuring Verb Order and Clitic Placement in LLMs
- Gábor Simon, Tímea Borbála Bajzát, Natabara Máté Gyöngyössy, Péter Gergő Molnár, Noémi Prótár and Balázs Indig – Large Language Models in Metaphor Identification: The Case of Presuicidal Interactions
- Kata Ágnes Szűcs, Noémi Vadász, Zsolt Záros and Zoltán Szatucsek – Integrating Large Language Models in Structural Data Processing in Hungarian Civil Registers
- Lili Tamás, Mariann Lengyel and Noémi Ligeti-Nagy – Language Models Achieve Human-Level Sarcasm Detection
- Ágoston Tóth – An LLM-Motivated Theory of Language
- István Üveges and Orsolya Ring – LLM-Supported Annotation Guide for Exploring the Digital Discourses of the Russia–Ukraine Conflict
- Boglárka Vermeki – Modelling Language Proficiency with Puli-BERT-Large: A Case Study on CEFR Classification in Hungarian Learner Texts
- Roberto Jiménez de la Torre and Carlos Á. Iglesias – A Modular LLM-Enhanced Agent-Based System for the Generation and Evaluation of Journalistic Interview Questions
- Botond Szemes and Kata Dobás – Digital Literary Memory in Central-Eastern Europe: Analysing Wikipedia with LLMs
- Zijian Győző Yang, Ágnes Bánfi, Réka Dodé, Gergő Ferenczi, Flóra Földesi, Enikő Héja, Mariann Lengyel, Gábor Madarász, Mátyás Osváth, Bence Sárossy, Kristóf Varga and Noémi Ligeti-Nagy – Toward Hungarian-Centric Language Understanding: Hungarian-Adapted PULI Large Language Models
- Katerina Zoi and Dimitrios Mysiloglou – Reconstructing the Past: How LLMs Reflect or Adapt to Papadiamantis’ Cultural Worldview