{"id":667,"date":"2025-10-29T19:06:24","date_gmt":"2025-10-29T19:06:24","guid":{"rendered":"https:\/\/wolala.nytud.hu\/?page_id=667"},"modified":"2025-11-24T12:01:07","modified_gmt":"2025-11-24T12:01:07","slug":"programme","status":"publish","type":"page","link":"https:\/\/wolala.nytud.hu\/2025\/programme\/","title":{"rendered":"Programme"},"content":{"rendered":"\n<h3 class=\"wp-block-heading alignwide has-text-align-left\" style=\"margin-top:var(--wp--preset--spacing--10);margin-bottom:var(--wp--preset--spacing--10)\">Conference Programme Details<\/h3>\n\n\n\n<hr class=\"wp-block-separator alignwide has-alpha-channel-opacity is-style-default\"\/>\n\n\n\n<div class=\"wp-block-group alignwide has-global-padding is-content-justification-left is-layout-constrained wp-container-core-group-is-layout-12dd3699 wp-block-group-is-layout-constrained\">\n<p class=\"has-text-align-left\">The conference will run from Thursday 20 November, 9:30 CET, until Friday 21 November, 17:30 CET.<\/p>\n\n\n\n<hr class=\"wp-block-separator alignwide has-alpha-channel-opacity is-style-default\"\/>\n<\/div>\n\n\n\n<h4 class=\"wp-block-heading alignwide has-text-align-left\">Day One<\/h4>\n\n\n\n<figure class=\"wp-block-table alignwide my-schedule-table is-style-stripes\" style=\"margin-top:0;margin-bottom:0\"><table><tbody><tr><td class=\"has-text-align-left\" data-align=\"left\"><strong>Time<\/strong>   <\/td><td><strong>Thursday 20 November 2025<\/strong><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">09:00 &#8211; 09:30<\/td><td>Registration<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">09:30 &#8211; 09:45<\/td><td>Welcome remarks<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\"><strong>09:45 &#8211; 10:45<\/strong><\/td><td><strong>Keynote by Alessandro Lenci<\/strong><br><br><strong>Chair:<\/strong> <strong>No\u00e9mi Ligeti-Nagy<\/strong><br><br><em>Beyond prediction: What LLMs miss about meaning and 
why<\/em><br><br><details><br><summary>Abstract<\/summary>Large Language Models (LLMs) have achieved remarkable fluency, generating text that often feels indistinguishable from human writing. Yet beneath this surface competence lies a profound question: do these systems truly capture meaning? This talk explores the conceptual and cognitive limits of current LLMs, focusing on the distinction between statistical prediction and semantic representation. In particular, I will ask why pattern recognition alone cannot yield genuine semantic understanding.<\/details><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">10:45 &#8211; 11:15<\/td><td><em>Coffee break<\/em><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\"><br><br><br><br>11:15 &#8211; 11:35<\/td><td><strong>Section 1:<\/strong> <strong>Human-LLM comprehension<\/strong><br><br><strong>Chair: G\u00e1bor Pr\u00f3sz\u00e9ky<\/strong><br><br><strong>Natalia Moskvina<\/strong>, Raquel Montero, Masaya Yoshida, Ferdy Hubers, Paolo Morosi, Walid Irhaymi, Jin Yan, Elena Pagliarini, Fritz G\u00fcnther and Evelina Leivada \u2013 <em>Language comprehension in LLMs and humans across languages<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_paper_31-1.pdf\" data-type=\"link\" data-id=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_paper_31-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">11:35 &#8211; 11:55<\/td><td>Z\u00e9t\u00e9ny Bujka, Andr\u00e1s Luk\u00e1cs, P\u00e9ter Vedres and <strong>Anna Babarczy<\/strong> \u2013 <em>Do Large Language Models possess a theory of mind? 
A comparative evaluation using the Strange Stories paradigm <\/em>(<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa_2025_abstract_ToM_withAuthors.pdf\" data-type=\"link\" data-id=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_paper_13-2.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">11:55 &#8211; 12:15<\/td><td>Zolt\u00e1n B\u00e1nr\u00e9ti and <strong>L\u00e1szl\u00f3 Hunyadi <\/strong>\u2013 <em>Challenging AI: How does an Artificial Intelligence learn an artificial language? <\/em>(<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/Challenging-AI.final-abstract.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">12:15 &#8211; 12:35<\/td><td><strong>Christian Lang<\/strong>, Marco Gierke and Ngoc Duyen Tanja Tu<strong> <\/strong>\u2013 <em>Orthographic diversity in Large Language Models \u2013 A case study of foreign word spelling in German<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/Orth_Var_LLMs_deanonymized-1.pdf\" data-type=\"link\" data-id=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_paper_20-2.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">12:35 &#8211; 12:55<\/td><td><strong>Lili Tam\u00e1s<\/strong>, Mariann Lengyel and No\u00e9mi Ligeti-Nagy \u2013 <em>Language Models achieve human-level sarcasm detection<\/em> (<a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/Sarcasm_WoLaLa_Extended_Abstract.pdf\" data-type=\"link\" data-id=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_paper_29-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener 
nofollow\"><em>abstract<\/em><\/a>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">12:55 &#8211; 14:00<\/td><td><em>Lunch<\/em><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\"><strong>14:00 &#8211; 15:00<\/strong><\/td><td><strong>Keynote by Erhard Hinrichs<\/strong><br><br><strong>Chair:<\/strong> <strong>Veronika Lipp<\/strong><br><br><em>The added value of LLMs for lexicography and for lexical semantics<\/em><br><br><details><br><summary>Abstract<\/summary>With the availability of deep learning methods, LLMs, and generative AI, the question has been posed whether dictionaries &#8212; and lexicographic resources more generally &#8212; can be created by purely automatic means. This hypothesis has been identified as \u201cthe end of lexicography\u201d by, among others, Gilles-Maurice de Schryver and David Joffe. On the basis of three use cases from digital lexicography, I want to examine this hypothesis and draw some more general conclusions about the added value of LLMs and generative AI for lexicography and for lexical semantics.<\/details><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\"><br><br><br><br>15:00 &#8211; 15:20<\/td><td><strong>Section 2: Idioms, metaphors and terminology<\/strong><br><br><strong>Chair:<\/strong> <strong>Marko Tadi\u0107<\/strong><br><br>Vasile P\u0103i\u0219, Maria Mitrofan, <strong>Verginica Barbu Mititelu<\/strong> and Dan Tufi\u0219 \u2013 <em>Large Language Models as multiword expressions annotators<\/em> (<a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_submission_15_paper_v1.pdf\" data-type=\"link\" data-id=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_submission_15_paper_v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><em>abstract<\/em><\/a>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">15:20 &#8211; 15:40<\/td><td><strong>Irene Russo<\/strong> and Paola Vernillo \u2013 
<em>Seeing the unsaid: Visualizing English idioms with text-to-image generation<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/wolala_abstract_Russo_Vernillo.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">15:40 &#8211; 16:00<\/td><td><strong>G\u00e1bor Simon<\/strong>, T\u00edmea Borb\u00e1la Bajz\u00e1t, <strong>Natabara M\u00e1t\u00e9 Gy\u00f6ngy\u00f6ssy<\/strong>, P\u00e9ter Gerg\u0151 Moln\u00e1r, <strong>No\u00e9mi Pr\u00f3t\u00e1r <\/strong>and Bal\u00e1zs Indig \u2013 <em>Large Language Models in metaphor identification: The case of presuicidal interactions<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa_2025_MetaId.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">16:00 &#8211; 16:20<\/td><td><strong>Alexey Matyushin<\/strong> \u2013 <em>LLMs as tools for drafting ad hoc pharmaceutical glossaries<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa2025_Submission_Matyushin.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">16:20 &#8211; 16:50<\/td><td><em>Coffee break<\/em><\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\"><br><br><br><br>16:50 &#8211; 17:10<\/td><td><strong>Section 3: Language games and power plays<\/strong><br><br><strong>Chair:<\/strong> <strong>Martin Wynne<\/strong><br><br><strong>\u00c1goston T\u00f3th<\/strong> \u2013 <em>An LLM-motivated theory of language<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/TothAgoston2025-abstract.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" 
data-align=\"left\">17:10 &#8211; 17:30<\/td><td><strong>D\u00e1niel Golden<\/strong> \u2013 <em>Large Language Models and the philosophy of language games<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_paper_27-2.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">17:30 &#8211; 17:50<\/td><td><strong>Silje Susanne Alvestad<\/strong>, Nele Poldvere, Asbj\u00f8rn F\u00f8lstad and Petter Bae Brandtz\u00e6g \u2013 <em>Fakespeak in the age of Large Language Models: A comparative study of persuasion in AI-generated and human-written propaganda narratives<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_submission_24_paper_v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">17:50 &#8211; 18:10<\/td><td><strong>Katerina Zoi <\/strong>and <strong>Dimitrios Mysiloglou<\/strong> \u2013 <em>Reconstructing the past: How LLMs reflect or adapt to Papadiamantis\u2019 cultural worldview<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_paper_26-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">18:10 &#8211; 18:15<\/td><td>Closing remarks<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">18:30 &#8211;<br><\/td><td><em>Conference dinner<\/em><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h4 class=\"wp-block-heading alignwide\">Day Two<\/h4>\n\n\n\n<figure class=\"wp-block-table alignwide my-schedule-table-2 is-style-stripes\"><table><tbody><tr><td><strong>Time<\/strong><\/td><td><strong>Friday 21 November 2025<\/strong><\/td><\/tr><tr><td><strong>09:30 &#8211; 10:30<\/strong><\/td><td><strong>Keynote by Andr\u00e1s 
Kornai<\/strong><br><br><strong>Chair:<\/strong> <strong>D\u00e1vid M\u00e1rk Nemeskey<\/strong><br><br><em>The linguistic power of LLMs<\/em> (<a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/wolala-2.pdf\">slides<\/a>)<br><br><details><br><summary>Abstract<\/summary>Opinion on the power of LLMs is on a broad spectrum. At the high end we find the view that these models are so good that we no longer need to deal with messy humans and can do all sorts of exciting linguistics by inspecting LLMs. At the low end we find the view that LLMs are stochastic parrots that cannot possibly have any bearing on how natural language works in humans. In this talk we approach the matter from the perspective of formal language theory, and conclude that there is nothing in natural language that stands in the way of treating LLMs as full and faithful models of human linguistic competence.<\/details><\/td><\/tr><tr><td>10:30 &#8211; 11:00<\/td><td><em>Coffee Break<\/em><\/td><\/tr><tr><td><br><br><br><br>11:00 &#8211; 11:20<\/td><td><strong>Section 4: Chat, translate, evaluate<\/strong><br><br><strong>Chair:<\/strong> <strong>Dan Tufi\u0219<\/strong><br><br><strong>No\u00e9mi Pr\u00f3t\u00e1r<\/strong> and D\u00e1vid M\u00e1rk Nemeskey \u2013 <em>Bridging the gap between qualitative and quantitative \u2013 How linguistic analysis can help automatic text simplification evaluation<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa2025_abstract.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>11:20 &#8211; 11:40<\/td><td><strong>Cecilia Domingo<\/strong>, Paul Piwek, Svetlana Stoyanchev and Michel Wermelinger \u2013 <em>Reference processing in pair-programming dialogue<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_submission_16_paper_v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener 
nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>11:40 &#8211; 12:00<\/td><td><strong>B\u00e1lint Levente M\u00f3r\u00e1sz<\/strong> and L\u00e1szl\u00f3 J\u00e1nos Laki \u2013 <em>The impact of example-selection metrics on LLM-based machine translation<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_paper_19_with_name-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>12:00 &#8211; 12:20<\/td><td>Roberto Jim\u00e9nez de la Torre and <strong>Carlos \u00c1. Iglesias<\/strong> \u2013 <em>A modular LLM-enhanced agent-based system for the generation and evaluation of journalistic interview questions<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_submission_10_paper_v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>12:20 &#8211; 12:40<\/td><td><strong>Tommaso Sgrizzi<\/strong>, Asya Zanollo and Cristiano Chesi \u2013 <em>Syntactic maps or surface hacks? Testing restructuring verb order and clitic placement in LLMs<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/wolala-abstract.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>12:40 &#8211; 13:40<\/td><td><em>Lunch<\/em><\/td><\/tr><tr><td><br><br><br><br>13:40 &#8211; 14:00<\/td><td><strong>Section 5: LLMs in practice<\/strong><br><br><strong>Chair:<\/strong> <strong>Tam\u00e1s V\u00e1radi<\/strong><br><br><strong>Botond Szemes<\/strong> and <strong>Kata Dob\u00e1s<\/strong> \u2013 <em>Digital literary memory in Central-East-Europe. 
Analysing Wikipedia with LLMs<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/WoLaLa-2025_submission_8_paper_v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>14:00 &#8211; 14:20<\/td><td><strong>R\u00e9ka Dod\u00e9<\/strong>, G\u00e1bor Madar\u00e1sz, M\u00e1ty\u00e1s Osv\u00e1th, Krist\u00f3f Varga and Enik\u0151 H\u00e9ja \u2013 <em>Opportunities and challenges in classifying Hungarian scientific texts by field of science<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/wolala.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>14:20 &#8211; 14:40<\/td><td><strong>Kata \u00c1gnes Sz\u0171cs<\/strong>, <strong>No\u00e9mi Vad\u00e1sz<\/strong>, Zsolt Z\u00e1ros, Emese Varga and Zolt\u00e1n Szatucsek<strong> <\/strong>\u2013 <em>Integrating Large Language Models in structural data processing in Hungarian civil registers<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/wolala_absztrakt_2025-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>14:40 &#8211; 15:00<\/td><td><strong>Bogl\u00e1rka Vermeki<\/strong> \u2013 <em>Modelling language proficiency with Puli-BERT-Large: A case study on CEFR classification in Hungarian learner texts<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/Abstract_WoLaLa_2025_1_VB.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>15:00 &#8211; 15:20<\/td><td>Zijian Gy\u0151z\u0151 Yang, \u00c1gnes B\u00e1nfi, R\u00e9ka Dod\u00e9, Gerg\u0151 Ferenczi, Fl\u00f3ra F\u00f6ldesi, Enik\u0151 H\u00e9ja, Mariann Lengyel, G\u00e1bor Madar\u00e1sz, <strong>M\u00e1ty\u00e1s Osv\u00e1th<\/strong>, Bence S\u00e1rossy, Krist\u00f3f Varga and No\u00e9mi Ligeti-Nagy \u2013 <em>Toward Hungarian-centric language understanding: 
Hungarian-adapted PULI Large Language Models<\/em> (<em><a href=\"https:\/\/wolala.nytud.hu\/wp-content\/uploads\/2025\/11\/wolala___PULI_hun_full.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">abstract<\/a><\/em>)<\/td><\/tr><tr><td>15:20 &#8211; 15:50<\/td><td><em>Coffee Break<\/em><\/td><\/tr><tr><td><strong>15:50 &#8211; 16:45<\/strong><\/td><td><strong>Panel: <\/strong><em>Do LLMs \u201cunderstand\u201d language?<\/em><br><br><strong>Moderator:<\/strong> <strong>Csaba Pl\u00e9h <\/strong>(Central European University)<br> <br><strong>Panellists: Erhard Hinrichs<\/strong> (University of T\u00fcbingen), <strong>Andr\u00e1s Kornai <\/strong>(HUN-REN SZTAKI, BME), <strong>Alessandro Lenci<\/strong> (University of Pisa) and <strong>Tam\u00e1s V\u00e1radi <\/strong>(ELTE Research Centre for Linguistics)<strong><br><\/strong><\/td><\/tr><tr><td>16:45 &#8211; 17:00<\/td><td>Closing remarks, awards<\/td><\/tr><\/tbody><\/table><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Conference Programme Details The conference will run from Thursday 20 November, 9:30 CET, until Friday 21 November, 17:30 CET. 
Day One Time Thursday 20 November 2025 09:00 &#8211; 09:30 Registration 09:30 &#8211; 09:45 Welcome remarks 09:45 &#8211; 10:45 Keynote by Alessandro Lenci Chair: No\u00e9mi Ligeti-Nagy Beyond prediction: What LLMs miss about meaning and why Abstract [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-667","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/wolala.nytud.hu\/2025\/wp-json\/wp\/v2\/pages\/667","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wolala.nytud.hu\/2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/wolala.nytud.hu\/2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/wolala.nytud.hu\/2025\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wolala.nytud.hu\/2025\/wp-json\/wp\/v2\/comments?post=667"}],"version-history":[{"count":163,"href":"https:\/\/wolala.nytud.hu\/2025\/wp-json\/wp\/v2\/pages\/667\/revisions"}],"predecessor-version":[{"id":939,"href":"https:\/\/wolala.nytud.hu\/2025\/wp-json\/wp\/v2\/pages\/667\/revisions\/939"}],"wp:attachment":[{"href":"https:\/\/wolala.nytud.hu\/2025\/wp-json\/wp\/v2\/media?parent=667"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}