Add metadata: paper link, task category, license, languages

#2
by nielsr (HF Staff)
Files changed (1)
  1. README.md +10 -0
README.md CHANGED
@@ -139,10 +139,20 @@ configs:
  path: data/LC003ALP100EV_problems-*
  - split: LC003ALP100IV_problems
  path: data/LC003ALP100IV_problems-*
+ language:
+ - en
+ - ga
+ task_categories:
+ - question-answering
+ license: mit
+ tags:
+ - multilingual
  ---

  ## IRLBench: A Multi-modal, Culturally Grounded, Parallel Irish-English Benchmark for Open-Ended LLM Reasoning Evaluation

+ [Paper](https://huggingface.co/papers/2505.13498)
+
  ### Overview
  > Recent advances in Large Language Models (LLMs) have demonstrated promising knowledge and reasoning abilities, yet their performance in multilingual and low-resource settings remains underexplored. Existing benchmarks often exhibit cultural bias, restrict evaluation to text-only formats, rely on multiple-choice questions, and, more importantly, are limited for extremely low-resource languages. To address these gaps, we introduce IRLBench, presented in parallel English and Irish, a language classified as definitely endangered by UNESCO. Our benchmark consists of 12 representative subjects developed from the 2024 Irish Leaving Certificate exams, enabling fine-grained analysis of model capabilities across domains. By framing the task as long-form generation and leveraging the official marking scheme, it supports a comprehensive evaluation not only of correctness but also of language fidelity. Our extensive experiments on leading closed-source and open-source LLMs reveal a persistent performance gap between English and Irish: models produce valid Irish responses less than 80% of the time, and the best-performing model answers correctly 55.8% of the time in Irish compared to 76.2% in English. We release IRLBench and an accompanying evaluation codebase to enable future research on robust, culturally aware multilingual AI development.
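
The `configs:` entries touched by this hunk map split names to data files, so the splits remain loadable after the metadata is added. A minimal sketch with the `datasets` library follows; the repo id `ReliableAI/IRLBench` is an assumption (use the actual id of this dataset), as is the reading that the `EV`/`IV` suffixes mark the English and Irish versions of a subject, and a config name (defined above this hunk, not visible here) may also be required:

```python
# Minimal sketch: loading the IRLBench splits named in this PR's diff.
# Assumptions: the dataset id "ReliableAI/IRLBench" is hypothetical, and
# the EV/IV suffixes are read as English vs. Irish versions; check the
# dataset card for the real id and any required config name.
from datasets import load_dataset

english = load_dataset("ReliableAI/IRLBench", split="LC003ALP100EV_problems")
irish = load_dataset("ReliableAI/IRLBench", split="LC003ALP100IV_problems")

# Inspect one parallel problem from each language.
print(english[0])
print(irish[0])
```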