Revisiting non-English text simplification: A unified multilingual benchmark

MJ Ryan, T Naous, W Xu - arXiv preprint arXiv:2305.15678, 2023 - arxiv.org
Recent advancements in high-quality, large-scale English resources have pushed the
frontier of English Automatic Text Simplification (ATS) research. However, less work has …

Distill or annotate? Cost-efficient fine-tuning of compact models

J Kang, W Xu, A Ritter - arXiv preprint arXiv:2305.01645, 2023 - arxiv.org
Fine-tuning large models is highly effective; however, inference can be expensive and
produces carbon emissions. Knowledge distillation has been shown to be a practical …
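(The snippet names knowledge distillation without showing its mechanics. As a rough illustration only, not the procedure from this paper, a minimal PyTorch-style sketch of a standard distillation loss might look like the following; the function name, temperature T, and mixing weight alpha are illustrative assumptions, and student/teacher logits plus gold labels are assumed to be available.)

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Generic sketch of Hinton-style distillation, not the paper's method.
        # Soft targets: KL divergence between temperature-scaled teacher and
        # student distributions, rescaled by T^2 as is conventional.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: ordinary cross-entropy against the gold labels.
        hard = F.cross_entropy(student_logits, labels)
        # alpha balances imitating the teacher against fitting the labels.
        return alpha * soft + (1 - alpha) * hard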

ReadMe++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment

T Naous, MJ Ryan, A Lavrouk, M Chandra… - arXiv preprint arXiv …, 2023 - arxiv.org
We present a systematic study and comprehensive evaluation of large language models for
automatic multilingual readability assessment. In particular, we construct ReadMe++, a …

Optimizing Resource Allocation with Data-Driven Approaches in Heterogeneous Computing

A Nair, V Choudhary, S Desai, M Gupta, K Mehta… - researchgate.net
Heterogeneous computing environments present unique challenges for resource allocation,
often resulting in underutilized resources and performance bottlenecks. In this paper, we …