Abstract
Machine Translation (MT) has transformed cross-lingual communication, yet it remains prone to errors, making thorough evaluation essential for improvement. Translation quality can be assessed both by humans and by automatic evaluation metrics. Human evaluation, though valuable, is costly and limited in scalability and consistency. Automatic metrics complement manual assessments, but this area still leaves considerable room for development. Although prior surveys of automatic evaluation metrics exist, most focus on resource-rich languages, leaving a significant gap in the evaluation of MT outputs across other language families.
To bridge this gap, we present a comprehensive survey covering MT meta-evaluation datasets, human assessment methods, and a diverse range of metrics. We categorize both human and automatic evaluation approaches and provide decision trees to guide the selection of an appropriate approach. In addition, we evaluate translated sentences across languages, domains, and linguistic features, and further meta-evaluate the metrics by correlating their scores with human judgments.
We critically examine the limitations and challenges inherent in current datasets and evaluation approaches. We then offer suggestions for future research aimed at improving MT evaluation, including the need for diverse and well-distributed datasets, the refinement of human evaluation methodologies, and the development of robust metrics that align closely with human judgments.