Machine Translation Evaluation: Manual Versus Automatic—A Comparative Study

dc.contributor.author Maurya, Kaushal Kumar
dc.contributor.author Ravindran, Renjith P.
dc.contributor.author Anirudh, Ch Ram
dc.contributor.author Murthy, Kavi Narayana
dc.date.accessioned 2022-03-27T05:58:22Z
dc.date.available 2022-03-27T05:58:22Z
dc.date.issued 2020-01-01
dc.description.abstract The quality of machine translation (MT) is best judged by humans well versed in both the source and target languages. However, automatic techniques are often used instead, as they are much faster, cheaper, and language-independent. The goal of this paper is to check for correlation between manual and automatic evaluation, specifically in the context of Indian languages. To the extent that automatic evaluation methods correlate with manual evaluations, we can get the best of both worlds. In this paper, we perform a comparative study of automatic evaluation metrics (BLEU, NIST, METEOR, TER, and WER) against the manual evaluation metric of adequacy, for English-Hindi translation. We also attempt to estimate the manual evaluation score of a given MT output from its automatic evaluation score. The data for the study was sourced from the Workshop on Statistical Machine Translation (WMT14).
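The comparison described in the abstract can be set up as a correlation-and-regression exercise. Below is a minimal, hypothetical Python sketch, not the paper's actual pipeline: the sentence-level BLEU and adequacy values are invented for illustration, and the other metrics the paper studies (NIST, METEOR, TER, WER) would slot in the same way.

```python
# Minimal sketch: correlate an automatic MT metric with manual adequacy
# judgements, then fit a simple linear estimator of adequacy from the
# automatic score. All values below are illustrative, not WMT14 data.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-sentence automatic scores (e.g., sentence-level BLEU)
# and manual adequacy judgements on a 1-5 scale.
bleu = np.array([0.12, 0.35, 0.28, 0.51, 0.44, 0.19])
adequacy = np.array([2.0, 3.5, 3.0, 4.5, 4.0, 2.5])

# Correlation between automatic and manual evaluation.
r, p = pearsonr(bleu, adequacy)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")

# Least-squares fit: predict the manual score from the automatic one,
# mirroring the paper's idea of estimating manual evaluation scores.
slope, intercept = np.polyfit(bleu, adequacy, 1)
print(f"estimated adequacy = {slope:.2f} * BLEU + {intercept:.2f}")
```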
dc.identifier.citation Advances in Intelligent Systems and Computing, vol. 1079
dc.identifier.issn 2194-5357
dc.identifier.uri https://doi.org/10.1007/978-981-15-1097-7_45
dc.identifier.uri http://link.springer.com/10.1007/978-981-15-1097-7_45
dc.identifier.uri https://dspace.uohyd.ac.in/handle/1/8972
dc.subject Automatic metrics
dc.subject Machine translation (MT)
dc.subject Manual metrics
dc.subject MT evaluation
dc.title Machine Translation Evaluation: Manual Versus Automatic—A Comparative Study
dc.type Book Series. Conference Paper
Files
License bundle: license.txt (1.71 KB, Plain Text)