Our team developed and released I-measure

I-measure is a new measure for evaluating grammatical error detection and correction.

How does it work?

I-measure evaluates grammatical error detection and correction by aligning a system hypothesis both with the original text and with one or more reference corrections thereof. This allows us to quantify the beneficial or deleterious effect of applying the corrections suggested by the system.
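As a rough illustration of the alignment idea, each aligned position can be classified by how the hypothesis relates to the original text and the reference. The sketch below is a hypothetical, simplified version that assumes pre-tokenised, equal-length texts (the actual implementation computes an optimal three-way alignment and handles insertions, deletions, and multiple references):

```python
def classify_tokens(original, reference, hypothesis):
    """Toy position-by-position classification (illustrative only).

    Assumes the three token lists are already aligned and equal in length;
    the real I-measure derives the alignment itself.
    """
    counts = {"TP": 0, "FP": 0, "FN": 0, "TN": 0, "FPN": 0}
    for orig, ref, hyp in zip(original, reference, hypothesis):
        if orig == ref:                  # no error at this position
            if hyp == orig:
                counts["TN"] += 1        # correct text left alone
            else:
                counts["FP"] += 1        # new error introduced
        else:                            # the reference corrects an error here
            if hyp == ref:
                counts["TP"] += 1        # error corrected as in the reference
            elif hyp == orig:
                counts["FN"] += 1        # error missed
            else:
                counts["FPN"] += 1       # changed, but not to the reference
    return counts
```

For example, a system that fixes one error but damages a previously correct token would score one true positive and one false positive, so its net effect on the text could be neutral or negative.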


The Measure

The I-measure is positive if the text quality has improved, negative if it has deteriorated, and zero if it remains the same, either because no changes were made or because the positive impact of the errors the system corrected is offset by the negative impact of the new errors it introduced.
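One way to obtain this sign behaviour is to compare an accuracy score for the system output against that of a do-nothing baseline (leaving the text unchanged) and normalise the difference. The sketch below is a hypothetical normalisation assuming generic accuracies in [0, 1) for the baseline; the paper defines the exact weighted accuracy that the real measure uses:

```python
def i_score(acc_sys, acc_base):
    """Normalised improvement score with the sign behaviour described above.

    Returns a value > 0 if the system beats the do-nothing baseline,
    < 0 if it is worse, and 0.0 if they are equal. Illustrative only;
    assumes acc_base < 1 so the positive branch is well defined.
    """
    if acc_sys > acc_base:
        return (acc_sys - acc_base) / (1.0 - acc_base)  # scaled into (0, 1]
    if acc_sys < acc_base:
        return acc_sys / acc_base - 1.0                 # scaled into [-1, 0)
    return 0.0
```

Under this normalisation a perfect system scores 1, an unchanged text scores 0, and a system that only degrades the text scores down to -1.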


Research Details

Felice and Briscoe’s research paper “Towards a standard evaluation method for grammatical error detection and correction” describes the I-measure in more detail.


The I-measure is available to download from GitHub; the repository contains a Python implementation of I-measure.