|
| | BLEU (const size_t maxOrder=4) |
| | Create an instance of BLEU class. More...
|
| |
| ElemType | BLEUScore () const |
| | Get the BLEU Score. More...
|
| |
| ElemType | BrevityPenalty () const |
| | Get the brevity penalty. More...
|
| |
| template<typename ReferenceCorpusType , typename TranslationCorpusType > |
| ElemType | Evaluate (const ReferenceCorpusType &referenceCorpus, const TranslationCorpusType &translationCorpus, const bool smooth=false) |
| | Computes the BLEU Score. More...
|
| |
| size_t | MaxOrder () const |
| | Get the maximum n-gram order (maximum number of tokens per n-gram). More...

| |
| size_t & | MaxOrder () |
| | Modify the maximum n-gram order (maximum number of tokens per n-gram). More...
|
| |
| PrecisionType const & | Precisions () const |
| | Get the precision for each n-gram order. More...
|
| |
| ElemType | Ratio () const |
| | Get the ratio of translation length to reference length. More...
|
| |
| size_t | ReferenceLength () const |
| | Get the value of reference length. More...
|
| |
| template<typename Archive > |
| void | serialize (Archive &ar, const unsigned int) |
| | Serialize the metric. More...
|
| |
| size_t | TranslationLength () const |
| | Get the value of translation length. More...
|
| |
template<typename ElemType = float, typename PrecisionType = std::vector<ElemType>>
class mlpack::metric::BLEU< ElemType, PrecisionType >
BLEU, or the Bilingual Evaluation Understudy, is an algorithm for evaluating the quality of text which has been machine translated from one natural language to another.
It can also be used to evaluate text generated for a suite of natural language processing tasks.
The BLEU score is calculated using the following formula:

BLEU = BP * exp(sum_{n=1}^{N} w_n * log(p_n))

where p_n is the modified n-gram precision of order n, w_n = 1/N is the uniform weight, N is the maximum n-gram order, and BP is the brevity penalty:

BP = 1 if c > r, and BP = exp(1 - r/c) otherwise,

with c the translation length and r the reference length. The BLEU score lies between 0 and 1.
- Template Parameters
-
| ElemType | Type of the quantities in BLEU, e.g. (long double, double, float). |
| PrecisionType | Container type for the precision of each n-gram order, e.g. (std::vector<float>, std::vector<double>, or any such boost or armadillo container). |
Definition at line 53 of file bleu.hpp.
| ElemType Evaluate (const ReferenceCorpusType &referenceCorpus, const TranslationCorpusType &translationCorpus, const bool smooth = false) |
Computes the BLEU Score.
- Template Parameters
-
| ReferenceCorpusType | Type of reference corpus. |
| TranslationCorpusType | Type of translation corpus. |
- Parameters
-
| referenceCorpus | An array of references or documents: each reference is an array of paragraphs, and each paragraph is an array of tokenized words/strings. For example: ``` refCorpus = {{{"this", "is", "paragraph", "1", "from", "document", "1"}, {"this", "is", "paragraph", "2", "from", "document", "1"}}, |
{{"this", "is", "paragraph", "1", "from", "document", "2"}, {"this", "is", "paragraph", "2", "from", "document", "2"}}} ``` |
- Parameters
-
| translationCorpus | An array of paragraphs which have been machine translated or generated for a natural language processing task; each paragraph is an array of tokenized words. For example: ``` transCorpus = {{"this", "is", "generated", "paragraph", "1"}, {"this", "is", "generated", "paragraph", "2"}} ``` |
| smooth | Whether or not to apply Lin and Och (2004) smoothing. |
- Returns
- The Evaluate method returns the BLEU score. It also computes the other BLEU statistics (brevity penalty, translation length, reference length, ratio, and precisions), which can be accessed through the corresponding accessor methods.