The size of the character N-grams to use to tokenize Asian text.
You must not use NGram with the SentenceBreaking configuration parameter. If you set NGram for Japanese, you can use SentenceBreakingOptions for normalization.
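For illustration, a short Python sketch of what character N-gram tokenization produces (the function `char_ngrams` is not part of the product; it only shows how a string breaks into overlapping N-grams of a given size):

```python
def char_ngrams(text, n=2):
    """Return the overlapping character n-grams of text, in order."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# With n=2, a four-character Japanese string yields three bigrams.
print(char_ngrams("東京都庁", 2))  # → ['東京', '京都', '都庁']
```

Because every adjacent pair of characters becomes a token, N-gram indexing avoids the need for dictionary-based word segmentation in languages written without spaces.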
Configuration Section: LanguageTypes or
In this example, all text is indexed as N-grams of two characters.
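A sketch of what such a configuration might look like, assuming the INI-style sections this documentation uses elsewhere (the section name is illustrative; only the `NGram` setting is the subject of this topic):

```
[LanguageTypes]
NGram=2
```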