Finnish POS tagging part 2

Previously I wrote about Building a Finnish POS tagger. This post elaborates a bit on the OpenNLP training that I skimmed over last time, shares the code for it, and runs some additional tests.

I am again using the Finnish Treebank to get 4.4M pre-tagged sentences to train on. I start with a Python script that transforms the Treebank XML into a format suitable for OpenNLP. A short example of the output is below, in the format OpenNLP takes as input (at least in the configuration I used): one sentence per line, each word paired with its POS tag, word and tag separated by an underscore (“_”).

  • 1_Num artikla_N Nimi_N ja_CC tarkoitus_N
  • Hankintakeskukseen_N sovelletaan_V perustamissopimuksen_N ja_CC tämän_Pron perussäännön_N määräyksiä_N ._Punct
  • Hankintakeskuksen_N toiminnan_N kestolle_N ei_V aseteta_V määräaikaa_N ._Punct

The tags were assigned by the human experts who provide the Treebank. The Python script parses the whole Treebank file and generates output like the above.
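I won’t paste the whole script here, but the core of the transformation is roughly the following sketch. The element and attribute names (“sentence”, “w”, “pos”) and the file names are placeholder assumptions for illustration, not the actual FinnTreeBank schema; the real script is on GitHub.

```python
import xml.etree.ElementTree as ET

def treebank_to_opennlp(xml_path, out_path):
    """Stream the Treebank XML and write one word_tag sentence per line."""
    with open(out_path, "w", encoding="utf-8") as out:
        # iterparse streams the file, which matters with 4.4M sentences
        for _, elem in ET.iterparse(xml_path):
            if elem.tag == "sentence":
                tokens = ["%s_%s" % (w.text.strip(), w.get("pos"))
                          for w in elem.iter("w")]
                out.write(" ".join(tokens) + "\n")
                elem.clear()  # free the finished subtree to keep memory use flat

treebank_to_opennlp("ftb.xml", "treebank-opennlp.train")
```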

The code to train the OpenNLP tagger is on GitHub, or you can use OpenNLP’s command-line options.
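For reference, the command-line route with the stock opennlp launcher looks roughly like this (the model and data file names are my own; check the OpenNLP documentation for the exact options in your version):

```
opennlp POSTaggerTrainer -model fi-pos-maxent.bin -lang fi -data treebank-opennlp.train -encoding UTF-8
echo "juodaan jaffaa ladassa" | opennlp POSTagger fi-pos-maxent.bin
```

The second line tags whitespace-separated sentences from standard input with the trained model.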

Previously I described test results on the Treebank data with a train/test split, showing reasonably good accuracy. But how well does it work in practice on some simple test sentences? Does it matter how the training data and the tagger input are pre-processed? And what do I mean by pre-processed?

Stemming and lemmatization are two basic transformations often used in NLP. Stemming cuts the ending of a word to get a simplified form that matches all the different forms of the word; the result is not always a real “word”. For example, “argue”, “arguing”, and “argus” could all stem to “argu”. Lemmatization, on the other hand, produces more “real” words (the Wikipedia link describes it as producing the dictionary base forms).

A related question that came to my mind: does it matter if you stem/lemmatize the words you give as input to the tagger for training and testing? I could not find a good answer on Google. There is one question on Stack Overflow about stemming vs. POS tagging, and the response seems to be not an answer but riddles… Who would’ve guessed, from the machine learning community? 😛

Well, the discussion and other answers on the Stack Overflow page seem to suggest not stemming before POS tagging. And the Wikipedia pages on stemming and lemmatization describe the difference as lemmatization requiring context (the POS tag) to work properly. Which makes sense, since words can have multiple meanings depending on their context (part of speech). So we should probably conclude that it is better not to stem or lemmatize before training a POS tagger (or before using it, I guess). But common sense never stopped us before, so let’s try it.

To see for myself, I tried to train and use the tagger with some different configurations:

  • Tagger: Plain = Takes words in the sentence and tries to POS tag them as is. Not stemmed, not lemmatized, just as they are.
  • Tagger: Voikko = Takes the words in the sentence, converts them to their baseform (lemma?), and reconstructs the sentence from the baseformed words. You can see the actual effect in the output column of the results table below.
  • Trained on: 100k = The tagger was trained on the first 100k sentences in the Finnish Treebank.
  • Trained on: 4M = The tagger was trained on the first 4M sentences in the Finnish Treebank.
  • Trained on: basecol = The tagger was trained on baseform column of the treebank.
  • Trained on: col1 = The tagger was trained on column 1 of the treebank, containing the unprocessed words (no baseforming or anything else).
  • Trained on: voikko = The tagger was trained on column 1 of the treebank, but before training all words in the sentence were baseformed using Voikko. Similar to “Tagger: Voikko” but for training data.
  • Input: The input sentence fed to the tagger. This was split into an array on whitespace, since the OpenNLP tagger takes a sentence as an array of words.
  • Output: The output from the tagger, formatted as word_tag. Word = the word given to the tagger as input for that part of the sentence, tag = the POS tag the tagger assigned to that word.

So the Treebank actually has a “baseform” column, described in the Treebank docs as containing the baseform of each word. However, I do not have the tool that was used to baseform the Treebank words. Maybe it was done manually by the people who also tagged the sentences; I don’t know. I use Voikko as my tool to baseform words.
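To show what that looks like in practice, here is a minimal sketch of the baseforming using the libvoikko Python bindings (it assumes a Finnish Voikko dictionary is installed; the example output is illustrative). Taking the first analysis blindly is a shortcut I come back to in the notes further below:

```python
from libvoikko import Voikko

v = Voikko("fi")  # requires a Finnish Voikko dictionary to be installed

def baseform_sentence(sentence):
    out = []
    for word in sentence.split():  # same naive whitespace split as elsewhere
        analyses = v.analyze(word)
        # just take the first analysis; ambiguous words like "keksiä"
        # can get the wrong baseform this way (see the notes below)
        out.append(analyses[0]["BASEFORM"] if analyses else word)
    return out

print(baseform_sentence("voi on maukasta leivän päällä"))
# something like: ['voi', 'olla', 'maukas', 'leipä', 'pää']
```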

I still wanted to try using the baseform column in the Treebank, so I ran all the words in the Treebank (both the baseform column and col1) through Voikko to see if it would recognize them, recorded all the misses, and sorted them from highest occurrence count to lowest. This showed me that the Treebank has its own “oddities”. Some examples:

  • “merkittävä” becomes “merkittää”
  • “päivästä” becomes “päivänen”
  • “työpaikkoja” becomes “työ#paikko”

These are just a few examples of frequent and odd-looking baseforms in the Treebank. None of them, in my opinion, map directly to understandable Finnish words. And Voikko gives different results (different baseforms for “merkittävä”, “päivästä”, etc.), so the two baseforming approaches would not match. I wanted results I felt I could show to people who would understand what they meant. On the other hand, some of the words in the Treebank are quite domain-specific and valid, but Voikko does not recognize them. Common Treebank examples of this include “CN-koodeihin”, “CN-koodiin”, “ETY-tyyppihyväksynnän”, “ETY-tyyppihyväksyntään”, and “läsnäollessa”. The Treebank has valid baseforms for these, but Voikko does not recognize them.
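The check itself was simple. Roughly like this, with a hypothetical input file containing one whitespace-separated sentence of Treebank words per line:

```python
from collections import Counter
from libvoikko import Voikko

v = Voikko("fi")
misses = Counter()

# hypothetical file: one whitespace-separated sentence of treebank words per line
with open("treebank-baseforms.txt", encoding="utf-8") as f:
    for line in f:
        for word in line.split():
            if not v.analyze(word):  # empty result = Voikko does not recognize it
                misses[word] += 1

# highest occurrence count first
for word, count in misses.most_common(50):
    print(count, word)
```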

So I just tried it with the different configuration versions above, as illustrated in the results table below:

| Tagger | Trained on | Input | Output |
|--------|------------|-------|--------|
| Plain | 100k basecol | junassa on vessa | junassa_N on_A vessa_N |
| Plain | 100k basecol | tuli tuli tulipesästä | tuli_N tuli_N tulipesästä_V |
| Plain | 100k basecol | voi on maukasta leivän päällä | voi_N on_A maukasta_N leivän_PrfPrc päällä_Abbr |
| Plain | 100k basecol | juodaan jaffaa ladassa | juodaan_Unkwn jaffaa_Punct ladassa_Unkwn |
| Plain | 100k basecol | liika vesi vesitti kilpailun | liika_N vesi_N vesitti_N kilpailun_Abbr |
| Plain | 100k basecol | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syynä_N ovat_Unkwn todennäköisimmin_Adv rutiininomaiset_Unkwn tietokannan_Unkwn ylläpitotoimet._Punct |
| Plain | 100k basecol | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct miksi_Unkwn sinulla_V on_Unkwn tuollaisia_Unkwn lauseita_Unkwn tuossa?_Punct |
| Plain | 100k basecol | no kun jotain piti keksiä :( | no_Interj kun_CS jotain_Adv piti_N keksiä_V :(_Punct |
| Plain | 100k col1 | junassa on vessa | junassa_N on_V vessa_N |
| Plain | 100k col1 | tuli tuli tulipesästä | tuli_V tuli_V tulipesästä_N |
| Plain | 100k col1 | voi on maukasta leivän päällä | voi_V on_V maukasta_N leivän_N päällä_N |
| Plain | 100k col1 | juodaan jaffaa ladassa | juodaan_V jaffaa_CC ladassa_N |
| Plain | 100k col1 | liika vesi vesitti kilpailun | liika_N vesi_N vesitti_V kilpailun_N |
| Plain | 100k col1 | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syynä_N ovat_V todennäköisimmin_Adv rutiininomaiset_A tietokannan_N ylläpitotoimet._Punct |
| Plain | 100k col1 | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct miksi_N sinulla_N on_V tuollaisia_A lauseita_N tuossa?_Punct |
| Plain | 100k col1 | no kun jotain piti keksiä :( | no_Abbr kun_CS jotain_Pron piti_V keksiä_A :(_Punct |
| Voikko | 100k voikko | junassa on vessa | juna_N olla_V vessa_N |
| Voikko | 100k voikko | tuli tuli tulipesästä | tuli_V tuli_N tulipesä_N |
| Voikko | 100k voikko | voi on maukasta leivän päällä | voi_V olla_V maukas_N leipä_N pää_N |
| Voikko | 100k voikko | juodaan jaffaa ladassa | juoda_V jaffa_CC lada_V |
| Voikko | 100k voikko | liika vesi vesitti kilpailun | liika_Adv vesi_N vesittää_V kilpailu_N |
| Voikko | 100k voikko | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syy_N olla_V todennäköinen_Adv rutiininomainen_A tietokanta_N ylläpitotoimet._Punct |
| Voikko | 100k voikko | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct mikä_Pron sinä_N olla_V tuollainen_A lause_N tuossa?_Punct |
| Voikko | 100k voikko | no kun jotain piti keksiä :( | no_Interj kun_CS jokin_Pron pitää_V keksi_N :(_Punct |
| Voikko | 100k basecol | junassa on vessa | juna_N olla_V vessa_Unkwn |
| Voikko | 100k basecol | tuli tuli tulipesästä | tuli_N tuli_N tulipesä_N |
| Voikko | 100k basecol | voi on maukasta leivän päällä | voi_N olla_V maukas_N leipä_N pää_N |
| Voikko | 100k basecol | juodaan jaffaa ladassa | juoda_PrsPrc jaffa_CC lada_PrsPrc |
| Voikko | 100k basecol | liika vesi vesitti kilpailun | liika_N vesi_N vesittää_V kilpailu_N |
| Voikko | 100k basecol | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syy_N olla_V todennäköinen_Adv rutiininomainen_A tietokanta_N ylläpitotoimet._Punct |
| Voikko | 100k basecol | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct mikä_Pron sinä_Pron olla_V tuollainen_A lause_N tuossa?_Punct |
| Voikko | 100k basecol | no kun jotain piti keksiä :( | no_Interj kun_CS jokin_Pron pitää_V keksi_Adv :(_Punct |
| Plain | 4M basecol | junassa on vessa | junassa_Unkwn on_Unkwn vessa_Unkwn |
| Plain | 4M basecol | tuli tuli tulipesästä | tuli_N tuli_N tulipesästä_Punct |
| Plain | 4M basecol | voi on maukasta leivän päällä | voi_N on_V maukasta_Unkwn leivän_Abbr päällä_Abbr |
| Plain | 4M basecol | juodaan jaffaa ladassa | juodaan_Unkwn jaffaa_Punct ladassa_Unkwn |
| Plain | 4M basecol | liika vesi vesitti kilpailun | liika_N vesi_N vesitti_N kilpailun_Abbr |
| Plain | 4M basecol | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syynä_A ovat_Unkwn todennäköisimmin_Adv rutiininomaiset_Unkwn tietokannan_Adv ylläpitotoimet._Abbr |
| Plain | 4M basecol | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct miksi_Unkwn sinulla_Unkwn on_Unkwn tuollaisia_Unkwn lauseita_Punct tuossa?_Punct |
| Plain | 4M basecol | no kun jotain piti keksiä :( | no_Interj kun_Punct jotain_Adv piti_N keksiä_PrfPrc :(_Punct |
| Plain | 4M col1 | junassa on vessa | junassa_N on_V vessa_N |
| Plain | 4M col1 | tuli tuli tulipesästä | tuli_V tuli_V tulipesästä_N |
| Plain | 4M col1 | voi on maukasta leivän päällä | voi_V on_V maukasta_A leivän_N päällä_N |
| Plain | 4M col1 | juodaan jaffaa ladassa | juodaan_V jaffaa_V ladassa_PrsPrc |
| Plain | 4M col1 | liika vesi vesitti kilpailun | liika_A vesi_N vesitti_V kilpailun_N |
| Plain | 4M col1 | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syynä_N ovat_V todennäköisimmin_Adv rutiininomaiset_A tietokannan_N ylläpitotoimet._Punct |
| Plain | 4M col1 | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct miksi_Pron sinulla_Pron on_V tuollaisia_A lauseita_N tuossa?_Punct |
| Plain | 4M col1 | no kun jotain piti keksiä :( | no_Interj kun_Punct jotain_Pron piti_V keksiä_N :(_Punct |
| Voikko | 4M col1 | junassa on vessa | juna_N olla_V vessa_N |
| Voikko | 4M col1 | tuli tuli tulipesästä | tuli_V tuli_V tulipesä_N |
| Voikko | 4M col1 | voi on maukasta leivän päällä | voi_V olla_V maukas_A leipä_N pää_N |
| Voikko | 4M col1 | juodaan jaffaa ladassa | juoda_V jaffa_Num lada_V |
| Voikko | 4M col1 | liika vesi vesitti kilpailun | liika_A vesi_N vesittää_V kilpailu_N |
| Voikko | 4M col1 | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syy_N olla_V todennäköinen_A rutiininomainen_A tietokanta_N ylläpitotoimet._Punct |
| Voikko | 4M col1 | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct mikä_Pron sinä_Pron olla_V tuollainen_A lause_N tuossa?_Punct |
| Voikko | 4M col1 | no kun jotain piti keksiä :( | no_Interj kun_Punct jokin_Pron pitää_V keksi_N :(_Punct |
| Voikko | 4M voikko | junassa on vessa | juna_N olla_V vessa_N |
| Voikko | 4M voikko | tuli tuli tulipesästä | tuli_V tuli_N tulipesä_N |
| Voikko | 4M voikko | voi on maukasta leivän päällä | voi_V olla_V maukas_A leipä_N pää_N |
| Voikko | 4M voikko | juodaan jaffaa ladassa | juoda_N jaffa_N lada_N |
| Voikko | 4M voikko | liika vesi vesitti kilpailun | liika_Adv vesi_N vesittää_V kilpailu_N |
| Voikko | 4M voikko | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syy_N olla_V todennäköinen_A rutiininomainen_A tietokanta_N ylläpitotoimet._Punct |
| Voikko | 4M voikko | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct mikä_Pron sinä_Pron olla_V tuollainen_A lause_N tuossa?_Punct |
| Voikko | 4M voikko | no kun jotain piti keksiä :( | no_Interj kun_Punct jokin_Pron pitää_V keksi_N :(_Punct |
| Voikko | 4M basecol | junassa on vessa | juna_N olla_V vessa_N |
| Voikko | 4M basecol | tuli tuli tulipesästä | tuli_N tuli_N tulipesä_N |
| Voikko | 4M basecol | voi on maukasta leivän päällä | voi_N olla_V maukas_A leipä_N pää_N |
| Voikko | 4M basecol | juodaan jaffaa ladassa | juoda_V jaffa_N lada_V |
| Voikko | 4M basecol | liika vesi vesitti kilpailun | liika_N vesi_N vesittää_V kilpailu_N |
| Voikko | 4M basecol | syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. | syy_N olla_V todennäköinen_Adv rutiininomainen_A tietokanta_N ylläpitotoimet._Punct |
| Voikko | 4M basecol | teemu, miksi sinulla on tuollaisia lauseita tuossa? | teemu,_Punct mikä_Pron sinä_Pron olla_V tuollainen_A lause_N tuossa?_Punct |
| Voikko | 4M basecol | no kun jotain piti keksiä :( | no_Interj kun_CS jokin_Pron pitää_V keksi_N :(_Punct |

You can find all the POS tags listed and explained in the Treebank Manual. Here are the ones that appear above:

  • N = Noun
  • V = Verb
  • A = Adjective
  • Adv = Adverb
  • Pron = Pronoun
  • Num = Numeral
  • CC = Coordinating conjunction
  • CS = Subordinating conjunction
  • Interj = Interjection
  • PrfPrc = Past participle
  • PrsPrc = Present participle
  • Abbr = Abbreviation
  • Punct = Punctuation
  • Unkwn = Unknown

Some of these (CS, PrfPrc, Adv, …) are a bit more detailed than I ever wanted to get after leaving primary school 100 years ago. That is to say, I have no idea what they mean. Luckily I am really only interested in the POS tag as input to other algorithms, so I don’t much care what they are as long as they are correct and help to differentiate the words in context. Of course, given my weak grasp of the language nuances and the academic details of all those tags, I am not very good at judging the correctness of the taggings above. But a few notes anyway:

  • Using the baseform column from the Treebank to train the tagger and then tagging unprocessed sentences (tagger “Plain”) gives lots of unknowns and failed taggings in general. The size of the training corpus makes little difference.
  • Using Treebank col1 to train the “Plain” tagger gives better results. It still has some issues, but most general cases are not too bad.
  • Baseforming all the words in the sentence to be tagged with Voikko (tagger “Voikko”) while training on col1 results in roughly similar performance to the “Plain” tagger with col1.
  • Tagger “Voikko” with training type “voikko” and 4M sentences seems to give the best match. It has some issues though.
  • Baseforming the sentence to be tagged with Voikko has a chicken-and-egg problem (as mentioned in the Wikipedia links above). A word can have multiple baseforms depending on its POS, and if you need the baseform to do the POS tagging, how do you pick which one to use? For example, “keksiä” in Finnish refers to inventing (“innovating”) but could also be a form of “keksi” (“cookie”). Here I just used the first baseform Voikko gives for a word, which for “keksiä” happens to be the “cookie” one, when the correct one in this case would be the verb.
  • As there are two different baseforming approaches here (Voikko and the Treebank baseform column), mixing them gives worse results than using a unified approach (Voikko for both training and later tagging). So it is better to stick with the same baseformer/lemmatizer for all data.
  • Special elements such as smileys would need to be trained separately :). Here they are just treated as punctuation.
  • “Jaffa” is a Finnish soft drink. Here it gets classified correctly as N, but also as numeral, punctuation, or verb. Maybe it is too rare a word or something? Numeral and punctuation are still odd.
  • Splitting on whitespace causes issues with sentences ending in punctuation: the last words of sentences ending in “.”, “?”, or similar get classified as “Punct”. Better splitting (tokenization) is needed; see the sketch after this list. Since the tagger is also trained on punctuation, it should not simply be discarded, though, as I guess it can provide valuable context for the rest of the words.
  • I made some of my test sentences up to be difficult to POS tag, so with the very limited set of sentences above this is likely not a generally representative case. For example, “Tuli tuli” can be translated as “Fire came” (the intent here), “Fire fire”, or “It came it came”, and probably valid taggings would include “N V N”, “V N N”, “N N N”, and “V V V”. Some of it might even be difficult for humans without broader context, although “tulipesä” (fireplace) would likely tip people off. Similarly, “voi” could also be translated as “butter” (the intent here) or “could”.
  • Much bigger tests would be very useful to categorize what can be tagged right, what causes issues, etc.
  • It would also be useful to have a system for judging whether a sentence was tagged right or not, and for further retraining the tagger on the errors. Maybe use a generator to build further examples of such errors.
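On the tokenization point above, even a simple regex split that keeps punctuation as separate tokens would help. A minimal sketch (Python 3’s re module handles the Finnish letters fine):

```python
import re

def tokenize(sentence):
    # keep runs of word characters and runs of punctuation as separate
    # tokens, instead of the plain whitespace split used in the tests above
    return re.findall(r"\w+|[^\w\s]+", sentence)

print(tokenize("teemu, miksi sinulla on tuollaisia lauseita tuossa?"))
# ['teemu', ',', 'miksi', 'sinulla', 'on', 'tuollaisia', 'lauseita', 'tuossa', '?']
```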

So I guess the better configurations here can do a reasonable job of tagging most sentences, as illustrated by these results and the ones I listed before (the accuracy test on the Treebank train/test split).

Most obviously, words with multiple meanings (multiple possible POS tags) still require some more tuning. Maybe something with broader context (e.g., previous sentences, following sentences, iterations, probabilistic approaches, …)?

I am not so familiar with all the related work, such as Google’s Parsey McParseface. Because, you know, it’s deep learning and that is all the rage, right? 🙂 It would be interesting to try, but the whole setup is more than I can do right now.

Better tuning of the OpenNLP parameters might also help if I had more expertise on that, and on its mapping to the peculiarities of Finnish. In general, I am sure I am missing plenty of magic tricks the NLP gurus could use.

In general, I guess it is most likely better to run the tagger before lemmatization/baseforming, as noted before.

What more can I summarize here? Not much beyond the bullets and points above. But this may provide a useful starting point for those interested in POS tagging for Finnish, and possibly some useful points for other languages as well.
