Foundations of Statistical Natural Language Processing, Markov Models (Chapter 9) Example
State Transition Probability Table
| Current State | Next State | Probability |
|---|---|---|
| Cola Pref. (CP) | Cola Pref. (CP) | 0.7 |
| Cola Pref. (CP) | Iced Tea Pref. (IP) | 0.3 |
| Iced Tea Pref. (IP) | Cola Pref. (CP) | 0.5 |
| Iced Tea Pref. (IP) | Iced Tea Pref. (IP) | 0.5 |
Observation Probability Table
| Current State | Cola | Iced Tea | Lemon Tea |
|---|---|---|---|
| Cola Pref. (CP) | 0.6 | 0.1 | 0.3 |
| Iced Tea Pref. (IP) | 0.1 | 0.7 | 0.2 |
For the observation sequence {Lemon Tea, Iced Tea, Cola}, assuming the machine starts in state CP (so αCP(1) = 1.0). Following the chapter's convention, each drink is emitted on the transition from time t to t+1, so the tables below run over t = 1..4.
Forward Procedure
| t | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| αCP | 1.0 | 0.21 (1.0\*0.3\*0.7) | 0.0462 (0.21\*0.1\*0.7 + 0.09\*0.7\*0.5) | 0.021294 (0.0462\*0.6\*0.7 + 0.0378\*0.1\*0.5) |
| αIP | 0 | 0.09 (1.0\*0.3\*0.3) | 0.0378 (0.21\*0.1\*0.3 + 0.09\*0.7\*0.5) | 0.010206 (0.0462\*0.6\*0.3 + 0.0378\*0.1\*0.5) |
| P (= αCP + αIP) | | 0.3 | 0.084 | 0.0315 |
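The forward recursion above can be sketched in a few lines. This is a minimal illustration, not the book's code; the state names, parameter dictionaries, and the `forward` helper are my own.

```python
# Transition and emission probabilities from the two tables above.
A = {"CP": {"CP": 0.7, "IP": 0.3}, "IP": {"CP": 0.5, "IP": 0.5}}
B = {"CP": {"cola": 0.6, "iced_tea": 0.1, "lemon_tea": 0.3},
     "IP": {"cola": 0.1, "iced_tea": 0.7, "lemon_tea": 0.2}}
obs = ["lemon_tea", "iced_tea", "cola"]

def forward(A, B, obs, start="CP"):
    """Return the list of alpha vectors, one per time step t = 1..T+1."""
    alpha = {s: (1.0 if s == start else 0.0) for s in A}
    alphas = [alpha]
    for o in obs:
        # alpha_j(t+1) = sum_i alpha_i(t) * b_i(o_t) * a_ij  (emit o_t, then move)
        alpha = {j: sum(alphas[-1][i] * B[i][o] * A[i][j] for i in A) for j in A}
        alphas.append(alpha)
    return alphas

alphas = forward(A, B, obs)
print({s: round(a, 6) for s, a in alphas[-1].items()})  # {'CP': 0.021294, 'IP': 0.010206}
print(round(sum(alphas[-1].values()), 4))               # P(O) = 0.0315
```

Summing the final alpha vector reproduces the total observation probability 0.0315 from the last column of the table.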
Backward Procedure
| t | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| βCP | 0.0315 (0.045\*0.3\*0.7 + 0.245\*0.3\*0.3) | 0.045 (0.6\*0.1\*0.7 + 0.1\*0.1\*0.3) | 0.6 | 1.0 |
| βIP | 0.029 (0.045\*0.2\*0.5 + 0.245\*0.2\*0.5) | 0.245 (0.6\*0.7\*0.5 + 0.1\*0.7\*0.5) | 0.1 | 1.0 |
| P (= αCP(1)\*βCP(1)) | 0.0315 | | | |
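The backward pass mirrors the forward one, filling the table right to left. Again a minimal sketch with my own names, not the book's code:

```python
# Transition and emission probabilities from the tables at the top.
A = {"CP": {"CP": 0.7, "IP": 0.3}, "IP": {"CP": 0.5, "IP": 0.5}}
B = {"CP": {"cola": 0.6, "iced_tea": 0.1, "lemon_tea": 0.3},
     "IP": {"cola": 0.1, "iced_tea": 0.7, "lemon_tea": 0.2}}
obs = ["lemon_tea", "iced_tea", "cola"]

def backward(A, B, obs):
    """Return the list of beta vectors for t = 1..T+1, computed right to left."""
    beta = {s: 1.0 for s in A}          # beta_i(T+1) = 1
    betas = [beta]
    for o in reversed(obs):
        # beta_i(t) = b_i(o_t) * sum_j a_ij * beta_j(t+1)
        beta = {i: B[i][o] * sum(A[i][j] * betas[0][j] for j in A) for i in A}
        betas.insert(0, beta)
    return betas

betas = backward(A, B, obs)
print(round(betas[0]["CP"], 6))   # 0.0315 = P(O), since the machine starts in CP
```

Because the start state is CP with probability 1, βCP(1) alone equals P(O), matching the 0.0315 obtained from the forward procedure.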
Best State Sequence
| t | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| γCP | 1.0 | 0.3 ((0.21\*0.045)/0.0315) | 0.88 ((0.0462\*0.6)/0.0315) | 0.676 (0.021294/0.0315) |
| γIP | 0 | 0.7 ((0.09\*0.245)/0.0315) | 0.12 ((0.0378\*0.1)/0.0315) | 0.324 (0.010206/0.0315) |
| State | CP | IP | CP | CP |
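Given the α and β tables, γ and the locally most likely state at each time fall out directly from γi(t) = αi(t)βi(t)/P(O). A sketch using the values computed above (variable names are mine):

```python
# Alpha and beta values copied from the forward and backward tables above.
alpha = {"CP": [1.0, 0.21, 0.0462, 0.021294], "IP": [0.0, 0.09, 0.0378, 0.010206]}
beta  = {"CP": [0.0315, 0.045, 0.6, 1.0],     "IP": [0.029, 0.245, 0.1, 1.0]}
P = 0.0315                                    # total observation probability

# gamma_i(t) = alpha_i(t) * beta_i(t) / P(O); take the argmax state per time step.
best_seq = []
for t in range(4):
    gamma = {s: alpha[s][t] * beta[s][t] / P for s in ("CP", "IP")}
    best_seq.append(max(gamma, key=gamma.get))
print(best_seq)   # ['CP', 'IP', 'CP', 'CP']
```

Note that this picks the individually most likely state at each time, which need not form the single most likely path; that is what the Viterbi algorithm below computes.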
Viterbi Algorithm
| t | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| δCP | 1.0 | 0.21 (max{1.0\*0.3\*0.7, 0\*0.2\*0.5}) | 0.0315 (max{0.21\*0.1\*0.7, 0.09\*0.7\*0.5}) | 0.01323 (max{0.0315\*0.6\*0.7, 0.0315\*0.1\*0.5}) |
| δIP | 0 | 0.09 (max{1.0\*0.3\*0.3, 0\*0.2\*0.5}) | 0.0315 (max{0.21\*0.1\*0.3, 0.09\*0.7\*0.5}) | 0.00567 (max{0.0315\*0.6\*0.3, 0.0315\*0.1\*0.5}) |
| ψCP | | CP | IP | CP |
| ψIP | | CP | IP | CP |
| Xt | CP | IP | CP | CP |
| P(X) | | | | 0.01323 (= max{δi(T+1)}) |
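A minimal Viterbi sketch (variable names are mine, not the book's). Note that each δ column is built from the previous δ column, never from the forward α values, and ψ records which previous state achieved each max:

```python
# Transition and emission probabilities from the tables at the top.
A = {"CP": {"CP": 0.7, "IP": 0.3}, "IP": {"CP": 0.5, "IP": 0.5}}
B = {"CP": {"cola": 0.6, "iced_tea": 0.1, "lemon_tea": 0.3},
     "IP": {"cola": 0.1, "iced_tea": 0.7, "lemon_tea": 0.2}}
obs = ["lemon_tea", "iced_tea", "cola"]

def viterbi(A, B, obs, start="CP"):
    """Return the single most likely state sequence and its probability."""
    delta = {s: (1.0 if s == start else 0.0) for s in A}
    psi = []                              # backpointers for t = 2..T+1
    for o in obs:
        prev, delta, back = delta, {}, {}
        for j in A:
            # delta_j(t+1) = max_i delta_i(t) * b_i(o_t) * a_ij
            i_best = max(A, key=lambda i: prev[i] * B[i][o] * A[i][j])
            delta[j] = prev[i_best] * B[i_best][o] * A[i_best][j]
            back[j] = i_best              # psi_j(t+1): argmax predecessor
        psi.append(back)
    # Backtrack from the most probable final state.
    state = max(delta, key=delta.get)
    path = [state]
    for back in reversed(psi):
        state = back[state]
        path.insert(0, state)
    return path, max(delta.values())

path, p = viterbi(A, B, obs)
print(path, round(p, 6))   # ['CP', 'IP', 'CP', 'CP'] 0.01323
```

Here the Viterbi path happens to coincide with the sequence of individually most likely states, but its probability 0.01323 is the probability of that one path, whereas the forward total 0.0315 sums over all paths.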