My research focuses on developing a new method for Hidden Markov Model (HMM) decoding that uses estimated posterior probabilities of the hidden states, in contrast to standard Viterbi decoding, which relies on the model's prior probabilities. Most modern speech recognition systems use HMMs to model speech at the phone or word level, so decoding is central to recognition. The crux of my research is the development of new "Viterbi-esque" algorithms that combine these posterior probabilities with the HMM in the hope of improving recognition accuracy. These algorithms have been tested in Isolated Word Recognition (IWR) systems, and the results so far suggest that such a gain may be attainable: in three IWR scenarios of varying difficulty, the new decoding algorithms outperformed the standard HMM decoding methods. However, considerably more testing and development remain to be done.
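To make the contrast concrete, the sketch below implements the two textbook baselines mentioned above: standard Viterbi decoding (the single most likely state path) and posterior decoding via the forward-backward algorithm (the individually most probable state at each time step). This is a minimal illustration on a toy two-state HMM with made-up parameters; it is not the new algorithm described in the research, which the source does not specify in detail.

```python
import numpy as np

# Toy 2-state HMM with illustrative (assumed) parameters.
A = np.array([[0.7, 0.3],      # transition probabilities A[i, j] = P(j | i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission probabilities B[i, k] = P(obs k | state i)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution
obs = [0, 0, 1, 0]             # an example observation sequence

def viterbi(obs, A, B, pi):
    """Standard Viterbi decoding: the single most likely hidden-state path."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))           # best path score ending in each state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):      # follow backpointers to recover the path
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

def posterior_decode(obs, A, B, pi):
    """Posterior decoding: pick the individually most probable state at
    each time step, using forward-backward posteriors P(state_t | obs)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))           # forward probabilities
    beta = np.zeros((T, N))            # backward probabilities
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # normalize to posteriors
    return [int(s) for s in gamma.argmax(axis=1)]

print(viterbi(obs, A, B, pi))
print(posterior_decode(obs, A, B, pi))
```

On this toy sequence the two decoders agree, but in general they can differ: Viterbi maximizes the joint probability of the whole path, while posterior decoding maximizes per-step state posteriors and may even select a path with zero transition probability.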