Markov decision processes (MDPs), also known as stochastic dynamic control or stochastic dynamic programming, are commonly used for sequential decision problems under uncertainty. In these problems, a sequence of interrelated decisions must be made while the outcomes of those decisions are uncertain. This talk briefly introduces Markov reward processes and Markov decision theory, and presents industry examples to illustrate the application of MDPs.
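To make the idea concrete, here is a minimal sketch (not taken from the talk) of how an MDP is typically solved: a toy two-state machine-maintenance problem, solved by value iteration on the Bellman optimality equation. The states, actions, transition probabilities, rewards, and discount factor are all illustrative assumptions.

```python
# Toy MDP (illustrative assumptions, not from the talk).
# States: 0 = "working", 1 = "broken". Actions: 0 = "run", 1 = "repair".
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 1.0)]},
    1: {0: [(1, 1.0)],           1: [(0, 0.8), (1, 0.2)]},
}
R = {0: {0: 5.0, 1: 2.0}, 1: {0: -1.0, 1: 0.0}}
gamma = 0.9  # discount factor (assumed)

# Value iteration: repeatedly apply the Bellman optimality update
#   V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
V = {0: 0.0, 1: 0.0}
for _ in range(500):
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in P[s])
         for s in P}

# Extract the greedy policy with respect to the converged values.
policy = {s: max(P[s], key=lambda a: R[s][a] + gamma *
                 sum(p * V[s2] for s2, p in P[s][a]))
          for s in P}
print(policy)
```

For this choice of rewards and transitions, the optimal policy is to keep running the machine while it works and to repair it once it breaks; the same pattern (state the uncertainty, iterate the Bellman update, read off the policy) underlies the larger industry applications the talk discusses.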