Introduction to Random Processes is divided into five thematic blocks: Introduction, Probability review, Markov chains, Continuous-time Markov chains, and Gaussian, Markov and stationary random processes. In this page you will find the lecture slides we use to cover the material in each of these blocks. Links to the slides are also available from the class schedule table.

__Acknowledgment:__ The material made available here was originally conceived, prepared and shared by my friend and colleague
Alejandro Ribeiro, who teaches a Stochastic Systems class at Penn. I have only revised and adapted the contents to our needs,
so if you like the slides and find them useful all credit is due to him.

**Introduction**

This first lecture outlines the organizational aspects of the class as well as its contents. The topics are exemplified through the study of a simple stochastic system known as lower-bounded random walk.

- Slides for this introductory block,
which I will cover in the first class.
**(Updated 08/10/17)** - Speech is an example of a phenomenon for which stochastic modeling is fruitful. Speech waveforms for the words "Hi", "Good" and "Bye", as well as the fricative sound "S", are available in these files. To play these sounds use this Matlab code.
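As a taste of what is to come, the lower-bounded random walk can be simulated in a few lines. The sketch below is a minimal version with my own choice of step probabilities and a barrier at 0; it illustrates the idea, not necessarily the exact model used in class.

```python
import random

def lower_bounded_random_walk(n_steps, p_up=0.5, seed=0):
    """Simulate a random walk X(n) that cannot go below 0.

    At each step the state increases by 1 with probability p_up and
    decreases by 1 otherwise, except that a down-step at 0 is blocked.
    These conventions are illustrative assumptions.
    """
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        step = 1 if rng.random() < p_up else -1
        x = max(x + step, 0)   # enforce the lower bound at 0
        path.append(x)
    return path
```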

**Probability review**

This block is a fast-paced review of Probability theory, intended as a refresher and to ensure everyone is on the same page. For additional reading, see Gubner Chs. 1-5 and 7 as well as Ross Ch. 3.

- First set of slides covering sigma-algebras and
the axiomatic definition of probability, conditional probability, and the notion of independence between events. We will
then introduce random variables -- both discrete and continuous -- and commonly used distributions. We finish off with expectations and
joint distributions, which are useful for studying systems with multiple random inputs. I expect to cover this material in three lectures.
**(Updated 09/07/17)** - Second set of slides
covering Markov's and Chebyshev's inequalities, and different notions of convergence for sequences of random variables (random processes).
These notions of convergence allow one to establish and understand remarkable limit theorems, such as the law of large numbers and the
central limit theorem.
This second block is concluded with conditional probabilities and expectations, tools that will prove extremely useful for calculations down the road.
I expect to cover this material in two lectures.
**(Updated 09/20/17)**
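As a quick numerical illustration of the law of large numbers, the sample mean of i.i.d. draws concentrates around the true mean as the sample size grows. The sketch below uses Uniform(0, 1) draws, a choice made purely for illustration:

```python
import random

def sample_mean(n, seed=0):
    """Average of n i.i.d. Uniform(0, 1) draws; the true mean is 0.5."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

# By the law of large numbers, sample_mean(n) approaches 0.5 as n grows.
```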

**Markov chains**

This block deals with discrete-time Markov chains. Markov chains are random processes that involve a discrete time index n and a time-varying state X(n) taking values in a finite or countable space. The defining property of Markov chains is that the probability distribution of X(n+1) is unequivocally determined once X(n) is known. This implies that, given X(n), the past history of the process, that is, what happened up to time n-1, is irrelevant for the future of the process (what will happen after time n). For additional reading, see Ross Ch. 4 and Gubner Ch. 12.
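The Markov property is what makes multi-step transition probabilities computable by matrix multiplication (the Chapman-Kolmogorov equations). A minimal sketch with a hypothetical two-state chain:

```python
def mat_mult(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# A hypothetical two-state chain: P[i][j] = P(X(n+1) = j | X(n) = i).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Chapman-Kolmogorov: the 2-step transition matrix is the matrix product
# P * P; more generally, the (m+n)-step matrix is the product of the
# m-step and n-step matrices.
P2 = mat_mult(P, P)
```

For instance, P2[0][0] = 0.9 * 0.9 + 0.1 * 0.5 = 0.86 adds up the probabilities of the two length-2 paths from state 0 back to itself.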

- First set of slides covering
definitions, examples, notation, and the introduction of the Chapman-Kolmogorov equations for the multi-step transition probabilities.
We then discuss two simple examples, gambler's ruin and buffers in communication networks. We finish with the introduction of the
different classes of states that may compose a Markov chain. I expect to cover this material in three lectures.
**(Updated 10/04/17)** - Second set of slides covering
limiting distributions of irreducible, aperiodic, and positive recurrent (i.e., ergodic) Markov chains. We argue why these limiting
probabilities are also known as
the stationary distribution. We introduce the notion of ergodicity and the relationship between the stationary distribution and the long-run
fraction of time that a Markov chain spends in each state.
We discuss a simple model of buffers (queues) in communication systems, where we calculate the limiting probabilities, derive the queue balance equations,
and compute the probability of buffer overflow. I expect to cover this material in two lectures.
**(Updated 10/04/17)** - Third set of slides covering
PageRank algorithms for ranking nodes in graphs. I expect to cover this material in one lecture.
**(Updated 10/04/17)** - In case you want to read more about PageRank and website ranking,
here is the original paper
by Google's co-founders L. Page and S. Brin.
This chapter from
D. Easley and J. Kleinberg's book
*Networks, Crowds, and Markets: Reasoning About a Highly Connected World* is also a nice reference explaining the basics behind web search.
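At its core, PageRank computes the stationary distribution of a Markov chain over the web graph. The sketch below is a basic power-iteration version on a made-up three-node graph; the damping factor 0.85 matches the value reported by Page and Brin, but everything else is an illustrative choice of mine:

```python
def pagerank(links, d=0.85, iters=100):
    """Basic PageRank by power iteration on a small directed graph.

    links maps each node to the list of nodes it points to.
    """
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for v in nodes:
            out = links[v]
            if out:                       # distribute rank along out-links
                share = d * rank[v] / len(out)
                for w in out:
                    new[w] += share
            else:                         # dangling node: spread uniformly
                for w in nodes:
                    new[w] += d * rank[v] / n
        rank = new
    return rank

# Toy example: 'a' is pointed to by both other nodes, so it ranks highest.
r = pagerank({'a': ['b'], 'b': ['a'], 'c': ['a']})
```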

**Continuous-time Markov chains**

Moving on from discrete-time random processes, we will now consider a continuous time index t but still restrict our attention to finite or countable states X(t). We will also retain the memoryless property, which in this case requires the probability distribution of X(s) given X(t), for any s > t, to be independent of X(u) for all times u < t. Random processes with this Markovian property are called continuous-time Markov chains, which are pervasive and the main subject of this thematic block. In rough terms, we can say that any system that can be deterministically modeled with a differential equation can be stochastically modeled as a continuous-time Markov chain. For additional reading, see Ross Chs. 5, 6, and 8 as well as Gubner Chs. 11 and 12.
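The memoryless property is easy to check numerically for exponential random variables, which satisfy P(T > s + t | T > s) = P(T > t). A Monte Carlo sketch, with arbitrary choices of rate and thresholds:

```python
import random

def memoryless_check(rate=1.0, s=1.0, t=0.5, n=100_000, seed=0):
    """Estimate P(T > s+t | T > s) and P(T > t) for T ~ Exp(rate)."""
    rng = random.Random(seed)
    draws = [rng.expovariate(rate) for _ in range(n)]
    survivors = [x for x in draws if x > s]        # condition on T > s
    cond = sum(x > s + t for x in survivors) / len(survivors)
    uncond = sum(x > t for x in draws) / n
    return cond, uncond   # both approximate exp(-rate * t)
```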

- First set of
slides covering exponential random variables and their memoryless property. We then move on to counting processes with independent and
stationary increments, and introduce the Poisson process as a very important special case. We show that interarrival times of a Poisson
process are i.i.d. exponential random variables. We finish with a few examples, and study the superposition, thinning and splitting of a
Poisson process. I expect to cover this material in two lectures.
**(Updated 10/23/17)** - Second set of slides
covering the definition of continuous-time Markov chains, the notions of transition rates and transition times, and the embedded discrete-time Markov
chain, as well as examples such as birth and death processes and the M/M/1 queue.
We then introduce the Chapman-Kolmogorov equations, and Kolmogorov's forward and backward equations that the transition probability
function satisfies. We finish with the study of limiting distributions, balance equations, and ergodicity.
I expect to cover this material in three lectures.
**(Updated 10/31/16)** - Third set of slides
covering queuing systems, arguably one of the most pervasive applications of continuous-time Markov chains.
Queuing theory is fundamental to the
analysis of (public) transportation systems, packet traffic in communication networks, logistics, and customer service, to name a few
application domains. For
such queuing systems, we will determine important practical quantities such as the average number of customers waiting in line,
and the average time a customer spends waiting in the queue.
I expect to cover this material in one lecture.
**(Updated 11/16/16)** - Fourth set of slides
covering simple deterministic and stochastic models for predator-prey population dynamics. Specifically, we will focus on the Lotka-Volterra
system of differential equations, and its stochastic counterpart modeled as a continuous-time Markov chain. I expect to cover this material
in one lecture, but only if time allows.
**(Updated 10/20/16)** - Fifth set of slides covering simulation and modeling of biochemical reactions that are characterized by the involvement of a small number of reactants. In such cases it is known that randomness may play an important role in the overall behavior of the system. Gillespie's algorithm is the workhorse simulation technique described here along with representative examples. I will not cover this material in class, which is optional reading for those interested in exploring additional applications of continuous-time Markov chains.
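Gillespie's algorithm from the last set of slides can be sketched compactly for the stochastic Lotka-Volterra model from the fourth set. The reaction rate constants below are my own illustrative values, not ones taken from the slides:

```python
import random

def gillespie_lotka_volterra(prey=100, pred=50, t_max=5.0,
                             b=1.0, k=0.005, d=1.0, seed=0):
    """Gillespie's algorithm for a stochastic Lotka-Volterra model.

    Reactions (rate constants b, k, d are illustrative):
      prey birth:      X -> 2X      at rate b * X
      predation:       X + Y -> 2Y  at rate k * X * Y
      predator death:  Y -> 0       at rate d * Y
    """
    rng = random.Random(seed)
    t, x, y = 0.0, prey, pred
    history = [(t, x, y)]
    while t < t_max:
        rates = [b * x, k * x * y, d * y]
        total = sum(rates)
        if total == 0:                    # both populations extinct
            break
        t += rng.expovariate(total)       # exponential time to next reaction
        u = rng.random() * total          # pick which reaction fires
        if u < rates[0]:
            x += 1
        elif u < rates[0] + rates[1]:
            x, y = x - 1, y + 1
        else:
            y -= 1
        history.append((t, x, y))
    return history
```

Each step draws an exponential waiting time with the total rate, then selects a reaction with probability proportional to its rate, which is exactly the jump structure of a continuous-time Markov chain.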

**Gaussian, Markov, and stationary random processes**

The natural progression is to eliminate the restrictions on the countable number of states and (possibly) the Markov property, thus leading to the general Gaussian and stationary random processes studied in this last block of the class. The most important tools in the analysis of stationary random processes are the autocorrelation function and the power spectral density; the latter carries Fourier analysis over to random settings, as it is the Fourier transform of the autocorrelation function. Of particular importance in engineering is the analysis of those stationary processes obtained at the output of linear time-invariant systems fed with random input signals. For additional reading, see Gubner Chs. 10 and 11 as well as Ross Ch. 10.

- First set of
slides introducing Markov processes, Gaussian processes and stationary processes. We then study Gaussian processes in some detail, with emphasis on Brownian motion and
white noise. Finally, we introduce biased Brownian motion and geometric Brownian motion to prepare for the upcoming discussion of arbitrage and stock and option pricing.
I expect to cover this material in two lectures.
**(Updated 11/16/16)** - Second set of slides
covering the definition of arbitrage in the context of betting and investments, the arbitrage theorem, as well as the no-arbitrage
condition when a stock flipping investment strategy is adopted and stock prices follow a geometric Brownian motion.
We then introduce the risk-neutral measure, under which a risky investment offers the same expected return as a risk-free one.
We finish with the derivation of the Black-Scholes formula for pricing stock options, where prices are set to prevent
arbitrage opportunities when expected investment returns are evaluated under the risk-neutral measure.
I expect to cover this material in two lectures.
**(Updated 11/28/16)** - Third set of slides
covering stationary random processes, and the implications of stationarity on the processes' pdfs, mean, and autocorrelation functions. Wide-sense stationary
random processes and their properties are then introduced and compared with their strictly stationary counterparts.
We conclude with the study of the power spectral density of a stationary process, with emphasis on the output
of a linear time-invariant filter and its applications to system identification and equalization.
I expect to cover this material in three lectures.
**(Updated 12/5/16)**
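The interplay between stationarity, autocorrelation, and LTI filtering in this last set of slides can be checked numerically. The sketch below passes Gaussian white noise through a two-tap moving-average filter (my own choice of filter) and compares the empirical autocorrelation with the theoretical values R_Y(0) = 1/2, R_Y(1) = 1/4, and R_Y(lag) = 0 for lag >= 2:

```python
import random

def filtered_noise_autocorr(n=100_000, seed=0):
    """Empirical autocorrelation of white noise through an LTI filter.

    X(n) is i.i.d. standard Gaussian white noise and
    Y(n) = (X(n) + X(n-1)) / 2 is a two-tap moving-average filter
    (an illustrative choice). Theory predicts R_Y(0) = 1/2,
    R_Y(1) = 1/4 and R_Y(lag) = 0 for lag >= 2.
    """
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    y = [(x[i] + x[i - 1]) / 2 for i in range(1, n)]

    def r(lag):
        m = len(y) - lag
        return sum(y[i] * y[i + lag] for i in range(m)) / m

    return r(0), r(1), r(2)
```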