Report

Kurtis Cahill, James Badal

Outline: Introduction, Model a Maze as a Markov Chain, Assumptions, First Approach and Example, Second Approach and Example, Experiment, Results, Conclusion

Introduction
Problem: to find an efficient way of computing the rate at which a given cell inside a large maze is visited.
Application: to find the best possible place in the maze to intercept information.

Model a Maze as a Markov Chain
Each maze cell is modeled as a state in a Markov chain, which allows stochastic principles to be applied to the problem. The resulting Markov chain consists of a single recurrent class.

Assumptions
To reduce the complexity of the problem and of the simulation, the following assumptions are made:
1. Transitions to adjacent cells are unbiased.
2. The random walker cannot remain in its current cell (no self-transitions).
3. The maze contains no isolated cells.

First Approach and Example
Let r_i be the steady-state rate of the ith state of the Markov chain, and p_ji the probability of moving from state j to state i on the next step. From the transition matrix for the random walk on the maze, a system of steady-state rate equations is formed and row reduced to obtain the solution.

Second Approach and Example
Let r_i be the steady-state rate of the ith state, p a proportionality constant, and n_i the number of connections to the ith cell. The solution to the system of steady-state rate equations is then r_i = p * n_i.

Experiment
The random walker starts at a chosen maze location and walks 10^8 steps. At each step it increments the visit count of the most recently visited cell. At the end of the experiment the mean and standard deviation are measured, and the measured result is compared with the calculated result.

Results
Random walk results for 2x2, 5x5, 10x10, 20x20, and 40x40 mazes.

Conclusion
The maze was modeled as a Markov chain and stochastic principles were applied to it. The first approach has O(n^3) complexity, while the second approach has O(n) complexity. The calculated result was tested against the measured result.
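The first approach, solving the steady-state rate equations built from the transition matrix, can be sketched as follows. This is a minimal example, not the report's actual code; the 2x2 maze, its cell numbering, and its adjacency are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical 2x2 maze, cells numbered row-major; every cell is
# connected to its two grid neighbors (no isolated cells, Assumption 3).
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

n = len(adjacency)
# Transition matrix: unbiased moves to adjacent cells (Assumption 1),
# and no self-transitions (Assumption 2), so the diagonal stays zero.
P = np.zeros((n, n))
for j, neighbors in adjacency.items():
    for i in neighbors:
        P[j, i] = 1.0 / len(neighbors)

# Steady-state rate equations r = r P, i.e. (P^T - I) r = 0.  One of
# these equations is redundant, so replace it with the normalization
# sum(r_i) = 1 and solve the resulting full-rank system.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
rates = np.linalg.solve(A, b)  # dense solve: O(n^3) in the cell count
print(rates)  # each cell's rate; 0.25 per cell for this symmetric maze
```

The O(n^3) cost comes from the dense linear solve, which is what makes this approach expensive for large mazes.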
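The second approach exploits the solution r_i = p * n_i: each cell's steady-state rate is proportional to its number of connections, so one pass over the cells suffices. A minimal sketch, reusing the same hypothetical 2x2 maze:

```python
# Hypothetical 2x2 maze, cells numbered row-major (same illustrative
# adjacency as above, not the report's actual data).
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

# r_i = p * n_i with p = 1 / sum_j n_j so that the rates sum to 1.
# Single pass over the cells: O(n) complexity.
total_connections = sum(len(v) for v in adjacency.values())
rates = {cell: len(neighbors) / total_connections
         for cell, neighbors in adjacency.items()}
print(rates)  # 0.25 per cell here, matching the first approach
```

Because every cell in this toy maze has exactly two connections, both approaches give a uniform rate of 0.25; in a maze with junctions and dead ends the rates differ per cell.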
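The experiment can be sketched as a simple random-walk simulation. The report walks 10^8 steps; the step count, starting cell, seed, and 2x2 maze below are illustrative assumptions scaled down so the sketch runs quickly.

```python
import random

# Hypothetical 2x2 maze, cells numbered row-major.
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

steps = 10**5      # the report uses 10^8; smaller here for speed
random.seed(0)     # fixed seed so the sketch is reproducible

# Walk: at each step move to a uniformly chosen adjacent cell and
# increment the visit count of the most recently visited cell.
visits = {cell: 0 for cell in adjacency}
cell = 0
for _ in range(steps):
    cell = random.choice(adjacency[cell])
    visits[cell] += 1

# Measured rates, to be compared with the calculated steady-state rates.
measured = {c: v / steps for c, v in visits.items()}
print(measured)  # each rate close to the calculated 0.25
```

With enough steps the measured rates converge to the calculated ones, which is how the experiment validates both approaches.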