Phase Three

Markov Chains

To make sure we understood the technique, James provided us with exercises that put Markov chains to use; below you will also find my working out.

[Photos of my worked exercises: IMG_1429.jpg, IMG_1430.jpg]

Aside from real-life applications, the games example given was an AI moving between four locations, determining which area is safest after taking a shot. In this example, the initial state matrix holds the starting values for how safe each location is, and the transition matrix represents the outcome of the AI taking a shot, which lowers those values.
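As a minimal sketch of that example, the AI's safety estimates could be stored as a row vector and updated by multiplying with a transition matrix each time a shot is fired. All the numbers below are hypothetical placeholders, not the values from the workshop:

```python
def step(state, transition):
    """One Markov step: multiply a 1xN state row vector by an NxN transition matrix."""
    n = len(state)
    return [sum(state[i] * transition[i][j] for i in range(n)) for j in range(n)]

# Initial state matrix: how safe each of the four locations is (hypothetical values).
safety = [0.40, 0.30, 0.20, 0.10]

# Transition matrix: row i describes how location i's safety redistributes
# after the AI takes a shot (hypothetical values; each row sums to 1).
shot = [
    [0.50, 0.20, 0.20, 0.10],
    [0.10, 0.60, 0.20, 0.10],
    [0.10, 0.20, 0.60, 0.10],
    [0.10, 0.10, 0.10, 0.70],
]

after_shot = step(safety, shot)
safest = after_shot.index(max(after_shot))  # index of the safest location
```

With these placeholder numbers the AI would move to whichever index comes out highest; in a game you would tune the matrices from playtesting rather than invent them by hand.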

From this example, we then developed an idea as a group for how Markov chains could be used in our own practices. We came to the conclusion that they could be used to mix ingredients: each ingredient added has an effect on the pre-existing ingredients, which in turn may make the potion stronger or weaker.
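One way the group's potion idea could be sketched is to treat the potion's strength as a state vector and each ingredient as its own transition matrix, applied in the order the ingredients are added. The ingredient names and all the numbers here are hypothetical, just to illustrate the idea:

```python
def step(state, transition):
    """One Markov step: multiply a 1xN state row vector by an NxN transition matrix."""
    n = len(state)
    return [sum(state[i] * transition[i][j] for i in range(n)) for j in range(n)]

STATES = ["weak", "medium", "strong"]

# The potion starts out certainly weak.
strength = [1.0, 0.0, 0.0]

# Each ingredient is a transition matrix (hypothetical values; each row sums to 1).
# Herbs tend to strengthen the potion, water tends to dilute it.
herb = [
    [0.3, 0.6, 0.1],
    [0.0, 0.5, 0.5],
    [0.0, 0.1, 0.9],
]
water = [
    [0.9, 0.1, 0.0],
    [0.4, 0.5, 0.1],
    [0.1, 0.4, 0.5],
]

# Adding ingredients one after another chains the matrices together.
for ingredient in [herb, water, herb]:
    strength = step(strength, ingredient)

likeliest = STATES[strength.index(max(strength))]
```

Because each ingredient's effect depends only on the potion's current state, not the full recipe history, this fits the Markov property naturally.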

I then took the Markov chain off on my own to create my own prototype for this phase. Below you can see this idea explained in more detail, along with the maths behind it and a coded, playable prototype.

About:

Markov chains are statistical models used to describe real-life processes as movements between states, where the next state depends only on the current one. In this workshop, we were taught how to create Markov chains and how they can be applied in games design.

James provided us with an example of calculating how customers may transition from one product to another, and how transaction statistics and market share may change over time. In this example, Markov chains are used to work out the percentage of customers each product would hold as time goes on. As an exercise, we calculated the transition of customers over the first three months of sales for this hypothetical example.

To complete a Markov chain, we need two pieces of data: the initial state matrix, which is the starting data (i.e. the market share of both products), and the transition matrix, which holds the values predicting how the state will change (i.e. how the market share may shift each month). Markov chains relate to prior workshops on probability and matrices; since both are used within Markov chains, I thought it was only necessary to mention them here.
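The market-share exercise described above can be sketched in a few lines: start from an initial state matrix, then repeatedly multiply by the transition matrix, once per month. The shares and switching rates below are hypothetical stand-ins, not the figures from the lecture:

```python
def step(state, transition):
    """One Markov step: multiply a 1xN state row vector by an NxN transition matrix."""
    n = len(state)
    return [sum(state[i] * transition[i][j] for i in range(n)) for j in range(n)]

# Initial state matrix: market share of product A and product B (hypothetical).
share = [0.5, 0.5]

# Transition matrix: row i says where product i's customers go each month.
# e.g. 90% of A's customers stay with A and 10% switch to B (hypothetical).
monthly = [
    [0.9, 0.1],
    [0.2, 0.8],
]

# Work out the shares for the first three months, as in the exercise.
history = [share]
for month in range(3):
    share = step(share, monthly)
    history.append(share)
```

Each row of the transition matrix sums to 1, so the total market share stays at 100% after every step; with these placeholder rates, product A's share grows month on month.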

Below you can find the slides that we were shown during the lecture. 