Sunday, September 11, 2011

Supply Chain Management/Operations Management - Decision Tree

A decision model represents an evaluation situation. There are three types of nodes in a decision tree:
• Decision nodes, represented by squares, are variables or actions that the decision maker controls.
• Chance event nodes, represented by circles, are variables or events that cannot be controlled by the decision maker.
• Terminal or end nodes, represented in a decision tree diagram by unconnected branches, are endpoints where outcome values are attached.
By convention, the tree is drawn chronologically from left to right and branches out, like a tree lying on its side. Decision tree analysis language includes colourful biological analogies. The starting node (usually a decision node) is called the root, and the radial lines are called branches (sometimes twigs). The terminal nodes are sometimes called leaves, and an overly complex tree earns the label of a bushy mess.
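To make the three node types concrete, here is one way a small tree might be encoded in Python; the nested-tuple convention and all labels are my own illustrative assumptions, not a standard:

```python
# Hypothetical encoding of the three node types:
#   ("decision", [(label, branch_cost, subtree), ...])  -> squares
#   ("chance",   [(label, probability, subtree), ...])  -> circles
#   ("terminal", outcome_value)                         -> leaves
tree = (
    "decision", [
        ("launch product", 0, ("chance", [
            ("high demand", 0.6, ("terminal", 500)),
            ("low demand", 0.4, ("terminal", -200)),
        ])),
        ("do nothing", 0, ("terminal", 0)),
    ])
```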

Tree Annotations:

We label the decision tree diagram with these numbers:
• Probabilities for each outcome branch emanating from chance nodes.
• Outcome values, representing present values (PVs) of the resulting cash flow stream, discounted to the date of the root decision. Sometimes, we may choose to place benefits and costs along branches as they are realized. More commonly, terminal node values represent the entire outcome.
• Node expected values, calculated during the process of solving the tree. When solving with a utility function as the risk policy, I like to show expected monetary value (EMV), expected utility (EU), and certainty equivalent (CE) for each node.
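To show what those three numbers look like at a single chance node, here is a minimal sketch assuming an exponential utility function u(x) = 1 - exp(-x/R) with risk tolerance R as the risk policy (a common choice, though not the only one; the branch probabilities and payoffs are invented):

```python
import math

# Hypothetical chance node: (probability, payoff) pairs.
branches = [(0.6, 500.0), (0.4, -200.0)]
R = 1000.0  # assumed risk tolerance for u(x) = 1 - exp(-x / R)

# Expected monetary value: probability-weighted average of the payoffs.
emv = sum(p * x for p, x in branches)

# Expected utility under the exponential risk policy.
eu = sum(p * (1 - math.exp(-x / R)) for p, x in branches)

# Certainty equivalent: invert the utility function, CE = -R * ln(1 - EU).
ce = -R * math.log(1 - eu)

print(f"EMV = {emv:.1f}, EU = {eu:.4f}, CE = {ce:.1f}")  # CE < EMV: risk-averse
```

Labelling each node with all three makes it easy to see how much the risk policy is discounting the straight EMV.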

Tree Calculations

Solving a decision tree is a simple back-calculation process. This is sometimes called back-solving or folding back the tree. Starting from the terminal nodes at the right, moving left, we solve for the value of each node and label it with its EV. These EVs are EMVs or EV costs when we measure value in dollars or other currency.
Here are the three simple rules for solving a tree:
• At a chance node, calculate its EMV (or EV cost) from the probabilities and values of its branches. This becomes the value of the node and the branch leading to it.
• Replace a decision node with the value of its best alternative. We are applying the EMV decision rule.
• If a cost value lies along a branch, recognize that cost in passing from right to left. That is, subtract the cost when solving a tree to maximize EMV; think of it as a "toll" to be paid when traversing the tree. (Sometimes a tree is designed to solve for EV costs, in which case a cost along a branch is added.)
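These three rules translate directly into a short recursive fold-back. The sketch below is illustrative only (the tree encoding, names, and numbers are my own assumptions, not part of any standard library): chance nodes return their EMV, decision nodes return the value of their best alternative, and costs along branches are subtracted as tolls.

```python
def solve(node):
    """Fold back the tree from right to left, returning each node's EMV."""
    kind = node[0]
    if kind == "terminal":
        return node[1]
    if kind == "chance":
        # Rule 1: EMV = probability-weighted average of the branch values.
        return sum(p * solve(sub) for _label, p, sub in node[1])
    if kind == "decision":
        # Rules 2 and 3: take the best alternative, subtracting any
        # cost ("toll") that lies along its branch.
        return max(solve(sub) - cost for _label, cost, sub in node[1])
    raise ValueError(f"unknown node type: {kind}")

# Illustrative make-or-buy tree using the nested-tuple encoding from earlier.
tree = (
    "decision", [
        ("make", 100, ("chance", [            # 100 = tooling cost on this branch
            ("works", 0.7, ("terminal", 400)),
            ("fails", 0.3, ("terminal", -150)),
        ])),
        ("buy", 0, ("terminal", 120)),
        ("do nothing", 0, ("terminal", 0)),
    ])

print(solve(tree))  # 0.7*400 + 0.3*(-150) - 100 = 135, so "make" wins
```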
The decision tree is built so that each possibility follows a separate path, with the final expected values emerging at the end of the fold-back process. The drawing below is a highly simplified version of a decision tree chart.


The interesting aspect of this type of decision process is that there is always the option to do nothing alongside the other options. If none of the options looks likely to produce the desired result, it may be better to do nothing at all or to rethink the problem from a different point of view.
One critical aspect of decision tree analysis is that you are required to estimate the anticipated probability that an event will occur; these estimates are represented by the numbers next to each “chance of success” phrase on the diagram. How well you make these estimates will render the decision tree analysis either accurate or flawed. This is why it may be necessary to run the analysis multiple times, to determine what range of estimates still delivers your desired result (a sketch of this kind of sweep appears below).
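One way to run the analysis multiple times, as suggested above, is to sweep the probability estimate across its plausible range and watch where the preferred decision flips. A minimal sketch, with invented payoffs:

```python
# Sweep the estimated chance of success and note where the decision flips.
# Payoffs are assumptions for illustration: success pays 400, failure
# costs 150, and the "do nothing" alternative is worth 0.
success_value, failure_value, do_nothing = 400.0, -150.0, 0.0

for p in [i / 10 for i in range(11)]:
    emv = p * success_value + (1 - p) * failure_value
    choice = "act" if emv > do_nothing else "do nothing"
    print(f"P(success) = {p:.1f}: EMV = {emv:7.1f} -> {choice}")

# Break-even probability: p*400 + (1-p)*(-150) = 0  =>  p = 150/550 ≈ 0.27,
# so acting is preferred as long as you believe success is likelier than ~27%.
```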

Basic concepts for the decision tree:

• Probability. Outside of throwing dice or some other activity in which the probability of a certain outcome can be calculated with fair accuracy, the probability of any given event occurring is a subjective estimate; the decision still boils down to your own experience and beliefs.
• Utility. This concept helps us compare unlike elements in a decision by placing them on a common reference scale (i.e., playing with your dog, buying a DVD, or mopping the kitchen floor might all be expressed in terms of time or dollars).
• Expected Value. This is the result the decision would deliver on average if the situation were repeated numerous times. Expected value (EV) is calculated by multiplying each possible outcome's value by its probability and summing over all outcomes: EV = (p1 × v1) + (p2 × v2) + … + (pn × vn). For example, a 60% chance of gaining $100 and a 40% chance of losing $50 gives EV = 0.6 × 100 + 0.4 × (-50) = $40.
• Decision Forks. There are two types of forks created in a decision tree analysis. The first is a decision fork, which is used to present the various alternatives in the decision tree (i.e., we could make or buy the product). The second is a chance fork, which presents the uncertain events that lie outside the decision maker's control (i.e., demand turns out to be high or low).
• Value of Information. This quantifies the value of obtaining additional information for a decision analysis, even if the quality of the information is variable (a worked example appears after this list).
• Sensitivity Analysis. Since the inputs to the decision tree are never known with absolute certainty, sensitivity analysis can be used to bolster your confidence in your estimates.
• Conditional Probability. This refers to the probability of one event occurring after we know that another event has occurred. If you live in a busy city, you might hear ambulance sirens randomly throughout the day. However, the probability that you will hear an ambulance siren is usually less than the probability that you will hear an ambulance siren after witnessing a five-car collision at a busy intersection.
• State Variables. These allow the user to construct complex decision tree utility functions in which there are numerous inputs.
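To make the value-of-information idea concrete, one standard measure is the expected value of perfect information (EVPI): the expected value if a perfect forecast were available before deciding, minus the best EMV without it. A minimal sketch, with invented probabilities and payoffs:

```python
# Two states of demand with assumed prior probabilities,
# and a payoff table (illustrative numbers only).
p_high, p_low = 0.6, 0.4
payoffs = {
    "launch":     {"high": 500.0, "low": -200.0},
    "do nothing": {"high": 0.0,   "low": 0.0},
}

# Best EMV without extra information: pick the single best alternative.
emv_no_info = max(
    p_high * row["high"] + p_low * row["low"] for row in payoffs.values()
)

# With perfect information we learn the state first, then choose the best
# decision for that state; weight the results by the prior probabilities.
ev_perfect_info = (
    p_high * max(row["high"] for row in payoffs.values())
    + p_low * max(row["low"] for row in payoffs.values())
)

evpi = ev_perfect_info - emv_no_info  # 300 - 220 = 80 here
print(f"EMV = {emv_no_info:.0f}, EV|PI = {ev_perfect_info:.0f}, EVPI = {evpi:.0f}")
```

Under these assumptions a perfect demand forecast would be worth at most 80, so paying more than that for market research could not make sense.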
