Dynamic System Behavior
Markov analysis is used to study dynamic system behavior. You create a Markov diagram, also known as a state transition diagram, to represent the possible states of the system and the rates at which it moves between them. Markov diagrams are useful for modeling the behavior of a system when there are common cause failures, imperfect coverage, complex repair policies, degradation, shock effects, induced failures, dependent failures, and other sequence-dependent events.
Because other reliability techniques assume that the configuration of the system is both simple and static, they cannot analyze dependent random events. For example, while block diagrams and fault trees provide concise representations of the failure modes of a system, they:
Limit system components to only operational or failed states.
Assume that the system configuration remains the same during the mission.
Assume that the temporal order of component failures does not affect system reliability.
The basic assumption in a Markov model is that state transitions are memoryless: the transition rates depend only on the present state, not on how the system arrived there. In reliability analysis, this condition is satisfied by exponential failure and repair distributions, which imply constant failure and repair rates.
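As an illustration of the memoryless assumption, consider the simplest repairable Markov model: two states (operational and failed) with a constant failure rate and a constant repair rate. The rate values below are purely illustrative, and the closed-form availability expression is the standard solution of the Kolmogorov forward equations for this model; this is a minimal sketch, not the tool's own computation.

```python
import math

# Hypothetical two-state repairable model (illustrative rates, per hour):
# state 0 = operational, state 1 = failed.
lam = 0.001  # constant failure rate (exponential failure distribution)
mu = 0.1     # constant repair rate (exponential repair distribution)

def availability(t):
    """Point availability A(t) for the two-state model.

    Closed-form solution of the Kolmogorov forward equations with
    constant rates: A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu)*t).
    """
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# As t grows, A(t) settles to the steady-state availability mu/(lam+mu).
steady_state = mu / (lam + mu)

print(availability(0.0))   # 1.0 — the system starts operational
print(steady_state)
```

Because the rates are constant, the probability of failing in the next instant is the same regardless of how long the component has already been operating; that is exactly the memoryless property the Markov model requires.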
Markov models can represent various system states and component failure dependencies (or failure sequences). They can be used as completely independent analyses, or they can be linked to FTA events, providing a hybrid model for accurately capturing the dynamic behavior of the system.
The Markov engine analyzes both the dynamic (time-dependent) and steady-state behavior of a system, and calculates its capacity, reliability, and maintainability indices. The Markov module lets you develop and modify Markov diagrams interactively and calculates results quickly, so you can easily investigate the effects of changes in model parameters.
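To show what a steady-state calculation involves, the sketch below solves a hypothetical three-state degradation model (fully operational, degraded, failed) by finding the probability vector pi that satisfies the balance equations pi Q = 0 together with the normalization sum(pi) = 1, where Q is the generator matrix of transition rates. The states, rates, and model structure are assumptions for illustration only, not the engine's internal method.

```python
import numpy as np

# Hypothetical three-state degradation model (illustrative rates, per hour):
# state 0 = fully operational, state 1 = degraded, state 2 = failed.
lam1 = 0.002  # rate: operational -> degraded
lam2 = 0.01   # rate: degraded -> failed
mu = 0.05     # rate: failed -> fully operational (repair)

# Generator matrix Q: Q[i, j] is the rate from state i to state j;
# each diagonal entry makes its row sum to zero.
Q = np.array([
    [-lam1,  lam1,   0.0],
    [  0.0, -lam2,  lam2],
    [   mu,   0.0,   -mu],
])

# Steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1.
# Replace one (redundant) balance equation with the normalization row.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

# The system is considered available in states 0 and 1.
steady_availability = pi[0] + pi[1]
print(pi, steady_availability)
```

The same generator matrix also drives the dynamic (transient) solution, since the state probabilities evolve according to dp/dt = p Q; a steady-state analysis simply finds where that derivative vanishes.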