System analysis allows developers to carry out objective, quantitative assessments of systems in order to select and/or update the most efficient system architecture and to generate derived engineering data. During engineering, assessments should be performed every time technical choices or decisions are made, to determine compliance with system requirements.
System analysis provides a rigorous approach to technical decision-making. It is used to perform trade-off studies and includes modeling and simulation, cost analysis, technical risk analysis, and effectiveness analysis.
Purpose and Principles of the Approach
The system analysis process is used to: (1) provide a rigorous basis for technical decision-making, resolution of requirement conflicts, and assessment of alternative physical solutions (system elements and physical architectures); (2) determine progress in satisfying system requirements and derived requirements; (3) support risk management; and (4) ensure that decisions are made only after evaluating the cost, schedule, performance, and risk effects on the engineering or re-engineering of a system (ANSI/EIA 1998).
This process is also called the decision analysis process by NASA (2007, 1-360) and is used to help evaluate technical issues, alternatives, and their uncertainties to support decision-making (see Decision Management for more details). System analysis supports other system definition processes:
• The stakeholder requirements definition and system requirements definition processes use system analysis to resolve conflicts among the set of requirements, in particular those related to costs, technical risks, and effectiveness (performance, operational conditions, and constraints). System requirements that carry high risks, or that would drive different architectures, are identified and discussed.
• The Architectural Design: Logical and Architectural Design: Physical processes use it to assess the characteristics or design properties of candidate logical and physical architectures, providing arguments for selecting the most efficient one in terms of costs, technical risks, and effectiveness (e.g., performance, dependability, human factors).

Like any system definition process, the system analysis process is iterative. Each operation is carried out several times, and each step improves the precision of the analysis.
Activities of the Process
Major activities and tasks performed within this process include:
• Planning the trade-off studies:
• Determine the number of candidate solutions to analyze, the methods and procedures to be used, the expected results (examples of objects to be selected: behavioral architecture/scenario, physical architecture, system element, etc.), and the justification items.
• Schedule the analyses according to the availability of models, engineering data (system requirements, design properties), skilled personnel, and procedures.
• Define the selection criteria model:
• Select the assessment criteria from non-functional requirements (performance, operational conditions, constraints, etc.) and/or from design properties.
• Sort the assessment criteria in order of relative importance.
• Establish a comparison scale for each assessment criterion, and weight each criterion according to its importance relative to the others (a weighted-scoring sketch follows this list).
• Identify candidate solutions, related models, and data.
• Assess candidate solutions using previously defined methods or procedures:
• Carry out cost analysis, technical risk analysis, and effectiveness analysis, placing every candidate solution on the comparison scale of each assessment criterion.
• Combine the results for every candidate solution into an overall assessment score.
• Provide results to the calling process: assessment criteria, comparison scales, solutions' scores, the selected solution, and possibly recommendations and related arguments.
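The scoring activities above can be made concrete with a small sketch. The following is a minimal weighted-sum trade-off scoring example in Python; the criteria, weights, comparison scale, and candidate ratings are hypothetical illustrations chosen for this sketch, not values prescribed by the process.

```python
# Minimal weighted-sum trade-off scoring sketch.
# Criteria, weights, scale, and candidate ratings are hypothetical.

# Assessment criteria with weights reflecting relative importance (sum to 1.0).
criteria = {"cost": 0.4, "technical_risk": 0.35, "performance": 0.25}

# Candidate solutions rated on a common 1-10 comparison scale per criterion
# (10 = best, e.g., lowest cost, lowest risk, highest performance).
candidates = {
    "architecture_A": {"cost": 7, "technical_risk": 5, "performance": 9},
    "architecture_B": {"cost": 9, "technical_risk": 6, "performance": 6},
    "architecture_C": {"cost": 4, "technical_risk": 8, "performance": 8},
}

def assessment_score(ratings):
    """Weighted sum of one candidate's ratings over all criteria."""
    return sum(weight * ratings[crit] for crit, weight in criteria.items())

# Score every candidate and report them best-first to the calling process.
scores = {name: assessment_score(ratings) for name, ratings in candidates.items()}
for name, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

In practice the weights and scales come from the selection criteria model defined earlier, and the computed scores feed the results reported to the calling process; the highest score supports, but does not replace, engineering judgment.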
Checking Correctness of System Analysis
The main items to be checked within system analysis, in order to obtain validated arguments, are:
• Relevance of the models and data in the context of use of the system,
• Relevance of assessment criteria related to the context of use of the system,
• Reproducibility of simulation results and of calculations,
• Precision level of comparison scales,
• Confidence of estimates, and
• Sensitivity of solutions' scores to the assessment criteria weights (a sensitivity-check sketch follows below).
See Ring, Eisner, and Maier (2010) for additional perspective.
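The last check item can be performed mechanically: perturb each criterion weight and observe whether the best-first ranking of candidate solutions changes. A minimal sketch in Python, reusing the hypothetical data from the earlier scoring example:

```python
# Sensitivity-check sketch: perturb each criterion weight and observe
# whether the best-first ranking of candidates changes.
# Criteria, weights, and ratings are the same hypothetical data as above.

criteria = {"cost": 0.4, "technical_risk": 0.35, "performance": 0.25}
candidates = {
    "architecture_A": {"cost": 7, "technical_risk": 5, "performance": 9},
    "architecture_B": {"cost": 9, "technical_risk": 6, "performance": 6},
    "architecture_C": {"cost": 4, "technical_risk": 8, "performance": 8},
}

def ranking(weights):
    """Candidates ordered best-first under the given (renormalized) weights."""
    total = sum(weights.values())
    norm = {crit: w / total for crit, w in weights.items()}
    def score(name):
        return sum(norm[crit] * candidates[name][crit] for crit in norm)
    return sorted(candidates, key=score, reverse=True)

baseline = ranking(criteria)
robust = True
for crit in criteria:
    for delta in (-0.10, +0.10):  # shift one weight by +/- 0.10
        perturbed = dict(criteria)
        perturbed[crit] = max(0.0, perturbed[crit] + delta)
        if ranking(perturbed) != baseline:
            robust = False
            print(f"Ranking changes when '{crit}' weight shifts by {delta:+.2f}")

print("Baseline ranking:", baseline)
print("Robust to +/-0.10 weight shifts:", robust)
```

If the ranking survives such perturbations, the recommendation is robust; if it flips, the weights deserve closer scrutiny before a selection argument is considered validated.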
Methods and Modeling Techniques
• General usage of models: Various types of models can be used in the context of system analysis:
• Physical models are scale models allowing simulation of physical phenomena. They are specific to each discipline; associated tools include mock-ups, vibration tables, test benches, prototypes, decompression chambers, wind tunnels, etc.
• Representation models are mainly used to simulate the behavior of a system; examples include enhanced functional flow block diagrams (eFFBDs), statecharts, and state machine diagrams (the latter based on the Systems Modeling Language (SysML)).
• Analytical models are mainly used to establish the values of estimates. Both deterministic models and probabilistic models (also known as stochastic models) are analytical in nature. Analytical models use equations or diagrams to approximate the real operation of the system. They can range from very simple (an addition) to extremely complicated (a probabilistic distribution with several variables); a probabilistic sketch appears at the end of this section.
• Use the right models depending on the project progress:
• At the beginning of the project, the first studies use simple tools that allow rough approximations, which have the advantage of not requiring too much time and effort. These approximations are often sufficient to eliminate unrealistic or out-of-scope candidate solutions.
• As the project progresses, the precision of the data must improve in order to compare the candidate solutions still in competition. The work is more complicated when the level of innovation is high.
• A systems engineer alone cannot model a complex system; they must be supported by skilled people from the different disciplines involved.
• Specialist expertise: When the values of assessment criteria cannot be given in an objective or precise way, or when the subjective aspect dominates, specialists can be asked for their expert judgment. The estimation proceeds in four steps:
1. Select interviewees who are qualified in the field under consideration.
2. Draft a questionnaire; a precise questionnaire makes analysis easy, but a questionnaire that is too closed risks neglecting significant points.
3. Interview a limited number of specialists with the questionnaire, including an in-depth discussion to get precise opinions.
4. Analyze the data with several different people and compare their impressions until an agreement on a classification of assessment criteria and/or candidate solutions is reached.
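As an illustration of the probabilistic (stochastic) analytical models mentioned above, the following Python sketch uses a simple Monte Carlo simulation to turn uncertain inputs into a cost estimate; the cost elements and their distributions are hypothetical examples, not a prescribed cost model. Note the fixed random seed, which supports the reproducibility check listed earlier.

```python
import random
import statistics

# Monte Carlo sketch of a probabilistic (stochastic) analytical model.
# The cost elements and their distributions are hypothetical examples.

random.seed(42)  # fixed seed so simulation results are reproducible

def sample_total_cost():
    """One random draw of the total system cost (arbitrary cost units)."""
    development = random.gauss(100.0, 15.0)           # normal: mean 100, sd 15
    production = random.triangular(40.0, 90.0, 60.0)  # low, high, mode
    operations = random.uniform(20.0, 35.0)           # flat uncertainty
    return development + production + operations

samples = sorted(sample_total_cost() for _ in range(10_000))

mean_cost = statistics.fmean(samples)
p90_cost = samples[int(0.90 * len(samples))]  # 90th-percentile estimate
print(f"Mean cost estimate: {mean_cost:.1f}")
print(f"90% of simulated outcomes fall below: {p90_cost:.1f}")
```

Early in a project, a deterministic point estimate may suffice; as candidate solutions converge, a stochastic model like this one adds a confidence indication to the estimate at modest extra cost.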