A model is an abstraction: it consists of the assumptions we make about a system.
In creating a model we remove unnecessary detail and focus on the elements of the system that are important from the point of view of the desired performance measures.
Because assumptions about the system must be made during model construction, we must check that the resulting model is a good fit for the real system.
The process of building a robust model requires two parts:
- design and verification of the model
- validation of the model
Verification does not imply validation or vice versa.
Validation is sometimes blended with verification, especially when measurement data for the system is available.
If a comparison of the system's measurements and the model's results indicates that the model's results are close to those of the system, the implemented model is assumed to be both a verified implementation of the assumptions and a valid representation of the system.
Validation
That assumption can be a disaster for complex systems. We need to ask whether the assumptions are correct and adequate when choosing a model.
Validation can be difficult but is vital. If the model is complex, validation should be a continuous process, with the model being updated as a result of feedback from its operation.
Validation means demonstrating that the model is a reasonable representation of the actual system, reproducing its behaviour accurately enough to satisfy the analysis objectives. The approach varies according to the system and the model chosen. A model may represent different parts of the system at different levels of abstraction and therefore have different levels of validation.
Most models have three different aspects to consider during validation:
- assumptions
- input parameter values and distributions
- output values and conclusions
Initial validation tests concentrate on the model output; only if this suggests a problem is more detailed validation undertaken.
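A minimal sketch of such an output-level check is below. The measure names, figures and the 5% tolerance are illustrative assumptions, not taken from the notes: each model output is compared against the corresponding system measurement, and any measure outside the tolerance is flagged for more detailed validation.

```python
def within_tolerance(model_value: float, system_value: float,
                     rel_tol: float = 0.05) -> bool:
    """True if the model output is within rel_tol of the measured value."""
    return abs(model_value - system_value) <= rel_tol * abs(system_value)


def initial_output_validation(model_outputs: dict, system_measurements: dict,
                              rel_tol: float = 0.05) -> list:
    """Compare each model output against the corresponding measurement.

    Returns the measures that fall outside the tolerance; a non-empty list
    suggests a problem and triggers more detailed validation.
    """
    return [name for name, measured in system_measurements.items()
            if not within_tolerance(model_outputs[name], measured, rel_tol)]


# Example usage with made-up figures:
suspect = initial_output_validation(
    model_outputs={"mean_response_time": 2.1, "throughput": 48.0},
    system_measurements={"mean_response_time": 2.0, "throughput": 51.5},
)
print(suspect)  # ['throughput'] -> investigate this measure in more detail
```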
There are three approaches to model validation; any combination of them can be applied:
- Expert intuition, as epitomised in AI design
- Real system measurements, as for aircraft, where full-size models are used
- Theoretical results and analysis, as in the case of the Large Hadron Collider
Performance measures extracted from the model will only have a bearing on the real system if the model is a good representation of that system.
The goodness of a model is subjective: it is our version of perceived reality.
Example: if the ultimate goal is to measure the system's performance, the criterion for judging the goodness of fit of the model is how accurately its readings correspond to the readings that would be obtained from the real system, e.g. stresses on a bridge: does the model collapse at the same point the real bridge would?
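A hypothetical sketch of the bridge example follows. The sensor readings, loads and the 10% tolerance are invented for illustration: goodness of fit is judged by how closely the model's stress readings track the measured ones and by whether the model collapses at roughly the same load as the real bridge.

```python
def max_relative_error(model_readings: dict, real_readings: dict) -> float:
    """Largest relative error between model and measured stress readings
    taken at the same sensor locations."""
    return max(abs(model_readings[s] - real_readings[s]) / abs(real_readings[s])
               for s in real_readings)


def bridge_model_fits(model_readings: dict, real_readings: dict,
                      model_collapse_load: float, real_collapse_load: float,
                      rel_tol: float = 0.10) -> bool:
    """True if stresses agree within rel_tol and the predicted collapse load
    is within rel_tol of the load at which the real bridge fails."""
    stresses_ok = max_relative_error(model_readings, real_readings) <= rel_tol
    collapse_ok = (abs(model_collapse_load - real_collapse_load)
                   <= rel_tol * real_collapse_load)
    return stresses_ok and collapse_ok


# Example usage with made-up figures (stresses in MPa, loads in kN):
print(bridge_model_fits(
    model_readings={"midspan": 41.0, "pier": 63.0},
    real_readings={"midspan": 40.0, "pier": 60.0},
    model_collapse_load=930.0,
    real_collapse_load=1000.0,
))  # True: the model behaves closely enough to the real bridge
```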
Verification
Verification is like debugging in software development: it ensures that the model does what it is supposed to do.
Models, such as simulation models, are often computer programs.
The techniques used to develop, debug and maintain computer programs are therefore useful for these models, for example general software techniques such as system walkthroughs and consistency testing.
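As a sketch of consistency testing applied to a simulation model, the toy queue simulation and the conservation check below are invented for illustration: the test verifies the implementation (every arriving customer is either served or still waiting), not whether the model matches any real system.

```python
import random
import unittest


def simulate_queue(n_arrivals: int, service_prob: float, seed: int = 0):
    """Toy discrete-step queue: each step one customer arrives and, with
    probability service_prob, one waiting customer is served."""
    random.seed(seed)
    waiting, served = 0, 0
    for _ in range(n_arrivals):
        waiting += 1                        # one arrival per step
        if random.random() < service_prob:  # attempt one service
            waiting -= 1
            served += 1
    return {"arrivals": n_arrivals, "served": served, "waiting": waiting}


class ConsistencyTests(unittest.TestCase):
    def test_customer_conservation(self):
        # Every customer that arrived is either served or still waiting.
        stats = simulate_queue(n_arrivals=1000, service_prob=0.8)
        self.assertEqual(stats["arrivals"],
                         stats["served"] + stats["waiting"])


if __name__ == "__main__":
    unittest.main()
```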