Posted by: Andy Tickner | 2nd February 2011

Proof of concept evaluation

Austro Control and KEMEA have evaluated the proof-of-concept software produced in SERSCIS. The evaluation was performed using a simulation of Airport Collaborative Decision Making (A-CDM), a critical ICT infrastructure, as the use case. A-CDM deals with the turn-around of an aircraft at an airport and has a direct influence on the performance of the European air traffic network. It is modelled by simulating the airside workflow, which involves actors ranging from Eurocontrol’s central flow-management unit to individual ramp service providers such as aircraft catering.

The evaluation applied a fault-free baseline case as well as three degraded or fault scenarios. The faults comprise:

  • a reduction in the performance of a ramp service provider that without mitigation would lead to substantial delays in aircraft turn-around,
  • a passenger no-show, which requires the already loaded baggage to be searched and off-loaded from the aircraft, and
  • delays in the communication via the central database (called A-CDM Information Sharing Platform) caused by DoS attacks.

These three fault scenarios cover both physical and ICT security threats.
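The three fault scenarios can be pictured as parameters injected into a turn-around simulation. The following sketch is purely illustrative: all function names, rates and penalty figures are invented assumptions, not the SERSCIS simulator's actual model or API.

```python
# Hypothetical sketch: injecting the three fault scenarios into a
# simplified turn-around model. All names and numbers are illustrative
# assumptions, not taken from the SERSCIS simulator.

def simulate_turnaround(ramp_service_rate=1.0, passenger_no_show=False,
                        db_delay_s=0.0):
    """Return a simulated turn-around time (minutes) under the given faults."""
    base = 45.0  # assumed nominal turn-around time
    # Scenario 1: a degraded ramp service provider slows its share of the work
    ramp_penalty = base * 0.4 * (1.0 / ramp_service_rate - 1.0)
    # Scenario 2: a passenger no-show forces a baggage search and off-load
    baggage_penalty = 20.0 if passenger_no_show else 0.0
    # Scenario 3: DoS-induced database latency delays every coordination
    # message (assume ten messages per turn-around)
    comms_penalty = db_delay_s / 60.0 * 10
    return base + ramp_penalty + baggage_penalty + comms_penalty

print(simulate_turnaround())                        # fault-free baseline
print(simulate_turnaround(ramp_service_rate=0.5))   # slow ramp service
print(simulate_turnaround(passenger_no_show=True))  # baggage off-load
print(simulate_turnaround(db_delay_s=120))          # DoS delays
```

Such a parameterisation is one simple way to run the fault-free case and each degraded scenario through the same workflow model.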

The behaviour of the system and the capabilities of the SERSCIS tools were assessed using Key Performance Indicators (KPIs). These KPIs are derived directly from the business objectives of the stakeholders in the scenario. Using a set of KPIs, the behaviour of the system can be monitored effectively and efficiently: failures of individual services can be detected and, where mitigation strategies are implemented, their effectiveness can be observed as well.
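The KPI-based detection described above can be sketched as a simple threshold check. The KPI names and threshold values below are invented for illustration; they are not the project's actual indicators.

```python
# Illustrative KPI threshold check; indicator names and limits are
# assumptions for this sketch, not the SERSCIS KPI set.

def check_kpis(measurements, thresholds):
    """Compare measured KPI values against business-derived thresholds,
    returning the set of KPIs that indicate a degraded service."""
    return {name for name, value in measurements.items()
            if value > thresholds.get(name, float("inf"))}

# Hypothetical thresholds derived from business objectives
thresholds = {"turnaround_delay_min": 15, "db_response_ms": 500}

# Nominal operation: no KPI breached
print(check_kpis({"turnaround_delay_min": 3, "db_response_ms": 120},
                 thresholds))   # set()

# DoS attack on the information-sharing platform shows up in its KPI
breached = check_kpis({"turnaround_delay_min": 4, "db_response_ms": 2400},
                      thresholds)
print(breached)   # {'db_response_ms'}
```

Because each indicator maps back to a stakeholder objective, a breached KPI points directly at the business impact, which is what allows mitigation effectiveness to be observed as well.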

The evaluation itself consisted of two distinct sets of experiments. The first set, conducted by KEMEA, assessed the modelling tools offline; it included examples of causes for the above faults, along with some additional cases. In the second set, Austro Control used the above-mentioned simulation to assess the online tools and mechanisms. Both aspects of the evaluation yielded a positive result. The offline tools were successfully used to model the use case, threat scenarios and mitigation actions, while the runtime tools were applied to assess the threat-mitigation strategy by adaptation. For the runtime part, KPIs allowed both physical and ICT dependability issues to be detected, and both the problems and the mitigation strategies to be related directly to their impact on business-level objectives.


Categories