Safety is not a new requirement
The software industry is perhaps unique in its custom of releasing products to the market even when they are likely to contain residual bugs. Users have (grudgingly) come to accept that complex programs apparently cannot be made bug-free, and have adjusted to the occasional system failure as a fact of modern life. The situation is entirely different for safety-critical software systems, and for good reason: a failure of such a system could harm or even kill humans. Such a system must therefore be shown to be reliable before it can be allowed to control, say, an airplane, a chemical plant, or a vehicle.
A number of internationally accepted standards exist for the certification of safety-critical software (e.g. DO-178B, IEC 61508, EN 50128). As different as these standards may seem, they share some common principles: the effort necessary to obtain certification for a given program is generally high, and it depends on two parameters:
Code complexity: The effort of certifying a program is roughly proportional to the amount of code to be examined. This comprises not only the code of the program itself, but also that of the runtime environment (i.e. operating system, libraries, etc.) on which the program relies.
Criticality level: The safety standards assign levels of criticality to applications according to the worst potential damage that could result from a malfunction. Although the standards use different nomenclatures, the general concept is similar in all of them: the higher the level, the more rigorous the testing, or even formal verification, that must be performed.
In many areas of safety-critical applications, multiple independent applications are executed on a common machine. Besides helping to reduce hardware complexity (thus increasing reliability), this also reduces cost. However, such a configuration creates new potential for problems: without special precautions, the programs can disturb each other, so each of them has to trust all the others to behave correctly. Consequently, if the functions have different criticality levels, the highest of those levels now implicitly applies to all software in the system.
Safety has to cope with new requirements
Safety-critical application programs come in various levels of both functional complexity and criticality. Increasingly, there is a desire to consolidate disparate applications on a single hardware platform for the benefit of efficiency, stability, and ease of maintenance. If several programs with different criticality levels are to coexist in one machine, the underlying OS must ensure that they remain strictly independent and are therefore capable of achieving safety certification independently. PikeOS combines resource partitioning and virtualization to make coexisting applications certifiable independently and at different criticality levels.
Each guest operating system virtual machine (VM) has its own separate set of resources, and programs hosted by one VM are independent of those hosted by another. This allows legacy programs such as Linux applications to coexist with safety-critical programs in one machine. Unlike other popular virtualization systems, PikeOS was purpose-designed for embedded control systems; it therefore not only separates spatial resources, but also strictly separates the temporal resources of its guest OSes. This allows hard real-time systems to be virtualized while retaining their timing properties. The separation of resources is established by a minimal amount of trusted code, so the system is well suited to safety-critical projects requiring certification in accordance with the prevailing standards for software safety.
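The effect of strict temporal separation can be illustrated with a small simulation. This is not PikeOS code; the partition names and window lengths are invented. The sketch models a fixed cyclic schedule in which each guest receives a time window inside a repeating major frame: a guest that demands more processor time than its window allows is simply cut off at the window boundary, so its neighbours still start exactly on schedule.

```python
# Illustrative sketch of temporal partitioning (not the PikeOS API).
# A repeating "major frame" is divided into fixed time windows, one
# per partition. A partition that wants more CPU than its window
# allows is preempted at the boundary; the next window starts on time.

MAJOR_FRAME = [("rt_control", 3), ("linux_guest", 5), ("monitor", 2)]  # (partition, ticks)

def run_major_frame(demand):
    """Simulate one major frame. `demand` maps partition -> ticks it
    would like to run. Returns the ticks actually granted and the
    start tick of every window, which is fixed regardless of demand."""
    granted, starts, now = {}, {}, 0
    for partition, window in MAJOR_FRAME:
        starts[partition] = now                      # start time is schedule-driven
        granted[partition] = min(demand.get(partition, 0), window)
        now += window                                # advance by the FULL window
    return granted, starts

# The Linux guest misbehaves and demands 50 ticks of a 5-tick window ...
granted, starts = run_major_frame({"rt_control": 3, "linux_guest": 50, "monitor": 2})
print(granted)   # linux_guest is capped at its 5-tick window
print(starts)    # monitor still begins at tick 8, exactly on schedule
```

The point of the sketch is that window start times are derived from the static schedule alone, never from the guests' behaviour, which is what allows hard real-time partitions to keep their timing properties next to a busy general-purpose guest.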
Safety in avionics
One example of resource partitioning and virtualization in safety-critical systems is the use of PikeOS by Airbus for its next-generation aircraft. Airbus is using PikeOS in certified equipment to be deployed on the A350 XWB aircraft.
Among the many requirements related to this new Airbus architecture, the following were particularly important:
- a multi-partitioned system that provides a POSIX® interface
- the ability to develop certifiably safe software while also allowing high flexibility including the reuse of existing code
- the ability to build easily upon the existing technology to provide a secure storage device and network connection access
- a flexible platform that allows interactive display functionality
The two key aspects of the PikeOS architecture that enable mixed-certification platforms are resource partitioning and virtualization. PikeOS partitions resources both spatially and temporally. Spatial partitioning provides separate resource pools for user memory and kernel memory. Temporal partitioning guarantees each program deterministic access to processor time. This strict partitioning is what enables each application to have its own level of criticality and certifiability, without impact from other partitions.
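Spatial partitioning can be sketched in the same illustrative style (again, not PikeOS code; the partition names and quota sizes are invented): each partition draws memory from its own fixed pool, so exhausting one pool fails only that partition's allocation and leaves every other pool untouched.

```python
# Illustrative sketch of spatial partitioning (not the PikeOS API).
# Each partition allocates from its own fixed memory quota; one
# partition exhausting its quota cannot consume another's resources.

class PartitionPool:
    def __init__(self, name, quota):
        self.name, self.quota, self.used = name, quota, 0

    def alloc(self, size):
        """Grant `size` bytes from this pool, or fail locally."""
        if self.used + size > self.quota:
            raise MemoryError(f"partition {self.name}: quota exceeded")
        self.used += size
        return size

pools = {"safety_app": PartitionPool("safety_app", 1024),
         "linux_guest": PartitionPool("linux_guest", 4096)}

pools["linux_guest"].alloc(4096)       # the Linux guest uses its entire quota
try:
    pools["linux_guest"].alloc(1)      # ... so its next request fails
except MemoryError as e:
    print(e)

print(pools["safety_app"].alloc(512))  # the safety partition is unaffected
```

Because the failure is confined to the misbehaving partition's own pool, the safety-critical partition's worst-case resource availability can be analyzed, and certified, without reference to the other guests.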
Safety and certification
More and more industry sectors are concerned with providing the necessary level of safety for the equipment they offer to their customers. Some industries require official approval from independent authorities according to international standards. This requirement translates into a special process called certification.
When a complete piece of equipment is certified, evidence must be provided to the certification authority. This evidence covers both the hardware and the software parts. As such, PikeOS must provide the same documents, source code, and test results as any other software component of the certified system.