

Hacking the Paper Tiger

SYSGO, Security

Certification. For many people, this one word triggers boredom and the mental state of wading through a swamp. That is certainly partly true. On the other hand, a certification process is an analysis of a product from specific viewing angles. Sometimes these angles are fixed and rigid, sometimes agile and creative. The main message is that certification results, i.e. artefacts and certification reports, are the answers to the questions raised from these viewing angles.

Speaking about security certification, such as Common Criteria (CC), also known as ISO/IEC 15408, certification artefacts and reports provide answers about the scope and the assurance of a product’s security functionality. Common Criteria is a well-established framework for evaluating a product’s security and issuing an internationally mutually recognised certificate.

For security researchers, the following three documents should be quite useful:

  • Security Target
  • Certification report
  • Security manual for certified usage

In the following, we present our research on analysing these documents for some operating systems. Since such documents are not written in Agatha Christie style, we will provide explanations of Common Criteria concepts wherever needed.

We will consider some old anecdotes (as a warm-up) and then present results on the security-certified QNX operating system. We will show in detail what the assumptions in the security model for certified QNX look like and what the scope of the certified security functionality is, and we will discuss the trusted computing base in the scope of a typical OS deployment, the assumptions, and, last but not least, the human roles.

The goal is to explain the context, state the facts, provide the audience with the needed references, and help them understand how CC works. The ultimate objective is to show that “certification papers” can indeed give precious insights into products as well as assist security research and vulnerability analysis.

Security Certification Artefacts

In a nutshell, in Common Criteria there are three roles:

  • Developer: develops the product
  • Security evaluation lab: performs the product assessment, resulting in an evaluation report. This report is not public.
  • Certification authority: issues the certificate and the certification report based on the evaluation report

In CC, products are evaluated against their security specification, which is described in a semi-formal language. This specification is called the Security Target (ST): it defines exactly what the product claims as security, what the assumptions are, and what the level of assurance is. The certification report is a summary written by the certification authority based on the evaluation report. The security manual is sometimes public and sometimes not; we will not consider it here and will instead assess some Security Targets and certification reports.

Security Assurance: How to qualify or quantify “nothing bad happens”

We all know a simple rule when buying any service or good: “You get what you pay for”. Still, we need to assess exactly what is offered and treat the price label and the marketing claims with a grain of salt.

When it comes to security, this gets extremely complicated, because what you desire is a system to which nothing bad can happen. Intuitively, this converts to “nothing happens” being the good outcome. Thus, the only qualification and quantification possible for security is assurance: I have a specific assurance that the bad things are kept away, and if something does happen, I will get a warning that is not a false positive. In other words, the deployed solution provides the needed or required trustworthiness.

Trustworthiness can be established through third-party assurance, e.g. by means of a certification scheme such as Common Criteria. In CC, “you get what you pay for” means that the Security Target is your reference when you are shopping for certified security.

The good news about a Security Target is its formal structure: the vendor has to use predefined semi-formal languages.

The bad news is that very same formal structure, i.e. not everyone can easily read this document. The seemingly easiest part to understand is the Evaluation Assurance Level (EAL): it is a single number, and higher supposedly means more secure.
However, this reading is simply false, because the EAL claims only the level of assurance for the technical part of the ST, i.e. the degree to which the claimed security can be expected to work as specified. Thus, it is the technical part of the ST, the security functional requirements, that defines the implemented security and is qualified by the EAL. This technical part is written in a semi-formal language, so again, not everyone can easily read it.

This issue is partially addressed by Common Criteria Protection Profiles (PPs), which describe the security functionality for a whole class of products. For example, the Operating System Protection Profile [OSPP] specifies security requirements for commodity operating systems such as Windows, OS X, and Linux. A product vendor can instantiate the PP and derive an ST for their product. Since a PP is usually developed by a broader community, it is vendor-agnostic and easier to read, and the claimed functionality corresponds to the current state of the art for the given product class. A PP-based ST can thus ease the reading of the ST, since the PP often provides the generic context and a state-of-the-art description.

Security Modelling

The base for security modelling consists of several pillars: assets, threats, threat agents, the threat agent’s malicious actions, and assumptions on environment and product usage.

Once the base has been established, the product’s top-level security objectives are defined. This is followed by an analysis of how the defined objectives protect the assets from the defined threats under the assumptions made. Thus, any assumptions made shall be analysed very carefully, because they can render the whole model unsound: there may be contradictions among the assumptions, or simply weak assumptions; the usage domain of the device may be wrongly assessed; or the human roles may be ignored or underestimated.

Now the security objectives can be mapped to product functionality and subsystems. Here, there are two very tricky steps:

  • Not all subsystems are relevant to security. However, one has to include all subsystems that are relevant for security as well as those needed for the normal/intended operation of the product
  • Modern systems are very complex, and we deal with this complexity by breaking a system into well-defined subsystems, reducing each subsystem to its minimal functionality, and providing well-defined APIs for subsystem composition and extension. It is important to develop strategies ensuring that such compositions and extensions cannot destroy or bypass the certified security

In the following sections we present some tricky issues and pitfalls in working with Common Criteria Protection Profiles and Security Targets.

CAPP: Controlled Access Protection Profile

There was a protection profile called Controlled Access Protection Profile (CAPP) that specified requirements on an access control system. CAPP states:

The CAPP provides for a level of protection, which is appropriate for an assumed non-hostile and well-managed user community requiring protection against threats of inadvertent or casual attempts to breach the system security. The profile is not intended to be applicable to circumstances in which protection is required against determined attempts by hostile and well-funded attackers to breach system security.

Because of this usage domain, CAPP cannot be applied to a system where hostile users can interact with the certified product, i.e. it cannot be meaningfully applied to an operating system that is connected to the Internet.

This PP was applied to some operating systems (e.g. Windows), but it is now deprecated. This is good.


The QNX Security Target

In the context of the recent Jeep hacks [no.1, no.2], we read the QNX Security Target for Neutrino 6.4 (the current version is 6.6).

The interesting part starts in Section 2.3, where the scope of the certification is described:


“The TOE is an RTOS kernel and C language support library, and is intended to be embedded in an appliance with other utility software – it is not designed for stand-alone deployment. Its architecture focuses on providing reliable execution of realtime, mission-critical applications, and for this reason the TOE itself does not implement the traditional set of IT security checks-and-balances, instead leaving these up to the TOE Environment. When the TOE is deployed as part of a larger, properly configured system it will perform its functions as designed; care must be taken by the TOE administrators to ensure that the hardware on which the TOE is installed, and the other operating system components with which it interacts, are properly designed, configured, and deployed.”

Thus, the certified scope covers only the microkernel procnto and libc. In theory, this could be sufficient, but it is rather little for a typical QNX use case.

To check what the certification agency thought about such a product scope, we consulted their report, and indeed they clarify that the QNX certification “does not constitute a complete product intended for consumer use …”.

Back to the Security Target, we found a very strange statement on how to operate the Target of Evaluation (TOE, i.e. only procnto and libc):

“In order to operate the TOE in the CC-compliant configuration, TOE administrators must ensure that all application interactions with procnto are mediated by and occur through the lib/c library – applications are not allowed to access or communicate with procnto directly”

That is very surprising, because it means that programs are simply not allowed to execute system calls directly, invoke CPU traps/interrupts, or otherwise bypass or modify the user-space library. The question is whether, and which, security mechanisms are implemented in libc only, since such mechanisms would create opportunities for TOCTOU-based attacks (Time Of Check to Time Of Use).
Nota bene: in QNX, libc is not a thin wrapper as in some other operating systems; rather, “browsing the content of libc is like going to a great buffet”.
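To illustrate why security checks performed only in a user-space library are dangerous, consider the classic access/open race. The sketch below is a hedged, hypothetical illustration in portable C, not QNX code; the function name open_if_readable is our own invention.

```c
/* Hypothetical TOCTOU illustration (not QNX-specific code):
 * if a permission check lives in a user-space library while the
 * actual operation is carried out later by the kernel, the checked
 * state can change in between. */
#include <fcntl.h>
#include <unistd.h>

int open_if_readable(const char *path)
{
    /* CHECK: the library verifies that the caller may read `path` ... */
    if (access(path, R_OK) != 0)
        return -1;

    /* ... race window: another process can replace `path` here,
     * e.g. by swapping in a symlink to a sensitive file ... */

    /* USE: the kernel opens whatever `path` refers to *now*,
     * which may no longer be what was checked above. */
    return open(path, O_RDONLY);
}
```

A check enforced atomically inside the kernel, at the point of use, closes this window; a check enforced only in libc cannot.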

This is a very strong requirement on the application developer, and conversely a very weak requirement on the operating system, which is supposed to be able to handle malicious programs.

So we decided to jump to the section with the assumptions on the QNX kernel, only to find:

“The people administering the TOE and writing processes and threads for execution by the TOE are non-hostile, appropriately trained, and follow all guidance.”

This means that all developers of programs to be executed on this product shall be non-hostile, i.e. all applications ever executed on this product shall be non-hostile.

All our security alarm bells started ringing.

We continued reading in Section 3.2, “Threats to Security”, where it is written that the real threats are “Misbehaving processes or threads which the TOE executes”.

Putting together the latter, the assumption “A.NOEVIL”, and the requirement that all programs shall always behave, i.e. be non-evil and please use libc, it becomes explicit that the Security Target boils down to protection against unintentional bugs. That is safety protection, i.e. protection only against the probability that a trustworthy (as described in Section 2.3 of the ST) and experienced (as described in Section 3.2 of the ST) programmer makes an unintentional bug. It is not even enough for strong safety.

That is definitely not security protection.

Security protection means that an OS is robust against explicitly malicious applications, and we do not see this in the QNX Security Target. Rereading Section 2.3, “the TOE itself does not implement the traditional set of IT security checks-and-balances”, we now understand better what that clause means.


Conclusions

Writing a good protection profile or security target is tedious work that requires focus, constant consistency checks, and reviews in order to be unambiguous. Defining the security scope is the most important step in the secure product development lifecycle. Common Criteria is a very strong and consistent framework for modelling and assuring security.

Integrating third-party components, e.g. an operating system, requires a careful analysis of the security claims and of the assumptions made on the technical and organisational usage domain, to ensure that all assumptions are met. Otherwise, as we have illustrated, the certified security can, due to the assumptions made, boil down to not very useful functionality in typical deployment environments.

Moreover, such a certificate may create false expectations on the customer side. As a consequence, customers risk overestimating the security guaranteed by the OS and ignoring important aspects during system integration or during the development of the safety/security case.

To address these challenges, an open community approach to security specification should be established. The Operating System Protection Profile (OSPP) is a good approach for generic OSes such as Windows and Linux.

For security-critical products, a more specific protection profile for secure operating systems is needed. We strongly believe that OS vendors should bundle their efforts to define a strong common PP instead of creating a zoo of hand-tailored proprietary specifications, which tend to mislead their customers. The MILS Community is working on defining such a profile and the supporting documents.