ISO/IEC JTC 1/SC 22/WG14 N1232

ISO/IEC JTC 1/SC 22/OWGV N0062

02 April 2007

 

Thomas Plum, [email protected]

 

Vulnerability, Safety, Security, and Quality

 

 

Introduction

 

This paper is a personal contribution, not the official opinion of any specific group.  The paper attempts to cover a broad range of subjects at an overview level.  It provides references to treatments in greater detail, but the author will appreciate receiving suggestions for additional references to provide broader coverage.  Whenever anyone undertakes a broad overview beyond one’s own specialty, there will probably be a large number of embarrassing oversights and mistakes, but that may be an inevitable price of the necessary breadth of coverage.

 

Information-technology standards are the domain of ISO/IEC Joint Technical Committee 1 (JTC 1).  Within that domain, programming languages fall within Subcommittee 22 (SC22).  There is no direct responsibility for the areas of Security, Safety, and Quality in the charter of SC22, but programming languages (and their standardized libraries) play an important role (both positively and negatively) in all three areas.  Therefore, SC22 has created the Other Working Group: Vulnerabilities (OWGV) to produce a Technical Report entitled “Guidance to Avoiding Vulnerabilities in Programming Languages through Language Selection and Use”.

 

This informal paper is primarily addressed to the programming language standards committees (especially WG14 and WG21), and of course to the SC22 members, who initiated OWGV. 

 

Where no citation is given, this author must take responsibility for the terminology.  A further warning: these are personal opinions, and do not mean that OWGV has yet adopted a consensus view.

 

 

Terminology

 

Implementation-defined behavior – behavior, for a well-formed program construct and correct data, that depends on the implementation and that each implementation shall document.

 

Unspecified behavior – behavior, for a well-formed program construct and correct data, that depends on the implementation (but that the implementation is not required to document).

 

Undefined behavior – behavior, such as might arise upon use of an erroneous program construct or erroneous data, for which the standard imposes no requirements.

 

Extension – syntax or semantics which goes beyond the language standard.

 

Static analysis – analysis of program properties which does not require execution of the program; as used here, it includes link-time analysis.

 

Dynamic analysis – analysis of program properties which requires execution of the program.

 

Safety-critical software (herein abbreviated to the category of safety)  – software for applications where failure can cause very serious consequences such as human injury or death.  In this category, a failure is called a hazard.  Closely related categories include high-integrity software, predictable execution, and reliability.

 

Software quality  (herein abbreviated to the category of quality) – the degree to which software implements the needs described by its specification.  In this category, a failure is called a bug.   Closely related categories include conformance.

 

Software security (herein abbreviated to the category of security) – the aspect of software which is defined by a security policy (including factors such as confidentiality, authentication, non-repudiation, etc. [SC27/WG2]).  One important distinction between safety and security is that security must address the possibility of an attacker, who is assumed to be malevolent [Seacord].    In this category, a failure is called a weakness (see the “Common Weakness Enumeration” or “CWE” [Martin]).  If a weakness in a particular application is such that an attacker could exploit it, then that weakness is called a vulnerability in that application.  Closely related categories include trustworthiness.  

 

The term software assurance (SwA) includes predictable execution, conformance, and trustworthiness, that is, all three categories above – safety, quality, and security. [Jarzombek]

 

Remote content – executable material that is external to the current application, e.g. a program invoked by the system() function.

 

Security-critical behavior (herein abbreviated to critical behavior, or CB) – a behavior which, when executed, can directly cause a weakness, e.g. any execution of the system() function.

 

Critical undefined behavior (herein abbreviated CUB) – an undefined behavior which, no matter what component it appears in, can directly cause a weakness.  Thus, the set of CUBs is a subset of the CBs.  The classic, and still most frequent, example is the buffer overflow.
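
To make the CUB concrete, consider the following fragment (the function is hypothetical and purely illustrative); nothing in its interface marks it as security-critical, yet it can exhibit the buffer-overflow CUB:

    #include <cstring>

    // A hypothetical logging helper; nothing in its interface suggests
    // that it is security-critical.
    void log_message(const char *msg)
    {
        char buf[64];
        // If msg holds more than 63 characters, the copy writes past the
        // end of buf: the classic buffer-overflow CUB.  An attacker who
        // controls msg may be able to corrupt adjacent memory, including
        // a return address.
        std::strcpy(buf, msg);
        // ... format buf and write it to the log ...
    }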

 

A vulnerability in a programming language is (in terms of the categories above) a feature of that language which can cause, or is strongly correlated with, a weakness, a hazard, or a bug.  (OWGV may be nearing a consensus on something like this definition; note that it is distinct from vulnerability in an application, as described above.)

 

 

Discussion

 

In the past, many control systems were constrained to meet safety requirements but not security requirements.  Then some of these systems were connected to the internet, e.g. so that they could be monitored remotely, but those connections now provide an entry point for a malevolent attacker.  (See [Jarzombek].)  Therefore, the categories of systems that require security analysis may expand vastly in the near term (see [Clarke] for novelistically-dramatized illustrations).

 

Applications in most programming languages can be vulnerable to execution of remote content – remote file inclusion, cross-site scripting, etc. – but the APIs that access the remote content form a known set.  In other words, execution of remote content is a critical behavior (CB), but not a critical undefined behavior (CUB).
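
For instance (system() serves here merely as one representative member of that known set of APIs), a static tool can flag every such access point simply by matching the call by name:

    #include <cstdlib>

    void run_nightly_report()
    {
        // Critical behavior (CB): this call hands control to an external
        // program.  Because system() belongs to a known set of APIs, a
        // static tool can flag the call by name alone, without any deeper
        // semantic analysis.
        std::system("generate_report");
    }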

 

Each occurrence of implementation-defined behavior, unspecified behavior, or undefined behavior in a language’s standard will typically increase the cost of any validation tool for that language, because the tool will require implementation-specific configuration.  However, the costs of validation tools do not by themselves cause vulnerabilities, and therefore those costs are outside the scope of OWGV.  Furthermore, it has been reported that non-standard implementation-specific extensions are an even greater factor in validation-tool costs [Engler], and non-standard extensions are clearly outside the scope of OWGV.

 

Guidelines to achieve safety, security and quality will typically define some subset of the full language and are usually enforced by some variety of static analysis tools.  Such guidelines are typically designed to prevent some (hopefully large) class of problems; in this sense, they produce an incremental, or heuristic, level of improvement.  In many cases, the cost-benefit for such guidelines and associated tools is significantly favorable, especially if the alternative is the status quo of “more of the same”.  For influential recent examples, see [Kass] and [Koo], and also [CERT-SCS].

 

In another example of the subsetting-plus-tool approach, the JSF++ standard [JSF++] prevents occurrence of the buffer-overflow CUB by prohibiting any use of the built-in array data type, requiring all arrays to be implemented by C++ class-library components instead.  (Obviously, JSF++ prohibits any use of “plain C” in an application.)
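
The following sketch conveys the general idea only; std::vector and its at() member are used as stand-ins for whichever class-library array components a coding standard such as JSF++ actually designates:

    #include <cstddef>
    #include <vector>

    int sum_first_n(const std::vector<int>& values, std::size_t n)
    {
        int sum = 0;
        for (std::size_t i = 0; i < n; ++i)
        {
            // at() checks the subscript and throws std::out_of_range,
            // whereas indexing a built-in array with an out-of-range i
            // would produce the buffer-overflow CUB.
            sum += values.at(i);
        }
        return sum;
    }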

 

We can contrast the subsetting-plus-tool approach with the “full set” approach, which avoids subsetting the language but will typically require even larger tools.  For one example in the “full set” category, the “vtable exploit” CUB can be diagnosed by whole-program analysis; see [Quinlan].
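
The following deliberately simplified sketch (the class, functions, and file names are all hypothetical) suggests why whole-program analysis is needed: the two translation units violate the One-Definition Rule, no single translation unit ever sees both definitions of Widget, and on typical implementations the mismatched vtable layouts can send a virtual call to the wrong function:

    // --- file1.cpp (hypothetical) ---
    struct Widget {
        virtual void draw()   { /* ... */ }
        virtual void resize() { /* ... */ }
    };
    // On typical implementations this call goes through the second
    // vtable slot, expecting it to be resize().
    void use(Widget *w) { w->resize(); }

    // --- file2.cpp (hypothetical) ---
    // A different definition under the same name: an ODR violation,
    // hence undefined behavior, that no per-file tool can see.  If an
    // object created here reaches use() in file1.cpp, the second vtable
    // slot may hold destroy() rather than resize().
    struct Widget {
        virtual void draw()    { /* ... */ }
        virtual void destroy() { delete this; }
    };
    Widget *make_widget() { return new Widget(); }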

 

There is a specific set of CUBs included in the CWE, as follows: various categories of buffer overflow (121, 122, 124) and out-of-bounds indexing (129); (signed) integer wraparound (128), truncation (197), and overflow (190); indirecting through a null pointer (476); double free (415); using a pointer after free (416); and non-reentrant functions re-invoked by a signal handler (479).  (For the full list of weaknesses, see [CWE]; each definition has its own page, so http://cwe.mitre.org/data/definitions/121.html provides definition 121, etc.)
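
The following fragment is purely illustrative, showing how compactly several of these CUBs can appear in ordinary-looking code; the parenthesized numbers refer to the CWE definitions listed above:

    #include <climits>
    #include <cstdlib>

    void cub_examples(int *p, int i, int n)
    {
        int big = INT_MAX;
        big = big + 1;              // signed integer overflow (190)

        int buf[8];
        buf[i] = n;                 // out-of-bounds indexing (129),
                                    // if i is outside 0..7

        *p = n;                     // indirecting through a null
                                    // pointer (476), if the caller
                                    // passed a null p

        int *q = static_cast<int *>(std::malloc(sizeof *q));
        std::free(q);
        *q = n;                     // use after free (416)
        std::free(q);               // double free (415)
    }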

 

To summarize: we propose to divide security-critical software components into three sub-categories:

  1. “obviously critical, per spec” - software components which are determined by the security policy and the specification to be critical for correctly implementing that policy (e.g., a component which checks a user’s password);
  2. “obviously critical, per contents” - any software component that contains a (well-defined) behavior which has been determined to be security-critical (e.g., a component that invokes system());
  3. “critical, but not obviously” - any software component that contains a CUB (as above).

 

Categories 1 and 2 are probably beyond the scope of programming-language experts, as they involve security policy and software/systems engineering methodology.  Presumably, such components must be quality-tested to the highest level supported by the project, but in no case to less than an organization-wide definition of minimal quality testing – e.g. a component which invokes system() must ensure that the program name passed to the function has been validated according to the project’s security policy.
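
As one hedged sketch of such a validation (the allow-list, its contents, and the function names are hypothetical; an actual check would be dictated by the project’s own security policy), a component might confine system() to a fixed allow-list of program names:

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    // Hypothetical allow-list; a real project's security policy would
    // determine both its contents and where it is maintained.
    static const char *const allowed_programs[] = {
        "generate_report",
        "rotate_logs"
    };
    static const std::size_t num_allowed =
        sizeof allowed_programs / sizeof allowed_programs[0];

    // Invokes system() only for a program name that appears verbatim on
    // the allow-list; any other request is rejected before system() is
    // ever reached.  Note that the string actually passed to system() is
    // the vetted constant, not the caller's copy.
    bool run_approved_program(const char *requested_name)
    {
        for (std::size_t i = 0; i < num_allowed; ++i)
            if (std::strcmp(requested_name, allowed_programs[i]) == 0)
                return std::system(allowed_programs[i]) == 0;
        return false;
    }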

 

However, category 3 requires specialized expertise in the programming language’s specification and implementation, to understand how a CUB in some component could create a security weakness even if that component is not obviously security-critical.  At the risk of over-simplification, OWGV (or SC22) should ask each language committee “What are the critical undefined behaviors (CUBs) in your programming language?”

 

We close with a brief disclosure of Plum Hall’s commercial interests: we are working on methods to attack the problems of security, safety and quality at the language-foundation level, by preventing undefined behaviors (especially CUBs) with definitive methods rather than the incremental or heuristic methods described above.  A combination of methods is required, at compile-time, at link-time, and (to a surprisingly small extent) at run-time [Plum#1].

 

Conformance test suites for C, C++, Java, and C# are developed and distributed by Plum Hall.  These suites (especially the C suite) are sometimes used in the process of qualifying compilers for use in safety-critical applications [Plum#2].

 

Projects with the highest requirements for safety and quality can make use of commercially available compiler-testing services which maintain huge databases of regression tests for error-free code generation.  A service such as the Compiler Qualification Service (CQS) from Japan Novel has for many years been routinely used by the large producers of embedded systems in Japan.  The CQS service is available in Europe and the English-speaking countries, where it is currently distributed by Plum Hall [Plum#3].

 

 

 

REFERENCES

 

[SC27/WG2] Terms of reference for JTC1 SC27/WG2 (Cryptography and security mechanisms), from http://www.nia.din.de/sixcms/detail.php?id=5182

 

[Black] Paul E. Black et al., “Proceedings of the Static Analysis Summit”.  NIST Special Publication 500-262.  http://samate.nist.gov/docs/NIST_Special_Publication_500-262.pdf

 

[CERT-SCS] “CERT Secure Coding Standards”.  http://www.securecoding.cert.org

 

[Clarke] Richard A. Clarke, Breakpoint.  Penguin Group, 2007.  http://www.amazon.com/Breakpoint-Richard-Clarke/dp/0399153780

 

[CWE] “Common Weakness Enumeration”.  http://cwe.mitre.org/

 

[Engler] “Keynote Presentation – Dawson Engler”, in [Black], pp. 9-13, esp. p. 10.

 

[Jarzombek] OWGV-N0018.  Joe Jarzombek, US Department of Homeland Security, "Considerations in Advancing the National Strategy to Secure Cyberspace," presented at Meeting #1 of OWGV, 27 June 2006.  http://www.aitcnet.org/isai/DocLog/22-OWGV-N-0018/DHS%20SwA%20Overview%2023Jun06.pdf

 

[JSF++] “JOINT STRIKE FIGHTER AIR VEHICLE C++ CODING STANDARDS FOR THE SYSTEM DEVELOPMENT AND DEMONSTRATION PROGRAM”, Document Number 2RDU00001 Rev C, December 2005.  Click on “JSF_AV_C++_Coding_Standards_Rev_C.doc” at http://www.jsf.mil/downloads/down_documentation.htm

 

[Kass] Michael Kass et al., “Source Code Security Analysis Tool Functional Specification Version 1.0”, NIST Draft Special Publication 500-268, 29 January 2007.  http://samate.nist.gov/docs/source_code_security_analysis_tool_spec_01_29_07.pdf

 

[Koo] Michael Koo et al., “Source Code Security Analysis Tool Test Plan; Draft 1 for public comment of Version 1.0”.  http://samate.nist.gov/docs/source_code_security_analysis_test_plan_03_09_07.pdf

 

[Martin]  Robert A. Martin, The MITRE Corporation, "The Common Weakness Enumeration Initiative," presented at Meeting #1 of OWGV, 27 June 2006. http://www.aitcnet.org/isai/DocLog/22-OWGV-N-0019/CWE_Overview.pdf

 

[OWGV] ISO/IEC JTC 1/SC 22/OWG:Vulnerabilities  http://www.aitcnet.org/isai/

 

[Plum#1] http://www.plumhall.com/sscc.html

 

[Plum#2] http://www.plumhall.com/descr.html

 

[Plum#3] http://www.plumhall.com/cqs/cqs.html

 

[Quinlan] Dan Quinlan et al., “Support for Whole-Program Analysis and the Verification of the One-Definition Rule in C++”, in [Black], pp. 27-35.

 

[Seacord] Robert C. Seacord, Secure Coding in C and C++. Addison-Wesley, 2006.  http://www.amazon.com/Secure-Coding-C%2B%2B-Software-Engineering/dp/0321335724/