Minutes prepared by: Jim Moore, The MITRE Corporation, [email protected]
These minutes were approved at Meeting #4.
INCITS/Information Technology Industry Council
1250 Eye Street, NW - Suite 200
Washington, DC 20005
USA
Host:
InterNational Committee for Information Technology Standards
USA
Host contact:
John Benito, Blue Pilot Consulting, Inc.
Phone: +1 (831) 427-0528
Cell: +1 (831) 600-5547
A wiki for the meeting is located at:
http://wiki.dinkumware.com/twiki/bin/view/OWGVulnerabilityDC/WebHome
The convener, John Benito, called the meeting to order at 9:34 am, 11 December 2006.
The convener asked whether document distribution had been handled well. There were no critical comments.
Attending the meeting were:
Tom Plum was recently named as the liaison from WG14. Fred Long is from the University of Wales and is a visiting scientist at the SEI. Steve Michell will handle the liaison to WG9 in the absence of Erhard Ploedereder.
The convener reminded us of guidelines for courtesy in the meeting.
The minutes [N0042] were approved with the correction shown in the agenda:
Change the first paragraph of 5.3 to read as follows: "Robert Seacord and Tom Plum were given Action Item 01-09 to propose a set of levels. In correspondence, Plum suggested that some other concepts need to be agreed upon, such as critical vulnerability, before a scheme of levels could be developed. Seacord submitted a chart describing the CERT approach [N0031: txt, jpg]. In Seacord's absence, OWGV discussed the suggestion described in his email note. The action item was closed."
The group reviewed open action items. The following status was noted for selected items:
01-06 This item will be closed administratively.
01-08 The convener described interaction with WG21 to get a liaison representative. During the discussion, Jim Moore accepted a new Action Item #03-01 to invite a representative from the Lockheed-Martin JSF team that developed the C++ guidelines.
01-11 The convener has contacted the Eiffel chair regarding a liaison representative. The chair is taking it to their committee.
01-13 The convener has followed up but with no success. We decided to close this action item and replace it with a new item to try to recruit a liaison from the Open Group. The convener took Action Item #03-02, "Try to recruit the Open Group (either UK or US) or others in the Java real-time/safety-critical community. Follow up with Ben Brosgol. Andy Wellings might have some suggestions."
02-03 Steve Michell hasn't been able to do this yet.
02-04 Completed
02-05 Completed
02-06 Open
02-08 Open
02-09 Open
02-10 Closed: Netherlands is willing to host a meeting
02-11 Closed administratively
The Secretary agreed to send reminder notes for action items assigned to persons who did not attend the meeting. (Shortly after the meeting, he did this.)
The group reviewed recent items in the decision log. As a result, we decided to add an item to the agenda to discuss several of them. This was added to the agenda as item 3.7. During the discussion, Robert Seacord agreed to provide a paper regarding vulnerability classification. (During the meeting, he provided the link to [Seacord-2005].)
We approved the agenda as proposed with the addition of item 3.7 noted above and item 1.7.4 "Attracting Additional Stakeholders".
The following meetings were scheduled:
We will meet in the Netherlands during 2008.
For future meetings, we decided to add an agenda item to review the decision log of the previous meeting. [Decision Log #03-01]
15 January is the cut-off for the post-meeting mailer for meeting #3. 2 April is the cut-off date for the pre-meeting mailer for meeting #4.
Fred Long said that we are somewhat handicapped because we are not specific to a particular language, hence we don't specifically appeal to any group. Robert Seacord said that we are not clearly communicating what the group is doing. It was suggested that we need to improve our appeal to tool vendors and to key users. Derek Jones suggested that our work might be perceived as more significant if we were developing a conformance document rather than a Technical Report. (It was noted that we could advance our work further and then seek a change in our terms of reference so that we could publish a conformance document.) Derek Jones argued for writing guidelines that provide a safety valve by permitting documented "deviations". Tom Plum suggested that certain kinds of deviations might trigger certain verification and validation actions, such as code reviews. Robert Seacord proposed using a wiki to fuel discussion. Jim Moore spoke of his recent liaison visit to the SC27 working groups and the apparent interest in coding guidelines. Jim accepted Action Item #03-03 to "Seek a liaison relationship between SC22 and SC27 on the development of coding guidelines to support security."
Jim Moore gave a brief summary of his written report [N0045]. SC7/WG19 has appointed a liaison to the committee for matters concerning modeling languages. (It was noted that there is a mistake in the written report: it said "SC22/WG19" when "SC7/WG19" was intended. This was corrected and the document reposted.)
Dan Nagle reported that J3 is adding an annex to the Fortran draft to list processor-defined behaviour (implementation dependencies). The standard is due to be published in 2008. J3 continues to be very interested in the work of OWGV. They are interested in writing the annex for the OWGV document that would provide the Fortran-specific guidance.
No report
Steve Michell reported that the Ada amendment is in final ballot in JTC1. The next meeting of the WG9 HRG (the group charged with liaison to OWGV) is scheduled for June 2007. They will decide what action should be taken by WG9. They might decide that a reorganized 15942 could serve as their language-specific annex.
Nothing to report
(Place holder only, no current liaison.)
TG2 has not met since the previous meeting of OWGV.
No report
The MISRA C++ group is working to create a document for review by January.
No report
No report
No report
Steve Michell summarized these issues in document [N0048]. Because most of it was incorporated into the draft TR, we decided that it did not need to be discussed separately.
Clive Pygott summarized these in document [N0044]. The group thanked Clive, in his absence, for sharing the material.
Brian Wichmann's paper had been posted as [N0049]. Derek Jones reminded the group that the paper is about tool assurance rather than predictable execution. If we want predictable execution, we want to evaluate that with a tool. How do we know that the tool is correct?
Tom Plum: It is more expensive to analyze a language that permits side effects or other problematic semantics. This phenomenon should inform the cost-benefit analysis for the selection of vulnerabilities to be treated. Jim Moore suggested that the draft report already includes provisions for different kinds of analysis and treatment.
Dan Nagle: "Stepwise predictability" means that if a reasonably competent programmer understands the state of the machine after one instruction, then he can infer the state of the machine after the next instruction.
Steve Michell: We have to assume that the rules of the language apply - i.e. that the compiler implements it correctly.
Derek Jones: We have to include aspects of human understanding, i.e. things that normally competent humans make mistakes about.
Dan Nagle: This sort of discussion leads us back to the point that we should select items based on how often problems are experienced in practice.
Derek Jones: Predictable execution involves having knowledge. The amount of knowledge required is a consideration in defining the concept. We want to be able to predict behaviour with only a small subset of the applicable knowledge.
Fred Long: Perhaps we should talk about "expected" execution rather than "predictable" execution.
Tom Plum: Some applications will require a concept of predictability that is more formal.
We discussed an email note that Tom Plum sent to the mailer on 30 November. In it, he wrote the words shown below in black:
IMHO there is (at least) a consensus that whatever criteria we define for execution must exclude the "unacceptable" behaviors, such as core dump. Herewith I propose some terminology for this category: A Critical Condition:
A Critical [Language] Vulnerability: (Note: a Critical [Application] Vulnerability in even the most innocuous non-critical component can nonetheless crash the system; i.e., there is a "non-linear" relationship between the criticality of the vulnerability and the criticality of the component.) The Critical [Application] Vulnerabilities are as follows:
Robert Seacord pointed out that the note uses the term "vulnerability" in two different senses. As a result of this and other comments, we separated the meaning of "language" vulnerability and "application" vulnerability and clarified the note with the additions shown in red above. This subject was revisited in item 3.6, below.
Tom's note outlined a set of "critical vulnerabilities". These could be viewed as criteria for deciding whether a vulnerability is worth paying serious attention to.
We formulated a table drawing a contrast between two notions called "formally predictable behaviour" and "pragmatically expected behaviour":

Formally predictable behaviour | Pragmatically expected behaviour
Requires full knowledge of the state of the system | Reasons in the presence of incomplete and uncertain knowledge
Relies on mathematical formality | Relies on expected knowledge and expected competency of the programmer
Requires a formal specification of behaviour | Requires the source code and the language standard (and maybe implementation knowledge)
It was generally agreed that the concept of predictable behaviour to be used in our report should be close to that described in the right hand column of the table. Material derived from the table should go into the document somewhere. [Decision Log #03-03]
Derek Jones had contributed a paper on programmer expertise, posted as [N0050].
Tom Plum: Our guidelines ought to encourage folks to use the programming language standard, rather than deriving knowledge from particular implementations.
Robert Seacord: Our guidelines ought to encourage language vendors to provide the additional knowledge needed to provide predictability for non-standard features.
Derek Jones: If we can find out which parts of the language are poorly understood by most practitioners, then we can provide guidelines that focus on those parts.
Derek Jones: The idea is to write guidelines that permit programmers to work with the smallest possible amount of knowledge, e.g. operator precedence. We need a model of developer knowledge, the amount that we should assume. A prevailing lack of expertise regarding a particular language feature would be a legitimate reason for writing a guideline concerning that feature.
Derek Jones had contributed a paper proposing some vulnerability guidelines; it was posted as [N0051].
We considered the guidelines proposed by his paper. The first is CG1: Use of a construct having unspecified behavior shall produce a result [that] is the same for all of the possible behaviors defined by the language definition.
The guidelines that he proposed would be translated into language-specific guidelines in the annexes. For example, CG1 might translate into a lot of specific guidelines for a particular language.
Derek Jones took Action Item #03-04: For some of the languages of interest, provide a cross-reference for the terms that the languages use for "implementation-defined," "unspecified", etc.
Next, we considered CG2: Use of a construct having implementation defined behavior shall produce a result [that] is the same for all of the possible behaviors likely to occur in the implementations used.
CG2 could be rewritten to say that the guideline should specify the additional knowledge that is required to infer predictability in execution.
Next, we considered CG3: Any use of a construct having undefined behavior shall not occur in a program.
Robert Seacord said that we need to subdivide this into erroneous outcomes versus benign outcomes. Fred Long suggested that the Ada concept of bounded errors is useful in this context.
Next, we considered CG4: Algorithms or operations shall not make use of knowledge of representation details.
Jim Moore asked why this rule is a flat prohibition. Derek Jones suggested that the language specific guidelines could provide for the appropriate deviations and exceptions. Tom Plum said that he believes that organizational policies would probably provide exceptions; this could be, at best, the default rule. Jim Moore suggested that programmers in many languages, including C, cannot practically follow this rule. Dan Nagle would prefer a rule that any use of representation should be encapsulated and documented.
Next, we considered CG5: No assumptions about the accuracy of the result of any expression that contains operands having a floating-point type shall be made unless they are backed by a mathematical analysis.
This guideline seemed basically acceptable to the group; we would hope for something similar with respect to concurrency.
Next, we considered two proposed guidelines that Derek Jones suggested were alternatives ...
After discussion regarding name space issues in various languages and programming methods, we decided to set these two aside. The issues might be language-specific.
The latter two suggestions, CG6 and CG7, were in a section of the report devoted to "performance failures", mistakes that occur due to the limits of human cognitive capabilities in complex situations. There was some disagreement about whether issues of this sort should be treated at all. It was finally agreed that such issues are "in scope" for the TR but should be positioned late in the TR so that more concrete issues appear first. This issue was revisited during the discussion of the next topic.
The group then considered the working draft of PDTR 24772, prepared by John Benito and posted as [N0040].
Dan Nagle asked that future working drafts be issued with line numbers to improve ease of reference. That was agreed. [Decision Log #03-05]
Steve Michell volunteered to rewrite 1.3 of the current draft to improve the readability. [Action Item #03-05]
It was agreed that none of the documents currently listed as references are normative. They should all be moved into a bibliography annex. References to the language standards should be added. [Decision Log #03-06]
A few definitions were drafted for Clause 3:
3.x (language) vulnerability
A construct or a combination of constructs in a programming language that can lead to an application vulnerability.

3.x (application) vulnerability
A security vulnerability or safety hazard.

3.x (security) vulnerability
A set of conditions that allows an attacker to violate an explicit or implicit security policy.

3.x (safety) hazard
(Definition from 61508 or whatever.)

3.x predictable execution
The property of a program such that all possible executions have results that can be predicted from the relevant programming language definition, any relevant language-defined implementation characteristics, and knowledge of the universe of execution.
Note: In some environments, this would raise issues regarding numerical stability, exceptional processing, and concurrent execution.
Note: Predictable execution is an ideal that must be approached keeping in mind the limits of human capability, knowledge, availability of tools, etc. Neither this nor any standard ensures predictable execution. Rather, this standard provides advice on improving predictability. The purpose of this document is to assist a reasonably competent programmer in approaching the ideal of predictable execution.
Subsequent to the meeting, Jim Moore found the following possibilities for hazard:
hazard. (1) an intrinsic property or condition that has the potential to cause harm or damage; (2) a source of potential harm or a situation with a potential for harm in terms of human injury, damage to health, property, or the environment, or some combination of these. (IEEE 1012-2004, IEEE Standard for Software Verification and Validation, 3.1.11)

software hazard. (1) a software condition that is a prerequisite to an accident. (IEEE Std 1228-1994, IEEE Standard for Software Safety Plans, 3.1.5)

hazard. potential source of harm. (IEC 61508-4 and ISO/IEC Guide 51)
These definitions should be incorporated into the draft TR. [Decision Log #03-02]
It was decided that there should be a section 1.4 in the draft document to discuss the intended audience. [Decision Log #03-07].
We decided that 5.3 should be rewritten as follows: "Portability can refer to people and tools as well as applications. In this document, we are primarily concerned with the first two. Portability of applications may be an ancillary benefit of applying these guidelines but is not the purpose of the guidelines."
We moved on to 5.4, the list of vulnerabilities. The initial set of issues in the draft TR were derived from a document contributed by Steve Michell.
The first issue, 5.4.1, was strong typing versus weak typing. Tom Plum said that several issues are under this single heading. After some discussion, the following issues were separated out:
We moved on to 5.4.2, unbounded types. We decided it should be rephrased as something like, "how do you deal with data when you don't know its size a priori".
The next item was 5.4.3, runtime support for typing. We agreed on guidance something like, "If you're relying on run-time checking, it's probably because you don't have the static information needed to do a static analysis. Since you can't do the static analysis, you need to make sure that the dynamic checking is done everywhere."
The next item was 5.4.4, arrays. Some key points to be covered in the description include:
The next item was 5.4.5, objects with variant structures. Accessing the data in the wrong view leads to undefined behaviour. Use of multiple representations certainly complicates static analysis. It may also lead to inadvertent aliasing. A guideline might suggest ensuring that the view when you retrieve is the same as the view when you last stored. Checkable tags are one way to do this.
The discussion of the appropriate role for "human cognitive limitations" continued. We tentatively settled on a possible organization for Clause 5 of the technical report:
5. Vulnerability issues 5.1 Issues arising from lack of knowledge 5.1.1 Issues arising from unspecified behaviour 5.1.1.x specific issues 5.1.2 Issues arising from implementation defined behaviour 5.1.2.x specific issues 5.1.3 Issues arising from undefined behaviour 5.1.3.x specific issues 5.1.4 Issues arising from incorrect assumptions (including numerical accuracy, concurrency, not looking in the specification) 5.1.4.x specific issues 5.2 Issues arising from human cognitive limitations 5.2.1 Issues arising from visual similarity 5.2.2 Issues arising from name confusion |
Each of the issues resulting from lack of knowledge is described in a language-independent fashion and then treated in the language-specific annex. The issues resulting from cognitive resources are described generically but we are not agreed that they should be treated in the language-specific annexes. [Decision Log #03-04]
The next item was 5.4.6, name overloading, operator overloading, and overriding. We decided, for now, to treat this as a human-limitations issue.
The next item was 5.4.7, unbounded objects. The first problem has to do with the size and location of dynamically allocated objects. Steve Michell said that the first paragraph is wrong; it should read, "Some languages can produce objects that have sizes which are determined at run-time." This is different from unbounded types. This involves techniques of dynamic memory allocation, both pointers and heaps. It may include things passed to/from runtime libraries. Derek Jones suggested generalizing this to state that when you request any resource you should make sure that you have really gotten it. Examples include allocating storage and opening files.
The next item was 5.4.8, constants. We discussed these and decided that most of them do not generalize across languages. One of the enumerated problems was that a program might write to a loop iteration variable expecting that the number of iterations is affected. Derek Jones stated that this is a cognitive limitation. His paper [Jones-2006] treats the subject.
Another small problem was found in the draft. It appears that section 8.1.6 is repeated.
John Benito was asked to incorporate the decisions made during this discussion into the draft TR. [Decision Log #03-08]
It was decided that some of the items in the decision log from Meeting #2 should be discussed and possibly reconsidered.
Fred Long said that since we are planning to take a pragmatic approach to predictability, we don't need the concept of "stepwise predictability" which is a formal concept. Dan Nagle disagreed, saying that "stepwise" prediction is what programmers do when they read their code.
For now, we decided to relabel the decision about "stepwise predictability" so that the question remains open. We will draft material for the document based on this week's discussion of "pragmatic predictability". Those who believe that "stepwise predictability" is a useful concept will be welcome to contribute alternative wording.
Derek Jones had proposed a rule for floating point that amounts to "get help from a numerical analyst". We might want to apply that approach to concurrency also, but who are the experts? Fred pointed out that floating point problems are usually bounded errors.
Rephrase: Use Derek's approach to floating point. Treatment of concurrency may be delayed for now but remains on our agenda as an important thing to do. [Decision Log #03-09]
Robert Seacord said that "likelihood" refers to the probability that a vulnerability can actually be exploited. The discussion became somewhat fragmented over the relationship between safety hazards and security vulnerabilities; we decided to set this issue aside.
We decided on the latter, confirming the previous decision.
We decided that the answer is "Yes", but we should probably write guidance rather than conformance criteria. This has the effect of replacing the previous tentative decision #02-14 with a new one. [Decision Log #03-10]
We decided that the answer is Yes. The rationale should be a freely available document, available on a web site. Wiki technology might be an interesting way to draft this document. [Decision Log #03-11]
Based on discussion, Steve Michell selected some material from [N0048] and revised it as [N0054]. This led to a discussion of a template for the language-independent description of a vulnerability. Several alternatives are captured in [N0056].
Jim Moore reviewed the meeting notes and informally covered action items and decisions. He will distribute cleaner minutes, action item log, and decision log shortly after the meeting.
By acclamation, we thanked Deb Spittle and INCITS for the meeting facilities and Blue Pilot for the muffins.
The meeting was adjourned at approximately 12:30 pm on Wednesday, 13 December 2006.