Mistaken assumptions about authorized users constrain the trustworthiness of information systems
The National Institute of Standards and Technology (NIST) released an updated guide to its Risk Management Framework (RMF) in December when it published the final public draft of Special Publication 800-39. Among several areas where the document has changed substantially from the previous draft version is its treatment of trust and trustworthiness, with a revised section on trust and trustworthiness of both organizations and information systems, and a newly added appendix describing several trust models or approaches to establishing trusted relationships between organizations. The choice of which model (or models, as they are generally not mutually exclusive) to use depends on a variety of factors about the context in which trust is sought and the nature of the entities that need to trust each other. By separately addressing trust between organizations and trust between systems, SP 800-39 illustrates the different types of factors used to assess the trustworthiness of other entities and, perhaps unintentionally, highlights some of the limitations inherent in trusted computing models that can lead to unintended gaps in security.
No recent incident highlights this problem more effectively than the leak of hundreds of thousands of State Department cables and other documents, apparently downloaded and exfiltrated without detection by an authorized user of the Net-Centric Diplomacy database. The system holding the classified information was deployed on a secure, access-controlled military network, but with no individual user-level authentication and authorization mechanisms and no capability to monitor the activity of users accessing the database. Considered from the perspective of a computer security assurance model like ISO/IEC 15408 (Common Criteria), even this vulnerable implementation satisfies one of the two properties used to establish assurance levels for a system: the security functionality provided by the system works as specified. Where the system appears to come up short on assurance is with respect to the second property: that the system cannot be used in a way that allows its security functionality to be corrupted or bypassed. In hindsight it seems fair to suggest that the security requirements for this system were not well thought out; because the deficiencies in its security posture result from the failure to implement certain types of controls, rather than from any malfunction or evasion of the controls that were implemented, an argument can be made that the system might actually qualify as “trusted” at the lower evaluation assurance levels. The most relevant weakness in the system as implemented concerns not the trustworthiness of the system, but that of the users authorized to access it.
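To make the missing controls concrete, here is a minimal sketch of the two capabilities the system apparently lacked: a per-user authorization check on each retrieval and an audit record of every access, rather than reliance on the network boundary alone. The names (AUTHORIZED_READERS, fetch_document, and so on) are hypothetical and do not describe the actual Net-Centric Diplomacy implementation.

```python
# Illustrative sketch only: the function and data names are hypothetical
# and do not describe the actual Net-Centric Diplomacy implementation.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("document_audit")

# Hypothetical per-document access list, keyed by document identifier.
AUTHORIZED_READERS = {
    "cable-0001": {"analyst_a", "analyst_b"},
}

def load_document(doc_id: str) -> str:
    # Placeholder for the actual retrieval from storage.
    return f"<contents of {doc_id}>"

def fetch_document(user_id: str, doc_id: str) -> str:
    """Check the individual user's authorization for this document and log the access."""
    allowed = user_id in AUTHORIZED_READERS.get(doc_id, set())
    audit_log.info("%s user=%s doc=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), user_id, doc_id, allowed)
    if not allowed:
        raise PermissionError(f"{user_id} is not authorized to read {doc_id}")
    return load_document(doc_id)
```

The point of the sketch is simply that every retrieval is attributed to an individual user and recorded, whether or not it is permitted; neither attribution nor recording appears to have been possible in the deployed system.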
Authorized use of the Department of Defense’s Secret Internet Protocol Router Network (SIPRNet) is limited to users with security clearances sufficient to access classified information. Receiving such a security clearance requires a fairly extensive investigation, and the result is that individuals deemed qualified for a Secret (or higher) clearance are considered to be trustworthy. Few systems that contain or process sensitive classified information rely only on a user’s security clearance for authentication and authorization, but for the Net-Centric Diplomacy database it would seem that if a user could log on to SIPRNet, the user could get access to the information stored in the database. The trustworthiness of a system envisioned for this kind of use should be directly tied to the trustworthiness (or lack thereof) of the system’s authorized users, but in this case either no such personnel-level evaluation occurred, or the assumption of trustworthiness associated with clearance holders resulted in a gross underestimation of the threat posed by authorized insiders. The consequences are now readily apparent.
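The distinction between holding a clearance and being authorized for particular information can be shown in a short sketch. The data model and compartment labels below are hypothetical; the check simply captures the common pattern that access requires both a sufficient clearance level and need-to-know for the specific material, so a SIPRNet logon by itself would not be enough.

```python
# Illustrative sketch only (Python 3.9+): the data model and compartment labels are
# hypothetical; the point is that clearance level alone does not confer access.
from dataclasses import dataclass, field

CLEARANCE_ORDER = {"CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass
class User:
    name: str
    clearance: str
    need_to_know: set[str] = field(default_factory=set)   # programs, regions, caveats

@dataclass
class Document:
    doc_id: str
    classification: str
    compartments: set[str] = field(default_factory=set)

def may_access(user: User, doc: Document) -> bool:
    """Require both a sufficient clearance and need-to-know for the document."""
    cleared = CLEARANCE_ORDER[user.clearance] >= CLEARANCE_ORDER[doc.classification]
    has_need = doc.compartments <= user.need_to_know
    return cleared and has_need

# A Secret-cleared user without the relevant need-to-know is still denied access.
analyst = User("analyst_a", "SECRET", {"EUR"})
cable = Document("cable-0001", "SECRET", {"NEA"})
assert may_access(analyst, cable) is False
```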
As NIST defines it in SP 800-39, trustworthiness “is an attribute of a person or organization that provides confidence to others of the qualifications, capabilities, and reliability of that entity to perform specific tasks and fulfill assigned responsibilities.” When applied to people or organizations, trustworthiness determinations may take into account factors beyond competency, such as reputation, risk tolerance, or the interests, incentives, and motivation of the person or organization to behave as expected. There is, of course, no expectation that every situation requires entities to demonstrate trustworthiness, or that they be trusted at the same level across different contexts or purposes. With respect to information systems, among the factors NIST cites as contributing to determinations of trustworthiness are the security functionality delivered by the system and the assurance, or grounds for confidence, that the security functionality has been implemented correctly and operates effectively. Unsurprisingly, this perspective fits nicely with the concepts of minimum security control requirements and standard control baselines that NIST also uses, in Federal Information Processing Standard 200 and Special Publication 800-53, respectively. IT trust frameworks focused on system assurance defined in terms of predictability, reliability, or functionality tend to equate assurance with trustworthiness, characterizing trusted systems in a way that dissociates the system from those who use, operate, or access it. Such an approach ignores the constraints on trustworthiness that might apply if the system were evaluated in concert with the non-system actors (organizations and people) that have the capability to influence the system’s behavior or the disposition of the information the system receives, stores, or disseminates. A highly trusted system (in the Common Criteria sense) that is designed to be used in a particular way can be, and sometimes is, misused. Data exchanged with trusted organizations or trusted systems is only as secure or private as the authorized users within the organization, or with access to the system, choose to keep it. This is why the provision of organizational capabilities to monitor user actions on trusted systems should be an essential prerequisite to establishing trust relationships, particularly when those relationships are negotiated only at the organization-to-organization or system-to-system level.
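One form such a monitoring capability might take is sketched below. The record format and the one-day, 500-document threshold are hypothetical values chosen for the example; the idea is simply that audit records of user activity can be reviewed automatically to flag bulk retrievals of the kind described above.

```python
# Illustrative sketch only: the record format and the one-day / 500-document
# threshold are hypothetical values chosen for the example.
from collections import Counter
from datetime import datetime
from typing import Iterable, NamedTuple

class AccessRecord(NamedTuple):
    timestamp: datetime
    user_id: str
    doc_id: str

def flag_bulk_retrievals(records: Iterable[AccessRecord], threshold: int = 500) -> set:
    """Return the IDs of users who retrieved more than `threshold` documents in one day."""
    per_user_day = Counter((r.user_id, r.timestamp.date()) for r in records)
    return {user for (user, _), count in per_user_day.items() if count > threshold}
```

Even a simple review of this kind presupposes the per-user audit trail sketched earlier; without it, an organization extending trust at the system-to-system level has no way to notice that an authorized user is behaving in a way that undermines that trust.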