Trusted computers are reliable, but that’s not the same thing as trustworthy
Trust in a security context normally means reliability or, in the identification and authentication context, authenticity. When the term trusted is applied to a system or capability, it carries the same connotation: a trusted system is one that can be relied upon to perform as expected or intended. While these are valuable attributes for a system, and are characteristics that security controls can go a long way toward providing, a reliable system is only trustworthy to the extent that it is used properly, and trusted computing standards do not provide any direct assurances of this nature. For example, the WS-Trust web services security specification defines trust as “the characteristic that one entity is willing to rely upon a second entity to execute a set of actions and/or to make a set of assertions about a set of subjects and/or scopes.”

WS-Trust defines a context and security model in which web services can exchange security tokens (i.e., credentials) and communicate claims about service requesters to give service providers the information they need to determine whether to respond to a request. The need to broker trust between requesters and providers reflects the practical reality that not all service requesters can be known to all providers, so WS-Trust enables a service provider to require that a requester present certain information, such as assertions about its authentication and its authorization to request the service being provided. If the requester cannot present that information in a security token acceptable to the provider, it can ask an authorized third party (one trusted by, and perhaps specified by, the provider) to evaluate it and, assuming that evaluation is successful, issue a security token with the appropriate claims for the requester to present to the provider. Because different providers are free to determine their own requirements as to what claims must be presented for authentication and authorization, authorized third-party “security token services” provide the mechanism for negotiating and validating claims among different parties and in different contexts.
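To make the exchange concrete, the sketch below models the token-issuance step in plain Python rather than in the SOAP/XML messages the WS-Trust specification actually defines; the class, method, and claim names (SecurityTokenService, issue_token, authorized_action, and so on) are illustrative assumptions of the sketch, not elements of the standard.

    import hashlib
    import hmac
    import json
    import time

    class SecurityTokenService:
        """Illustrative stand-in for a WS-Trust security token service (STS)."""

        def __init__(self, signing_key: bytes):
            self._signing_key = signing_key

        def issue_token(self, requester_id: str, credentials: str):
            # The STS applies its own evaluation criteria; a trivial credential
            # check stands in here for whatever vetting a real STS performs.
            if credentials != "valid-credentials":
                return None
            claims = {
                "subject": requester_id,
                "authenticated": True,
                "authorized_action": "invoke-service",
                "issued_at": int(time.time()),
            }
            payload = json.dumps(claims, sort_keys=True).encode()
            signature = hmac.new(self._signing_key, payload, hashlib.sha256).hexdigest()
            return {"claims": claims, "signature": signature}

    # A requester unknown to the provider asks the STS (a third party the
    # provider trusts) for a token asserting the claims the provider requires.
    sts = SecurityTokenService(signing_key=b"key-shared-with-provider")
    token = sts.issue_token("requester-42", "valid-credentials")
    print(token)

The requester then presents the returned token to the provider in place of credentials the provider could evaluate directly.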
The “trust” in the WS-Trust security model is really two trust relationships: one established between the service provider and whatever third parties it authorizes to evaluate requesters and issue tokens to them, and the other between the requester and the third-party security token service (STS). From the service provider’s perspective, the strength of the trust in the service requester brokered by the STS is a function of the evaluation criteria or other basis by which the STS determines that a given requester should be issued a token and, in turn, be allowed to invoke the provider’s service. This is precisely the locus of trust found in any centralized trust model in which all parties to an exchange establish trust relationships with a central authority rather than with each other. In such models, if the claims associated with the tokens the STS issues to requesters make assertions about things like identity, the reason for requesting the service, and permission to do so, then the provider must understand and accept the basis on which the STS grants those tokens before it can meaningfully be said to trust the requester.
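Continuing the same illustrative sketch, a provider-side check might look like the following. The point it is meant to show is that the provider verifies only the STS's signature and the presence of the claims it requires, so its trust in the requester can never be stronger than the criteria the STS applied when issuing the token; the shared-key signature scheme and function name are assumptions of the sketch, not mechanisms mandated by WS-Trust.

    import hashlib
    import hmac
    import json

    def provider_accepts(token: dict, sts_key: bytes) -> bool:
        # Accept a request only if the presented token was issued by an STS the
        # provider trusts and carries the claims the provider's policy requires.
        payload = json.dumps(token["claims"], sort_keys=True).encode()
        expected = hmac.new(sts_key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, token["signature"]):
            return False  # not issued by the trusted STS
        claims = token["claims"]
        # The provider never evaluates the requester directly; it relies
        # entirely on the assertions the STS chose to make.
        return bool(claims.get("authenticated")) and claims.get("authorized_action") == "invoke-service"

    # Using the token issued in the previous sketch:
    print(provider_accepts(token, b"key-shared-with-provider"))  # True only because the STS vouched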
Willful or inadvertent misuse of systems by authenticated and authorized users can and does occur, and it is important to address the potential for such misuse and mitigate it to the extent possible in order to maximize the reliability of a given system used for a given purpose. In business contexts or industry domains where highly sensitive data such as financial information or health records is concerned, it may not be possible to broker trust among the parties to an information exchange unless there is a way to evaluate the trustworthiness of those parties, not just validate their identity, role, or organizational affiliation.
Another limitation of a conception of trust based on reliability is the fact that a perfectly reliable system can deliver erroneous or inaccurate information, whether through the actions of a user or because the information stored, processed, or transmitted by the system has poor integrity, so accessing information from a “trusted system” in the technical sense does not in and of itself make the information trustworthy. This general concept of valid (in the sense of well-formed or conforming) information flows that deliver untrustworthy information is addressed in greatest detail in the context of the Byzantine Generals problem, which describes a situation often referred to colloquially as Byzantine failure; systems that resist failures of this type are said to be Byzantine fault tolerant. The key trust issues raised by the Byzantine Generals problem highlight the importance of knowing the trustworthiness of the users of a system, not just the system itself, and the criticality of ensuring data integrity, particularly when the data being transmitted or accessed is intended to support decision making.
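The rough sketch below, in plain Python with hypothetical names, illustrates the distinction: a perfectly reliable channel faithfully delivers whatever a faulty or dishonest source supplies, while cross-checking several independent sources and taking the majority can outvote a bounded number of lying sources. The simple vote is only a gesture at the underlying idea; full Byzantine agreement protocols impose substantially stronger requirements on the number of participants and the rounds of communication among them.

    from collections import Counter

    def reliable_channel(value: int) -> int:
        # A "trusted" channel in the reliability sense: it delivers exactly
        # what it was given, whether or not that value is truthful.
        return value

    def majority_value(reports: list) -> int:
        # Take the value reported by the most sources. With 2f + 1 independent
        # reporters of a single value, up to f faulty reporters can be outvoted.
        return Counter(reports).most_common(1)[0][0]

    true_reading = 72
    honest_reports = [true_reading, true_reading]  # two honest sources
    faulty_reports = [999]                         # one Byzantine source lies
    delivered = [reliable_channel(v) for v in honest_reports + faulty_reports]

    # Every value was delivered reliably, yet one of them is wrong; only the
    # cross-check restores a trustworthy result in this simple case.
    print(delivered)                  # [72, 72, 999]
    print(majority_value(delivered))  # 72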