NITRD Tailored Trustworthy Spaces program suggests avenues for research

Among the areas announced as federal cybersecurity research priorities for the Networking and Information Technology Research and Development (NITRD) program is an initiative intended to promote the development of “tailored trustworthy spaces”: trust domains that not only reflect the different requirements associated with different computing uses and contexts, but do so flexibly, so that the functional and technical provisions for a given trust domain can adapt as its underlying requirements change. The basic idea is to express (presumably in electronic, machine-readable form) both the security requirements applicable to a given situation and the policies and provisions in place to meet those requirements, so that a prospective user of the space can compare what the space provides against what the user needs in order to “trust” the space.
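To make that idea a bit more concrete, here is a minimal sketch (in Python, purely for illustration) of what such a machine-readable comparison might look like. The attribute names, values, and matching rules are my own assumptions, not anything the NITRD program has specified:

```python
# Hypothetical sketch: a "tailored trustworthy space" declares its security
# provisions as machine-readable attributes, a prospective user declares
# requirements in the same vocabulary, and a simple check reports any gaps.
# All attribute names and values below are illustrative assumptions.

space_provisions = {
    "encryption_at_rest": True,
    "audit_logging": True,
    "data_residency": "US",
    "authentication_strength": 3,   # e.g., multi-factor
}

user_requirements = {
    "encryption_at_rest": True,
    "data_residency": "US",
    "authentication_strength": 2,
}

def unmet_requirements(provisions, requirements):
    """Return the list of requirements the space does not satisfy (empty if it qualifies)."""
    unmet = []
    for attribute, required in requirements.items():
        provided = provisions.get(attribute)
        # Check booleans before numbers, since bool is a subtype of int in Python.
        if isinstance(required, bool) and provided is not required:
            unmet.append(attribute)
        elif isinstance(required, (int, float)) and (provided is None or provided < required):
            unmet.append(attribute)
        elif isinstance(required, str) and provided != required:
            unmet.append(attribute)
    return unmet

print(unmet_requirements(space_provisions, user_requirements))  # [] means the space meets this user's needs
```

The point of the sketch is only that once both sides of the comparison are expressed in a common, structured vocabulary, the “do I trust this space?” question can at least partly be evaluated automatically and re-evaluated whenever the requirements or the provisions change.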

As noted previously, the connotation of the word “trusted” in a computing context is somewhat different and more limited than in personal, organizational, or sociological contexts, since the idea of a trustworthy system tends to focus on the reliability of the system itself (meaning it performs in the ways it is expected to) and perhaps on the security protections applied to the environment in which it operates. Major technology industry and vendor initiatives such as Microsoft’s Trustworthy Computing reflect this emphasis (although to be fair, Microsoft does acknowledge the importance of business practices) in the sense that what they purportedly strive to deliver is software and systems with sufficient security, privacy, and reliability to avoid undermining trust in a given operational domain through the introduction of vulnerabilities or poorly performing systems. A broader scope might actually be useful when talking about decisions about whether or not to trust a given system, including what the would-be truster knows about the provider, host, manager, or other users of the system. The shortcomings of this emphasis on the trustworthiness of the system considered completely independently of whatever entity runs the system (or on whose behalf it is run) are increasingly relevant for current technical paradigms like cloud computing and in industry-specific contexts such as health information technology.

Without specifically addressing the limitations of conventional connotations of trust in computing, the recommendations published by NITRD and its Cyber Security and Information Assurance (CSIA) Interagency Working Group look to tailored trustworthy spaces to “establish trust between systems based on verifiable information that test the limits of traditional trust policy articulation and negotiation methods, raising the bar for highly dynamic human understandable and machine readable assured policies.” One way to “test the limits of traditional trust policy” would be to add assertions about the trustworthiness of the providers of a system, presumably with a structured way to express both the assertions (claims) related to trustworthiness and the basis of trust required for different users in different contexts.
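For illustration only, a structured expression of provider-level claims and context-specific bases of trust might look something like the following sketch; the claim names, asserting parties, and acceptance criteria are hypothetical assumptions, not drawn from the NITRD recommendations:

```python
# Hypothetical sketch: provider-level trust assertions (claims) evaluated
# against different users' bases of trust. Every claim name, verifier, and
# threshold here is an illustrative assumption.

provider_claims = [
    {"claim": "soc2_type2_audit", "asserted_by": "ExampleAuditor", "verifiable": True},
    {"claim": "breach_free_24_months", "asserted_by": "provider_self", "verifiable": False},
    {"claim": "hipaa_business_associate_agreement", "asserted_by": "provider_self", "verifiable": True},
]

# Different users in different contexts require a different basis of trust.
trust_bases = {
    "health_it_user": {
        "required_claims": {"soc2_type2_audit", "hipaa_business_associate_agreement"},
        "accept_unverifiable": False,
    },
    "casual_user": {
        "required_claims": {"soc2_type2_audit"},
        "accept_unverifiable": True,
    },
}

def acceptable(claims, basis):
    """A provider is acceptable if every required claim is present and usable under the basis of trust."""
    usable = {c["claim"] for c in claims if c["verifiable"] or basis["accept_unverifiable"]}
    return basis["required_claims"] <= usable

for context, basis in trust_bases.items():
    print(context, "->", acceptable(provider_claims, basis))
```

The design choice worth noticing is that the same set of provider claims can satisfy one user’s basis of trust and fail another’s, which is exactly the kind of context-dependent evaluation the tailored trustworthy spaces concept seems to call for.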

It’s fair to ask whether this and other “game-changing” research priorities are really the best use of billions of federal information security dollars, but the potential impact of extending the technical ability to establish and manage computing across multiple trust domains goes well beyond systems security and privacy protections alone. As government and industry wrestle with challenges like how to get health care providers to adopt electronic health records and related technology (and get the public to trust their use of them), whether government agencies and commercial companies can trust public cloud providers, and how to satisfy varying international privacy laws while improving anti-terrorism efforts, the vision for tailored trustworthy spaces seems to offer a lot of potential. For addressing these and other challenges, the most interesting of the areas identified for future research include “trust negotiation tools and data trust models to support negotiation of policy” and “data protection tools, access control management, monitoring and compliance verification mechanisms to allow for informed trust of the entire transaction path.” Concerns about the ability to perform such negotiation, monitoring, and verification activities (at least in efficient ways) are routinely cited in contemporary online communication and information exchange contexts. As these exchanges increasingly involve multilateral interactions among different types of organizational entities with different needs, biases, and risk tolerances, the absence of effective mechanisms to evaluate trustworthiness and negotiate the acceptable parameters of a trust relationship will remain a barrier to success.
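As a purely hypothetical sketch of what automated policy negotiation might involve, each party could state an acceptable range for a given parameter, with negotiation succeeding only where the ranges overlap; the parameter names and ranges below are illustrative assumptions rather than anything proposed in the research agenda:

```python
# Hypothetical sketch of simple policy negotiation: each party declares an
# acceptable (low, high) range per parameter, and agreement is the overlap.
# Parameter names and ranges are illustrative assumptions.

party_a_policy = {"retention_days": (30, 90), "audit_frequency_days": (1, 30)}
party_b_policy = {"retention_days": (60, 365), "audit_frequency_days": (7, 90)}

def negotiate(policy_a, policy_b):
    """Return agreed parameter ranges, or None if any shared parameter cannot be reconciled."""
    agreed = {}
    for parameter in policy_a.keys() & policy_b.keys():
        low = max(policy_a[parameter][0], policy_b[parameter][0])
        high = min(policy_a[parameter][1], policy_b[parameter][1])
        if low > high:
            return None  # no overlap: negotiation fails
        agreed[parameter] = (low, high)
    return agreed

print(negotiate(party_a_policy, party_b_policy))
# e.g. {'retention_days': (60, 90), 'audit_frequency_days': (7, 30)} (key order may vary)
```

Real negotiation across organizational boundaries would obviously involve far richer policy semantics, but even this toy version shows why a shared, machine-readable policy vocabulary is a precondition for the negotiation, monitoring, and verification tools the research agenda envisions.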