A need for more meaningful security testing
The recently released fiscal year 2008 report to Congress on FISMA implementation once again highlights government-wide progress in meeting certain key objectives for federal information systems. Among these are the periodic testing of security controls, required for every system in an agency’s FISMA system inventory under the “Agency Program” requirements of the law (Pub. L. 107-347 §3544(b)(5)), and an annual independent evaluation of “the effectiveness of information security policies, procedures, and practices” (Pub. L. 107-347 §3545(a)(2)(A)) for a representative subset of an agency’s information systems. The FY2008 report indicates that security controls were tested for 93 percent of the 10,679 FISMA systems across the federal government, a slight decrease from the 95 percent rate in fiscal 2007, but still a net increase of 142 systems tested.

This sounds pretty good except for one small detail: there is no consistent definition of what it means to “test” security controls, and no prescribed standard under which independent assessments are carried out. Given the pervasive emphasis on control-compliance auditing in the government, such as the use of the Federal Information System Controls Audit Manual (FISCAM), too much attention still goes to verifying that security controls are in place rather than checking that they are performing their intended functions.
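As a back-of-the-envelope check, the reported percentages and the 142-system net increase hang together arithmetically. The implied FY2007 inventory below is not stated in the report; it is derived here purely for illustration:

```python
# Sanity check of the FY2008 FISMA testing figures.
# Reported: 93% of 10,679 systems tested in FY2008, a 95% rate in FY2007,
# and a net increase of 142 systems tested year over year.

fy2008_inventory = 10_679
fy2008_rate = 0.93
fy2007_rate = 0.95
net_increase = 142

tested_2008 = round(fy2008_inventory * fy2008_rate)  # systems tested in FY2008
tested_2007 = tested_2008 - net_increase             # systems tested in FY2007

# Implied FY2007 inventory (derived, not reported)
implied_2007_inventory = round(tested_2007 / fy2007_rate)

print(tested_2008, tested_2007, implied_2007_inventory)
# → 9931 9789 10304
```

In other words, roughly 9,900 systems were tested in FY2008 against an inventory that grew by a few hundred systems, which is how a lower testing rate can still yield more systems tested.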
As the annual debate resurfaces over the effectiveness (or lack thereof) of FISMA in actually improving the security posture of the federal government, there will presumably be more calls to revise the law so that it places less emphasis on documentation and more on making the government measurably more secure. The generally positive tone of the annual FISMA report is hard to reconcile with the 39 percent year-over-year growth in security incidents reported to US-CERT by federal agencies (18,050 in 2008 vs. 12,986 in 2007).

There is certainly an opportunity for security-minded executives to shift some resources from security paperwork exercises to penetration testing or other meaningful IT audit activities. This would align well with efforts already underway at some agencies to move toward continuous monitoring and assessment of systems, and away from the current practice of comprehensive documentation and evaluation only once every three years under federal certification and accreditation guidelines. Insufficient funding is often cited as a reason for not performing more formal internal and external security assessments such as penetration tests. Yet the current FISMA report suggests that security resources may not be applied appropriately — according to business risk, system sensitivity or criticality, or similar factors — since the rate of security control testing is the same for high and moderate impact level systems, and only slightly lower (91 percent vs. 95 percent) for low impact systems. With just under 11 percent of all federal information systems categorized as “high” impact, agencies might sensibly start with those systems as the focus for more rigorous security control testing, and move forward from there.
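The 39 percent incident-growth figure cited above follows directly from the US-CERT numbers:

```python
# Year-over-year growth in security incidents reported to US-CERT
# by federal agencies, per the figures cited in the report.
incidents_2007 = 12_986
incidents_2008 = 18_050

growth = (incidents_2008 - incidents_2007) / incidents_2007
print(f"{growth:.0%}")
# → 39%
```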