How Open Testing Standards Can Improve Security


Networks are a complex collection of components defined by many different standards. These standards help solve network problems ranging from security to performance and usability.

An open standard is a publicly available standard that can be used in a variety of ways to deploy a secure solution on a network. Readers of open security standards use them to understand how a technology might help address security problems on the network. Implementers of open standards can create solutions to address documented security issues. Network operators read standards to understand how the different implementations work together to form a complete security solution.

These network solutions often come from different sources, which leads to a variety of testing procedures and methodologies designed to ensure that network components support all the security and performance requirements of network users. Since the majority of these standards are open, it would make sense for the testing methods to be open as well. But often this isn't the case, and I think it should be.

The Case for Open Security Testing Standards
The argument I often hear against open testing standards is that network component engineers can see the test and build a solution based on the known criteria. To use a grade school analogy, this seems like cheating: the test questions are known in advance, making it possible for the component's engineers to tune their products to pass the test. But if the tests provide full coverage of the security features a network operator wants, it doesn't matter that the engineers know what is being tested. The outcome of the testing will be a network component that demonstrates compliance with the full set of test cases. By creating an open testing environment, network component engineers can build a solution that meets the network operators' requirements.

When creating security metrics, it's critical that test methodologies cover multiple scenarios to ensure that devices perform as expected in all environments. For security test methodologies, it may be necessary to randomize input parameters in order to cover all use cases and to detect devices that have been tuned to meet the needs of the test cases rather than the needs of real use. For example, when measuring whether a firewall detects CVEs, it's important to run a traffic mix containing vulnerabilities to ensure the device detects and blocks attacks under a variety of conditions.
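To make that concrete, here is a minimal Python sketch of how an open methodology might randomize its inputs while staying reproducible. The CVE IDs, traffic profiles, and the run_detection_test() stub are hypothetical placeholders, not part of any published standard.

```python
# Minimal sketch (hypothetical names): randomizing input parameters for an
# open firewall test so a device can't be tuned to one fixed, known traffic mix.
import random
from dataclasses import dataclass

@dataclass
class TestCase:
    cve_id: str          # vulnerability exercised in this iteration
    benign_mix: dict     # approximate share of each background protocol
    load_mbps: int       # offered background load while the attack runs

def build_randomized_cases(seed: int, iterations: int) -> list[TestCase]:
    """Generate a reproducible but non-fixed set of test cases.

    Publishing the generator keeps the methodology open, while varying the
    seed between runs prevents tuning a device to one known traffic mix.
    """
    rng = random.Random(seed)
    cves = ["CVE-2021-44228", "CVE-2017-0144", "CVE-2019-19781"]  # example IDs
    protocols = ["http", "dns", "smtp", "tls"]
    cases = []
    for _ in range(iterations):
        weights = [rng.random() for _ in protocols]
        total = sum(weights)
        cases.append(TestCase(
            cve_id=rng.choice(cves),
            benign_mix={p: round(w / total, 2) for p, w in zip(protocols, weights)},
            load_mbps=rng.choice([0, 100, 1000, 5000]),
        ))
    return cases

def run_detection_test(case: TestCase) -> bool:
    """Placeholder: drive a traffic generator and return True if the
    device under test blocked the attack embedded in the mix."""
    raise NotImplementedError

if __name__ == "__main__":
    for case in build_randomized_cases(seed=42, iterations=5):
        print(case)  # in a real harness: result = run_detection_test(case)
```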

Another advantage of open testing standards is that they give users and network operators the ability to see what security tests are performed and how they are run. Knowing which security test cases are being executed allows the operator to confirm that the testing meets specific requirements. If it doesn't, they can add additional tests.
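As a rough illustration, the sketch below (using invented test IDs and requirements) shows how an operator might check a published test catalog against their own requirements and flag the gaps that call for extra tests.

```python
# Minimal sketch (hypothetical names): checking an open test catalog against an
# operator's own security requirements and flagging where extra tests are needed.
OPEN_TEST_CATALOG = {
    "TC-001": "blocks known CVEs in HTTP traffic",
    "TC-002": "detects TLS downgrade attempts",
    "TC-003": "logs and drops malformed DNS queries",
}

# Requirements the operator cares about, mapped to catalog test IDs (an empty
# list means the published methodology has no matching case).
OPERATOR_REQUIREMENTS = {
    "block CVE exploits":        ["TC-001"],
    "detect protocol downgrade": ["TC-002"],
    "inspect encrypted traffic": [],   # gap: not covered by the open standard
}

def find_coverage_gaps(requirements: dict, catalog: dict) -> list[str]:
    """Return requirements with no corresponding test case in the open catalog."""
    return [req for req, cases in requirements.items()
            if not any(tc in catalog for tc in cases)]

if __name__ == "__main__":
    for gap in find_coverage_gaps(OPERATOR_REQUIREMENTS, OPEN_TEST_CATALOG):
        print(f"no open test case covers: {gap} -> add a local test or "
              f"propose one to the standards body")
```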

Creating a Feedback Loop
If there is an organization responsible for maintaining the standard, operators can feed those gaps back so that future versions cover the missing areas and the network operator won't have to run additional testing. Knowing how network components are tested also lets network operators and users better understand the meaning of results, because results alone often don't give enough context about the conditions under which the component was tested. For example, it's important to understand whether a device passes security tests when there is no load but fails to detect attacks when it's under load.
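The sketch below, using invented placeholder numbers, shows one way a harness could report detection rates alongside the load conditions they were measured under, so that context travels with the result.

```python
# Minimal sketch (hypothetical data): reporting detection results together with
# the load conditions they were measured under, so "pass" is never reported
# without the context that gives it meaning.
from dataclasses import dataclass

@dataclass
class Result:
    load_mbps: int
    attacks_sent: int
    attacks_blocked: int

def detection_rate(r: Result) -> float:
    return r.attacks_blocked / r.attacks_sent

# Placeholder numbers: a device that looks perfect when idle but degrades
# under load, which a context-free "passed security tests" claim would hide.
results = [
    Result(load_mbps=0,    attacks_sent=100, attacks_blocked=100),
    Result(load_mbps=1000, attacks_sent=100, attacks_blocked=97),
    Result(load_mbps=5000, attacks_sent=100, attacks_blocked=71),
]

for r in results:
    print(f"load={r.load_mbps:>4} Mb/s  detection={detection_rate(r):.0%}")
```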

Open testing standards also make it possible to compare security results from different networking providers, which increases transparency into testing methodologies and leads to better decision-making. In other words, open testing standards provide an "apples to apples" comparison opportunity. In security performance testing, for example, the results of a bandwidth test on a firewall can change greatly based on the security features that are enabled. If no open standard exists to specify that information, a user might be looking at results for two different implementations and not realize that the numbers differ because different features were enabled.
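As a simple illustration with made-up figures, the sketch below attaches the enabled feature set to each throughput result so that two reports are only compared when their configurations actually match.

```python
# Minimal sketch (hypothetical data): attaching the enabled feature set to a
# throughput result so two firewall reports are only compared when their
# configurations match.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ThroughputResult:
    vendor: str
    throughput_gbps: float
    enabled_features: frozenset = field(default_factory=frozenset)

def comparable(a: ThroughputResult, b: ThroughputResult) -> bool:
    """Results are apples-to-apples only if the same features were enabled."""
    return a.enabled_features == b.enabled_features

# Placeholder results: the raw numbers alone suggest vendor A is faster, but
# the feature sets show the two tests measured different things.
a = ThroughputResult("vendor A", 9.5, frozenset({"ips"}))
b = ThroughputResult("vendor B", 6.2, frozenset({"ips", "tls_inspection", "av"}))

if comparable(a, b):
    print("fair comparison:", a.throughput_gbps, "vs", b.throughput_gbps)
else:
    print("not comparable: different security features enabled:",
          a.enabled_features ^ b.enabled_features)
```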