Guide for Evaluator in CC Evaluation: Tips and Tricks Malaysia Lab Style (A31b)
Common Criteria has long been known as the platform through which developers, sponsors, labs, certification bodies and consumers exchange ideas, work together, standardize processes and, most of all, get IT products and systems endorsed through a proper methodology. Although CC defines a standardized process for evaluating IT products and systems and establishes a common understanding across the broad IT industry, from the evaluator's perspective there are still gaps in understanding how evaluation activities are carried out. Every evaluator and tester involved in CC projects applies the same methodology, but reaching sound decisions on findings, verdicts and analysis, and understanding the IT products and systems well enough to judge whether they fulfil the criteria defined by the CEM, has not become easier as technology has evolved. In response to these concerns, the Malaysian CC test lab has drafted an "Evaluator Guidance" document to help evaluators and testers in the lab ensure that their verdicts are precise, justified and valid for each evaluation finding. The "Evaluator Guidance" is designed as a guideline for evaluators and testers in CSM MySEF, assisting them in further understanding each requirement, statement and work unit defined in the CEM. It serves as a supporting document to CC Part 1, Part 2, Part 3 and the CEM, allowing evaluators and testers to resolve uncertainty about evaluation requirements, to substantiate their verdicts on CEM work units and, most of all, to justify each finding from the standpoint of repeatability and reproducibility.
This presentation will demonstrate how the "Evaluator Guidance" document supports and assists evaluators and testers in the test lab: it leads to a better understanding of the evaluation requirements, shortens project timelines, improves the drafting of test plans, and helps testing to be executed with clearer test scenarios relevant to the TOE's technology, among other improvements. It will also show how the document can function either as an evaluation toolkit or simply as a means of clarification, depending on the evaluator's level of experience and competence in CC evaluation. Compared with years of training, CC-related training programmes and accumulated experience, this document is intended to benefit evaluation resources at every level of interest.