Friday, August 6, 2010

Limits of Software Evaluation Assurance

It is very common to test applications for Software Quality Assurance, but there is also Software Evaluation Assurance, which defines the software assurance level and deserves equivalent consideration. These levels are of importance to all QA engineers. The levels start with:

EAL1 - Functionally tested - It suffices to ensure that the functionality is consistent with the documentation. That is what most QA engineers test and verify.

At EAL2 - Structurally tested - Here the users require a low to moderate level of independently assured security in the absence of a complete development record (e.g. legacy systems).

At EAL3 - Methodically tested and checked - Users require a moderate level of independently assured security and a thorough investigation of the TOE (Target of Evaluation) and its development, without substantial re-engineering.

At EAL4 - Methodically designed, tested and reviewed - Users require a moderate to high level of independently assured security in a conventional commodity TOE and are prepared to incur additional security-specific engineering costs. Examples include the Windows and Linux operating systems.

EAL5 is Semi-formally designed and tested, EAL6 is Semi-formally verified design and tested, and EAL7 is Formally verified design and tested. EAL7 is used for devices such as file systems, USB devices, and graphics and networking cards.

So if a piece of software is at EAL4, then EAL2 would also be required, and it would be part of the development process. Are there any automation tools that can validate beyond EAL1? At EAL7 one would assume that the testing model is mathematically proven and guaranteed to work. That is definitely an ideal case. In some systems-tools development projects it may be possible to develop custom test automation software to do verification that guarantees the outcome.
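The cumulative nature of the scale (a product at EAL4 also subsumes the requirements of EAL1 through EAL3) can be sketched in a few lines of Python. The `EAL_LEVELS` mapping and `required_levels` helper below are illustrative names, not part of any real evaluation toolchain:

```python
# Hypothetical sketch: the EAL scale is cumulative, so a product
# evaluated at a given level must also satisfy all lower levels.

EAL_LEVELS = {
    1: "Functionally tested",
    2: "Structurally tested",
    3: "Methodically tested and checked",
    4: "Methodically designed, tested and reviewed",
    5: "Semi-formally designed and tested",
    6: "Semi-formally verified design and tested",
    7: "Formally verified design and tested",
}

def required_levels(target: int) -> list[str]:
    """All assurance levels subsumed by a target EAL (cumulative scale)."""
    return [EAL_LEVELS[n] for n in range(1, target + 1)]

# A product targeting EAL4 carries the obligations of EAL1-EAL4.
print(required_levels(4))
```

This also answers the first question in part: automation tools are a natural fit for EAL1-style functional checks, while the higher levels demand evidence about design and development process that automation alone cannot produce.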

Using data integrity and data validity as drivers for data in Testing

Every time there is a need for functional testing of a software application, the task at hand is clearly to verify that the application performs as per the specifications. These specs are prepared by the Business Analyst and/or Customer and the other stakeholders defining the requirements.

The data on which this application will typically run when in use cannot be fully defined beforehand, because that is essentially what the customer is modeling to generate the product/application/service. We attempt to go a step further in requirements analysis, where the predictive modeling that defines the requirements is driven by the inefficiencies found in the current working set of data.

In trying to ensure the data integrity and data validity of the delivered application, we then ask: what could go wrong? The goal here is that data consistency and data integrity be maintained; only then will the modeling be successful. A typical example: a customer table has 10 fields, and all fields must be entered in order to generate a graph on a certain column. This example ensures that the modeling requirement is acceptable. It is also a top priority for UAT (user acceptance testing).
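The customer-table example above can be sketched as a simple validity gate: a graph may only be generated when every record has all required fields populated. The field names and the `validate_record` / `can_generate_graph` helpers are hypothetical, for illustration only:

```python
# Illustrative sketch: every field of a customer record must be
# populated before a graph can be generated on any column.

CUSTOMER_FIELDS = [f"field_{i}" for i in range(1, 11)]  # 10 required fields

def validate_record(record: dict) -> list[str]:
    """Return the names of required fields that are missing or empty."""
    return [f for f in CUSTOMER_FIELDS if record.get(f) in (None, "")]

def can_generate_graph(records: list[dict]) -> bool:
    """Graphing is allowed only when every record passes validation."""
    return all(not validate_record(r) for r in records)

complete = {f: "value" for f in CUSTOMER_FIELDS}
partial = dict(complete, field_7="")           # one empty field
print(can_generate_graph([complete]))          # True
print(can_generate_graph([complete, partial])) # False
```

In a UAT setting, the list returned by `validate_record` doubles as the error message shown to the user, which makes the integrity requirement directly testable.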

A very appropriate explanation of the difference between Data Integrity and Data Validity can be found here.

We need to assume that the customer will eventually use their own data, although that data is not shared 100% upfront most of the time. In such cases it is essential that all data types be represented in the testing dataset, in addition to the 100% coverage dataset. A brief primer on Data Modeling can be found here.
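One way to put this into practice is to extend a base coverage dataset with one row per expected data type for a column under test, since the customer's real data is unknown upfront. The `TYPE_SAMPLES` values and `build_test_dataset` helper below are assumptions made for the sketch:

```python
# A minimal sketch: augment a coverage dataset so that every expected
# data type (including the missing-value case) is exercised for a column.

TYPE_SAMPLES = {
    "integer": 42,
    "float":   3.14,
    "string":  "sample text",
    "boolean": True,
    "date":    "2010-08-06",  # ISO-8601 string stand-in for a date type
    "empty":   None,          # missing-value case
}

def build_test_dataset(base_rows: list[dict], column: str) -> list[dict]:
    """Append one row per sample type so every typed value is exercised."""
    typed_rows = [dict(base_rows[0], **{column: v})
                  for v in TYPE_SAMPLES.values()]
    return base_rows + typed_rows

rows = [{"name": "Alice", "age": 30}]
dataset = build_test_dataset(rows, "age")
print(len(dataset))  # 1 base row + 6 typed rows = 7
```

Rows that the application should reject (for example a string in a numeric column) then become negative test cases, which is exactly where data-validity checks earn their keep.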