As discussed in my earlier articles, software is no longer just a hidden component within a product; it is the component that distinguishes a product in a crowded marketplace. This means software occupies a very significant portion of overall product development, as is evident from the fact that the software in a modern high-end car is far larger than the software in a space shuttle or a commercial airliner.
Current Software Testing Trends
Software is not only growing in size but also in complexity, which makes it all the more important to test software and identify potential bugs/defects before the product is released, and to detect such defects earlier in the life cycle to reduce the total cost and resources required to fix them. We've already discussed how we can use software-based solutions to accelerate software development rather than waiting for expensive hardware or the end target to become available to engineers.
But we've not yet addressed a major portion of software development practice in our industry, where software testing, specifically unit testing, is delegated to another organization, mostly services-based IT companies. This model is very popular because the cost and time of in-house unit testing are traditionally high due to the inherent nature of unit testing: it is a resource-intensive process. The drawback of such a model is that white-box or unit testing is performed by engineers who did not develop the code and who are not briefed on what a particular piece of software does in the overall system. This issue is compounded in the majority of cases where the third-party test engineers' only source of requirements is poorly written comments in the code.
Safety-critical software development standards, from industrial to automotive, all stress requirement-based testing, whether the software is part of a low or high safety-critical component. Yet this is neglected in the majority of teams, and with low-level requirements documents missing, the throughput of third-party white-box testing suffers. This also means that many critical bugs seep through the unit testing phase, and the cost of fixing them increases: bugs that could have been found during coding or unit testing are now uncovered during system test or field trials, where they are far more expensive to fix.
A few methods to test software without requirements
Method #1
Work with whatever documentation you can get your hands on. It could be a basic backlog (if using agile), help files, Doxygen documents, an older version of the BRD (Business Requirements Document) or FRD (Functional Requirements Document), or older test cases. Investigate and ask around; there will almost always be some documented trail, even if it is a thin one. If this does not work out in your case, do not discount your own experience as a software user.
Method #2
You can use an older or the current version of the application as a reference for testing the future release of the software product. I admit this violates a basic testing rule, i.e. "Never write test cases using the application as a reference." However, when we are working in a less-than-perfect situation, we have to mold the rules to fit our needs. We should also be aware of the following aspects when using the older or current software as a reference (a small sketch follows these notes):
The application might contain bugs, so never assume that its behavior is correct, and do not take the application for granted as an example of correct functionality.
Use your experience, take the help of the application just to give you a jump start, and always stay critical even when it appears to be working.
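To make this concrete, here is a minimal characterization-test sketch in C. The function compute_discount and the recorded values are hypothetical, not taken from any real project: the idea is to record the outputs of the older release for a set of inputs and flag any difference produced by the new release, which is then reviewed rather than automatically treated as a bug.

```c
#include <stdio.h>

/* Hypothetical unit under test from the new release
 * (placeholder body so the sketch compiles stand-alone). */
static int compute_discount(int order_value)
{
    return (order_value >= 999) ? 50 : order_value / 20;
}

/* Expected outputs recorded by running the SAME inputs
 * against the older/current release (the reference build). */
struct golden { int input; int expected; };
static const struct golden recorded[] = {
    {   0,  0 },
    { 100,  5 },
    { 999, 50 },
};

int main(void)
{
    for (size_t i = 0; i < sizeof(recorded) / sizeof(recorded[0]); ++i) {
        int actual = compute_discount(recorded[i].input);
        if (actual != recorded[i].expected) {
            /* A mismatch is not automatically a defect: the recorded value
             * may itself be wrong, so each difference must be reviewed. */
            printf("MISMATCH: input=%d old=%d new=%d\n",
                   recorded[i].input, recorded[i].expected, actual);
        }
    }
    return 0;
}
```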
Method #3
Discuss with the project team members and explore opportunities to attend their meetings to gain more knowledge about the application. If that is not possible, ask whether the development team can share their local unit and integration test results, and try to arrange knowledge transfer sessions to understand more about the workings of the software and its intended role.
Leveraging existing methodologies to improve the output of unit testing
In cases where the required specifications are not available, apart from the methods above, test engineers can still use existing testing methodologies to identify potential defects in the unit testing phase. Below are a few good testing practices that can help improve the quality of testing.
Code coverage
Code coverage provides additional insight into the testing activity. Using code coverage, test engineers can identify untested hot spots in the code and add tests for those parts to ensure all of the software is exercised thoroughly. There are multiple tools, both commercial and open source, which provide code coverage metrics. In cases where requirement-based tests are missing, a 100% code coverage metric at least ensures every part of the code is executed once, which gives more confidence in the software. Depending on the end product and industry compliance guidelines, there are different types of coverage metrics, such as statement coverage, branch coverage, Modified Condition/Decision Coverage (MC/DC), and, at the architectural level, function coverage and function call coverage. Depending on the industry segment, there are different code coverage criteria that organizations must satisfy in order to meet regulatory requirements before launching their product. Note that 100% code coverage does not mean the software is defect free; code coverage is an additional metric for testing completeness.
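To make the difference between these coverage types concrete, here is a minimal sketch in C. The function cruise_may_engage and its test values are hypothetical; the comments indicate which tests would already be enough for branch coverage of the decision and which additional tests MC/DC asks for.

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical unit: cruise control may engage only when the speed is
 * in range AND the brake pedal is released. */
static bool cruise_may_engage(int speed_kph, bool brake_pressed)
{
    if (speed_kph >= 40 && speed_kph <= 180 && !brake_pressed) {
        return true;
    }
    return false;
}

int main(void)
{
    /* Branch coverage: one test taking the true branch and one taking
     * the false branch of the decision would be enough. */
    assert(cruise_may_engage(100, false) == true);   /* true branch  */
    assert(cruise_may_engage(100, true)  == false);  /* false branch */

    /* MC/DC additionally requires showing that each condition can
     * independently change the decision, hence these extra tests: */
    assert(cruise_may_engage(30,  false) == false);  /* speed too low  */
    assert(cruise_may_engage(200, false) == false);  /* speed too high */

    return 0;
}
```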
Boundary Value Testing
Many times while writing code, developers focus only on the functional range of the input variables. This means code needs to be tested not only for the boundary values of the functional range but also for the boundaries of the underlying data type. This is especially useful in embedded software, where software interacts with sensors and actuators. For example, if the software under test reads a physical parameter via an ADC (Analog to Digital Converter) with a resolution of 10 bits, the functional range of the 10-bit ADC is 0 to 1,023, but if the engineer has used a 16-bit data type such as unsigned short, it can hold values up to 65,535. If the engineer has not taken the precaution of discarding values above the ADC's valid range, this will lead to unintended results. Another example is where an integer passed as an input parameter to a function is used as an index into an array. Testing the code with large (or negative) values usually exposes bugs if the developer has not validated the index before using it to access array elements.
Hence it is always recommended to add boundary value test cases in which test engineers verify functions or methods with the minimum, mid-range and maximum values of the input parameter's type and report any discrepancy found.
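As an illustration, here is a minimal boundary value test sketch in C based on the ADC example above. The function adc_to_percent and its clamping behavior are hypothetical; the tests exercise the minimum, mid-range and maximum of the functional range as well as values up to the maximum of the underlying 16-bit type.

```c
#include <assert.h>
#include <stdint.h>

#define ADC_MAX 1023u  /* functional range of a 10-bit ADC: 0..1023 */

/* Hypothetical unit under test: converts a raw ADC reading to a
 * percentage. Readings above the functional range are clamped. */
static uint8_t adc_to_percent(uint16_t raw)
{
    if (raw > ADC_MAX) {
        raw = ADC_MAX;  /* discard/clamp values outside the ADC's valid range */
    }
    return (uint8_t)((raw * 100u) / ADC_MAX);
}

int main(void)
{
    /* Boundaries of the functional range. */
    assert(adc_to_percent(0u)    == 0);    /* minimum   */
    assert(adc_to_percent(512u)  == 50);   /* mid-range */
    assert(adc_to_percent(1023u) == 100);  /* maximum   */

    /* Boundaries of the underlying data type (uint16_t). */
    assert(adc_to_percent(1024u)  == 100); /* just above functional range */
    assert(adc_to_percent(65535u) == 100); /* maximum of the 16-bit type  */

    return 0;
}
```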
Leveraging Stubs or Mocks
Stubs and mocks play a vital role in the white-box or unit testing phase. They make white-box testing easier by isolating the software under test from its other project dependencies, allowing engineers to focus on verifying their respective software unit. Although there is a fine difference between a stub and a mock, for simplicity we club them together here. Using stubs, test engineers can return invalid values to the software under test, which is useful for checking whether the software has adequate checks in place before those values are consumed, especially for stubbed dependencies whose return values are assigned to local or global variables. It can happen that, in the unit under test, the return value of a stubbed dependency is assigned to a variable of the wrong or a smaller data type, which results in loss of information.
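For illustration, below is a hand-written stub sketch in C that does not assume any particular tool; the names read_coolant_temp and set_fan_speed are hypothetical. The test configures the stub to return an out-of-range value and checks whether the unit under test validates it before use.

```c
#include <assert.h>
#include <stdint.h>

/* ---- Stub for the dependency (replaces the real driver in the test build) ---- */
static int16_t stub_temp_value;         /* value the test wants returned */
static int16_t read_coolant_temp(void)  /* hypothetical dependency       */
{
    return stub_temp_value;
}

/* ---- Unit under test (hypothetical): decides fan speed from temperature ---- */
static uint8_t set_fan_speed(void)
{
    int16_t temp = read_coolant_temp();
    /* Adequate check: reject readings outside the plausible range. */
    if (temp < -40 || temp > 150) {
        return 0;            /* fail safe: fan off / default behavior */
    }
    return (temp > 90) ? 100 : 30;
}

int main(void)
{
    /* Normal value returned through the stub. */
    stub_temp_value = 95;
    assert(set_fan_speed() == 100);

    /* Invalid value injected through the stub: does the unit validate it? */
    stub_temp_value = 30000;
    assert(set_fan_speed() == 0);

    return 0;
}
```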
There are many commercially available test automation tools with built-in support for stubs/mocks; VectorCAST, Tessy, Parasoft and LDRA are a few prominent test automation tools from the safety-critical industries. For other testing frameworks, there are add-on mocking frameworks such as GMock, HippoMocks, FakeIt, Isolator++, etc.
Error Guessing and Fault Injection
An experienced test engineer can use his or her domain knowledge and past project experience to add a few additional test cases based on error guessing, to identify potential faults in the software under test. Additionally, if the testing tool supports injecting spurious inputs into the software under test, such tests should be added; otherwise, the test engineer can use a debugger to inject spurious values and measure the response of the software. Fault injection tests are also very important in the integration or system testing phase, as these spurious values travel through different parts of the software and can trigger exceptions or faulty results.
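Here is a minimal fault injection sketch in C, again using a hand-written stub; the names storage_read and load_config are hypothetical. The test forces the stubbed dependency to report a failure so that we can verify the unit under test takes its error path instead of consuming unreliable data.

```c
#include <assert.h>
#include <string.h>

/* ---- Stub for a storage dependency, with an injectable fault ---- */
static int inject_read_fault = 0;                /* set by the test         */
static int storage_read(char *buf, unsigned len) /* hypothetical dependency */
{
    if (inject_read_fault) {
        return -1;                               /* simulate a read failure */
    }
    memset(buf, 'A', len);                       /* pretend data was read   */
    return 0;
}

/* ---- Unit under test (hypothetical): loads a configuration string ---- */
static int load_config(char *out, unsigned len)
{
    if (storage_read(out, len) != 0) {
        out[0] = '\0';                           /* error path: safe default */
        return -1;
    }
    return 0;
}

int main(void)
{
    char cfg[8];

    /* Normal path: no fault injected. */
    inject_read_fault = 0;
    assert(load_config(cfg, sizeof cfg) == 0);

    /* Fault injected: the unit must report the error and not use garbage. */
    inject_read_fault = 1;
    assert(load_config(cfg, sizeof cfg) == -1);
    assert(cfg[0] == '\0');

    return 0;
}
```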
In conclusion, all is not lost when the requirements document does not exist or is insufficient. There is still hope! Please share your experiences in similar situations.