Sunil Kumar Yadav

Pitfalls of Using a Native Compiler for Embedded Software Testing

Updated: Mar 5, 2021


As technology progresses, each newer generation of microcontrollers/microprocessors gets smaller in size and more powerful, with lower power requirements. The impact of this change can be witnessed everywhere, from compact wearable devices to ever more powerful smartphones. All of these advancements have occurred in the short span of the last couple of decades. For example, the onboard guidance computer of Apollo 11, the Apollo Guidance Computer (AGC), had 2,048 words of memory for storing "temporary results", i.e. its RAM (Random Access Memory), which at 16 bits per word amounts to 32,768 bits. In addition, it had 72 KB of Read Only Memory (ROM), equivalent to 589,824 bits.


In comparison to the AGC, current-generation smartphones have more than 8 GB of RAM and 128 GB of storage, not to mention the complex operating systems and applications they support. Such powerful devices were considered a distant dream a couple of decades back. With the rapid transformation of the overall tech ecosystem, the workflow for embedded systems development has changed dramatically: engineers now have access to tools and technologies that were simply not available a few decades ago.


Modern Compilers and Optimization

To keep pace with miniaturized yet powerful SoCs, compilers are becoming more complex and smarter at optimizing code for speed and overall footprint. Nowadays, with the commercial cross compilers used in embedded software development, one cannot guarantee that the final object code is an exact replica of the intended code.


For example, a modern compiler can translate a C/C++ code block like the one below
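The following is a representative sketch (the variable names and loop bounds here are illustrative assumptions):

    int a = 0;
    int b = 5;

    void accumulate(void)
    {
        /* a counted loop that repeatedly adds b into a;
         * the body has no other side effects */
        for (int i = 0; i < 100; i++) {
            a = a + b;
        }
    }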

into something like the following in the final object code,
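Shown as equivalent C rather than assembly for readability, again as a sketch:

    void accumulate(void)
    {
        /* the optimizer collapses the counted loop
         * into a single multiply-add */
        a = a + 100 * b;
    }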

or simply initialize the result without control ever entering the loop at all, depending on whether the variables a and b are global or local.
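As an illustrative sketch, when a and b are locals whose values the compiler can see:

    void accumulate(void)
    {
        /* the whole loop is constant-folded at compile
         * time: 0 + 100 * 5, no loop in the object code */
        int a = 500;
        /* ... a used here ... */
    }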



Theory vs Reality

With the increasing complexity of modern compilers and their ability to optimize software, it is becoming more important to test software and ensure that the final binaries/executables behave as the software developers intended. For the same reason, many regulated industries have their own stringent rules; a few even recommend not only performing testing on the physical target board but also ensuring that the final binary, i.e. the assembly code, is tested and traced back to the actual source code, to confirm the compiler has not added or removed code blocks that might change or impact the intended functionality.


Hence it is becoming more important to test embedded software as close as possible to the end product, or to the environment where the embedded system will be deployed, to minimize defects or unwarranted behavior that may result in financial or physical damage. This means testing the software with the same toolchain, i.e. the cross compiler and its associated settings, and on the same physical target, as used during the design and development phase.


But reality is a bit different from the ideal world. Many times, to reduce the overall cost of system development, organizations and engineering teams take the easy approach, which seems effective in the short term but can carry high long-term consequences. Yes, you guessed it right: using a native compiler (i.e. a compiler targeting the x86 host architecture) to test the embedded software. This saves the cost of cross compilers, targets, simulators, etc. in the short term, but the long-term consequences outweigh the short-term gain. For example, due to the differences between the end product's architecture and an x86 machine, a large portion of the software may simply not be compatible with x86 during testing. Setting up the test environment can then become a huge task in itself, consuming a lot of valuable engineering time just to debug and configure the environment.


Below are a few examples engineers may come across if a host-compiler-based testing approach is used.


Compiler Errors:

  • The host compiler, i.e. the x86-based compiler, may not be able to compile the software, as it may contain many keywords intrinsic to the cross compiler: for example __inline, __asm, register names, I/O peripherals and ports, physical memory locations, special decorators like __far and __near, #pragmas, etc. (see the sketch after this list).

  • The host compiler (x86) may fail to compile the software due to missing header files that only the cross compiler ships with.
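As a minimal sketch, assuming SDCC-style 8051 extensions (the snippet is illustrative, not from any particular project), every line below is accepted by the cross compiler but rejected by a host x86 compiler:

    __sfr __at (0x90) P1;                  /* special function register bound
                                              to a fixed physical address    */
    __bit led_state;                       /* bit-addressable data type      */

    void timer0_isr(void) __interrupt (1)  /* vendor interrupt decorator     */
    {
        P1 = 0xFF;                         /* direct port I/O                */
    }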


Even if the engineer somehow suppresses the compile-time errors by modifying the original source code, the build may still end up with errors at link time.


Linker Errors:

  • The host compiler's (x86) libraries may not support certain functionality. For example, if an engineer tries to build and test QNX-based source code using MinGW (gcc), it fails at the link stage, as MinGW's libc does not provide the sigsetjmp/siglongjmp used by the QNX code (see the sketch after this list).

  • The host compiler may not ship other intrinsic libraries, e.g. libserviceutils.la, poco, etc. Not to mention that the native compiler's and the cross compiler's implementations of the same libraries may differ.
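A minimal sketch of the first case (illustrative code, not from any particular project): the snippet below builds against a POSIX libc such as QNX's, but MinGW provides neither sigjmp_buf nor sigsetjmp/siglongjmp, so the host build breaks (at compile or link time, depending on the MinGW flavor).

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf recovery_point;

    static void fault_handler(int sig)
    {
        (void)sig;
        siglongjmp(recovery_point, 1);   /* jump back, restoring the signal mask */
    }

    int main(void)
    {
        signal(SIGSEGV, fault_handler);
        if (sigsetjmp(recovery_point, 1) == 0) {
            puts("running the risky code path");
        } else {
            puts("recovered from the fault");
        }
        return 0;
    }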

Even after modifying the source code to produce a native executable, it is not guaranteed that the tests will run successfully. The executable may simply crash upon touching a physical memory location or register. Not to mention the questionable authenticity of the results, given the many differences between the native x86 architecture and the embedded target's architecture: data type sizes, floating-point support, library implementations, signal handling, etc.


To understand this in more detail, let's go through one such example, where the software under test is intended for a 16-bit target but, after some modification of the source code, the engineer can compile it with the native compiler. Even after successfully setting up a manual or automated unit test environment, the end results will be inaccurate.
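A sketch of that scenario (the name MyUnion follows the discussion below; the surrounding code is an illustrative assumption):

    #include <stdio.h>

    typedef union {
        unsigned int  word;      /* 16 bits on the target, 32 bits on x86  */
        unsigned char byte[2];   /* overlays the whole union on the target */
    } MyUnion;

    int main(void)
    {
        MyUnion u;
        u.byte[0] = 0xFF;
        u.byte[1] = 0xFF;

        /* On the 16-bit target the two bytes fully define 'word', so the
         * true block runs. On x86 'word' is 32 bits wide and its upper
         * 16 bits hold garbage, so the condition is effectively never true. */
        if (u.word == 0xFFFFu) {
            printf("union fully set\n");   /* executes on the target only */
        }
        return 0;
    }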


The above code was intended for a 16-bit architecture, and testing it with the MinGW (gcc) compiler on the host (x86 architecture) will not execute the IF condition's true block: the size of an integer is 4 bytes there, so MyUnion, expected to be 16 bits wide, now occupies 32 bits, of which the 16 most significant bits hold a garbage value. This becomes difficult to debug, especially if the software testing is done by a third party with minimal information about the end architecture and the low-level requirements.


Resolving the Quality vs Cost Argument

From the small example above, we have a fair idea of the disadvantages of using a host (x86-based) compiler for testing embedded software. Let's quickly touch upon solutions that avoid the pitfalls of the host compiler in embedded software testing. The ideal approach would be to test the embedded software on the end product itself. But many times that is not possible, either because the hardware is still under development or because it is not feasible to provide hardware to each and every engineer owing to the higher cost. In such cases, engineers can utilize an FPGA or a demonstration/development board with the same architecture as the end product. Another cost-effective solution is an instruction set simulator (ISS), which lets engineers run tests on the cross-compiled source for the respective architecture. Recently, QEMU has become one of the preferred choices for running tests by emulating/simulating the required architecture, resulting in cost-effective and smoother testing cycles without compromising the quality of the results. Each alternative has its pros and cons, depending on which approach engineers choose. This article discusses in detail the advantages of testing software on the end target, an FPGA, a development board, and instruction set simulators.
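As a minimal sketch of the QEMU route (the toolchain, file names, and test layout here are illustrative assumptions): cross-compile the unit tests for the target architecture, then execute the foreign binary on the x86 host under QEMU's user-mode emulation.

    # cross-compile the tests for a 32-bit ARM target, statically linked
    # so no target sysroot is needed at run time
    arm-linux-gnueabihf-gcc -static -o run_tests test_mymodule.c mymodule.c

    # run the ARM binary on the x86 host via QEMU user-mode emulation
    qemu-arm ./run_tests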

