Sunday 26 June 2016

Week 5 Summary - Code cleanup and test coverage

Hello Readers,

I am back again with my weekly update. This week was the midterm evaluations week, so the first half of the week went into code cleanup and improving the overall health of the codebase, in preparation for a pull request from the core branch to master, which will be evaluated for the midterms. After that, starting Thursday, I worked on the dev-exp branch, where I focused on writing more tests and eliminating errors, module by module.

Test Coverage 

Test coverage measures how much of the code is exercised by the tests written for it. Loosely, this corresponds to how many of the lines of code (or possible execution paths) the tests cover. Test coverage is absolutely vital before deploying any code because, once everything is integrated together, a subtle error becomes very difficult to trace back to its source. Good coverage mitigates that by testing each small part of the code individually, which helps us 'catch' bugs early.
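For a Python project like pyleros, an easy way to get concrete numbers is the coverage.py tool (the commands below assume the test suite runs under pytest):

pip install coverage pytest
coverage run -m pytest
coverage report -m

The last command prints a per-file coverage percentage along with the line numbers that were never executed, which makes it easy to see which paths still need tests.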

In a TDD-based development flow, we write some of the tests first, both to pin down the interface of whatever we are going to code and to get an idea of what the actual code will look like. However, it sometimes becomes difficult to test all the possible paths the code will take before the code itself is written. A tiny sketch of this test-first style is below.
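Here the decode() helper and the instruction encoding are made up purely for illustration (they are not pyleros code): the test is written against an interface that does not exist yet, and it fails until that interface is implemented.

def decode(word):
    # Stub: the test below pins down the interface we want,
    # before any real decoding logic exists.
    raise NotImplementedError

def test_decode_add():
    # Hypothetical encoding: high byte = opcode, low byte = operand.
    opcode, operand = decode(0x0805)
    assert opcode == 'ADD'
    assert operand == 5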

Thus, I mainly wrote tests for the fetch/decode (fedec) and execute modules of pyleros. It is a bit tricky to write individual tests, because the pipeline stages were designed to work with one another (synchronously), not separately. In the design, data flows automatically from fedec to execute and vice versa. So to design tests for, say, fedec, I had to emulate parts of the other module (execute) in my test code. For example, to test the add instruction, I initialize all the IN/OUT signals required and then instantiate the myhdl.block for fedec. I also instantiate the alu. Then, I connect the OUT signals of fedec to the IN signals of the alu, and finally make an assertion on the result. At the input of fedec, I feed the corresponding instructions and check one by one whether they produce correct results from the alu. This verifies whether the fetching and decoding of each instruction is happening correctly. In this way, the instruction set is divided into different classes and tested class by class. A similar procedure applies to the execute module. A simplified, self-contained sketch of this wiring pattern is below.
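To illustrate the pattern (these are toy blocks, not pyleros' actual port lists, which are more involved), here is a hypothetical decode stage wired into a hypothetical alu, with a test that drives an instruction in and asserts on the alu's result:

from myhdl import block, always_comb, instance, Signal, intbv, delay

@block
def decoder(instr, op_a, op_b):
    # Toy decode stage: split a 16-bit instruction word
    # into two 8-bit operands.
    @always_comb
    def logic():
        op_a.next = instr[16:8]
        op_b.next = instr[8:]
    return logic

@block
def alu(op_a, op_b, result):
    # Toy ALU that only knows how to add (modulo 256).
    @always_comb
    def logic():
        result.next = (op_a + op_b) % 256
    return logic

@block
def test_add():
    instr = Signal(intbv(0)[16:])
    op_a, op_b, result = [Signal(intbv(0)[8:]) for _ in range(3)]

    # Wire the decoder's OUT signals straight into the alu's IN
    # signals, emulating the fedec -> execute data flow.
    dec_inst = decoder(instr, op_a, op_b)
    alu_inst = alu(op_a, op_b, result)

    @instance
    def stimulus():
        instr.next = 0x0305        # operands 3 and 5
        yield delay(10)
        assert result == 8

    return dec_inst, alu_inst, stimulus

tb = test_add()
tb.run_sim()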

Future Plan

In the next week, I plan to:
1.) Further increase test coverage, ideally taking it > 95%.
2.) Write examples for the simulation and test that they work correctly. 
3.) Refactor the code to use interfaces and some of the other higher-level features of myhdl (a minimal sketch of the interface style follows below).
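For point 3, MyHDL lets related signals be grouped into a plain Python object and passed around as a single 'interface' instead of a long port list. A minimal sketch, with illustrative names only (not pyleros' actual signals):

from myhdl import block, always_comb, Signal, intbv

class ALUPort:
    # Bundle the signals of one ALU connection into a single object.
    def __init__(self, width=16):
        self.op_a = Signal(intbv(0)[width:])
        self.op_b = Signal(intbv(0)[width:])
        self.result = Signal(intbv(0)[width:])

@block
def alu(port):
    # The block now takes one interface object instead of
    # three separate signal arguments.
    @always_comb
    def logic():
        port.result.next = (port.op_a + port.op_b) % 2 ** 16
    return logic

Grouping signals this way should make the port lists of fedec and execute much shorter and the wiring between them harder to get wrong.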

With this, the simulation part should be ready by the end of next week. The week after that will be focused on hardware setup and testing.

All in all, the work is going pretty well and is thoroughly enjoyable.
