Saturday, January 3, 2015

Developer Confessions: Superficial Testing


I am a developer who does not thoroughly debug and test his code. I leave most of the quality assurance work to the testers and supervisors, while I just try to make the functionality "work". I would do the old run-and-see whether my assigned feature works, along with incomplete or not-so-thorough unit testing. Of course these habits changed as time passed and the classic beginner mistakes got corrected (most of the time). The classic mistakes I made in debugging involved how I tested my own work.

I was lectured that testing and quality assurance should begin as early as the requirements phase, but some details eluded me. At the requirements phase, where I get to know my assignment, I clarify expectations, but many times I do not clarify them well enough. The depth to which I validate my expectations is not always sufficient. Moreover, special cases and their expected outcomes should be thought of and planned ahead for risk mitigation. I am ashamed that I fail to do that at that phase. There are always scenarios I miss and never discuss with the testers or raise against the requirements. Consequently, my test plans miss problematic scenarios that eventually happen.

Furthermore, my test plans usually focus on functional areas only. Functional areas were my main focus when debugging and testing in isolation. Isolated debugging has proven efficient: I debug code and correct bugs quickly because the scope of where to look for risks is narrow. This practice works until my assigned features interact with other features, not to mention the usability issues missed because of this bias toward functional testing. Additionally, some of my assigned features require a sequence to be completed before they are used, such as log-in and a region/age check. My test plans usually assumed that the sequence had completed. I did not always think about what could go wrong in the sequence before my feature is shown, such as a negative age, a null parameter or an invalid file path, among other causes of errors. This was a sign that my test plans were oriented toward functions and meeting requirements rather than use cases. The bias toward functional testing makes the test plan ignore configuration, stress and cross-functional tests.
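Looking back, even a tiny sketch like the one below (all names are made up for illustration; my real feature and its checks differed) captures the kind of precondition checks a use-case-oriented test plan would have forced me to think about, instead of assuming the log-in and region/age sequence always completed cleanly:

    // Hypothetical sketch: guard checks a feature should make instead of
    // assuming the preceding log-in and region/age sequence succeeded.
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class FeaturePreconditions {

        /** Validates the inputs handed over by the earlier steps of the sequence. */
        public static void validate(String sessionToken, int age, String profilePath) {
            if (sessionToken == null || sessionToken.isEmpty()) {
                throw new IllegalStateException("Log-in did not complete: missing session token");
            }
            if (age < 0) {
                throw new IllegalArgumentException("Region/age check passed a negative age: " + age);
            }
            Path profile = Paths.get(profilePath);  // throws if profilePath is null
            if (!Files.exists(profile)) {
                throw new IllegalArgumentException("Invalid file path: " + profilePath);
            }
        }

        public static void main(String[] args) {
            // A test plan biased toward the happy path only ever exercises this call...
            validate("token-123", 25, ".");  // "." exists, so this passes
            // ...and never the failing sequences, e.g. validate(null, -1, "no/such/file").
            System.out.println("Happy-path preconditions passed");
        }
    }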

Configuration testing checks the quality of the software across different settings, environments, hardware, operating systems (and their versions), third-party software interactions and network infrastructure, among other kinds of configurations. Stress testing, meanwhile, checks quality under heavy load, like whether a website breaks with 25 concurrent sessions or an analytics application processes large datasets too slowly. Both kinds of tests are expensive and can overwhelm me as a developer, but my test plan should at least cover graceful error handling for problematic configurations and stressful workloads. I hand all cross-functional testing to the testers unless something in the requirements compels me to work with another functionality. Despite the importance of those tests, part of me believes that programmers do the functional tests while testers do the other kinds. I believed that because I assumed the testers were more aware of the other functionalities than I am. In effect, my tests are usually isolated to my own functionalities, like my unit tests.
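Not that a stress check has to be heavyweight. A rough sketch like the one below (again with made-up names; the handler stands in for whatever the real feature calls), which fires 25 concurrent sessions and counts the failures, would at least have told me whether my feature degrades gracefully instead of leaving that question entirely to the testers:

    // Hypothetical sketch: 25 concurrent sessions against a stand-in handler,
    // counting failures instead of assuming a single-user happy path.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    public class TinyStressCheck {

        // Stand-in for the real feature; a real check would hit the actual website or service.
        static String handleSession(int sessionId) {
            return "ok-" + sessionId;
        }

        public static void main(String[] args) throws InterruptedException {
            int sessions = 25;
            ExecutorService pool = Executors.newFixedThreadPool(sessions);
            AtomicInteger failures = new AtomicInteger();

            for (int i = 0; i < sessions; i++) {
                final int id = i;
                pool.submit(() -> {
                    try {
                        handleSession(id);
                    } catch (Exception e) {
                        // Graceful handling: record the failure instead of letting the run crash.
                        failures.incrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);
            System.out.println("Failures under " + sessions + " concurrent sessions: " + failures.get());
        }
    }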
Unfortunately, my unit tests are also shallow at times. I have the habit of not writing a unit test for any null case, which results in more puzzling null reference errors appearing later. My unit tests just check for the right value given the sample parameters I supply, and I usually supply valid values. Rarely testing invalid and null parameters puts the project at risk, as I noticed when my functionality is combined with the work of others. Yes, unit tests isolate testing to a unit/component/module. Nevertheless, there should also be unit tests for the cases where my component interacts with other components, whenever the requirements or use case specify which other units interact with my assigned functionality.
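To make the confession concrete: the first test in the sketch below is the only kind I used to write, and the null and invalid-parameter cases I habitually skipped follow it (JUnit 5, with a made-up age parser standing in for the real unit):

    // Hypothetical JUnit 5 sketch: the happy-path test I usually wrote,
    // plus the null/invalid-parameter tests I usually skipped.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    class AgeParserTest {

        // Stand-in for the unit under test.
        static int parseAge(String raw) {
            if (raw == null) {
                throw new IllegalArgumentException("age must not be null");
            }
            int age = Integer.parseInt(raw.trim());
            if (age < 0) {
                throw new IllegalArgumentException("age must not be negative: " + age);
            }
            return age;
        }

        @Test
        void validAgeIsParsed() {            // the only kind of test I used to write
            assertEquals(21, parseAge("21"));
        }

        @Test
        void nullParameterIsRejected() {     // the case that later surfaced as null reference errors
            assertThrows(IllegalArgumentException.class, () -> parseAge(null));
        }

        @Test
        void negativeAgeIsRejected() {
            assertThrows(IllegalArgumentException.class, () -> parseAge("-5"));
        }
    }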

Throughout my experience as a programmer, debugging became a hassle because of these classic novice testing mistakes. Good debugging requires a keen eye for detail and for sequence; other habits needed are being systematic and organized with testing, since testing is a part of debugging. I have improved at debugging and testing for the sake of eliminating bugs. However, I am a programmer: I am not motivated to test, nor do I like it. Nonetheless, I find it a necessary hassle. This dislike is what made my test plans shallow and superficial during my novice days. What further discourages me from testing in depth, even now, is the fact that every application has bugs, just as every human commits sin.

Looks like I have some resolutions to make for 2015.
End of Confession.