
Code - Self certification

Software Development Life Cycle
1. Requirements
2. Design – high-level and low-level
3. Code
4. Testing
5. Deployment
6. Maintenance
How is one cycle completed?

1. Developers are assigned requirements
2. Testers test the implemented requirements and report bugs.
3. Developers fix them and testers retest them.
4. If the bug is fixed, it is closed; otherwise it is re-opened.
5. The user/client performs User Acceptance Test and the software is deployed.
6. Then the software goes into the maintenance phase.

In an ideal situation everything should go smoothly, but reality is different.
What really happens?
1. The developers might miss implementing a requirement entirely, or implement it only partially.
2. The tester could have documented the scenarios and test cases, but failed to test a particular scenario or test case.
3. The tester might have missed documenting a scenario and its test cases, thus missing a possible end-user interaction with the software.
4. The developer/tester might not have thought about a possible input format, so the developer failed to write code covering it or the tester failed to test that particular input type.
There could be many other possibilities which could be uncovered only when the software goes live.
So does it mean that the developer is inefficient? Or does it mean that the tester is inefficient? There is no way to answer that, or we don't want to answer it. Probably nobody wants to take a call on that.
So what's the solution?
Here is the proposed solution:
Every developer and tester should self-certify their deliverables, i.e. state with what confidence they are releasing their deliverables for the next round of evaluation.
Developer's Score Card

Build version -

Developer | Requirement | % covered | Developer's score (Confidence %) | % missed (filled by reviewer) | Final Score (%) | Final Score (% after deployment)

Tester's Score Card

Build version -
Tester | Requirement | % covered (scenarios) | % covered (test cases) | Tester's score (Confidence %) | % missed (filled by reviewer) | Final Score (% after deployment)


To be even more precise, another column can be added to both the developer and tester score cards during UAT.
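Here is a minimal sketch of how one such score-card row could be represented and scored. The class and field names are hypothetical, and the scoring formula (confidence reduced by what the reviewer, UAT, and live use uncover) is my own assumption — the post does not fix one:

```python
from dataclasses import dataclass

@dataclass
class DeveloperScoreCard:
    """One row of the developer score card for a given build version (illustrative only)."""
    developer: str
    requirement: str
    pct_covered: float            # % of the requirement the developer believes is covered
    confidence_pct: float         # developer's self-certified confidence %
    pct_missed_review: float      # % missed, filled in by the reviewer
    pct_missed_uat: float = 0.0   # optional extra column: % missed found during UAT
    pct_missed_live: float = 0.0  # % missed found after deployment

    def final_score(self) -> float:
        # Assumed formula: confidence reduced by what the reviewer found missing.
        return max(self.confidence_pct - self.pct_missed_review, 0.0)

    def final_score_after_deployment(self) -> float:
        # Assumed formula: further reduced by what UAT and live use uncover.
        return max(self.final_score() - self.pct_missed_uat - self.pct_missed_live, 0.0)

row = DeveloperScoreCard("Alice", "REQ-101", pct_covered=100, confidence_pct=95,
                         pct_missed_review=5, pct_missed_uat=5, pct_missed_live=5)
print(row.final_score())                   # 90.0
print(row.final_score_after_deployment())  # 80.0
```

A tester's row would look the same, with the two "% covered" columns (scenarios and test cases) in place of the single one.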

The above scores can be used in root-cause analysis to find out the deviation in confidence % after the build is complete.
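For that root-cause analysis, the number of interest would be the gap between the self-certified confidence and the score that survives deployment. A small sketch, again using the assumed formula and the hypothetical row above:

```python
def confidence_deviation(confidence_pct: float, final_after_deployment: float) -> float:
    """Gap between self-certified confidence and the score left after deployment."""
    return confidence_pct - final_after_deployment

# Using the example row above: 95% confidence, 80% after deployment.
print(confidence_deviation(95, 80))  # 15.0 -> the self-certification was 15 points too optimistic
```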

Maybe all companies already do this; I am not sure though.
