Software Development Life Cycle
1. Requirements
2. Design – high-level and low-level
3. Code
4. Testing
5. Deployment
6. Maintenance
How does one cycle complete?
1. Developers are assigned requirements
2. Testers test what was developed and report bugs.
3. Developers fix them and testers retest them.
4. If the bug is fixed, it is closed; otherwise it is re-opened (a rough sketch of this status flow follows the list).
5. The user/client performs User Acceptance Testing (UAT) and the software is deployed.
6. Then the software goes into the maintenance phase.
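As a rough illustration of the bug status flow in steps 2–4 above, here is a minimal sketch in Python; the Bug class, the status names, and the method names are my own assumptions for the example, not any particular bug tracker’s API.

```python
from enum import Enum


class Status(Enum):
    OPEN = "open"          # reported by a tester
    FIXED = "fixed"        # developer claims the fix is done
    CLOSED = "closed"      # retest passed
    REOPENED = "reopened"  # retest failed


class Bug:
    def __init__(self, title: str):
        self.title = title
        self.status = Status.OPEN

    def mark_fixed(self) -> None:
        """Developer fixes the bug and hands it back for retest."""
        self.status = Status.FIXED

    def retest(self, passed: bool) -> None:
        """Tester retests: close the bug if the fix works, otherwise re-open it."""
        self.status = Status.CLOSED if passed else Status.REOPENED


bug = Bug("Login fails for empty password")
bug.mark_fixed()
bug.retest(passed=False)   # the fix did not work, so the bug is re-opened
print(bug.status)          # Status.REOPENED
```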
In an ideal situation everything should go smoothly, but reality is different.
What really happens?
1. The developers might miss implementing a requirement entirely, or implement it only partially.
2. The tester could have documented the scenarios and test cases, but failed to execute a particular scenario or test case.
3. The tester might have missed documenting a scenario and its test cases, thus missing a possible end-user interaction with the software.
4. The developer might not have thought about a possible input format and thus failed to write code to handle it, or the tester might not have tested for that particular input type.
There could be many other possibilities which could be uncovered only when the software goes live.
So does this mean that the developer is inefficient? Or that the tester is inefficient? There is no clear way to answer that, or we don’t want to answer it. Probably nobody wants to make that call.
So what’s the solution?
Here is the proposed solution:
Every developer and tester should self-certify their deliverables, i.e. state with what confidence they are releasing their deliverables for the next round of evaluation.
Developer’s Score Card
Build version -
Developer | Requirement | % covered | Developer’s score (Confidence %) | % missed (Filled by reviewer) | Final Score (%) | Final Score (% after deployment)
Tester’s Score Card
Build version -
Tester | Requirement | % covered (scenarios) | % covered (test cases) | Tester’s score (Confidence %) | % missed (Filled by reviewer) | Final Score (% after deployment)
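To make the two score cards concrete, here is a minimal sketch of how the rows could be captured as data; the class and field names simply mirror the columns above and are my own naming, not a prescribed format. The optional UAT column mentioned below could be added as one more field.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DeveloperScore:
    build_version: str
    developer: str
    requirement: str
    percent_covered: float                             # % of the requirement covered
    confidence_percent: float                          # developer's self-certified confidence %
    percent_missed: Optional[float] = None             # filled by the reviewer
    final_score: Optional[float] = None                # final score (%)
    final_score_after_deploy: Optional[float] = None   # final score (%) after deployment


@dataclass
class TesterScore:
    build_version: str
    tester: str
    requirement: str
    scenarios_covered_percent: float                   # % covered (scenarios)
    test_cases_covered_percent: float                  # % covered (test cases)
    confidence_percent: float                          # tester's self-certified confidence %
    percent_missed: Optional[float] = None             # filled by the reviewer
    final_score_after_deploy: Optional[float] = None   # final score (%) after deployment
```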
To be even more precise, another column can be added to both the developer score card and the tester score card during UAT.
The above scores can be used in root-cause analysis to find out the deviation in confidence % after the build is complete.
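As a simple illustration of that root-cause analysis, the sketch below compares the self-certified confidence % with the final score % after deployment; the function name and the 10-point threshold are assumptions for the example.

```python
def confidence_deviation(confidence_percent: float, final_score_percent: float) -> float:
    """How far the self-certified confidence was from the final score, in percentage points."""
    return confidence_percent - final_score_percent


# Example: a developer certified 95% confidence, but the final score after
# deployment came out at 80%, a 15-point deviation worth examining in RCA.
deviation = confidence_deviation(95.0, 80.0)
if deviation > 10:  # assumed threshold for flagging an entry for root-cause analysis
    print(f"Deviation of {deviation:.0f} points – investigate during RCA")
```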
Maybe all companies already do this; I am not sure though.