I've made a chart comparing how long it takes me to complete each task versus how long I estimated it would take. The dataset covers 42 tasks. For each task, I have a specific unit test in mind that must pass before I consider the task complete. Sometimes I end up doing more than I originally intended, sometimes less, and sometimes I abandon a task and move on to something new. Abandoned tasks are not factored into my calculations.

Here's how I compute the metric. If a task takes longer than expected, I divide the actual duration by the estimated duration and multiply the result by -1. If a task takes less time than expected, I do the reverse and divide the estimated duration by the actual duration. So if I estimate that a task will take 10 hours and finish in 5, the score is 2; if it's the other way around, the score is -2. The chart below shows a moving average of the scores for the last 5 tasks. On average, for every 1 hour I predict a task will take, it actually takes 1 hour and 40 minutes. There was one stretch where two tasks in a row took 16 and 18 times longer than I expected, so I really got slammed there.
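The scoring rule and the 5-task moving average described above can be sketched in a few lines of Python. This is only my reading of the rule, not code from the entry; the function names and the handling of a perfectly estimated task (which scores 1, since estimated/actual = 1) are my assumptions.

```python
def score(estimated_hours, actual_hours):
    # Over the estimate: negative ratio (actual / estimated * -1).
    if actual_hours > estimated_hours:
        return -(actual_hours / estimated_hours)
    # At or under the estimate: positive ratio (estimated / actual).
    return estimated_hours / actual_hours

def moving_average(scores, window=5):
    # Average of each trailing window of `window` scores,
    # starting once enough tasks have accumulated.
    return [sum(scores[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(scores))]

# Examples from the entry: estimate 10h, finish in 5h -> 2;
# estimate 5h, finish in 10h -> -2.
print(score(10, 5))   # 2.0
print(score(5, 10))   # -2.0
```

Note that the score jumps discontinuously around a perfect estimate: finishing exactly on time gives 1, while running even slightly over gives a value below -1.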

https://ibb.co/kDubvp

https://ibb.co/jT8Qo9

https://ibb.co/dvTEgU

### 2 Comments On This Entry

#### jon.kiparsky

24 August 2018 - 09:45 AM
Interesting use of metrics. It looks like you're converging towards zero, so that's a pretty cool trend.
