The Purpose of Point-based Estimates

December 16, 2015 @ 8:06 am Posted to Agile by Antony Koch

It’s easy to forget precisely what the purpose of point-based estimates is, often resulting in attempts to equate them to time. However, that’s not what they’re for.

Point-based estimates using the Fibonacci scale, t-shirt sizes, or any other method of measuring relative complexity are tools to help the business prioritise a backlog. These finger-in-the-air estimates are useful insofar as they provide a crude way of deciding whether to tackle three simpler tasks or one more complex one in the upcoming sprint. That represents the limit of their usefulness. Beyond it, these estimates hold no value, especially when attempts are made to attribute a period of time to them.

The time a story takes does not correlate with its original point value. Traditional burndowns work in terms of points per sprint, but there is no basis for turning those points into a real period of time – they merely highlight a team's ability to reliably compare complexity against a base story's estimate.
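To make the distinction concrete, here's a minimal sketch – the sprint figures are invented – of what velocity can and can't tell you:

```python
# Hypothetical sprint history: relative-complexity points completed per sprint.
completed_points = [21, 18, 23, 20]

# Velocity is just average points-per-sprint -- a relative measure only.
velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 20.5

# Velocity answers "how much relative complexity fits next sprint?"
# e.g. three 5-point stories (15) or one 13-point story both fit under 20.5.
# It does NOT convert to hours: there is no factor that turns 20.5 points
# into a real period of time.
```

The number is only meaningful when comparing stories estimated against the same base story; the moment you multiply it by "hours per point" you've invented a correlation the data doesn't contain.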

Smalls, mediums and larges blur into each other when time is taken into consideration. Metrics built off the back of finger-in-the-air estimates only compound incorrect thinking in the upper echelons of the business about a team's ability to deliver production code.

If you need to know hours because you've a deadline, or because you're billing out to a client, then sprint planning is the place for estimates in half days or above. The stories you choose to plan are in the upcoming sprint because of your finger-in-the-air point estimates, but now it's time to get into the nitty gritty and figure out just how long each one will take. These discussions often make stories considerably more, or considerably less, complex than their original estimate suggested.

In summary, if you need to know how long a story will take in hours, get your best people to estimate in half-day chunks. Don't use points.


What is a Tech Lead?

August 25, 2015 @ 10:27 am Posted to .Net, Agile, Consultancy by Antony Koch

The tech lead role seems to be used in wider and wider circles, with each company defining the role in different ways. So what is a tech lead, and what are their responsibilities?

To my mind, the responsibility flows in three directions:

  1. Upwards: Responsible to the board for delivering the project to a high technical standard, with an architecture that adheres to the company's core principles and a solution that befits the budget and scope.
  2. Downwards: Ensuring the team follows rigorous coding practices by laying out the framework for them to do so, ensuring TDD is used, and ensuring the acceptance tests are kept up to date.
  3. Outwards: To the client. Ensuring their internal infrastructure can support the solution as designed, that they feel engaged in the creation and delivery of their project at a technical level, and that they have adequate access to see their work progressing steadily.

In my previous role the tech lead was the most senior technical consultant assigned to a project, and ultimately the person who hands the keys over to the client. They are responsible for defining the architecture, securing sign-off from the other tech leads, and defining the engineering standards to be followed by the development team.

Ensuring a good solution at the code level is critical in order to avoid major refactors down the line. Rigor around coding also ensures new features can be added with minimal risk by maximising code reuse. Taken to the next level, the tech lead’s job is to look ahead and build a solution that takes the future into account while not holding back development work. This can be a bit of a juggling act sometimes, with some observers feeling that work is being done that is not of direct benefit in terms of features to the client. This plays into the consultancy part of the role – talking clients round to doing something now for a payoff in the future.

The final piece of the puzzle for me is ensuring that the financial implications of any solution are considered. One example from my own experience: high usage rates of a system are a pointer to high costs if not caught soon enough. Large numbers of files served across a high number of requests can quickly become very expensive, meaning compression and minification of static assets take on more meaning than simply performance. They mean cold, hard cash to clients.
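As a rough illustration of that point – every figure below is an assumption for the sake of the sketch, not a real price list – the arithmetic is simple enough to run:

```python
# Back-of-envelope bandwidth cost. All figures are illustrative assumptions:
# request volume, asset sizes, and the per-GB price are made up for the sketch.
requests_per_month = 50_000_000        # static-asset requests per month
avg_asset_kb_raw = 300                 # uncompressed JS/CSS payload per request
avg_asset_kb_minified = 90             # after minification and compression
cost_per_gb_usd = 0.09                 # assumed CDN egress price

def monthly_cost(kb_per_request: float) -> float:
    """Bandwidth billed per month, in USD, for a given per-request payload."""
    gb_transferred = requests_per_month * kb_per_request / (1024 * 1024)
    return gb_transferred * cost_per_gb_usd

saving = monthly_cost(avg_asset_kb_raw) - monthly_cost(avg_asset_kb_minified)
print(f"Monthly saving from smaller assets: ${saving:,.2f}")
```

Because cost scales linearly with bytes transferred, shrinking the payload by 70% cuts the bandwidth bill by the same 70% – which is the "cold, hard cash" argument in a form a client can check themselves.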


Another fine mess I’ve gotten me in to

November 7, 2012 @ 7:20 pm Posted to Agile by Antony Koch

I’m both excited and petrified about talking at Agile Dev Practices, running March 4–7 in Berlin next year.

I’ll be talking with the testing tweetmeister Tony Bruce, discussing BDD lessons learnt from our previous roles.

It’ll be an interesting talk, as I’ve experienced the highs and lows of BDD in two previous roles, as well as going through the motions in my current role at Amido. I feel I’ve a lot to talk about: how to separate the wheat from the chaff, and how to transition to a BDD-style approach or improve your current one.


Is Anyone Truly Agile?

July 17, 2012 @ 8:14 pm Posted to Agile by Antony Koch

Is it me, or is everyone trying to adopt agile instead of merrily working through the process?

Portions of a business can appear to be using agile practices, but how often does it really run right through the business’s core values? Do the directors get it? Does everyone in the company adhere to the agile manifesto, truthfully?

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

I imagine when we go to dev meetups or QA conferences we all tell each other just how agile we are, but in reality are we all secretly telling porkies?

Does anyone have any positive tales about turning around a business, or any tips for getting the waterfall merchants on board?

I think there should be a kind of agile checklist, much like the (slightly outdated) Joel Test for development teams – I’m going to do some hunting around, see what’s out there, and report back here later.


Getting it right first time is not un-agile

July 15, 2012 @ 2:06 pm Posted to Agile by Antony Koch

While talking through some upcoming sprint candidates, I pushed a BA for more information because I wanted to ‘get the code right first time’. The BA’s response to this request was that he deemed it ‘un-agile’.

I found that quite interesting, and I wanted to talk about why coding things correctly first time has value and is agile, and why it isn’t at odds with the ‘just enough’ / ‘just barely good enough’ mantra that agile promotes.

But Why?

Getting the code right first time is a statement born of numerous rewrites and refactors caused by code not having been right first time. Did it work? Sure. Was it good enough? No. But why was that, and why is it so critical?

The Cost of Refactoring

The cost of refactoring is high. Writing something that works and then rewriting it at some point incurs an obvious cost in man hours, but it also costs something much more than that. A codebase’s ‘quality’ is such a fragile ecosystem that it takes very little for it to be broken. Bad code impinges on this ecosystem so much that The Pragmatic Programmer likens a bad piece of code to a broken window in a building: it’s an invitation to break more windows, an invitation to write more bad code. Once bad code is introduced, it can spread like wildfire – copied by those who don’t understand it, hardening quickly into an anti-pattern. Sealing these windows up – fixing bad code – is critical for the continued quality of the codebase. Taking a beat and writing something right first time is the first – and most critical – step towards this ethereal chalice we call quality.

Raising the Bar

Just good enough may be alright in a conceptual sense, but in a practical sense we must raise the bar of what good enough really is. Good enough not only adheres to the agreed patterns and practices; it refines and improves them. It drives discussion amongst developers and serves to benefit the overall quality of the codebase, and ultimately the quality of the application. These are things we all try to do, but only through an appropriate amount of thought do we draw the right conclusions. Making sure your code adheres to the SOLID principles is a good start. A method should do one thing and one thing only, and it should do it bloody well. Business logic does not belong in the presentation layer, and presentation logic does not belong in the business layer. Is the domain simply a representation of state, modified by external code, or does the domain manipulate itself? Making sure everyone is on the same page – and striving for something a bit better that gives value to the rest of the team – is critical.
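As a minimal sketch of that layer separation – the domain, names and discount rule here are all invented for illustration – the business layer owns the rule and the presentation layer only formats its result:

```python
from dataclasses import dataclass

# Business layer: owns the pricing rule, knows nothing about display.
@dataclass
class Order:
    subtotal: float
    is_trade_customer: bool

    def total(self) -> float:
        """Apply the trade discount -- one job, done here and only here."""
        discount = 0.10 if self.is_trade_customer else 0.0
        return self.subtotal * (1 - discount)

# Presentation layer: formats the result, contains no business rules.
def render_total(order: Order) -> str:
    return f"Total: £{order.total():.2f}"

print(render_total(Order(subtotal=100.0, is_trade_customer=True)))  # Total: £90.00
```

If the discount rule changes, only `Order.total` moves; if the display currency changes, only `render_total` moves – which is the ‘one thing, done bloody well’ point in miniature.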

The End Game

So is it un-agile to get the code right first time? I don’t think so. The inherent benefits may simply be lost on those who aren’t working on the code every day. Getting it right first time is not doing too much work, it is not doing more than the bare minimum. This is because our definition of the bare minimum is different to a few lines on a story card. The bare minimum for a developer involves a high standard. An eye on the future. Thinking about others in the team and how they will feel when they see your code. But most of all it’s about taking pride in your work and mending those broken windows every time you see them.

To close off this post I will ask you one question: if you’re working on a closed source system – as a lot of us in the .Net world do – would you be proud if it were open sourced, or would you want to go back and fix up a few things first?
