A non-generic AutoFixture Create method

October 24, 2016 @ 8:03 am Posted to .Net by Antony Koch

Sometimes I need to generate test fixtures dynamically, but the test context doesn’t give me the ability to use generics; all I have is an instance of a Type.

I looked through the AutoFixture code and managed to find a reflection-friendly way to get back an object, which can then be converted using Convert.ChangeType where necessary. Here’s the snippet:


    typeof(SpecimenFactory)
        .GetMethods()
        .Single(x => x.IsStatic
                     && x.IsGenericMethod
                     && x.Name == "Create"
                     && x.GetParameters().Length == 1
                     && x.GetParameters().Single().ParameterType == typeof(ISpecimenBuilder))
        .MakeGenericMethod(type)
        .Invoke(fixture, new[] { fixture });
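
If you need this from more than one test, the snippet can be wrapped up in a small extension method. Here’s a minimal sketch of that idea, assuming the 3.x-era Ploeh.AutoFixture namespaces; the CreateByType name is my own:

    using System;
    using System.Linq;

    using Ploeh.AutoFixture;
    using Ploeh.AutoFixture.Kernel;

    public static class FixtureExtensions
    {
        // Non-generic counterpart to fixture.Create<T>(): find the static, generic
        // SpecimenFactory.Create<T>(ISpecimenBuilder) extension method, close it over
        // the runtime type, and invoke it against the supplied builder.
        public static object CreateByType(this ISpecimenBuilder fixture, Type type)
        {
            var create = typeof(SpecimenFactory)
                .GetMethods()
                .Single(x => x.IsStatic
                             && x.IsGenericMethod
                             && x.Name == "Create"
                             && x.GetParameters().Length == 1
                             && x.GetParameters().Single().ParameterType == typeof(ISpecimenBuilder));

            return create.MakeGenericMethod(type).Invoke(null, new object[] { fixture });
        }
    }

The boxed result can then be passed through Convert.ChangeType, as above, wherever a concrete conversion is needed.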


Gauge your code’s adherence to the single responsibility principle

September 26, 2016 @ 7:40 pm Posted to .Net, OO by Antony Koch

During some routine pondering on software engineering, I found myself thinking about a recent conversation in which we discussed how and when to copy and paste code. The team agreed that it’s OK to copy code once or twice, and to consider refactoring on the third occasion. This led me to wonder about the size of the code we copy, and how it might indicate adherence to the single responsibility principle.

For the uninitiated reader, the single responsibility principle – one of the first five principles of Object Oriented Programming and Design – can be succinctly summed up as:

“A class should have one reason to change”

The subject of many an interview question, often recited by rote yet frequently misunderstood, the core premise can – I think – be grasped simply by talking about the code block, or class, at hand, in the form of answers to the question: “Tell me all the reasons you might need to edit this class?” Responses such as:

  • “If we want to change the backing store, we have to change this class.”
  • “If we want to change the business rules for persisting, we have to change this class.”
  • “If we want to change the fields used in the response, we have to change this class.”

When recited in one’s mind, these are all that is required. By considering the answers carefully, we can make an informed decision about whether to refactor, or whether we’re actually happy with what we have and can keep the option to change the class easily at a later date. The latter part of that sentence is critical to becoming a better developer, because what we have might be acceptably incomplete, and refactoring might take an inordinate amount of time while failing to offer enough business value to justify the expense.
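
To make that concrete, here’s a deliberately contrived sketch (all of the names are mine) of a class that fails the question above on all three counts:

    using System;
    using System.Data.SqlClient;

    public class Order { public int Id; public decimal Total; }
    public class OrderResponse { public int OrderId; public string Status; }

    // Three reasons to change in one class: the backing store, the business rules
    // for persisting, and the fields used in the response.
    public class OrderSaver
    {
        private readonly string _connectionString;

        public OrderSaver(string connectionString)
        {
            _connectionString = connectionString;
        }

        public OrderResponse Save(Order order)
        {
            // Reason: the business rules for persisting.
            if (order.Total <= 0)
            {
                throw new InvalidOperationException("Order total must be positive.");
            }

            // Reason: the backing store.
            using (var connection = new SqlConnection(_connectionString))
            {
                connection.Open();
                // ... write the order ...
            }

            // Reason: the fields used in the response.
            return new OrderResponse { OrderId = order.Id, Status = "Saved" };
        }
    }

Each comment marks a separate reason to change; pulling the persistence and the response shaping out would leave a class whose reasons to change you could state in a single sentence.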

This said, is copying and pasting code OK? Mark Seemann wrote an excellent blog post on the subject – which I won’t attempt to better – suffice it to say I agree, and that it’s OK to copy and paste under a certain set of circumstances. The primary concern is the trade-off: making code suitably generic requires at the very least a deeper understanding of the abstractions in play, and often the introduction of dependencies between classes and modules that might not otherwise have been required. A quick copy and paste of code that’s unlikely to change is not going to kill anyone. It might introduce an overhead should the understanding underlying the code change; however, volatile concepts aren’t good candidates for copying and pasting in the first place.

Now to wistfully return to the subject at hand – how can we use copying and pasting to judge our code’s adherence to the single responsibility principle? Quite simply: if we can copy only a line or two, then the surrounding code within the method body is perhaps not doing as targeted a job as we might hope. If we can copy entire classes, we can say that we’ve adhered strictly to the core tenets of the single responsibility principle: the class has such a defined purpose that it can be lifted and shifted around the codebase with ease.

This means we can judge any of our code in a couple of ways: by answering the question “what reasons does this class have to change?”, and by being honest with ourselves about whether we could copy and paste the code into a different codebase without it needing to be refactored. Would half of the class be thrown away? Would we have to change a bunch of code in order to fit a different persistence model, say when copying from a SQL Server-backed system to one backed by Event Store? I think it’s an interesting idea, and definitely one I’m going to keep trying in the coming days.


A simple Dapper Wrapper

August 5, 2016 @ 10:28 am Posted to .Net by Antony Koch

If you need to sub out some Dapper functionality, and aren’t too worried about the specifics of the call, then I’ve crafted a nifty class you can use to perform just such a task.

In all its glory, the subbable IDapperWrapper, with its default implementation:


    public interface IDapperWrapper
    {
        IEnumerable<T> Return<T>(IDbConnection connection, Func<IDbConnection, IEnumerable<T>> toRun);
        T Return<T>(IDbConnection connection, Func<IDbConnection, T> toRun);
        void Void(IDbConnection connection, Action<IDbConnection> toRun);
    }

    public class DapperWrapper : IDapperWrapper
    {
        public IEnumerable<T> Return<T>(IDbConnection connection, Func<IDbConnection, IEnumerable<T>> toRun)
        {
            return toRun(connection);
        }

        public T Return<T>(IDbConnection connection, Func<IDbConnection, T> toRun)
        {
            return toRun(connection);
        }

        public void Void(IDbConnection connection, Action<IDbConnection> toRun)
        {
            toRun(connection);
        }
    }
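
For what it’s worth, here’s a hypothetical usage sketch – the CustomerRepository, Customer type and query are all made up – showing a production call going through the wrapper, and a hand-rolled stub you might sub in for a test:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Linq;

    using Dapper;

    // A hypothetical repository: production code depends on IDapperWrapper rather
    // than calling Dapper's extension methods directly.
    public class CustomerRepository
    {
        private readonly IDapperWrapper _dapper;

        public CustomerRepository(IDapperWrapper dapper)
        {
            _dapper = dapper;
        }

        public IEnumerable<Customer> GetAll(IDbConnection connection)
        {
            // The real call still goes through Dapper's Query<T> extension method.
            return _dapper.Return(connection, c => c.Query<Customer>("select * from Customers"));
        }
    }

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // In a test, the wrapper can be subbed out without a database in sight.
    public class StubDapperWrapper : IDapperWrapper
    {
        public IEnumerable<T> Return<T>(IDbConnection connection, Func<IDbConnection, IEnumerable<T>> toRun)
        {
            return Enumerable.Empty<T>();
        }

        public T Return<T>(IDbConnection connection, Func<IDbConnection, T> toRun)
        {
            return default(T);
        }

        public void Void(IDbConnection connection, Action<IDbConnection> toRun)
        {
            // deliberately does nothing
        }
    }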


How can one ‘keep it simple’ in a complex system?

June 5, 2016 @ 8:56 pm Posted to .Net, Tech by Antony Koch

The often-used phrase “Keep it simple, stupid,” abbreviated KISS, is solid advice in the field of software development. We strive for simplicity, planning and refactoring continuously to ensure our code is extensible, reusable, and all those other words ending in ‘ble’ that apply. But what is simple? And how can things be kept simple in a complex system with several complex domains?

Simple is subjective. To some it means few moving parts; to others it means code that reads like a book, even if it’s repeated in several places. Disagreements can easily arise when deciding what simple means. Keeping an open mind about the definition of simplicity is critical, because all sides can have valid arguments.

Complexity, like simplicity, is also subjective. Code can be composed in a complex way even though the component parts are implemented simply. So can only simple things be over-complicated?

There’s a saying – work smart, not hard – that applies more deeply to software development than to many other domains. Finding smart solutions usually means less code, fewer moving parts and a more concept-based approach.

To some developers a concept-based approach can be perplexing: simplicity masquerading as complexity, remaining obscure until scrutinised further. It can also, however, be over-engineered — six classes used where one might have sufficed until a later date.

Does this mean that developers who find smart solutions complex aren’t up to scratch? Should the codebase instead cater to the needs of the team and be legible to all? In my opinion, no: smart trumps legibility every time, for the simple reason that legibility is subjective and depends on the abilities of the reader. Some are baffled by lambdas and some aren’t. That doesn’t mean teams should avoid lambdas; it means teams should shift dead weight.

All of this raises the question: can something that seems complex always be reduced to something simple? In most cases, yes. A video I watched (which I will need to find later, as the author escapes me) claimed that in most cases the speaker could walk into a company and reduce its codebase by roughly 80% – that is, a 100,000-line codebase could be reduced to roughly 20,000 lines of code.

Part of the reason this rings true to me is that teams wilfully introduce technical debt, qualifying its introduction with ‘We’ll fix it if we need to later’. That is a flag that says “we know we aren’t doing it properly.” This stance doesn’t run counter to Ayende’s JFHCI — in fact it works with it: work smart, not hard. His example of hard coding is not an introduction of technical debt; it’s a forward-thinking solution with a minimal down payment now.

So how do we keep things simple in complex domains? Here are a few ways it can be done:

Limit your abstractions

Don’t introduce a phony abstraction just to make something mockable. If you see an IFoo with a single implementation Foo, you’re overcomplicating things and missing the point of interfaces in the first place. Code should be written to concepts, as per Ayende’s limit your abstractions post.
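
As a contrived sketch of the distinction as I read it (all names are mine), the first pair below is the phony abstraction, while the second is an abstraction written to a concept with genuinely different strategies behind it:

    using System;

    // Adds nothing: one interface, one implementation, existing only so a mock can
    // be injected in tests.
    public interface IOrderNumberGenerator
    {
        string Next();
    }

    public class OrderNumberGenerator : IOrderNumberGenerator
    {
        public string Next()
        {
            return Guid.NewGuid().ToString("N");
        }
    }

    // Written to a concept instead: the abstraction earns its keep because genuinely
    // different strategies sit behind it.
    public interface IPaymentGateway
    {
        void Charge(decimal amount);
    }

    public class StripeGateway : IPaymentGateway
    {
        public void Charge(decimal amount) { /* call out to Stripe */ }
    }

    public class InvoiceGateway : IPaymentGateway
    {
        public void Charge(decimal amount) { /* raise an invoice instead */ }
    }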

Test outside in

Test your components through their public API. Don’t test a component’s internals, because then you’re testing the implementation. This brings two benefits (a short sketch follows the list):

  • Get the internals working correctly, quickly, with minimal fuss and with good test coverage
  • Once complete, it allows you to refactor into any concepts you may have uncovered along the way.
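
A minimal sketch of what that looks like in practice, using made-up names and plain xUnit (the Basket and Item classes are just a stand-in system under test):

    using System.Collections.Generic;
    using System.Linq;

    using Xunit;

    public class BasketTests
    {
        [Fact]
        public void Calculates_the_total_across_all_items()
        {
            // Drive the component through its public API only.
            var basket = new Basket();
            basket.Add(new Item("book", 25m));
            basket.Add(new Item("pen", 5m));

            // Assert on observable behaviour; how Basket stores or sums its items
            // is its own business and is free to change under refactoring.
            Assert.Equal(30m, basket.Total);
        }
    }

    // A stand-in system under test, included only to keep the sketch self-contained.
    public class Item
    {
        public Item(string name, decimal price)
        {
            Name = name;
            Price = price;
        }

        public string Name { get; private set; }
        public decimal Price { get; private set; }
    }

    public class Basket
    {
        private readonly List<Item> _items = new List<Item>();

        public void Add(Item item)
        {
            _items.Add(item);
        }

        public decimal Total
        {
            get { return _items.Sum(i => i.Price); }
        }
    }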

Work smart, not hard

Highly focussed components with specific jobs connected in a smart way. Some people might not understand them; it’s your job to enlighten them. If they still don’t understand it, cut them loose and hire someone who does. The inverse is true too, though — if everyone disagrees with your code it’s either wrong and you need to learn, or it’s right and you need to leave.


The Purpose of Point-based Estimates

December 16, 2015 @ 8:06 am Posted to Agile by Antony Koch

It’s easy to forget precisely what the purpose of point-based estimates is, often resulting in attempts to equate them to time. However, that’s not what they’re for.

Point-based estimates – whether on the Fibonacci scale, t-shirt sizes, or any other method of measuring relative complexity – are tools to help the business prioritise a backlog. These finger-in-the-air estimates are useful insofar as they provide a crude way of deciding whether to tackle three simpler tasks or one more complex one in the upcoming sprint. This represents the limit of their usefulness. Beyond that these estimates hold no value, especially when attempts are made to attribute a period of time to them.

The time a story takes does not correlate to its original points value. Traditional burndowns work in the sense of points per sprint; however, there is no basis for turning these into a real period of time – they merely highlight a team’s ability to reliably compare complexity against a base story’s estimate.

Smalls, mediums and larges will blur into each other when time is taken into consideration. Metrics built off the back of finger-in-the-air estimates only compound incorrect thinking in the upper echelons of the business about a team’s ability to deliver production code.

If you need to know hours because you’ve a deadline, or you’re billing out to a client, then sprint planning is the place for estimates in half days or above. The stories you choose to plan are in the upcoming sprint because of your finger-in-the-air point estimates, but now it’s time to get into the nitty gritty and figure out just how long this thing will take. These discussions often reveal stories to be considerably more or considerably less complex than their original estimate suggested.

In summary, if you need to know how long a story will take in hours, get your best guys to estimate in half-day chunks. Don’t use points.


Dive into open source

November 10, 2015 @ 2:05 pm Posted to OS by Antony Koch

I had often wondered how to get into contributing to open source software. What are the rules? What is the etiquette?

Then one day, I realised that I knew the answer to both those questions:

What are the rules? Be nice. Don’t be a dick. Follow the standards.

What is the etiquette? Follow the above rules. People are nice and are there to help so long as you have tried your hardest at obeying the above rules.

I’ve now created PRs for a very small number of projects. Seeing my name as a contributor, though, feels great. It also looks great to prospective clients.

Short update, but the crux of it is to just do it. Fail, retry, succeed.


A quick and dirty XUnit/AutoFixture MongoDb test harness

September 21, 2015 @ 6:37 am Posted to .Net, Testing by Antony Koch

As part of my new journey into outside-in testing I’ve – of course, for those that know me – looked into the tooling aspect. A part of that is implementing a MongoDB test harness injected via AutoFixture’s XUnit plugins.

As a quick side note, I think tooling is critical to elevating oneself from being a good developer to becoming a great one, as it removes the need to focus on anything other than writing production code. An example of this is my use of NCrunch for continuous test execution. I can’t extol the benefits of not stopping to run all my tests enough, and it’s been great to see the ongoing development of NCrunch since its free days.

Anyway – back to the point at hand.

I am building a MongoDb-backed application outside of my regular 9-5 engagement with my banking client, Investec, and needed a quick way to access the db in my tests. Avoiding ceremony was critical, as I’m really keen to spend my time on the production code as much as possible, not on writing tests that will need to be rewritten when my implementation invariably changes as I learn more about the system I’m building. My app involves a Twitter login, meaning everything I do from a database perspective is best served using the Twitter UserId field. For cut one, I’ve come up with the following code, which I’ve added comments to for clarity:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Configuration;

using MongoDB.Bson;
using MongoDB.Driver;

namespace Amko.TwendingToday.Fast.Mongo
{
    public class MongoDbHarness : IDbHarness
    {
        private IMongoDatabase _db;

        public MongoDbHarness()
        {
            // Extract connection string from config
            var connectionString = ConfigurationManager.AppSettings[ServiceConstants.MongoDbUri];

            // Get a thread-safe client object by using a connection string
            var mongoClient = new MongoClient(connectionString);

            // Get a reference to a server object from the Mongo client object
            const string databaseName = "twitterapp"; // I've taken the real name out here to avoid giving spoilers :)

            // bootstrap our DB object
            _db = mongoClient.GetDatabase(databaseName);

        }

        public async Task<BsonDocument> GetAsync(string collection, string userId)
        {
            // Extract collection as a set of BsonDocument. The production code deals in
            // concrete domain objects, but for test purposes I'm loath to spend time
            // building and rebuilding the ever changing domain in my test code. It
            // doesn't offer much over string indexes
            var coll = _db.GetCollection<BsonDocument>(collection);

            // build a filter for the get
            var filterDefinition = Builders<BsonDocument>.Filter.Eq("UserId", userId);

            // pull out the matching docs asynchronously
            var list = await coll.Find(filterDefinition).ToListAsync();

            // return the first one
            return list.FirstOrDefault();
        }
    }

    public interface IDbHarness
    {
        Task<BsonDocument> GetAsync(string collection, string userId);
    }
}

Although I’ve mentioned it in the comments above, I think the point about the test domain, and my decision not to create one, is worth expanding on. It’s common to build a copy of the domain object in test code to deserialise database documents into, or in worse cases to use the production models, but I decided against this. I feel that for it to be a true outside-in test I should express my query against the resultant JSON from the DB just as I might query the JSON in the real world, as it’s an extra verification step against how I think the system is working.

The injection of this is handled by AutoFixture using the following specimen builder:

    public class MongoHarnessSpecimenBuilder : ISpecimenBuilder
    {
        public object Create(object request, ISpecimenContext context)
        {
            var type = request as Type;

            if (type == null || type.IsPrimitive)
            {
                return new NoSpecimen(request);
            }

            if (type == typeof(IDbHarness))
            {
                return new MongoDbHarness();
            }

            return new NoSpecimen(request);
        }
    }
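
For completeness, here’s roughly how the builder gets attached to a fixture – a minimal sketch assuming a plain Fixture rather than my attribute-driven setup; the [StructuremapAutoData] attribute used below needs to end up doing something equivalent:

    // Add the builder to the fixture's customizations so that any IDbHarness the
    // fixture resolves is a real MongoDbHarness.
    var fixture = new Fixture();
    fixture.Customizations.Add(new MongoHarnessSpecimenBuilder());

    var harness = fixture.Create<IDbHarness>();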

And this allows me to build my tests like this:

        [Scenario]
        [StructuremapAutoData]
        public void TracksUserWhenTheyLogin(
            AuthController sut, 
            ActionResult actionResult,
            AuthenticatedUser user,
            IDbHarness dbHarness,
            DateTime now,
            string returnUrl)
        {
            // ...
        }

I’ll dig out some resources for building the AutoFixture framework backing my AutomapData attribute and post them here later.


Microsoft Band – A User’s Review

September 4, 2015 @ 6:42 am Posted to Tech by Antony Koch

I’ve decided to write a short review of the Microsoft Band, a product I feel I have had for long enough to draw meaningful conclusions about.

I’ll start by saying I like the device. Anyone who buys something by Microsoft with such an odd and ungainly shape ought to know what they’re getting themselves in for: it’s not perfect, it doesn’t fit perfectly, but it has lots of features and once you get past the aesthetics it’s a quality bit of kit.

It can sometimes feel uncomfortable, but its adjustable strap makes up for that. It’s not that bad, though, and as with most things, once you get used to it you don’t even know it’s there. People seem to have judged the device based on 4 hours of testing, and I don’t feel that such a short length of time qualifies one to make broad statements about wearability. One criticism I would level at the hardware rather than the software is that the rim of the screen scratches extremely easily. Mine was ruined after a day of wearing it with the screen facing inwards – which at least reminded me to keep the screen on the top of my wrist – and it’s probably nigh-on impossible to avoid scratching the thing entirely. It’s a shame Microsoft didn’t foresee this and use the same type of glass across the whole screen instead of just the touch areas.

I have read plenty of reviews slamming the heart monitor, but I’ve found it to be accurate enough provided the strap is worn very tightly.

The GPS failed to work the first time I tried to use it, and I was close to sending the device back before trying to get a lock again while standing completely still. This seemed to crack it, and its accuracy seems on a par with my iPhone 6. Locking still takes 10-15 seconds, though I don’t think that’s too long to wait for such a small device, and once locked I was up and running (literally). The device also supports ‘workouts’ and cycling, neither of which I’m particularly famous for.

The step counter is, well, a step counter. Not much more to be said about that really. It seems quite accurate so long as you’re not looking at the device while walking – its motion sensors can’t handle that, but I think that’s fair enough given the simplistic nature of the means by which the steps are gathered.

The device is certainly a gateway into realising just why wearable tech is so popular – having my wrist gently vibrate when my phone receives a notification – as well as providing the ability to read through most of them – has become extremely useful. The choice of whether I should pull out my phone or not allows me to stay engaged with people longer than I normally would. That doesn’t say too much about my manners, but I’m hopeful those around me now consider me to be strangely more present than I was previously.

The Microsoft Band also has an ultraviolet (UV) screener, providing live results from a scan and advising just how long it’ll take you to burn. I didn’t realise I needed this in my life until I had it, and it really is a worthwhile thing to have to hand with two young kids.

The final app worth a mention is the sleep monitor. Just before your head hits the pillow, switch on sleep mode and come morning you’ll have interesting statistics about time to fall asleep, length of sleep, the number of times you woke up and how much deep and light sleep you had. I’ve read that the ability of a wrist-based device to measure your deep sleep is non-existent, but it’s nonetheless interesting to find out information on a subject you tend to have none about, being asleep n’all.

The final piece of the puzzle is the software the Microsoft Band syncs with on your mobile phone or tablet. My phone of choice is an iPhone 6, and I’ve found no issues in pairing or syncing the device. The information displayed is of interest, and when using GPS the Bing maps have a nice little three-coloured snail-to-cheetah legend showing your speed compared to your average at different points of your run.

In summary, I think the Microsoft Band is a great piece of equipment for £150. It offers lots more than other devices in the same price bracket, and you’ll get benefit from having your wrist tethered to your phone. Its phone agnosticism is one of its great selling points in my book.


What can we learn by refactoring without touching our unit tests? A convert’s explanation of outside-in testing.

August 26, 2015 @ 8:13 pm Posted to .Net, Mocking, Testing, Unit Testing by Antony Koch

Everywhere I’ve worked develops ‘features’. No great revelation there. However a feature is often, or at least ought to be, quite a small piece of functionality. A stripe of business value carved out of a larger solution. This stripe can be tested and released within a sprint while adding value to the business.

A lot of features can be expressed using a ‘happy path’: that is to say, there tend to be very few ways in which the feature can be executed without an exception being thrown or a downstream entity failing to be found.

A typical scenario expressed in Gherkin takes the following form:

Given a context
When an action is performed
Then there is an outcome

Expanding upon this with a more real world set of scenarios:

Given an Amazon Prime customer
When the customer searches for goods
Then a Prime delivery option should be displayed

Given a non Amazon Prime customer
When the customer searches for goods
Then no Prime delivery option should be displayed

Simple enough, right? Those 6 lines of text define everything we’re about to code up.

So of the time spent coding up, how much of it, in a TDD setting, is spent on unit tests? 50%? 75%? That’s 75% of your time spent writing code that no user will touch.

Now let’s say the next feature comes along and we learn that our initial solution requires some refactoring. Where we once had one query and no commands we may now need nested queries and a single command along with, say, an extra integration point with a cloud service.

How much time do you need to spend rewriting those unit tests? Do you reconsider whether or not to refactor because of the burden of rewriting those tests? I know I have.

Now imagine you had no unit tests. Only acceptance, or black box, tests. Same web page. Same Prime indicator. Same test. How much time do you spend writing tests when you refactor your underlying solution?

If you’ve got it right?

None.

100% of your time writing production code. That’s what it’s supposed to be about, right? A bright idea hits you half way through your refactor. Your tests are still green. What’s the cost of experimenting?

Nothing.

You can commit what you have now – it’s working – and start tinkering. Your creativity starts to flow. Your repertoire of enterprise patterns can come to the fore. Your factory factory loses its purpose: you don’t need interfaces just so you can mock, just so you can abstract.

This may seem somewhat preachy, but I feel like this is the right way to do things.

More blog posts are to follow.


What is a Tech Lead?

August 25, 2015 @ 10:27 am Posted to .Net, Agile, Consultancy by Antony Koch

The tech lead title seems to be used in wider and wider circles, with each company defining the role in different ways. So what is a tech lead, and what are their responsibilities?

To my mind, the responsibility flows in three directions:

  1. Upwards: Responsible to the board for delivering the project to a high technical standard, with an architecture that adheres to the company’s core principles and a solution that befits the budget and scope.
  2. Downwards: Ensuring the team follows rigorous coding practices by laying out the framework for them to do so. Ensuring TDD is used and that the acceptance tests are kept up to date.
  3. Outwards: To the client. Ensuring their internal infrastructure is able to support the solution as designed, that they feel engaged in the creation and delivery of their project at a technical level, and that they have adequate access to see their work progressing steadily.

In my previous role the tech lead was the most senior technical consultant assigned to a project, and ultimately the person who hands the keys over to the client. They are responsible for defining the architecture, securing sign-off from the other tech leads, and defining the engineering standards to be followed by the development team.

Ensuring a good solution at the code level is critical in order to avoid major refactors down the line. Rigor around coding also ensures new features can be added with minimal risk by maximising code reuse. Taken to the next level, the tech lead’s job is to look ahead and build a solution that takes the future into account while not holding back development work. This can be a bit of a juggling act sometimes, with some observers feeling that work is being done that is not of direct benefit in terms of features to the client. This plays into the consultancy part of the role – talking clients round to doing something now for a payoff in the future.

The final piece of the puzzle for me is to ensure that the financial implications of any solution are considered. One example from my own experience is to treat high usage rates of a system as a pointer to high costs if they’re not caught soon enough. Large numbers of files with a high number of requests can quickly become very expensive, meaning compression and minification of static assets take on more meaning than simply performance. It means cold, hard cash to clients.
