F# on aspnetcore: Escaping the framework

February 15, 2017 @ 2:24 pm Posted to .Net, dotnetcore, F# by Antony Koch

Mark Seemann has both blogged and talked about escaping the OO .Net Web API framework in order to use a more idiomatic functional style. This is achieved by providing a function per verb to the controller’s constructor, and replacing the IHttpControllerActivator:

type CompositionRoot() =  
    interface IHttpControllerActivator with
        member this.Create(request, controllerDescriptor, controllerType) =
            if controllerType = typeof<HomeController> then
                new HomeController() :> IHttpController
            elif controllerType = typeof<DoesSomethingController> then
                let imp x = x * x
                new DoesSomethingController(imp) :> IHttpController
            else
                raise
                <| ArgumentException(
                    sprintf "Unknown controller type requested: %O" controllerType,
                    "controllerType")    

Then in the startup for your app (global or Startup):

GlobalConfiguration.Configuration.Services.Replace(  
    typeof<IHttpControllerActivator>,
    CompositionRoot())

This works great, and I love its honesty. It makes you feel the pain, to quote Greg Young, and in composing tight workflows in your composition root the ‘what’ of your domain is laid bare.

However, this won’t work in aspnetcore, because it’s more Mvc and less WebApi – or, to use Microsoft’s phraseology, more Web and less Http – meaning there’s no IHttpControllerActivator. The fix is simple, and aligned with the terminology: drop the ‘Http’! One instead replaces the IHttpControllerActivator with an IControllerActivator instance registered in the aspnetcore DI framework, and the same results are achieved:

type CustomControllerActivator() =
    interface IControllerActivator with
        member this.Create(c : ControllerContext) : obj =
            if c.ActionDescriptor.ControllerTypeInfo.AsType() = typeof<DoesSomethingController> then
                let imp x = x * x
                new DoesSomethingController(imp) |> box
            else
                invalidArg "controllerType" "Cannot find controller"

        member this.Release (c : ControllerContext, ctrl : obj) =
            ()

And in your OWIN startup:

    member this.ConfigureServices (services:IServiceCollection) =
        services.AddSingleton<IControllerActivator>(new CustomControllerActivator()) |> ignore

        services.AddMvc() |> ignore

Sorted!


A non-generic AutoFixture Create method

October 24, 2016 @ 8:03 am Posted to .Net by Antony Koch

Sometimes I need to dynamically generate test fixtures, but in the test context I don’t have the ability to use generics, having only an instance of a Type.

I looked through the AutoFixture code and managed to find a reflection-friendly way to get back an object, which can then be converted using Convert.ChangeType where necessary. Here’s the snippet:


typeof(SpecimenFactory)
    .GetMethods()
    .Single(x => x.IsStatic
              && x.IsGenericMethod
              && x.Name == "Create"
              && x.GetParameters().Length == 1
              && x.GetParameters().Single().ParameterType == typeof(ISpecimenBuilder))
    .MakeGenericMethod(type)
    .Invoke(fixture, new[] { fixture });


Gauge your code’s adherence to the single responsibility principle

September 26, 2016 @ 7:40 pm Posted to .Net, OO by Antony Koch

During a routine ponderance on software engineering, I was thinking about a conversation recently in which we discussed how and when to copy and paste code. The team agreed that it’s OK to copy code once or twice, and to consider refactoring on the third occasion. This led me to wonder about the size of the code we copy, and how it might indicate adherence to the single responsibility principle.

For the uninitiated reader, the single responsibility principle – one of the first five principles of Object Oriented Programming and Design – can be succinctly summed up as:

“A class should have one reason to change”

The subject of many an interview question, often recited by rote yet often misunderstood, the core premise can – I think – be understood by talking about the code block, or class, at hand, in the form of a response to the question: “Tell me all the reasons you might need to edit this class?” Responses such as:

  • “If we want to change the backing store, we have to change this class.”
  • “If we want to change the business rules for persisting, we have to change this class.”
  • “If we want to change the fields used in the response, we have to change this class.”

When recited in one’s mind, these responses are all that is required. And by considering the answers carefully, we can make an informed decision about whether to refactor, or whether we’re actually happy with what we have, leaving the option to change the class easily at a later date. The latter part of this sentence is critical to becoming a better developer, because what we have might be acceptably incomplete, and refactoring might take an inordinate amount of time while failing to offer enough business value to justify the expense.
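
By way of illustration, here is a minimal Java sketch (all class names are invented for this post): a single class that both formats a report and stores it has two answers to the question above, so splitting it leaves each class with exactly one reason to change.

```java
import java.util.ArrayList;
import java.util.List;

// Before the split, one ReportService would have two reasons to change:
// the report format AND the storage mechanism. After the split, one each.

// Reason to change: the report format.
class ReportFormatter {
    String format(String title) {
        return "Report: " + title;
    }
}

// Reason to change: the storage mechanism (an in-memory list here).
class ReportStore {
    private final List<String> saved = new ArrayList<>();

    void save(String formatted) {
        saved.add(formatted);
    }

    int count() {
        return saved.size();
    }
}
```

Each class now answers the “why would this change?” question with a single sentence, which is the test the paragraph above proposes.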

This said, is copying and pasting code OK? Mark Seemann wrote an excellent blog post on the subject – which I won’t attempt to better – suffice to say I agree, and that it’s OK to copy and paste under a certain set of circumstances. The primary concern is the tradeoff: to suitably generify code requires at most a deeper understanding of the abstractions in play, and at least the ability to introduce dependencies between classes and modules that might not have otherwise been required. A quick copy paste of code that’s unlikely to change is not going to kill anyone. It might introduce an overhead should the code’s underlying understanding change, however volatile concepts do not in the first place represent good candidates for copying and pasting.

Now to wistfully return to the subject at hand – how can we use copying and pasting to judge our code’s adherence to the single responsibility principle? Quite simply: if we can copy only a line or two, then the surrounding code within the method body is perhaps not doing as targeted a job as we might hope. If we can copy entire classes, we can say that we’ve adhered strictly to the core tenets of the single responsibility principle: the class has such a well-defined purpose that it can be lifted and shifted around the codebase with ease.

This means we can judge any of our code in a couple of ways: answer the question “what reasons does this class have to change?”, and be honest with ourselves about our ability to copy and paste this code into a different codebase without it being refactored. Would half of the class be thrown away? Would we have to change a bunch of code in order to fit a different persistence model, say copying from a SQL Server backed system to a system backed by Event Store? I think it’s an interesting idea, and definitely one I’m going to keep trying in the coming days.


A simple Dapper Wrapper

August 5, 2016 @ 10:28 am Posted to .Net by Antony Koch

If you need to sub out some Dapper functionality, and aren’t too worried about the specifics of the call, then I’ve crafted a nifty class you can use to perform just such a task.

In all its glory, the subbable IDapperWrapper, with its default implementation:


    public interface IDapperWrapper
    {
        IEnumerable<T> Return<T>(IDbConnection connection, Func<IDbConnection, IEnumerable<T>> toRun);
        T Return<T>(IDbConnection connection, Func<IDbConnection, T> toRun);
        void Void(IDbConnection connection, Action<IDbConnection> toRun);
    }

    public class DapperWrapper : IDapperWrapper
    {
        public IEnumerable<T> Return<T>(IDbConnection connection, Func<IDbConnection, IEnumerable<T>> toRun)
        {
            return toRun(connection);
        }

        public T Return<T>(IDbConnection connection, Func<IDbConnection, T> toRun)
        {
            return toRun(connection);
        }

        public void Void(IDbConnection connection, Action<IDbConnection> toRun)
        {
            toRun(connection);
        }
    }


How can one ‘keep it simple’ in a complex system?

June 5, 2016 @ 8:56 pm Posted to .Net, Tech by Antony Koch

The often used phrase “Keep it simple, stupid,” abbreviated KISS, is solid advice in the field of software development. We strive for simplicity, planning and refactoring continuously to ensure our code is extensible, reusable, and all those other words ending in ‘ble’ that apply. But what is simple? And how can things be kept simple in a complex system with several complex domains?

Simple is subjective. To some it means few moving parts, to others it means code that reads like a book, even if it’s repeated in several places. Disagreements can easily arise when deciding what simple means. Keeping an open mind is critical with regards to the definition of simplicity, because all sides can have valid arguments.

Complexity, like simplicity, is also subjective. Code can be composed in a complex way, however the component parts may be implemented simply. So can only simple things be over complicated?

There’s a saying – work smart, not hard – that applies to software development more deeply than to many other domains. Finding smart solutions usually means less code, fewer moving parts and a more concept-based approach.

To some developers a concept-based approach can be perplexing. Simplicity masquerading as complexity that remains obscure until scrutinised further. It can also, however, be over-engineered — six classes used where one might have sufficed until a later date.

Does this mean those developers who find smart solutions complex aren’t up to scratch, or should the codebase cater to the needs of the team and be legible to all? In my opinion, no. Smart trumps legibility every time, for the simple reason that legibility is subjective and based on the abilities of the reader. Some are baffled by lambdas and some aren’t. This doesn’t mean teams should avoid lambdas, it means teams should shift dead weight.

All of this begs the question: can something that seems complex always be reduced to something simple? In most cases, yes. A video I watched (which I will need to find later, as the author escapes me) claimed that in most cases the speaker could walk into a company and reduce its code base by 80%. That is to say, a 100,000 line codebase could be reduced to roughly 20,000 lines of code.

Part of the reason this is, in my eyes, true is that teams wilfully introduce technical debt, qualifying its introduction with ‘We’ll fix it if we need to later’. This is a flag to me that says “we know we aren’t doing it properly.” It does not run counter to Ayende’s JFHCI – in fact it works with it: work smart, not hard. His example of hard coding is not an introduction of technical debt, it’s a forward thinking solution with minimal down payment now.

So how do we keep things simple in complex domains? Here’s a bulleted list of how it can be done:

Limit your abstractions

Don’t introduce a phony abstraction in order to make it mockable. If you see an IFoo with a single implementation Foo, you’re overcomplicating and you’re missing the point of interfaces in the first place. Code should be written to concepts as per Ayende’s limit your abstractions post.
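
The distinction can be sketched in a few lines of Java (the names are invented for illustration):

```java
// A phony abstraction: an IGreeter interface with a single Greeter
// implementation adds indirection and nothing else. Prefer the concrete class.
class Greeter {
    String greet(String name) {
        return "Hello, " + name;
    }
}

// An abstraction earns its keep when it names a real concept with genuine
// variation: more than one implementation that callers actually swap between.
interface DiscountPolicy {
    double apply(double price);
}

class NoDiscount implements DiscountPolicy {
    public double apply(double price) {
        return price;
    }
}

class TenPercentOff implements DiscountPolicy {
    public double apply(double price) {
        return price * 0.9;
    }
}
```

Greeter needs no interface; DiscountPolicy gets one because the concept genuinely varies.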

Test outside in

Test your components using their public API. Don’t test the component’s internals, because then you’re testing implementation. This offers two benefits:

  • Get the internals working correctly, quickly, with minimal fuss and with good test coverage
  • Once complete, it allows you to refactor into any concepts you may have uncovered along the way.
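
A minimal Java sketch of the idea (names invented): the test exercises only the public API, so the private helper can be renamed, inlined or replaced while the test stays green.

```java
import java.util.ArrayList;
import java.util.List;

// The public API is add() and total(); roundToPence is an internal detail.
// Tests that only touch the public API survive any refactor of the internals.
class Basket {
    private final List<Double> prices = new ArrayList<>();

    void add(double price) {
        prices.add(price);
    }

    double total() {
        double sum = 0.0;
        for (double p : prices) {
            sum += p;
        }
        return roundToPence(sum);
    }

    // Private: free to change without breaking any outside-in test.
    private double roundToPence(double value) {
        return Math.round(value * 100) / 100.0;
    }
}
```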

Work smart, not hard

Highly focussed components with specific jobs connected in a smart way. Some people might not understand them; it’s your job to enlighten them. If they still don’t understand it, cut them loose and hire someone who does. The inverse is true too, though — if everyone disagrees with your code it’s either wrong and you need to learn, or it’s right and you need to leave.


A quick and dirty XUnit/AutoFixture MongoDb test harness

September 21, 2015 @ 6:37 am Posted to .Net, Testing by Antony Koch

As part of my new journey into outside-in testing I’ve – of course, for those that know me – looked into the tooling aspect. A part of that is implementing a MongoDB test harness injected via AutoFixture’s XUnit plugins.

As a quick side note, I think tooling is critical to elevating oneself from being a good developer to becoming a great one, as it removes the need to focus on anything other than writing production code. An example of this is my usage of NCrunch for continuous test execution. I can’t extol enough the benefit of never stopping to run all my tests, and it’s been great to see the ongoing development of NCrunch since its free days.

Anyway – back to the point at hand.

I am building a MongoDb-backed application outside of my regular 9-5 engagement with my banking client, Investec, and needed a quick way to access the db in my tests. Avoiding ceremony was critical, as I’m really into working on the production code as much as possible, not wasting time writing tests that will need to be rewritten when my implementation invariably changes as I learn more about the system I’m building. My app involves a Twitter login, meaning everything I do from a database perspective is best served using the Twitter UserId field. For cut one, I’ve come up with the following code, which I’ve added comments to for clarity:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Configuration;

using MongoDB.Bson;
using MongoDB.Driver;

namespace Amko.TwendingToday.Fast.Mongo
{
    public class MongoDbHarness : IDbHarness
    {
        private IMongoDatabase _db;

        public MongoDbHarness()
        {
            // Extract connection string from config
            var connectionString = ConfigurationManager.AppSettings[ServiceConstants.MongoDbUri];

            // Get a thread-safe client object by using a connection string
            var mongoClient = new MongoClient(connectionString);

            // Get a reference to a server object from the Mongo client object
            const string databaseName = "twitterapp"; // I've taken the real name out here to avoid giving spoilers :)

            // bootstrap our DB object
            _db = mongoClient.GetDatabase(databaseName);

        }

        public async Task<BsonDocument> GetAsync(string collection, string userId)
        {
            // Extract collection as a set of BsonDocument. The production code deals in
            // concrete domain objects, but for test purposes I'm loathe to spend time
            // building and rebuilding the ever changing domain in my test code. It
            // doesn't offer much over string indexes
            var coll = _db.GetCollection<BsonDocument>(collection);

            // build a filter for the get
            var filterDefinition = Builders<BsonDocument>.Filter.Eq("UserId", userId);

            // pull out the matching docs asynchronously
            var list = await coll.Find(filterDefinition).ToListAsync();

            // return the first one
            return list.FirstOrDefault();
        }
    }

    public interface IDbHarness
    {
        Task<BsonDocument> GetAsync(string collection, string userId);
    }
}

Although I’ve mentioned it in the above comments, I think it’s worth expanding on the test domain and my decision not to create it. It’s common to build a copy of the domain object in test code to deserialise database records into, or in worse cases to use the production models, but I decided against this. I feel that for it to be a true outside-in test I should express my query against the resultant JSON from the DB as I might query the JSON in the real world, as it’s an extra verification step against how I think the system is working.

The injection of this is handled by AutoFixture using the following specimen builder:

    public class MongoHarnessSpecimenBuilder : ISpecimenBuilder
    {
        public object Create(object request, ISpecimenContext context)
        {
            var type = request as Type;

            if (type == null || type.IsPrimitive)
            {
                return new NoSpecimen(request);
            }

            if (type == typeof(IDbHarness))
            {
                return new MongoDbHarness();
            }

            return new NoSpecimen(request);
        }
    }

And this allows me to build my tests like this:

        [Scenario]
        [StructuremapAutoData]
        public void TracksUserWhenTheyLogin(
            AuthController sut, 
            ActionResult actionResult,
            AuthenticatedUser user,
            IDbHarness dbHarness,
            DateTime now,
            string returnUrl)
        {

I’ll dig out some resources for building the AutoFixture framework backing my AutomapData attribute and post them here later.


What can we learn by refactoring without touching our unit tests? A convert’s explanation of outside-in testing.

August 26, 2015 @ 8:13 pm Posted to .Net, Mocking, Testing, Unit Testing by Antony Koch

Everywhere I’ve worked develops ‘features’. No great revelation there. However a feature is often, or at least ought to be, quite a small piece of functionality. A stripe of business value carved out of a larger solution. This stripe can be tested and released within a sprint while adding value to the business.

A lot of features can be expressed using a ‘happy path’: that is to say, there tend to be very few ways in which the feature can be executed without any exceptions being thrown or any downstream entities not being found.

A typical scenario expressed in Gherkin takes the following form:

Given a context
When an action is performed
Then there is an outcome

Expanding upon this with a more real world set of scenarios:

Given an Amazon Prime customer
When the customer searches for goods
Then a Prime delivery option should be displayed

Given a non Amazon Prime customer
When the customer searches for goods
Then no prime delivery option should be displayed

Simple enough, right? Those 6 lines of text define everything we’re about to code up.
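
Mapping each Given/When/Then onto an arrange/act/assert test is mechanical. Here’s a deliberately tiny Java sketch of the two Prime scenarios (Customer and SearchPage are invented stand-ins, not any real Amazon API):

```java
// Given: a customer who does or does not have Prime.
class Customer {
    final boolean hasPrime;

    Customer(boolean hasPrime) {
        this.hasPrime = hasPrime;
    }
}

// When: the customer searches; Then: the test asserts only on the outcome.
class SearchPage {
    private final Customer customer;

    SearchPage(Customer customer) {
        this.customer = customer;
    }

    // The single observable the acceptance test cares about.
    boolean primeDeliveryShown() {
        return customer.hasPrime;
    }
}
```

The assertions mirror the Then lines exactly, one per scenario, with no reference to anything internal.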

So of the time spent coding up, how much of it, in a TDD setting, is spent on unit tests? 50%? 75%? 75% of your time spent writing code that no user will touch.

Now let’s say the next feature comes along and we learn that our initial solution requires some refactoring. Where we once had one query and no commands we may now need nested queries and a single command along with, say, an extra integration point with a cloud service.

How much time do you need to spend rewriting those unit tests? Do you reconsider whether or not to refactor because of the burden of rewriting those tests? I know I have.

Now imagine you had no unit tests. Only acceptance, or black box, tests. Same web page. Same Prime indicator. Same test. How much time do you spend writing tests when you refactor your underlying solution?

If you’ve got it right?

None.

100% of your time writing production code. That’s what it’s supposed to be about, right? A bright idea hits you half way through your refactor. Your tests are still green. What’s the cost of experimenting?

Nothing.

You can commit what you have now – it’s working – and start tinkering. Your creativity starts to flow. Your repertoire of enterprise patterns can come to the fore. Your factory factory loses its purpose: you don’t need interfaces just so you can mock, just so you can abstract.

This may seem somewhat preachy, but I feel like this is the right way to do things.

More blog posts are to follow.


What is a Tech Lead?

August 25, 2015 @ 10:27 am Posted to .Net, Agile, Consultancy by Antony Koch

The tech lead role seems to be used in wider and wider circles, with each company defining the role in different ways. So what is a tech lead, and what are their responsibilities?

To my mind, the responsibility flows in three directions:

  1. Upwards: Responsible to the board for delivering the project to a high technical standard, with an architecture that adheres to the company’s core principles and a solution that befits the budget and scope
  2. Downwards: Ensuring the team follows rigorous coding practices by laying out the framework for them to do so. Ensuring TDD is used and that the acceptance tests are kept up to date.
  3. Outwards: To the client. Ensuring their internal infrastructure is able to support the solution as designed, that they feel engaged in the creation and delivery of their project at a technical level, and that they have adequate access to see their work progressing steadily.

In my previous role the tech lead was the most senior technical consultant assigned to a project, and ultimately the person who hands the keys over to the client. They are responsible for defining the architecture, securing sign-off from the other tech leads, and defining the engineering standards to be followed by the development team.

Ensuring a good solution at the code level is critical in order to avoid major refactors down the line. Rigor around coding also ensures new features can be added with minimal risk by maximising code reuse. Taken to the next level, the tech lead’s job is to look ahead and build a solution that takes the future into account while not holding back development work. This can be a bit of a juggling act sometimes, with some observers feeling that work is being done that is not of direct benefit in terms of features to the client. This plays into the consultancy part of the role – talking clients round to doing something now for a payoff in the future.

The final piece of the puzzle for me is to ensure that the financial implications of any solution are considered. One example from my own experience is to treat high usage rates of a system as a pointer to high costs if not caught soon enough. Large numbers of files served across a high number of requests can quickly become very expensive, meaning compression and minification of static assets take on more meaning than simply performance. It means cold, hard cash to clients.


Types of Interview Candidates

August 25, 2015 @ 10:24 am Posted to .Net, JavaScript by Antony Koch

The first question I like to ask interview candidates is to rate themselves out of 10 as a C# (or JavaScript) developer. I feel the question offers several insights into the candidate, as well as providing a solid base for the type, and depth, of questions one might ask.

Firstly it provides an opportunity for the candidate to think about the intent of what I’m asking and act accordingly: why is this important? Will I be hired based solely on this answer? The answer to the latter question should obviously be no, giving the former much greater meaning. Across perhaps 10 or so candidates I’ve received exactly two numbers in response: 8.5 and 7.

The 8.5s

I’ve found that most developers who don’t get past the phone interview tend to rate themselves near a 9 out of 10. The reason is that they’re unaware of the breadth of .Net, falsely thinking that they’ve discovered all there is to discover and are masters of their domain. They often come from small to medium sized companies where they were the smartest person in the room. Hubris has informed them it’s due to their talent, but the reality is that they’ve lacked a challenge and lack the desire to step outside their comfort zone and get stuck into some of the darker places software development can take them.

Simple questions tend to trip them up. Some of these niners mis-spell SOLID on their CVs. Seriously.

The 7s

This is the answer I would give, provided the opportunity to justify it. .Net is big. I know a lot of it, but I don’t know what I don’t know – I only know that there’s probably a lot I don’t know. This is what I want to hear. A humble developer is one who tends to get their head down and learn those things that they’ve just found out they don’t know. Good devs struggle to move positions sometimes, afraid they’re not good enough. Lesser devs are ignorant of their failings and consequently move often, positioning themselves in the same situation as they were in before, only at the new market rate.

That said, not all 7s are blinders. Some 7s are mid-level and consider themselves so. Some mid-level developers are considerably better. I would rate myself as a more senior developer given my experience, but then again: so does everyone else. This moves me into the loop of self doubt, hopefully reinforcing my previous statement about being self critical but possibly highlighting that you shouldn’t read this blog any more as it’s going to contain primarily garbage.


OO Principles, #1: Tell don’t ask

October 30, 2013 @ 8:21 am Posted to .Net, Java, OO by Antony Koch

I first heard “Tell don’t ask” during a course being given by ThoughtWorks and it’s certainly one of a set of phrases and one-liners that run through my head when coding.

I find that a core set of easy to remember principles helps govern my coding during planning, implementation and refactoring, and ultimately saves me quite a bit of time in the process. One could argue that I’m performing a miniaturised version of Agile in my head, holding a planning session, then performing code reviews before having a retrospective, all in the space of a few minutes.

The main principle behind “Tell, don’t ask” (which I’ll refer to as TDA from now on) is that you should tell your objects what you want them to do, rather than asking them about their state and performing additional calls based on the answer. This encapsulates the business logic in the correct place and can prevent mutable properties from leaking to external classes where their state can be changed incorrectly, causing a direct and negative effect on the system’s overall state.

Example

Here’s a trivial example but hopefully it helps to explain the principle in greater detail.

public class Converter {
    private boolean convertFromBase;
    private IMap mapper;

    {
        convertFromBase = false;
        mapper = new XmlResponseMapper();
    }

    public boolean getConvertFromBase() {
        return this.convertFromBase;
    }

    public void setConvertFromBase(boolean convertFromBase) {
        this.convertFromBase = convertFromBase;
    }

    public MappedObject createFrom(Response response) {
        return mapper.map(response);
    }

    public MappedObject convertFromBase(Response response, MappedObject mappedObject) {
        return mapper.map(response, mappedObject);
    }
}

So here we have a simple class that performs conversion to either a fresh instance of an object, or maps properties from the response into an existing object. Nothing too daring. Let’s look at an example of this object’s consumer:

public class ResponseConsumer {
    private Converter converter = new Converter();
    private MappedObject mappedObject;
    public ResponseConsumer(MappedObject mappedObject) {
        this.mappedObject = mappedObject;
    }

    public MappedObject handleResponse(Response response) {
        converter.setConvertFromBase(mappedObject != null);
        if (converter.getConvertFromBase()) {
            return converter.convertFromBase(response, this.mappedObject);
        }

        return converter.createFrom(response);
    }
}

This object consumes our converter and, based on its own instantiation and state, asks the converter if it should convert from base, then calls a different method based on whether the response is true or false. This is awful code and breaks a number of other paradigms in the process, but that only serves to prove the point!

The main focus of this code is the handleResponse method. We tell it to set convertFromBase to a boolean (meaning the class is mutable), then (stupidly, I’ll admit) ask it whether we should convert from base and call convertFromBase if it’s true and createFrom if it’s false. What this code, in effect, is doing is handling the business logic of conversion outside the converter class.

That doesn’t sound right.

So if we were to fix this, how might we do that? Well, for starters, let’s think about the abstraction. We have a response and we want to convert it into an object our system understands. So our class which understands how to handle a response should hand off the response, or parts of it, to a class that understands how to convert it. A single point of entry, ‘convert’, would be ideal. Let’s make one! First of all, here’s our response consumer class:


public class ResponseConsumer {
    private Converter converter;

    public ResponseConsumer(MappedObject mappedObject) {
        converter = new Converter(mappedObject);
    }

    public MappedObject handleResponse(Response response) {
        return converter.convert(response);
    }
}

We can see this has been greatly reduced in size, with methods down to a single line, including our new ‘convert’ method. Let’s look at the converter class:


public class Converter {
    private final MappedObject mappedObject;
    private IMap mapper;

    {
        mapper = new XmlResponseMapper();
    }

    public Converter(MappedObject mappedObject) {
        this.mappedObject = mappedObject;
    }

    private boolean getConvertFromBase() {
        return this.mappedObject != null;
    }

    private MappedObject createFrom(Response response) {
        return mapper.map(response);
    }

    private MappedObject convertFromBase(Response response, MappedObject mappedObject) {
        return mapper.map(response, mappedObject);
    }

    public MappedObject convert(Response response) {
        if (getConvertFromBase())
            return convertFromBase(response, mappedObject);

        return createFrom(response);
    }
}

This now holds the logic of conversion, encapsulated within one method: convert. This method knows how to convert and knows it very well, handing off to other (now private) methods. The state of this class cannot be changed now either, making it immutable, therefore allowing instances to be more trustworthy.

The secret here was to first consider how we might tell the class what to do, in this case: convert. Once we realise we only have one thing to convert (a Response) then we should create the class according to that. We should tell the converter to convert and let it do all the dirty work.

Summary

I hope this made some sense, especially as it’s my first attempt at this sort of blog post. (I also stopped writing it half way through to go on honeymoon, so a bit of a refresh had to happen in order to finish it.) The main crux: as soon as you spot branches of code using if/else structures, examine the object(s) upon which your if statement relies. If your if statement is asking another object to fulfil the business logic, that is an indicator to extract the logic into the object in question as a derived property (C#) or a method (Java). Tell it to figure out the boolean value instead of asking it!
