Babeling in defence of JavaScript

And so it goes: the eternal question “What is wrong with JavaScript?” and the inevitable, inescapably droll reply:

Oh, ho ho ha ha haaaaaaaaaaah… The gag never gets less funny. I need to be clear that Scott Hanselman is one of my favourite people in the public eye. I hold him to be an industry treasure and I’m fully aware that he is just poking fun here, but we’ve all seen this dialogue before and we all know it is not always so light-hearted.

At the end of the day, these scenarios showing how ‘broken’ JavaScript is are almost always bizarrely contrived examples that can be easily solved with the immortal words of the great Tommy Cooper:

Patient: “Doctor, it hurts when I do this”
Doctor: “Well, don’t do it”

Powerful Facts

Let’s be absolutely clear: JavaScript is an incredibly powerful language. It is the ubiquitous web programming language and, of course, it currently enjoys a monopoly that ensures this status. That does not change the fact that JavaScript runs on the fastest, most powerful and most secure websites. Clearly, in the right hands, it does exactly what is needed.

JavaScript is free with a very low barrier to entry – all you need is a web browser.

JavaScript, in its Node.js guise, powers Netflix, LinkedIn, NASA, PayPal… the list goes on and on.

Furthermore, it is easy enough to learn and use that it is a firm favourite for beginners learning programming. And it is in this last point that we observe some particularly harmful industry attitudes towards JavaScript.

What’s The Damage?

So now that we can all agree that Tommy Cooper has fixed JavaScript from the grave, and now that we’re clear about just how seriously capable JavaScript is as a language, we can get onto the central point: industry attitudes to JavaScript are damaging. Many languages, such as SQL and PHP, are common targets of derision, and each case has its own characteristics and nuances, but there is something notably insidious about the way JavaScript is targeted.

One of the more painful examples of JavaScript’s negative press can be observed in the regular reports from those learning programming that they feel mocked for learning JavaScript. This is, quite frankly, appalling. We work in an industry that is suffering from a massive global undersupply of talent and we’re making potential recruits feel like crap. Well done, team! Even globally established personalities such as Miguel de Icaza of Xamarin fame can’t help but fan these flames. What chance do new recruits have?

The JavaScript Apocalypse?

Moving on to the issue that prompted me to start writing this article: WebAssembly is here. It has a great website explaining all about it: webassembly.org. It even has a logo! It also has a bunch of shiny new features that promise to improve the experience of end users browsing the web.

Of course WebAssembly has a logo!

From distribution, threading and performance improvements to a new common language with expanded data types, WebAssembly offers a bunch of improvements to the web development toolkit. I’m all for these changes! JavaScript and the web programming environment are far from perfect and these are another great step in the right direction.

Of course WebAssembly’s common language also promises to open up the web client for other programming languages. “Hurrah!” I hear many cheer. I’m seeing countless messages of support for the death of JavaScript at the hands of the obviously infinitely superior quality languages of C#, Rust and Java 🙄 Yeah… I’m not so sure…

Nah!

Like most programming languages, JavaScript is a product of its environment: namely, the web browser. It did have competition in the early days with VBScript back in IE4/5… I think… It was a long time ago. But otherwise it has developed on its own, in response to demand from the web developer community and to the changing web landscape. The modern incarnations of JavaScript (ECMAScript 6/7/8) are incredibly powerful, including modern language features such as an async programming model, functional capabilities and so on. In many ways modern JavaScript resembles the languages to which it is so frequently compared, but it also lacks many language features that are less relevant to web client programming, such as generics and C#’s LINQ. Its loose typing system makes it well suited to working with the HTML DOM. Overall it would appear, as you might expect, that JavaScript is made for web client programming and is in fact the best choice for this task.

Even the WebAssembly project agrees, confirming on the project website that JavaScript will continue to be the ‘special’ focus of attention. And you know what? This is a good thing!

Babel

Look, we already have other languages that compile for the web client, but I don’t see any existential threat from the (albeit beautiful) CoffeeScript or from the (misguided) TypeScript. Sure, WebAssembly will make this more effective, but the reasons that TypeScript hasn’t already taken over the web development world will still apply to C# and WebAssembly. We have seen a similar battle play out in the database world, where NoSQL was lauded as the slayer of the decrepit 1970s technology we all know as SQL. That was until NoSQL databases started to implement SQL. It turns out that SQL is hard to beat when it comes to querying data, which is unsurprising when you consider its 40-odd years of evolution in that environment, and the same rule will apply to any JavaScript challengers.

Personally, I suspect a large part of why JavaScript’s alternatives have failed to take hold is that web client programming doesn’t need the added static typing and so on; in my experience all these challengers do is introduce compiler warnings and complexity that waste time. Ultimately I don’t have all the answers here, but it is fair to say that it would take a serious effort to out-web the language that has evolved for the web environment.

The Tower of Babel (from Wikipedia)

Where my real concern lies is in the well-known problems brought about by having too much choice when it comes to communicating. We use human-readable programming languages so that we can communicate our programs to each other. With that in mind, it is clearly more effective in the long run if we all learn to talk the same language. The story of the Tower of Babel shows us that we have long considered too much choice to be a very bad thing when it comes to communication.

It would be a frustrating situation indeed if we were to end up having to consider and manage the overhead of multiple languages for a single task, all because of some daft attitudes towards JavaScript. Furthermore, businesses that are already struggling to find web developers shouldn’t also have to worry about whether those developers are Rust, Java or C# web developers. JavaScript is the right tool for the job, so let’s stop wasting time with all the JavaScript bashing and get on board with an incredibly powerful language we can all understand!


All you (probably) need to know about Microservices

So I’ve finally succumbed to writing about that topic with the never-ending hype cycle: microservice architecture.

Where to begin? Well, a distributed microservice architecture is extremely complicated to build, maintain and debug. It’s something born of very particular organisational constraints. If you really need to go down this path then you’re probably working at an organisation similar to Netflix or McDonald’s, and you really wouldn’t be here scouring for info.

Fin.

P.S. There may of course be academic reasons for learning about this architecture and, if that’s the case, I promise you’ll get mega marks for focusing your essay on the ‘why’ rather than the ‘how’. With that in mind, I highly recommend this: https://martinfowler.com/articles/microservice-trade-offs.html


A functional solution to interfacitis?

/ˈɪntəfeɪsʌɪtəs/
noun
noun: interfacitis
inflammation of a software project, most commonly from overuse of interfaces and other abstractions but also from… well… actually it’s mostly just interfaces.

An illness of tedium

Over the years, my experience has shown me that unnecessary abstractions cause some of the most significant overheads and inertia in software projects. Specifically, I want to talk about one of the more tedious and time-consuming areas of maintaining abstracted code: the overzealous use of interfaces (C#/Java).

Neither C# nor Java is a particularly terse language. When compared to F# with its Hindley-Milner type inference, working in these high-level OO languages often feels like filling out forms in triplicate. All too often I have seen the already verbose syntax of these languages amplified by dozens of lengthy interfaces, each only there to repeat the exact signature of its single implementation. I’m sure you’ve all been there. In my experience this is one of the more painful areas of maintenance, causing slowdowns, distraction and lack of focus. I’ve been thinking for some time now that we’d probably be better off using interfaces (or even thin abstract classes) only when absolutely necessary.

What is necessary?

I like to apply a simple yardstick here: if you have a piece of application functionality that necessitates the ability to call multiple different implementations of a component, then you probably require an interface. This means situations such as plugins or provider-based architectures would use an interface (of course!) but your CustomerRegistrationService that is called only by your CustomerRegistrationController will not. The message is simple: don’t introduce unnecessary bureaucracy for the sake of it.
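
By way of illustration, here is a minimal sketch of the disease itself: an interface that exists purely to echo the signature of its one and only implementation. The names are borrowed from the paragraph above and the bodies are entirely illustrative:

public interface ICustomerRegistrationService
{
    void RegisterCustomer(string name, string email);
}

// The one and only implementation, which the interface exists solely to mirror.
public class CustomerRegistrationService : ICustomerRegistrationService
{
    public void RegisterCustomer(string name, string email)
    {
        // ... the actual registration logic ...
    }
}

// The sole caller; nothing here benefits from the indirection.
public class CustomerRegistrationController
{
    private readonly ICustomerRegistrationService _service;

    public CustomerRegistrationController(ICustomerRegistrationService service)
    {
        _service = service;
    }
}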

There are, I admit, cases where you might feel abstraction is required. What about a component that calls out to a third-party system on the network? Surely you want to be able to isolate this behind an interface? And so I put it to you: why do you need an interface here? Why not use a function? After all, C# is now very well equipped with numerous elegant functional features, and many popular DI frameworks support delegate injection. Furthermore, if you are following the SOLID practice of interface segregation then chances are your interface will contain only one or two method definitions anyway.

An example

So, for those times when you absolutely must abstract a single implementation, here is a simple example of an MVC controller using ‘functional’ IoC:

public class RegistrationController : Controller
{

    private readonly Func<string, RegistrationDetails> _registrationDetailsQuery;

    public RegistrationController(Func<string, RegistrationDetails> registrationDetailsQuery)
    {
        _registrationDetailsQuery = registrationDetailsQuery;
    }

    public ActionResult Index()
    {
        var currentRegistration = _registrationDetailsQuery(User.Identity.Name);

        var viewModel = ViewModelMapper.Instance
            .Map<RegistrationDetails, RegistrationDetailsViewModel>(currentRegistration);

        return View(viewModel);
    }
}
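
For completeness, here is a sketch of how that delegate might be wired up at the composition root. The wiring below is my assumption rather than part of the original setup: it uses Microsoft.Extensions.DependencyInjection, and RegistrationQueries/GetRegistrationDetails are illustrative stand-ins for your real query code and model:

using System;
using Microsoft.Extensions.DependencyInjection;

public class RegistrationDetails { /* query result model (illustrative) */ }

public static class RegistrationQueries
{
    // A static provider is all that is needed: no instance, no interface.
    public static RegistrationDetails GetRegistrationDetails(string userName)
    {
        // ... query the data store for the user's registration ...
        return new RegistrationDetails();
    }
}

public static class CompositionRoot
{
    public static IServiceProvider Build()
    {
        var services = new ServiceCollection();

        // Register the bare function as the dependency the controller asks for.
        services.AddTransient<Func<string, RegistrationDetails>>(
            _ => RegistrationQueries.GetRegistrationDetails);

        return services.BuildServiceProvider();
    }
}

Note that the provider here is a static method, which ties in with the addendum below.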


Addendum

13-March-2018: It has been pointed out to me that a further benefit of this approach is that static providers may also supply IoC dependencies whereas instances are required for interface-based IoC. What are your thoughts on this approach?


Demystifying AI – The AI explosion

This is an article I had originally written as part of a stream of work that has now been put on hold indefinitely. I thought it a shame for it to languish in OneNote.

What’s with all this attention to Artificial Intelligence then?

Well that is a very good question. To be perfectly frank, not that much has changed of late in the world of Artificial Intelligence (AI) as a whole that should justify all the current excitement. That’s not to say that there isn’t cool stuff going on; there really is great progress being made… in the world of Machine Learning. And if we are to begin the process of ‘Demystifying AI’ then this is a very good place to start.


AI is a very broad area of technology encompassing research from robotics to computing emotions (affective computing) and everything in between, including Machine Learning or ‘ML’. As alluded to just a moment ago it is within ML specifically that we are seeing the greatest progress. Think of a modern ‘AI’ technology that is gaining a lot of attention and you can place a safe bet on it specifically using ML techniques: Natural Language Processing? That’s ML. Image classification? That’s ML. Sentiment Analysis? Also ML. The recent news of Go players being defeated by a computer? You guessed it… ML.


What is ML?

ML is an approach to analysing data based on training statistical models to predict outcomes. You may well have come across Statistical (or Linear) Regression back in your school days; this is possibly the best-known example of the range of techniques that make up the world of ML. To put it simply, an ML model learns from past data to make better decisions in the future.
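
To make that concrete, here is a minimal sketch of that simplest of techniques: fitting a straight line to past observations and then using it to predict an unseen value. The language (C#) and the numbers are purely illustrative:

using System;
using System.Linq;

public static class LinearRegressionDemo
{
    public static void Main()
    {
        // Past observations (made-up data): input vs. observed outcome.
        double[] x = { 1, 2, 3, 4, 5 };
        double[] y = { 2.1, 3.9, 6.2, 8.0, 9.8 };

        // Closed-form least-squares fit of the line y = a + b * x.
        double meanX = x.Average();
        double meanY = y.Average();
        double b = x.Zip(y, (xi, yi) => (xi - meanX) * (yi - meanY)).Sum()
                 / x.Sum(xi => (xi - meanX) * (xi - meanX));
        double a = meanY - b * meanX;

        // 'Learning from past data to make better decisions in the future':
        // predict the outcome for an input we have never seen.
        Console.WriteLine($"Predicted value at x = 6: {a + b * 6:F2}");
    }
}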

Diagram: Deep Learning sits within Machine Learning, which sits within AI

Now it’s time to introduce what is arguably the beating heart of the AI frenzy: Deep Learning. While there is no trendy acronym for Deep Learning itself, it is fair to say that it has become a bit of a buzzword. Deep Learning takes its name from the concept of Deep Neural Networks (or DNNs; there’s your acronym!). The useful details of what DNNs are and how they function cannot be easily summarised; suffice it to say that DNNs are an ML technology that borrows heavily from the structure of the brain, hence the ‘neural’ part of the name. [N.B. These details are already planned for a follow-up piece.] To recap: Deep Learning is a subset of ML, which is in turn a subset of AI, and it is Deep Learning that drives the current hype.


What is Deep Learning?

Say you wanted to build some software to identify objects in an image; the usual non-Deep-Learning approach would be to manually write rules into the software to recognise the details you’re looking for. If you wanted to identify whether a picture was of a bird or a cat, you would manually write rules to identify features such as whiskers or ears or wings and so on. This is complicated, time-consuming and error-prone. Deep Learning takes a different approach. Instead, you create a Deep Learning model and then supply it with a bunch of pictures. For each picture you supply, you tell the model whether it is of a bird or a cat. As you supply each image and its corresponding label, the model learns. Once enough data has been supplied you can then supply an image without a label and the model will give an accurate indication of whether it is a bird or a cat.
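
In code terms the workflow looks something like the sketch below. To be clear, IImageClassifier and its members are entirely hypothetical; they stand in for whichever Deep Learning library you might actually use:

using System;

// Hypothetical classifier abstraction; not a real library API.
public interface IImageClassifier
{
    // Training: supply a labelled example, e.g. (pixels, "bird").
    void Train(byte[] image, string label);

    // Prediction: classify an unlabelled image.
    (string Label, double Confidence) Predict(byte[] image);
}

public static class BirdOrCatDemo
{
    public static void Run(IImageClassifier model,
                           (byte[] Image, string Label)[] labelledExamples,
                           byte[] unseenImage)
    {
        // Show the model every picture along with its label; this is the learning step.
        foreach (var (image, label) in labelledExamples)
            model.Train(image, label);

        // Now ask the trained model about a picture it has never seen.
        var (answer, confidence) = model.Predict(unseenImage);
        Console.WriteLine($"Looks like a {answer} ({confidence:P0} confident)");
    }
}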


So what is the ‘explosion’ all about?

Continuing the bird/cat example: the more labelled example pictures you supply to the model, the better the results will be. This seems simple, even somewhat obvious, but it strikes at the heart of the current ‘AI boom’. Deep Learning has been around for a while now, evolving over a period of 30 years more or less, and one of the key reasons it has never been as commercially successful as it is now is that there just hasn’t been enough readily available data. To give you some idea of why this has been an issue: if you want to reach a high level of accuracy when classifying complex pictures then you’re going to need thousands or even millions of examples, depending on the complexity. Well, we now have data, lots and lots and lots of it, and it has never been easier to get our hands on it. Do a quick Google image search for ‘Cat’; there is a rough cut of half your ‘training’ set (*ahem* copyright issues aside) and I’m sure you can figure out how to get the other half.


So we have data, but that isn’t all we need. The other side of the current explosion is raw computing power. Building a statistical model that can accurately identify cats and birds in pictures is very heavy work for a computer, but thankfully, with the advent of cloud-scale computing resources, computing power is now plentiful enough and cheap enough to make running this sort of model both practical and cost-effective. It’s cheap enough that Google can even give this stuff away as an educational toy (https://teachablemachine.withgoogle.com/).


So it’s all about pictures of cats and birds?

Beyond the abundance of data and computing power, probably the most significant factor in the commercial success of Deep Learning is its versatility. This is especially true when considering the success of Deep Learning against other ML techniques which have not gained the same level of attention. If you have enough data, regardless of its form, Deep Learning can be trained to extract knowledge from it. This has sent businesses, scientists and engineers into a global flurry of R&D to find all the amazing ways in which this technology can enhance our lives.

For years now the financial services industry has been at the forefront of applying ML techniques to everything from fraud prevention and risk management to investments and savings predictions; there are few – if any – areas of the industry that have yet to see the benefits of AI.

Manufacturing is seeing growing uptake in the application of ML to improve efficiency through waste reduction and better predictive analysis of production demands and infrastructure maintenance.

More recently, utilities are beginning to get into the ML game, with the UK National Grid striking up discussions with Google to investigate applying the famed DeepMind AI to maximise National Grid’s use of renewables and to balance supply and demand across its nationwide infrastructure more efficiently.

Across all sectors, businesses now find themselves in a position to use ML to better understand and engage with their customers. From utilities gaining greater knowledge of their customers’ consumption habits through to retailers and service providers more effectively capturing sales conversion opportunities, the possibilities are as varied as your data.


Would you like some knowledge with that?

So that concludes this effort to clear away some of the fog and hyperbole from the current AI phenomenon (ahem! It’s all ML, remember!?). In a nutshell, if you have a ton of data and you need to get knowledge from it then Deep Learning could well be your go-to tool.


FileFormatException: Format error in package

OK so we’re all completely clear on what this error means and what must be done to resolve it, right? I mean, with a meaningful error like that, how can anyone be mistaken? Oh? What’s that? You still don’t know? Let’s be a bit more specific: System.IO.FileFormatException: Format error in package. Better? Didn’t think so. That’s because it’s not really an error message at all. I’ll tell you what it is, though: it’s stupid, and even more stupid when you find out what causes it.

I came across this delightfully wishy-washy error when configuring an Umbraco 7 deployment pipeline in TeamCity and Octopus Deploy. The Umbraco .csproj MSBuild file referenced a bunch of files, as you might expect, but I also needed to add a .nuspec file which referenced a bunch of other files. Long story short, the error came about because the files specified by the .csproj overlapped with the files specified by the .nuspec. There were 1000-odd generated files that the NuGet packaging components, in their infinite wisdom, added to the .nupkg archive as many times as they were referenced. NuGet was able to do this silly thing without any complaint, and inspecting the confused package in NuGet Package Explorer, 7-Zip or Windows’ built-in zip handling gave no indication of any issues whatsoever. It was not until Octopus called on NuGet to unpack the archive for deployment that we got the above error.

Stupid, right? Stupid!

FYI: I was able to get to the bottom of this issue, after two freaking days of pain, when I eventually used JetBrains dotPeek to step through the NuGet.Core and System.IO.Packaging components and see what on earth was going on. In the end it was this piece of code in System.IO.Packaging.Package that was causing the issue:

public PackagePartCollection GetParts()
{
...
	PackagePart[] partsCore = this.GetPartsCore();
	Dictionary<PackUriHelper.ValidatedPartUri, PackagePart> dictionary = new Dictionary<PackUriHelper.ValidatedPartUri, PackagePart>(partsCore.Length);
	for (int index = 0; index < partsCore.Length; ++index)
	{
	  PackUriHelper.ValidatedPartUri uri = (PackUriHelper.ValidatedPartUri) partsCore[index].Uri;
	  // Duplicate part URIs (the same file packed more than once) trip this check.
	  if (dictionary.ContainsKey(uri))
		throw new FileFormatException(MS.Internal.WindowsBase.SR.Get("BadPackageFormat"));
	  dictionary.Add(uri, partsCore[index]);
	  ...
	}
...
}

I mean, why would anyone consuming such a core piece of functionality as this API ever want to know anything about the conditions that led to the corruption of a 30MB package containing thousands of files? I mean it’s not like System.IO.Packaging was ever intended to be re-used all across the globe, right?

Anyway, here’s the error log to help others searching for this error and stuff.

[14:21:27]Step 1/1: Create Octopus Release
[14:21:27][Step 1/1] Step 1/1: Create Octopus release (OctopusDeploy: Create release)
[14:21:27][Step 1/1] Octopus Deploy
[14:21:27][Octopus Deploy] Running command:   octo.exe create-release --server https://octopus.url --apikey SECRET --project client-co-uk --enableservicemessages --channel Client Release --deployto Client CI --progress --packagesFolder=packagesFolder
[14:21:27][Octopus Deploy] Creating Octopus Deploy release
[14:21:27][Octopus Deploy] Octopus Deploy Command Line Tool, version 3.3.8+Branch.master.Sha.f8a34fc6097785d7d382ddfaa9a7f009f29bc5fb
[14:21:27][Octopus Deploy] 
[14:21:27][Octopus Deploy] Build environment is NoneOrUnknown
[14:21:27][Octopus Deploy] Using package versions from folder: packagesFolder
[14:21:27][Octopus Deploy] Package file: packagesFolder\Client.0.1.0-unstable0047.nupkg
[14:21:28][Octopus Deploy] System.IO.FileFormatException: Format error in package.
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.GetParts()
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.Open(Stream stream, FileMode packageMode, FileAccess packageAccess, Boolean streaming)
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.Open(Stream stream)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.GetManifestStreamFromPackage(Stream packageStream)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.c__DisplayClassa.b__5()
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.EnsureManifest(Func`1 manifestStreamFactory)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage..ctor(String filePath, Boolean enableCaching)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.PackageVersionResolver.AddFolder(String folderPath)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.CreateReleaseCommand.c__DisplayClass1_0.b__5(String v)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.c__DisplayClass15_0.b__0(OptionValueCollection v)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.ActionOption.OnParseComplete(OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.Option.Invoke(OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.ParseValue(String option, OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.Parse(String argument, OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.c__DisplayClass26_0.b__0(String argument)
[14:21:28][Octopus Deploy]    at System.Linq.Enumerable.WhereArrayIterator`1.MoveNext()
[14:21:28][Octopus Deploy]    at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
[14:21:28][Octopus Deploy]    at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.Parse(IEnumerable`1 arguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.Options.Parse(IEnumerable`1 arguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.ApiCommand.Execute(String[] commandLineArguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Program.Main(String[] args)
[14:21:28][Octopus Deploy] Exit code: -3
[14:21:28][Octopus Deploy] Octo.exe exit code: -3
[14:21:28][Step 1/1] Unable to create or deploy release. Please check the build log for details on the error.
[14:21:28][Step 1/1] Step Create Octopus release (OctopusDeploy: Create release) failed

Things I wish I knew 10 years ago: Abstractions

We need to talk about abstractions

The main reason I decided to start this blog is that I have begun working for a company that has genuinely challenged many of my assumptions about how software should be developed. I have spent much of my career learning from the more prominent voices in software development about how to write software effectively. I have learned, practised and preached the tenets of clean code, TDD, layered design and SOLID, to name a few of the better-known programming practices, and had always believed that I was on a true path to robust, maintainable software. Now I find myself in a position where, over the space of just one year, I have already questioned many of the practices I had learned and taught in the preceding decade.

I hope to share on this blog much of what I have discovered of late but for my first entry discussing programming practices I want to talk about abstractions. In particular I want to call into question what I have come to understand as overuse of abstractions – hiding implementations away in layers/packages, behind interfaces, using IoC and dependency inversion – as often encountered in the C#/.NET and Java world.

Abstractions?

I have been wondering lately if I have simply spent years misunderstanding and misapplying abstractions, but I have seen enough code written by others in books, tutorials, blogs, sample code and more diagrams than I can bear to know that I have not been alone in my practices. Furthermore, I have found myself on a few occasions of late in discussions with developers of similar experience who have come to share a similar feeling towards abstractions.

The all too familiar layer diagram. © Microsoft. https://msdn.microsoft.com/en-us/library/ff648105.aspx
A typical layering structure

So what do I mean by abstractions, and what is the point of them, really? The old premise, and the one that I would always reiterate, is that abstractions help enforce separation of concerns (SoC) by isolating implementation details from calling code. The reasoning is that code of one concern should be able to change without affecting code dealing with other concerns, supposedly because it will change for different reasons and at different times. Of course, we mustn’t forget that one of the more natural causes of abstractions is the isolation of logic to enable Unit Testing. Ultimately the result is software written in such a way that code dealing with different concerns is kept separate by abstractions such as interfaces and layers, while making use of IoC and Dependency Injection to wire the abstractions together. Furthermore, it is worth stating that the separate ‘concerns’ touted by such advocacy frequently include Presentation/UI, Service/Application Logic, Business Logic, Data Access Logic, Security, Logging, etc.

[Authorize]
public class StudentController : Controller
{

    private readonly IStudentRepository _repository;
    private readonly IStudentService _service;
    private readonly IUnitOfWork _unitOfWork;

    public StudentController
    (
        IStudentRepository repository, 
        IStudentService service, 
        IUnitOfWork unitOfWork
    )
    {
        _repository = repository;
        _service = service;
        _unitOfWork = unitOfWork;
    }

    public ActionResult UpdateStudentDetails(StudentDetailsViewModel model)
    {
        if (ModelState.IsValid)
        {
            var student = _repository.Get(model.StudentId);

            student.Forename = model.Forename;
            student.Surname = model.Surname;
            student.Urn = model.Urn;

            _service.UpdateStudentDetails(student);

            _unitOfWork.Commit();
        }

        return View(model);
    }
}

Abstracted code, obscurity through indirection.

YAGNI!

I am not about to start claiming that everything should just be thrown together in one Big Ball of Mud. I still feel that SoC is certainly worth following, but it can be effectively achieved through simple encapsulation, such as putting the more repetitive and complex logic of one concern within its own class so that it may be repeatedly invoked by code dealing with other concerns. An example would be the code to take an entity key, fetch and materialise the correlating entity from a data store and return it to the caller. This would be well served by a method on a repository class that can be called by any code that simply needs the entity, as sketched below. Of course packages/libraries also have their place, in sharing logic across multiple applications or solutions.
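
Here is a minimal sketch of that kind of plain encapsulation: a concrete repository class with no interface and no IoC registration. The StudentsContext and Student types are the illustrative EF types used in the controller example further down:

using System.Linq;

// Plain encapsulation: a concrete class that any caller can construct and use.
public class StudentRepository
{
    public Student Get(int studentId)
    {
        using (var context = new StudentsContext())
        {
            // Fetch and materialise the entity for whoever needs it.
            return context.Students.Single(s => s.Id == studentId);
        }
    }
}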

Where I see problems starting to arise is when, for example, the aforementioned repository is hidden behind an interface, likely in a separate layer/package/library, and dynamically loaded by an IoC infrastructure at runtime. Let’s not pull any punches here: this practice hides significant swathes of software behind a dynamic infrastructure which is only resolved at runtime. With the exception of some very specific cases, I see this practice as overused, unnecessarily complex and lacking the transparency that code must have to be truly maintainable. The problem is further compounded by the common definitions of the separate concerns and layers themselves. Experience has shown me that when you come to maintain an application that makes use of all of these practices, you end up with a voice screaming in your head: “Get the hell out of my way!”. The abstractions don’t help like they promise, and all of their complexity creates so much overhead that it slows down debugging and impedes changes of any significant proportion.

With one exception, I have never spoken to anyone who has ever had to swap out an entire layer (i.e. UI, Services, Logic, Data Access, etc.) of their software. I’ve personally been involved in one project where it was required, but it was a likely eventuality right from the start and so we were prepared for it. I have rarely seen an implementation of an abstraction swapped or otherwise significantly altered without affecting its dependents, regardless of the abstraction. Whenever I have seen large changes made to software, they very rarely involve ripping out an entire horizontal layer, tier or storage mechanism. Rather, they frequently involve ripping out or refactoring right across all layers, affecting in one change the storage tables, the objects and logic that rely on those tables, and the UI or API that relies on those objects and logic. More often than not, large changes are made to a single business feature across the entire vertical stack, not to a single conceptual technical layer, and so it stands to reason that, should anything need separating to minimise the impact of changes, it should be the features, not the technical concerns.

Invest in reality

So my main lesson here is this: the reality of enforcing abstractions through layering and IoC is very different from the theory, and it is usually not worth it, certainly when used to separate the typical software layers. With the exception of cases such as a component/plug-in design, I am now completely convinced that the likelihood of layered abstractions and IoC ever paying off is so small that it just isn’t worth the effect these abstractions have on the immediate maintainability of code. In my experience it makes sense not to focus on abstracting code into horizontal layers and wiring it all up with IoC, but to put that focus into building features in vertical slices, with each slice organised into namespaces/folders within the same project (think MVC Areas and, to a lesser extent, the DDD Bounded Context). Spend the effort saved by this simplification keeping the code within the slices clear, cohesive and transparent, so that it is easy for someone else to come along, understand and debug. I’d even go so far as to try to keep these slices loosely dependent on each other, but not to the point that you make the code less readable; i.e. don’t just switch hard abstractions of layers into hard abstractions of slices. I don’t want to offend anyone, I’m just putting my experience out there… why not give this a try… I promise you probably won’t die.

Vertical slices with MVC Areas

Take a look at the following updated controller action. You know almost exactly what it is doing just by looking at this one method. It contains ALL of the logic that is executed by the action, so anyone first approaching this code can be confident in their understanding of the logic without having to dig through class libraries and IoC configuration. Any changes to the action would simply be made here and in the DB project; so much more maintainable! Being completely honest, even recently seeing code written like this would have rubbed me up the wrong way, so I understand if it puts some others on edge, but I’ve come full circle now and am pretty convinced of the simplified approach. And it’s this dichotomy I’d like to discuss.

[Authorize]
public class StudentsController : Controller
{
    public ActionResult UpdateStudentDetails(StudentDetailsViewModel model)
    {
        if (ModelState.IsValid)
        {
            using (var context = new StudentsContext())
            {
                var student = context.Students.Single(s => s.Id == model.StudentId);

                student.Forename = model.Forename;
                student.Surname = model.Surname;
                student.Urn = model.Urn;

                SendStudentDetailsConfirmationEmail(student);

                context.SaveChanges();
            }
        }

        return View(model);
    }

    private void SendStudentDetailsConfirmationEmail(Student student)
    {
        ...
    }
}

Transparent, maintainable, intention-revealing code and no need for IoC!

This is just an opening

So this has been my first attempt to open up some conversation around the use of abstractions in software. I’ve tried to keep it brief and in doing so I’ve only just scratched the surface of what I have learned and what I have to share. There is still so much more for me to cover regarding what I and others I know in the community have been experiencing in recent years: Should we abstract anything at all? What is maintainable if not SoC via IoC? How do we handle external systems integration? What about handling different clients sharing logic and data (UI, API, etc.)? How does this impact self/unit-testing code? When should we go the whole hog and abstract into physical tiers? I could go on… So I intend to write further on this subject in the coming weeks and in the meantime it would be great to hear if anyone has any thoughts on this, good or bad! So drop me a line and keep checking back for further posts.
