Concepts of Compliant Data Encryption


This is a somewhat lengthy article intended to help anyone taking their first steps into encrypting sensitive data in a compliant environment, such as one meeting PCI DSS requirements. The hope is that it serves as an effective stepping stone into the dry, dry world of encryption standards and compliance.


As part of some recent work on a proposal for a PCI DSS compliant solution I found myself having to become intimately acquainted with the concepts and standards for protecting data. My initial foray into this world was met with what felt like an impenetrable wall of esoteric information. I had a few terms to get me started on my research. I knew that the solution had been designed to integrate with a ‘Key Management System’ and to use tiered encryption keys known as a ‘Data Encryption Key’ – for encrypting data – and a ‘Key Encryption Key’ – for encrypting the Data Encryption Key – and that these are used in symmetric ciphers such as AES-256. Now I’m usually pretty good at research (ahem! Googling *cough*) but I struggled to find any clear, easily digestible information on how these concepts all hung together. Wikipedia wasn’t much help, and there was a lot of ambiguity between the various articles provided by vendors, which only served to hinder a fledgling student of the subject. At any rate, I ploughed on and after a few late nights of reading through extensive, lengthy, dry product briefs and standards documents I managed to wrap my head around the problem space. The whole experience drove me to promise myself that I would record this knowledge in a simple form for posterity. So here we go…

What Problem Domain?

Alright, so where to begin?! Let’s start with the basics… The first problem I encountered here was in trying to understand what *this* is even called! Surely once I knew what the problem domain is commonly called, research would be so much easier. If only! Starting with the good ol’ Wikipedia material on ‘Key Management‘ didn’t turn up anything particularly useful. I knew we were looking at using an external Key Management Service (KMS) such as AWS KMS, so looking at the documentation there I found this problem space referred to as ‘Envelope Encryption‘. Interestingly this terminology is also used by Google Cloud Platform (GCP). Oddly, however, more ‘classic’ non-vendor sources such as Wikipedia have no reference to this as established terminology; is ‘Envelope Encryption’ a vendor-specific term? It wouldn’t surprise me if it was, especially given the confusion it raises with PKCS envelopes in the PKI space. Searching for Envelope Encryption does however turn up a Wikipedia article on ‘Key Encapsulation‘, which refers us back to the concepts of asymmetric PKI – GAH! 😩. Even worse than that, some OWASP info I found on the subject referred to this as ‘Tiered Encryption’. Makes sense, but nowhere else seems to use that term. Finally, further digging in Wikipedia turned up ‘Key Wrap‘ as a concept that seems to describe the problem quite well, even referring to NIST SP 800-38F, the standard covering ‘Key Wrapping’ via the AES Key Wrap mode and the use of ‘Key Encryption Keys’. Turns out this also aligns with PCI, ISO and IETF usage. Phew!

So, we’re dealing with Key Wrapping. Good, let’s go.

Gimme the freaking concepts already!

Symmetric Encryption

I’ll set the scene with the most fundamental tool we need to use: symmetric encryption. Protecting data at rest is typically achieved using ‘symmetric encryption‘, i.e. one single secret key for encryption and the same key for decryption. It is more than likely that we’re talking about the NIST approved AES (Rijndael) block cipher to perform the cryptographic operations on our sensitive data. For my fellow Microsoft stack developers you’ll probably be using one of the following APIs:


  • CryptoAPI – also known as CAPI; now obsolete in favour of CNG.
  • Cryptography API: Next Generation – also known as CNG, available since Windows Vista. AES is accessed here via BCryptEncrypt, after opening an AES algorithm provider and setting the GCM chaining mode (BCRYPT_CHAIN_MODE_GCM).


I hope to cover off the differences in the Microsoft Cryptographic APIs in a future post. For now if you are not sure what to use then read up on the various sources above but you’ll probably want to just stick with CNG in your preferred programming model and you should be fine.

Key Management

Using most crypto APIs is a fairly well documented and relatively simple process so we’ll assume you’re not doing anything too crazy and get straight onto key management.

The saying goes that encryption is easy and key management is very, very hard. As I’m sure you are aware, if we only have one secret key for encrypting and decrypting our data then we’d better make jolly well certain that we’re handling that key carefully.

The Wrap

The problem at the root of Key Wrapping is how an information system should store its sensitive data at rest (i.e. on disk, in a filesystem or in a database, etc.) while ensuring Confidentiality, Integrity and Availability (the CIA triad). So this is different from other common problem domains of encryption such as transmission and identity (PKI, PKCS, signing, etc.) and, as such, different concepts apply here.

The ‘wrapping’ part refers to the fact that we want to use two types of keys to protect our data. Specifically when talking about symmetric data encryption we’ll want a data encryption key to protect the data and we’ll also want a key encryption key to protect the data encryption key. For this document I’ll use the terminology DEK (data encryption key) and KEK (key encryption key) as per the terminology accepted by NIST.

KEK, MEK, DEK? What the feck?

It’s worth treading carefully in this space and ensuring that wires don’t get crossed when talking about the different keys. For instance, Microsoft frequently uses the DEK terminology to refer to the data encryption key while at the same time using the term Master Key in its DPAPI and SQL TDE models to refer to the KEK, and AWS KMS uses the term Customer Master Key for the KEK. Where this gets confusing is that standards such as NIST use the term Master Key for something quite different, so it is worth always being aware of your frame of reference when researching in this space. Notably, Google’s Cloud KMS uses the NIST-style DEK/KEK terminology.

Why bother wrapping?

We need a DEK to encrypt our data, that is inescapable. Furthermore application design best practices dictate that it is worth keeping the DEK close to our data so that a) we can encrypt and decrypt our data without sending the sensitive data outside of our sovereignty (ideally without sending it beyond our application scope), and b) so that we can encrypt and decrypt our data without external dependencies and without the cost of network overheads (resilience, performance). But if we simply keep the unprotected DEK next to the data it protects then anyone who gets the data will be able to decrypt it.

This is where key wrapping comes in. By encrypting the DEK at rest we can keep it close to its subjects while keeping it secure, and so we use a KEK to protect the DEK. To ensure that we don’t then have the same issue with an unprotected KEK, we turn to a tamper-proof and standards-compliant key management tool such as a Hardware Security Module or a Key Management Service such as AWS KMS.

Your application should never see the KEK and so all of that key management and all of the complexity that comes with it is outsourced to standards compliant (PCI, FIPS, ISO) suppliers. Instead, our application requests a DEK from the KMS or HSM, which returns the DEK in both encrypted and unencrypted form. We store the encrypted form and use the unencrypted form in a transient process (I’ll cover in-memory DEK protection in a future post), disposing of it when we’re done encrypting. We then call the KMS to decrypt the DEK again at a later time when we need to decrypt the data. In short, key wrapping enables us to decouple key management responsibilities from our application’s data encryption requirements.

For further reading on this I’ll point you to the documentation for AWS KMS as this explains the concepts perfectly clearly. And don’t forget, AWS uses the term Customer Master Key – or CMK – to refer to the KEK!

Key Rotation

The final concept that your solution will need to consider is key rotation. ‘Key Rotation’ refers to the process of continually changing your encryption keys. This is a process that should be factored into the design of your solution and for the most part this should be completely automated and securely out of reach of human eyes. There should however also be provisions for manual intervention in response to security incidents.

Cryptographic Periods

Before we complete the discussion on key rotation we must first cover the inescapably esoteric concept of Cryptographic Periods (or Cryptoperiods). A cryptoperiod is the amount of time that an encryption key should ‘live’. It is not enough to have an encryption key and keep it safe. A key won’t last forever. At some point it will become too weak or compromised to serve its purpose. This could be due to anything from the risk of someone discovering the key to the fact that computers will eventually become powerful enough to break the key’s protection. Cryptoperiods are there to manage the risk of compromised encryption. There are a number of key points to be aware of when dealing with cryptoperiods.

First of all, the timespan is usually calculated not in days or hours but in terms of cryptographic operations. So if you want to know how long a key should live in terms of elapsed time, you should calculate how many encryptions it can be used for (i.e. how many rows of data the key can be used to encrypt) and extrapolate from there.
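As a purely hypothetical back-of-envelope sketch of that extrapolation (both figures below are invented for illustration, not taken from any standard):

```javascript
// Hypothetical figures only: suppose our risk assessment caps this DEK
// at 5 million encryption operations, and the application encrypts
// roughly 10,000 rows per day.
const maxOperations = 5000000;
const operationsPerDay = 10000;

// Extrapolate the operation limit into an elapsed-time cryptoperiod.
const cryptoperiodDays = maxOperations / operationsPerDay;
console.log(`Rotate the DEK roughly every ${cryptoperiodDays} days (~${(cryptoperiodDays / 365).toFixed(1)} years)`);
```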

The calculation for a cryptoperiod must account for a number of factors including the key type, the sensitivity of the data, the amount of time that the data originator requires access to the data, the amount of time that the data recipient requires access to the data as well as environmental factors from the operating environment (how secure is the server, operating system, application?) right up to staff turnover. As a rough guide, for a symmetric data encryption key protecting hundreds of records you could theoretically keep the data encryption key for as long as 3 years. At higher volumes of data you could be getting down to weeks.

I wish I could give even just an example calculation here but, as far as I can tell, this is an intentionally arbitrary concept used by standards such as PCI, FIPS and NIST to force a thought process and internal discussions. There are rough guidelines – such as the aforementioned weeks-to-years for data encryption keys – and as long as you adhere to these and show your working then you should be OK.

On avoiding re-encryption, I have come across a number of instances where it has been suggested that you may be able to negate the need to re-encrypt historical data with new DEKs by reducing the amount of data covered by a DEK, to as low as 1:1. In theory this does make sense, but having discussed it with a PCI QSA it is a non-starter if PCI DSS compliance is your goal. You either re-encrypt your data every 5 years as an absolute maximum, preferably within 1-3 years, or you delete it within that time.

One thing is absolutely clear, however: at the end of a cryptoperiod the key should be securely destroyed, and any data protected with that key should either be re-encrypted with a new key or itself be securely destroyed, the latter being most preferable if at all possible (datensparsamkeit).

Finally, as an FYI, there is some mention of the concept on Wikipedia but it is not very helpful. If you want in-depth detail on the subject then you are best turning to NIST and the indispensable SP 800-57 publication. That is a very dry and prolonged read, but necessary in this matter; it is even directly referenced by PCI DSS 3.2.

And so we return to key rotation…

Key Rotation reprise

Once you know how long you are going to keep your keys you can implement your key rotation policies. Generally speaking these policies will be different for your DEK and your KEK. Your KEK may only require rotation every year, while you will likely require a new DEK every ‘X’ number of encryptions performed, as per your cryptoperiod calculation, with any long-term records requiring re-encryption with a new DEK every few weeks to years. For your KEK and DEK the process is similar in that you first create a new key, use that new key to encrypt your protected data, then dispose of the old key. Where the processes differ, of course, is in how and when this process is triggered. For your DEK you will likely have to count the number of encryptions it is involved in and renew it when it exceeds a threshold, while also scanning for historical records that are in need of re-encryption. Your KEK, on the other hand, will (or should) be held in an HSM or KMS service, which may or may not automatically cycle your KEK. It may be that you need to count your DEKs and request a new KEK on a threshold, or you may need to handle an event message from the HSM/KMS that notifies you when a KEK is being cycled and then update your stored (encrypted) DEK material.
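As a sketch of the DEK side of that trigger, here is an invented DekRotator helper that counts operations against a threshold derived from your cryptoperiod calculation (the names and the threshold are illustrative, not from any library):

```javascript
// Illustrative sketch: hand out the current DEK until its operation
// count hits the threshold, then rotate to a fresh one.
class DekRotator {
  constructor(generateDek, threshold) {
    this.generateDek = generateDek; // e.g. a call out to your KMS
    this.threshold = threshold;
    this.rotate();
  }
  rotate() {
    this.currentDek = this.generateDek();
    this.operations = 0;
  }
  // Returns the DEK to use for the next encryption, rotating if spent.
  nextKey() {
    if (this.operations >= this.threshold) this.rotate();
    this.operations += 1;
    return this.currentDek;
  }
}

let issued = 0;
const rotator = new DekRotator(() => `dek-${++issued}`, 3);
const used = [1, 2, 3, 4].map(() => rotator.nextKey());
console.log(used); // the fourth encryption triggers a new DEK
```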

One useful pattern to aid your future self is to store metadata about the data encryption context alongside your DEKs. Every row of data encrypted by a DEK will of course need a reference to that DEK so that your application knows which DEK to use for decryption. Over time the size and type of the DEK used by your application will likely change to accommodate enhancements in encryption APIs, and along with this you would expect the ciphers used to change as computing power grows. Consider what will happen if you keep your protected data for long periods of time. The longer you keep your data, the more likely you will be to have to update ciphers, such as moving from AES-128 to AES-256 or to a new algorithm altogether. To help deal with this, your application will benefit from having a record of exactly how each piece of data was encrypted. This can be stored alongside your DEK material as metadata and used by the application to make decisions about how to use the encrypted data and when to update it.
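As a sketch, the stored DEK material might carry context like this (the field names and shape here are my own invention rather than any standard):

```javascript
// Hypothetical shape for stored DEK material plus its encryption context.
const dekRecord = {
  keyId: 'dek-2018-0007',
  algorithm: 'AES-256-GCM',    // cipher and mode used for the data
  wrappedBy: 'kek-prod-2018',  // which KEK encrypted this DEK
  createdAt: '2018-03-01T00:00:00Z',
  encryptedKeyMaterial: '<base64 blob>',
};

// At decryption time the application can branch on the recorded
// algorithm, and flag records whose cipher is due for an upgrade.
function needsReencryption(record, approvedAlgorithms) {
  return !approvedAlgorithms.includes(record.algorithm);
}

console.log(needsReencryption(dekRecord, ['AES-256-GCM']));       // false
console.log(needsReencryption(dekRecord, ['ChaCha20-Poly1305'])); // true
```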

Crypto means ‘Cryptography’

Just needed to take this chance to get this point in: ‘Crypto’ means ‘Cryptography’. Anyone who tries to tell you otherwise is a shyster (I think they call them influencers these days) and they’re trying to sell you something I promise you don’t need or want.


Babeling in defence of JavaScript

And so it goes, the eternal question “What is wrong with JavaScript?” and the inevitable, inescapably droll, reply:

Oh, ho ho ha ha haaaaaaaaaaah… The gag never gets less funny. I need to be clear that Scott Hanselman is one of my favourite people in the public eye. I hold him to be an industry treasure and I’m fully aware that he is just poking fun here, but we’ve all seen this dialogue before and we all know it is not always so lighthearted.

At the end of the day, these scenarios showing how ‘broken’ JavaScript is are almost always bizarrely contrived examples that can be easily solved with the immortal words of the great Tommy Cooper:

Patient: “Doctor, it hurts when I do this”
Doctor: “Well, don’t do it”

Powerful Facts

Let’s be absolutely clear: JavaScript is an incredibly powerful language. It is the ubiquitous web programming language. Of course, it currently has a monopoly that ensures this status. That does not change the fact that JavaScript runs on some of the fastest, most powerful and most secure websites. So clearly it does exactly what is needed when in the right hands.

JavaScript is free with a very low barrier to entry – all you need is a web browser.

JavaScript, through its Node.js guise, powers Netflix, LinkedIn, NASA, PayPal… the list goes on and on.

Furthermore it is easy enough to learn and use that it is a firm favourite for beginners learning programming. It is in this last point that we observe some particularly harmful industry attitudes towards JavaScript.

What’s The Damage?

So now that we can all agree that Tommy Cooper has fixed JavaScript from beyond the grave, and now that we’re clear about just how seriously capable JavaScript is as a language, we can get onto the central point: industry attitudes to JavaScript are damaging. While many languages such as SQL and PHP are common targets of derision, and each case has its own unique characteristics and nuances, there is something notably insidious about the way JavaScript is targeted.

One of the more painful examples of JavaScript’s negative press can be observed in the regular reports from those learning programming that they feel mocked for learning JavaScript. This is, quite frankly, appalling. We work in an industry that is suffering from a massive global undersupply of talent and we’re making potential recruits feel like crap. Well done team! Even globally established personalities such as Miguel de Icaza of Xamarin fame can’t help but fan these flames. What chance do new recruits have?

The JavaScript Apocalypse?

Moving on to the issue that prompted me to start writing this article: WebAssembly is here. It has a great website explaining all about it. It even has a logo! It also has a bunch of shiny new features that promise to improve the experience of end users browsing the web.

WebAssembly logo
Of course WebAssembly has a logo!

From distribution, threading and performance improvements to a new common language with expanded data types, WebAssembly offers a bunch of improvements to the web development toolkit. I’m all for these changes! JavaScript and the web programming environment are far from perfect and these are another great step in the right direction.

Of course WebAssembly’s common language also promises to open up the web client for other programming languages. “Hurrah!” I hear many cheer. I’m seeing countless messages of support for the death of JavaScript at the hands of the obviously infinitely superior quality languages of C#, Rust and Java 🙄 Yeah… I’m not so sure…


Like most programming languages, JavaScript is a product of its environment: namely, the web browser. It did have competition in the early days with VBScript back in IE4/5… I think… it was a long time ago. But otherwise it has developed on its own in response to demand from the web developer community and in response to the changing web landscape. The modern incarnations of JavaScript (ECMAScript 6/7/8) are incredibly powerful, including modern language features such as an async programming model, functional capabilities and so on. In many ways modern JavaScript resembles the languages to which it is so frequently compared, but it also lacks many language features that are less relevant to web client programming, such as generics and C#’s LINQ. Its loose typing makes it well suited to working with the HTML DOM. Overall it would appear, as you might expect, that JavaScript is made for web client programming and is in fact the best choice for this task.
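To illustrate, a few lines of the modern, async and functional style that ECMAScript now supports:

```javascript
// Arrow functions, async/await and destructuring: all standard modern JavaScript.
const fetchUser = async (id) => ({ id, name: `user-${id}`, active: id % 2 === 0 });

const countActive = (users) => users.filter(({ active }) => active).length;

(async () => {
  const users = await Promise.all([1, 2, 3].map(fetchUser));
  console.log(`${countActive(users)} active user(s)`); // "1 active user(s)"
})();
```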

Even the WebAssembly project agrees, confirming on the project website that JavaScript will continue to be the ‘special’ focus of attention and you know what? This is a good thing!


Look, we already have other languages that compile for the web client, but I don’t see any existential threat from the (albeit beautiful) CoffeeScript or from the (misguided) TypeScript. Sure, WebAssembly will make this more effective, but the reasons that TypeScript hasn’t already taken over the web development world will still apply to C# and WebAssembly. We have seen a similar battle play out in the database world, where NoSQL was lauded as the slayer of the decrepit 1970s technology we all know as SQL. That was until NoSQL databases started to implement SQL. Turns out that SQL is hard to beat when it comes to querying data, which is unsurprising when you consider its 40-odd years of evolution in that environment, and the same rule will apply to any JavaScript challengers. Personally I suspect a large part of why JavaScript’s alternatives have failed to take hold is that web client programming doesn’t need the added static typing, etc.; in my experience all these challengers do is introduce compiler warnings and complexity that waste time. Ultimately I don’t have all the answers here, but it is fair to say that it would take a serious effort to out-web the language that has evolved for the web environment.

The Tower of Babel (from Wikipedia)

Where my real concern lies is in the well known problems that are brought about by having too much choice when it comes to communicating. We use human readable programming languages so that we can communicate our programs to each other. With that in mind it is clearly more effective in the long run if we all learn to talk the same language. The story of The Tower of Babel shows us that for a long time we have considered too much choice to be a very bad thing when it comes to communication.

It would be a frustrating situation indeed if we were to end up having to consider and manage the overhead of multiple languages for a single task, all because of some daft attitudes towards JavaScript. Furthermore, businesses that are already struggling to find web developers shouldn’t now also have to worry about whether those developers are Rust, Java or C# web developers. JavaScript is the right tool for the job, so let’s stop wasting time with all the JavaScript bashing and get on board with an incredibly powerful language we can all understand!


A functional solution to interfacitis?

noun: interfacitis
inflammation of a codebase, most commonly from overuse of interfaces and other abstractions, but also from… well… actually it’s mostly just interfaces.

An illness of tedium

Over the years my experience has come to show me that unnecessary abstractions cause some of the most significant overheads and inertia in software projects. Specifically, I want to talk about one of the more tedious and time-consuming areas of maintaining abstracted code: the overzealous use of interfaces (C#/Java).

Neither C# nor Java is a particularly terse language. When compared to F# with its Hindley–Milner type inference, working in these high-level OO languages often feels like filling out forms in triplicate. All too often I have experienced the already verbose syntax of these languages amplified by dozens of lengthy interfaces, each only there to repeat the exact signature of its single implementation. I’m sure you’ve all been there. In my experience this is one of the more painful areas of maintenance, causing slowdowns, distraction and lack of focus. And I’ve been thinking for some time now that we’d probably be better off using interfaces (or even thin abstract classes) only when absolutely necessary.

What is necessary?

I like to apply a simple yardstick here: if you have a piece of application functionality that necessitates the ability to call multiple different implementations of a component, then you probably require an interface. This means situations such as plugins or provider-based architectures would use an interface (of course!), but your CustomerRegistrationService that is called only by your CustomerRegistrationController will not. The message is simple: don’t introduce unnecessary bureaucracy for the sake of it.

There are, I admit, cases where you might feel abstraction is required. What about a component that calls out to a third-party system on the network? Surely you want to be able to isolate this behind an interface? And so I put it to you: why do you need an interface here? Why not use a function? After all, C# is now very well equipped with numerous elegant functional features, and many popular DI frameworks support delegate injection. Furthermore, if you are following the SOLID practice of interface segregation then chances are your interface will contain only one or two method definitions anyway.

An example

So, for those times when you absolutely must abstract a single implementation, here is a simple example of an MVC controller using ‘functional’ IoC:

public class RegistrationController : Controller
{
    private readonly Func<string, RegistrationDetails> _registrationDetailsQuery;

    public RegistrationController(Func<string, RegistrationDetails> registrationDetailsQuery)
    {
        _registrationDetailsQuery = registrationDetailsQuery;
    }

    public ActionResult Index()
    {
        var currentRegistration = _registrationDetailsQuery(User.Identity.Name);

        var viewModel = ViewModelMapper.Instance
            .Map<RegistrationDetails, RegistrationDetailsViewModel>(currentRegistration);

        return View(viewModel);
    }
}



13-March-2018: It has been pointed out to me that a further benefit of this approach is that static providers may also supply IoC dependencies whereas instances are required for interface-based IoC. What are your thoughts on this approach?


FileFormatException: Format error in package

OK, so we’re all completely clear on what this error means and what must be done to resolve it, right? I mean, with a meaningful error like that, how can anyone be mistaken? Oh? What’s that? You still don’t know? Let’s be a bit more specific: ‘System.IO.FileFormatException: Format error in package’. Better? Didn’t think so. It’s not an error message, that’s why. I’ll tell you what it is though: it’s stupid, and even more stupid when you find out what causes it.

I came across this delightfully wishy-washy error when configuring an Umbraco 7 deployment pipeline in TeamCity and Octopus Deploy. The Umbraco .csproj MSBuild file referenced a bunch of files, as you might expect, but I also needed to add a .nuspec file which referenced a bunch of other files. Long story short, the error came about because the files specified by the .csproj overlapped with the files specified by the .nuspec file. There were about 1,000-odd generated files that the NuGet packaging components, in their infinite wisdom, added to the .nupkg archive as many times as they were referenced. NuGet was able to do this silly thing without any complaints, and inspecting the confused package in NuGet Package Explorer or 7-Zip or the Windows zip viewer gave no indication of any issues whatsoever. It was not until Octopus called on NuGet to unpack the archive for deployment that we got the above error.

Stupid, right? Stupid!

FYI: I was able to get to the bottom of this issue after 2 freaking days of pain when I eventually used JetBrains dotPeek to debug step-through the NuGet.Core and System.IO.Packaging components to see what on earth was going on. In the end it was this piece of code in System.IO.Packaging.Package that was causing the issue:

public PackagePartCollection GetParts()
{
	PackagePart[] partsCore = this.GetPartsCore();
	Dictionary<PackUriHelper.ValidatedPartUri, PackagePart> dictionary =
		new Dictionary<PackUriHelper.ValidatedPartUri, PackagePart>(partsCore.Length);
	for (int index = 0; index < partsCore.Length; ++index)
	{
		PackUriHelper.ValidatedPartUri uri = (PackUriHelper.ValidatedPartUri) partsCore[index].Uri;
		if (dictionary.ContainsKey(uri))
			throw new FileFormatException(MS.Internal.WindowsBase.SR.Get("BadPackageFormat"));
		dictionary.Add(uri, partsCore[index]);
	}
	// ... remainder of the decompiled method omitted
}

I mean, why would anyone consuming such a core piece of functionality as this API ever want to know anything about the conditions that led to the corruption of a 30MB package containing thousands of files? I mean it’s not like System.IO.Packaging was ever intended to be re-used all across the globe, right?

Anyway, here’s the error log to help others searching for this error.

[14:21:27]Step 1/1: Create Octopus Release
[14:21:27][Step 1/1] Step 1/1: Create Octopus release (OctopusDeploy: Create release)
[14:21:27][Step 1/1] Octopus Deploy
[14:21:27][Octopus Deploy] Running command:   octo.exe create-release --server https://octopus.url --apikey SECRET --project client-co-uk --enableservicemessages --channel Client Release --deployto Client CI --progress --packagesFolder=packagesFolder
[14:21:27][Octopus Deploy] Creating Octopus Deploy release
[14:21:27][Octopus Deploy] Octopus Deploy Command Line Tool, version 3.3.8+Branch.master.Sha.f8a34fc6097785d7d382ddfaa9a7f009f29bc5fb
[14:21:27][Octopus Deploy] 
[14:21:27][Octopus Deploy] Build environment is NoneOrUnknown
[14:21:27][Octopus Deploy] Using package versions from folder: packagesFolder
[14:21:27][Octopus Deploy] Package file: packagesFolder\Client.0.1.0-unstable0047.nupkg
[14:21:28][Octopus Deploy] System.IO.FileFormatException: Format error in package.
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.GetParts()
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.Open(Stream stream, FileMode packageMode, FileAccess packageAccess, Boolean streaming)
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.Open(Stream stream)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.GetManifestStreamFromPackage(Stream packageStream)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.c__DisplayClassa.b__5()
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.EnsureManifest(Func`1 manifestStreamFactory)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage..ctor(String filePath, Boolean enableCaching)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.PackageVersionResolver.AddFolder(String folderPath)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.CreateReleaseCommand.c__DisplayClass1_0.b__5(String v)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.c__DisplayClass15_0.b__0(OptionValueCollection v)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.ActionOption.OnParseComplete(OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.Option.Invoke(OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.ParseValue(String option, OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.Parse(String argument, OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.c__DisplayClass26_0.b__0(String argument)
[14:21:28][Octopus Deploy]    at System.Linq.Enumerable.WhereArrayIterator`1.MoveNext()
[14:21:28][Octopus Deploy]    at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
[14:21:28][Octopus Deploy]    at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.Parse(IEnumerable`1 arguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.Options.Parse(IEnumerable`1 arguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.ApiCommand.Execute(String[] commandLineArguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Program.Main(String[] args)
[14:21:28][Octopus Deploy] Exit code: -3
[14:21:28][Octopus Deploy] Octo.exe exit code: -3
[14:21:28][Step 1/1] Unable to create or deploy release. Please check the build log for details on the error.
[14:21:28][Step 1/1] Step Create Octopus release (OctopusDeploy: Create release) failed

Update – 19th September 2018

I have just this morning helped a colleague through a permutation of this issue. We have recently upgraded TeamCity and it seems this has pushed the issue further down the pipeline. Where the above error would appear in TeamCity during packaging, it seems that the components have been updated to no longer throw the obscure error. My colleague found this issue now manifests when attempting to deploy through Octopus, throwing up the error: “Unable to download package: Item has already been added. Key in dictionary: …”

As before, here is a slightly redacted log to help with searching:

Acquiring packages
Making a list of packages to download
Downloading package CLIENT_NAME.Web version 6.0.0-beta0000 from feed: ''
Unable to download package: 
Item has already been added. Key in dictionary: 'assets/fonts/fsmatthew-light-webfont.svg'  Key being added: 'assets/fonts/fsmatthew-light-webfont.svg'
System.ArgumentException: Item has already been added. Key in dictionary: 'assets/fonts/fsmatthew-light-webfont.svg'  Key being added: 'assets/fonts/fsmatthew-light-webfont.svg'
FileFormatException: Format error in package

Things I wish I knew 10 years ago: Abstractions

We need to talk about abstractions

The main reason I decided to start this blog is that I have begun working for a company that has genuinely challenged many of my assumptions about how software should be developed. I have spent much of my career learning from the more prominent voices in software development about how to write software effectively. I have learned, practiced and preached the tenets of clean code, TDD, layered design and SOLID, to name a few of the better-known programming practices, and had always believed that I was on a true path to robust, maintainable software. Now I find myself in a position where, over the space of just one year, I have already questioned many of the practices I had learned and taught in the preceding decade.

I hope to share on this blog much of what I have discovered of late but for my first entry discussing programming practices I want to talk about abstractions. In particular I want to call into question what I have come to understand as overuse of abstractions – hiding implementations away in layers/packages, behind interfaces, using IoC and dependency inversion – as often encountered in the C#/.NET and Java world.


I have been wondering lately if I have simply spent years misunderstanding and misapplying abstractions, but I have seen enough code written by others in books, tutorials, blogs, sample code and more diagrams than I can bear to know that I have not been alone in my practices. Furthermore, I have found myself on a few occasions of late in discussions with developers of similar experience who have come to share the same misgivings about abstractions.

The all too familiar layer diagram. © Microsoft.
A typical layering structure

So what do I mean by abstractions and what is the point of them, really? The old premise, and the one that I would always reiterate, is that abstractions help enforce separation of concerns (SoC) by isolating implementation details from calling code. The reasoning is that code of one concern should be able to change without affecting the code dealing with other concerns, supposedly because code dealing with one concern will change for different reasons and at different times than code dealing with other concerns. Of course we mustn’t forget that one of the more natural causes of abstractions is the isolation of logic to enable Unit Testing. Ultimately the result is software written in such a way that code dealing with different concerns is kept separate by abstractions such as interfaces and layers, while making use of IoC and Dependency Injection to wire the abstractions together. It is also worth stating that the separate ‘concerns’ touted by such advocacy frequently include Presentation/UI, Service/Application Logic, Business Logic, Data Access Logic, Security, Logging, etc.

public class StudentController : Controller
{
    private readonly IStudentRepository _repository;
    private readonly IStudentService _service;
    private readonly IUnitOfWork _unitOfWork;

    public StudentController(
        IStudentRepository repository,
        IStudentService service,
        IUnitOfWork unitOfWork)
    {
        _repository = repository;
        _service = service;
        _unitOfWork = unitOfWork;
    }

    public ActionResult UpdateStudentDetails(StudentDetailsViewModel model)
    {
        if (ModelState.IsValid)
        {
            var student = _repository.Get(model.StudentId);

            student.Forename = model.Forename;
            student.Surname = model.Surname;
            student.Urn = model.Urn;

            _unitOfWork.Commit();

            _service.SendStudentDetailsConfirmationEmail(student);
        }

        return View(model);
    }
}

Abstracted code, obscurity through indirection.


I am not about to start claiming that everything should just be thrown together in one Big Ball of Mud. I still feel that SoC certainly is worth following but it can be effectively achieved by applying simple encapsulation, such as putting more repetitive and complex logic of one concern within its own class so that it may be repeatedly invoked by code dealing with other concerns. An example of this would be the code to take an entity key, fetch and materialize the correlating entity from a data store and return it to the caller. This would be well served in a method of a repository class that can be called by code that simply needs the entity. Of course packages/libraries also have their place, in sharing logic across multiple applications or solutions.
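To make that concrete, here is a minimal sketch of that kind of simple encapsulation. The names and the in-memory dictionary standing in for a data store are purely illustrative, not from any real codebase; the point is that fetch-by-key logic lives in a plain class that callers use directly, with no interface or IoC wiring in sight.

```csharp
using System;
using System.Collections.Generic;

// Illustrative entity type; stands in for a real domain entity.
public class Student
{
    public int Id { get; set; }
    public string Forename { get; set; }
    public string Surname { get; set; }
}

// A plain class encapsulating the repetitive fetch logic of one concern so it
// can be repeatedly invoked by code dealing with other concerns.
public class StudentRepository
{
    // In-memory store used here purely for illustration in place of a database.
    private readonly IDictionary<int, Student> _store;

    public StudentRepository(IDictionary<int, Student> store)
    {
        _store = store;
    }

    // Take an entity key, fetch the correlating entity and return it to the caller.
    public Student Get(int studentId)
    {
        if (!_store.TryGetValue(studentId, out var student))
            throw new KeyNotFoundException($"No student with id {studentId}");
        return student;
    }
}
```

Code that simply needs the entity can new this up (or receive it via a constructor) and call `Get` directly; there is no `IStudentRepository` interface, separate assembly or container registration to dig through.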

Where I see problems starting to arise is when, for example, the aforementioned repository is hidden behind an interface, likely in a separate layer/package/library and dynamically loaded by an IoC infrastructure at runtime. Let’s not pull any punches here, this practice is hiding significant swathes of software behind a dynamic infrastructure which is only resolved at runtime. With the exception of some very specific cases, I see this practice as overused, unnecessarily complex and lacking in the obvious transparency that code must feature to be truly maintainable. The problem is further compounded by the common definition of the separate concerns and layers themselves. Experience has shown me that when coming to maintain an application that makes use of all of these practices you end up with a voice screaming in your head “Get the hell out of my way!”. The abstractions don’t seem to help like they promise and all of their complexity just creates so much overhead that slows down debugging and impedes changes of any significant proportion.

With one exception I have never spoken to anyone who has ever had to swap out an entire layer (i.e. UI, Services, Logic, Data Access, etc.) of their software. I’ve personally been involved in one project where it was required but it was a likely eventuality right from the start and so we were prepared for it. I have rarely seen an example of an implementation of an abstraction being swapped or otherwise significantly altered that did not affect its dependents, regardless of the abstraction. Whenever I have seen large changes made to software it very rarely involves ripping out an entire horizontal layer, tier or storage mechanism. Rather it will frequently involve ripping out or refactoring right across all layers affecting in one change the storage tables, the objects and logic that rely on those tables and the UI or API that relies on those objects and logic. More often than not large changes are made to a single business feature across the entire vertical stack, not a single conceptual technical layer and so it stands to reason that should anything need separating to minimise the impact of changes it should be the features not the technical concerns.

Invest in reality

So my main lesson here is this: the reality of enforcing abstractions through layering and IoC is very different from the theory and usually is not worth it, certainly when used to separate the typical software layers. With the exception of cases such as a component/plug-in design, I am now completely convinced that the likelihood of layered abstractions and IoC ever paying off is so small that it just isn’t worth the effect these abstractions have on the immediate maintainability of code. In my experience it makes sense not to focus on abstracting code into horizontal layers and wiring it all up with IoC, but to put that focus into building features in vertical slices, with each slice organised into namespaces/folders within the same project (think MVC Areas and, to a lesser extent, the DDD Bounded Context). Spend the effort saved by this simplification on keeping the code within the slices clear, cohesive and transparent so that it is easy for someone else to come along, understand and debug. I’d even go so far as to try to keep these slices loosely dependent on each other – but not to the point that you make the code less readable, i.e. don’t just switch hard abstractions of layers into hard abstractions of slices. I don’t want to offend anyone, I’m just putting my experience out there… why not give this a try… I promise you probably won’t die.

Vertical slices with MVC Areas
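For a rough idea of what that slice-per-feature organisation looks like on disk, here is a hypothetical layout (folder and file names are purely illustrative):

```
MyApp/
  Areas/
    Students/
      Controllers/StudentsController.cs
      Models/StudentDetailsViewModel.cs
      Views/Students/UpdateStudentDetails.cshtml
    Enrolments/
      Controllers/EnrolmentsController.cs
      Models/EnrolmentViewModel.cs
      Views/Enrolments/Index.cshtml
```

Everything a feature needs lives within its own slice, rather than being smeared across UI, Service, Logic and Data Access projects.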

Take a look at the following updated controller action. You know almost exactly what it is doing just by looking at this one method. It contains ALL of the logic that is executed by the action, so anyone first approaching this code can be confident in their understanding of the logic without having to dig through class libraries and IoC configuration. Any changes made to the action would simply be made here and in the DB project – so much more maintainable! Being completely honest, even recently, seeing code written like this would rub me up the wrong way, so I understand if it gets some others on edge, but I’ve come full circle now and am pretty convinced of the simplified approach. And it’s this dichotomy I’d like to discuss.

public class StudentsController : Controller
{
    public ActionResult UpdateStudentDetails(StudentDetailsViewModel model)
    {
        if (ModelState.IsValid)
        {
            using (var context = new StudentsContext())
            {
                var student = context.Students.Single(s => s.Id == model.StudentId);

                student.Forename = model.Forename;
                student.Surname = model.Surname;
                student.Urn = model.Urn;

                context.SaveChanges();

                SendStudentDetailsConfirmationEmail(student);
            }
        }

        return View(model);
    }

    private void SendStudentDetailsConfirmationEmail(Student student)
    {
        // ...
    }
}

Transparent, maintainable, intention-revealing code and no need for IoC!

This is just an opening

So this has been my first attempt to open up some conversation around the use of abstractions in software. I’ve tried to keep it brief and in doing so I’ve only just scratched the surface of what I have learned and what I have to share. There is still so much more for me to cover regarding what I and others I know in the community have been experiencing in recent years: Should we abstract anything at all? What is maintainable if not SoC via IoC? How do we handle external systems integration? What about handling different clients sharing logic and data (UI, API, etc.)? How does this impact self/unit-testing code? When should we go the whole hog and abstract into physical tiers? I could go on… So I intend to write further on this subject in the coming weeks and in the meantime it would be great to hear if anyone has any thoughts on this, good or bad! So drop me a line and keep checking back for further posts.
