Babeling in defence of JavaScript

And so it goes, the eternal question “What is wrong with JavaScript?” and the inevitable, inescapably droll, reply:

Oh, ho ho ha ha haaaaaaaaaaah… The gag never gets less funny. I need to be clear that Scott Hanselman is one of my favourite people in the public eye. I hold him to be an industry treasure and I'm fully aware that he's just poking fun here, but we've all seen this dialogue before and we all know it is not always so lighthearted.

At the end of the day, these scenarios showing how ‘broken’ JavaScript is are almost always bizarrely contrived examples that can be easily solved with the immortal words of the great Tommy Cooper:

Patient: “Doctor, it hurts when I do this”
Doctor: “Well, don’t do it”

Powerful Facts

Let's be absolutely clear that JavaScript is an incredibly powerful language. It is the ubiquitous web programming language. Of course, it currently enjoys a monopoly that ensures this status. That aside, it remains a fact that JavaScript runs on the fastest, most powerful and most secure websites. So clearly it does exactly what is needed when in the right hands.

JavaScript is free with a very low barrier to entry – all you need is a web browser.

JavaScript, in its Node.js guise, powers Netflix, LinkedIn, NASA, PayPal… The list goes on and on.

Furthermore it is easy enough to learn and use that it is a firm favourite for beginners learning programming. It is in this last point that we observe some particularly harmful industry attitudes towards JavaScript.

What’s The Damage?

So now that we can all agree that Tommy Cooper has fixed JavaScript from the grave, and now that we're clear about just how seriously capable JavaScript is as a language, we can get onto the central point: industry attitudes to JavaScript are damaging. While many languages, such as SQL and PHP, are common targets of derision, and each case has its own unique characteristics and nuances, there is something notably insidious about the way JavaScript is targeted.

One of the more painful examples of JavaScript’s negative press can be observed in the regular reports from those learning programming that they feel mocked for learning JavaScript. This is, quite frankly, appalling. We work in an industry that is suffering from a massive global undersupply of talent and we’re making potential recruits feel like crap. Well done team! Even globally established personalities such as Miguel de Icaza of Xamarin fame can’t help but fan these flames. What chance do new recruits have?

The JavaScript Apocalypse?

Moving on to the issue that prompted me to start writing this article: WebAssembly is here. It has a great website explaining all about it: webassembly.org. It even has a logo! It also has a bunch of shiny new features that promise to improve the experience of end users browsing the web.

WebAssembly logo
Of course WebAssembly has a logo!

From distribution, threading and performance improvements to a new common language with expanded data types, WebAssembly offers a bunch of improvements to the web development toolkit. I’m all for these changes! JavaScript and the web programming environment are far from perfect and these are another great step in the right direction.

Of course WebAssembly’s common language also promises to open up the web client for other programming languages. “Hurrah!” I hear many cheer. I’m seeing countless messages of support for the death of JavaScript at the hands of the obviously infinitely superior quality languages of C#, Rust and Java 🙄 Yeah… I’m not so sure…

Nah!

Like most programming languages, JavaScript is a product of its environment: namely, the web browser. It did have competition in the early days with VBScript back in IE4/5… I think… It was a long time ago. But otherwise it has developed on its own in response to demand from the web developer community and in response to the changing web landscape. The modern incarnations of JavaScript (ECMAScript 6/7/8) are incredibly powerful, including modern language features such as an async programming model, functional capabilities and so on. In many ways modern JavaScript resembles the languages to which it is so frequently compared, but it also lacks many language features that are less relevant to web client programming, such as generics and C#'s LINQ. Its loose typing system makes it well suited to working with the HTML DOM. Overall it would appear, as you might expect, that JavaScript is made for web client programming and is in fact the best choice for this task.

Even the WebAssembly project agrees, confirming on the project website that JavaScript will continue to be the ‘special’ focus of attention and you know what? This is a good thing!

Babel

Look, we already have other languages that compile to JavaScript for the web client, but I don't see any existential threat from the (albeit beautiful) CoffeeScript or from the (misguided) TypeScript. Sure, WebAssembly will make this more effective, but the reasons that TypeScript hasn't already taken over the web development world will still apply to C# and WebAssembly. We have seen a similar battle play out in the database world, where NoSQL was lauded as the slayer of the decrepit 1970s technology we all know as SQL. That was until NoSQL databases started to implement SQL. It turns out that SQL is hard to beat when it comes to querying data, which is unsurprising when you consider its 40-odd years of evolution in that environment. I suspect a large part of why JavaScript alternatives such as TypeScript have failed to take hold is that web client programming doesn't need static typing; in my experience all it does is introduce compiler warnings and complexity that waste time. Ultimately I don't know the answers here. At any rate, it would take a serious effort to out-web the language that has evolved for the web environment.

The Tower of Babel (from Wikipedia)

But where my real concern lies is in the well-known problems brought about by having too much choice when it comes to communicating. We use human-readable programming languages so that we can communicate our programs to each other. With that in mind, it is clearly more effective in the long run if we all learn to speak the same language. The story of the Tower of Babel shows us that, for a long time, we have considered too much choice to be a very bad thing when it comes to communication.

It would be a frustrating situation indeed if we were to end up having to consider and manage the overhead of multiple languages for a single task, all because of some daft attitudes towards JavaScript. Furthermore, businesses that are already struggling to find web developers shouldn't now also have to worry about whether those developers are Rust, Java or C# web developers. JavaScript is the right tool for the job, so let's stop wasting time with all the JavaScript bashing and get on board with an incredibly powerful language we can all understand!


All you (probably) need to know about Microservices

So I've finally succumbed to writing about the topic with the never-ending hype cycle: microservice architecture.

Where to begin? Well, a distributed microservice architecture is extremely complicated to build, maintain and debug. It's something born of very specific organisational constraints. If you really need to go down this path then you're probably working at an organisation similar to Netflix or McDonald's, and you really wouldn't be here scouring for info.

Fin.

P.S. There may of course be academic reasons for learning about this architecture. If that's the case, I promise you'll get mega marks for focusing your essay on the 'why' rather than the 'how', and I highly recommend this: https://martinfowler.com/articles/microservice-trade-offs.html


A functional solution to interfacitis?

/ˈɪntəfeɪsʌɪtəs/
noun
noun: interfacitis
inflammation of a software project, most commonly from overuse of interfaces and other abstractions but also from… well… actually it's mostly just interfaces.

An illness of tedium

Over the years, experience has shown me that unnecessary abstractions cause some of the most significant overheads and inertia in software projects. Specifically, I want to talk about one of the more tedious and time-consuming aspects of maintaining abstracted code: the overzealous use of interfaces (C#/Java).

Neither C# nor Java is a particularly terse language. When compared to F# with its Hindley-Milner type inference, working in these high-level OO languages often feels like filling out forms in triplicate. All too often I have experienced the already verbose syntax of these languages amplified by dozens of lengthy interfaces, each only there to repeat the exact signature of its single implementation. I'm sure you've all been there. In my experience this is one of the more painful areas of maintenance, causing slowdowns, distraction and lack of focus. And I've been thinking for some time now that we'd probably be better off using interfaces (or even thin abstract classes) only when absolutely necessary.

What is necessary?

I like to apply a simple yardstick here: if you have a piece of application functionality that necessitates the ability to call multiple different implementations of a component, then you probably require an interface. This means situations such as plugins or provider-based architectures would use an interface (of course!) but your CustomerRegistrationService that is called only by your CustomerRegistrationController will not. The message is simple: don't introduce unnecessary bureaucracy for the sake of it.
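
To make the yardstick concrete, here is a minimal, hypothetical sketch (INotificationChannel, EmailChannel and SmsChannel are invented purely for illustration): the interface earns its keep because the application genuinely dispatches to multiple implementations, while the single-implementation registration service stays a plain class.

using System;
using System.Collections.Generic;

// Multiple implementations genuinely called by the application: an interface is justified.
public interface INotificationChannel
{
    void Send(string recipient, string message);
}

public class EmailChannel : INotificationChannel
{
    public void Send(string recipient, string message) =>
        Console.WriteLine($"Email to {recipient}: {message}");
}

public class SmsChannel : INotificationChannel
{
    public void Send(string recipient, string message) =>
        Console.WriteLine($"SMS to {recipient}: {message}");
}

// Only one implementation will ever exist, so no interface: just the class itself.
public class CustomerRegistrationService
{
    private readonly IEnumerable<INotificationChannel> _channels;

    public CustomerRegistrationService(IEnumerable<INotificationChannel> channels)
    {
        _channels = channels;
    }

    public void Register(string emailAddress)
    {
        // ... persist the registration, then notify over every configured channel.
        foreach (var channel in _channels)
            channel.Send(emailAddress, "Welcome aboard!");
    }
}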

There are, I admit, cases where you might feel abstraction is required. What about a component that calls out to a third-party system on the network? Surely you want to be able to isolate this behind an interface? And so I put it to you: why do you need an interface here? Why not use a function? After all, C# is now very well equipped with numerous, elegant functional features, and many popular DI frameworks support delegate injection (a registration sketch follows the example below). Furthermore, if you are following the SOLID practice of interface segregation then chances are your interface will contain only one or two method definitions anyway.

An example

So, for those times when you absolutely must abstract a single implementation, here is a simple example of an MVC controller using ‘functional’ IoC:

public class RegistrationController : Controller
{
    // The injected 'abstraction' is just a delegate: give it a user name, get back their registration details.
    private readonly Func<string, RegistrationDetails> _registrationDetailsQuery;

    public RegistrationController(Func<string, RegistrationDetails> registrationDetailsQuery)
    {
        _registrationDetailsQuery = registrationDetailsQuery;
    }

    public ActionResult Index()
    {
        // Invoke the delegate exactly as you would have called an interface method.
        var currentRegistration = _registrationDetailsQuery(User.Identity.Name);

        var viewModel = ViewModelMapper.Instance
            .Map<RegistrationDetails, RegistrationDetailsViewModel>(currentRegistration);

        return View(viewModel);
    }
}
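
On the wiring side, many containers let you register the delegate directly. Here is a minimal sketch, assuming Microsoft.Extensions.DependencyInjection as the container; RegistrationRepository and its GetRegistrationDetails method are invented for illustration and stand in for whatever actually fetches the data.

using System;
using Microsoft.Extensions.DependencyInjection;

public static class CompositionRoot
{
    public static IServiceProvider Build()
    {
        var services = new ServiceCollection();
        services.AddScoped<RegistrationRepository>();

        // Register the delegate itself as the injectable dependency.
        services.AddScoped<Func<string, RegistrationDetails>>(provider =>
        {
            var repository = provider.GetRequiredService<RegistrationRepository>();
            return userName => repository.GetRegistrationDetails(userName);
        });

        return services.BuildServiceProvider();
    }
}

The controller above then receives its ready-made Func<string, RegistrationDetails> with no interface in sight.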

 

Addendum

13-March-2018: It has been pointed out to me that a further benefit of this approach is that static providers may also supply IoC dependencies whereas instances are required for interface-based IoC. What are your thoughts on this approach?


Demystifying AI – The AI explosion

This is an article I had originally written as part of a stream of work that has now been put on hold indefinitely. I thought it a shame for it to languish in OneNote.

What’s with all this attention to Artificial Intelligence then?

Well that is a very good question. To be perfectly frank, not that much has changed of late in the world of Artificial Intelligence (AI) as a whole that should justify all the current excitement. That’s not to say that there isn’t cool stuff going on; there really is great progress being made… in the world of Machine Learning. And if we are to begin the process of ‘Demystifying AI’ then this is a very good place to start.

 

AI is a very broad area of technology encompassing research from robotics to computing emotions (affective computing) and everything in between, including Machine Learning or ‘ML’. As alluded to just a moment ago it is within ML specifically that we are seeing the greatest progress. Think of a modern ‘AI’ technology that is gaining a lot of attention and you can place a safe bet on it specifically using ML techniques: Natural Language Processing? That’s ML. Image classification? That’s ML. Sentiment Analysis? Also ML. The recent news of Go players being defeated by a computer? You guessed it… ML.

 

What is ML?

ML is an approach to analysing data that is based on training statistical models to predict outcomes. You may well have come across statistical (or linear) regression back in your school days; this is possibly the best-known example from the range of techniques that make up the world of ML. To put it simply, an ML model learns from past data to make better decisions in the future.
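
To keep things concrete (and since the code elsewhere on this blog is C#), here is a minimal sketch of that idea: a one-variable linear regression that 'learns' a slope and intercept from past observations and then predicts a value for an unseen input. The numbers are made up purely for illustration.

using System;
using System.Linq;

public static class TinyRegression
{
    public static void Main()
    {
        // Past observations: e.g. hours of sunshine (x) vs. ice creams sold (y).
        double[] x = { 1, 2, 3, 4, 5 };
        double[] y = { 12, 19, 31, 42, 53 };

        // Ordinary least squares: slope = cov(x, y) / var(x), intercept = meanY - slope * meanX.
        double meanX = x.Average();
        double meanY = y.Average();
        double slope = x.Zip(y, (xi, yi) => (xi - meanX) * (yi - meanY)).Sum()
                     / x.Sum(xi => (xi - meanX) * (xi - meanX));
        double intercept = meanY - slope * meanX;

        // 'Prediction': apply what was learned from past data to a new input.
        double prediction = intercept + slope * 6;
        Console.WriteLine($"Predicted sales for 6 hours of sunshine: {prediction:F1}");
    }
}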

[Diagram: Deep Learning sits within Machine Learning, which in turn sits within AI]

Now it's time to introduce what is arguably the beating heart of the AI frenzy: Deep Learning. While there is no trendy acronym for Deep Learning, it is fair to say that the term has become a bit of a buzz-word itself. Deep Learning takes its name from the concept of Deep Neural Networks (or DNNs, there's your acronym!). The useful details of what DNNs are and how they function cannot be easily summarised; suffice it to say that DNNs are an ML technology that borrows heavily from the structure of the brain, hence the 'neural' part of the name. [N.B. These details are already planned for a follow-up piece.] To recap: Deep Learning is a subset of ML, which is in turn a subset of AI, and it is Deep Learning that drives the current hype.

 

What is Deep Learning?

Say you wanted to build some software to identify objects in an image; the usual non-Deep-Learning approach to this would involve manually writing rules into the software to recognise the details you're looking for. If you wanted to identify whether a picture was of a bird or a cat, you would manually write rules to identify features such as whiskers or ears or wings and so on. This is complicated, time-consuming and error-prone. Deep Learning takes a different approach. Instead, you would create a Deep Learning model and then supply it with a bunch of pictures. For each picture you supply, you would tell the model whether it was of a bird or a cat. As you supply each image and its corresponding label, the model learns. Once enough data has been supplied, you can then supply an image without a label and the model will give an accurate indication of whether it is a bird or a cat.
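
As a very rough sketch of that learn-from-labelled-examples loop (emphatically not a deep network, just a single artificial neuron trained on made-up numeric features standing in for pixels):

using System;

public static class BirdOrCat
{
    public static void Main()
    {
        // Made-up features standing in for an image: (wingSpan, whiskerLength).
        // Label: 1 = bird, 0 = cat.
        (double[] features, int label)[] examples =
        {
            (new[] { 0.9, 0.1 }, 1),
            (new[] { 0.8, 0.2 }, 1),
            (new[] { 0.1, 0.9 }, 0),
            (new[] { 0.2, 0.8 }, 0),
        };

        var weights = new double[2];
        double bias = 0, learningRate = 0.1;

        // Training: nudge the weights whenever the current guess disagrees with the label.
        for (int epoch = 0; epoch < 100; epoch++)
        {
            foreach (var (features, label) in examples)
            {
                int guess = Predict(features, weights, bias);
                int error = label - guess;
                for (int i = 0; i < weights.Length; i++)
                    weights[i] += learningRate * error * features[i];
                bias += learningRate * error;
            }
        }

        // Prediction on an example the model has never seen.
        var unknown = new[] { 0.85, 0.15 };
        Console.WriteLine(Predict(unknown, weights, bias) == 1 ? "bird" : "cat");
    }

    private static int Predict(double[] features, double[] weights, double bias)
    {
        double sum = bias;
        for (int i = 0; i < features.Length; i++)
            sum += weights[i] * features[i];
        return sum >= 0 ? 1 : 0;
    }
}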

 

So what is the ‘explosion’ all about?

Continuing the bird/cat model example, the more labelled example pictures you supply to the model, the better the results will be. This seems simple and even somewhat obvious, but it strikes at the heart of the current 'AI boom'. Deep Learning has been around for a while now, evolving over a period of 30 years more or less, and one of the key reasons it has never been as commercially successful as it is now is that there just hasn't been enough readily available data. To give you some idea of why this has been an issue, if you want to get to a high level of accuracy when classifying complex pictures then you're going to need thousands or even millions of examples, depending on the complexity. Well, we now have data, lots and lots and lots of it, and it has never been easier to get our hands on it. Do a quick Google image search for 'Cat'; there is a rough cut of half your 'training' set (*ahem* copyright issues aside) and I'm sure you can figure out how to get the other half.

 

So we have data, but that isn’t all we need. The other side of the current explosion is raw computing power. Building a statistical model that can accurately identify cats and birds in pictures is very heavy work for a computer but thankfully with the advent of cloud-scale computing resources, available computing power is now big enough and cheap enough to make running this sort of model both practical and cost-effective. It’s cheap enough that Google can even give this stuff away as an educational toy (https://teachablemachine.withgoogle.com/).

 

So it's all about pictures of cats and birds?

Beyond the abundance of data and computing power, probably the most significant factor in the commercial success of Deep Learning is its versatility. This is especially true when considering the success of Deep Learning against other ML techniques which have not gained the same level of attention. If you have enough data, regardless of its form, Deep Learning can be trained to extract knowledge from it. This has sent businesses, scientists and engineers into a global flurry of R&D to find all the amazing ways in which this technology can enhance our lives.

For years now the financial services industry has been at the forefront of applying ML techniques to everything from fraud prevention and risk management to investments and savings predictions; there are few – if any – areas of the industry that have yet to see the benefits of AI.

Manufacturing is seeing growing uptake in the application of ML to improve efficiency through waste reduction and better predictive analysis of production demands and infrastructure maintenance.

More recently, utilities are beginning to get into the ML game, with the UK National Grid striking up discussions with Google to investigate applying the famous DeepMind AI to maximise National Grid's use of renewables and to more efficiently balance supply and demand across its nationwide infrastructure.

Across all sectors, businesses now find themselves in a position to use ML to better understand and engage with their customers. From utilities gaining greater knowledge of their customers' consumption habits through to retailers and service providers more effectively capturing sales conversion opportunities, the possibilities are as varied as your data.

 

Would you like some knowledge with that?

So that concludes this effort to clear away some of the fog and hyperbole from the current AI phenomenon (ahem! It’s all ML, remember!?). In a nutshell, if you have a ton of data and you need to get knowledge from it then Deep Learning could well be your go-to tool.


FileFormatException: Format error in package

OK, so we're all completely clear on what this error means and what must be done to resolve it, right? I mean, with a meaningful error like that, how can anyone be mistaken? Oh? What's that? You still don't know? Let's be a bit more specific: System.IO.FileFormatException: Format error in package. Better? Didn't think so. It's not a real error message, that's why. I'll tell you what it is though: it's stupid, and even more stupid when you find out what causes it.

I came across this delightfully wishy-washy error when configuring an Umbraco 7 deployment pipeline in TeamCity and Octopus Deploy. The Umbraco .csproj MSBuild file referenced a bunch of files, as you might expect, but I also needed to add a .nuspec file which referenced a bunch of other files. Long story short, the error came about because the files specified by the .csproj overlapped with the files specified by the .nuspec file. There were 1000-odd generated files that the NuGet packaging components, in their infinite wisdom, added to the .nupkg archive as many times as they were referenced. NuGet was able to do this silly thing without any complaints, and inspecting the confused package in NuGet Package Explorer or 7-Zip or Windows' built-in zip viewer gave no indication of any issues whatsoever. It was not until Octopus called on NuGet to unpack the archive for deployment that we got the above error.
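
To illustrate the kind of overlap involved (the id, version and glob below are invented, not the actual project files): the .csproj already includes the generated files as content items, and a .nuspec <files> section like this one tells NuGet to pack the same files a second time.

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Client</id>
    <version>0.1.0</version>
    <authors>example</authors>
    <description>Umbraco site package (illustrative only).</description>
  </metadata>
  <files>
    <!-- Hypothetical glob: these generated files are already packed via the
         .csproj content items, so NuGet happily adds them all again. -->
    <file src="Umbraco\**\*.*" target="Umbraco" />
  </files>
</package>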

Stupid, right? Stupid!

FYI: I was able to get to the bottom of this issue, after two freaking days of pain, when I eventually used JetBrains dotPeek to step through the NuGet.Core and System.IO.Packaging components to see what on earth was going on. In the end it was this piece of code in System.IO.Packaging.Package that was causing the issue:

public PackagePartCollection GetParts()
{
...
    PackagePart[] partsCore = this.GetPartsCore();
    var dictionary = new Dictionary<PackUriHelper.ValidatedPartUri, PackagePart>(partsCore.Length);
    for (int index = 0; index < partsCore.Length; ++index)
    {
        PackUriHelper.ValidatedPartUri uri = (PackUriHelper.ValidatedPartUri) partsCore[index].Uri;

        // Two parts sharing the same URI (i.e. the same file packed more than once)
        // trip this check, and all the caller gets is the generic "BadPackageFormat" message.
        if (dictionary.ContainsKey(uri))
            throw new FileFormatException(MS.Internal.WindowsBase.SR.Get("BadPackageFormat"));

        dictionary.Add(uri, partsCore[index]);
        ...
    }
...
}

I mean, why would anyone consuming such a core piece of functionality as this API ever want to know anything about the conditions that led to the corruption of a 30MB package containing thousands of files? I mean it’s not like System.IO.Packaging was ever intended to be re-used all across the globe, right?

Anyway, here's the error log to help others searching for this error.

[14:21:27]Step 1/1: Create Octopus Release
[14:21:27][Step 1/1] Step 1/1: Create Octopus release (OctopusDeploy: Create release)
[14:21:27][Step 1/1] Octopus Deploy
[14:21:27][Octopus Deploy] Running command:   octo.exe create-release --server https://octopus.url --apikey SECRET --project client-co-uk --enableservicemessages --channel Client Release --deployto Client CI --progress --packagesFolder=packagesFolder
[14:21:27][Octopus Deploy] Creating Octopus Deploy release
[14:21:27][Octopus Deploy] Octopus Deploy Command Line Tool, version 3.3.8+Branch.master.Sha.f8a34fc6097785d7d382ddfaa9a7f009f29bc5fb
[14:21:27][Octopus Deploy] 
[14:21:27][Octopus Deploy] Build environment is NoneOrUnknown
[14:21:27][Octopus Deploy] Using package versions from folder: packagesFolder
[14:21:27][Octopus Deploy] Package file: packagesFolder\Client.0.1.0-unstable0047.nupkg
[14:21:28][Octopus Deploy] System.IO.FileFormatException: Format error in package.
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.GetParts()
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.Open(Stream stream, FileMode packageMode, FileAccess packageAccess, Boolean streaming)
[14:21:28][Octopus Deploy]    at System.IO.Packaging.Package.Open(Stream stream)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.GetManifestStreamFromPackage(Stream packageStream)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.c__DisplayClassa.b__5()
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage.EnsureManifest(Func`1 manifestStreamFactory)
[14:21:28][Octopus Deploy]    at NuGet.ZipPackage..ctor(String filePath, Boolean enableCaching)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.PackageVersionResolver.AddFolder(String folderPath)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.CreateReleaseCommand.c__DisplayClass1_0.b__5(String v)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.c__DisplayClass15_0.b__0(OptionValueCollection v)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.ActionOption.OnParseComplete(OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.Option.Invoke(OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.ParseValue(String option, OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.Parse(String argument, OptionContext c)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.c__DisplayClass26_0.b__0(String argument)
[14:21:28][Octopus Deploy]    at System.Linq.Enumerable.WhereArrayIterator`1.MoveNext()
[14:21:28][Octopus Deploy]    at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
[14:21:28][Octopus Deploy]    at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.OptionSet.Parse(IEnumerable`1 arguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.Options.Parse(IEnumerable`1 arguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Commands.ApiCommand.Execute(String[] commandLineArguments)
[14:21:28][Octopus Deploy]    at Octopus.Cli.Program.Main(String[] args)
[14:21:28][Octopus Deploy] Exit code: -3
[14:21:28][Octopus Deploy] Octo.exe exit code: -3
[14:21:28][Step 1/1] Unable to create or deploy release. Please check the build log for details on the error.
[14:21:28][Step 1/1] Step Create Octopus release (OctopusDeploy: Create release) failed

Crash debugging Windows 10 Mobile UWP apps

So your app is crashing

This post explains how to get the details of the root managed .NET exception of a crash in a Windows 10 UWP app, specifically on the Windows 10 Mobile ARM platform. Hopefully this post will save you from some of the pain that I endured and aid you in getting to the bottom of your crashing app. Also note that with some minor differences – that I shall include as best I can in this article – this should in theory also work for debugging any UWP store apps on the x86 and x64 Windows 10 platforms although I have not tested this.

I’ll not detail the complete end-to-end process here as it is varied and lengthy and the core of the process is covered in excellent detail in a two-part series of posts by Andrew Richards of the Microsoft NTDebugging blog: ‘Debugging a Windows 8.1 Store App Crash Dump‘ Part 1 and Part 2.

The issue I found with following that series of posts alone is that they are missing some key information if you are working on the Windows 10 UWP platform. No surprise when you consider that they were intended for the Windows 8.1 Store platform. But they are full of essential details for the similar parts of the process on Windows 10 UWP, and they got me so close!

In this post I will detail the information that is not already available in the above posts and how it fits into the overall process of debugging crash dumps from UWP apps running on the Windows 10 Mobile platform.

Enable and collect crash dumps

First off, make sure that Windows 10 Mobile will collect crash dumps by heading to Settings -> Update & Security -> For developers and ensuring that the value of the setting labelled 'Save this many crash dumps' is greater than 0. I'd recommend at least 3.

Now reproduce the crash a couple of times to generate the crash dumps. The dump files should then be available under your device's \Documents\Debug directory on the device storage. Note that it can take a few minutes to completely save the dump files; if you see any files here named 'SOMETHING.part' then the dumps are still being saved, so come back in a minute or two. Move the dump files onto the machine where the debugging will take place.

Now on to the experts

Now I'll pass you over to the aforementioned articles, which explain how to fire up the dump files in WinDbg. Just as a heads-up, at the time of writing the latest Windows 10 Debugging Tools (WinDbg) are available from here.

If your crashes are indeed caused by managed code that you have written, generated or otherwise included then you will inevitably end up being directed to use SOS to elicit the details of the exception that is being thrown. This is where things got tricky for me and if you do get to this point then return here and read on…

Filling in the gaps

Now that you may have tried invoking, loading and even locating SOS and the CLR or DAC modules, I can tell you that these components are not where, or even what, the article describes. First of all, I spent some time trying to confirm that the CLR or DAC was loaded as it should be according to most sources on this subject. Eventually, after much trial and error, I tried issuing a reload command to ensure the correct core framework was loaded. This is done with the following command (see documentation here). Also note that this step might not be necessary for you.

.cordll -ve -u -l

Which, for me, results in the following output:

CLRDLL: Unable to find 'mrt100dac.dll' on the path
Automatically loaded SOS Extension
CLRDLL: Loaded DLL c:\symbols\mrt100dac_winarm_x86.dll\561408BF43000\mrt100dac_winarm_x86.dll
CLR DLL status: Loaded DLL c:\symbols\mrt100dac_winarm_x86.dll\561408BF43000\mrt100dac_winarm_x86.dll

This is all fine but it is not quite what I expected to see, and it leads on to the issue with using SOS. As you can see, SOS is supposedly loaded by the above command, but normal SOS commands/invocations still will not work. The above DLLs gave me some clue as to what was going on here, and when I looked to see where this mrt100dac_winarm_x86.dll was located it led me to find the SOS DLL. In my environment, everything can be found here: C:\Program Files (x86)\MSBuild\Microsoft\.NetNative\arm and I can see a DLL named mrt100sos.dll and a few variants thereof. So it looks as if there is a special distribution of SOS for the Universal platform, which makes sense.

NOTE: This is where the differences between the platforms (ARM, x86, x64) will come into play. I suspect that this should be the same process for debugging UWP apps on all platforms but I cannot say for certain. At the very least you will see different modules/DLLs listed above, and the modules for the different platforms can all be found under: %Program Files%\MSBuild\Microsoft\.NetNative.

Armed with this knowledge I then headed back to Google and thankfully (luckily!) found one mention of using mrt100sos on a blurb for a non-existent Channel 9 show:

…This is very similar to how CLR Exceptions are discovered. Instead of using SOS, MRT uses mrt100sos.dll or mrt100sos_x86.dll (depending on the target). The command is !mrt100sos.pe -ccw <nested exception> . The same command(s) for CLR Exceptions is !sos.dumpccw <addr> –> !sos.pe <managed object address>.

And sure enough if you follow on from Andrew’s Windows 8.1 Store App articles with the above commands you will be able to see your managed exception in all its detailed beauty. In the following example <Exception Address> would be the value of ExceptionAddress or NestedException in your WinDbg output:

!mrt100sos.pe -ccw <Exception Address>

As an example, I had the following WinDbg output:

0:005> dt -a1 031df3c8 combase!_STOWED_EXCEPTION_INFORMATION_V2*
[0] @ 031df3c8
---------------------------------------------
0x008c2f04
+0x000 Header           : _STOWED_EXCEPTION_INFORMATION_HEADER
+0x008 ResultCode       : 80131509
+0x00c ExceptionForm    : 0y01
+0x00c ThreadId         : 0y000000000000000000001111001100 (0x3cc)
+0x010 ExceptionAddress : 0x7778afbb Void
+0x014 StackTraceWordSize : 4
+0x018 StackTraceWords  : 0x19
+0x01c StackTrace       : 0x008c67e8 Void
+0x010 ErrorText        : 0x7778afbb  "¨滰???"
+0x020 NestedExceptionType : 0x314f454c
+0x024 NestedException  : 0x008d09a0 Void

Taking the above NestedException address, I end up with the following command and resulting output. And this was all I needed to locate the bug.

0:005> !mrt100sos.pe -ccw 0x008d09a0
Exception object: 00f132ac
Exception type:   System.InvalidOperationException
Message:          NoMatch
InnerException:   <none>
StackTrace (generated):
IP       Function
65c279d5 ProblemApp_65810000!$51_System::Linq::Enumerable.First<System.__Canon>+0x99
65c27729 ProblemApp_65810000!$2_ProblemApp::Utilities::AssetsCache::<loadImage>d__4.MoveNext+0xa5
00000001
65539115 SharedLibrary!System::Runtime::ExceptionServices::ExceptionDispatchInfo.Throw+0x19
65539317 SharedLibrary!$13_System::Runtime::CompilerServices::TaskAwaiter.ThrowForNonSuccess+0x4b
655392c5 SharedLibrary!$13_System::Runtime::CompilerServices::TaskAwaiter.HandleNonSuccessAndDebuggerNotification+0x41
6553927d SharedLibrary!$13_System::Runtime::CompilerServices::TaskAwaiter.ValidateEnd+0x19
654ccea1 SharedLibrary!$13_System::Runtime::CompilerServices::TaskAwaiter$1<System::__Canon>.GetResult+0x11
65cf0285 ProblemApp_65810000!$2_ProblemApp::Utilities::AssetsCache::<Initialize>d__2.MoveNext+0x175
…

Would love to RTFM!

So hopefully this will help some poor souls who like me have to debug crashing Windows 10 Mobile UWP apps. If anyone knows of some proper documentation for the mrt100sos commands I would be eternally grateful!


LinkedIn Error “There was a problem sharing your update. Please try again”.

Obscure Error

I was trying to reply to a comment on an article I posted to LinkedIn the other day and kept hitting the error "There was a problem sharing your update. Please try again". Just a note to help anyone who might come across this error when attempting to comment or post an update to LinkedIn: there is an unadvertised comment character limit of 800 characters.

A little help?

It would be great if this was made obvious somewhere such as in the error itself or at least somewhere on the site but even searching the Internet for “There was a problem sharing your update. Please try again” didn’t turn up much for me. It wasn’t until I opened a support ticket that I was given this info.

I hope posting this here will at some point in the future save someone from wasting the time I did.
