Microsoft vs Javascript standards


Microsoft's flagship browser has been causing web developers much anguish for over a decade now, and they appear to have unleashed an "improved" version of that mediocrity upon us via WinJS, their offering for HTML apps on Windows.

By Javascript norms, when a function throws an uncaught error, the error and its stack trace are output to the console, all functions on that stack are aborted, and execution of other parts of the page resumes. Even IE gets this right. Microsoft has, however, decided to "improve" upon this behaviour in their WinJS runtime, where, in the above scenario, what happens instead is that the program simply crashes, in a manner similar to that of a SEGFAULT in a C program.
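To illustrate the standard behaviour, here is a minimal sketch (the function names are ours, purely for illustration): a throw unwinds only the current call stack, and everything else carries on.

```javascript
// Standard behaviour: a throw aborts only the current call stack.
function inner() { throw new Error('boom'); }
function outer() {
  inner();
  console.log('never reached'); // aborted along with the rest of the stack
}

try {
  outer();
} catch (e) {
  // In a page, the browser's default handler plays this role:
  // it reports the error, and the rest of the page carries on.
  console.log('recovered:', e.message);
}
console.log('execution continues');
```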

This is most likely the root cause of the plentiful random crashes in an app that we have been working on.

Even if we wrote our own code defensively to work around this limitation by wrapping everything in try-catch blocks, we simply cannot enforce that the 3rd-party libraries we include do so as well.

... But there is light at the end of the tunnel. After much sleuthing, Chris and I have worked out that there is indeed a way to catch these errors: by setting a window.onerror handler function, and doing evt.preventDefault() inside it.
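On paper, the spec-compliant version looks like this (a minimal sketch; the handler body is our own placeholder):

```javascript
// Sketch of the spec-compliant approach: handle uncaught errors globally,
// and suppress the default reporting. Replace the logging with real code.
function globalOnError(evt) {
  console.log('globalOnError', evt.message, evt.error);
  evt.preventDefault(); // per the spec, cancels the default error handling
}
// In a browser: window.addEventListener('error', globalOnError);
```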

But that would be too easy, wouldn't it? Why yes, of course, we should have had the foresight to realise that the spec would be ignored yet again, and be required to use a parallel API instead. Here's the workaround that we ended up with that appears to work:

window.WinJS.Application.addEventListener('error', globalOnError);
function globalOnError(evt) {
  console.log('globalOnError', evt); // Replace with proper logging functions
  return true;
}

In a third instance of flagrant disregard for Javascript norms, the evt parameter here has neither a preventDefault() nor a stopPropagation() method available. On the other hand, evt.details.error does, but that does not matter, because calling those will crash the program SEGFAULT-style as well - but of course! Instead, we have to return true to mark the error as handled, which takes us all the way back to Javascript event handling patterns that fell out of favour in the 90's.


We may have found a workaround for (at least some of) the random crashes, and in doing so, found this caution to be vital: when doing anything on Microsoft Javascript platforms, tread carefully, and be prepared to write a parallel set of code alongside specification-compliant Javascript. Do not expect anything that "should work" because "it's standard Javascript" to actually work, because that does not appear to be a priority.

Javascript is a language that is supposed to be both community-driven (ECMAScript and W3C) and open; vendors should be creating platforms that move forward with the standards as they progress, rather than break them with works-for-me-only solutions.

Please reconsider your Javascript strategy.

Uploading Files to Onedrive in Cordova App


  1. Set up a Windows-Phonegap app in MSVS2015
    • Phonegap tools
    • Create a Microsoft account
    • Create a Windows Store developer account
    • Registering the app
  2. Authentication using LiveSDK
    • Install via NuGet
    • Application manifest for extra Internets
    • WL.* APIs + caveats
  3. Backend for uploading a file to OneDrive
    • Using the access_token
    • OneDrive APIs

Set up Windows-Phonegap app in Visual Studio

Phonegap tools to generate the solution

The first thing that we need is to generate a Visual Studio solution.

  • Install Phonegap via npm
    • npm install --global phonegap
  • Generate a Visual Studio solution for a new project
    • phonegap create livesdk-onedrive-demo
      cd livesdk-onedrive-demo
      phonegap platform add windows
      phonegap build windows

Now that we have generated the project, we can build and debug the project in an IDE.

  • Open Microsoft Visual Studio 2015
  • Hit Ctrl+Shift+O to open a solution
  • Navigate to livesdk-onedrive-demo/platforms/windows/CordovaApp.sln
  • Note that every time you run phonegap build windows
    • The contents of livesdk-onedrive-demo/platforms/windows/* get overwritten, so be careful
    • This means that when developing the app in Visual Studio, we must edit the contents of the generated folder, and then copy back out to the root folder (livesdk-onedrive-demo/) before committing to git (or other VCS), and before running the phonegap build windows command again.
    • This is a major source of frustration, as this problem does not occur on other platforms such as Android.

Create a Microsoft account

In order to test that any of this works, you will need a Microsoft account.

A Microsoft account is a OneDrive account. Note that, confusingly enough, neither Hotmail accounts nor Office365 email accounts are considered Microsoft accounts. That is the equivalent of Google deeming GMail and Google Plus accounts not to be Google accounts. I have it on good word from developers at Microsoft that they are looking into changing this.

Create a Windows Store developer account

In order to publish your app to the Windows app store, you will need a developer account for it.

  • Skip this step if you already have a Windows Store developer account
  • Sign up
  • Use the email address of the Microsoft account that you created earlier
  • You have to pay for this:
    • Approximately $20 for individuals
    • Approximately $100 for businesses

Registering the app with the Windows app store

In order to do anything with your app which requires that the user logs in to their Microsoft account, you need to register the app with the Windows app store. In order to do this, you need a developer account. The developer account is not free, which means that you cannot develop or test this feature without paying first.

  • Menu: Project --> Store --> Associate App with the Store
  • Dialog: Associate
  • Dialog: Sign in to Microsoft Account - Password
  • Dialog: Sign in to Microsoft Account - Two factor authentication
  • Dialog: Waiting for developer account to load
  • Dialog: Selecting an application name
  • Dialog: Associate confirm
  • Solution Explorer:
    • Check that the following file has been created
    • Solution 'Cordova App' --> CordovaApp --> CordovaApp.Windows --> CordovaApp.Windows_StoreKey.pfx

LiveSDK for authentication

In order for your app to know who a particular user is, and for the user to give permissions to the app to access their account - in order to do stuff such as uploading to their OneDrive - they need to log in to their Microsoft account. The LiveSDK plugin is how we do this.

Install LiveSDK via NuGet

NuGet is a package manager for .NET projects. We use it to install LiveSDK.

  • Menu: Project --> Manage NuGet packages...
  • Tab: "NuGet Package Manager"
    • Search for LiveSDK
    • Select latest stable version (v5.6.2 at time of writing)
    • Install it

Application manifest for extra Internets

We also need to modify the project configuration to allow it access to the Internet. By default, it allows limited access, but we can extend this to include all forms of Internet access.

  • Solution Explorer:
    • Open this file
    • Solution 'Cordova App' --> CordovaApp --> CordovaApp.Windows -->
  • Tab:
    • Select "Internet (Client & Server)"
    • Select "Private Networks (Client & Server)"

Windows Live APIs

LiveSDK exposes several Windows Live APIs via the window.WL object in a Cordova project.

  • Add script tag for /js/wl.js immediately after cordova.js
    •   <!-- Platform native wrappers -->
        <script type="text/javascript" src="cordova.js"></script>
        <script type="text/javascript" src="/js/wl.js"></script>
  • Scopes
    • Be very careful about which scopes you can use
    • Each scope will allow the app to access certain features
    • For uploading a file to OneDrive, the combination that we found we needed was:
      • WL.login({
          scope: ['wl.signin', 'wl.basic', 'wl.offline_access',
            'wl.skydrive_update', 'wl.contacts_skydrive', 'onedrive.readwrite']
        });

Unfortunately, LiveSDK suffers from some problems which make developing for it difficult. These are the ones that caused me the most difficulties.

  • Fractured documentation
    • The only correct reference for scopes
    • Every API will also mention the scopes that it needs, and these are often only partially correct - which means that they do not work at all
    • In particular, pay attention to "Subset and superset behavior"
      • This caused me much pain: when WL.login() was called with a scope array containing both wl.skydrive and wl.skydrive_update, write operations always failed authorization; however, when there was only wl.skydrive_update (which is a "superset" of wl.skydrive), they went through just fine
    • I am still not completely clear about what the difference is between wl.skydrive_update and onedrive.readwrite, or why both were needed in this case.
  • Cannot log out
    • The error we got was:
      • "[WL]Logging out the user is not supported in current session because the user is logged in with a Microsoft account on this computer.
        To logout, the user may quit the app or log out from the computer."
    • However, quitting the app has no effect, and I have not yet logged out of my account - and have no intention of doing so. I feel that I should be able to log out of an app at any time I feel like, and no app should require me to log out of my operating system in order to log out within the app.
      • This has been raised with developers from Microsoft, and they have indicated that they may look into this in the future. For now, however, they suggested looking into WL.basic() as an alternative to WL.login()
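Stepping back to the subset/superset behaviour described above: it can be worth pruning the scope list before calling WL.login(), so that a subset scope never rides along with its superset. A minimal sketch - the only pair we actually verified is wl.skydrive being a subset of wl.skydrive_update, so treat the map as an assumption rather than a complete list:

```javascript
// Drop scopes that a requested superset scope already covers, to avoid
// the subset/superset authorization failure described above.
// ASSUMPTION: only the one verified pair is listed; extend as needed.
const SUPERSET_OF = {
  'wl.skydrive': 'wl.skydrive_update'
};

function pruneScopes(scopes) {
  return scopes.filter(function (scope) {
    const superset = SUPERSET_OF[scope];
    // Keep the scope unless its superset is also being requested
    return !(superset && scopes.indexOf(superset) !== -1);
  });
}
```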

Upload file to OneDrive

So far, we have only managed to install the necessary libraries, and authenticate the user. Now we can finally get around to the original intent, which is uploading a file to a user's OneDrive.

Using the access_token on the front end

We need an access token, obtained during authentication, in order to interact with a user's account, including their OneDrive.

  • WL.login() returns a promise
    • WL
        .login(/* ... */)
        .then(function onLoginSuccess(result) {
          // Do something with result
        }, function onLoginFailure(err) {
          // Do something with err
        });
  • In the success callback of the promise, the first parameter is the result object, which should look something like this:
    • {
        status: 'connected',
        session: {
          access_token: 'AReallyLongBase64EncodedToken',
          // ...
        }
      }

      Now, if we were to invoke the Windows Live APIs, which include the OneDrive APIs, directly from the front end, we would have no use for these: we could simply call WL.api() and have it deal with the necessary authentication-related work. However, in this case, we want to call our own back end APIs, and have the back end upload a file to OneDrive on our behalf. This is where session.access_token comes into play.
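Since the back end does the actual upload, all the front end needs to do is ship session.access_token across. A sketch of how that request might be shaped - the endpoint path and payload are our own (hypothetical) choices, not part of any Windows Live API:

```javascript
// Hypothetical: package the access token and file name for our own
// back-end API; the route name is made up for illustration only.
function buildUploadRequest(session, fileName) {
  return {
    url: '/api/onedrive/upload',              // hypothetical back-end route
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      accessToken: session.access_token,      // from the WL.login() result
      fileName: fileName
    })
  };
}
```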

Using the access_token on the back end

With the access token passed from the front end and in the hands of the back end, we can next invoke the relevant OneDrive APIs to upload the file.

  • Our own back end API thus receives the access_token, along with other information about the file that it needs to upload, such as the file name
  • The first step is to determine the folder ID that we should upload to, using a HTTP GET request

    • In C#:

      • public virtual string GetUploadLocation(string accessToken) {
            // Find out the folder ID (and therefore also the upload path) of the root directory in OneDrive
            var request = (HttpWebRequest)WebRequest.Create(
                String.Format("{0}", accessToken));
            request.Method = "GET";
            var response = (HttpWebResponse)request.GetResponse();
            var responseReader = new StreamReader(response.GetResponseStream());
            // Parse the JSON response and extract the upload_location property
            string jsonStr = responseReader.ReadToEnd();
            var json =
              (Dictionary<string, dynamic>)JsonConvert.DeserializeObject(
                jsonStr, typeof(Dictionary<string, dynamic>));
            return json["upload_location"].ToString();
        }
  • The second step is to actually perform the upload, using a HTTP PUT request

    • In C#:

      • public virtual HttpWebResponse UploadFile(string accessToken, string fileName, byte[] fileData) {
            // Construct the upload URL
            String folderName = GetUploadLocation(accessToken);
            var uploadFileUrl = new Uri(
              String.Format("{0}{1}?access_token={2}", folderName, fileName, accessToken));
            // Upload the file by means of a streaming writer
            var request = (HttpWebRequest)WebRequest.Create(uploadFileUrl);
            request.Method = "PUT";
            request.ContentLength = fileData.Length;
            request.AllowWriteStreamBuffering = true;
            Stream stream = request.GetRequestStream();
            stream.Write(fileData, 0, fileData.Length);
            return (HttpWebResponse)request.GetResponse();
        }
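The same two-step flow can be sketched in Javascript, with the HTTP transport injected so the Live API details stay out of it. This is an illustration of the control flow only - `http(options)` is an assumed helper that returns the parsed response body, and the URL pattern follows the C# version above:

```javascript
// Sketch of the two-step OneDrive upload flow. ASSUMPTION: `http` is an
// injected transport function returning the parsed response body.
function uploadFile(http, accessToken, fileName, fileData) {
  // Step 1: GET the folder metadata to learn its upload_location
  const folder = http({ method: 'GET', accessToken: accessToken });
  // Step 2: PUT the file bytes to upload_location, token in query string
  return http({
    method: 'PUT',
    url: folder.upload_location + fileName + '?access_token=' + accessToken,
    body: fileData
  });
}
```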

This part is the most straightforward, however, it is also the part that fails, and thus feels like it is at fault. In implementing it, the biggest source of grief was using the wrong scope in WL.login(). The responses returned by the Windows Live APIs were simply HTTP error codes (401, 403 and 405) plus their generic descriptions. It would be much more helpful if the responses included a reason as well, such as "You need the wl.skydrive_update scope to perform this action".

Microsoft has also published some docs on the brand new "Unified APIs", available on a separate subdomain; however, these unified APIs do not cover OneDrive. It would be great if they did, though.


Thanks to Saurabh Pawar for the work on the back end to upload the file to OneDrive.

Thanks to Rocky Heckman, Kiril Seksenov, & Ali Al Abbas from Microsoft for their help with navigating Windows APIs.

Software Engineering by Plumbing

When learning how to code for the first time, it is quite exhilarating. All that computational power at your fingertips, just waiting to be harnessed. If you knew the right sequence of commands, and how to invoke them, the world was your oyster. In theory - that is, when using a Turing complete programming language - you could write a program that does anything.

Quite soon, however, this is followed by a realisation. The realisation that IRL - in real life - most of the time when you write software, you are not really writing any of that interesting stuff.

  • Someone else, or another team or company, has already written that interesting piece of code, and probably has already done a better job of it than you can on your own.
  • The task that you have been given most likely falls into one of the two following categories, or at an intersection of the two:
    • Create, read, update, and delete various things
    • Here's a software library that does interesting thing #1, and here's another that does interesting thing #2. Make the two work together.

It almost always pans out this way, simply because as a software engineer, your job is not to write code for code's sake, but rather to write code to solve some business problem, or create some value for the business. That translates to most work being focussed on accomplishing rather ordinary tasks, by finding the right libraries for the necessary sub-tasks, and simply connecting them together, such that the assembled aggregate accomplishes the goal.

I liken this to what a plumber does. In order for the kitchen sink, laundry, and toilets to all work properly, there are quite a number of interesting bits and bobs that need to be in place. Valves, faucets, pipes of various diameters and shapes, et cetera. But these are all pre-made - when something goes wrong somewhere, the plumber merely identifies where the problem has occurred, finds the appropriate part, and replaces it accordingly. The main tools of the trade are identifying which parts go with which other parts, and connecting them together properly. That's not all too different from what the typical software engineer does day to day.

Yearning for the alternative

Some software engineers are pretty happy plumbing code together - that is great for them!

What if you are one who is not, though - one who yearns for that greater stimulation, who is not fulfilled with connecting a valve to a pipe, but wants to make the valve themselves?

Here is my list of things to do to satisfy that urge:

  • Actively be on the lookout for something that needs solving
  • If something has already been solved, take a look at its internal workings, and put some thought into whether this can be solved in a different way
  • Where you cannot find something original, seek to make the plumbing easier
    • Make the plumbing experience easier/ more accessible in the future
    • If you do not want to make a new valve, find a better way to connect the valve to a pipe
    • This could mean doing things like improving the API, writing a DSL, or contributing to the documentation, for an existing project

Automatically publish documentation using Autodocs

I have been working on a NodeJs module quite diligently over the past couple of weeks, and I have finally cut a release that I am happy to put forward for wider use.



Install it as a development dependency

npm install --save-dev autodocs

... and then add an autodocs hook to the scripts section of package.json:

"scripts": {
  "autodocs": "node ./node_modules/autodocs"

... and then invoke the autodocs hook in .travis.yml

- npm run autodocs

Finally configure autodocs by specifying environment variables, also in .travis.yml. The only compulsory one is GH_TOKEN, a Github access token, which you will need to obtain from Github, and then encrypt using Travis.
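Concretely, the relevant .travis.yml addition might look like this - the secure value below is a placeholder for the output of travis encrypt:

```yaml
env:
  global:
    # Placeholder: paste the output of `travis encrypt GH_TOKEN=<your token>`
    - secure: "ENCRYPTED_GH_TOKEN_GOES_HERE"
```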

Note that autodocs does not generate any documentation itself - it is designed to publish documentation from a continuous integration server. It expects there to be a hook named generatedocs in the scripts section of package.json.
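Putting the two hooks together, the scripts section of package.json would look something like this. The jsdoc command is only an example - generatedocs can run whatever documentation tool your project uses:

```json
{
  "scripts": {
    "generatedocs": "jsdoc --recurse --destination documentation src",
    "autodocs": "node ./node_modules/autodocs"
  }
}
```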

That's all. Commit and push to your master branch, and you should get your documentation published to your project's Github Pages URL.


There are quite a few options that you can configure.

For example, you can set it up such that:

  • the URL it publishes to is different,
  • it publishes to a different repository,
  • it publishes when a different branch, e.g. develop is pushed instead
  • it publishes when a release is cut - a tag is pushed

For these, and more options, see autodocs' own documentation, which, you guessed it, is published by autodocs itself.

autodocs documentation


At the moment, autodocs only supports one CI environment - Travis - and one publishing environment - Github Pages. Other CI environments and publishing environments can also be supported.

These, and other issues, can be found in the autodocs roadmap.

Contributions are most welcome!

First contribution to Rust Compiler

I made my first contribution to the Rust compiler yesterday evening!

Rust logo

The learning curve was quite steep, so I started with something relatively simple - adding a detailed error message for one of the errors thrown by the compiler.

If only for my own future reference, I detail the entire process below: Compiling --> Verification --> Submission --> Acceptance --> CI


Compiling rustc for the first time:

git clone https://github.com/rust-lang/rust.git
cd rust/
make -j 4
sudo make install

Now edit some source code to add the new feature or fix the bug.


After making some changes:

make -j 4 rustc-stage1
export PATH=$( pwd )/${PLATFORM}/stage1/bin:$PATH
export LD_LIBRARY_PATH=$( pwd )/${PLATFORM}/stage1/lib
which rustc
rustc --version

(The value of ${PLATFORM} should be obtained from the build output, e.g. x86_64-linux-gnu or x86_64-apple-darwin)

Instead of compiling the entire project all over again, which would take extremely long, simply compiling one of the compile targets (rustc-stage1), and adding the relevant output files to the executable and library paths, is a much quicker alternative.

Verifying the fix

In my case, the change was to add the detailed explanation message for a particular error. To test this, run rustc with the --explain flag:

rustc --explain E0265

Which should output the following:

  This error indicates that a constant references itself.
  All constants need to resolve to a value in an acyclic manner.

  For example, neither of the following can be sensibly compiled:

  const X: u32 = X;

  const X: u32 = Y;
  const Y: u32 = X;

Submitting the patch

Once the patch is OK, fork the repository on github, commit and push to your fork on a new branch, and then submit a pull request for your patch.

Fork the repository on github

Switch the remotes such that the upstream points to the rust-lang organisation's repository, and the origin points to your forked copy of the repository.

git remote add upstream https://github.com/rust-lang/rust.git
git remote remove origin
git remote add origin https://github.com/${GH_USER}/rust.git

Commit and push:

git checkout -b ${SOME_BRANCH_NAME}
git add src/
git commit
# Enter a commit message
git push origin ${SOME_BRANCH_NAME}

Visit the main repository on github again, and click on the link for "compare and create pull request".

This results in a pull request:

Patch acceptance and Continuous Integration

Someone from the core team or reviewers team for Rust will get assigned to review the pull request, and if it passes their review, they will add a comment like this:

@bors: r+ ${COMMIT_HASH}

This triggers the Bors Github bot, and the patch is added to the Homu build queue. Visit the build queue, and find your patch on the list:

Note that you will find that the project contains a .travis.yml file. This led me to believe, initially, that Rust uses Travis as its continuous integration system. However, it only uses Travis for make tidy, which is essentially a linting task. I assume that this is because compiling Rust takes a lot longer than the maximum of forty minutes per build allowed by Travis. The actual CI infrastructure for Rust consists of a couple of Github bots.


As mentioned earlier, it was an extremely steep learning curve.

It takes extremely long to compile the project from scratch; even just the rustc-stage1 target takes ages. This is quite a big inhibitor for further contributions, as one would need access to an extremely powerful build machine in order to attain a reasonable level of productivity developing Rust itself. If possible, I would like this to change - perhaps break the project up into several smaller ones, and make it possible to recompile just the ones that changed.

This problem spills over to the build queue as well. It took just under two days for my patch to get compiled and tested by the continuous integration system, and get merged in.

Luckily I had a couple of experienced guys to guide me - thanks to Michael and Huon for walking me through the innards of librustc and libsyntax!

Now your turn!

If you are interested to get your contributions on, this issue is a good place to start.

At first I attempted to fix this one, however, it was beyond my current Rust skills, so I have had to let that be. Double points to anyone who tackles that!