My Memory Sucks

It just does.

Using Azure-Provided DNS

Posted by Chris on August 15, 2016

I came across this slight annoyance today. I have a bunch of development servers on Azure with an AD deployment, SQL, etc., and I wanted to modify the built-in Azure-provided DNS / Azure-managed DNS records. I’ve since found out that you can’t modify the records for the built-in DNS and you need to set up your own DNS if you want this level of control. Managing my own, internal-only, DNS just seems a bit unnecessary for a development environment so I’ll just modify the hosts file for now.

After a bit more investigation I’ve found that I can’t just add entries to the hosts file, as the servers appear to ignore it for domain controller name resolution. I’m not sure if I’ve ever attempted using hosts file name resolution for joining a domain, so I’m unsure whether this is the result of a Windows security update or just a check Windows has always done. Regardless, it means I’m going to need to set a custom DNS server in Azure, which can be done under Virtual Networks -> (Name of Virtual Network) -> DNS servers -> Custom DNS -> Primary DNS Server.

Posted in Azure, Development | Leave a Comment »

Weird Cursor and Text Malalignment in IntelliJ

Posted by Chris on June 6, 2016

I’ve been having this issue for ages and finally found a post today that solves it. I know I’m going to lose the URL amongst my millions of bookmarks, so I thought I’d post it here for future reference.

Posted in Uncategorized | Leave a Comment »

Sharing an ExpressJS / Connect / PassportJS Session with Golang – Part 1

Posted by Chris on May 26, 2016

TL;DR – I’ve confirmed the process to extract and verify the session ID and accompanying signature from the authentication cookies set by ExpressJS and Connect. For a “no-sh*t-sherlock” version of this article (bare bones research) check out my Oystr map here.



I’ve begun the process of writing some of the microservices for my start-up, Oystr, in Golang, and I’ve encountered a slight issue with authentication. My current authentication is all handled by PassportJS on a Node.JS and ExpressJS stack, meaning I’m going to have to somehow share the session information between the two different platforms. I’m a one-man team at the moment, so re-writing everything in Golang is out of the question, and I’m battling too much with the lack of type checking in JavaScript to continue completely on Node.JS. The solution for me is to access the cookie information saved by ExpressJS and then pull the user information from Redis into Golang.

The Connect.SID Cookie

This is what I know from previous experience with setting up PassportJS: it uses the information stored in the “connect.sid” cookie to find the corresponding user ID in the Redis session store I’m using to persist sessions. Verifying this was the easy bit and was just a matter of opening up Chrome’s developer tools (F12) and looking at the cookies. Sure enough, I found it as shown below.

connect.sid Cookie

Looking at the value, it was clearly URL encoded, and once decoded it had the form “s:sessionid.signature”. At this point you could say “now that I have the session ID I can go look it up in the Redis session store… my work here is done”, dust off your hands and walk away… BUT if you do that you’re opening yourself up to a massive security hole. Someone could try brute forcing session IDs in the hope of hitting one that matches a logged-in user. To make this secure you need to verify that the signature is correct for the given session ID, which proves the value was produced by a server that knows the secret.

Verifying the Signature

So after a bit of research (go look at my research map in Oystr, here, if you want to see my sources for yourself) I figured out the process for creating the signature. I decided that I’d first prove the process works with the sample I took from my browser, using commands on the Linux command line, as I hate writing code for a process that doesn’t actually work. Writing code for something that isn’t even correct is a million times more annoying than having to continuously check your typing in JavaScript (and if you read my last post you’ll realise how big of an annoyance that is to me).

The basic process to create the signature from the session ID is –

  1. Create a SHA256 HMAC of the session ID, keyed with the secret you declare in your ExpressJS middleware setup code. The setup code itself should look something like app.use(session({secret: 'my secret'})).
  2. Creating the signature will generate a long hexadecimal value (64 characters for SHA256) like “6F0AD0BFEE7D4B478AFED096E03CD80A…”. You need to convert the bytes this hex represents to base64, and that value (minus any trailing “=” padding, which Express strips) should match the signature stored in the “connect.sid” cookie.

It’s only two steps so it’s pretty basic. After some searching around I figured out you could do the conversion from the Linux command line using the “openssl”, “xxd”, and “base64” commands.

  1. You use “openssl” to first create the signature for the session ID.
  2. You use “xxd” to convert the signature, which is actually just a “string” of numbers and letters, into the actual bytes represented by the hexadecimal characters.
  3. You use “base64” to convert the bytes from “xxd” into a base64 format.

You can do all of that pretty easily with the following commands –

  1. echo -n '<session id>' | openssl sha256 -hmac '<my secret>'

    This dumps out a long hexadecimal string (depending on your OpenSSL version it may be prefixed with something like “(stdin)= ”; only the hex part matters). This is the <signature> to be used in the next command.

  2. echo -n '<signature>' | xxd -r -p | base64

After that second command, assuming you did everything correctly and used the correct values, you should have a base64 signature that matches the signature part of your example connect.sid cookie. One thing to watch: base64 pads its output with trailing “=” characters, but Express strips these from the cookie value, so ignore any trailing “=” when comparing.

What next?

So now that I can calculate the correct signature, I can make sure that the session ID supplied by the user hasn’t been tampered with and that they are unlikely to be attempting a brute force attack. In my part 2 post I’ll give details and code showing how I made use of all of this to develop a Golang package that shares the sessions and user profiles between my two platforms.

Posted in Development, Node.JS | Leave a Comment »

Why JavaScript is giving me depression…

Posted by Chris on May 19, 2016

Disclaimer: This is not a serious post. It’s intended to be a little tongue-in-cheek. Before you try to start a war in the comments section on behalf of either the “JS sucks” camp or the “JS is the second coming” camp: I won’t be approving any of those comments, as THIS POST ISN’T SERIOUS and I’m not speaking for either side.

For the last year I’ve been building Oystr, a collaborative problem solving platform, upon Node.JS. Prior to moving entirely into JavaScript for the full stack, I had been developing on top of .NET using C# for about 15 years. Over the last year I’ve found working with JavaScript on the server to be liberating at times compared to the .NET/C# world. But this has come at a huge cost. My initial elation with JavaScript is now beginning to turn to depression. For a while I’ve been going backwards and forwards between elation and depression so much that I started to feel kind of bi-polar. But now, I think I’m ready to call it. JavaScript is making me depressed.

I think one of the biggest annoyances is the lack of types. I know people are probably thinking “I could’ve told you that”, but let me explain. It’s not so much the lack of strong typing itself, it’s more what strong typing provides you… auto-complete. Man, I miss proper VS.NET auto-complete. The thing I love about JavaScript is its duck typing. But obviously this has come at the cost of my sanity.

But really, auto-complete is just a convenience. That’s not the main reason I feel depressed with JavaScript. The main reason is that re-factoring just feels so much harder. I don’t have any quantitative data to back this up; it’s just a feeling. But damn, it’s stressful when I want to do something simple like rename a method. I’m always worried that WebStorm will accidentally rename similarly named methods or properties. And here is the worst part… I can’t do a compile as a simple sanity check. Sure, compiling doesn’t mean the code works, but it goes a long way towards the peace of mind of “oh, it seems to be OK…”, which is a hell of a lot better than wondering “have I missed or forgotten anything?!” and never feeling comfortable saying it’s all fine until you’ve done a full systems test.

This compile-time related depression (I’m calling it CTSD for Compile-Time Stress Disorder) extends beyond the fact my compiler safety net no longer exists. It’s the fact that when I started programming 22 years ago, what got me so excited as a child when I moved from BASIC to C / C++ was that I could compile my code and when it compiled / linked properly, it felt like a milestone. Programming, for me, is more than just work. I love to code, it’s part of who I am. Taking away one of the things that makes me feel like I’m getting somewhere (whether that’s actually true or not) just sucks the fun out of programming for me.

I think for the sake of my sanity and to bring joy back into my nerd life, it may be time I put down JavaScript and start looking for something not CTSD prone.

Posted in Development, General, JavaScript | 5 Comments »

Is Agile Suited to Competitive Software Projects more than Support-based Software Projects?

Posted by Chris on April 6, 2016

I came across this interesting post on the LinkedIn CSM board today from Victor Penman and felt compelled to provide my 2 cents.

Who actually implements Agile?

Companies whose main product is software are more likely to implement Agile practices than companies where software (including IT) is a support function. This is what I have observed at the 12 companies where I have seen Agile (usually Scrum) either introduced or attempted to be implemented.

I believe this is because companies that sell or license software need products that will compete in the real world where customers have many options. Support software only has to be adequate.

The quality that comes from Agile, including the close interaction with customers, ability to pivot in response to changing market conditions or customer desires, and allowing teams to determine the best way to deliver, provides a needed competitive advantage.

Software developed for internal support does not face these pressures. Its users are unable to turn to a competitor. This allows support managers to continue to operate in the traditional Command and Control mode. If Agile practices are mandated from above, it is more likely that only the trappings of Agile (ceremonies for example) will be implemented.

I will appreciate hearing if others’ observations support or refute mine.

Turns out my 2 cents is actually more like $4.50 so I decided to post my response here (I ran out of room in the LinkedIn response box).

I agree with your premise to an extent, but believe that in addition to pressure (rather than calling it competition) a team also requires an environment in which learning from experiments (rather than safety from abject failure) is encouraged. I have 3 experiences in particular that led me to this conclusion.

In the case of pressure without the safety to “fail”, I look back to one product-based business I worked with that was under extreme pressure to perform. Although the team was one of the smartest I’ve worked with, the company still collapsed as a result of poor product implementations, which led to a loss of customer confidence. I believe a big component of the failure was the focus on architecting perfect implementations from the start, rather than delivering solutions to customers early and learning from the interactions.

In the case of no pressure with safety to fail, I’m reminded of one product-based business that I worked with that had attempted to implement a new version of a core product 3 times over the course of 5+ years. I was involved in the 3rd attempt and soon after I left they started on the 4th attempt. This is a company that had made, and continued to make, a lot of money from their existing product and client-base. As a result, there was too much safety to fail so there was no push to experiment or learn from mistakes quickly.

The third project I want to point out was a successful project for an internal system for a large utility. In this particular project, there was immense pressure in the form of limited resources and time. In return, the project team was given a huge amount of safety around failure by the business owners. Although the project was given approx. 5 months of budget in which to complete, the project was re-assessed by a panel of business owners every month. At each monthly meeting, the team would communicate to the business any potential risks to the success of the project and if anything was deemed insurmountable, the panel had the right to pull the plug on the project. “Pulling the plug” on the project would not result in blame being attributed to any group or person, it was just considered a natural decision that had to be made as new information came to light.

The main takeaway from my 3 examples is that the first two, both product companies, each lacked a critical element. The third was, as you call it, an internal project to develop support software, yet it has been one of my most successful to date: the business owners were happy as it came in within budget, and the end-users were so eager to use the system that the beta went “viral” internally. I attribute the success of this project to the combination of pressure and the safety to experiment.

Posted in Development, Methodologies and Principles, Organisational Development, Sense-making | Leave a Comment »

Docker “[graphdriver]” and ‘”aufs” failed: driver not supported’ Error After Ubuntu Upgrade

Posted by Chris on March 31, 2016

I’ve had the following error a number of times after updating Ubuntu, so I figured I should post something about it (I’m bound to forget the solution), especially as the first “fix” I came across wasn’t actually an appropriate one. More on this below.

ERRO[0000] [graphdriver] prior storage driver “aufs” failed: driver not supported
FATA[0000] Error starting daemon: error initializing graphdriver: driver not supported

If you’re receiving the error above after an Ubuntu upgrade, the reason may be that you no longer have the correct kernel packages installed. There are details on how to reinstall these at

I got this solution from the issue posted at The problem with that issue log is that the first “fix” mentioned is to remove “/var/lib/docker/aufs”, which will remove your containers… not ideal. The thing is, the solution I suggest above is only mentioned much further down the thread, and only in passing, so if you’re impatient (like I am the majority of the time) you may totally miss it and delete “/var/lib/docker/aufs” out of frustration (proud to say I didn’t do this, because deleting things to fix something makes my infrastructure “spidey-sense” tingle).

Posted in Development, Docker | Leave a Comment »

Restoring Redis from RDB Dump File to Server Using AOF Logging

Posted by Chris on March 27, 2016

This is just a quick post as a reminder to my future self on how to restore a Redis database from a dump.rdb file to a server set to use AOF (append-only file) logging. I found this post on Stack Overflow and the really helpful part is the 4th comment on the top-voted answer.

The comment makes it clear that you should start the server with AOF disabled so that it loads from the dump.rdb file. To get Redis to restore from the dump.rdb file, copy it into the server’s data directory before starting it. Once the server has loaded the data, you can turn AOF back on with the command “config set appendonly yes”.

Posted in Development, Redis | Leave a Comment »

Docker Image for Lightweight Node.JS and PM2 on Alpine Linux

Posted by Chris on March 17, 2016

I recently changed my deployment workflow for Oystr to be containerised within a Docker image. This has greatly simplified deployment and, combined with the private NPM repository I configured using Sinopia, has streamlined my development process too, as I have now re-factored all my code over the past week into a microservices architecture.

During the process of doing this, like many others, I’ve found that the base Ubuntu Docker image is pretty heavy compared to how much code I’m deploying. The base Ubuntu Docker Image is ~190MB and after deploying things like Node.JS this bloats quite a lot to something closer to ~500MB. For my development machine this isn’t a problem, but Oystr is hosted on top of SSD-based infrastructure in the cloud, and this space is considerably limited.

As I’m doing Oystr on a pretty tight budget, I can’t afford to be too wasteful with these kinds of resources. This led me to re-visit something I thought about a while ago, which is to host Node.JS on top of Alpine Linux, a very minimal Linux distribution. After a little searching I found that Irakli Nadareishvili had already done a basic version at which works great!

As he’s using runit and I want PM2 as my process manager, I decided to fork his project and make the necessary changes. While I was at it, I exposed the web port, a volume for mounting your Node.JS app into, and an environment variable for the name of the main Node.JS file, to make configuring a PM2-based Node.JS application a little easier. The original image is around 58MB and with the changes I’ve made it’s only grown to about 78MB! Definitely an improvement over the Ubuntu image I’ve been using to date.
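For the curious, the kinds of additions described above look roughly like this in Dockerfile terms. This is a sketch from memory rather than the actual Dockerfile; the port, paths, and variable names are illustrative only.

```dockerfile
# Expose the web port the Node.JS app listens on.
EXPOSE 3000

# Volume for mounting your Node.JS app into the container.
VOLUME ["/app"]

# Name of the main Node.JS file; override with -e at `docker run` time.
ENV NODE_APP app.js

# Run PM2 in the foreground so the container stays alive.
CMD pm2 start /app/$NODE_APP --no-daemon
```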

If you’re looking for the Docker image or source, they can be found at the following addresses. I’ve changed the README to reflect the changes I’ve made to the original Dockerfile, so hopefully you’ll be able to get a deployment up pretty quickly.

GitHub Source –
Docker Image –

Posted in Development, Docker, Node.JS | Leave a Comment »

Dangers of Node.JS Package Versions and Singletons

Posted by Chris on March 16, 2016

I was working on a singleton-pattern object to manage some global configuration in Oystr today and came across a potentially dangerous realisation, which I tested and which turned out to be true. The issue is that if you are using a module that exports a singleton, that singleton will only be the same object across modules that are using THE SAME VERSION of that module. This is a pretty important “gotcha” and could create a huge debugging headache if not managed correctly. To demonstrate this exact problem, I’ve created a project hosted on my GitHub at and have provided a guide below.

What the Sample Application Does

The sample application in NodeVersioningAndSingletons does something pretty basic. It sets the name on the singleton-based greeting module (known as mymemorysucks-greeting-module) and asks the friend module (known as mymemorysucks-friend-module) to introduce itself. The friend module in turn uses the greeting module to create an appropriate greeting, and the app then displays that greeting on the console.

To demonstrate the issue, I’ve created two versions of this simple application. One version shows the output when the main application and the friend module are using the same version of the greetings module; this version is called app-same-version. The other shows the output when they are using different versions of the greetings module; this version is called app-diff-version.

Output of the app-same-version

The output of the app-same-version application is as you would expect. The main app.js script sets the name to “Chris” and then the friend object greets Chris as illustrated with the screenshot below.


Output of the app-diff-version

This is the problematic version. In this version, the main app has been set to use a different version of the greetings module to the friend module. When you run the application you get the following output.


As you can see, it now greets “undefined” as a different version was set.

Why does this happen and how do I fix it?

If you compare the node_modules directories from the two applications, they differ in that, in app-diff-version, mymemorysucks-friend-module imports its own copy of the greeting module, separate from the application’s copy. In app-same-version, mymemorysucks-friend-module doesn’t do this.

At this point I have no idea how to fix this situation if, say, it’s occurring between two external modules that you’ve imported into your project. The only idea I have is to manage the package versions carefully and define specific versions for any modules that export “singleton” objects. In other words, in your package.json “dependencies” list, specify an exact version for all such modules. If you have any better ideas it would be great if you could share them and I’ll add them to this post.
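To illustrate that last idea, pinning means using exact version numbers, with no “^” or “~” range prefixes, in the “dependencies” of every package that consumes the singleton module. Something like this (the version number is hypothetical):

```json
{
  "dependencies": {
    "mymemorysucks-greeting-module": "1.0.0"
  }
}
```

When every consumer resolves to the identical version, npm (v3 and later, with its flattened node_modules) can deduplicate the module into a single copy, so everyone gets the same cached instance.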

Posted in Development, JavaScript, Node.JS | Leave a Comment »

Single-Click Private NPM Repository on Docker using Sinopia and Nginx

Posted by Chris on March 7, 2016

TL;DR Just developed a “single-click” Docker file to deploy a private NPM repository using Sinopia and Nginx. Not on the Docker hub yet, but you can pull it from my Github at

I’ve been using Docker quite a lot over the past 2 weeks to simplify Oystr’s deployment and I’ve really come to love it! Whilst I do love the ease with which you can create repeatable and easily transferable deployment environments, I think what has really won me over is the way you work in a completely isolated environment without having to go to the extent of a virtual machine.

Thanks to my personal enlightenment on the wonders of Docker (and the current “oooh, I have a new toy” fanboy obsession I’ve developed), I decided to write a Dockerfile for a Sinopia and Nginx deployment. If you’re not familiar with Sinopia, it’s basically a private NPM repository. It’s not a full repository like the official NPM one (for a start, it uses the file system as storage, not CouchDB), but it does a pretty good job of emulating one. You can find out more about it at

I did try to use the two most popular Docker images for Sinopia currently on the Docker hub, but I couldn’t get them working with a single click – and I really wanted a single-click script. So like any good (and impatient) developer, I rolled my own. Hopefully it’ll mean you won’t have to! (although I totally understand if you do😉 )

You can get the Dockerfile and related files from my Github at and build it for yourself very easily with the following commands –

  1. sudo docker build -t sinopia-nginx .
  2. sudo docker run -p 80:80 --name sinopia -d sinopia-nginx

I’ll write some better documentation over the next week and improve it a little, but for now, if you’re just after a single-click Sinopia deployment, this should hopefully suffice. My plan is to upgrade it so that –

  1. The Sinopia config.yaml and the Nginx nginx.conf files are mapped to volumes.
  2. The Sinopia storage is mapped to a volume.
  3. Nginx can be provided with certificates/keys for SSL, and SSL is set up automatically.


Posted in Development, JavaScript | Leave a Comment »

