Multiple monitors? You should buy VMware Fusion instead of Parallels Desktop

In a post three years ago, I waxed lyrical about how much better Parallels Desktop was than VMware Fusion for the very common task of running Windows on your Mac.

It’s time to take that back.

Parallels Desktop is no longer fit for purpose if you are an advanced user.

How Parallels Desktop broke multiple monitors

In older versions of macOS, each virtual desktop spanned your whole set of monitors. If you had a left and a right monitor, switching Spaces (virtual desktops) switched both at once: Desktop 1 covered both screens, Desktop 2 covered both screens, and so on. The major downside was that when an application went “full screen” (rather than just maximised), it took over one monitor and left the other one completely blank, which was complete madness.

Parallels 11 supported two ways of rendering full screen across multiple monitors. The first used macOS’s built-in full screen function (more on that in a minute); the other was a “non-native” method that drew a borderless full-screen window over the whole of each screen.

To work around this full screen issue on multiple monitors, OS X Mavericks introduced the option for displays to have their own “Spaces”, meaning your left and right monitors each get their own set of virtual desktops. The catch is that each monitor can then switch desktops independently, giving you four different combinations with just two monitors and two desktops – a context-switching nightmare. Most power users turn this off, especially if they use keyboard shortcuts (CTRL+arrow keys) to switch between Spaces, because the monitor that switches is whichever one your mouse cursor happens to be over.

The combination of turning off “Displays have separate Spaces” in macOS and disabling “native full screen mode” in Parallels gave exactly the behaviour that multi-monitor Parallels users had become accustomed to for many, many years.

Parallels 12 changed all that by removing the non-native full screen option that worked perfectly in version 11, leaving users with no satisfactory multi-monitor display mode.

Users were up in arms:

  • Eight pages of complaints on the official Parallels forum when Parallels 12 launched with this
  • A feature request for “usable” multi-monitor support

Did Parallels listen? Well, only a little. Near the end of version 12’s shelf life they pushed out an update containing a workaround – an option to “switch” all other Spaces to Parallels when you click Parallels on another Space. Sounds great, but it still doesn’t let you switch in and out of Windows on all of your screens at once.

Users were livid. The terse Knowledge Base article didn’t help either.

Then Parallels 13 came out with no new fixes for this. Parallels was effectively dead for users with multiple monitors.

Other reasons not to use Parallels any more

The push for yearly subscription pricing. You aren’t Creative Cloud, guys. The last thing users want when buying a piece of utility software is to have to set calendar reminders for when they’re going to be auto-rebilled.

The shovelware and crapware that Parallels pushes on you, even via advertisements inside the application that you paid for. Who doesn’t want a subscription to Parallels Remote Access or “Parallels Toolbox”?

Only 9.99 USD a year!!

The resurrection of VMware Fusion

Back in the day, Parallels spanked VMware Fusion on performance. They became market leaders and deserved it. I fondly remember running Parallels 4 against a Boot Camp partition on a now-clunky old Mac Mini and being pleasantly surprised.

I’ve recently given VMware Fusion 8.5 a go and I am pleased to say its performance for my main use case (Visual Studio on Windows 10) is indistinguishable from Parallels. It imported my Parallels VM flawlessly. It didn’t pester me to install anti-virus in my Windows 10 VM (something so completely pointless that Parallels must be getting kickbacks). There will be a free upgrade to VMware Fusion 10 this October. And most importantly…

It works correctly with multiple monitors!

Yes, VMware Fusion 8.5 behaves the same way Parallels 11 used to.

RIP Parallels Desktop.

Hidden full screen web page kiosk mode in Windows 10 Anniversary

Running Windows 10 Anniversary Edition? Click this link and say yes to the prompts (You’ll, er, have to press CTRL+ALT+DELETE to exit and sign in again).


You just launched the hidden Take a Test app. Windows 10 Anniversary now includes a chromeless kiosk mode that web pages can launch. Basically any link in the format…


…will launch the app. Administrators can even create user accounts that are locked down to single web pages where CTRL+ALT+DELETE is the only way out.

Notably, some extended JavaScript APIs are available when running under kiosk mode – interestingly, some are even called getIPAddressList, getMACAddress and getProcessList. Yes, with a couple of prompts, a web page can launch the Take a Test app and get a list of the user’s running processes and their MAC address.

I wonder how long until this gets abused.

Shameless plug: This post was written with Net Writer – a little app I wrote to help blogging on Windows 10. If you have Windows 10, download it for free.

Better git and other Linux based command line tools on Windows 10

One fantastic new feature in the latest version of Windows 10 is an add-on that lets you run an Ubuntu-based Linux distribution natively inside Windows. This opens up a whole new world for developers on Windows, including access to the same class of Git and SSH tools available on OS X (goodbye PuTTY!).

To enable it, start by heading to Settings, Update & security, For developers and turn Developer mode on.

Then, right click on the Start icon, click Programs & Features, “Turn Windows features on or off” and enable “Windows Subsystem for Linux (beta)”. You’ll need to then restart your machine.
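If you prefer the command line, the same feature can be enabled from an elevated PowerShell prompt using DISM – a sketch, assuming the feature name used by the beta (you can confirm it with “dism /online /get-features” first):

```shell
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
```

Either way, a restart is needed before bash will run.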

Once back, open an admin command prompt by right clicking Start and choosing “Command Prompt (Admin)”. Then type “bash” and hit enter. You’ll need to set a few things up – including choosing a username and password for the Linux install – then an Ubuntu image will download from the Windows Store. You’ll then be dropped into a bash prompt that will feel familiar if you have ever used a Terminal on OS X.

The first thing you should do is run “sudo apt-get update” to refresh the package lists, followed by “sudo apt-get upgrade” to bring the Ubuntu install’s packages up to date.

Using the new Git

You can now use Ubuntu’s version of Git instead of the Windows version you likely have installed. To install it, open a bash prompt and type “sudo apt-get install git”.

Opening a bash prompt in your Windows user directory by default

By default, the “Bash on Ubuntu on Windows” shortcut opens a bash prompt in the user directory of the Ubuntu install. This isn’t very useful if you still need to interoperate with files in your main user directory. To fix this, start by right clicking on the “Bash on Ubuntu on Windows” shortcut in the Start menu, going to More and Open file location.

You can then right click on the shortcut and choose Properties. Delete the tilde ~ character from the end of “Target”, enter %USERPROFILE% in the “Start in” box and hit OK.

Clicking the shortcut will now open a prompt in your Windows user profile folder, via the magic of the mount points WSL sets up (your C: drive appears as /mnt/c).
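Those mount points make the mapping between Windows and Linux paths mechanical – a quick sketch of the translation, assuming the default /mnt/&lt;drive&gt; mount layout (the example path is made up):

```shell
# Translate a Windows path into its WSL mount-point equivalent:
# backslashes become forward slashes, and "C:" becomes "/mnt/c".
winpath='C:\Users\Alice\project'
linuxpath=$(printf '%s\n' "$winpath" | sed -e 's|\\|/|g' -e 's|^\([A-Za-z]\):|/mnt/\L\1|')
echo "$linuxpath"   # /mnt/c/Users/Alice/project
```

Handy when pasting Windows paths into the bash prompt.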

Simply right click the icon in the taskbar and pin it to get a shiny new Unix-based command line on Windows without Cygwin or MINGW32. Magic!


An ode to Surface 3

It is increasingly looking like the Surface 3 is going to be discontinued. Microsoft is running out of stock on the 128GB / 4GB RAM model. Third party vendors are heavily discounting it, suggesting a clearance. The biggest sign of its demise is that Intel are simply going to stop making the quad-core Cherry Trail Atom processors that power the Surface 3 and other tablets like it.

This is a crying shame. The Surface 3 (not to be confused with the larger, laptop-class Surface Pro 3) is simply a fantastic tablet device.

The history

Surface 3 was the successor to the Surface 2, which followed on from the Surface RT. Both Surface RT and Surface 2 were powered by ARM chips and a limited, cut-down version of Windows, Windows 8.1 RT. They were never eligible for an upgrade to Windows 10 (although the work done to enable Windows-on-ARM lives on in Windows 10 IoT). The market also shunned them, and customers were confused by them.

I was in New York for the launch of Surface RT, picked one up and loved it. However, I personally witnessed customers, after queuing for an hour to get into the pop-up Microsoft Store in Times Square, decide to leave empty handed when they found out that the Surface RT wouldn’t run iTunes. Strangely, the Surface Pro, which would have run iTunes, had its release staggered to a few days after the RT launch. I believe this caused significant confusion and prevented the Microsoft Store staff from successfully upselling.

Surface RT was a fantastic device for its time, albeit with serious flaws. I loved the fact it was a perfect Remote Desktop machine, but aspects like the custom charger and stupid 16:9 aspect ratio took until the Surface 3 to resolve.

The hardware

The Surface 3 is a real PC, well crafted for the price point. Some of my favourite features of the hardware are:

  • Micro USB charging port – you can charge this thing with almost any cable or charger you already have lying around, including USB charging battery packs. This makes it extremely easy to travel with. The Surface 3 is the only Surface (including RTs and Pros) ever made with a standard, universal charging connector.
  • Stylus support – one of the USPs of the Surface Pro vs the Surface RT was that the Pro had a Wacom stylus and digitizer. It took until the Surface 3 for the non-Pro line to get a stylus to match. Although you do need to buy the pen separately, the pens are the same across the Surface 3 and Pro (which has a pen bundled). I have a feeling this might have cannibalized sales of the Surface Pro 3 and 4.
  • USB 3 port – pretty much every peripheral ever made for a PC works with the Surface 3.
  • Mini DisplayPort connector – you can plug directly into a large monitor for dual-screen working.
  • Kickstand – this is unbelievably useful on airplanes and something that Apple is too proud to add to the iPad without resorting to flappy, folding cases. Without the keyboard attached this enables hands-free viewing in a really small footprint.
  • Expandable storage – you can bung a micro SD card in the slot in the back to expand the storage.

None of the above features are available on non-Pro iPads without accessories and dongles. Stylus support is limited to the iPad Pro.

The software

Whilst it shipped with Windows 8.1, the Surface 3 now runs Windows 10 like a charm. Some of the best bits:

  • Battery Saver mode – this really works. It shuts down background processes (even Windows Updates!) and underclocks the CPU. I have seen the Surface 3 stretch to around 10 hours of use when browsing with Battery Saver turned on.
  • InstantGo/Connected Standby – Surface 3 picks up emails and Skype calls when in standby mode. It does actually work.
  • Real Chrome – because this is a real PC, you can run full Chrome with extensions. Hilariously, Chrome had better support for tablets than Microsoft Edge until the Anniversary Update – Chrome supported swipe left/right for back/forward when Edge did not. iPads are limited to a fake Chrome (Safari in a wrapper) with no extension support.
  • Legacy software – Microsoft Money, a program Microsoft stopped supporting in 2008, still works on this.
  • Native support for FLAC and MKV – one of my favourite features of Windows 10 is built in support for FLAC, the most popular lossless audio encoding format, and MKV, the most popular HD video format container. Apple still does not have native support for these in macOS or iOS.
  • Multiple user accounts – unlike an iPad, you can actually have multiple user accounts with separate settings etc. You can create user accounts for your spouse and children without the ability to administer the device. I believe Apple’s solution to shared devices is to, er, buy another one.

The only real downside is the slow eMMC disk: major Windows 10 version updates can take over two hours to install.

Pricing and comparisons to iPads

Surface 3 in the UK comes in two main models:

  • 64GB Storage, 2GB RAM – 419.99 GBP
  • 128GB storage, 4GB RAM – 499.99 GBP

I own the second model, purchased at the Hawaii Microsoft Store for 599 USD, along with a US layout type cover at 129 USD and a stylus at 49 USD. This was a total of 540 GBP at the time, so thanks to the exchange rate I essentially got the type cover for free.

If you want to buy an iPad with 128GB storage, this will cost you 619 GBP for the 9.7 inch iPad Pro. The iPad Air only goes up to 64GB for 429 GBP. You still don’t get a kickstand, expandable storage or even a USB port. iOS doesn’t even support a mouse, Bluetooth or not, forcing you to get gorilla arm when using it with a keyboard attached.

At under 500 quid, this is a feasible device to travel with without having your holiday ruined if you lose it. I cannot find any justification for getting a Surface Pro 4 at double the price for the mid-range i5 / 8 GB RAM / 256 GB storage model. After using a 13 inch MacBook Pro as my main machine for three years, I’ve now offloaded the Mac and returned to having a beefy desktop plus a cheap, portable companion tablet PC. I was sorely tempted by the Surface Book, but for two thirds of the price you can build a beast of a desktop and get a Surface 3 or another companion device for portability, using Remote Desktop if you need to connect back to base.

For those who don’t mind Windows and want a companion device, I really recommend getting a Surface 3 whilst you still can. They were/are truly revolutionary at their price point.


Get Living London E20 East Village – 18 months on Review

This is a follow up to my initial thoughts on the rental properties by Get Living London at East Village E20, approximately 18 months after moving back to the UK and settling down here.

Get Living London are simply the best landlords in London. Period. If you have to rent and live in London, ideally it should be from them. They are now Private Landlord of the Year for the second year in a row, and it’s not hard to see why.

After my first year’s tenancy, I started the process to extend it by another two years. There were no fees whatsoever and the rent was only hiked by RPI – in my case about 20 quid. There is still only a tenant break clause, not a landlord break clause.

Living here for a while has meant I have had to interact with the management office on numerous occasions – lost keys, things that needed fixing, meter readings etc. On every occasion the response has been prompt, often on the same day, and completely professional. This is because they are professionals – not amateur Buy to Let “investors” farming you for their pension. The management office is also just down the street and open extended hours if you need anything.

For young people, the rental sticker price of the apartments might be a bit of a shock. They are premium priced, but remember there are no fees – which, last time I calculated, work out at about 60 pounds a month if you went through Foxtons. For young professionals they offer the ability to completely split the rent for flat sharing, with separate direct debits. This is a vast contrast from the student days of renting, when one “lucky” tenant had to round up everyone else’s contributions and pay every month.

A special shoutout goes to Hyperoptic, the Fibre To The Premises broadband provider. Get Living London residents can get 20MBit free, with special rates for the 100MBit and 1Gbit packages. This is still the finest consumer internet connection I have ever used in the entire world. Frankly, it will now be hard to live in a non-Hyperoptic area of the UK.

Warning! Do not get conned into paying for Sky, TalkTalk, BT or other ADSL/VDSL based internet providers (even if it’s “free with Sky TV”). You are effectively being missold when you have an Ethernet jack in the cupboard with real internet that just needs turning on. You also do not need to pay for “line rental”.

Bills vary through the year. The heating and hot water bill for our two bed apartment ranges from 40-80 pounds a month, depending on usage. Bearing in mind we tend to run a full bath every day, this is quite reasonable. There are no gas bills since there is no gas. Electricity can be had from your choice of provider – mine is around 30 a month from GB Energy Supply who have charges that are the closest to the wholesale rate that you can find on the market.

Stratford International, the local DLR station 1 minute from our flat, has now become Zone 2 – this means commuting into London is even cheaper. And still certainly much better than paying over 400 quid a month to commute from Sussex into London on Britain’s most delayed train service (and people say renting is throwing money away, how about three hours of your life a day?).

If living next to the Westfield Stratford mall isn’t enough, shops have started to open in East Village itself. There is now an amazing Fish and Chip shop, Ice Cream parlour, two bars/pubs, coffee shops, a pizza place, dry cleaning and other awesome independent stores. The Fish and Chips from Fish House are out of this world.

Fish House East Village E20 – from the inside

It is worth mentioning the construction work that has started in the village. Two large towers are currently being built on an area of the green space in the centre. Despite the disruption, and a bit of an eyesore while the towers go up, this is a good thing for London. London needs more quality homes from reputable, professional landlords – not just tower blocks designed to park Chinese money, which is the case for the majority of new builds going up in the city (marketed off-plan for a week in Hong Kong before the locals get a look in).

So all in all, still a great place to live and remains highly recommended. Drop me an email if you have any questions about E20 or Get Living London.

Net Writer: porting Open Live Writer to Windows 10

A few months ago I started to write a replacement for Windows Live Writer for Windows 10 using the new Universal Windows Platform, calling it Net Writer and putting it on the Windows Store in Preview.

A few weeks later, Scott Hanselman announced that Windows Live Writer had finally been open sourced as Open Live Writer under the MIT licence. It was time to throw away my code and use that!

Scott was not joking when he said:

IMPORTANT HISTORICAL NOTE: Much of the code in Open Live Writer is nearly 10 years old. The coding conventions, styles, and idioms are circa .NET 1.0 and .NET 1.1. You may find the code unusual or unfamiliar, so keep that in mind when commenting and discussing the code. Before we start adding a bunch of async and await and new .NET 4.6isms, we want to focus on stability and regular updates.

Windows 10 apps use a subset of .NET alongside the Windows Runtime (WinRT for short) – vast swathes of the full .NET Framework are missing. Some of the things I had to change include:

  • Ripping out everything apart from the connectivity code. This was not easy as there were UI dependencies everywhere.
  • Removing System.Net.HttpWebResponse and replacing it with the much better HttpClient. It almost looked like I wasn’t going to have to do this, until I realised that the backwards-compatible System.Net interface was not handling gzip responses correctly. HttpClient, however, is async only; therefore:
  • All methods need to be async and non-blocking. Windows Live Writer used some classic .NET 2.0 era background threading tricks that are unnecessary today and Windows 10 apps are expected to be async all the way through.
  • The XML API has changed quite a bit and now wraps around what I assume is a C++ implementation underneath. System.Xml has been replaced with Windows.Data.Xml.Dom. There is a bizarre new way of querying with XML Namespaces that StackOverflow saved my bacon on. 

It took days of staring at 1000+ compiler errors but I managed to get a subset of Open Live Writer working. Net Writer currently only supports WordPress blogs (like this one) but I will be gradually turning on the other supported blog engines as I test them out. This also means that Open Live Writer code now works on Mobile – however the user interface is a bit of a hack job at the moment.

You can try Net Writer out for free from the Windows Store. I update the Preview when time allows and love getting feedback.

Windows 10 on Mac Bootcamp – fixes (Updated)

Update 19th August 2015: Apple have released Bootcamp 6, which fixes all of the below when using Windows 10. If you already have Bootcamp 5 installed, run the Apple Software Update utility to get the latest set of drivers. The only oddity I’ve had with Bootcamp 6 is that it resets your DPI scaling to 200%.

Windows 10 on Bootcamp (MacBook Pro 13 inch, Bootcamp 5.1) has some teething issues as of build 10162.

SSD Powering down problems

You might notice Windows hanging for extended periods of time or blue screening – the SSD is literally powering down underneath Windows. The Bootcamp drivers don’t properly support Windows 10’s powering down of the SSD to save battery. Your Event Log might have references to “Event 129, storahci – Reset to device, \Device\RaidPort0, was issued.” To fix this, you need to disable AHCI Link Power Management and prevent storahci from going into low power mode.

1. Copy and paste the following into a new text file called “enable-hipm.reg” and save it:

Windows Registry Editor Version 5.00
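The widely circulated version of this tweak unhides the “AHCI Link Power Management – HIPM/DIPM” option by setting an Attributes value under the hard disk power-settings GUIDs. Treat the exact GUIDs below as an assumption and double check them against your own machine before importing:

```reg
Windows Registry Editor Version 5.00

; Unhide "AHCI Link Power Management - HIPM/DIPM" in advanced power options
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\0012ee47-9041-4b5d-9b77-535fba8b1442\0b2d69d7-a2a1-449c-9680-f91c70521c60]
"Attributes"=dword:00000002
```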



2. Double click the file to import the records into the registry.

3. Right click on the Battery icon in the Taskbar, select “Power Options”. Click “Change plan settings” under the “Balanced” option. Then click “Change advanced power settings”.

4. Expand the “Hard disk” node and you’ll see “AHCI Link Power Management – HIPM/DIPM”. You need to set the value to “Active”.


5. Create another .reg file called “storahci.reg” with the following content:

Windows Registry Editor Version 5.00


6. Double click the file to import the registry entries. This stops storahci from going into Low Power Mode.

A restart should then solve the SSD freezing problems.

System Restore, Restore Points and Windows 7 style backups do not work

Again, if you are getting messages such as “check the event log for VSS errors” when trying to backup or create a restore point, and then finding event log messages like:

Volume Shadow Copy Service error: Unexpected error CreateFileW(\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy48\,0x80000000,0x00000003,…).  hr = 0x80070001, Incorrect function.

Processing PreFinalCommitSnapshots

Execution Context: System Provider

This is another Bootcamp driver problem, specifically the applehfs.sys driver that provides read-only access to HFS volumes. You need to stop it from loading at startup:

1. Download Sysinternals Autoruns and run it as an Administrator.

2. Search for “apple” and you’ll see “applehfs.sys”.


3. Disable it by unchecking AppleHFS and restart. You should now be able to create System Restore images and Windows 7 style backups.

Hopefully Apple updates Bootcamp for Windows 10. If I find any other issues I’ll update this post.

Backup and Restore is back in Windows 10

Great news! In Windows 10 build 10130, Microsoft appears to have seen sense and brought back the perfectly functional Backup and Restore function that was removed in Windows 8.1. You can find it in the classic Control Panel under “Backup and Restore (Windows 7)”.

No longer do you have to use the File History feature. The Windows 7 version of Backup and Restore supports a schedule you control, backs up all files on your hard disk, can include a System Image at the same time, and will include your OneDrive files!


The cynic in me suspects this is only here for the benefit of Windows 7 users who have upgraded directly to 10, skipping Windows 8 and 8.1, rather than an admission that Windows was suddenly left without a viable backup solution (which File History is not).

Shinkansen wifi access (and Japan docomo/mobilepoint wifi WEP keys)

Back in 2011, I created a pretty popular post (that now redirects here) that contained the WEP key for Softbank’s “mobilepoint” wifi hotspot on the Shinkansen. For some reason, both docomo and Softbank encrypt their public wifi connections with publicly accessible keys which doesn’t prevent eavesdropping on connections at all. Docomo even sells “visitor” access passes via their mobile portal, only accessible if you know the wifi password in the first place!

In any case, the passwords to get on the wifi on the Shinkansen or pretty much anywhere in Japan are now:

Operator    SSID            WPA2/WEP Key
Docomo      docomo          e3f4aad65c
Docomo      0000docomo      B35D084737
Softbank    mobilepoint2    62626d7032
Softbank    mobilepoint     696177616B

Once in, you can then log in with your roaming provider (I use Boingo, which works a treat in Japan and is the source of the above info). The best “public” wifi provider in Japan is Wi2 where a hotspot is available – they provide a hotspot that doesn’t require a password and sell passes in English on their portal.

.NET web app cloud deployments in 2015

.NET web applications tend to get treated very poorly in the real world – some people still think that copying and pasting the contents of their /bin/Release/ directory (lovingly referred to as “DLLs”) over Remote Desktop to a webserver and manually setting up IIS is acceptable – but this is now 2015 and the world has moved on. Here are my thoughts on some of the various ways you can deploy .NET apps to the cloud.

First things first – keeping your .NET app cloud ready

Real cloud environments are stateless. You must treat the web servers you use as ephemeral. DevOps practitioners treat virtual servers as cattle, not pets, and don’t nurse servers back to health if there is a problem. Instead they take them out back, shoot them in the head and spin up a new one.

The .NET Framework does not make building cloud-ready, stateless, scalable applications easy by default, especially if you are still shaking off decade-old WebForms habits. Here is some advice:

  • Never use Session State. If you type HttpContext.Current.Session you lose. Using Session State either forces you to run a “Session State Server”, building a single point of failure into your architecture, or to use sticky load balancing to force users to continuously hit the same web node where their in-memory session lives.
  • You’ll need to synchronize your MachineKey settings between machines, so all nodes use the same keys for crypto.
  • Multiple nodes will break ASP.NET MVC’s TempData (typically used for Flash messages) – try CookieTempData
  • For configuration values, only use web.config AppSettings and ConnectionStrings. Sticking to this rule will give you maximum compatibility with the various cloud deployment platforms I’ll outline below. And no, don’t use Environment Variables, despite what The 12 Factor app enthuses – Windows apps do not use Environment Variables for application configuration. UPDATE Jan 2016: ASP.NET 5 has embraced Environment Variables as a first class configuration option bringing it inline with other web frameworks – if you are using ASP.NET 5 you can now use Environment Variables as an alternative to local config files. Don’t bother for ASP.NET 4.6 apps. 
  • Do not rely on any pre-installed software. All dependencies should be pulled from NuGet and distributed with your application package. If you use a vendor’s “solution” (custom PDF components? Using Office to create Excel files? CrystalReports?) insist on a NuGet package or remove the vendor’s software. This is 2015.
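On the MachineKey point, synchronising means pinning the keys explicitly in web.config so every node encrypts and validates forms authentication tickets and view state with the same material – a sketch with placeholder values (generate your own keys, never reuse published ones):

```xml
<system.web>
  <!-- Identical on every node; the values here are placeholders, not real keys -->
  <machineKey validationKey="REPLACE-WITH-128-HEX-CHARS"
              decryptionKey="REPLACE-WITH-64-HEX-CHARS"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>
```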


Azure Websites

The granddaddy of .NET Platform as a Service and the cornerstone of almost every Azure demo. Azure Websites is a very high level abstraction over IIS and .NET web farms, supports lots of very cool deployment mechanisms and is easily scalable.

  • Deploy from Github, TFS, Mercurial etc by monitoring branches. The very clever software under the hood (Kudu) monitors branches for changes, runs MSBuild for you and deploys your app.
  • Lots of features – staging slots (with DNS switch over for zero-downtime deploys), scaling with a slider, monitoring and logging all included
  • You don’t get access to the underlying Windows VM that the sites are running on – even if you pay to have dedicated VMs for your sites. This does mean that you get auto-patching, but if you have any exotic requirements (I’ve seen third party APIs have such broken SSL implementations you need to install their Root CA certificate on your web server) you’ll be out of luck as there is no way to run scripts on the servers.
  • To configure your app, you can set variables that replace AppSettings or ConnectionStrings in your web.config at deployment time.
  • Azure Websites also supports PHP, Java, node.js and more, if you are happy to run those frameworks on Windows. This blog is WordPress backed, so PHP, and running on Azure Websites!
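Alongside repository monitoring, Azure Websites also exposes a plain Kudu git endpoint you can push to directly – a sketch with placeholder site and deployment user names:

```shell
# Add the site's Kudu endpoint as a remote (names are placeholders), then push to deploy
git remote add azure https://deployuser@mysite.scm.azurewebsites.net/mysite.git
git push azure master
```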

An honourable mention goes out to AppHarbor – they technically got there first by providing a Heroku-like experience for .NET developers. Also note that Azure has “Azure Cloud Services” – these are significantly more complex than Azure Websites and tie you deeply into the Azure platform. Azure Cloud Services are typically chosen for long-running cloud systems rather than transactional web sites (think Xbox Live rather than a high traffic blog).


Amazon Web Services Elastic Beanstalk

Amazon are by far the biggest cloud provider out there and they try to tick as many Windows feature boxes as possible to woo enterprises. Elastic Beanstalk is a Platform as a Service deployment platform, similar to Azure Websites, but completely platform agnostic. Since it uses all the existing EC2 APIs underneath (Elastic Load Balancing, Auto Scaling Groups etc), language and OS support is much broader than Azure Websites, at the expense of not being optimised for Windows/.NET workloads.

  • There is no cheap, shared tier. Your application runs on a dedicated VM that you have access to. This makes costs a bit higher (unless you are crazy and want to try to run .NET on micro instances) but gives you more control. As part of your deployment package you can include Powershell scripts that can execute on your VM.
  • The user interface is very limited – when I last checked the only configuration values you could set via the UI were named “PARAM1”, “PARAM2”, “PARAM3” etc, which limited your AppSettings to using those names unless you wanted to completely script your deployment.
  • If you want a SQL Server as a Service, you are limited to RDS which charges for the whole VM and SQL Server license. Azure’s SQL Server service charges for CPU time and disk space, which can work out quite a bit cheaper.
  • Docker container support is available – this will become important for .NET developers when ASP.NET 5 is out of beta and CoreCLR is ready.


Opscode Chef + Azure or AWS VMs

Opscode Chef is a favourite of the “infrastructure as code” crowd, and it can be made to work on Windows. Given standard virtual machines on either AWS or Azure, you can install the Chef service on your nodes and execute Chef recipes.

  • Chef recipes are written in Ruby. This may or may not be a problem depending on your team (I can count the number of .NET developers I know who are also good at Ruby on one hand), but it definitely adds an extra skills requirement. It is possible to use Chef recipes to bootstrap PowerShell scripts, but then you have a Rube Goldberg machine of pain.
  • Ruby is simply not designed to run on Windows, let alone for long-running processes. The Chef Service had a long standing bug on Windows where Ruby would simply run out of memory. Anybody who has tried getting every gem in a typical Ruby on Rails gemfile to compile on Windows knows the pain I am talking about. Windows support for Ruby is an afterthought.
  • One thing Azure has over AWS for Chef deployments is the ability to pre-install the Chef client onto a VM when you create it, all from the UI. On AWS you have to install the client yourself.
  • Chef recipes are based on the concept of convergence – the desired state of the server is described, and a policy is calculated to bring the server to that state. Coincidentally, this is exactly what PowerShell Desired State Configuration does, and Chef has plans to integrate with PowerShell DSC.


Octopus Deploy + Azure or AWS VMs

Octopus Deploy is quickly becoming one of my favourite parts of the .NET ecosystem. Built by some of the finest .NET developers in the land, for .NET developers, it provides the Platform as a Service ease of Azure Websites with the power of running your own VMs. I think of it as bringing your own platform layer to infrastructure you might get elsewhere – I’ve dealt with a big deployment of Octopus on AWS.

  • VMs can be assigned to environments, enabling a fully customisable Test-UAT-Staging-Production workflow with release promotion.
  • Your build server needs to create “octopack” packages – a NuGet package variant. These packages are then pushed to the Octopus server’s NuGet feed, ready to be deployed.
  • A deployment agent called a “Tentacle” is deployed on each VM. A single MSI command can install and enroll the node.
  • Elastic scaling is not included – Octopus does not manage your environment for you.
  • Deployment steps are fully customisable – you can create IIS sites, AppPools, run custom scripts or even install Windows Services
  • Configuration settings for your application are set as variables that apply to AppSettings and ConnectionStrings in your web.config when you deploy.
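Creating those octopack packages is typically a single build-server step – a sketch using OctoPack’s MSBuild properties (the server URL and API key are placeholders):

```batch
REM Build the project and have OctoPack create and push the package
msbuild MyWebApp.csproj /t:Build /p:RunOctoPack=true ^
  /p:OctoPackPublishPackageToHttp=http://octopus.example.com/nuget/packages ^
  /p:OctoPackPublishApiKey=API-XXXXXXXX
```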

The Octopus Deploy team is currently working on version 3.0, which will replace the RavenDB database with SQL Server. I’m very much looking forward to it. Octopus isn’t limited to cloud-deployments either – it can be used equally well for on-premise datacenters.

In summary then, I’d choose Azure Websites if the application is simple enough to work within its constraints. Given an application with multiple tiers (microservices etc) or special deployment requirements (third party software, certificates), I’d go for Octopus Deploy on top of whichever cloud provider your organisation favours.

If you have any thoughts on the above, or can point out a mistake I’ve made, please drop me an email or leave a comment.