Using Google APIs and Auth in Xamarin Forms

I’m working on porting Net Writer from UWP to Android using Xamarin Forms. The Google authentication is a little bit tricky as it is constantly changing. Working off this amazing blog post by Timothé Larivière got me 90% of the way there but there are some updates to the process in 2020.

Pre-requisites to register an app with Google

At this point in time you’ll need to do the following before you can register a Public app:

  •  Create a project in GCP via the Google Developer Console
  •  Verify a domain via the Google Search Console. To do this you will need access to your nameserver and DNS records in order to add a TXT record.
  •  Know the SHA-1 fingerprint of the key that will be used to sign your package
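The verification TXT record you add at your DNS provider looks something like this (the token value is a placeholder – Search Console generates the real one for you):

```
example.com.  IN  TXT  "google-site-verification=your-token-from-search-console"
```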

Getting the SHA-1 fingerprint used to sign a locally deployed debug Xamarin Forms app

You’ll need to do this:

  •  In Visual Studio, go to Tools > Android > Android Adb Command Prompt
  •  Navigate to C:\Users\{username}\AppData\Local\Xamarin\Mono for Android

Run the command

keytool -keystore debug.keystore -list -v

When prompted, enter the default debug keystore password “android” (or pass -storepass android on the command line to skip the prompt).

You’ll see a dump of the certificate fingerprints. The SHA1 value is what you need.

Registering the application

You should have everything you need now. Go to Credentials in the GCP panel and create the OAuth consent screen. Fill in the details (you’ll need the verified domain you created earlier).

Then you can create an OAuth 2.0 Client ID. Select “Android” as the platform and enter the package name from AndroidManifest.xml and the SHA-1 fingerprint you figured out earlier.

Adding the code

The approach I have taken is to create a class in the main Android project to encapsulate everything, rather than putting the logic inside the Mobile/PCL project. This is because the Android version needs references to Activities and other Android-specific concepts to work effectively. Unfortunately it’s not just a case of adding Xamarin.Auth and calling a method.

Using Xamarin Forms dependency injection I can refer to and call this class within the portable Mobile project when I need an access token from the API.

The token reader class

There are some NuGet dependencies you’ll need for this – the “Google.Apis.Auth” libraries for the TokenResponse class (although you can remove that dependency from the code below if you’d like), “Xamarin.Auth”, “Xamarin.Auth.XamarinForms” and “Plugin.CurrentActivity”. The last one allows code outside of an Activity to get access to the current Activity.

    public class GoogleAccessTokenReader : IGoogleAccessTokenReader
    {
        public static readonly string[] GoogleAPIScopes =
        {
            DriveService.Scope.DriveFile,
            BloggerService.Scope.Blogger
        };

        public static TokenResponse Token { get; set; }

        public static OAuth2Authenticator Auth;

        public async Task<TokenResponse> GetOrNullAsync()
        {
            if (Auth == null)
            {
                Auth = new OAuth2Authenticator(
                    "your-client-id",
                    string.Empty,
                    string.Join(" ", GoogleAPIScopes),
                    new Uri("https://accounts.google.com/o/oauth2/v2/auth"),
                    new Uri("com.yourpackageid:/oauth2redirect"),
                    new Uri("https://www.googleapis.com/oauth2/v4/token"),
                    isUsingNativeUI: true);

                Auth.Completed += OnAuthenticationCompleted;
            }

            if (Token != null) return Token;


            Xamarin.Auth.CustomTabsConfiguration.CustomTabsClosingMessage = null;

            var intent = Auth.GetUI(CrossCurrentActivity.Current.AppContext);

            CrossCurrentActivity.Current.Activity.StartActivity(intent);

            while (!Auth.HasCompleted)
            {
                await Task.Delay(500);
            }

            return Token;
        }

        private void OnAuthenticationCompleted(object sender, AuthenticatorCompletedEventArgs e)
        {
            if (e.IsAuthenticated)
            {
                // Google only returns refresh_token on the first authorization,
                // so read it defensively
                e.Account.Properties.TryGetValue("refresh_token", out var refreshToken);

                Token = new TokenResponse()
                {
                    AccessToken = e.Account.Properties["access_token"],
                    TokenType = e.Account.Properties["token_type"],
                    Scope = e.Account.Properties["scope"],
                    ExpiresInSeconds = int.Parse(e.Account.Properties["expires_in"]),
                    RefreshToken = refreshToken
                };
            }
        }

    }

Add the following dependency injection declaration to the namespace:

[assembly: Dependency(typeof(NetWriter.Mobile.Droid.GoogleAccessTokenReader))]

Where NetWriter.Mobile.Droid should be replaced with your namespace.

Here is the IGoogleAccessTokenReader interface (change it depending on your needs):

using Google.Apis.Auth.OAuth2;
using Google.Apis.Auth.OAuth2.Responses;
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

namespace BlogWriter.Shared.NetStandard.Interfaces
{
    public interface IGoogleAccessTokenReader
    {
        Task<TokenResponse> GetOrNullAsync();
    }
}

Add the activity that receives the response

Somewhere in your main Android app you should add the following:

    [Activity(Label = "GoogleAuthInterceptor")]
    [IntentFilter(actions: new[] { Intent.ActionView },
                  Categories = new[] { Intent.CategoryDefault, Intent.CategoryBrowsable },
                  DataSchemes = new[] { "com.yourpackageid" },
                  DataPaths = new[] { "/oauth2redirect" })]
    public class GoogleAuthInterceptor : Activity
    {
        protected override void OnCreate(Bundle savedInstanceState)
        {
            base.OnCreate(savedInstanceState);

            Android.Net.Uri uri_android = Intent.Data;

            var uri_netfx = new Uri(uri_android.ToString());

            GoogleAccessTokenReader.Auth?.OnPageLoading(uri_netfx);

            var intent = new Intent(this, typeof(MainActivity));
            intent.SetFlags(ActivityFlags.ClearTop | ActivityFlags.SingleTop);
            StartActivity(intent);

            Finish();
        }
    }

The last three lines before Finish() are really important as they actually make the Google login window go away after logging in. If you don’t add them it will stay there and the user will need to manually close the window.

Retrieving the token from within your app

Using the Xamarin Forms dependency injection system, you can get an instance of the token reader like:

var reader = DependencyService.Get<IGoogleAccessTokenReader>();
var token = await reader.GetOrNullAsync();
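If you then want to call a Google API with that token, one approach (a sketch – this assumes the Google.Apis.Blogger.v3 package is installed and that GetOrNullAsync succeeded) is to wrap the access token in a GoogleCredential:

```csharp
// Hedged sketch: names below assume the Google.Apis client libraries.
var reader = DependencyService.Get<IGoogleAccessTokenReader>();
var token = await reader.GetOrNullAsync();

// GoogleCredential.FromAccessToken lives in Google.Apis.Auth.OAuth2
var credential = GoogleCredential.FromAccessToken(token.AccessToken);

var blogger = new BloggerService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,
    ApplicationName = "Net Writer" // any descriptive name will do
});
```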

Coming soon

Refresh tokens, remembering the login and other stuff. Oh my!

Demo video

If you want a walkthrough of doing this, you can watch the following video:

How to fix AMD EyeInfinity not filling all monitors

In the 2020 iteration of the AMD Radeon Software they have removed all the settings to configure AMD EyeInfinity. This means when you play games on a mix of different monitor setups, you’ll see gaps and black areas by default, normally on your largest screen. Not great!

Fixing it

To fix this you need to find the old Catalyst Control Panel, which is now secretly hidden away.

When AMD EyeInfinity is enabled, head to C:\Program Files (x86)\AMD\CNext\CCCSlim and find the CCC.exe app.

This app will only launch when AMD EyeInfinity is enabled. When you launch it, you’ll be able to find the “Resize Desktop” option. Selecting “Expand” will fill all your screens properly.

Video demo

Here is a video of how to do it!

UWP and Xamarin Forms – How to display your app’s version number

Assume you want to automatically show your app’s version number in your UI, for example on the settings page. Your version number will normally be updated by your CI/CD system (which updates Package.appxmanifest for UWP and AndroidManifest.xml for an Android Xamarin app).

Create a property to bind to

ViewModels and how they bind to your UI are out of scope for this post (you’ll have already got that far). Add a property to your view model like this:

        public string VersionString 
        { 
            get
            {
                return "hello";
            } 
        }

We can use this to test the binding before updating to the actual version number later.

Bind to it using UWP

For UWP you’ll need to use the TextBlock control and bind the Text property:

<TextBlock Text="{Binding VersionString}"></TextBlock>

Bind to it in Xamarin Forms

Xamarin Forms has a slightly different flavour, so you’ll need to use the Label control:

<Label Text="{Binding VersionString}"></Label>

Check the UI

You’ll see the placeholder text “hello” in your app.

Updating the binding to show the version number

Because getting the version number differs by platform, you should use the super awesome Xamarin Essentials library.

Add the NuGet package to your Mobile class library and your UWP app following the documentation.

Then add the following using statement to the top of your ViewModel:

using Xamarin.Essentials;

And update your property code:

        public string VersionString 
        { 
            get
            {
                return "Version " + AppInfo.VersionString;
            } 
        }

That’s it! Xamarin Essentials handles the cross platform bits.

Confirm the version number is displayed 

Relaunch the UWP app and the Xamarin app and you’ll see the version number displayed in each. These values are pulled from Package.appxmanifest and AndroidManifest.xml respectively.
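For reference, these are the manifest attributes the values come from (names and version numbers here are illustrative):

```xml
<!-- Package.appxmanifest (UWP): the version comes from Identity/Version -->
<Identity Name="YourApp" Publisher="CN=You" Version="1.2.3.0" />

<!-- AndroidManifest.xml (Xamarin.Android): the version comes from versionName -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.yourpackageid"
          android:versionCode="10"
          android:versionName="1.2.3" />
```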

Demo video

Here is a nice YouTube Style™ video demo of the above:

Adding an Admin Panel to a .NET Core web app with CoreAdmin

I’ve published version 1.0.0 of a new open source project and a corresponding NuGet package – CoreAdmin.

CoreAdmin adds a nice set of CRUD screens to your .NET Core web app in one line of code!

Adding CoreAdmin to your app

Given a typical Startup.cs file, you will have a ConfigureServices method. You need to add the line services.AddCoreAdmin() somewhere near the bottom (at least after you register your Entity Framework DbContexts).
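As a sketch, a minimal ConfigureServices might end up looking like this (AppDbContext and the connection string name are illustrative – only the AddCoreAdmin() call is the addition):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Register EF Core contexts first so CoreAdmin can discover their DbSets
    services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

    services.AddControllersWithViews();

    // One line to enable the admin panel at /coreadmin
    services.AddCoreAdmin();
}
```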

Then when you visit your site with /coreadmin on the end of the URL, you’ll see the admin panel. On the left are your database tables (these are the DbSets in your DbContexts). Click one and you get a grid of the rows in that table.

From here you can create new entities, edit and delete them. Full searching, sorting and filtering are also supported.

There are a few limitations on data types and primary keys (for example, entities with composite primary keys are not supported for editing or deletion yet) but this should be sufficient for basic quick and dirty editing of entities.

How to get it

CoreAdmin on Github

CoreAdmin on NuGet

Simply install the nuget package “CoreAdmin” and you are good to go!  

Or watch a demo!

Here is a YouTube Style video demo.

EF Core Migrations – creating Indexes Online depending on SQL Server edition

I recently hit the classic case of trying to add indexes to a large table. Whilst Entity Framework Core supports creating indexes online during migrations, not all editions of SQL Server support this.

In the case that your migration contains the code:

    migrationBuilder.CreateIndex(
        name: "IX_TableName_ColumnName",
        table: "TableName",
        column: "ColumnName").Annotation("SqlServer:Online", true);

This will fail hard on SQL Server Express, which you are likely using for development locally, with the error message “Online index operations can only be performed in Enterprise edition of SQL Server.”. Online index operations are available in Enterprise or luckily in my case, Azure SQL.

Whilst there is not a “feature flag” to detect the support of Online index creation, you can execute the following query to detect the edition of SQL Server your app is running on.

SELECT SERVERPROPERTY('EngineEdition')

This returns 3 for Enterprise edition or 5 for SQL Azure (see the SERVERPROPERTY documentation for the full list of values).

EF Core has removed the ability to easily execute scalar queries so you’ll need a small extension method:

    public static class SqlQueryExtensions
    {
        public static T ExecuteScalar<T>(this DbContext context, string rawSql,
            params object[] parameters)
        {
            var conn = context.Database.GetDbConnection();
            using (var command = conn.CreateCommand())
            {
                command.CommandText = rawSql;
                if (parameters != null)
                    foreach (var p in parameters)
                        command.Parameters.Add(p);

                // The connection may already be open if the context is in use
                if (conn.State != System.Data.ConnectionState.Open)
                    conn.Open();

                return (T)command.ExecuteScalar();
            }
        }
    }

And then you can set a public static property on your migration before calling DbContext.Migrate():

var dbEngineVersion = dbContext.ExecuteScalar<int>("SELECT SERVERPROPERTY('EngineEdition')");
MyMigrationName.UseOnlineIndexCreation = dbEngineVersion == 3 || dbEngineVersion == 5;
dbContext.Database.Migrate();


public partial class MyMigrationName : Migration
{
    public static bool UseOnlineIndexCreation { get; set; }

    protected override void Up(MigrationBuilder migrationBuilder)
    {
        if (UseOnlineIndexCreation)
        {
            migrationBuilder.CreateIndex(
                name: "IX_TableName_ColumnName",
                table: "TableName",
                column: "ColumnName").Annotation("SqlServer:Online", true);
        }
        else
        {
            migrationBuilder.CreateIndex(
                name: "IX_TableName_ColumnName",
                table: "TableName",
                column: "ColumnName");
        }
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropIndex(
            name: "IX_TableName_ColumnName",
            table: "TableName");
    }
}

Now your index will be created online on editions of SQL Server that support it.

Missing StoreKey PFX certificates when building a Visual Studio 2019 UWP project

I came across an interesting issue updating my UWP app to Visual Studio 2019 and a new Azure DevOps pipeline. “Associate with Store” no longer adds password-less PFX files named *TemporaryKey.pfx and *StoreKey.pfx to your project to sign your store submissions – instead in VS2019 it now adds the certificates to your local user store only.

Which means when it comes to build, you get errors like

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Microsoft\VisualStudio\v15.0\AppxPackage\Microsoft.AppXPackage.Targets(4353,5): Error APPX0102: A certificate with thumbprint '' that is specified in the project cannot be found in the certificate store. Please specify a valid thumbprint in the project file.
C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Microsoft\VisualStudio\v15.0\AppxPackage\Microsoft.AppXPackage.Targets(4353,5): Error APPX0107: The certificate specified is not valid for signing. For more information about valid certificates, see http://go.microsoft.com/fwlink/?LinkID=241478.

For comparison: in Visual Studio 2017 the packaging signing UI offered options to select a certificate from a file and to create a test certificate; in Visual Studio 2019 those options are no longer present.

To fix this for Azure DevOps, you’ll need to install the PFX private key on every build. Follow these steps:

  • On the Choose Certificate window (shown above) choose View Full Certificate
  • On the second tab, choose “Copy to file…” to start the export to PFX process
  • Export the private key to a password protected PFX file
  • Add the PFX file to your project directory, like where it used to be in VS 2017
  • Update your .csproj file, adding a <PackageCertificateKeyFile> element containing the filename alongside <PackageCertificateThumbprint>
  • Add your PFX to source control making sure it is not ignored
  • In Azure DevOps Pipelines, add a quick PowerShell build step to add the certificate to the local user store (script below)
  • Make sure that the WorkingDirectory option is set to the folder containing the PFX file (alongside the .csproj file)
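The .csproj change from the steps above might look like this (PackageCertificateKeyFile and PackageCertificateThumbprint are the real MSBuild property names; the values are illustrative):

```xml
<PropertyGroup>
  <PackageCertificateKeyFile>MySigningKey.pfx</PackageCertificateKeyFile>
  <PackageCertificateThumbprint>0123456789ABCDEF0123456789ABCDEF01234567</PackageCertificateThumbprint>
</PropertyGroup>
```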

That Powershell script in full:

$pfxpath = 'MySigningKey.pfx'
$password = 'supersecretpassword'

Add-Type -AssemblyName System.Security
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import($pfxpath, $password, [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]"PersistKeySet")
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList "MY", "CurrentUser"
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]"ReadWrite")
$store.Add($cert)
$store.Close()

Now when your app is built, the private signing key will be loaded from the local machine store.

A note on security

The above is a quick and dirty way of getting this working – adding a PFX file to your source code repository is not best practice and you shouldn’t do this if you can help it. This is probably why Microsoft changed this behaviour in VS2019. An improvement would be to use the Secure Files feature of Azure DevOps to hold the PFX file securely until the build templates have a decent way of handling this scenario.
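If you do go the Secure Files route, a sketch of the pipeline steps might look like this (DownloadSecureFile@1 is the real Azure Pipelines task; the file name and PfxPassword variable are illustrative):

```yaml
steps:
  # Download the PFX uploaded under Pipelines > Library > Secure files
  - task: DownloadSecureFile@1
    name: signingCert
    inputs:
      secureFile: 'MySigningKey.pfx'

  # Import it into the local user store before the build runs
  - powershell: |
      $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
      $cert.Import('$(signingCert.secureFilePath)', '$(PfxPassword)', 'PersistKeySet')
      $store = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList "MY", "CurrentUser"
      $store.Open('ReadWrite')
      $store.Add($cert)
      $store.Close()
```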


Multiple monitors? You should buy VMWare Fusion instead of Parallels Desktop

In a post three years ago, I waxed lyrical about how much better Parallels Desktop was compared to VMWare for the very common task of running Windows on your Mac.

It’s time to take that back.

Parallels Desktop is no longer fit for purpose if you are an advanced user.

How Parallels Desktop broke multiple monitors

In older versions of macOS, virtual desktops spanned your whole set of monitors. If you had a left and a right monitor, switching spaces (virtual desktops) would switch both, giving you “Desktop 1” (left monitor A and right monitor A) and “Desktop 2” (left monitor B and right monitor B). The major downside was that when applications ran “full screen” (rather than just maximised), they would go full screen on one monitor and leave the other one completely blank, which was complete madness.

In Parallels 11, Parallels supported two ways of rendering full screen on multiple monitors. The first used macOS’s built-in full screen function (more on that in a minute); the other used a “non-native” method that drew a borderless fullscreen window over the whole screen.

To work around the full screen issue when using multiple monitors, macOS Yosemite introduced the option for displays to have their own “Spaces”. This meant that your left and right monitors had their own sets of virtual desktops. However, it also meant each monitor’s desktop could be switched independently, introducing, say, 4 different combinations when you had two monitors and two desktops. This was a context switching nightmare. Most power users turned this off, especially if they were using keyboard shortcuts (CTRL+arrow keys) to switch between spaces, because the monitor that switched would be the one your mouse cursor was over.

The combination of turning off “Displays have separate Spaces” in macOS, and disabling “native full screen mode” in Parallels was the perfect, wanted behaviour that Parallels users of multiple monitors had become accustomed to for many, many years.

Parallels 12 changed all that, by removing the non-native full screen mode option that was working perfectly in version 11, leaving users with no satisfactory multi-monitor display mode.

Users were up in arms:

8 pages of complaints on the official Parallels forum when Parallels 12 launched with this

“Usable” multi-monitor support feature request

Did Parallels listen? Well, only a little. Near the end of version 12’s shelf life they pushed out an update containing a workaround – an option to “switch” all other spaces to Parallels when you clicked Parallels on another space. Sounds great, but it still doesn’t allow you to switch in and out of Windows on all of your screens at once.

Users were livid. The pithy Knowledge Base article didn’t help either.

Then Parallels 13 came out with no new fixes for this. Parallels was effectively dead for users with multiple monitors.

Other reasons not to use Parallels any more

The push for yearly subscription pricing. You aren’t Creative Cloud guys. The last thing users want when buying a piece of utility software is to set calendar reminders that they are going to be auto-rebilled.

The shovelware and crapware that Parallels pushes on you, even via advertisements inside the application that you paid for. Who doesn’t want a subscription to Parallels Remote Access or “Parallels Toolbox”?

Only 9.99 USD a year!!

The resurrection of VMWare Fusion

Back in the day, Parallels spanked VMWare Fusion on performance. They became market leaders and deserved it. I fondly remember running Parallels 4 against a Bootcamp partition on a now clunky old Mac Mini and being pleasantly surprised.

I’ve recently given VMWare Fusion 8.5 a go and I am pleased to say the performance against Parallels for my main use case (Visual Studio on Windows 10) is indistinguishable. It imported my Parallels VM flawlessly. It didn’t pester me to install anti-virus in my Windows 10 VM (something so completely pointless Parallels must be getting kickbacks). There will be a free upgrade to VMWare Fusion 10 this October. And most importantly…

It works correctly with multiple monitors!

Yes, VMWare Fusion 8.5 behaves the same way Parallels 11 used to work.

RIP Parallels Desktop.


Hidden full screen web page kiosk mode in Windows 10 Anniversary

Running Windows 10 Anniversary Edition? Click this link and say yes to the prompts (You’ll, er, have to press CTRL+ALT+DELETE to exit and sign in again).

Back?

You just launched the hidden Take a Test app. Windows 10 Anniversary now includes a chromeless kiosk mode that web pages can launch. Basically any link in the format…

ms-edu-secureassessment:<URL>!enforceLockdown

…will launch the app. Administrators can even create user accounts that are locked down to single web pages where CTRL+ALT+DELETE is the only way out.
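For example, a link that locks the machine down to an ordinary web page might look like this (using example.com as a stand-in):

```
ms-edu-secureassessment:https://example.com!enforceLockdown
```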

Notably, there are some extended JavaScript APIs available when running under the kiosk mode – including some called getIPAddressList, getMACAddress and getProcessList. Yes, with a couple of prompts, a web page can launch the Take a Test app and get a list of the user’s running processes and their MAC address.

I wonder how long until this gets abused.


Shameless plug: This post was written with Net Writer – a little app I wrote to help blogging on Windows 10. If you have Windows 10, download it for free.

Better git and other Linux based command line tools on Windows 10

One fantastic new feature in the latest version of Windows 10 is an add-on that allows you to use an Ubuntu-based Linux distribution natively in Windows. This opens up a whole new world for developers on Windows, including access to the same class of Git and SSH tools that are available on OS X (goodbye PuTTY!).

To enable it, start by heading to Settings, Update & security, For developers and turn Developer mode on.

Then, right click on the Start icon, click Programs & Features, “Turn Windows features on or off” and enable “Windows Subsystem for Linux (beta)”. You’ll need to then restart your machine.

Once back, open a new admin command by right clicking Start and choosing “Command Prompt (Admin)”. Then type “bash” and hit enter. You’ll need to set some things up – including choosing a new username and password for the Linux install – then an Ubuntu image will download from the Windows Store. You’ll then be dumped into a bash prompt that will be familiar if you have used a Terminal on OS X.

The first thing you should do is run “sudo apt-get update” followed by “sudo apt-get upgrade” – the first refreshes the package lists and the second actually updates the installed packages.

Using the new Git

You can now use Ubuntu’s version of Git, instead of the Windows version you likely have installed. To install it, open a bash command and type “sudo apt-get install git”.

Opening a bash prompt in your Windows user directory by default

By default, the “Bash on Ubuntu on Windows” shortcut opens a bash prompt in the user directory of the Ubuntu install. This isn’t very useful if you still need to interoperate with files in your main user directory. To fix this, start by right clicking on the “Bash on Ubuntu on Windows” shortcut in the Start menu, going to More and Open file location.

You can then right click on the shortcut and choose Properties. Delete the tilde ~ character from the end of “Target”, enter %USERPROFILE% in the “Start in” box and hit OK.

Clicking the shortcut will now open in your Windows user profile folder via the magic of WSL’s mount points (your Windows drives appear under /mnt inside bash, e.g. C: becomes /mnt/c).

Simply right click the icon in the taskbar and pin it to get a shiny new Unix-based command line on Windows without Cygwin or MINGW32. Magic!



An ode to Surface 3

It is increasingly looking like the Surface 3 is going to be discontinued. Microsoft is running out of stock on the 128GB / 4GB RAM model. Third party vendors are heavily discounting it, suggesting a clearance. The biggest sign of its demise is that Intel are simply going to stop making the quad-core Cherry Trail Atom processors that power the Surface 3 and other tablets like it.

This is a crying shame. The Surface 3 (not to be confused with the larger, laptop-class Surface Pro 3) is simply a fantastic tablet device.

The history

Surface 3 was the successor to the Surface 2, which followed on from the Surface RT. Both Surface RT and Surface 2 were powered by ARM chips and a limited, cut down version of Windows, Windows 8.1 RT. They were never eligible for an upgrade to Windows 10 (although the work done to enable Windows-on-ARM lives on in Windows 10 IoT). The market also shunned them and customers were confused by them. I was in New York for the launch of Surface RT, picked one up and loved it. I however personally witnessed customers, after queuing for an hour to get into the pop-up Microsoft Store in Times Square, decide to leave empty handed when they found out that the Surface RT wouldn’t run iTunes. Strangely the Surface Pro, which would have run iTunes, had its release staggered to a few days after the RT launch. I believe this caused significant confusion and prevented the Microsoft Store staff from successfully upselling.

Surface RT was a fantastic device for its time, albeit with serious flaws. I loved the fact it was a perfect Remote Desktop machine, but aspects like the custom charger and stupid 16:9 aspect ratio took until the Surface 3 to resolve.

The hardware

The Surface 3 is a real PC, well crafted for the price point. Some of my favourite features of the hardware are:

  • Micro USB charging port – you can charge this thing with almost any cable or charger you already have lying around, including USB charging battery packs. This makes it extremely easy to travel with. The Surface 3 is the only Surface (including RTs and Pros) ever made with a standard, universal charging connector.
  • Stylus support – one of the USPs of the Surface Pro VS the Surface RT was the fact the Pro had a Wacom stylus and digitizer. It took until the Surface 3 for the non-Pro line to get a stylus to match the Pro line. Although you do need to buy the pen separately, the pens are the same across Surface 3 and Pro (which has a pen bundled). I have a feeling this might have cannibalized sales of the Surface Pro 3 and 4.
  • USB 3 port – Pretty much every peripheral ever made for a PC works with the Surface 3.
  • DisplayPort connector – you can plug directly into a large monitor with dual screen support.
  • Kickstand – this is unbelievably useful on airplanes and something that Apple is too proud to add to the iPad without resorting to flappy, folding cases. Without the keyboard attached this enables hands-free viewing in a really small footprint.
  • Expandable storage – you can bung a micro SD card in the slot in the back to expand the storage.

None of the above features are available on non-Pro iPads without accessories and dongles. Stylus support is limited to the iPad Pro.

The software

Whilst it shipped with Windows 8.1, the Surface 3 now runs Windows 10 like a charm. Some of the best bits:

  • Battery Saver mode – this really works. It shuts down background processes (even Windows Updates!) and underclocks the CPU. I have seen the Surface 3 stretch to around 10 hours of use when browsing with Battery Saver turned on.
  • InstantGo/Connected Standby – Surface 3 picks up emails and Skype calls when in standby mode. It does actually work.
  • Real Chrome – because this is a real PC, you can run full Chrome with extensions. Hilariously, Chrome had better support for tablets than Microsoft Edge until the Anniversary Update – Chrome supported swipe left/right for back/forward when Edge did not. iPads are limited to a fake Chrome (Safari in a wrapper) with no extension support.
  • Legacy software – Microsoft Money still works on this, a program Microsoft stopped supporting in 2008.
  • Native support for FLAC and MKV – one of my favourite features of Windows 10 is built in support for FLAC, the most popular lossless audio encoding format, and MKV, the most popular HD video format container. Apple still does not have native support for these in macOS or iOS.
  • Multiple user accounts – unlike an iPad, you can actually have multiple user accounts with separate settings etc. You can create user accounts for your spouse and children without the ability to administer the device. I believe Apple’s solution to shared devices is to, er, buy another one.

The only real downside is because of the slow eMMC disk speed, Windows 10 baseline version updates can take over 2 hours to install.

Pricing and comparisons to iPads

Surface 3 in the UK comes in two main models:

  • 64GB Storage, 2GB RAM – 419.99 GBP
  • 128GB storage, 4GB RAM – 499.99 GBP

I own the second model, purchased at the Hawaii Microsoft Store for 599 USD, along with a US layout type cover at 129 USD and a stylus at 49 USD. This was a total of 540 GBP at the time, so thanks to the exchange rate I essentially got the type cover for free.

If you want to buy an iPad with 128GB storage, this will cost you 619 GBP for the 9.7 inch iPad Pro. The iPad Air only goes up to 64GB for 429 GBP. You still don’t get a kickstand, expandable storage or even a USB port. iOS doesn’t even support a mouse, Bluetooth or not, forcing you to get gorilla arm when using it with a keyboard attached.

At under 500 quid, this is a feasible device to travel with and not have your holiday ruined if you lose it. I cannot find any justification for getting a Surface Pro 4 at double the price for the mid-range i5 / 8 GB RAM / 256 GB storage model. After using a 13 inch MacBook Pro as my main machine for three years, I’ve now offloaded the Mac and returned to having a beefy desktop and a cheap, portable companion tablet PC. I was sorely tempted by the Surface Book, but for two thirds of the price you can build a beast desktop and get a Surface 3 or another companion device for portability, using Remote Desktop if you need to connect back to base.

For those who don’t mind Windows and want a companion device, I really recommend getting a Surface 3 whilst you still can. They were/are truly revolutionary at their price point.

