The Xbox One

Earlier this week Microsoft finally revealed their new gaming console – the Xbox One – their replacement for the 8-year-old Xbox 360. As a big fan of the 360 I had a long list of features I was hoping to see in the release event for the new device. It’s safe to say I ended up disappointed.

Regardless of whether or not the new Xbox turns out to be any good, as an event designed to be the first impression of their new machine, the reveal was surely a flop. It’s kind of incredible this even needs to be said, but if you’re launching a new games console, the first thing people want to know about is the games. E3 is just around the corner and they’ve promised to show a full lineup of games then, but you only get one chance at a first impression, and showing a tiny amount of game footage from existing brands was a really poor choice. Forget about E3 – this was the moment everyone was focusing on the Xbox One and trying to figure out what kinds of new gaming experiences we could expect from it. They really couldn’t have picked a collection of games more known for their lack of innovation and endless sequels than Call of Duty, EA Sports and Forza. The only new IP shown, a game called Quantum Break, looked interesting but had an incomprehensible trailer and turned out not to be exclusive to the Xbox.

In the absence of games, Microsoft were clearly positioning the Xbox One as a media centre for the living room. I understand why they want to do this, and it may well be that the Xbox One provides an excellent way for Microsoft to become gatekeepers for the delivery of TV, ideally positioned as traditional broadcasting gradually moves online. However, it’s a mistake to try to sell the device this way – nobody watching this presentation was interested in the machine primarily for TV. The success of the Xbox 360 came from being a great games console that was then repurposed as a media device. Microsoft could repeat the same trick with the Xbox One, but only if they manage to convince the public it’s a great gaming console first.

Compare this approach with the plans for the original Xbox in this article from one of its designers – especially point 3:

Focus on one customer, not all of them.

What do you think happens at a company like Microsoft when making a game console is officially green-lit? Well, it becomes the breadbasket for every other product division’s agendas. The smart WebTV guys thinking about set-top boxes want it to be more like a TV cable box. The successful browser guys want it to run IE. Many thought it should ship with a keyboard and be usable as a PC-like device right out of the box. And even many in my own camp thought the games at launch should have been a more diverse offering for young and old, encroaching on both Nintendo and PlayStation’s positions.

The Xbox team got great at saying “no.” A lot.

In order to launch on time, the product had to focus on one key customer. In the case of the Xbox1, it was the ‘hardcore gamer’ who was between the ages of 18 and 25. It was not going to be an educational device for children. It was not going to run Microsoft Office for business users. It was going to do one thing really, really well: play action games, with realistic pixel shaders, running at 60+ fps.

But far from trying to win gamers over, Microsoft have made a number of decisions seemingly designed to drive gamers into the arms of Sony. The new machine won’t be backwards compatible with your existing collection of Xbox games. So even though I’m an existing Xbox 360 user, I don’t have any buy-in to the new system through my collection of old games – I lose nothing by switching over to the competitor. Although it would be an extremely tough technical problem to solve, providing backward compatibility would massively boost customer retention – so much so that I still expect they might try something innovative like an OnLive-style streaming solution.

Another real disappointment with the new console is that Microsoft have decided to introduce a fee for the use of second-hand games. Games now have to be installed on the hard drive before they can be played, and each game can only be installed against one Xbox Live account without an additional charge (which seems likely to be the full price of the game). This move would surely kill high-street gaming stores, who are heavily reliant on second-hand game sales, and it destroys the culture of sharing games amongst friends.

This problem looks even worse when you consider how much Microsoft charge for digital software on the existing Xbox 360. Quite unlike the massive discounts you can expect from Steam sales on the PC, digital downloads on the 360 are almost always more expensive than the price you’d pay in a store for a physical copy of the game. Personally, I like to pick up single-player games about six months after release at a huge discount (normally under £10, compared to £40+ on release day) – having Microsoft control the whole market means I can expect to pay a lot more for games in the future.

The next problem is the Kinect. A lot of people are worried about the privacy implications of having an always-listening microphone and high-definition camera in your front room. I’m not particularly paranoid in that regard, but the original Kinect was a huge dud, so I’m not sure why Microsoft have made the baffling decision to keep pushing it on us. The novelty of motion controls wore off years ago – probably about three months after the Nintendo Wii was released. Luckily they’ve kept the design of the control pad extremely similar to the last generation, so hopefully developers will resist the temptation to put too much Kinect content in their games and it’ll merely be an extremely expensive remote control.

After the superb 360, I thought the new Xbox would be a guaranteed purchase. They could have simply brought the specs up to date, put in a quieter fan, and that would have been enough for me to buy one. But somehow they seem to have added a bunch of things I don’t want, removed some of the functionality I’ve come to expect, and the cost of gaming looks set to rise. The payoff should have been the games – it’s going to take an impressive showing at E3 to get me excited about this console.

Windows Azure


I’ve put my lead management site online using Windows Azure and have run into a few gotchas.

Several bugs started showing up once I’d deployed my website to the live environment hosted on Windows Azure. Firstly, all of the Twitter Bootstrap icons vanished! The rest of the site looked exactly as it had on my development machine, but on the live Azure site the icons had disappeared completely. After a bit of fiddling around I figured out this related to the use of Font Awesome – it seems that Azure doesn’t serve some of the MIME types used by web fonts out of the box. Stupid… but fixable!

Just add the following to your web.config file to let Azure know how to handle the file extensions used by the icons:

<system.webServer>
  <staticContent>
    <remove fileExtension=".svg" />
    <remove fileExtension=".eot" />
    <remove fileExtension=".woff" />
    <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    <mimeMap fileExtension=".eot" mimeType="application/vnd.ms-fontobject" />
    <mimeMap fileExtension=".woff" mimeType="application/x-woff" />
  </staticContent>
</system.webServer>

Update: Later on I realised this only worked for Chrome – to get IE and Firefox working I also needed to ensure that each of the font files had its Build Action changed from ‘None’ to ‘Content’.

The second problem was occasional crashes due to MARS (Multiple Active Result Sets) not being enabled by default on Azure. To get round this you’ll need to log into the Azure portal and locate the connection string that has been generated for you. Then, during publishing, override the connection string – paste in the Azure connection string and append the following:

MultipleActiveResultSets=True
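For context, an Azure SQL connection string with the flag appended looks something like this (the server, database and credential values here are placeholders, not real ones):

```
Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;MultipleActiveResultSets=True;
```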
Finally, I ran into another issue when I wanted to deploy an automatic code first database migration that would lead to data loss. After publishing the site I was getting the following error message:

Automatic migration was not applied because it would result in data loss.

This is easy to fix whenever it happens locally – just use the verbose and force options when updating the database via the Package Manager Console (e.g. update-database -force -verbose). But how do you do that during publication?

Turns out it’s pretty easy. In your data layer’s migration configuration file, add the following lines to ensure the automatic migration is forced during publication:

public Configuration()
{
    AutomaticMigrationsEnabled = true;
    AutomaticMigrationDataLossAllowed = true;
}

Amelia Update

Just a couple of new Amelia videos!

She’s started toddling everywhere:

And an example of why all our stuff is now broken…

The screenshot for that second video is pretty special 🙂

Moving House

I’ve been a bit lax about posting this month as there’s been a lot going on, both with my coding project and personally. Right now we’re in the process of moving house in Winchester. The owners of the house we’ve been renting have decided to sell up, so we needed to relocate, and we moved out yesterday. We’ve found a really nice place at the other end of St. Cross, a little further out from the centre, but with the benefit of costing only about half of what we’ve been paying in rent. It has a much nicer garden for Amelia, three good-sized bedrooms with a similar modern finish to our current house and (crucially for us) only two floors. As much as I liked the location of our current place, Gem and I have both been sick of trekking up and down three flights of stairs all day, and it’s always felt far too big for our needs. I’m not sure how much longer I’ll carry on with my coding project before looking for ‘proper’ employment, but I’m still really enjoying the work, so cutting the rent definitely seems like the smart move – there’s no point dipping into our savings if we can help it.

Here’s a little video of Amelia helping the packing!

I’ll talk more about how the project is going in another post, but it’s starting to feel a lot more like a functioning website now. I feel like I’m over the learning hump for several of the technologies and I’m becoming properly productive with MVC, jQuery and Linq (I’m completely in love with deferred execution). I’ve still got a few reservations about DI, but that’s more to do with the time it takes to write anything with it – I’m more than happy with the results and habits it leads to.

Company and Staff Pages

I didn’t post much about the project last week as I wanted to plough some solid development time into it rather than write about it. It’s coming on pretty well and I’m starting to get a real feel for developing with MVC.

When you create a new user, they’re assigned administration rights to a new company. The site then provides a simple interface for setting up that company. For our purposes a company is modelled as a collection of branches – a head branch and up to two levels of child branches below it. The site creates a new ‘head branch’ for the user as soon as they validate their email and log in for the first time.

I’ve been watching a lot of the US Office recently so here’s what the fictional paper company Dunder Mifflin might look like in the site:

Working with tree structures is fairly tricky. Luckily I found a great site that provides some extension method code allowing tree data structures to be queried with Linq. This means I can pass a branch id into a query and retrieve the hierarchy of branches below it.
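As a sketch of the idea (the Branch class and method names here are my own illustration, not the actual code from that site), a small recursive extension method is enough to flatten a branch hierarchy so it can be queried with ordinary Linq:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical minimal model: a branch with child branches.
public class Branch
{
    public int Id { get; set; }
    public List<Branch> Children { get; } = new List<Branch>();
}

public static class BranchExtensions
{
    // Recursively yields a branch and every branch below it,
    // turning the tree into a flat sequence that Linq can query.
    public static IEnumerable<Branch> SelfAndDescendants(this Branch branch)
    {
        yield return branch;
        foreach (var child in branch.Children)
            foreach (var descendant in child.SelfAndDescendants())
                yield return descendant;
    }
}
```

With that in place, something like `headBranch.SelfAndDescendants().Where(b => /* your filter */ true)` retrieves the whole hierarchy below a given branch.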

Once I’ve created my company structure I can assign staff to each of the branches. These staff are actual user accounts that we’re going to allow access to a limited part of the company. For example, we might create a new employee at a branch who has access to just that branch. Or we might want to create a regional manager who has access to several branches, including their child branches.

The site makes this easy to do – just fill in a simple form with the employee’s name, their role in the company and their email address. They’ll receive an email with a unique link that lets them access the site, where they can create their own username and password.

Completing the Dependency Injection

In an earlier post I discussed how to use dependency injection to make unit testing possible. I ended up with a bunch of classes in my business and data layers with constructors that let you pass in interfaces for all the classes they depend on.

However, at some point you actually have to make concrete decisions about which implementation of an interface you’re going to use. This gets done in your executing assembly – i.e. your StartUp project. I have a couple of StartUp projects that I use – a console application (for testing where I want to write out a bunch of output) and an MVC3 website (the presentation layer of my project).

Yesterday I got round to testing the first few pages of my MVC3 website. Using the Inversion of Control (IoC) container actually turned out to be a piece of cake. I’m using Ninject as my IoC container of choice, and I used the NuGet package manager to install Ninject.MVC3 into my MVC3 project. Here’s the command for the Package Manager Console:

Install-Package Ninject.MVC3

This creates a new folder in my MVC3 website project called “App_Start”, containing a NinjectWebCommon.cs file. I can use this file to register all my dependencies – i.e. which concrete type each interface should resolve to when the code encounters a dependency. This is done in the RegisterServices method – here’s an example from my project:

        /// <summary>
        /// Load your modules or register your services here!
        /// </summary>
        /// <param name="kernel">The kernel.</param>
        private static void RegisterServices(IKernel kernel)
        {
            // Example bindings for the interfaces discussed below:
            kernel.Bind<ICryptographyHelper>().To<Sha256CryptographyHelper>();
            kernel.Bind<ICampaignHelper>().To<CampaignHelper>();
        }

Most of these are pretty unremarkable. Only the ICryptographyHelper resolves to anything interesting – the Sha256CryptographyHelper. From this you can see how easy it would be to alter the type of cryptography my project uses simply by changing the concrete class here – I could create another implementation of ICryptographyHelper, such as an AES or Rijndael version, and change the reference in this single place to replace it throughout my entire solution.
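To illustrate that swap (this is my own sketch of what such a helper might look like, not the actual code from the project), the interface and its SHA-256 implementation could be as simple as:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical shape of the interface described above.
public interface ICryptographyHelper
{
    string Hash(string input);
}

// A SHA-256 implementation; switching algorithms would mean writing
// another ICryptographyHelper and changing a single Ninject binding.
public class Sha256CryptographyHelper : ICryptographyHelper
{
    public string Hash(string input)
    {
        using (var sha = SHA256.Create())
        {
            var bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
            // Render the digest as lowercase hex.
            return BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant();
        }
    }
}
```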

To finish up, here is an example of where the IoC container injects the concrete implementation – in a controller of my MVC3 website. Note that in this controller I’ve added a constructor that requires the ICampaignHelper dependency to be injected.

        private readonly ICampaignHelper _campaignHelper;

        public CampaignStatisticsController(ICampaignHelper campaignHelper)
        {
            _campaignHelper = campaignHelper;
        }

Below is a screenshot of me stepping through this code when it gets triggered within the website. You can see that without writing any further code, the Ninject IoC container has injected the correct concrete version of the ICampaignHelper interface, CampaignHelper, into the constructor.

Even better than this, the IoC container has also resolved the dependencies of the CampaignHelper class, which was itself dependent on another class that implements IUnitOfWork!

        private readonly IUnitOfWork _unitOfWork;

        public CampaignHelper(IUnitOfWork unitOfWork)
        {
            _unitOfWork = unitOfWork;
        }

Why Do Images Uploaded to Flickr Lose Their Colour? Explained.

I host most of my photos on my Flickr account and pay for the pro membership. It’s pretty inexpensive ($45 for two years), but I’ve been planning on moving away since I noticed that the colours of my images look really flat once I’ve uploaded them.

Here’s an example of the problem. On the left is the picture of my daughter I wanted to upload. On the right is how her picture appears on Flickr:

Amelia Before and After Flickr

Notice that the colour has gone – the image looks noticeably more ‘grey’ and desaturated once it gets to Flickr. My first thought was that it might be a problem with the uploader I use – perhaps it strips out information or compresses the image to make for a smaller upload – but the problem occurs no matter what tool you use, even if you upload directly through the website.

From doing a bit of research on the web, I was led to believe that the issue was caused by Flickr stripping out the “colour space” information embedded in the image. I edit my images in Photoshop, so they’re saved with a reference to the Adobe RGB colour space. The theory was that Flickr would remove this information, meaning the images are displayed in the standard sRGB colour space – with noticeably incorrect results.

However, whether or not this was true in the past, I don’t believe Flickr does this any more. Instead the issue seems to be with browser support for colour spaces. Try viewing the picture of my daughter Amelia directly on Flickr in your browser and see whether it appears like the ‘before’ or ‘after’ half of the image above.

If it looks ‘grey’ in your browser, then your browser doesn’t support colour management. As I tend to use Chrome, I was more than a little surprised to find that Chrome is one of the browsers affected by this issue. Sort it out, Chrome! As a comparison, if I view the link in Firefox – a browser which correctly supports colour management – everything appears as expected.

Here’s a great example of some images that appear radically different if your browser doesn’t support colour management.

Windows users seem to suffer from this problem more than Mac users, as the latest version of Chrome supports colour management on the Mac.

So what’s the solution to all of this? I think the correct thing to do is to convert any image you want to display online to the sRGB colour space prior to uploading (it’s simple enough to do this in Photoshop). This ensures that all browsers will display your photo as you’d expect. Sure enough, once I started doing this, even Chrome displayed pictures of my daughter without desaturating them.