A Fluent Migrator Primer

Last week at work I ran a quick lunchtime learning session on using the Fluent Migrator, so I thought I'd post up some notes and source code here.

Firstly, why do you need the Fluent Migrator? It's a decent way of getting version control for your database. Whilst every project I've worked on in the past has had version control for the source code, very few have had the database in a state where you can easily roll back to previous versions. Thrown-together solutions might include putting database backups or a bunch of SQL scripts under source control. Normally these attempts don't play well with automated deployments.

The Fluent Migrator gives us an easy way of migrating up and down between database states. It can easily be tied into .NET build frameworks like NAnt or MSBuild. A further bonus is that it includes a fluent interface that lets you define the database at an abstracted level, so you can spin up databases for multiple different vendors.

Using the Fluent Migrator is pretty simple. Create an empty class library, and then use NuGet to pull in the FluentMigrator package: Install-Package FluentMigrator

This will pull the DLLs you need into the project, and will also drop the migration runner tools (needed for running the migrations) into the packages folder. As mentioned, these include support for build frameworks, but for the purposes of this write-up I'll use the command line tool, Migrate.exe.

The next thing we need is an empty database to run the migrations against. In my case, this means firing up SQLyog, connecting to localhost and creating a new MySQL database.

This is all we need to get started! To test all the bits are working together, open up your command line tool and navigate to the packages folder that contains Migrate.exe (installed by NuGet). The migrate tool needs a couple of parameters to run; you can see these below. They're the connection string to the database you just created, the type of database you're targeting (in this case MySQL) and a reference to the assembly produced by building the class library you just made.

Migrate.exe
/connection "Server=localhost;Database=fluentmigrator;Uid=root;Pwd=root;Port=3307;"
/db mysql
/target C:\Users\matthew.durrant\Desktop\FluentMigrator\bin\Debug\FluentMigrator.dll

We haven’t created any migrations yet, but you can still run the Migration tool to make sure it can connect. If it works ok you’ll also see a new table created in your database called “VersionInfo”. This contains a list of migrations that have been run against the database, along with the date they were applied to the database, so right now it’ll be empty.

Next thing is to start creating migrations. Any modification you make to your database warrants a migration, and each migration should define both an “Up” method (for applying the migration) and a “Down” method (for undoing the migration). To create a migration, simply add a new class to the project which inherits from the Migration class. You’ll also need to add an attribute to the class which sets the id of the migration – this defines the order in which migrations are run. They don’t have to be consecutive. Here’s a basic example:

    [Migration(1)]
    public class Migration1 : Migration
    {
        public override void Up()
        {
        }

        public override void Down()
        {
        }
    }

The Up and Down methods can then be filled in. They’re written using the Fluent API, which easily allows you to create database schema objects, and to work with data. Here’s a simple example for creating some tables and a clustered index. Notice that the purpose of the Down method is to undo the work done in the Up method:

    [Migration(1)]
    public class Migration1 : Migration
    {
        public override void Up()
        {
            Create.Table("Users")
                .WithColumn("Id").AsInt32().Identity().PrimaryKey()
                .WithColumn("FirstName").AsString().NotNullable()
                .WithColumn("Surname").AsString().NotNullable();

            Create.Table("Accounts")
                  .WithColumn("Id").AsInt32().Identity().PrimaryKey()
                  .WithColumn("Name").AsString(100).NotNullable();

            Create.Table("UserAccounts")
                  .WithColumn("Id").AsInt32().Identity().PrimaryKey()
                  .WithColumn("UserId").AsInt32().ForeignKey("Users", "Id")
                  .WithColumn("AccountId").AsInt32().ForeignKey("Accounts", "Id");

            Create.Index().OnTable("Users")
                .WithOptions().Clustered()
                .OnColumn("FirstName").Ascending();
        }

        public override void Down()
        {
            Delete.Table("UserAccounts");
            Delete.Table("Accounts");
            Delete.Table("Users");
        }
    }

Now use the command line tool to apply this migration against your database. Simply add another parameter to tell the tool to run all available migrations:

--task migrate:up

Alternatively you can specify which migration you want to run up until by also specifying the version parameter. Right now we only have a single migration with ID of 1, so let’s use that:

--task migrate:up --version 1

You can check the database to make sure the tables and indexes are created as expected. If you look inside the VersionInfo table, you'll also see a record that Migration 1 was applied. The next thing to try is rolling back the migration. Run the command line tool again, this time with these additional parameters to migrate down to version 0 (i.e. an ID below 1):

--task migrate:down --version 0

Check the database and you'll see the Users, Accounts and UserAccounts tables have been removed again. The row in the VersionInfo table has also been removed.

There's plenty of other syntax to learn for working with the Fluent API (altering schema objects, inserting and deleting data). I'll write these up in a later post, but you already have the ability to perform and undo migrations against a database, and this gives you everything you need to place the database under source control.
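
As a quick taster, here's a rough sketch of a second migration using that syntax – the new Email column and the seed row below are invented purely for illustration, not taken from the real project:

    [Migration(2)]
    public class Migration2 : Migration
    {
        public override void Up()
        {
            // Add a column to an existing table
            Alter.Table("Users")
                .AddColumn("Email").AsString(255).Nullable();

            // Insert a row of seed data
            Insert.IntoTable("Accounts")
                .Row(new { Name = "Default Account" });
        }

        public override void Down()
        {
            // As always, the Down method undoes everything Up did
            Delete.FromTable("Accounts")
                .Row(new { Name = "Default Account" });

            Delete.Column("Email").FromTable("Users");
        }
    }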

Here's the source code – it includes a bunch of additional migrations.

The Xbox One

Earlier this week Microsoft finally revealed their new gaming console – the Xbox One – their replacement for the 8-year-old Xbox 360. As a big fan of the 360 I had a long list of features I was hoping to see in the release event for the new device. It’s safe to say I ended up disappointed.

Regardless of whether or not the new Xbox turns out to be any good, as an event designed to give the first impression of their new machine, the reveal was surely a flop. It's kind of incredible this even needs to be said, but if you're launching a new games console the first thing people are going to want to know about is the games. E3 is just around the corner and they've promised to show a full line-up of games then, but you only get one chance at a first impression, and showing a tiny amount of game footage from existing brands was a really poor choice. Forget about E3 – this was the time everyone was focusing on the Xbox One and trying to figure out what kinds of new gaming experiences we could expect from it. They really couldn't have picked a collection of games more known for their complete lack of innovation and endless sequels than Call of Duty, EA Sports and Forza. The only new IP shown, a title called Quantum Break, looked interesting but had an incomprehensible trailer and turned out not even to be exclusive to the Xbox.

In the absence of games, Microsoft were clearly positioning the Xbox One as a media centre for the living room. I understand why they want to do this, and it may well be that the Xbox One provides an excellent way for Microsoft to become gatekeepers for the delivery of TV, ideally positioned as traditional broadcasting gradually moves online. However, it's a mistake to try and sell the device this way – nobody watching this presentation was interested in the machine primarily for TV. The success of the Xbox 360 came from being a great games console that was then repurposed as a media device. Microsoft could repeat the same trick with the Xbox One, but only if they manage to convince the public it's a great gaming console first.

Compare this approach with the plans for the original Xbox in this article from one of its designers – especially Point 3:

Focus on one customer, not all of them.

What do you think happens at a company like Microsoft when making a game console is officially green-lit? Well, it becomes the breadbasket for every other product division’s agendas. The smart WebTV guys thinking about set-top boxes want it to be more like a TV cable box. The successful browser guys want it to run IE. Many thought it should ship with a keyboard and be usable as a PC-like device right out of the box. And even many in my own camp thought the games at launch should have been a more diverse offering for young and old, encroaching on both Nintendo and PlayStation’s positions.

The Xbox team got great at saying “no.” A lot.

In order to launch on time, the product had to focus on one key customer. In the case of the Xbox1, it was the ‘hardcore gamer’ who was between the ages of 18 and 25. It was not going to be an educational device for children. It was not going to run Microsoft Office for business users. It was going to do one thing really, really well: play action games, with realistic pixel shaders, running at 60+ fps.

But far from trying to win gamers over, Microsoft have made a number of decisions seemingly designed to drive gamers into the arms of Sony. The new machine won't be backwards compatible with your existing collection of Xbox games. So even though I'm an existing Xbox 360 user, I don't have any buy-in to the new system through my collection of old games – I don't lose anything by switching over to the competitor. Although it would be an extremely tough technical problem to solve, providing backwards compatibility would massively boost customer retention – so much so that I still expect they might try something innovative like an OnLive-type solution.

Another real disappointment with the new console is that Microsoft have decided to introduce a fee for the use of second-hand games. Games now have to be installed on the hard drive before they can be played, and each game can only be installed against one Xbox Live account without an additional charge (which seems likely to be the full price of the game). This move seems like it would surely kill high street gaming stores, who are heavily reliant on second-hand game sales, and it destroys the culture of sharing games amongst friends. This problem looks even worse when you consider how much Microsoft charge for digital software on the existing Xbox 360. Quite unlike the massive discounts you can expect from Steam sales on the PC, digital downloads on the 360 are almost always more expensive than the price you'd pay in a store for a physical copy of the game. Personally, I like to pick up single-player games about 6 months after release at a huge discount on the release price (normally <£10 compared to £40+ on release day); having Microsoft control the whole market means I can expect to start paying a lot more for games in the future.

The next problem is the Kinect. A lot of people are worried about the privacy implications of having an always-listening microphone and high-definition camera in your front room. I'm not particularly paranoid in that regard, but the original Kinect was a huge dud, so I'm not sure why Microsoft have made the baffling decision to keep pushing it on us. The novelty of motion controls wore off years ago – probably about 3 months after the Nintendo Wii was released. Luckily they've kept the design of the control pad extremely similar to the last generation, so hopefully developers will resist the temptation to put too much Kinect content in their games and it'll merely be an extremely expensive remote control.

After the superb 360, I thought the new Xbox would be a guaranteed purchase. They could have simply brought the specs up to date, put in a quieter fan, and that would have been enough for me to buy one. But somehow they seem to have added a bunch of things I don't want, removed some of the functionality I've come to expect, and the cost of gaming looks set to rise. The payoff should have been the games – it's going to take an impressive showing at E3 to start getting excited about this console.

Running the London Marathon 2013


A few weeks ago I ran the London Marathon for the second time. I wanted to write a little bit about the experience as it turned out to be a really great day out, and was a much more successful run than my first attempt!

Although I made it across the finish line in a fairly reasonable time (4h16m) last time I ran London (back in 2007!), it was a real struggle. I hit the wall hard at around 20 miles and had to resort to some walking towards the end. I was pleased to have completed a marathon for the first time, but it became so painful in the final hour that I can't honestly look back at the race itself as a positive experience. I definitely wasn't alone in suffering – 2007 was one of the hottest London Marathons on record and I saw huge numbers of runners collapsing and receiving medical attention on the course.

Happily, this time round it couldn’t have been more different. I was aiming for a sub 4h time, and I completed in 3h54m. More importantly the run was completely comfortable from start-to-finish and I had a great time – far from struggling to get round, I was actually quite disappointed that the course wasn’t a few miles longer as I was having so much fun and the atmosphere was so positive.

Here’s a link to my official time and race photos.
Here’s the run on RunKeeper.

Apart from getting lucky with the cooler weather, I tried to do things a bit differently this year.

Firstly, I didn't force myself to do extra-long training runs. I managed a single 18-miler, but I'm really starting to believe the benefit you gain from running 18-20+ miles in training makes minimal difference on the big day, especially when weighed up against the stress those runs cause, plus the greater chance of injury and the lengthy recovery periods. My thinking was that building up a good level of general fitness from regularly running 8km-15km would be plenty to get me round. So much of the marathon is psychological rather than physical, and it's actually pretty demoralizing to finish a 20-mile training run, feel like you couldn't take another step, and know that you'll still need to find another 6.2 miles from somewhere.

The second change was carrying food. I have no idea why, but last time I didn't consider food for the marathon at all. Due to the heat I took on water and Lucozade at every single stop, but didn't eat anything. A lot of runners take gels, but I can't stomach them, so this time I bought a neoprene running belt and carried chocolate Mule Bars with me. I ate one for breakfast (with a small amount of porridge) and another just before the run. The cooler weather also meant I was only taking on water every 2 miles this time.

I tried really hard not to change anything about the way I ran on the day of the marathon (this led to me running in my massively unsuitable touch rugby boots – I really need to switch to a lighter pair of trainers now), but on the morning before the race I decided to ditch the music for the first time. Although I always train with music, this turned out to be a great decision – it was loads of fun interacting with the crowd on the way round, and it was a real motivator hearing strangers calling your name. I'm not sure if it was due to the events in Boston, but there was an incredibly positive party atmosphere the whole way round, and I'm glad I didn't miss any of it.

The next major change was to follow a pacemaker and to keep my pace SLOW during the first half of the race. I went out far too fast in 2007, finishing the first half in 1h45m, and then falling off a cliff towards the end. This time I set out slowly, stuck with the pacemaker for the first half (which took about 1h57m) and kept my pace completely consistent throughout. Even after the pacemaker dropped out (it threw me completely when the pacemaker veered off the course and into the toilets) I used the RunKeeper app on my phone to monitor my pace and forced myself to slow down whenever the pace crept up more than a few seconds above my target pace. The result was that I didn’t hit the wall, I felt completely comfortable the whole way round, and the second half of the marathon ended up taking only 20 seconds longer than my first half!

Deliberately slowing myself down during the first half felt counter-intuitive, but I read a great post by an experienced marathon runner called Keith about trying to think of a race like a tide going in and out:

First third of race let the tide go out – let other runners leave you behind, feels like you are running backwards and hardest part of race. People especially at VLM will set off too fast, let them – they wil pay for it later. You cant win the race at this stage but you can lose it.

Second third the tide is out – keep up with those around you but they are tiring and you are fresh. You will feel good and it still feels easy. They will feel like crap and that how can there be another 12-18 miles to go

Final third the tide comes in – you start to over take people and each person you overtake you take a bit of their energy. You reel them in and leave them behind you. You will feel great and the crowds will love it.

This was the game plan I decided to go with and it played out exactly as Keith suggested. This was shown really clearly in my running statistics – over the final 7km I was passed by 27 runners, and I overtook 1124!

So overall it was a great day out, and I feel like I learned a lot. And as it was a comfortable run it seems like there’s the potential to knock a bit more time off my record so I’ll definitely be back for another marathon soon! I can’t imagine there’s a marathon out there with better atmosphere or organisation than London, but I’ve done it twice now, so maybe it’s time to try somewhere else.

Finally, a huge thank you to everyone who donated money. I got a free place courtesy of Laird Ben Whitnell through the friends and family lottery at Virgin, but I decided to try and do a bit of fundraising for Cancer Research UK, especially as some of my family have been affected this year. I'd aimed to raise £1000, but the final total will come to about £1500, or perhaps closer to £1750 once gift aid is factored in. Again, thanks so much to everyone who donated!

Windows Azure

Gotchas!

I’ve put my lead management site online using Windows Azure and have run into a few gotchas.

Several bugs started showing up once I'd deployed my website to the live environment hosted on Windows Azure. Firstly: all of the Twitter Bootstrap icons vanished! The rest of the site looked exactly as it had on the development environment, and the icons were showing up just fine on my local machine, but on the live Azure site they'd disappeared completely. After a bit of fiddling around I figured out this was related to the use of Font Awesome – it seems that Azure doesn't serve some of the web font MIME types out of the box. Stupid… but fixable!

Just add the following to your web.config file to let Azure know how to handle the file extensions used by the icons:

<system.webServer>
  <staticContent>
    <remove fileExtension=".svg" />
    <remove fileExtension=".eot" />
    <remove fileExtension=".woff" />
    <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    <mimeMap fileExtension=".eot" mimeType="application/vnd.ms-fontobject" />
    <mimeMap fileExtension=".woff" mimeType="application/x-woff" />
  </staticContent>
</system.webServer>

Update: Later on I realised this only worked for Chrome – to get IE and Firefox working I also needed to ensure that each of the font files had its Build Action changed from 'None' to 'Content'.

The second problem was occasional crashes due to MARS (Multiple Active Result Sets) not being set to true by default on Azure. To get round this you'll need to log into the Azure portal and locate the connection string that has been generated for you. Then, during publishing, you'll have to override the connection string – paste in the Azure connection string and append the following:

MultipleActiveResultSets=true;
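
If you're ever building the connection string in code (say, for a quick connectivity test), the same flag can be set via SqlConnectionStringBuilder – a minimal sketch, with a made-up server and database standing in for the connection string Azure generates for you:

using System.Data.SqlClient;

// The base string here is a placeholder - paste in the connection string from the Azure portal
var builder = new SqlConnectionStringBuilder(
    "Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=me;Password=secret;")
{
    MultipleActiveResultSets = true // the MARS setting that Azure leaves off by default
};

var connectionString = builder.ConnectionString;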

Finally, I ran into another issue when I wanted to deploy an automatic code first database migration that would lead to data loss. After publishing the site I was getting the following error message:

Automatic migration was not applied because it would result in data loss.

This is easy to fix whenever it happens locally – just use the verbose and force options when updating the database via the Package Manager Console (e.g. update-database -force -verbose). But how do you do that during publication?

Turns out it’s pretty easy. In your data layer’s migration configuration file, add the following lines to ensure the automatic migration is forced during publication:

// This goes in the Configuration class that Code First Migrations generated for your
// context (the DbMigrationsConfiguration<TContext> subclass in the Migrations folder)
public Configuration()
{
    AutomaticMigrationsEnabled = true;

    // Allow automatic migrations to run even when they would cause data loss
    AutomaticMigrationDataLossAllowed = true;
}

Amelia Update

Just a couple of new Amelia videos!

She’s started toddling everywhere:

And an example of why all our stuff is now broken…

The screenshot for that second video is pretty special 🙂

The Presentation Layer – Twitter Bootstrap with MVC

We're just back from another last-minute holiday, this time a week away in Cornwall! We had great weather, got to spend a lot of time on the beach and took trips to Tintagel, Boscastle and the Eden Project – as soon as I can find a camera cable I'll put some videos and pictures up.

Whilst we were away I carried on working in the evenings (Amelia means we’re housebound during those times nowadays) and got stuck into the “look and feel” of the site. I’d planned to leave this until the very end of the project, but I was getting thoroughly sick of the blue Microsoft MVC default scheme and wanted to have something a bit nicer to look at.

Having worked on a series of ‘ugly’ corporate sites, the appearance of this site is actually pretty important to me. I don’t want this site to just look functional, it also needs to be beautiful – modern typography, intuitive controls, plenty of white space, icons, rounded corners, gradients etc. etc.

Besides being pretty, the presentation layer also has a few other goals. The way people consume websites has changed a lot in the last few years – it's all about mobile devices these days, so the site needs to look good on an iPhone or tablet as well as on a desktop computer or laptop. I want the site to be completely functional even on a tiny phone screen.

Finally, the site I'm working on needs to be able to support Arabic text – this means it needs to support both left-to-right and right-to-left text.

It’s not completely there yet but I’m pretty pleased with how it’s turned out. Here are a few screenshots of the logon page:



And the branches page…

I've started by adding support for the Twitter Bootstrap. This takes away a lot of the donkey work of supporting the varying screen sizes of all the different devices I want to support. The site is built on a fluid grid system that dynamically resizes to display properly even on mobile devices with a width of 480px or less. Notice how the menu stretches across the whole screen on a laptop monitor with a width of 1200px, then changes to a dropdown list when you shrink the screen down, so you can still see all the available options and navigate them easily. The menu takes up a lot of screen space in the screenshot below, but it can be hidden away by clicking on the button in the top right.

I think the Twitter Bootstrap style is really nice, but it's fairly generic. I've used a theme called Base Admin from WrapBootstrap to get something a little more individual that I can build on, without having to spend weeks creating something completely from scratch.

On top of this I'm using jQuery UI for all the interactive controls on the site. There aren't a lot of these so far – just some datepickers right now. I've been using jQuery for all the JavaScript throughout the front end, which has been superb, but that's probably the subject for another post.

Arabic support has been achieved using an LTR-to-RTL convertor called CSSJanus – a simple script that goes through your CSS and swaps all "left" and "right" styles – floating, margins, padding etc. – with their opposites. Once that's done, an additional piece of CSS sets the direction:

html
{
    direction: rtl;
}

This leaves you with two sets of stylesheets, and the code dynamically references the correct set depending on the culture the user is browsing the site in (the right-to-left property can be derived from the CultureInfo class).
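
The switching logic itself only takes a couple of lines – something along these lines, where the stylesheet paths are placeholders for the original file and its CSSJanus-converted twin:

using System.Globalization;

public static class StylesheetHelper
{
    // Picks the stylesheet for the current culture. "site.css" and "site.rtl.css"
    // are placeholder names for the original stylesheet and the CSSJanus output.
    public static string GetStylesheetPath()
    {
        return CultureInfo.CurrentUICulture.TextInfo.IsRightToLeft
            ? "/Content/site.rtl.css"
            : "/Content/site.css";
    }
}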

Here's what the site looks like in Arabic (using some placeholder lorem ipsum text). As you can see it's not perfect yet, but it's 90% of the way there:

Getting there!

Moving House

I've been a bit lax about posting this month as there's been a lot going on, both with my coding project and personally. Right now we're in the process of moving house in Winchester. The owners of the house we've been renting have decided to sell up, so we need to relocate, and we moved out yesterday. We've found a really nice place at the other end of St. Cross, a little further out from the centre, but with the benefit of costing only about half of what we've been paying in rent. It has a much nicer garden for Amelia, three good-sized bedrooms with a similar modern finish to our current house and (crucially for us) only two floors. As much as I liked the location of our current place, Gem and I have both been sick of trekking up and down three flights of stairs all day and it's always felt far too big for our needs. I'm not sure how much longer I'm going to carry on with my coding project before looking for 'proper' employment, but I'm still really enjoying the work, so cutting the rent definitely seems like the smart move – there's no point dipping into our savings if we can help it.

Here’s a little video of Amelia helping the packing!

I’ll talk more about how the project is going in another post, but it’s starting to feel a lot more like a functioning website now. I feel like I’m comfortable over the learning hump now for several of the technologies and I’m becoming properly productive with MVC, jQuery and Linq (I’m completely in love with deferred execution). I’ve still got a few reservations over DI but that’s more to do with the time it takes to write anything with it – I’m more than happy with the results and habits it leads to.

Company and Staff Pages

I haven't posted much about the project over the last week as I wanted to plough some solid development time into it rather than writing about it. It's coming on pretty well and I'm starting to get a real feel for developing with MVC.

When you create a new user, they're assigned administration rights to a new company. The site then provides a simple interface for setting up that company. For our purposes a company is modelled as a collection of branches – a head branch and up to two levels of child branches below it. The site creates a new 'head branch' for the user as soon as they validate their email and log in for the first time.

I’ve been watching a lot of the US Office recently so here’s what the fictional paper company Dunder Mifflin might look like in the site:

Working with tree structures is fairly tricky. Luckily I found a great site that gives some extension method code allowing tree data structures to be queried with Linq. This means I can pass a branch ID into a query and retrieve the hierarchy of branches below it.
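
I won't copy that code wholesale here, but the general shape of it is a recursive extension method along these lines – the Branch class and the property names are my own stand-ins rather than the original code:

using System.Collections.Generic;
using System.Linq;

// Stand-in for the branch entity - only Id and ParentId matter for the traversal
public class Branch
{
    public int Id { get; set; }
    public int? ParentId { get; set; }
    public string Name { get; set; }
}

public static class TreeExtensions
{
    // Returns the branch plus every descendant beneath it, flattened into a single
    // sequence that can then be filtered and projected with ordinary Linq
    public static IEnumerable<Branch> WithDescendants(this Branch branch, IList<Branch> allBranches)
    {
        yield return branch;

        foreach (var child in allBranches.Where(b => b.ParentId == branch.Id))
        {
            foreach (var descendant in child.WithDescendants(allBranches))
            {
                yield return descendant;
            }
        }
    }
}

So fetching everything below a given branch is just a case of calling headBranch.WithDescendants(allBranches) and querying the result.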

Once I've created my company structure I can assign staff to each of the branches. These staff are actual user accounts which we're going to allow access to a limited part of the company. For example, we might create a new employee at a branch who has access to just that branch. Or we might want to create a regional manager who has access to several branches, including their child branches.

The site makes this easy to do – just fill in a simple form with the employee's name, their role in the company and their email address. They'll receive an email with a unique link that lets them access the site, where they can create their own username and password.

User Registration Pages

I’ve spent the last couple of days getting the user creation pages of the site set up. After so much messing around working with conceptual models and the like it’s nice to be producing something concrete for a change. It’s also given me a lot more experience with newer ‘front-end’ technologies including MVC3, CSS3 and jQuery. It’s been a fairly steep learning curve but I’m pretty pleased with the result so far – not just the fairly simple pages I’ve produced, but also seeing all the different bits of the application running together for the first time.

Here’s a quick sketch of the user registration process I’d envisioned, showing how the different pages should all link together. The goal is to allow a user to register an account with the minimum possible amount of effort on their part. We only require a couple of bits of information – a unique username and email address, their name and a password.

Plan for the user creation process.

All this information is validated on-the-fly as the user types it in using Ajax calls, which mimics sites like Twitter. For example as they’re typing in their username, the site checks the User table to make sure no other user has registered with the same username. It also uses a RegEx to ensure only alphanumeric characters are used (no spaces or special characters allowed). As the user types their username they can see a tick or cross icon (icons from here) to let them know if they’ve entered an available and acceptable value.
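
The server-side half of that username check is nothing fancy – roughly this, where the exact pattern is my own but captures the "letters and digits only" rule:

using System.Text.RegularExpressions;

// True only if the username is non-empty and contains nothing but letters and digits
public static bool IsUsernameFormatValid(string username)
{
    return !string.IsNullOrEmpty(username) && Regex.IsMatch(username, "^[a-zA-Z0-9]+$");
}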

All fields are validated in a similar way. Email addresses seem nigh-on impossible to validate programmatically, so I haven't even tried – the code simply tries to create the .NET MailMessage class using the entered value. If it can't create a MailMessage with the email address, it's assumed the address is invalid – certainly we know we won't be able to use the value to send email messages within the site, as it'll need to create a MailMessage to do that. The real validation takes place once the user is registered, when we send them an email with a unique link to make sure the address exists and they have control of it.
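
In code that boils down to a try/catch around constructing the message – a rough sketch of the idea:

using System;
using System.Net.Mail;

// Returns true if .NET can build a MailMessage addressed to the value.
// Not proper validation - just "could we ever send to this address?"
public static bool CanCreateMailMessageFor(string email)
{
    try
    {
        using (new MailMessage(email, email))
        {
            return true;
        }
    }
    catch (FormatException)
    {
        return false;
    }
    catch (ArgumentException)
    {
        return false;
    }
}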

The password strength is calculated using some code I've adapted from this site. The code returns an enumeration value which indicates how strong the password is, based on a simple points system. It's basic, but it ensures passwords are at least minimally secure and works fine for my purposes.
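
The adapted code is essentially a points system mapped onto an enum – roughly this shape, though the thresholds and names here are mine rather than the original author's:

using System.Linq;

public enum PasswordStrength
{
    Blank = 0,
    VeryWeak = 1,
    Weak = 2,
    Medium = 3,
    Strong = 4
}

// One point each for reasonable length, digits, mixed case and symbols
public static PasswordStrength GetPasswordStrength(string password)
{
    if (string.IsNullOrEmpty(password)) return PasswordStrength.Blank;

    var score = 0;
    if (password.Length >= 8) score++;
    if (password.Any(char.IsDigit)) score++;
    if (password.Any(char.IsUpper) && password.Any(char.IsLower)) score++;
    if (password.Any(c => !char.IsLetterOrDigit(c))) score++;

    switch (score)
    {
        case 4: return PasswordStrength.Strong;
        case 3: return PasswordStrength.Medium;
        case 2: return PasswordStrength.Weak;
        default: return PasswordStrength.VeryWeak;
    }
}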

The name is the easiest to validate – pretty much any non-blank entry is acceptable. I store a first and last name in the User table, but we let the user enter their full name and the code divides it up in the background – no need for two fields when we can make registration simpler by offering one.
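
The split itself is trivial – break on the first space and treat everything after it as the last name (how middle names are handled is just my guess at a sensible default):

// "Michael Gary Scott" becomes first name "Michael", last name "Gary Scott";
// a single word goes entirely into the first name.
public static void SplitFullName(string fullName, out string firstName, out string lastName)
{
    var trimmed = (fullName ?? string.Empty).Trim();
    var spaceIndex = trimmed.IndexOf(' ');

    if (spaceIndex < 0)
    {
        firstName = trimmed;
        lastName = string.Empty;
    }
    else
    {
        firstName = trimmed.Substring(0, spaceIndex);
        lastName = trimmed.Substring(spaceIndex + 1).Trim();
    }
}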

jQuery and MVC actually make writing these Ajax calls laughably easy. To show the user some validation on the fly we just need to call a URL – which you can do using the jQuery $.ajax() method – and show the user the result somewhere on the current screen. Here we're calling into the CheckFullNameIsValid URL and passing in a parameter via the query string. We'll take the result of this call and use it to replace the div tag with the id 'FullNameIsValid'. To make it look a bit slicker, you can also see that the first line pops a little animation inside this div tag in the meantime to show the user something is happening:

function CheckFullNameIsValid(fullName) {
    // Show a little loading animation while the Ajax call is in flight
    $('#FullNameIsValid').html(GetUpdateAnimation('FullNameIsValid'));

    // Ask the server whether the name is valid, then drop the returned
    // partial view into the placeholder div
    $.ajax(
            {
                url: 'CheckFullNameIsValid?FullName=' + encodeURIComponent(fullName),
                success: function (data) {
                    $('#FullNameIsValid').html(data);
                }
            });
}

Doing this will call into the method on the controller which corresponds to this URL. In the method we check that the full name the user has entered is indeed valid and send back some HTML in a partial view based on the model we've created. Here's the method on the controller:

public ActionResult CheckFullNameIsValid(string fullName)
{
    var isFullNameValid = _userHelper.IsFullNameValid(fullName);
    var model = new FullNameIsValidModel
    {
        FullNameIsValid = isFullNameValid
    };
    return PartialView("FullNameIsValid", model);
}

Finally, here's the partial view. As you can see, we just check the boolean value FullNameIsValid on the model and return some HTML which the initial jQuery call will drop inside the tag with id 'FullNameIsValid'.

@model Website.Models.FullNameIsValidModel

@if(Model.FullNameIsValid)
{
    <text>
         <div class="validation">
            <div class="tick" />
            <div>Name looks great.</div>
         </div>
    </text>
}
else
{
    <text>
        <div class="validation">
            <div class="cross" />
            <div>A name is required.</div>
        </div>
    </text>
}

All this validation matches exactly the validation that occurs when we try to submit the values. Even if the user somehow bypasses the validation on the client, they'll be sent back an error message from the server that explains the problem. It won't look quite as fancy, but unless they start messing around with the response they should only ever see the 'pretty' validation on the client.
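
On the server that just means repeating the same checks in the POST action and feeding any failures into ModelState – a sketch of the pattern, with the action, model and redirect names made up for illustration:

[HttpPost]
public ActionResult Register(RegisterModel model)
{
    // Repeat the checks the Ajax calls perform on the client
    if (!_userHelper.IsFullNameValid(model.FullName))
    {
        ModelState.AddModelError("FullName", "A name is required.");
    }

    if (!ModelState.IsValid)
    {
        // Redisplay the form with the server-side error messages
        return View(model);
    }

    // ...create the user, then move on to the email verification step
    return RedirectToAction("AwaitingVerification");
}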

Once the user registers their account they're shown a 'waiting' screen which lets them know they need to verify their email address – this mimics the email validation UI of another site I like called FogBugz. They can verify their email address by clicking on a unique link in an email we've sent them. If the user aborts the process, they'll be sent back to this waiting screen the next time they log in. They also have the option to change their email address (which assigns them a new unique code and resends the welcome email) or to resend the email.