This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub: https://github.com/mmooney/MMDB.AzureSample.  GitHub commit checkpoints are referenced throughout the posts.

In the last post we covered setting up a basic Azure-enabled web site.  In this post, we’ll show you how to set up your Azure account.  Next, we’ll cover deploying the project to Azure.

So head on over to http://www.windowsazure.com/ and click the Free Trial link:

[screenshot]

Sign in with your Microsoft/Live/Passport/Metro/Whatever user ID. If you don’t have one, create one:

[screenshot]

Why yes, I would like the 90-Day Free Trial:

[screenshot]

Once you do a little validation of your phone and credit card, they will go ahead and set up your account.

[screenshot]

After a minute or two, click the Portal link:

[screenshot]

This takes you to https://manage.windowsazure.com/, the main place you want to be for managing your Azure application.

[screenshot]

And there we are.  Poke around a little bit, there is a lot of interesting stuff in here.  In the next post, we’ll create a Cloud Service and deploy our sample application there.

One of the hardest problems to solve when setting up a deployment strategy is how to handle the web.configs and exe.configs.  Each environment will have different settings, and so every time you deploy something somewhere you need to make that web.config look different.

The quick and dirty answer is to have a separate web.config for each environment.  Then during a deployment we drop the prod/web.config or staging/web.config into the web directory, and you’re good to go.  However, like a lot of problematic development strategies, this is really fast and easy to get going with, but it doesn’t age very well.  What happens when your DEV->STAGING->PRODUCTION environments evolve into LOCAL->DEV->QA->INTEGRATION->STAGING->PRODUCTION?  Or when you have machine-specific or farm-specific settings that change from one part of the production environment to another?

Most importantly, what happens when that web.config changes for a reason other than configuration?  Then you have a whole bunch of web.configs to fix, and you’re going to put a typo in at least 2 of them, it’s guaranteed.

Let’s take a look at a VERY simple web.config, created from just a basic MVC 4 project:

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit

http://go.microsoft.com/fwlink/?LinkId=152368

  -->
<configuration>
  <configSections>
    <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=4.4.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
  </configSections>
  <connectionStrings>
    <add name="DefaultConnection"
         providerName="System.Data.SqlClient"
         connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-MMDB.AzureSample.Web-20130117123218;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnet-MMDB.AzureSample.Web-20130117123218.mdf" />
  </connectionStrings>
  <appSettings>
    <add key="webpages:Version" value="2.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="PreserveLoginUrl" value="true" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
    <authentication mode="Forms">
      <forms loginUrl="~/Account/Login" timeout="2880" />
    </authentication>
    <pages>
      <namespaces>
        <add namespace="System.Web.Helpers" />
        <add namespace="System.Web.Mvc" />
        <add namespace="System.Web.Mvc.Ajax" />
        <add namespace="System.Web.Mvc.Html" />
        <add namespace="System.Web.Optimization" />
        <add namespace="System.Web.Routing" />
        <add namespace="System.Web.WebPages" />
      </namespaces>
    </pages>
    <profile defaultProvider="DefaultProfileProvider">
      <providers>
        <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
      </providers>
    </profile>
    <membership defaultProvider="DefaultMembershipProvider">
      <providers>
        <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
      </providers>
    </membership>
    <roleManager defaultProvider="DefaultRoleProvider">
      <providers>
        <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
      </providers>
    </roleManager>
    <!--
            If you are deploying to a cloud environment that has multiple web server instances,
            you should change session state mode from "InProc" to "Custom". In addition,
            change the connection string named "DefaultConnection" to connect to an instance
            of SQL Server (including SQL Azure and SQL  Compact) instead of to SQL Server Express.
      -->
    <sessionState mode="InProc" customProvider="DefaultSessionProvider">
      <providers>
        <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" />
      </providers>
    </sessionState>
  </system.web>
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
    <modules runAllManagedModulesForAllRequests="true" />
    <handlers>
      <remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" />
      <remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" />
      <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
      <add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" />
      <add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" />
      <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
  </system.webServer>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="1.0.0.0-4.0.0.0" newVersion="4.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Web.WebPages" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
  <entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
      <parameters>
        <parameter value="v11.0" />
      </parameters>
    </defaultConnectionFactory>
  </entityFramework>
</configuration>

 

Zoinks, that is a lot of configuration. 

Well, actually, exactly how much “configuration” is there?

Well, actually, the answer is one line:

<add name="DefaultConnection"
     providerName="System.Data.SqlClient" 
     connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-MMDB.AzureSample.Web-20130117123218;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnet-MMDB.AzureSample.Web-20130117123218.mdf" />

That is the ONLY line that is going to change from environment to environment.   Everything else there is not configuration, it’s code.

OK sure, it’s in a “configuration” file.  Who cares.  It is tied to your code, and it should change about as often as your code changes, not more.  It certainly should not change between environments.  My general rule is, “if any change to it should be checked into source control, it’s code.”  The sooner you stop pretending that they are different in a way that gives you an enhanced level of configuration flexibility, the sooner you’ll be a happier person.  Trust me.

The problem here is that the web.config is a confused little person.  It does try to configure stuff, but it is actually configuring two types of stuff.  As far as most people are concerned, it is for configuring their application, but the other 90% of it that they never touch and usually don’t understand is for configuring the underlying .NET and IIS guts, not your application directly.  And once you’ve coded your application, that 90% should never change from environment to environment, unless your code is changing as well.

And of course that code does change over time.  If you add a WCF web service proxy client to your application, it’s going to fill your web.config up with all sorts of jibber-jabber that you better not touch unless you know what you are doing.  But deep inside there is the endpoint URL that DOES need to change from environment to environment.
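Just to give you a feel for the ratio of noise to actual configuration, a generated WCF client section looks something like this (trimmed way down, and all of the names here are illustrative):

<system.serviceModel>
  <client>
    <!-- Pages of generated binding settings omitted; that stuff is code. -->
    <!-- The address below is the ONE thing that varies by environment: -->
    <endpoint address="http://qa.example.com/OrderService.svc"
              binding="basicHttpBinding"
              bindingConfiguration="BasicHttpBinding_IOrderService"
              contract="ServiceReference.IOrderService"
              name="BasicHttpBinding_IOrderService" />
  </client>
</system.serviceModel>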

Again, this is where the “have a web.config for every environment” approach really breaks down, because now you have to go through and update every one of those web.configs to add in all that crazy WCF stuff.  And try not to screw it up.

So What?

So what can we do about it?  We have a few options:

One option is to put all of the configuration in the database.  This can introduce a lot of issues; when you configure your application to point to the database that configures your application, you run into all sorts of codependency issues that make your environments really fragile.  The only time I’ve seen this be a good idea is when you have really specific change control rules about not being able to touch configuration files on the server outside of an official deployment, but configuring settings in the database through an administration page would be allowed.

I think these two options are preferable:

  • Drop a brand new web.config on every deployment and have your deployment utility go in and reconfigure it, either using web.config transformations, XSLT, or a basic XML parser.
  • Use the configSource attribute on your settings.  This lets you put all of your connectionStrings or appSettings in a different file, which is NOT updated from source control in every deployment.  This way you can always drop the latest web.config without having to worry about reconfiguring it.  (If you’re using Azure, this goes a step farther, with a completely separate file reserved for environment configuration, outside of your web application package.)

Both of these options work well.  The first option works better if you have only a few settings, or if you need to update something that does not support a configSource attribute, like a WCF endpoint.  The second option works better if you have a whole list of settings and can consolidate them into the connectionStrings, appSettings, etc.
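To make those concrete, here is a minimal sketch of each approach (all of the server names and values are illustrative).  For the first option, a web.config transformation overlay, say a Web.Production.config, patches just the pieces that vary:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Find the entry with the matching name and overwrite its attributes -->
    <add name="DefaultConnection"
         connectionString="Data Source=ProdServer;Initial Catalog=MyApp;User ID=produser;Password=..."
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>

For the second option, the web.config that gets dropped on every deployment just points at an external file that the deployment never touches:

<!-- In web.config, which is overwritten on every deployment: -->
<connectionStrings configSource="connectionStrings.config" />

<!-- In connectionStrings.config, which lives on the server: -->
<connectionStrings>
  <add name="DefaultConnection"
       providerName="System.Data.SqlClient"
       connectionString="Data Source=QAServer;Initial Catalog=MyApp;User ID=qasite;Password=..." />
</connectionStrings>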

But either way, no matter what, ALWAYS drop a new web.config, and ALWAYS make sure you have a plan to treat YOUR configuration differently than the rest of the web.config.

Good Luck.

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub: https://github.com/mmooney/MMDB.AzureSample.  GitHub commit checkpoints are referenced throughout the posts.

Prerequisites

I’m assuming that you have Visual Studio 2012.  Now go install the latest Windows Azure SDK.  Go go go.

Getting Started

So the first thing we are going to do is build a simple Windows Azure web application.  To do this, we’ll create a new VS2012 solution:

[screenshot]

 https://github.com/mmooney/MMDB.AzureSample/tree/76b9bbcd11146bca026b815314df907406b99048

And we’ll create a plain old MVC web application:

[screenshot]

[screenshot]

[screenshot]

Now we have an empty project, and after adding in a Home controller and a view, we have ourselves a simple but working MVC web application.

[screenshot]

[screenshot]

 

Now Let’s Azurify This Fella

Now if we want to deploy this to Azure, we need to create an Azure project for it. Right-click on your project and select “Add Windows Azure Cloud Service Project”

[screenshot]

That will add a bunch of Azure references to your MVC app, and will create a new wrapper project:

[screenshot]

Now you can still run the original web application the same as before, but if you run the new Azure project, you’ll get this annoying, albeit informative, error message:

[screenshot]

Ok so let’s shut it all down and restart VS2012 in Administrator mode.

(Tip: if you have a VS2012 icon on your Windows taskbar, CTRL+SHIFT-click it to start in Admin mode)

When we come back in Admin mode and run the Azure project, it’s going to kick up an Azure emulator:

[screenshot]
[screenshot]

And we get our Azure app, which looks much the same as our existing app, but running on another port:

[screenshot]

 

The idea here is to simulate what will actually happen when the application runs in Azure, which is a little different from running in a regular IIS web application.  There are different configuration approaches, and the Web/Worker Roles get fired up.  This is very cool, especially when you are getting started or migrating an existing site, because it gives you a nice test environment without having to upload to Azure all the time.

However, the simulator does have its downsides.  First, requiring Administrator mode is annoying.  I forget to do this EVERY TIME, and so right when I’m about to debug the first time, I have to shut everything down and restart Visual Studio and reopen my solution in Admin mode.  Not the end of the world, but an annoying bit of friction.  Second, it is SLOW to start up the site in the simulator; not unusably slow, but noticeably and annoyingly slow, so I guess it’s almost unusably slow.

To combat this, I try to make sure that my web application runs fine all the time as a regular .NET web application, and then I just test from there.  Then before I release a new feature, I test it out in  simulator mode to sanity check, but being able to run as a vanilla web application makes everything a lot faster.

Also, and this is important, it forces you to keep your web application functioning independent of Azure.  Besides the obvious benefit of faster debuggability, it also ensures that your application has enough seams that if you had to move away from Azure, you can.  I’ve gone on and on about how great Azure is, but it might not be the right thing for everyone, or might stop being the right thing in the future, and you want to have the option to go somewhere else, so you really don’t want Azure references burned in all over the place.  Even if you stay with Azure, you might want to replace some of their features (like replacing Azure Table Storage with RavenDB, or replacing Azure Caching with Redis).  We’ve used a few tricks for this in the past that I’ll get into in some later blog posts.
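As a quick taste of what one of those seams can look like, you can funnel all of your configuration reads through a helper that checks whether you are actually running under Azure.  This is just a sketch using the Azure SDK’s RoleEnvironment class (the helper itself is something I’m making up here):

using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class ConfigHelper
{
    public static string GetSetting(string key)
    {
        // In the emulator or the real cloud, read from the Azure
        // service configuration...
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetConfigurationSettingValue(key);
        }
        // ...but as a vanilla web app, fall back to plain old appSettings,
        // so the site runs fine with no Azure in sight.
        return ConfigurationManager.AppSettings[key];
    }
}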

https://github.com/mmooney/MMDB.AzureSample/tree/a56beb73e44b025d90570978541f83f3622e9eac

Next

Next we’ll actually deploy this thing to Azure, but first we need to set up an account.  Get your checkbook ready (just kidding).

If I had a nickel for every time our deployment strategy for a new or different environment was to edit a few config files, run some batch files, edit some more config files, and then watch it all go down in a steaming pile of failure, I would buy a LOT of Sriracha.

[image]

(Picture http://theoatmeal.com/)

Here’s a config file.  Let’s say we need to edit that connection string:

<Setting name="ConnectionString" 
value="Data Source=(local); Initial Catalog=SportsCommander; Integrated Security=true;" />

Now let’s say we are deploying to our QA server.  So after we deploy, we fire up our handy Notepad, and edit it:

<Setting name="ConnectionString" 
value="Data Source=SCQAServ; Initial Catalog=SportsCommander; Integrated Security=true;" />

OK good.  Actually, not good.  The server name is SCQASrv, not SCQAServ.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; Integrated Security=true;" />

OK better.  But wait, integrated security works great in your local dev environment, but in QA we need to use a username and password.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; UserID=qasite; Password=&SE&RW#$" />

OK cool.  Except you can’t put & in an XML file.  So we have to encode that.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; UserID=qasite; Password=&amp;SE&amp;RW#$" />

And you know what?  It’s User ID, not UserID.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; User ID=qasite; Password=&amp;SE&amp;RW#$" />

OK, that’s all there is to it!  Let’s do it again tomorrow.  Make sure you don’t burn your fingers on this blistering fast development productivity.

I know this sounds absurd, but the reality is that for a lot of people, this really is their deployment methodology.  They might have production deployments automated, but their lower environments (DEV/QA/etc) are full of manual steps.  Or better yet, they have automated their lower environments because they deploy there every day, but their production deployment is manual because they only do it once per month.

And you know what I’ve learned, the hard and maddeningly painful way?  Manual process fails.  Consistently.  And more importantly, it can’t be avoided.

Storytime

A common scenario you see: a developer or an operations person (but of course never both at the same time, that would ruin the blame game) is charged with deploying an application.  After many iterations, the deployment process has been clearly defined as 17 manual steps.  This has been done enough times that the whole process is fully documented, with a checklist, and the folks running the deployment have done it enough times that they could do it in their sleep.

The only problem is that in the last deployment, one of the files didn’t get copied.  The time before that, the staging file was copied instead of the production file.  And the time before that, they put a typo into the config.

Is the deployer an idiot?  No, as a matter of fact, the reason that he or she was entrusted with such an important role was that he or she was the most experienced and disciplined person on the team and was intimately familiar with the workings of the entire system.

Were the instructions wrong?  Nope, the instructions were followed to the letter.

Was the process new?  No again, the same people have been doing this for a year.

At this point, the managers are exasperated, because no matter how much effort we put into formalizing the process, no matter how much documentation and how many checklists, we’re still getting failures.  It’s hard for the managers not to assume that the deployers are morons, and the deployers are faced with the awful reality of going into every deployment knowing that it WILL be painful, and they WILL get blamed.

Note to management: Good people don’t stick around for this kind of abuse.  Some people will put up with it.  But trust me, you don’t want those people.

The lesson

The kick in the pants is, people are human.  They make mistakes.  A LOT of mistakes.  And when you jump down their throat on every mistake, they learn to stop making mistakes by not doing anything.

This leads us to Mooney’s Law Of Guaranteed Failure (TM):

In the software business, every manual process will suffer at least a 10% failure rate, no matter how smart the person executing the process.  No amount of documentation or formalization will truly fix this, the only resolution is automation.

 

So the next time Jimmy screws up the production deployment, don’t yell at him (or sneer behind his back) “how hard is it to follow the 52-step 28-page instructions!”  Just remember that it is virtually impossible.

Also, step back and look at your day-to-day development process.  Almost everything you do during the day besides writing code is a manual process full of failure (coding is too, but that’s what you’re actually getting paid for).  Like:

  • When you are partially checking in some changes to source control but trying to leave other changes checked out
  • When you need to edit a web.config connection string every time you get latest or check in
  • When you are interactively merging branches
  • When you are doing any deployment that involves editing a config or running certain batch files in order or entering values into an MSI interface, or is anything more than “click the big red button”
  • When you are setting up a new server and creating users or editing folder permissions or creating MSMQ queues or setting up IIS virtual directories
  • When you are copying your hours from Excel into the ridiculously fancy but still completely unusable timesheet website
  • When, instead of entering your hours into a timesheet website, you are emailing them to somebody
  • When you are trying to figure out which version of “FeatureRequirements_New_Latest_Latest.docx” is actually the “latest”
  • When you are deploying database changes by trying to remember which tables you added to your local database or which scripts have or have not been run against production yet

It’s actually easier to find these things than you think.  The reason is, again, it is just about everything you do all day besides coding.  It’s all waste.  It’s all manual.  And it’s all guaranteed to fail.  Find a way to take that failure out of your hands and bathe it in the white purifying light of automation.  Sure it takes time, but with a little time investment, you’ll be amazed how much time you have when you are not wasting it with amazingly stupid busywork and guaranteed failure all day.

Overview

This is the first in a series of blog posts on getting started with building .NET applications in Windows Azure.  We’ve been big fans of Azure for a lot of years, and we’ve used it for SportsCommander.com’s event registration site since the very beginning.

I started off writing a blog post on automatically deploying web applications to Azure from TeamCity, but I ended up with too many “this blog assumes…” statements, so I figured I should take care of those assumptions first.

What the deuce is Windows Azure and why should I care?

According to Wikipedia, Windows Azure is:

Windows Azure is a Microsoft cloud computing platform used to build, deploy and manage applications through a global network of Microsoft-managed datacenters. Windows Azure allows for applications to be built using many different programming languages, tools or frameworks and makes it possible for developers to integrate their public cloud applications in their existing IT environment. Windows Azure provides both Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) services and is classified as the “Public Cloud” in Microsoft’s cloud computing strategy, along with its Software as a Service (SaaS) offering, Microsoft Online Services.

 

According to me, Windows Azure is:

A hosting platform, sort of like Amazon EC2, because you can deploy to virtual machines that abstract away all of the physical configuration junk that I don’t want to care about, but even better because it also abstracts away the server configuration stuff that I also don’t want to care about, so I can just build code and ship it up there and watch it run without having to care about RAID drives or network switches or load balancers or whether someone is logging into these servers and running Windows Update on them.

 

Azure has grown into a lot of things, but as far as I’m concerned, Azure’s primary product is a Platform-as-a-Service (PaaS) offering called Cloud Services.  Cloud Services lets you use a combination of Web Roles and Worker Roles to run web applications and background services.

Glossary

These types of terms get thrown around a lot these days, so let’s define them.

Before the cloud came in to overshadow our whole lives, we had these options:

  • Nothing-as-a-Service: You went to Best Buy and bought a “server.”  You’re running it under your desk.  Your site goes down when your power goes out or someone kicks the plug out of the wall.  Or when your residential internet provider changes your IP because you won’t shell out the money for a business account with static IPs.  Then the hard drive fan dies and catches fire, your mom complains about the burning smell and tells you to get a real job.
  • Co-Location: This is a step up.  You still bought the server and own it, but you brought it down the street to a hosting company that takes care of it for you.  You are still responsible for the hardware and software, and when the hard drive dies you have to shlep down to the store to get a new one.
  • Dedicated Hosting: So you still have a single physical box, but you don’t own it, you rent it from the data center.  This costs hundreds to thousands per month, depending on how fancy you want to get.  You are responsible for the software, but they take care of the hardware.  When a network card dies, they swap it out for a new one.
  • Shared Hosting: Instead of renting a whole server, you just rent a few folders.  This option is very popular for very small sites, and can cost as little as $5-$10/month.  You have very little control over the environment though, and you’re fighting everyone else on that server for resources.
  • Virtual Hosting: A cross between Dedicated and Shared Hosting.  You get a whole machine, but it’s a virtual machine (VM) running on a physical machine with a bunch of other virtual machines.  This is the groundwork for Infrastructure-as-a-Service.  You get a lot more control of the operating system, and supposedly you are not fighting with the other VMs for resources, but in reality there can always be some contention, especially for disk I/O.  The cost is usually significantly less than dedicated hosting.  You don’t care at all about the physical machines, because if one piece of physical hardware fails, you can be transferred to another physical machine.

 

In today’s brave new cloudy buzzword world, you have:

  • Infrastructure-as-a-service: This is basically Virtual Hosting, where you get a virtual machine and all of the physical machine info is abstracted away from you.  You say you want a Windows 2008 Standard Server, and in a few minutes you have a VM running that OS.  Amazon EC2 is the classic example of this.
  • Platform-as-a-Service: This is one level higher in the abstraction.  It means that you write some code, and package it up in a certain way, give it some general hosting information like host name and number of instances, and then the hosting company takes it from there.  Windows Azure is an example of this, along with Google App Engine.
  • Software-as-a-Service (SaaS): This means that someone is running some software that you depend on.  Either you interact with it directly, or your software interacts with it.  You don’t own or write or host any code yourself.  The classic example of this is SalesForce.com.

 

So why are Azure and PaaS more awesome than the other options?

Because it lets me focus on the stuff that I really care about, which is building software.  As long as I follow the rules for building Azure web applications, I don’t have to worry about any of the operations stuff that I’m really not an expert in, like have I applied the right Windows updates and is my application domain identity set up correctly and how do I add machines to the load balancer and a whole lot of other stuff I don’t even know that I don’t know.

Some IT folks balk at this and insist that you should control your whole stack, down to the physical servers.  That is a great goal once you get big enough to hire those folks, but when you are getting started in a business, your time is your most valuable asset, and you need a zero-entry ramp and you need to defer as much as possible to experts.  If you are spending time running Windows Updates on your servers when you are the only developer and you could be coding, you are robbing your company blind.

Shared hosting platforms came close to solving this problem.  As long as your website was just a website, and it was small, you could host it on a shared hosting service and not worry about anything, until somebody else pegs the CPU or memory.  Or until you need to go outside the box a little and run a persistent scheduled background process.  Also, scaling across multiple servers is pretty much out of the question; you are stuck with “less than one server” of capacity, and you can never go higher.

But after you grow out of shared hosting and move up to dedicated hosting or virtual hosting, it costs a whole lot more per month (like 5x or 10x), and the increased maintenance effort is even worse.  It’s a pretty steep cliff to jump off from shared to dedicated/virtual hosting.

Azure fills that gap nicely.  You are still just focusing on the application instead of the server, but you get a lot more power with features like Worker Roles and Azure Storage, and you can even expand out into full-blown VMs if you really need it.

Ah ha, VMs!  What about them?  And Azure Websites?

By the time you’ve read this blog post, I’m sure the Azure team will have come out with 27 new features.  Ever since Scott Gu took over Azure, the rate at which they’ve been releasing new features has gotten a little ridiculous.  Two of the more interesting features are Azure VMs and Azure Websites.

Azure VMs were a late feature that it seems like the Azure team didn’t even really want to add.  Every Azure web instance is actually a VM, so this lets you remote into the underlying machine like it was a regular Windows server, or even create new VMs by uploading an existing VM image.  This was introduced so that companies could have an easier migration path to Azure.  If their app still needed some refactoring to fit cleanly into an Azure web or worker role, or it had dependencies on other systems that would not fit into an Azure application, it gives them a bridge to get there, instead of having to rewrite the whole world in one day.  But to be clear, this was not introduced because it’s a good idea to run a bunch of VMs in Azure, because it misses out on the core abstraction and functionality that Azure offers.  If you really just want VMs, just go to Amazon EC2; they are the experts.

Azure Websites are a more recent feature (still in Beta) which mimics shared hosting in the Azure world.  While the feature set is more involved than your run-of-the-mill shared hosting platform, it does not nearly give you the power that Azure Cloud Services provides.  They work best with simple or canned websites, like DotNetNuke, Orchard CMS, or WordPress.  In fact, right now we’re testing out moving this blog and the MMDB Solutions website to Azure Websites to consolidate and simplify our infrastructure.

The End…?

In the coming blog posts, I’ll cover some more stuff like creating an account, setting up an Azure web application, deploying it, dealing with SQL Azure, and lots more.  Stay tuned.

So your application needs to send emails.  So you start with this:

var message = new MailMessage();
message.From = new MailAddress("donoreply@example.com");
message.To.Add(new MailAddress("jimbob@example.com"));
message.Subject = "Password Reset";
message.Body = "Click here to reset your password: "
                      + "http://example.com/resetpassword?token="
                      + resetToken;

var smtpClient = new SmtpClient();
smtpClient.Send(message);

 

And that works OK, until you want to send some more detailed, data-driven emails. 

Duh, that’s what StringBuilder is for
var message = new MailMessage();
message.From = new MailAddress("donoreply@example.com");
message.To.Add(new MailAddress("jimbob@example.com"));
message.Subject = "Order Confirmation";

StringBuilder body = new StringBuilder();
body.AppendLine(string.Format("Hello {0},", 
                    customer.FullName));
body.AppendLine();
body.AppendLine(string.Format("Thank you for your order.  "
                    + "Your order number is {0}.", 
                    order.OrderNumber));
body.AppendLine("Your order contained:");
foreach(var item in order.LineItems)
{
    body.AppendLine(string.Format("\t-{0}: {1}x{2:c}={3:c}",
                    item.ProductName, item.Quantity,
                    item.UnitPrice, item.SubTotal));
}
body.AppendLine(string.Format("Order Total: {0:c}", 
                    order.OrderTotal));
message.Body = body.ToString();

var smtpClient = new SmtpClient();
smtpClient.Send(message);

Yes, this is certainly the wrong way to do it.  It’s not flexible, you have to change code every time the email content changes, and it’s just plain ugly.

On the other hand, much of the time (especially early in a project), this is just fine.  Step 1 is admitting you have a problem, but step 0 is actually having a problem in the first place to admit to.  If this works for you, run with it until it hurts.

I have a lot of code running this way in production right now, and it works swimmingly, because if there is a content change I can code it, build it, and deploy it to production in 15 minutes.  If your build/deploy cycle is short enough, there is virtually no difference between content changes and code changes.

 

Back to the real problem please

But let’s say you really do want to be more flexible, and you really do need to be able to update the email content without rebuilding/redeploying the whole world.

How about a tokenized string?  Something like:

string emailTemplate = "Hello {FirstName} {LastName},"
                       + "\r\n\r\nThank you for your order...";

That could work, and I’ve started down this path many times before, but looping messes you up.  If you needed to include a list of order line items, how would you represent that in a template?
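To see why, here is roughly what the token approach looks like in practice (a sketch; the tokens and model properties are made up):

// Simple token replacement: works fine for flat, one-off values...
string body = emailTemplate
    .Replace("{FirstName}", customer.FirstName)
    .Replace("{LastName}", customer.LastName)
    .Replace("{OrderNumber}", order.OrderNumber.ToString());

// ...but there is no natural token for "repeat this chunk once per
// line item", so the moment you need a loop you end up inventing your
// own looping syntax and writing a mini template engine to parse it.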

What else?  If you are in 2003, the obvious answer is to build an XSLT stylesheet.  Serialize that data object as XML, jam it into your XSLT processor, and BAM, you have nicely formatted HTML email content.  Except writing those stylesheets is a nightmare.  Maintaining them is even worse.  If you don’t have interns that you hate, you’re going to be stuck with it.

So yes, of course you could use XSLT.  Or you could just shoot some heroin.  Both will make you feel good very briefly in the beginning, but both will spiral out of control and turn your whole life into a mess.  Honestly, I would not recommend either.

OK, so how about some MVC templatey type things?

The whole templating idea behind XSLT is actually a good idea, it’s just the execution that is painful.  We have an object, we have a view that contains some markup and some presentation-specific logic, we put them all into a view engine blender and get back some silky smooth content.

If you were in an ASP.NET MVC web application, you could use the Razor view engine (or the WebForms view engine, if you’re into that kind of thing) to run the object through the view engine and get some HTML, but that plumbing is a little tricky.  Also, what if you are not in an MVC web app, or any web app at all?  If you are looking to offload work from your website to background processes, moving all of your email sending to a background Windows service is a great start, but it’s tough to extract out and pull in that Razory goodness.

Luckily some nice folks extracted all of that out into a standalone library (https://github.com/Antaris/RazorEngine), so you can execute Razor views against objects in a few lines of code:

string view = "Hello @Model.CustomerFirstName @Model.CustomerLastName, thank you for your order.";
var model = new { CustomerFirstName="Ty", CustomerLastName="Webb" };
var html = RazorEngine.Razor.Parse(view, model);
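And remember the looping problem that killed the tokenized string idea?  Razor handles it naturally.  Here’s a quick sketch (assuming a couple of simple illustrative model classes, OrderEmailModel and LineItem):

string view = @"<p>Hello @Model.FirstName,</p>
<ul>
@foreach(var item in Model.LineItems)
{
    <li>@item.ProductName: @item.Quantity x @item.UnitPrice</li>
}
</ul>";
var model = new OrderEmailModel
{
    FirstName = "Ty",
    LineItems = new List<LineItem>
    {
        new LineItem { ProductName = "Driver", Quantity = 1, UnitPrice = 399.99m }
    }
};
var html = RazorEngine.Razor.Parse(view, model);

And since the view is just a string, it can live in a database or a content file and be updated without rebuilding or redeploying anything.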

 

That is awful pretty.  But of course we still need to wrap that up with some SMTP code, so let’s take it a step farther.

Razor Chocolate + SMTP Peanut Butter = MMDB.RazorEmail

Let’s say I’m lazy.  Let’s say I just want to do this:

new RazorEmailEngine().SendEmail("Order Receipt", model, view, 
                        new List<string>{"jimbob@example.com"}, 
                        "donoreply@example.com");

I couldn’t find anything that was this simple, so I built one.  Here it is: https://github.com/mmooney/MMDB.RazorEmail

Just import that into your project via NuGet (https://nuget.org/packages/MMDB.RazorEmail/), and you can start sending emails right away. 

It works in both web apps and Windows apps (we wrote it specifically because we needed to support this from a Windows Service).

It can use your app.config/web.config settings, or you can pass in different values.  It also has some rudimentary attachment handling that will need to be improved.

Take a look, try it out, and let me know how it works for you at @mooneydev

Intro

So enums are awesome.  They greatly simplify your ability to restrict data fields in a clear and self-descriptive way.  The C# implementation of enums has alleviated the need for all of those awful “item codes”, magic numbers, and random unenforced constants that can be the source of so many bugs.

However, nothing is perfect, and so there can be some rough edges when working with enums.  Everyone ends up writing the same bunch of plumbing code, or taking shortcuts that are not as type-safe as they could be.  MMDB encountered this on several projects at the same time, so we put it all in a helper library (yes, this met our rigid definition of worthwhile reusability).  Recently we put this up on GitHub (https://github.com/mmooney/MMDB.Shared) and NuGet (http://nuget.org/packages/MMDB.Shared), so please help yourself.

Is this groundbreaking? Absolutely not. In fact, some may not even consider it all that useful.  But it’s been helpful for us, and it’s one of the first NuGet packages I pull into each new application, so hopefully it can help others simplify some of their code.

Anyhow, let’s get down to it. We’ll run through the problems we encountered with enums, and what we’ve done to solve them.

Parsing Enums

One of the most annoying things with enums is trying to parse random input into a strictly typed enum value. Say you have a string that has come from a database or user input, and you think the value should align nicely. You end up with something like this:

string input = "SomeInputValue";
var enumValue = (EnumCustomerType)Enum.Parse
                 (typeof(EnumCustomerType), input);

 

Ugh. First, you have to pass in the type and cast the result, which is ugly, and should never be necessary since generics were introduced in .NET 2.0. Also, you have to hope that “SomeInputValue” is a valid value for the enum, otherwise you get a wonderful System.ArgumentException, with a  message like “Additional information: Requested value ‘SomeInputValue’ was not found.”, which is moderately helpful.

In .NET 4, we finally got a strictly-typed Enum.TryParse:

string input = "SomeInputValue";
EnumCustomerType enumValue;
if(Enum.TryParse<EnumCustomerType>(input, out enumValue))
{
//...
}

This is much better, but can still be a little clunky.  You have to play the temporary variable game that all .NET TryParse methods require, and if you want to actually throw a meaningful exception you are back to calling Enum.Parse or writing the check logic yourself.

So let’s try to simplify this a little with the EnumHelper class in MMDB.Shared.  Our basic Parse is nice and concise:

string input = "SomeInputValue";
var enumValue = EnumHelper.Parse<EnumCustomerType>(input);

And if this is not a valid value, it will throw an exception with the message “Additional information: Unrecognized value for EnumCustomerType: ‘SomeInputValue'”.  I always find it a little more helpful to have exceptions tell you exactly what they were trying to do when they failed, not just that they failed.

Also, it has a strictly typed TryParse method that does not require any temp values.  It returns a nullable enum, with which you can do whatever you like:

string input = "SomeInputValue";
EnumCustomerType? enumValue = 
             EnumHelper.TryParse<EnumCustomerType>(input);
return enumValue.GetValueOrDefault
             (EnumCustomerType.SomeOtherInputValue);
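By the way, if you just want the nullable TryParse trick without pulling in a library, it is only a few lines on .NET 4.  A rough sketch (this is not the actual MMDB.Shared implementation):

public static class MyEnumHelper
{
    public static T? TryParseNullable<T>(string value) where T : struct
    {
        // Hide the temp-variable dance inside the helper, and hand back
        // a nullable so the caller can use GetValueOrDefault, ??, etc.
        T result;
        return Enum.TryParse<T>(value, out result) ? (T?)result : null;
    }
}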

Enum Display Values

The next problem we run into is assigning display values to enums, and more importantly, easily retrieving them.

OK, so this has been done a few times.  In most cases, people have used the System.ComponentModel.DescriptionAttribute class to assign display values to enums:

public enum EnumCustomerType 
{
    [Description("This is some input value")]
    SomeInputValue,
    [Description("This is some other input value")]
    SomeOtherInputValue
}

Also the MVC folks introduced a DisplayAttribute in System.ComponentModel.DataAnnotations:

public enum EnumCustomerType 
{
	[Display(Name = "This is some input value")]
	SomeInputValue,
	[Display(Name = "This is some other input value")]
	SomeOtherInputValue
}

So lots of options there, with lots of dependencies.  Anyhow, to keep it minimal, we created a simple class specifically for enum display values:

public enum EnumCustomerType 
{
	[EnumDisplayValue("This is some input value")]
	SomeInputValue,
	[EnumDisplayValue("This is some other input value")]
	SomeOtherInputValue
}

What’s fun about this is that you can now get your display value in a single line:

public enum EnumCustomerType 
{
	[EnumDisplayValue("This is some input value")]
	SomeInputValue,
	[EnumDisplayValue("This is some other input value")]
	SomeOtherInputValue
}
//...
//Returns "This is some input value"
string displayValue = EnumHelper.GetDisplayValue
                       (EnumCustomerType.SomeInputValue);

And if you don’t have an EnumDisplayValue set, it will default to the stringified version of the enum value:

public enum EnumCustomerType 
{
	[EnumDisplayValue("This is some input value")]
	SomeInputValue,
	[EnumDisplayValue("This is some other input value")]
	SomeOtherInputValue,
	SomeThirdValue
}
//...
//Returns "SomeThirdValue"
string displayValue = EnumHelper.GetDisplayValue
                        (EnumCustomerType.SomeThirdValue);
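There is no deep magic behind any of this, by the way.  A display value lookup like that boils down to a reflection query against the enum field’s attributes.  Roughly (a sketch, not the actual library source, and assuming the attribute exposes a DisplayValue property):

public static string GetDisplayValue(Enum value)
{
    // Find the field for this enum value (e.g. "SomeInputValue")...
    var field = value.GetType().GetField(value.ToString());
    // ...look for our display value attribute on it...
    var attribute = (EnumDisplayValueAttribute)Attribute.GetCustomAttribute(
                        field, typeof(EnumDisplayValueAttribute));
    // ...and fall back to the stringified enum value if there isn't one.
    return attribute == null ? value.ToString() : attribute.DisplayValue;
}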

Databind Enums

Next, let’s do something a little more useful with the enums.  Let’s start databinding them.

Normally if you want to display a dropdown or a radio button list or other type of list control to select an enum, you either have to manually create all of the entries (and then make sure they stay in sync with the enum definition), or write a whole bunch of plumbing code to dynamically generate the list of enum values and bind them to the control.  And if you want to include display values for your enums, it’s even worse, because you have to map the enum values and display values into an object that exposes those fields to the DataTextField and DataValueField in the list control.  Meh.

Or, you can just do this:

EnumHelper.DataBind(dropDownList, typeof(EnumCustomerType));

This will retrieve the enum values and their display values, put them into a list, and bind it to the control.

I know what you’re going to say.  “But I want to hide the zero enum value because that is really the ‘None’ value”:

EnumHelper.DataBind(comboBox, typeof(EnumCustomerType), 
             EnumHelper.EnumDropdownBindingType.RemoveFirstRecord);

Or, “I want to display the first zero record in my combobox, but I want to clear it out because it’s not a real valid selection, it’s just the default”:

EnumHelper.DataBind(comboBox, typeof(EnumCustomerType), 
             EnumHelper.EnumDropdownBindingType.ClearFirstRecord);

Or even, “yeah I want that blank first record in my combobox, but I don’t have a zero/none value defined in my enum values, so I want to add that when databinding”:

EnumHelper.DataBind(comboBox, typeof(EnumCustomerType), 
             EnumHelper.EnumDropdownBindingType.AddEmptyFirstRecord);

Conclusion

So again, nothing groundbreaking here.  Hey, it may not even be the best way to handle some of this stuff.  But it works great for us, removes a lot of the unnecessary noise from our code, and makes it a lot easier to read our code and get to the intent of what is being done without a whole lot of jibber-jabber about the how.

Hopefully this can make one part of your coding a lot easier.  Any feedback or suggestions welcome, or better yet, submit a patch. :)

So a while ago I wrote about my adventures in SQL Azure backups.  At the time, there was very little offered by either Microsoft or tool vendors to provide an easy solution for scheduling SQL Azure backups.  So in the end, I cobbled together a solution involving batch files, Task Scheduler, and most importantly Red Gate SQL Compare and SQL Data Compare.

But much has changed in the past year.  Red Gate released their new SQL Azure Backup product, whose functionality looks freakishly similar to other less polished solutions that people had written about.  The cool part is that while the SQL Compare solution I proposed originally required a purchased copy of the Red Gate SQL tools, Red Gate has been nice enough to release their Azure backup tool for free.

Also, Microsoft has released a CTP version of their SQL Import/Export Service.  This service allows you to back up and restore your database using Azure Blob storage instead of having to download it to a local database server, which is actually what most of us really wanted in the first place anyway.  The latest versions of Red Gate’s Azure Backup also support this functionality, which gives you a lot of options.

So just to close the loop on this, here’s the updated batch script we’re using for SportsCommander now for doing regular production backups of our database.  We’re opting to use the Import/Export functionality as our primary backup strategy:

SET SqlAzureServerName=[censored]
SET SqlAzureUserName=[censored]
SET SqlAzurePassword=[censored]
SET SqlAzureDatabaseName=[censored]

SET AzureStorageAccount=[censored]
SET AzureStorageKey=[censored]
SET AzureStorageContainer=[censored]

for /f "tokens=1-4 delims=/- " %%a in (‘date /t’) do set XDate=%%d_%%b_%%c
for /f "tokens=1-2 delims=: " %%a in (‘time /t’) do set XTime=%%a_%%b

SET BackupName=SportsCommander_Backup_%XDate%_%XTime%


C:\SQLBackups\RedGate.SQLAzureBackupCommandLine.exe /AzureServer:%SqlAzureServerName% /AzureDatabase:%SqlAzureDatabaseName% /AzureUserName:%SqlAzureUserName% /AzurePassword:%SqlAzurePassword% /CreateCopy /StorageAccount:%AzureStorageAccount% /AccessKey:%AzureStorageKey% /Container:%AzureStorageContainer% /Filename:%BackupName%.bacpac

 

A few notes:

- This runs the same Import/Export functionality you can get through the Azure portal.  If you have any problems with the parameters here, you can experiment in the Azure portal.

- The AzureStorageAccount parameter is the account name of your storage account.  So if your blob storage URL is http://myawesomeapp.blob.core.windows.net, you would want to use “myawesomeapp” in this parameter.

- The /CreateCopy parameter will use SQL Azure’s CREATE DATABASE AS COPY OF method to create a snapshot first and then back that up, instead of just backing up the live database.  This takes a little extra time, but it is important to ensure that you are getting a transactionally consistent backup.
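- For reference, the snapshot that /CreateCopy triggers is SQL Azure’s database copy feature, which you can also run yourself in plain T-SQL (database names are illustrative):

-- Creates a transactionally consistent copy of the live database
CREATE DATABASE SportsCommander_Copy AS COPY OF SportsCommander;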

 

Of course, if you still want to copy down a local instance of the database like we did in the previous post, you can easily do that too:

SET SqlAzureServerName=[censored]
SET SqlAzureUserName=[censored]
SET SqlAzurePassword=[censored]
SET SqlAzureDatabaseName=[censored]

SET LocalSqlServerName=[censored]
SET LocalSqlUserName=[censored]
SET LocalSqlPassword=[censored]

for /f "tokens=1-4 delims=/- " %%a in (‘date /t’) do set XDate=%%d_%%b_%%c
for /f "tokens=1-2 delims=: " %%a in (‘time /t’) do set XTime=%%a_%%b

SET BackupName=SportsCommander_Backup_%XDate%_%XTime%

C:\SQLBackups\RedGate.SQLAzureBackupCommandLine.exe /AzureServer:%SqlAzureServerName% /AzureDatabase:%SqlAzureDatabaseName% /AzureUserName:%SqlAzureUserName% /AzurePassword:%SqlAzurePassword% /CreateCopy /LocalServer:%LocalSqlServerName% /LocalDatabase:%BackupName% /LocalUserName:%LocalSqlUserName% /LocalPassword:%LocalSqlPassword% /DropLocal

 

Good luck.

How to use SourceGear DiffMerge in SourceSafe, TFS, and SVN

What is DiffMerge

DiffMerge is yet-another-diff-and-merge-tool from the fine folks at SourceGear.  It’s awesome.  It’s head and shoulders above whatever junky diff tool they provided with your source control platform, unless of course you’re already using Vault.  Eric Sink, the founder of SourceGear, wrote about it here.  By the way, Eric’s blog is easily one of the most valuable I’ve read, and while it doesn’t get much love these days, there’s a lot of great stuff there, and it’s even worth going back and reading from the beginning if you haven’t seen it.

Are there better diff tools out there?  Sure, there probably are.  I’m sure you have your favorite.  If you’re using something already that works for you, great.  DiffMerge is just yet another great option to consider when you’re getting started.

You sound like a sleazy used car salesman

Yeah, I probably do, but I don’t work for SourceGear and have no financial interest in their products.  I’ve just been a very happy user of Vault and DiffMerge for years.  And if it increases Vault adoption, both among development shops and development tool vendors, it will make my life easier.

But when I go to work on long-term contracts for large clients, they already have source control in place that they want me to use, which is OK, but when I need to do some merging, it starts getting painful.  I want it to tell me not just that a line changed, but exactly what in that line changed.  I want it to actually be able to tell me when the only change is whitespace.  I want it to offer me a clean and intuitive interface.  Crazy, I know.

Not a huge problem, because DiffMerge is free, and it can plug into just about any source control system, replacing the existing diff/merge tools.  However, those settings can be tricky to figure out, so I figured I’d put together a cheat sheet of how to set it up for various platforms.

Adding DiffMerge To SourceSafe

Let’s start off with those in greatest need, ye olde SourceSafe users.  First and foremost, I’m sorry.  We all feel bad that you are in this position.  SourceSafe was great for what it was, 15 years ago when file shares were considered a reliable data interchange format, but nobody should have to suffer through SourceSafe in this day and age.  But don’t worry, adding in DiffMerge can add just enough pretty flowers to your dung heap of a source control system to make it bearable.  Just like getting 1 hour of yard time when you’ve been in the hole for a week, it gives you something to look forward to.

Anywho, let’s get started.  First, whip out your SourceSafe explorer:

DiffMerge_VSS_1

Here’s what we get for a standard VSS diff:

DiffMerge_VSS_2

Ugh.  So go to Tools->Options and go to the Custom Editors tab.  From there, add the following operations:

Operation: File Difference

File Extension: .*

Command:  [DiffMergePath]\diffmerge.exe --title1="original version" --title2="modified version" %1 %2

Operation: File Merge

File Extension: .*

Command: [DiffMergePath]\diffmerge.exe --title1="source branch" --title2="base version" --title3="destination branch" --result=%4 %1 %3 %2

Now here’s our diff, much less painful:

DiffMerge_VSS_3

But merging is where it really shines:

DiffMerge_VSS_4

Thanks to Paul Roub from SourceGear for the details: http://blog.roub.net/2007/11/diffmerge_in_vss.html

Adding DiffMerge To Subversion

Obviously SVN is worlds better than VSS, but some of the standard tools distributed with TortoiseSVN are a little lacking.  You might say “you get what you paid for,” but you’d only say that if you wanted to tick off a lot of smart and helpful people.

So let’s take a look at a standard diff in SVN:

DiffMerge_SVN_1

Oof.  I’ve used SVN on and off for years, and I still don’t understand what is going on here.

So let’s get this a little mo’ better.  Right-click your folder, and select TortoiseSVN->Settings.  Go to External Programs->Diff Viewer, and enter this external tool:

 [DiffMergePath]\DiffMerge.exe /t1=Mine /t2=Original %mine %base

DiffMerge_SVN_2

Switch over to the Merge Tool screen, and enter this External Tool:

[DiffMergePath]\DiffMerge.exe /t1=Mine /t2=Base /t3=Theirs /r=%merged %mine %base %theirs

DiffMerge_SVN_3

And now our diffs look a little more familiar:

DiffMerge_SVN_4

Thanks Mark Porter for the details: http://www.manik-software.co.uk/blog/post/TortoiseSVN-and-DiffMerge.aspx

Adding DiffMerge To Team Foundation Server

For years I dreamed of using TFS.  I hoped that someday I would work at a company successful and cool enough to invest the money in a TFS solution.  And then I actually got it, and, uh, it seems like a nice enough fella, but it seems that its tendencies towards megalomania have really had some negative consequences on the end-user experience.

Given that, after decades of technological advancement in source control, the TFS diff tool is pretty much just the same ugliness as SourceSafe:

DiffMerge_TFS_1

Get your spelunking helmet on, and we’ll go digging for the settings in TFS to change this.

  • Open up Visual Studio and select Tools->Options
  • Expand the Source Control group, and select Visual Studio Team Foundation Server
  • Click the Configure User Tools button

DiffMerge_TFS_2

Enter the following tool configurations:

Operation: Compare

Extension: .*

Command: [DiffMergePath]\DiffMerge.exe

Arguments: /title1=%6 /title2=%7 %1 %2

Operation: Merge

Extension: .*

Command: [DiffMergePath]\DiffMerge.exe

Arguments: /title1=%6 /title2=%7 /title3=%8 /result=%4 %1 %2 %3 (Corrected, thanks to Rune in the comments!)

Thanks to James Manning for the details: http://blogs.msdn.com/b/jmanning/archive/2006/02/20/diff-merge-configuration-in-team-foundation-common-command-and-argument-values.aspx

The End

So that’s all it takes to make your source control life a little bit easier.  Even if you don’t prefer DiffMerge, I’d suggest you find one you do like, because the built-in tools are usually pretty bad.  Diffing and merging is hard enough as it is, don’t waste precious brain cells on substandard tools.

The Error

So if you are working in SQL Azure, you’ve probably learned the hard way that you can’t just script out your DDL changes in your local SQL Management Studio and run them against your Azure database.  It throws in a whole bunch of extra fancy-pants DBA-y stuff that SQL Azure just doesn’t let you use.

For example, say I throw together a simple table in my local database.  Depending on your SQL source control approach (you have one, right?), you might script it out in SQL Management Studio and get something like this:

CREATE TABLE MyWonderfulSampleAzureTable (
	[ID] [int] IDENTITY(1,1) NOT NULL,
	[GoEagles] [varchar](50) NOT NULL,
	[BeatThemGiants] [varchar](50) NOT NULL,
	[TheCowboysAreAwful] [bit] NOT NULL,
	[AndyReidForPresident] [varchar](50) NULL,
	CONSTRAINT [PK_MyWonderfulSampleAzureTable] PRIMARY KEY CLUSTERED
	(
		[ID] ASC
	) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

 

Pretty enough, no?  Sure, it’s full of a lot of gibberish you don’t really care about, like PAD_INDEX=OFF, but hey if it runs, what’s the problem?

Now, let’s run that against our SQL Azure database:

Msg 40517, Level 16, State 1, Line 10
Keyword or statement option ‘pad_index’ is not supported in this version of SQL Server.

 

Ooops.  This is a pain to fix when you’re deploying a single script.  However, when you’re running a whole development cycle’s worth of these against your production database at 3 AM and it chokes on one of these scripts, it is absolutely brutal.

Censorship Brings Peace

So why does this happen?  Why can’t SQL Azure handle these types of cool features?  Mostly because they just don’t want to.  Sure, some of the features missing from SQL Azure are because they just haven’t been implemented yet, but some of them are deliberately disabled to prevent unmitigated chaos.

While you may have a DBA managing your on-premise database who is working in your best interest (or at least your company’s interest), SQL Azure has a much bigger problem to solve.  They need to provide a shared SQL environment that does not let any one consumer hose up everyone else.  If you’ve ever hosted a SQL database in a high-traffic shared hosting environment, you’ve probably felt the pain of some joker going cuckoo-bananas with the database resources.

In other words, what you do in the privacy of your own home is all well and good, but if you are going to go play bocce in the public park, you’re certainly going to have to watch your language and act like a civilized person.

And a lot of these features you don’t really have to care about anyway.  No doubt, you are really really smart and know when your indexes should be rebuilt, but the reality is that much of the time whatever algorithm the folks on the SQL team came up with is going to be a little bit smarter than you, Mr. SmartyPants.

Anyhow, for your edification, here’s a wealth of information about the stuff you can’t do.

The Manual Workaround

So how do we get our script to run?  My general rule of thumb is to rip out all of the WITH stuff and all of the file group references:

CREATE TABLE MyWonderfulSampleAzureTable (
	[ID] [int] IDENTITY(1,1) NOT NULL,
	[GoEagles] [varchar](50) NOT NULL,
	[BeatThemGiants] [varchar](50) NOT NULL,
	[TheCowboysAreAwful] [bit] NOT NULL,
	[AndyReidForPresident] [varchar](50) NULL,
	CONSTRAINT [PK_MyWonderfulSampleAzureTable] PRIMARY KEY CLUSTERED
	(
		[ID] ASC
	)
)
GO

See, I had a general rule of thumb for this, because I encountered it a lot.  On just about every DDL script I had to generate.  And I missed a lot of them.  Quite the pain in the neck.

 

The Much Better, Not-So-Manual Workaround

So I was at a Philly.NET user group meeting last night and Bill Emmert from Microsoft was walking through the options for migrating SQL Server databases, and he showed us this setting that I wish I had known about a year ago:

SQLAzureSettings

If you change this to SQL Azure Database, it will change your environment settings to always create SQL scripts that are compatible with SQL Azure.  No more manual script trimming!  Unless, of course, you are into that kind of thing, in which case you really wasted the last 10 minutes reading this.

Good Luck.