Category Archives: DevOps

Windows Azure 6.1: Deploying with Configuration via Sriracha Command Line Tools

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub:


In the last post, we covered how to easily deploy an Azure Cloud Service using the new Sriracha 2.0 command line tools, specifically the Deploy Cloud Service Task.

In that post, we just covered the minimum amount of configuration to get a basic Cloud Service site up and running.  In this post, we’ll cover a handful of more advanced configuration options that will hopefully make your life easier.

In the last post, we had a pretty basic configuration file:

{
    "AzureSubscriptionIdentifier": "ThisIsYourAzureSubscriptionIdentifier",
    "AzureManagementCertificate": "ThisIsYourAzureManagementCertificate",
    "ServiceName": "AzureViaSriracha",
    "StorageAccountName": "srirachademo",
    "AzurePackagePath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\MMDB.AzureSample.Web.Azure.cspkg",
    "AzureConfigPath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\ServiceConfiguration.Cloud.cscfg"
}


For this post, we’re going to add a bit more configuration to our project:

{
    "AzureSubscriptionIdentifier": "ThisIsYourAzureSubscriptionIdentifier",
    "AzureManagementCertificate": "ThisIsYourAzureManagementCertificate",
    "ServiceName": "AzureViaSriracha",
    "StorageAccountName": "srirachademo",
    "AzurePackagePath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\MMDB.AzureSample.Web.Azure.cspkg",
    "AzureConfigPath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\ServiceConfiguration.Cloud.cscfg",
    "RoleList": {
        "MMDB.AzureSample.Web": {
            "InstanceCount": 2,
            "ConfigurationSettingValues": {
                "TestSettingValue1": "Vote Quimby",
                "TestSettingValue2": "He'd vote for you"
            },
            "CertificateThumbprints": {
                "MySSLCert": "ThisIsYourSSLThumbprint"
            }
        }
    }
}

In this case, we’re not just setting the core configuration for the Azure Cloud Service package itself, but also some configuration for the Web Roles and/or Worker Roles contained in that package, along with the SSL certificate information.

AppSettings vs. Azure ConfigurationSettings

In most non-Azure ASP.NET web applications, you would specify most of your configuration in the appSettings or connectionStrings elements.  For example, you might have something like this in your web.config:

  <appSettings>
    <add key="TestSettingValue1" value="This is the first test value"/>
    <add key="TestSettingValue2" value="This is the second test value"/>
  </appSettings>


And then this in your ASP.NET MVC controller (using the handy AppSettingsHelper from the MMDB.Shared library, available on NuGet):

public ActionResult Index()
{
    var model = new SettingsViewModel
    {
        TestSettingValue1 = AppSettingsHelper.GetSetting("TestSettingValue1"),
        TestSettingValue2 = AppSettingsHelper.GetSetting("TestSettingValue2")
    };
    return View(model);
}


And this in your Razor view:

<p>TestSettingValue1: @Model.TestSettingValue1</p>
<p>TestSettingValue2: @Model.TestSettingValue2</p>


Which ends up looking like this:


While this sort of works in Azure, your web.config gets bundled up inside the Azure package file, so it can’t easily be changed when you promote it from one environment to another.  Instead, when you deploy your Azure package to a given environment, you also include a configuration file specific to that environment.

So, setting aside the web.config for now, your Azure configuration file might look like this:

<ServiceConfiguration serviceName="MMDB.AzureSample.Web.Azure" xmlns="" osFamily="2" osVersion="*" schemaVersion="2014-01.2.3">
  <Role name="MMDB.AzureSample.Web">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="TestSettingValue1" value="Vote Quimby" />
      <Setting name="TestSettingValue2" value="He'd vote for you" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="MySSLCert" thumbprint="61475037B69532AA6EE96936DF9CBC463A5F5FE8" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Now your ASP.NET MVC controller might look like this:

var settingsAdapter = new MMDB.Azure.Settings.AppSettingsAdapter();
var model = new SettingsViewModel
{
    TestSettingValue1 = settingsAdapter.GetSetting("TestSettingValue1"),
    TestSettingValue2 = settingsAdapter.GetSetting("TestSettingValue2")
};
return View(model);


And that pulls the settings from the Azure configuration file instead:


Note: This code uses our MMDB Azure Settings library, also available on NuGet.  This library checks whether you’re actually running in an Azure environment (or even the local Azure emulator); if so, it tries to pull the values from the Azure configuration file, falling back to the web.config appSettings if the value isn’t found there.  If it detects that you’re not running in Azure at all (either in IIS or just debugging with IIS Express in Visual Studio), it skips the Azure components and just checks the web.config.  This has been invaluable for getting our whole application working fast locally as a normal ASP.NET website, while giving us the flexibility to include an Azure configuration file when we deploy to production without having to change any code.
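For illustration, the detection-and-fallback logic described above can be sketched roughly like this (a simplified sketch, not the library’s actual source; RoleEnvironment comes from the Azure SDK’s Microsoft.WindowsAzure.ServiceRuntime assembly):

```csharp
using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class SettingsFallback
{
    public static string GetSetting(string key)
    {
        // If we're running under Azure (or the local emulator), prefer
        // the value from the .cscfg ConfigurationSettings...
        if (RoleEnvironment.IsAvailable)
        {
            try
            {
                return RoleEnvironment.GetConfigurationSettingValue(key);
            }
            catch (RoleEnvironmentException)
            {
                // Setting not defined in the .cscfg; fall through
            }
        }
        // ...otherwise fall back to plain old web.config appSettings
        return ConfigurationManager.AppSettings[key];
    }
}
```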

Configuring Azure

Now, this all seems simple enough, right?  You may already have some build or deploy scripts that are doing some sort of XML poke to inject values into your web.config at a given XPath, so why can’t you just do the same thing for the Azure file?

Or, to ask another question with the same answer, what is a major reason people hate XML and have been fleeing to JSON?

The answer is XML namespaces.  If you try to write a simple XPath to inject a ConfigurationSetting value, you might try something like /ServiceConfiguration/Role/ConfigurationSettings/Setting, and then you would probably be confused why it didn’t find any nodes.

The reason is this xmlns attribute at the top of the configuration file:

<ServiceConfiguration serviceName="MMDB.AzureSample.Web.Azure" xmlns=""
    osFamily="2" osVersion="*" schemaVersion="2014-01.2.3">

That defines the default namespace for the document, and that namespace must be specified for each element when you’re executing an XPath query.  Sure, there are ways to simplify this in .NET code, like defining a prefix upfront for the namespace and putting that in a NamespaceManager and passing the NamespaceManager through to the SelectNodes call and …zzzzz…

I’m sorry, you seem to have dozed off.  What I was getting at is that this is annoying and wasteful and let’s not do it that way.
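For the record, if you did want to do it that way, a minimal version looks something like this (a sketch using the standard System.Xml APIs; the sc prefix is arbitrary, and the file path is just an example):

```csharp
using System.Xml;

var doc = new XmlDocument();
doc.Load("ServiceConfiguration.Cloud.cscfg");

// The prefix name is arbitrary, but the URI must exactly match the
// default xmlns declared on the <ServiceConfiguration> root element.
var nsManager = new XmlNamespaceManager(doc.NameTable);
nsManager.AddNamespace("sc", doc.DocumentElement.NamespaceURI);

// Every element in the path needs the prefix, even though the source
// document never uses one.
var nodes = doc.SelectNodes(
    "/sc:ServiceConfiguration/sc:Role/sc:ConfigurationSettings/sc:Setting",
    nsManager);
```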

Sriracha’s Deploy Azure Cloud Service Task to the rescue

As we mentioned in the last post and above, you can use the Sriracha command line deployment tools to easily push an Azure package up to the cloud.

And since poking configuration values into Azure configuration files can be painful, we made sure to simplify this process as well.

So if we use the following configuration file to call the Sriracha Azure Deploy Cloud Service Task, it will automatically inject the values into our Azure configuration file during the deployment:

{
    "AzureSubscriptionIdentifier": "ThisIsYourAzureSubscriptionIdentifier",
    "AzureManagementCertificate": "ThisIsYourAzureManagementCertificate",
    "ServiceName": "AzureViaSriracha",
    "StorageAccountName": "srirachademo",
    "AzurePackagePath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\MMDB.AzureSample.Web.Azure.cspkg",
    "AzureConfigPath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\ServiceConfiguration.Cloud.cscfg",
    "RoleList": {
        "MMDB.AzureSample.Web": {
            "InstanceCount": 2,
            "ConfigurationSettingValues": {
                "TestSettingValue1": "Vote Quimby",
                "TestSettingValue2": "He'd vote for you"
            },
            "CertificateThumbprints": {
                "MySSLCert": "ThisIsYourSSLThumbprint"
            }
        }
    }
}


The RoleList should match the list of roles in your package, although everything in it is optional, so you only have to include the values for the roles you’d like to configure.

Then you can control the number of instances and the ConfigurationSettings, and even add the SSL certificate thumbprint if your Azure site is configured to use SSL.

Also, unlike normal XPath poke approaches, it won’t just replace the configuration elements in your Azure configuration file if they are found; it will add any missing elements as well.
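In other words, the config injection behaves like an upsert against the XML.  Conceptually the logic looks something like this (an illustrative sketch of the behavior described above, not Sriracha’s actual implementation):

```csharp
using System.Xml;

public static class ConfigUpserter
{
    // Update the <Setting> element's value if it already exists under
    // the ConfigurationSettings node; otherwise create and append it.
    public static void UpsertSetting(XmlElement settingsNode, string name, string value)
    {
        XmlElement setting = null;
        foreach (XmlNode child in settingsNode.ChildNodes)
        {
            var element = child as XmlElement;
            if (element != null && element.Name == "Setting"
                && element.GetAttribute("name") == name)
            {
                setting = element;
                break;
            }
        }
        if (setting == null)
        {
            // The element was missing, so add it (in the document's
            // default namespace, per the xmlns discussion above)
            setting = settingsNode.OwnerDocument.CreateElement(
                "Setting", settingsNode.NamespaceURI);
            setting.SetAttribute("name", name);
            settingsNode.AppendChild(setting);
        }
        setting.SetAttribute("value", value);
    }
}
```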

One thing you CAN’T control is the VM size.  For some reason, that is buried in the Azure service definition file, which in turn is buried inside your Azure package.  So if you want to run ExtraSmall instances in your test site but Large instances in your production site, you’d have to edit that service definition file before you build the Azure package.  We are considering a nice-to-have feature to crack open the Azure package during the deployment and update the VM size value, but that is much more involved, and some consider it a little scary to be modifying the internals of your packages between environments; anyone who wants to jump in and submit a pull request for that, it would be greatly appreciated.

Anyhow, that gives you an overview of the how and why, but there is a lot more info on the Deploy-Cloud-Service-Task page of the Sriracha2 wiki.

Windows Azure 6: Deploying via Sriracha Command Line Tools

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub:

So in some previous posts we covered how to deploy Azure Cloud Services via PowerShell and also via C# code using the MMDB.Azure.Management library.

Now, we’ll cover a quick and easy way to configure and push Azure Cloud Service projects with a single command and configuration file.

Sriracha v2.0

First, to give you some context: we’ve been rebuilding large parts of our Sriracha Deployment System to make it more modular, so you can more easily leverage individual pieces without having to install the whole system.  It’s still very much a work in progress, but our goal is to steadily release chunks of useful functionality as they become available.

One thing I’ve needed for a while is a quick command line interface to deploy whatever I want whenever I want.  Sure, the traditional Sriracha/Octopus/BuildMaster model is great for scenarios where I can set up a server to catalog our releases, but sometimes you just need a script to push something somewhere, and you want it to be as simple as possible.

There is a major design change in the new Sriracha system.  Previously, you couldn’t use anything in Sriracha unless you installed the whole system, but there was a lot of great functionality trapped in there.  This time around, we are building each type of deployment task as a standalone library that can be executed through a simple command line tool first, and the broader Sriracha system will then leverage these standalone libraries.

Deploying To Azure In 3 Steps

First, you’ll need a Management Certificate for your Azure account to be able to connect through their REST APIs.  If you don’t have that yet, check out the Authentication section of the last Azure post for steps to get your Subscription Identifier and Management Certificate values from the Azure publish settings.

The idea is pretty simple:

  • We’ll download a command line tool into our solution from NuGet
  • We’ll update a configuration file with the info specific to our project
  • We’ll run the command line tool to execute the deployment

For this demonstration, we’ll be using the MMDB.AzureSample project we’ve used in earlier Azure posts.   Feel free to pull down the code if you’d like to follow along.

Now that we have our solution open, the first thing we’ll do is pull down the Sriracha.DeployTask.Azure package from NuGet.

PM> Install-Package Sriracha.DeployTask.Azure


This will create a SrirachaTools\Runners\Azure directory under your solution.  In there will be sriracha.runner.exe, along with a bunch of DLLs.


You’ll notice one of those files is “Sample.DeployCloudService.json”.  Let’s make a copy of it named “MyDeployCloudService.json”, put it in the root of our solution, and open it up:



You’ll see a whole bunch of settings in here.  Some of them are required, but most of them are optional.  We’ll add the bare minimum for now:

{
    "AzureSubscriptionIdentifier": "ThisIsYourAzureSubscriptionIdentifier",
    "AzureManagementCertificate": "ThisIsYourAzureManagementCertificate",
    "ServiceName": "AzureViaSriracha",
    "StorageAccountName": "srirachademo",
    "AzurePackagePath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\MMDB.AzureSample.Web.Azure.cspkg",
    "AzureConfigPath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\ServiceConfiguration.Cloud.cscfg"
}

These values are:

  • AzureSubscriptionIdentifier and AzureManagementCertificate are the credentials you got from the Azure publish settings above.
  • ServiceName is the name of your service in Azure.  If this service does not yet exist, it will be created.  When we’re done, the cloud service will have the url http://[servicename].  Note: as you would guess from the URL, this service name must be unique, not just within your account, but across all of Azure cloud services, so be creative.
  • StorageAccountName is an Azure storage account that we need to hold the Azure package binary during the deployment.  If this account doesn’t exist, it will be created.  Note: the storage account name must be between 3 and 24 characters, and can only contain lowercase letters and numbers.
  • AzurePackagePath is the location of your Azure Cloud Service package (*.cspkg).  This can be created by right-clicking your Cloud Service in Visual Studio and selecting Publish.
  • AzureConfigPath is the location of the Azure configuration file (*.cscfg) to deploy with your project, also created when you package your Azure Cloud Service.  By default it will use the values from this file, but those values can be overridden during deployment using the configFile parameter below.  We’ll cover the values in the configFile in the next post.
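Since that storage account naming rule is easy to trip over, it can be worth validating the name up front before kicking off a deployment.  A quick sketch (an illustrative helper, not part of the Sriracha tools):

```csharp
using System.Text.RegularExpressions;

public static class AzureNameValidator
{
    // Azure storage account names: 3 to 24 characters, lowercase
    // letters and numbers only
    private static readonly Regex StorageAccountNamePattern =
        new Regex("^[a-z0-9]{3,24}$");

    public static bool IsValidStorageAccountName(string name)
    {
        return name != null && StorageAccountNamePattern.IsMatch(name);
    }
}
```

So “srirachademo” passes, while anything with capitals, dashes, or fewer than 3 characters is rejected before you ever hit the REST API.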

(Make sure you enter your own Azure Subscription Identifier and Management Certificate)

Then from a command line (or a batch file or NAnt script or whatever floats your boat), run the following:

.\SrirachaTools\Runners\Azure\sriracha.runner.exe --taskBinary=Sriracha.DeployTask.Azure.dll --taskName=DeployCloudServiceTask --configFile=.\MyDeployCloudService.json --outputFormat text


The breakdown here is:

  • Run the Sriracha command line tool (sriracha.runner.exe)
  • Use the taskBinary parameter to tell it which assembly has the deployment task implementation (Sriracha.DeployTask.Azure.dll)
  • Use the taskName parameter to tell it what actual task to use (DeployCloudServiceTask)
  • Use the configFile parameter to tell it where our config file lives (MyDeployCloudService.json)
  • Optionally use the outputFormat parameter to tell it what type of output format we want back.  Normally you’d want text, but you can use json if you are piping it somewhere and want a parseable response.
  • You can find the full documentation of calling the Sriracha deployment runner here: 


That’s it.  Just run that command, and then sit back and watch the bits fly:



Now let’s go check out the site, which again will be at http://[servicename], and there we go:


Now there are also some other cool things you can do with configuring your cloud service settings, instance counts, and certificates, which we will cover in the next post.

Windows Azure 5: Deploying via C# using MMDB.Azure.Management

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub:  GitHub commit checkpoints are referenced throughout the posts.

(Yes, I know that Microsoft has recently renamed Windows Azure to Microsoft Azure, but hey, I’m in the middle of a series here, and consistency is important.  In my humble opinion, nobody cares whether it’s called Windows Azure or Microsoft Azure or Metro Azure or Clippy Azure.  I’ll try to stick to just “Azure” though.)

So I’m getting back to this series after a little while (ok, a year).  I’ve spent a lot of the last year building out the Sriracha Deployment System, an open source package management and deployment system.  It’s coming along very nicely with a lot of cool features, and the beta users so far love it.  Anyhow, as a part of that I wanted to have a nice clean way to deploy a Cloud Service to Azure (mostly for selfish reasons, to simplify deployments of SportsCommander).

In the last post in this series, we covered a way to automate deployments via PowerShell.  That definitely works pretty well, but it requires installing the Azure Cmdlets and muddling around with PowerShell, which lots of folks (including me) would rather not deal with.

So what are our other options?

Azure Management REST APIs

So it turns out Azure has a bunch of really nice, logical, easy to use, well documented REST APIs.  You GET/POST to a URL, sometimes with an XML file, and get an XML file back.

But that is still a lot of work, and seems like it should be a solved problem.  You have to deal with a lot of endpoints for a simple deployment.  You need to deal with UTF encoding, URL encoding, XML schemas, error handling, and a whole bunch of blah blah blah that just seems unnecessary.

What I really wanted was a nice clean library that let me call a few functions to carry out the common steps that I and a hundred other developers want to do every day.

Wait, aren’t there already libraries for this?

Sure, there are some.  The main one is the official Windows Azure Management Libraries, an add-on library that the Azure team made to work with the Azure REST APIs from C#, which certainly sounds exactly like what I was looking for.  And it very well may be, but I couldn’t get it to work.  When I would call into it, it would hang indefinitely even though I knew that the underlying HTTP call had completed; I think it has something to do with how it’s using the fancy new async/await stuff, and there is a deadlock somewhere.  I tried pulling the source code and building it, but it’s using a whole bunch of multitargeting stuff; I needed .NET 4.0 versions, and they were building as 4.5 by default.  Anyhow, I was pouring WAAAY too much time into something that was supposed to make my life easier.

And there are certainly other libraries as well, but they seemed to take a Swiss Army knife approach, trying to solve everyone’s problem in every situation, which means they were a little too complicated to use for my task.

Frankly, all I really need to do is call some URLs and work with some XML.  My tolerance for complexity for something like that is VERY low.

Let’s reinvent that wheel as a better mousetrap

When in doubt, assume anything not invented here is junk and write it from scratch, right?  So I created a library for this which was exactly what I was looking for, and hopefully you’ll find it useful as well.


The code is located on GitHub at, and on NuGet at  The GitHub readme has a good intro and some sample code, but I’ll get into some more detail here.


Before you get started you’ll need an Azure account.  If you don’t have one, some charming fellow wrote up a walkthrough on how to do it.

Once you have an account, you’ll need some authentication information to connect to the REST APIs.  Azure uses the idea of Management Certificates to authorize the management calls, which gives you a lot of flexibility to issue access to different individuals and applications.  It also comes in handy when you accidentally check your management certificate into GitHub and need to revoke it.

Now if you Google around for how to create a management certificate, you’ll see a whole bunch of stuff about makecert.exe and local certificate stores and thumbprints and X.509 certificates, and plenty of detailed information that will scare off a lot of developers who just wanted to deploy some code and are wondering why this is so hard in the first place.

So here’s the easy way (which was covered in the last post as well):

  • Install the Azure PowerShell Cmdlets on any machine; it doesn’t need to be your deployment machine.  This is just to get the management certificate.
  • If you have not yet, open PowerShell in Administrator mode and run the following command, also known as “Make-PowerShell-Useful”:
    Set-ExecutionPolicy RemoteSigned
  • And then run this command in PowerShell (it doesn’t need to be Admin mode), which will launch a browser, prompt you to log into Azure, and then download an Azure Publish Settings file:
    Get-AzurePublishSettingsFile

And you’ll get something that looks like this:

<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
      PublishMethod="AzureServiceManagementAPI"
      Url="..."
      ManagementCertificate="...">
    <Subscription
      Id="..."
      Name="3-Month Free Trial" />
  </PublishProfile>
</PublishData>


And that, my friends, is your Azure Subscription Identifier and Management Certificate.  Grab those values; you’ll need them in a second.

Enter MMDB.Azure.Management

So I put together a simple library that does the bulk of what you may need to do for deploying an Azure Cloud Service.  The goal was to abstract away all of the unnecessary noise around XML, schema namespaces and versions, and Base-64 encoding, and provide a nice and easy API for creating, getting, updating, and deleting Cloud Services, Storage Accounts, Deployments, and blob files.

First, install the NuGet Package:

PM> Install-Package MMDB.Azure.Management


Then, create an AzureClient object, passing in your subscription identifier and management certificate:

string subscriptionIdentifier = "FromYourPublishSettingsFile";
string managementCertificate = "AlsoFromYourPublishSettingsFile";
var client = new AzureClient(subscriptionIdentifier, managementCertificate);


Then you can do lots of fun stuff like creating a Cloud Service (and checking that the name is actually available first, of course):

string serviceName = "MyNewServiceName";
string message;
bool nameIsAvailable = client.CheckCloudServiceNameAvailability(serviceName, out message);
if (!nameIsAvailable)
{
    throw new Exception("Cannot create " + serviceName + ", service name is not available!  Details: " + message);
}

var service = client.CreateCloudService(serviceName);

Console.WriteLine("Successfully created service " + serviceName + "!  URL = " + service.Url);


Or creating a Storage Account (again, making sure that the name is available first):

string storageAccountName = "MyNewStorageAccount";
string message;
bool nameIsAvailable = client.CheckStorageAccountNameAvailability(storageAccountName, out message);
if (!nameIsAvailable)
{
    throw new Exception("Cannot create " + storageAccountName + ", storage account name is not available!  Details: " + message);
}

var storageAccount = client.CreateStorageAccount(storageAccountName);

//Initial setup is complete, but it is still resolving DNS, etc
Console.WriteLine("Initial creation for storage account " + storageAccountName + " complete!  URL = " + storageAccount.Url);

//Wait for the entire setup to be complete
client.WaitForStorageAccountStatus(storageAccountName, StorageServiceProperties.EnumStorageServiceStatus.Created, timeout:TimeSpan.FromMinutes(2));
Console.WriteLine("Final setup complete for " + storageAccountName + ", your storage account is ready to go");


Now, actually deploying something can take a few steps, like creating a Cloud Service, creating a Storage Account, waiting for everything to get initialized, getting the storage keys for the newly created Storage Account, uploading the Azure package file as a blob to the Storage Account, and then telling Azure to use that blob to create the Cloud Service.  Oh, and then wait for everything to actually initialize (yes, this can take a while on Azure, especially if you have a lot of instances).

string serviceName = "MyNewServiceName";
var service = client.CreateCloudService(serviceName);

string storageAccountName = "MyNewStorageAccount";
var storageAccount = client.CreateStorageAccount(storageAccountName);
client.WaitForStorageAccountStatus(storageAccountName, StorageServiceProperties.EnumStorageServiceStatus.Created);

string azureContainerName = "MyDeploymentContainer";
string azurePackageFile = "C:\\Build\\MyAzurePackage.cspkg";
string azureConfigFile = "C:\\Build\\MyAzureConfig.cscfg";
string azureConfigData = File.ReadAllText(azureConfigFile);
string deploymentSlot = "staging";

var storageKeys = client.GetStorageAccountKeys(storageAccountName);
var blobUrl = client.UploadBlobFile(storageAccountName, storageKeys.Primary, azurePackageFile, azureContainerName);

var deployment = client.CreateCloudServiceDeployment(serviceName, blobUrl, azureConfigData, deploymentSlot);
client.WaitForCloudServiceDeploymentStatus(serviceName, deploymentSlot, DeploymentItem.EnumDeploymentItemStatus.Running, TimeSpan.FromMinutes(5));
client.WaitForAllCloudServiceInstanceStatus(serviceName, deploymentSlot, RoleInstance.EnumInstanceStatus.ReadyRole, TimeSpan.FromMinutes(10));


See it in action

Again, there are basic usage examples over at the readme, and some more exhaustive examples in the test project.

You can also see it in real, live action over in the DeployCloudService task in the Sriracha.Deploy source.  There you can see how it handles a lot of the day-to-day stuff, like checking whether the Cloud Service and Storage Accounts already exist, creating vs. upgrading deployments, etc.

Running the tests

The first time you try to run the test project, you will get some errors that “Azure.publishsettings.private” does not exist.  Get a copy of a publish settings file (see the Authentication section above), drop it in the root of the MMDB.Azure.Management.Tests folder, rename it to “Azure.publishsettings.private”, and you should be good to go.  You shouldn’t have to worry about accidentally committing this file, because .private files are excluded in the .gitignore, but keep an eye on it just to be safe.

The end

So hopefully you find this useful.  If so, let me know on Twitter (@mooneydev).  Obviously this doesn’t cover every single thing you’d want to do with Azure, and I’m sure we’ll be building more features as they are needed.  If you need something you don’t see here, create an issue over on GitHub, or better yet, take a crack at implementing it and send over a pull request.  I tried to make the code really approachable and easy to work with, so it shouldn’t be too hard to get started with it and add some value.

Simple RavenDB Backups to Amazon S3

A while ago I had some posts about how to set up simple backups of SQL Azure to make up for a few holes in the tooling (here and here).  I recently ran into the same issue with RavenDB, and it required stringing a few pieces together, so I figured I’d write up the steps.


Yet again I started out to make a quick how-to, and ended up going into a lot of detail.  Anyhow, here’s the short version:

  1. Download s3.exe from
  2. Run this:
Raven.Smuggler.exe out http://[ServerName]:[Port] [DatabaseName].dump --database=[DatabaseName]
s3 auth /nogui [AccessKeyID] [SecretAccessKey]
s3 put /nogui [S3BucketName]/[TargetS3Folder]/ [DatabaseName].dump


What is RavenDB?

RavenDB is a flat out awesome document database written in .NET.  It’s sort of like MongoDB or CouchDB, but very Windows- and Microsoft-friendly, and it has much better ACID and LINQ query support than your average document database.

Whether a document database is right for your project is a complicated question, well beyond the scope of this post, but if you decide you need one, and you’re doing .NET on Windows, RavenDB should be the first one you check out.

While document databases are not ideal for every situation, I’ve found them to be very good for message based applications, which pump data messages from one queue to another.  I’ve used RavenDB as the default primary data store for SrirachaDeploy and MMDB.DataService, and besides a few bumps in the road, it’s worked great.

Types of backups in RavenDB

RavenDB offers two flavors of backups.  One is the official “Backup and Restore” feature, which is very similar to a SQL Server backup/restore, including advanced options like incremental backups.  This does a low-level backup of the ESENT files, including index data.  Restores are all-or-nothing, so you can’t import a file if you’re going to overwrite data in the process.

The other type is the “Smuggler” feature, which is more of a data import/export utility.  This generates an extract file that contains all of the documents, index definitions, and attachments (controllable by command line parameters).  It’s worth noting, though, that while Smuggler will back up the index definitions, it does not back up the index data, so after you import via Smuggler you may have to wait a few minutes for your indexes to rebuild, depending on your data size.  Since it’s just a data import, you can import it into an existing database without deleting your existing records; it will just append new records, and overwrite existing records if there is a matching ID.

The simplest way to get started with either option is to try them out in the RavenDB user interface.  The RavenDB UI is continually evolving, but as of this writing, under the Tasks section there are Import/Export Database options that use Smuggler, and a Backup Database option as well.


Personally, I prefer Smuggler.  It’s very easy to use, the defaults do what I want them to do most of the time, and it can do a low-impact data import to an existing database without blowing away existing test data.  Also, because the backup/restore feature uses some OS-level ESENT logic, it has some OS version portability limitations.  In the end, I usually don’t want anything too fancy or even incremental.  The first and foremost backup I want to get in place is “export ALL of my data in the most portable format possible on a regular schedule, so I can always get another server running if this one fails, and I can restore it on my machine to recreate a production issue”, and Smuggler has fit that bill nicely.

RavenDB Periodic Backup

RavenDB does have a very cool feature called “Periodic Backup”.  This option actually uses the Smuggler data export functionality, running incremental backups and uploading them to your hosting provider of choice (file system, Amazon Glacier, Amazon S3, or Azure storage).


The cool thing with this feature is that it’s easy to set up without many confusing options.  My problem with it is that it doesn’t quite have enough options for me, or rather the defaults are not what I really want.  Rather than doing incremental backups on a schedule, I want to be able to do a full backup any time I want.  Unfortunately it doesn’t (yet?) offer the option to force a full backup, nor to force a backup on demand or at a specific time of day.  I’m guessing that these features will continue to improve over time, but in the meantime this is not really what I’m looking for.

Smuggler Export 101

So how to get started with Smuggler?  Of course, visit the documentation here, but here’s the short version for everything I usually need to do.

First, open a command line. 

Yes, a command line.  What, you don’t like using the command line?  Oh well, deal with it.  I know, I hated the command line through much of my career; I fought against it and complained about it.  Then I gave up and embraced it.  And guess what, it’s not that bad.  There are plenty of things that are just plain easier to do in a command line and don’t always need a pointy-clicky interface.  So please, just stop complaining and get over it; it’s all a part of being a developer these days.  If you refuse to use a command line, you are tying a hand behind your back and refusing to use some of the most powerful tools at your disposal.  Plus we are going to be scripting this to run every night, so guess what, that works a lot better with a command line.  I’ll throw in some basic command line tips as we go.

Anyhow, in your command line, go to the Smuggler folder under your RavenDB installation (usually C:\RavenDB\Smuggler on my machines).

Tip: You don’t have to type the whole line.  Type part of a folder name and hit TAB, and it will autocomplete with the first match.  Hit TAB a few times and it will cycle through all of the matches.  You can even use a wildcard (like *.exe) with TAB and it will autocomplete the file name.

Type Raven.Smuggler.exe (or Raven + TAB a few times, or *.exe + TAB) to run Smuggler without any parameters, and you’ll get some detailed instructions.


The most common thing you want to do here is back up a whole database to a file.  You do this with the command “Raven.Smuggler.exe out [ServerUrl] [OutputFileName]”.

Note: the instructions here will dump the System database (if you use http://localhost:8080/ or something similar as your URL), which is almost certainly not what you want.  It’s not entirely clear in the documentation, but the way to export a specific database instance is to use a URL like “http://[servername]:[port]/databases/[databasename]”, or to use the --database option at the end.  For example, to back up my local demo database, I would use the command:

Raven.Smuggler.exe out http://localhost:8080/databases/demodb demodb.dump

Or equivalently, using the --database option:


Raven.Smuggler.exe out http://localhost:8080 demodb.dump --database=demodb


And off it goes:


Depending on the size of your database, this may take anywhere from a few seconds to a few minutes.  Generally it’s pretty fast, but if you have a lot of attachments, that seems to slow it down quite a bit.  Once it’s done, you can see your output file in the same directory:


Tip: Are you in a command line directory and really wish you had an Explorer window open in that location?  Run “explorer %cd%” to launch a new Explorer window defaulted to your current directory.  Note: this doesn’t always work, like if you’re running the command line window in administrator mode.

Yes, that’s not a very big file, but it’s a pretty small database to start with.  Obviously they can get much bigger, and I usually see backups getting up to a few hundred MB or a few GB.  You could try to compress it with your favorite command line compression tool (I really like 7-Zip), but it’s not going to get you much.  RavenDB already does a good job of compressing the content while it’s extracting it via Smuggler.

Amazon S3

Next, you have to put it somewhere, preferably as far away from this server as possible.  A different machine is a must; a different data center or even a different hosting company is even better.  For me, one of the cheapest/easiest/fastest places to put it is Amazon S3.

There are a few ways to get the file up to S3.  The first option is to upload it straight from Amazon’s S3 website, although that can require installing Java, and you may not be into that kind of thing.  Also, that’s not really scriptable.

Or you could use S3 Browser, which is an awesome tool.  For end-user interaction with S3 it’s great; I’ve gladly paid the nominal charge for a professional license, and recommended that all of my S3-enabled clients do the same.  However, while it’s a great UI tool for S3, it is not very scripting-friendly.  It stores your S3 connection information in your Windows user profile, which means that if you want to script it you need to log in as that user first, set up the S3 profile in S3 Browser, and then make sure you run the tool under that same user account.  That’s a lot of headache I don’t really want to worry about setting up, much less remembering in 6 months when I need to change something.

One great S3 backup tool is CloudBerry.  It’s not free, but it’s relatively inexpensive, and it’s really good for scheduling backups of specific files or folders to S3.  Depending on your wallet tolerance, this may be the best option for you.

But you may want a free option, and you’re probably asking, “why is it so hard to just push a file to S3?  Isn’t that just a few lines of AWSSDK code?”  Well yeah, it is.  Actually it can be quite a few lines, but no, it’s not rocket science.  Luckily there is a great tool on CodePlex that lets you do this.  It’s a simple command-line tool with one-line commands like “put” and “auth” to do the most simple tasks.  To push your new file to S3, it would just be:

s3 auth /nogui [AccessKeyID] [SecretAccessKey]
s3 put /nogui [S3BucketName]/[TargetS3Folder]/ [FileToUpload]


So if we wanted to upload our new demodb.dump file to S3, it would look something like this:


And hey, there’s our backup on S3. 



So we now have a 3-line database backup script:

Raven.Smuggler.exe out http://[ServerName]:[Port] [DatabaseName].dump --database=[DatabaseName]
s3 auth /nogui [AccessKeyID] [SecretAccessKey]
s3 put /nogui [S3BucketName]/[TargetS3Folder]/ [DatabaseName].dump


Just put that in a batch file and set a Windows Scheduled Task to run it whenever you want.  Simple enough, eh?
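For reference, the whole thing as a batch file might look something like this (a sketch only — the bracketed values and paths are placeholders you would swap in for your own):

```bat
@echo off
REM Nightly RavenDB backup: export via Smuggler, then push the dump to S3.
REM All bracketed values are placeholders.
cd /d C:\RavenDB\Smuggler

Raven.Smuggler.exe out http://[ServerName]:[Port] [DatabaseName].dump --database=[DatabaseName]
s3 auth /nogui [AccessKeyID] [SecretAccessKey]
s3 put /nogui [S3BucketName]/[TargetS3Folder]/ [DatabaseName].dump
```

Then a one-liner like schtasks /create /tn "RavenBackup" /tr "C:\Scripts\RavenBackup.cmd" /sc daily /st 02:00 will schedule it to run every night at 2 AM.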

Restoring via Smuggler

So now that you have your RavenDB database backed up, what do you do with it?  You’re going to test your restore process regularly, right?  Right?

First you need to get a copy of the file to your machine.  You could easily write a script using the S3 tool to download the file, and I’ll leave that as an exercise for the reader.  I usually just pull it down with S3 Browser whenever I need it.

So once you have it downloaded, you just need to call Smuggler again to import it.  It’s the same call as the export, just change “out” to “in”.  For example, to import our demodb back into our local server, into a new DemoDBRestore database instance, we would say:

Raven.Smuggler.exe in http://localhost:8080 demodb.dump --database=DemoDBRestore

And we would see:


And then we have our restored database up and running in RavenDB:



Now I’m not a backup wizard.  I’m sure there are better ways to do this, with incremental backups and 3-way offsite backups and regular automated restores to a disaster recovery site and all sorts of fancy stuff like that.  The bigger and more critical your application becomes, the more important it is to have those solutions in place.  But on day 1, starting out on your project, you need to have something in place, and hopefully this helps you get started.

DevOps + Reality = ???

One of the hot new buzzwords in software development these days is “DevOps”.  It seems that “cloud” and “big data” and “synergy” were getting boring, so now every CTO is trying to get some of that DevOps stuff injected into their process so that they can deploy to production 10 times a day.

But what is DevOps really?  Like most cool trendy buzzword ideas, it grew out of a few smart people with some good ideas that did real, concrete, awesome things, before everyone else saw how successful it was and tried to figure out shortcut formulas to get there.


To me, DevOps is just the next evolution of how successful software teams find better ways to be more effective, and almost all of it is traceable back to the ideas of the Agile Manifesto.

An informal and woefully incomplete list of this evolution would be:

  • For a while, the focus was getting the complexity of enterprise software under control, and writing that code as quickly and efficiently as possible.  You saw RAD tools, development patterns like object orientation, and design approaches like UML and RUP.  This mostly helped individual developers and development teams.
  • Once developers had figured out ways to make individual and independently-developed components easier to build, they had to ensure that their code fit together nicely with everyone else’s, out of which continuous integration was born.  This helped build the stability of interaction and coordination between development teams.
  • As the development teams got faster and more flexible at building stuff, the processes that defined and managed what they should be building, and when they could expect it to be done, had to keep up.  Agile project management processes like Scrum filled this need.  This helped improve the communication and effectiveness of the whole project team, from developers to QA to project managers, and even product owners and business users (who were very underrepresented in previous approaches). 
  • Another challenge of building software so quickly, with so many changes along the way, was validating the software.  When you are moving fast and handling changes to the project scope and desired functionality, it’s hard to capture what the actual correct functionality should be, and whether the software you are building meets those requirements.  This brought along several approaches to automated testing, such as unit testing, integration testing, and UI testing.  Developers started using patterns like Test Driven Development, and started working and communicating with the QA team to ensure that there was a shared vision of what the expected quality of the system was.  This increased communication between development and QA resulted in less focus on silly distractions like bug counts and whether something is actually a defect by strict definition, and more focus on everyone working together to build the highest quality system they could.
  • Having semi-conquered many of the problems above, many product teams took the agile ideas a few steps farther to get the business users more involved.  While it was always important to make sure that the product being built was what the users actually needed, more importantly they wanted to ensure that they were genuinely solving the users’ problems; this required working with the users, asking questions, getting to the root of the problems, and offering potential solutions, rather than just demanding a requirements document.  To help along this communication and discovery process, ideas such as Behavior Driven Development and Specification By Example were developed to ensure that the business users, the only people who really know what needs to be done, are more involved in the project (serving as Pigs rather than just Chickens, if you will).
  • Now, having handled many of the hard parts of efficiently building software that actually solves the users’ problems, there has been a focus on how to ship and support that software.  This has involved working with operations teams to streamline deployment and monitoring of the systems throughout various environments.  And while this “DevOps” approach is solving a long list of long-standing problems in this business, it is, unfortunately, doomed to be the Next Big Thing, the next Game Changer, the next Silver Bullet, and that is a Bad Thing.


[Buzzword]-In-A-Box = Failure-In-A-Box

Notice a pattern there?  It’s like an ever-growing blob, pulling in more people from diverse teams.  It started with just the developers, and then expanded to other development teams, QA, project management, business users, and now operations.  Each step of the process involved building out bridges of communication and cooperation across teams.

But then it goes wrong.  Each of these steps went through a similar buzzword-ification.  Those steps were:

  • Some pioneering teams start to develop new processes to help them succeed with other teams.  Being analytical folks who can react and respond to new information, they develop the most successful processes over time, with feedback from the other teams and a critical eye towards continuously improving the process.
  • Other folks notice this success and want to capture it, so the processes start to become more formalized and defined.  While the original processes were very customized, the industry starts to get a better idea of what parts work well in more generic settings.
  • As the patterns become better understood, the repetitive friction points are identified, and companies begin to build tools to automate away that friction and give people freedom to focus on the core benefits of the process.
  • More people come along looking for the quickest way to get the most value from the concept, and begin to think that the tools and formalized processes are the key to accomplishing that.  You want DevOps?  I'll get you some DevOps!
  • Eventually, large companies are hiring high-priced consultants and buying expensive enterprise tools as part of a corporate-wide initiative to capture that buzzword magic.  This focuses on dogmatically following the process and the tooling.
  • While lip service is paid to the idea of communication, and cross-team meetings are set up, it takes a backseat to the process and tools.  This is because cooperation and communication take a lot of work over time to build, and that is not something you can sell in a box.  In the end, those companies are valuing processes and tools over individuals and interactions, which is the complete reverse of the Agile Manifesto that made the concept a great idea in the first place.
  • The magic is dead.  Consultants are making money, executives are touting their groundbreaking strategies, and in the trenches the same pathologies remain.  While the masses try to follow the recipe and fail to be effective with it, the groundbreaking folks are off to solve the next problem.


Next Stop: DevOps

So what is this DevOps thing?  In its simplest sense, it is expanding the development team beyond just developers and QA and PMs to include the operations and deployment teams.  The result is what you might call a “product delivery team”.

The first step, like all of the other steps of the evolution, is “stop being a jerk”.  Get yourself OK with that first, and come back when you’re done.  “But I’m not a jerk, some people are just idiots.”  Yeah, that means you’re still a jerk.  As soon as you find yourself saying or even thinking that someone on your team (or worse, one of your users) is an idiot or a moron or useless and their job responsibilities are pointless, you have more deep-seated issues to work out first.  Before you can succeed, you MUST treat everyone on your team and other teams with dignity and respect.  And the hardest part about this is that you can’t just go through the motions; deep down inside you need to actually believe it.  If you think that is too touchy-feely, or you don’t think you can do that, or you don’t want to, or you don’t believe that it’s necessary, that’s fine: Go away, and stay the &$@# away from my projects, you miserable person.  The idea of the “smart but difficult-to-work-with developer” is a crock.  If you can’t work effectively with other developers and people on other teams, I don’t care how many books you’ve read, you suck as a developer, in my humble opinion.

OK, the next step is to actually talk to the other people.  Recognize that as hard and important as your job may be, theirs is probably just as hard and just as important, and the only way your team is going to get better is if everyone works to make everyone’s job easier.  So set up a meeting with them, and ask, with a straight face and genuine interest, “what makes your job more difficult, and what can we do to make it easier?”  Then watch their face light up.  I guarantee you they have a list of annoyances and manual steps and remediation steps that waste their time every day, and they will be happy to have an opportunity to gripe about them in a welcoming setting without having to worry about being labeled a “complainer”.  Examples would be “we don’t know when something is deployed or needs to be deployed” or “we don’t know what changes are included in each release” or “every time I have to deploy something I need to copy files all over the place and edit some scripts and the dev team always forgets to include a certain file”.

Now, you will be tempted to smile and nod and shrug your shoulders and explain that it’s not a perfect system but it has worked so far.  Suppress this urge, write down the concerns, and start talking about them.  Throw around blue-sky ideas of possible solutions.  Get an idea of not just what hinders them, but what would actually make them succeed. 

OK, now what is the official title of this part of the DevOps process?  There are probably several names, but I prefer “stop being a jerk, talk to people, and find out how you can solve their problems”.  What tool should you use for this?  Usually Notepad, Evernote/OneNote, or a piece of paper, and an open mind.

Now, before you are done talking to them, pick a few of the most offensive and/or easiest problems and promise to fix them.  In fact, schedule a follow-up meeting before you even leave the room, for a few days or a week or two away, where you will show your half-done solutions and get feedback about whether they actually are going to solve the problem, or what you might be missing.  Or, now that you gave them something to visualize, what brand new thing they thought of that would make it 10x better.  Or they may even realize that they were wrong, and this is not going to solve the problem, so maybe you need to try something else; this is not a bad thing, so try not to get frustrated.  Instead, repeatedly stress to them that your main goal here is to make their life easier, not just because they want to hear that, but because it’s 100% true.

Sound simple?  It really is.  But that is the core of DevOps.  If you do this first, everything else will start to fall into your lap.  There are a bunch of tools and procedures and techniques that you can leverage to solve the problems, but you need to be sure that you are actually solving the right problems, and to do that you need to build a positive working relationship to help root out and identify the solutions to those problems.  The tools are awesome, but you HAVE to focus on “individuals and interactions over processes and tools”.  But once you have that in place, you can do anything.



Accessing an Amazon VM through WMI


OK, so after figuring this all out again for the second time this year, I figured it’s time to write it down for when I eventually forget again.

So I’m working on adding some changes to the DropkicK library in the Chuck Norris Framework.  DropkicK is an AWESOME tool for deploying just about anything in Windows, and the vast majority of the deployment stuff I’ve built over the last year has been heavily based on DropkicK.  You can hear Rob Reynolds talk about the Chuck Norris Framework on Dot Net Rocks and Hanselminutes.

However, while it works great for remotely deploying stuff when your domain account is an administrator on the target server, it doesn’t yet support connecting as a local administrator.  So what needs to be added is the ability to provide a username and password for an administrator on the target machine.

Why? Amazon.  The most common Amazon EC2 setup I encounter with my clients is just a bunch of independent machines, each with their own local user accounts.  Even those that are in a VPC don’t have a domain controller or anything else that would allow the same authenticated user to access multiple machines from the same session.

So that’s something I’m working on now.  Underneath the covers, the DropkicK code is surprisingly straightforward and uses WMI for most things, and those WMI components take an optional user name and password when connecting, so it’s just a matter of exposing the administrator user name and password as deployment parameters, and then threading them through to the WMI objects.  No big deal.

What IS a big deal, though, is getting WMI to work with an Amazon VM in the first place.  You’d think it would be pretty easy, but you’d be wrong.  Very wrong.  There are several things to get right, and if you don’t get them right, you’re going to get some of the most useless error messages you have ever seen.

So I just got it working, and here’s how I did it.

Getting Started

First, go create yourself an Amazon VM.  Make it any size you want, but you’ll probably want a Windows base install.

I used Windows 2012 for this, but I went through this same pain earlier in the year with Windows 2008, and it was the same.

Also, when creating your VM, give it a new security group.  Right now you can start with just RDP access, but you’re going to be adding a bunch of firewall exceptions specific to WMI, so you’ll probably want to keep this type of stuff isolated.

OK, so once your VM is up and running, first create yourself an administrative user account, something like “mmooney”.  Then log out as administrator, and log in as that new user account.  In fact, don’t go back in through the “administrator” account again.  What?  You and every developer on your team like to use the “administrator” account on every server?  Stop it.  Stop It.  STOP IT.  Bad Bad BAD.

Now go in and install all of the Windows features/roles/whatever-they-call-it-this-year that you normally need (IIS/MSMQ/whatever) and run a Windows Update.


Now, let’s get a baseline of failure.  After a LOT of googling to solve WMI issues, I finally stumbled across a blog post mentioning WBEMTEST, and I was furious that I didn’t know about it sooner.

WBEMTEST is a WMI test client, already installed on your machine.  Go ahead, run wbemtest from a command line, and it will launch.

It’s basically the type of thorough test UI that you probably wrote on the third and fourth projects you ever worked on, after you learned it was a useful investment of your time, and you were still young and idealistic enough to spend a few hours building out a cool test tool like this.  But those days are gone; you are now old and slow and lazy, and so many years of custom-built tools have come and been used and then gone and been forgotten, washing over you like yet another wave flowing down the endless river of projects that your career has become.  Anyway.  That’s OK, because someone already built this one for you.


So hit Connect, accept the defaults that will point to your own machine, and you’ll get all sorts of fancy options.  Play around with it.  Go ahead.  Play, I say.


Now let’s go to your VM.  Hit Connect again, and instead of the default “root\cimv2”, put “\\[YourMachineIP]\root\cimv2”.  Ka-BOOM.  Kinda.


Ah, “The RPC server is unavailable”.  Simple enough, clearly accurate, but enormously unhelpful.  Get used to this message folks, it’s going to be following you around for a while.

Amazon Security Settings

The problem here is that you have a few firewalls blocking you from accessing that server.  This is one of those “good” safe-by-default security things, because you can do some nasty stuff if you get WMI access to a machine.  Sure, you’ll still need an administrative username and password, but you really don’t want “guessing some guy’s password” as the only thing between you and getting p!@wnzge (or whatever those kids call it).

But here we actually want to get in, so first we’ll need to poke a small hole in the Amazon firewall.  By “small” I mean a giant gaping hole that you could fly a spaceship through.

To access WMI, you will need to open TCP ports 135 and 445.  Oh, and 1024 through 65535.  Yes, that’s right.  WMI will try to connect through one random port in that range, and you can’t easily tell it which one, either from the client or the server.  I spent a lot of time trying several things to get it locked down to a single port or list of ports, but came to the conclusion that it was pretty much not possible.

While you are in here, ask yourself if you will also want to be able to access the machine through a file share (\\[IPAddress]\C$ or something similar).  If so, also allow TCP 135-139, UDP 135-139, and UDP 445 (you also want TCP 445, but you did that above).

But PLEASE make sure you restrict the IP range to the servers that you are actually expecting to connect from.  Do NOT leave it open to, the whole internet.  That’s just asking for trouble.
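If you’d rather script these rules than click through the console, the AWS command line can add them; this is just a sketch, and the security group ID and CIDR range here are made-up placeholders:

```bat
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 135 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 445 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 1024-65535 --cidr 203.0.113.0/24
```

Repeat with --protocol udp for the 135-139 and 445 ranges if you also want the file share access described above.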

When it’s all said and done, your security group should look something like:


Now if you try to telnet to any of these ports, or use WBEMTEST, you’ll probably still get RPC Unavailable, because the Windows Firewall is blocking you.

Windows Settings


So now go into your VM and bring up the firewall settings.  Make sure all of the Windows Management Instrumentation rules are enabled for the Public profile (make sure you’re not in Domain or Private, or you’re going to make a bunch of changes and nothing will happen and you’ll be confused and angry and you’ll blame me and my stupid blog post, and that’s no fun, at least not for me). 


Then in Computer Management, drill down to Services And Applications->WMI Control.  Right-click the WMI Control node and select Properties.  On the Security tab, select the Root node and ensure that the Administrators group has access to everything (it probably will).


Then go into the Services window and make sure the Windows Management Instrumentation service is running.


So let’s try again now.


Final Steps, One Crazy Weirdness

Now if we go back to WBEMTEST and try to connect, we get a little farther.


Access Denied is good!  We got through the firewalls and we got a response.  So let’s put in an administrator in the user name and password…


WAT.  “The object exporter specified was not found”.

What the deuce does that mean?  If you look around, you’ll see people having this issue when connecting through a host name, and the solution is to use the IP address to get it to resolve correctly.

But we ARE using the IP, right?  Sort of.  We are using the public IP, not the private IP that the VM would actually use to identify itself. 


In fact, if you go into another Amazon VM and try to connect to this one through its private IP, it actually works.  But that is no help to me when I’m sitting at home in my bunny slippers trying to push a change from my desktop.  To the best of my knowledge you can’t access Amazon VMs through their private IPs from outside the Amazon cloud (at least not without a lot of networking voodoo that is above my pay grade).

BUT, you can also connect to WMI through a host name.  No, not that crazy public DNS host name; that won’t help you any more than the public IP.  Instead you have to use the actual machine name that the machine itself is aware of.


Of course, your local machine is not going to resolve that, but if you add a hosts file entry, you should be all set.  In case you’ve forgotten, that’s at C:\Windows\System32\Drivers\etc\hosts, and you’ll need to open it with a text editor that is running in Administrator mode if you have UAC enabled on your machine.
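The entry itself is just the VM’s public IP followed by its internal machine name, one entry per line (both values below are made up for illustration):

```
203.0.113.45    WIN-K4F2H8EXAMPLE
```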


Now if we go back to WBEMTEST and try to connect to “\\[PrivateMachineName]\root\cimv2”, it works!
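If you want to sanity-check the same connection from a plain command line instead of WBEMTEST, wmic can run a remote query with explicit credentials (the machine name and account here are placeholders):

```bat
wmic /node:"[PrivateMachineName]" /user:"mmooney" /password:"[YourPassword]" os get caption
```

If that prints the OS name, your firewall holes and hosts entry are working.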



Well, hopefully that helps some folks.  Or at least helps me again in 6 months when I run into it again.

Windows Azure 4: Deploying via PowerShell

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub.  GitHub commit checkpoints are referenced throughout the posts.

In the previous post we covered deploying an app to Azure through the Azure web portal and from Visual Studio.  In this post, we’ll show you how to deploy to Azure from PowerShell.  This comes in really handy if you want to be able to deploy right from your build server, and who doesn’t want to do that?

Why now?

So we have not really gotten into much detail about Azure yet, and our app is stupidly simple, so why are we getting into mundane operational gold-plating like automating deployments from a build server?

Because it’s really important to automate your whole build/deploy pipeline as soon as possible.  The later you automate it, the more time you are flushing down the toilet.  Even if you don’t want to deploy automatically from your build server, if you don’t at least boil your whole deployment down to a single one-click script file, you’re stealing from yourself.

When I started out with SportsCommander, I was building all the code locally in Visual Studio and then deploying through the Azure web portal (I know, caveman stuff right?).  Anyhow, pretty soon I got everything built and versioned through a TeamCity build server, and even had the site being FTPed to our shared hosting test server (hello, LFC Hosting), but for production deployments to Azure I would still remote into the build server and upload the latest package from the hard drive to the Azure website.  Part of this was that I wanted to be able to test everything in the test server before deploying to production, and part of this was that I wanted to make sure it didn’t get screwed up, but part of it was also the logical fallacy that I didn’t have time to sit down and spend the time to figure out how to get the Azure deployment working.

And I was wrong.  Way wrong.  Deploying to Azure manually doesn’t take too long, but it adds up.  If it took me 15 minutes to remote into the server, browse to the Azure site, select the package, select the config, and yadda yadda yadda, it only takes a handful of times before you are bleeding whole hours.  If you are deploying several times per week, this can get really expensive.  Not only are you getting fewer fixes and features done, you aren’t even deploying the ones that you do have done, because you don’t have time to deploy and it’s a pain anyway.  Plus, really the only reason we wanted to deploy to the test server first was to smoke test, because deploying again was such a pain that I didn’t want to have to do a whole second deployment to fix a line of code; but if I could fix that line of code and redeploy with one click, I don’t even need to waste time with the test server.

So I didn’t want to spend the time figuring out how to deploy to Azure automatically.  Well I did, but it took me more than 5 minutes to Google it and find the right answer among the plethora of other answers, so it took a while to get done.

Hopefully you found this post in under 5 minutes of Googling so you don’t have any excuses.


If you’ve been Googling around, you may have seen some posts about installing certificates.  Don’t bother.  This approach doesn’t require it, which is good, because that’s no fun.

First, go install the Windows Azure Cmdlets.  Go go go.

Second, make sure you can run remote signed scripts in PowerShell.  You only need to do this once, and if you have played around with PowerShell you’ve probably already done this.  Open up PowerShell in Administrator mode (Start Button->type powershell->CTRL+Shift+Enter).  Then type:

Set-ExecutionPolicy RemoteSigned

and hit Enter.  You will get a message along the lines of “OMG Scary Scary Bad Bad Are you Sure!?!?! This is Scary!”.  Hit “Y” to continue.

Now comes the tricky part.  There is a whole bunch of PowerShell commands and certificate stuff that can get confusing.  Thankfully Scott Kirkland wrote a great blog post and even put a sample script up on GitHub.  I had to make a few tweaks to it to get it working for me, so here goes.

Fire up PowerShell again (it doesn’t need to be Administrator mode any more), browse to your solution directory, and run:

Get-AzurePublishSettingsFile

This will launch a browser window, prompt you to log into your Azure account, and then prompt you to download a file named something fun like “3-Month Free Trial-1-23-2013-credentials.publishsettings”.  Take that file, move it to your solution directory, and name it something less fun like “Azure.publishsettings”.  If you open that fella up, you’ll see something like:

<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
    PublishMethod="AzureServiceManagementAPI"
    Url="https://management.core.windows.net/"
    ManagementCertificate="...">
    <Subscription
      Id="..."
      Name="3-Month Free Trial" />
  </PublishProfile>
</PublishData>

So on to the script.  In the root of your project, create a PowerShell script (just a text file named something like DeployAzure.ps1):

#Modified and simplified version of
#From: #
$subscription = "3-Month Free Trial" #this the name from your .publishsettings file
$service = "mmdbazuresample" #this is the name of the cloud service you created
$storageAccount = "mmdbazuresamplestorage" #this is the name of the storage service you created
$slot = "production" #staging or production
$package = "C:\Projects\MMDB.AzureSample\MMDB.AzureSample.Web.Azure\bin\Release\app.publish\MMDB.AzureSample.Web.Azure.cspkg"
$configuration = "C:\Projects\MMDB.AzureSample\MMDB.AzureSample.Web.Azure\bin\Release\app.publish\ServiceConfiguration.Cloud.cscfg"
$publishSettingsFile = "Azure.publishsettings"
$timeStampFormat = "g"
$deploymentLabel = "PowerShell Deploy to $service"
Write-Output "Slot: $slot"
Write-Output "Subscription: $subscription"
Write-Output "Service: $service"
Write-Output "Storage Account: $storageAccount"
Write-Output "Slot: $slot"
Write-Output "Package: $package"
Write-Output "Configuration: $configuration"

Write-Output "Running Azure Imports"
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
Import-AzurePublishSettingsFile $publishSettingsFile
Set-AzureSubscription -CurrentStorageAccount $storageAccount -SubscriptionName $subscription
Set-AzureService -ServiceName $service -Label $deploymentLabel
function Publish(){
 $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue 
 if ($a[0] -ne $null) {
    Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment. "
 if ($deployment.Name -ne $null) {
    #Update deployment inplace (usually faster, cheaper, won't destroy VIP)
    Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $servicename.  Upgrading deployment."
 } else {
function CreateNewDeployment()
    write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"
    $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -

ServiceName $service
    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid
    write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: 

function UpgradeDeployment()
    write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"
    # perform Update-Deployment
    $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label 

$deploymentLabel -ServiceName $service -Force
    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid
    write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
Write-Output "Create Azure Deployment"

Run that guy from the PowerShell command line, and watch the bits fly.  Yes, it will take several minutes to run.

A generic version of this script is up on GitHub.  Again, I borrowed from Scott Kirkland’s version, but his script assumed that your storage and cloud service had the same name, so I added a separate field for storage account name.  Also, to alleviate my insanity, I added a little more diagnostic logging.


This was the post that I started out to write, before I decided to backfill with the more beginner stuff.  From here, it’s going to be a little more ad-hoc.

Anyhow, the next post will probably be setting up your own DNS and SSL for your Azure site.

Web.Config: Code vs Configuration

One of the hardest problems to solve when setting up a deployment strategy is how to handle the web.configs and exe.configs.  Each environment will have different settings, and so every time you deploy something somewhere you need to make that web.config look different.

The quick and dirty answer is to have a separate web.config for each environment.  Then during a deployment we drop the prod/web.config or staging/web.config into the web directory, and you’re good to go.  However, like a lot of problematic development strategies, this is really fast and easy to get going with, but it doesn’t age very well.  What happens when your DEV->STAGING->PRODUCTION environments evolve into LOCAL->DEV->QA->INTEGRATION->STAGING->PRODUCTION?  Or when you have machine-specific or farm-specific settings that change from one part of the production environment to another?

Most importantly, what happens when that web.config changes for a reason other than configuration?  Then you have a whole bunch of web.configs to fix, and you’re going to put a typo in at least 2 of them, it’s guaranteed.

Let’s take a look at a VERY simple web.config, created from just a basic MVC 4 project:

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit
  http://go.microsoft.com/fwlink/?LinkId=169433
  -->
<configuration>
  <configSections>
    <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=4.4.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
  </configSections>
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-MMDB.AzureSample.Web-20130117123218;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnet-MMDB.AzureSample.Web-20130117123218.mdf"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <appSettings>
    <add key="webpages:Version" value="2.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="PreserveLoginUrl" value="true" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
    <authentication mode="Forms">
      <forms loginUrl="~/Account/Login" timeout="2880" />
    </authentication>
    <pages>
      <namespaces>
        <add namespace="System.Web.Helpers" />
        <add namespace="System.Web.Mvc" />
        <add namespace="System.Web.Mvc.Ajax" />
        <add namespace="System.Web.Mvc.Html" />
        <add namespace="System.Web.Optimization" />
        <add namespace="System.Web.Routing" />
        <add namespace="System.Web.WebPages" />
      </namespaces>
    </pages>
    <profile defaultProvider="DefaultProfileProvider">
      <providers>
        <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
      </providers>
    </profile>
    <membership defaultProvider="DefaultMembershipProvider">
      <providers>
        <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
      </providers>
    </membership>
    <roleManager defaultProvider="DefaultRoleProvider">
      <providers>
        <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
      </providers>
    </roleManager>
    <!--
            If you are deploying to a cloud environment that has multiple web server instances,
            you should change session state mode from "InProc" to "Custom". In addition,
            change the connection string named "DefaultConnection" to connect to an instance
            of SQL Server (including SQL Azure and SQL Compact) instead of to SQL Server Express.
      -->
    <sessionState mode="InProc" customProvider="DefaultSessionProvider">
      <providers>
        <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" />
      </providers>
    </sessionState>
  </system.web>
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
    <modules runAllManagedModulesForAllRequests="true" />
    <handlers>
      <remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" />
      <remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" />
      <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
      <add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" />
      <add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" />
      <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
  </system.webServer>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="1.0.0.0-4.0.0.0" newVersion="4.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Web.WebPages" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
  <entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
      <parameters>
        <parameter value="v11.0" />
      </parameters>
    </defaultConnectionFactory>
  </entityFramework>
</configuration>


Zoinks, that is a lot of configuration. 

Well, actually, exactly how much of that is “configuration”?

The answer: exactly one line:

<add name="DefaultConnection"
     connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-MMDB.AzureSample.Web-20130117123218;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnet-MMDB.AzureSample.Web-20130117123218.mdf" />

That is the ONLY line that is going to change from environment to environment.   Everything else there is not configuration, it’s code.

OK sure, it’s in a “configuration” file.  Who cares.  It is tied to your code, and it should change about as often as your code changes, not more.  It certainly should not change between environments.  My general rule is, “if any change to it should be checked into source control, it’s code.”  The sooner you stop pretending that they are different in a way that gives you an enhanced level of configuration flexibility, the sooner you’ll be a happier person.  Trust me.

The problem here is that the web.config is a confused little person.  It does configure stuff, but it is actually two types of stuff.  As far as most people are concerned, it is for configuring their application; but the other 90% of it, which they never touch and usually don’t understand, is for configuring the underlying .NET and IIS guts, not their application directly.  And once you’ve coded your application, that 90% should never change from environment to environment, unless your code is changing as well.

And of course that code does change over time.  If you add a WCF web service proxy client to your application, it’s going to fill your web.config up with all sorts of jibber-jabber that you better not touch unless you know what you are doing.  But deep inside there is the endpoint URL that DOES need to change from environment to environment.
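For example, the generated WCF client section might look something like this (the service name, contract, and URL here are hypothetical, just to illustrate the point); notice that only the endpoint address is real environment-specific configuration:

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- Generated binding plumbing: this is code, not configuration.
           It should only change when your code changes. -->
      <binding name="BasicHttpBinding_IOrderService" closeTimeout="00:01:00"
               openTimeout="00:01:00" sendTimeout="00:01:00" />
    </basicHttpBinding>
  </bindings>
  <client>
    <!-- The address is the ONLY thing here that changes per environment. -->
    <endpoint address="http://qa.example.com/OrderService.svc"
              binding="basicHttpBinding"
              bindingConfiguration="BasicHttpBinding_IOrderService"
              contract="OrderServiceReference.IOrderService"
              name="BasicHttpBinding_IOrderService" />
  </client>
</system.serviceModel>
```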

Again, this is where the “have a web.config for every environment” approach really breaks down, because now you have to go through and update every one of those web.configs to add in all that crazy WCF stuff.  And try not to screw it up.

So What?

So what can we do about it?  We have a few options:

One option is to put all of the configuration in the database.  This can introduce a lot of issues: when you configure your application to point to the database that configures your application, you run into all sorts of codependency issues that make your environments really fragile.  The only time I’ve seen this be a good idea is when you have really strict change control rules that prohibit touching configuration files on the server outside of an official deployment, but configuring settings in the database through an administration page is allowed.

I think these two options are preferable:

  • Drop a brand new web.config on every deployment and have your deployment utility update and reconfigure it, using web.config transformations, XSLT, or a basic XML parser.
  • Use the configSource attribute on your settings.  This lets you put all of your connectionStrings or appSettings in separate files, which are NOT updated from source control on every deployment.  This way you can always drop the latest web.config without having to worry about reconfiguring it.  (If you’re using Azure, this goes a step further, with a completely separate file reserved for environment configuration, outside of your web application package.)
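As a sketch of the configSource approach (file names and values here are just examples), the web.config keeps a pointer, while the environment-specific values live in a file the deployment never overwrites:

```xml
<!-- web.config: in source control, dropped fresh on every deployment -->
<configuration>
  <appSettings configSource="appSettings.config" />
  <connectionStrings configSource="connectionStrings.config" />
</configuration>
```

```xml
<!-- connectionStrings.config: lives on the server, NOT overwritten by deployments -->
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=MyQaDbServer; Initial Catalog=MyAppDb; User ID=qasite; Password=example" />
</connectionStrings>
```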

Both of these options work well.  The first option works better if you have only a few settings, or if you need to update something that does not support a configSource attribute, like a WCF endpoint.  The second option works better if you have a whole list of settings and can consolidate them into connectionStrings, appSettings, etc.
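For the first option, web.config transformations are one common mechanism for the rewrite.  A minimal sketch of a Web.Release.config (server and database names hypothetical) that swaps in the production connection string at build/deploy time:

```xml
<?xml version="1.0"?>
<!-- Web.Release.config: applied over web.config during the build/deploy step -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Match on the name attribute, then overwrite the connection string -->
    <add name="DefaultConnection"
         connectionString="Data Source=ProdDbServer; Initial Catalog=MyAppDb; Integrated Security=SSPI;"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```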

But either way, no matter what, ALWAYS drop a new web.config, and ALWAYS make sure you have a plan to treat YOUR configuration differently than the rest of the web.config.

Good Luck.

Mooney’s Law Of Guaranteed Failure

If I had a nickel for every time our deployment strategy for a new or different environment was to edit a few config files and then run some batch files and then edit some more config files, and then it goes down in a steaming pile of failure, I would buy a LOT of Sriracha.



Here’s a config file.  Lets say we need to edit that connection string:

<Setting name="ConnectionString" 
value="Data Source=(local); Initial Catalog=SportsCommander; Integrated Security=true;" />

Now let’s say we are deploying to our QA server.  So after we deploy, we fire up our handy Notepad, and edit it:

<Setting name="ConnectionString" 
value="Data Source=SCQAServ; Initial Catalog=SportsCommander; Integrated Security=true;" />

OK good.  Actually not good.  The server name is SCQASrv not SCQAServ.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; Integrated Security=true;" />

OK better.  But wait, integrated security works great in your local dev environment, but in QA we need to use a username and password.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; UserID=qasite; Password=&SE&RW#$" />

OK cool.  Except you can’t put a raw & in an XML file, so we have to encode it.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; UserID=qasite; Password=&amp;SE&amp;RW#$" />

And you know what?  It’s User ID, not UserID.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; User ID=qasite; Password=&amp;SE&amp;RW#$" />

OK, that’s all there is to it!  Let’s do it again tomorrow.  Make sure you don’t burn your fingers on this blistering fast development productivity.

I know this sounds absurd, but the reality is that for a lot of people, this really is their deployment methodology.  They might have production deployments automated, but their lower environments (DEV/QA/etc) are full of manual steps.  Or better yet, they have automated their lower environments because they deploy there every day, but their production deployment is manual because they only do it once per month.

And you know what I’ve learned, the hard and maddeningly painful way?  Manual process fails.  Consistently.  And more importantly, it can’t be avoided.


A common scenario: a developer or an operations person (but of course never both at the same time, that would ruin the blame game) is charged with deploying an application.  After many iterations, the deployment process has been clearly defined as 17 manual steps.  This has been done enough times that the whole process is fully documented, with a checklist, and the folks running the deployment have done it enough times that they could do it in their sleep.

The only problem is that in the last deployment, one of the files didn’t get copied.  The time before that, the staging file was copied instead of the production file.  And the time before that, they put a typo into the config.

Is the deployer an idiot?  No, as a matter of fact, the reason that he or she was entrusted with such an important role was that he or she was the most experienced and disciplined person on the team and was intimately familiar with the workings of the entire system.

Were the instructions wrong?  Nope, the instructions were followed to the letter.

Was the process new?  No again, the same people have been doing this for a year.

At this point, the managers are exasperated, because no matter how much effort we put into formalizing the process, no matter how much documentation and how many checklists, we’re still getting failures.  It’s hard for the managers not to assume that the deployers are morons, and the deployers are faced with the awful reality of going into every deployment knowing that it WILL be painful, and they WILL get blamed.

Note to management: Good people don’t stick around for this kind of abuse.  Some people will put up with it.  But trust me, you don’t want those people.

The lesson

The kick in the pants is, people are human.  They make mistakes.  A LOT of mistakes.  And when you jump down their throat on every mistake, they learn to stop making mistakes by not doing anything.

This leads us to Mooney’s Law Of Guaranteed Failure (TM):

In the software business, every manual process will suffer at least a 10% failure rate, no matter how smart the person executing the process.  No amount of documentation or formalization will truly fix this, the only resolution is automation.


So the next time Jimmy screws up the production deployment, don’t yell at him (or sneer behind his back) “how hard is it to follow the 52-step 28-page instructions!”  Just remember that it is virtually impossible.

Also, step back and look at your day-to-day development process.  Almost everything you do during the day besides writing code is a manual process full of failure (coding is too, but that’s what you’re actually getting paid for).  Like:

  • When you are partially checking in some changes to source control but trying to leave other changes checked out
  • When you need to edit a web.config connection string every time you get latest or check in
  • When you are interactively merging branches
  • When you are doing any deployment that involves editing a config or running certain batch files in order or entering values into an MSI interface, or is anything more than “click the big red button”
  • When you are setting up a new server and creating users, editing folder permissions, creating MSMQ queues, or setting up IIS virtual directories
  • When you are copying your hours from Excel into the ridiculously fancy but still completely unusable timesheet website
  • When, instead of entering your hours into a timesheet website, you are emailing them to somebody
  • When you are trying to figure out which version of “FeatureRequirements_New_Latest_Latest.docx” is actually the “latest”
  • When you are deploying database changes by trying to remember which tables you added to your local database or which scripts have or have not been run against production yet

It’s actually easier to find these things than you think.  The reason is, again, it is just about everything you do all day besides coding.  It’s all waste.  It’s all manual.  And it’s all guaranteed to fail.  Find a way to take that failure out of your hands and bathe it in the white purifying light of automation.  Sure, it takes an investment of time, but you’ll be amazed how much time you get back when you are not wasting it on amazingly stupid busywork and guaranteed failure all day.