This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub:

So in some previous posts we covered how to deploy Azure Cloud Services via PowerShell, and also via C# code using the MMDB.Azure.Management library.

Now, we’ll cover a quick and easy way to configure and push Azure Cloud Service projects with a single command and configuration file.

Sriracha v2.0

First, to give you some context, we’ve been rebuilding large parts of our Sriracha Deployment System to make it more modular, so you can more easily leverage individual pieces without having to install the whole system.  It’s still very much a work in progress, but our goal is to steadily release chunks of useful functionality as they become available.

One thing I’ve needed for a while is a quick command line interface to deploy whatever I want, whenever I want.  Sure, the traditional Sriracha/Octopus/BuildMaster model is great for scenarios where I can set up a server to catalog releases, but sometimes you just need a script to push something somewhere, and you want it to be as simple as possible.

There is a major design change in the new Sriracha system.  Previously, you couldn’t use anything in Sriracha unless you installed the whole system, but there was a lot of great functionality trapped in there.  This time around, we are building each type of deployment task as a standalone library that can be executed through a simple command line tool first, and then the broader Sriracha system will leverage these standalone libraries.

Deploying To Azure In 3 Steps

First, you’ll need a Management Certificate for your Azure account to be able to connect through their REST APIs.  If you don’t have one yet, check out the Authentication section of the last Azure post for steps to get your Subscription Identifier and Management Certificate values from the Azure publish settings.

The idea is pretty simple:

  • We’ll download a command line tool into our solution from NuGet.
  • We’ll update a configuration file with the info specific to our project.
  • We’ll run the command line tool to execute the deployment.

For this demonstration, we’ll be using the MMDB.AzureSample project we’ve used in earlier Azure posts.   Feel free to pull down the code if you’d like to follow along.

Now that we have our solution open, the first thing we’ll do is pull down the Sriracha.DeployTask.Azure package from NuGet.

PM> Install-Package Sriracha.DeployTask.Azure


This will create a SrirachaTools\Runners\Azure directory under your solution.  In there will be sriracha.runner.exe, along with a bunch of DLLs.


You’ll notice one of those files is “Sample.DeployCloudService.json”.  Let’s make a copy of that named “MyDeployCloudService.json”, put it in the root of our solution, and open it up:



You’ll see a whole bunch of settings in here.  Some of them are required, but most of them are optional.  We’ll add the bare minimum for now:

{
    "AzureSubscriptionIdentifier": "ThisIsYourAzureSubscriptionIdentifier",
    "AzureManagementCertificate": "ThisIsYourAzureManagementCertificate",
    "ServiceName": "AzureViaSriracha",
    "StorageAccountName": "srirachademo",
    "AzurePackagePath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\MMDB.AzureSample.Web.Azure.cspkg",
    "AzureConfigPath": ".\\MMDB.AzureSample.Web.Azure\\bin\\Release\\app.publish\\ServiceConfiguration.Cloud.cscfg"
}

These values are:

  • AzureSubscriptionIdentifier and AzureManagementCertificate are the credentials you got from the Azure publish settings above.
  • ServiceName is the name of your service in Azure.  If this service does not yet exist, it will be created.  When we’re done, the cloud service will have the URL http://[servicename].  Note: as you would guess from the URL, this service name must be unique, not just within your account, but throughout all of Azure cloud services, so be creative.
  • StorageAccountName is an Azure storage account that we need to hold the Azure package binary during the deployment.  If this account doesn’t exist, it will be created.  Note: the storage account name must be between 3 and 24 characters, and can only contain lower case letters and numbers.
  • AzurePackagePath is the location of your Azure Cloud Service package (*.cspkg).  This can be created by right-clicking your Cloud Service in Visual Studio and selecting Publish.
  • AzureConfigPath is the location of the Azure configuration file (*.cscfg) to deploy with your project, also created when you package your Azure Cloud Service.  By default it will use the values from this file, but those values can be overridden during deployment using the configFile parameter below.  We’ll cover the values in the configFile in the next post.
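As an aside, the storage account naming rule above is easy to check mechanically before you kick off a deployment.  Here’s a quick sketch in shell syntax (the helper function name is just for illustration):

```shell
# Sanity-check a storage account name against the rule above:
# 3-24 characters, lower case letters and numbers only.
validate_storage_name() {
    echo "$1" | grep -Eq '^[a-z0-9]{3,24}$'
}

validate_storage_name "srirachademo" && echo "srirachademo: ok"
validate_storage_name "My_Storage_Account" || echo "My_Storage_Account: invalid"
```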

(Make sure you enter your own Azure Subscription Identifier and Management Certificate)

Then from a command line (or a batch file or NAnt script or whatever floats your boat), run the following:

.\SrirachaTools\Runners\Azure\sriracha.runner.exe --taskBinary=Sriracha.DeployTask.Azure.dll --taskName=DeployCloudServiceTask --configFile=.\MyDeployCloudService.json --outputFormat text
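If you’re going to run this repeatedly from a build, it’s handy to wrap that one-liner in a script.  Here’s a rough sketch in shell syntax (a batch file version would be analogous); it just echoes the command so you can sanity-check it before wiring it up for real:

```shell
#!/bin/sh
# Sketch: wrap the Sriracha runner invocation in a reusable script.
# The runner path and parameter names come from the example above.
RUNNER="./SrirachaTools/Runners/Azure/sriracha.runner.exe"
CONFIG="./MyDeployCloudService.json"

# Build the command line once so it can be logged before it runs
CMD="$RUNNER --taskBinary=Sriracha.DeployTask.Azure.dll --taskName=DeployCloudServiceTask --configFile=$CONFIG --outputFormat text"
echo "Running: $CMD"

# Uncomment to actually execute the deployment:
# $CMD
```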


The breakdown here is:

  • Run the Sriracha command line tool (sriracha.runner.exe)
  • Use the taskBinary parameter to tell it which assembly has the deployment task implementation (Sriracha.DeployTask.Azure.dll)
  • Use the taskName parameter to tell it what actual task to use (DeployCloudServiceTask)
  • Use the configFile parameter to tell it where our config file lives (MyDeployCloudService.json)
  • Optionally use the outputFormat parameter to tell it what type of output format we want back.  Normally you’d want text, but you can use json if you are piping it somewhere you want a parseable response.
  • You can find the full documentation of calling the Sriracha deployment runner here: 


That’s it.  Just run that command, and then sit back and watch the bits fly:



Now let’s go check out the site, which again will be at http://[servicename], and there we go:


Now there are also some other cool things you can do with configuring your cloud service settings, instance counts, and certificates, which we will cover in the next post.

I have been very critical of distributed version control systems (DVCS) in the past.  And like many things I thought in the past, I got older and wiser and now think I was wrong before.  A few years ago I started using Mercurial and Git for some projects, and Git is pretty much a requirement if you want to work with open source code these days, especially on GitHub.  These days, I only go back to using TFS or SVN if it’s a requirement for a certain customer (which it is surprisingly often).

However, getting started with a DVCS is definitely a conceptual leap from what a lot of .NET developers are working with, since most of them are coming from a TFS/SourceSafe background, or maybe SVN.  Once people make that leap, the new approach makes a lot of sense, but I know from experience it can be really hard to make that leap when you’ve been used to one approach for most of your career.

Anyhow, while ramping up a developer for a client project that uses Mercurial, I found myself writing this quick conceptual explanation for the twentieth time, so I figured I’d put it here so I can just point to it in the future.  Nothing new here, just my quick explanation.

DVCS Summary

Git and Mercurial are both distributed version control systems.  This is a very different approach from TFS/VSS or even SVN (although SVN is closer).  The core idea is that there is no single centralized server that you need to stay connected to.  The repository you have on your own machine is a full-blown Mercurial/Git repository, with a full history of the whole project, and you can commit, branch, merge, etc. as much as you want with your local repository.  Then when you are ready to release your changes into the wild (what you would normally consider checking into the central server in TFS or VSS), you pull the latest changes from the central repository (like GitHub or BitBucket), do any merging you need to do locally, and then push your changes up to the central server.

This may sound like basically the same thing as SVN but with an extra step (commit locally and then push to the central server, rather than just committing straight to the central server), but it has its benefits.  Normally if you are working against a central TFS/SVN server, you need to make sure your changes are solid and ready for release before checking anything in; before that point your changes are in limbo on your machine, either not checked in for several days (yikes), or you have to shelve them, which can be a pain if your source control platform even supports it.  But in a DVCS, you can work on a feature, or several features, locally, committing repeatedly as you go with small incremental check-ins, without having to worry about breaking anything for anyone else yet.

Also, you can make local branches, so you can create a local branch to start working on a big feature that will take several days, but then when a production bug comes in you can switch back to the master/trunk/default branch quickly to fix it and release that fix, without disturbing your feature branch.  While you can make these types of branches in TFS/SVN, they are always done on the server, which gets messy and complicated when you have a lot of developers.  The DVCS approach lets you work with all of your own mess locally, without having to clutter up anyone else’s workspace, or anyone cluttering yours.  Then you can merge your feature changes into the trunk when it’s actually ready; or better yet, for experimental features that never pan out, you can just throw that branch out and go back to the main trunk code.
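If it helps to see that workflow concretely, here’s a tiny Git session that plays out the scenario above, entirely in a throwaway local repository (no central server involved):

```shell
# Everything below happens locally -- no central server is touched.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Commit to the local repository; nothing has left this machine
echo "stable code" > app.txt
git add app.txt
git commit -qm "Initial commit"

# Start a multi-day feature on its own local branch
git checkout -qb big-feature
echo "half-finished feature" >> app.txt
git commit -qam "WIP: start big feature"

# A production bug comes in: switch back to the main branch, fix it, commit
git checkout -q -
echo "hotfix" > fix.txt
git add fix.txt
git commit -qm "Fix production bug"

# The feature work is safely parked on its own branch
git log --oneline big-feature
```

When the feature is finally ready, you merge it in; when the experiment doesn’t pan out, you just delete the branch and nobody else ever knew about it.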


If you want some deeper theory, Eric Sink of SourceGear/Vault fame wrote a great blog series a few years ago (coincidentally right before he announced SourceGear was building its own DVCS, Veracity):

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub:  GitHub commit checkpoints are referenced throughout the posts.

(Yes, I know that Microsoft has recently renamed Windows Azure to Microsoft Azure, but hey, I’m in the middle of a series here, and consistency is important.  In my humble opinion, nobody cares whether it’s called Windows Azure or Microsoft Azure or Metro Azure or Clippy Azure.  I’ll try to stick to just “Azure” though.)

So I’m getting back to this series after a little while (ok, a year).  I’ve spent a lot of the last year building out the Sriracha Deployment System, an open source package management and deployment system.  It’s coming along very nicely with a lot of cool features, and the beta users so far love it.  Anyhow, as a part of that I wanted to have a nice clean way to deploy a Cloud Service to Azure (mostly for selfish reasons, to simplify deployments of SportsCommander).

In the last post in this series, we covered a way to automate deployments via PowerShell.  That definitely works pretty well, but it requires installing the Azure Cmdlets, and muddling around with PowerShell, which lots of folks (including me) would rather not deal with.

So what are our other options?

Azure Management REST APIs

So it turns out Azure has a bunch of really nice, logical, easy to use, well documented REST APIs.  You GET/POST to a URL, sometimes with an XML file, and get an XML file back.

But that is still a lot of work, and seems like it should be a solved problem.  You have to deal with a lot of endpoints for a simple deployment.  You need to deal with UTF encoding, URL encoding, XML schemas, error handling, and a whole bunch of blah blah blah that just seems unnecessary.

What I really wanted was a nice clean library that let me call a few functions to carry out the common steps that I and a hundred other developers want to do every day.

Wait, aren’t there already libraries for this?

Sure, there are some.  The main one is the official Windows Azure Management Libraries.  These are an add-on library that the Azure team made to work with the Azure REST APIs from C#, which certainly sounds exactly like what I was looking for.  And it very well may be, but I couldn’t get it to work.  When I would call into it, it would hang indefinitely even though I knew that the underlying HTTP call had completed; I think it has something to do with how it’s using the fancy new async/await stuff, and there is a deadlock somewhere.  I tried pulling the source code and building it, but it’s using a whole bunch of multitargeting stuff; I needed .NET 4.0 versions, and they were building as 4.5 by default.  Anyhow, I was pouring WAAAY too much time into something that was supposed to make my life easier.

And there are certainly other libraries as well, but they seemed to take a Swiss Army knife approach, trying to solve everyone’s problem in every situation, which means they were a little too complicated to use for my task.

Frankly, all I really need to do is call some URLs and work with some XML.  My tolerance for complexity for something like that is VERY low.

Let’s reinvent that wheel as a better mousetrap

When in doubt, assume anything not invented here is junk and write it from scratch, right?  So I created a library for this which was exactly what I was looking for, and hopefully you’ll find it useful as well.


The code is located on GitHub, and on NuGet.  The GitHub readme has a good intro and some sample code, but I’ll get into some more detail here.


Before you get started you’ll need an Azure account.  If you don’t have one, some charming fellow wrote up a walkthrough on how to do it.

Once you have an account, you’ll need some authentication information to connect to the REST APIs.  Azure uses the idea of Management Certificates to authorize the management calls, which gives you a lot of flexibility to issue access to different individuals and applications.  It also comes in handy when you accidentally check your management certificate into GitHub and need to revoke it.

Now if you Google around for how to create a management certificate, you’ll see a whole bunch of stuff about makecert.exe and local certificate stores and thumbprints and X.509 certificates, and plenty of detailed information that will scare off a lot of developers who just wanted to deploy some code and are wondering why this is so hard in the first place.

So here’s the easy way (which was covered in the last post as well):

  • Install the Azure PowerShell Cmdlets on any machine; it doesn’t need to be your deployment machine.  This is just to get the management certificate.
  • If you have not yet, open PowerShell in Administrator mode and run the following command, also known as “Make-PowerShell-Useful”:
    Set-ExecutionPolicy RemoteSigned
  • And then run this command in PowerShell (doesn’t need to be Admin mode), which will launch a browser, prompt you to log into Azure, and then download an Azure Publish Settings file:
Get-AzurePublishSettingsFile

And you’ll get something that looks like this:

<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
    PublishMethod="AzureServiceManagementAPI"
    Url="https://management.core.windows.net/"
    ManagementCertificate="[YourManagementCertificate]">
    <Subscription
      Id="[YourSubscriptionIdentifier]"
      Name="3-Month Free Trial" />
  </PublishProfile>
</PublishData>


And that, my friends, is your Azure Subscription Identifier and Management Certificate.  Grab those values; you’ll need them in a second.
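Incidentally, you don’t have to copy those values out by hand; they’re easy to scrape with standard tools.  A quick sketch (the XML below is a simplified stand-in for a real publish settings file, so treat the exact structure as an assumption):

```shell
# Create a simplified stand-in for a .publishsettings file
# (a real one has more attributes, but the two values we care about look like this)
cat > sample.publishsettings <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile ManagementCertificate="Base64CertificateDataGoesHere">
    <Subscription Id="11111111-2222-3333-4444-555555555555" Name="3-Month Free Trial" />
  </PublishProfile>
</PublishData>
EOF

# Pull out the two attribute values we need
SUBSCRIPTION_ID=$(sed -n 's/.*Subscription Id="\([^"]*\)".*/\1/p' sample.publishsettings)
MANAGEMENT_CERT=$(sed -n 's/.*ManagementCertificate="\([^"]*\)".*/\1/p' sample.publishsettings)
echo "SubscriptionId: $SUBSCRIPTION_ID"
echo "ManagementCertificate: $MANAGEMENT_CERT"
```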

Enter MMDB.Azure.Management

So I put together a simple library that does the bulk of what you may need to do for deploying an Azure Cloud Service.  The goal was to abstract away all of the unnecessary noise around XML and schema namespaces and versions and Base-64 encoding, and provide a nice easy interface for creating, getting, updating, and deleting Cloud Services, Storage Accounts, Deployments, and blob files.

First, install the NuGet Package:

PM> Install-Package MMDB.Azure.Management


Then, create an AzureClient object, passing in your subscription identifier and management certificate:

string subscriptionIdentifier = "FromYourPublishSettingsFile";
string managementCertificate = "AlsoFromYourPublishSettingsFile";
var client = new AzureClient(subscriptionIdentifier, managementCertificate);


Then you can do lots of fun stuff like creating a Cloud Service (and checking that the name is actually available first, of course):

string serviceName = "MyNewServiceName";
string message;
bool nameIsAvailable = client.CheckCloudServiceNameAvailability(serviceName, out message);
if(!nameIsAvailable)
{
    throw new Exception("Cannot create " + serviceName + ", service name is not available!  Details: " + message);
}

var service = client.CreateCloudService(serviceName);

Console.WriteLine("Successfully created service " + serviceName  + "!  URL = " + service.Url);


Or creating a Storage Account (again, making sure that the name is available first):

string storageAccountName = "MyNewStorageAccount";

string message;
bool nameIsAvailable = client.CheckStorageAccountNameAvailability(storageAccountName, out message);
if(!nameIsAvailable)
{
    throw new Exception("Cannot create " + storageAccountName + ", storage account name is not available!  Details: " + message);
}

var storageAccount = client.CreateStorageAccount(storageAccountName);

//Initial setup is complete, but it is still resolving DNS, etc
Console.WriteLine("Initial creation for storage account " + storageAccountName + " complete!  URL = " + storageAccount.Url);

//Wait for the entire setup to be complete
client.WaitForStorageAccountStatus(storageAccountName, StorageServiceProperties.EnumStorageServiceStatus.Created, timeout:TimeSpan.FromMinutes(2));
Console.WriteLine("Final setup for " + storageAccountName + " complete, your storage account is ready to go");


Now, actually deploying something can take a few steps, like creating a Cloud Service, creating a Storage Account, waiting for everything to get initialized, getting the storage keys for the newly created Storage Account, uploading the Azure package file as a blob to the Storage Account, and then telling Azure to use that blob to create the Cloud Service.  Oh, and then wait for everything to actually initialize (yes, this can take a while on Azure, especially if you have a lot of instances).

string serviceName = "MyNewServiceName";
var service = client.CreateCloudService(serviceName);

string storageAccountName = "MyNewStorageAccount";
var storageAccount = client.CreateStorageAccount(storageAccountName);
client.WaitForStorageAccountStatus(storageAccountName, StorageServiceProperties.EnumStorageServiceStatus.Created);

string azureContainerName = "MyDeploymentContainer";
string azurePackageFile = "C:\\Build\\MyAzurePackage.cspkg";
string azureConfigFile = "C:\\Build\\MyAzureConfig.cscfg";
string azureConfigData = File.ReadAllText(azureConfigFile);
string deploymentSlot = "staging";

var storageKeys = client.GetStorageAccountKeys(storageAccountName);
var blobUrl = client.UploadBlobFile(storageAccountName, storageKeys.Primary, azurePackageFile, azureContainerName);

var deployment = client.CreateCloudServiceDeployment(serviceName, blobUrl, azureConfigData, deploymentSlot);
client.WaitForCloudServiceDeploymentStatus(serviceName, deploymentSlot, DeploymentItem.EnumDeploymentItemStatus.Running, TimeSpan.FromMinutes(5));
client.WaitForAllCloudServiceInstanceStatus(serviceName, deploymentSlot, RoleInstance.EnumInstanceStatus.ReadyRole, TimeSpan.FromMinutes(10));


See it in action

Again, there are basic usage examples over in the readme, and there are some more exhaustive examples in the test project.

You can also see it in real-live action over in the DeployCloudService task in Sriracha.Deploy source.  There you can see how it handles a lot of the day-to-day stuff like checking whether the Cloud Service and Storage Accounts already exist, creating vs. upgrading deployments, etc.

Running the tests

The first time you try to run the test project, you will get some errors saying that “Azure.publishsettings.private” does not exist.  Get a copy of a publish settings file (see the Authentication section above), drop it in the root of the MMDB.Azure.Management.Tests folder, and rename it to “Azure.publishsettings.private”, and you should be good to go.  You shouldn’t have to worry about accidentally committing this file, because .private files are excluded in the .gitignore, but make sure to keep an eye on it just to be safe.

The end

So hopefully you find this useful.  If so, let me know on Twitter (@mooneydev).  Obviously this doesn’t cover every single thing you’d want to do with Azure, and I’m sure we’ll be building more features as they are needed.  If you need something you don’t see here, create an issue over on GitHub, or better yet, take a crack at implementing it and send over a pull request.  I tried to make the code really approachable and easy to work with, so it shouldn’t be too hard to get started with it and add some value.

A while ago I had some posts about how to set up simple backups of SQL Azure to make up for a few holes in the tooling (here and here).  I recently ran into the same issue with RavenDB, and it required stringing a few pieces together, so I figured I’d write up the steps.


Yet again I started out to make a quick how-to, and ended up going into a lot of detail.  Anyhow, here’s the short version:

  1. Download s3.exe from CodePlex
  2. Run this:
Raven.Smuggler.exe out http://[ServerName]:[Port] [DatabaseName].dump --database=[DatabaseName]
s3 auth /nogui [AccessKeyID] [SecretAccessKey]
s3 put /nogui [S3BucketName]/[TargetS3Folder]/ [DatabaseName].dump


What is RavenDB?

RavenDB is a flat out awesome document database written in .NET.  It’s sort of like MongoDB or CouchDB, but very Windows- and Microsoft-friendly, and it has much better ACID and LINQ query support than your average document database.

Whether a document database is right for your project is a complicated question, well beyond the scope of this post, but if you decide you need one, and you’re doing .NET on Windows, RavenDB should be the first one you check out.

While document databases are not ideal for every situation, I’ve found them to be very good for message based applications, which pump data messages from one queue to another.  I’ve used RavenDB as the default primary data store for SrirachaDeploy and MMDB.DataService, and besides a few bumps in the road, it’s worked great.

Types of backups in RavenDB

RavenDB offers two flavors of backups.  One is the official “Backup and Restore” feature, which is very similar to a SQL Server backup/restore, including advanced options like incremental backups.  This does a low-level backup of the ESENT files, including index data.  Restores are all-or-nothing, so you can’t import a file without overwriting the existing data in the process.

The other type is the “Smuggler” feature, which is more of a data import/export utility.  This generates an extract file that contains all of the documents, index definitions, and attachments (controllable by command line parameters).  It’s worth noting though, that while Smuggler will back up the index definitions, it does not back up the index data, so after you import via Smuggler you may have to wait a few minutes for your indexes to rebuild, depending on your data size.  Since it’s just a data import, you can import it into an existing database without deleting your existing records; it will just append the new records, and overwrite existing records if there is a matching ID.

The simplest way to get started with either option is to try them out in the RavenDB user interface.  The RavenDB UI is continually evolving, but as of this writing, under the Tasks section there are Import/Export Database options that use Smuggler, and a Backup Database option as well.


Personally, I prefer Smuggler.  It’s very easy to use, the defaults do what I want them to do most of the time, and it can do a low-impact data import to an existing database without blowing away existing test data.  Also, because the backup/restore feature uses some OS ESENT logic, it has some OS version portability limitations.  In the end, I usually don’t want anything too fancy or even incremental; the first and foremost backup I want to get in place is “export ALL of my data in the most portable format possible on a regular schedule, so I can always get another server running if this one fails, and I can restore it on my machine to recreate a production issue”, and Smuggler has fit that bill nicely.

RavenDB Periodic Backup

RavenDB does have a very cool feature called “Periodic Backup”.  This option actually uses the Smuggler data export functionality, and runs incremental backups and uploads them to your hosting provider of choice (File system, Amazon Glacier, Amazon S3, or Azure storage).


The cool thing with this feature is that it’s easy to set up, without many confusing options.  My problem with it is that it doesn’t quite have enough options for me, or rather the defaults are not what I really want.  Rather than doing incremental backups on a schedule, I want to be able to do a full backup any time I want.  Unfortunately it doesn’t (yet?) offer the options to force a full backup, nor to force a backup on demand or at a specific time of day.  I’m guessing that these features will continue to improve over time, but in the mean time this is not really what I’m looking for.

Smuggler Export 101

So how do you get started with Smuggler?  Of course, visit the documentation here, but here’s the short version for everything I usually need to do.

First, open a command line. 

Yes, a command line.  What, you don’t like using the command line?  Oh well, deal with it.  I know, I hated the command line through much of my career, and I fought against it and complained about it.  Then I gave up and embraced it.  And guess what, it’s not that bad.  There are plenty of things that are just plain easier to do in a command line, and you don’t always need a pointy-clicky interface.  So please, just stop complaining and get over it; it’s all a part of being a developer these days.  If you refuse to use a command line, you are tying a hand behind your back and refusing to use some of the most powerful tools at your disposal.  Plus we are going to be scripting this to run every night, so guess what, that works a lot better with a command line.  I’ll throw in some basic command line tips as we go.

Anyhow, in your command line, go to the Smuggler folder under your RavenDB installation (usually C:\RavenDB\Smuggler on my machines).

Tip: You don’t have to type the whole line.  Type part of a folder name and hit TAB, and it will autocomplete with the first match.  Hit TAB a few times and it will cycle through all of the matches.  You can even use a wildcard (like *.exe) with TAB and it will autocomplete the file name.

Type Raven.Smuggler.exe (or Raven + TAB a few times, or *.exe + TAB) to run Smuggler without any parameters, and you’ll get some detailed instructions.


The most common thing you want to do here is back up a whole database to a file.  You do this with the command “Raven.Smuggler.exe out [ServerUrl] [OutputFileName]”.

Note: the instructions here will dump the System database (if you use http://localhost:8080/ or something similar as your URL), which is almost certainly not what you want.  It’s not entirely clear from the documentation, but the way to export a specific database instance is to use a URL like “http://[servername]:[port]/databases/[databasename]”, or to use the --database parameter at the end.  For example, to backup my local demo database, I would use the command:

Raven.Smuggler.exe out http://localhost:8080/databases/demodb demodb.dump

Or equivalently:

Raven.Smuggler.exe out http://localhost:8080 demodb.dump --database=demodb


And off it goes:


Depending on the size of your database, this may take a few seconds to a few minutes.  Generally it’s pretty fast, but if you have a lot of attachments, that seems to slow it down quite a bit.  Once it’s done, you can see your output file in the same directory:


Tip: Are you in a command line directory and really wish you had an Explorer window open in that location?  Run “explorer %cd%” to launch a new Explorer window defaulted to your current directory.  Note: this doesn’t always work, like if you’re running the command line window in administrator mode.

Yes, that’s not a very big file, but it’s a pretty small database to start with.  Obviously they can get much bigger, and I usually see backups getting up to a few hundred MB or a few GB.  You could try to compress it with your favorite command line compression tool (I really like 7-Zip), but it’s not going to get you much.  RavenDB already does a good job of compressing the content while it’s extracting it via Smuggler.

Amazon S3

Next, you have to put it somewhere, preferably as far away from this server as possible.  A different machine is a must; a different data center or even a different hosting company is even better.  For me, one of the cheapest/easiest/fastest places to put it is Amazon S3.

There are a few ways to get the file up to S3.  The first option is to upload it straight from Amazon’s S3 website, although that can require installing Java, and you may not be into that kind of thing.  Also, that’s not really scriptable.

Or you could use S3 Browser, which is an awesome tool.  For end user integration with S3, it’s great, and I’ve gladly paid the nominal charge for a professional license, and recommended all of my S3-enabled clients do the same.  However, while it’s a great UI tool for S3, it is not very scripting friendly.  It stores your S3 connection information in your Windows user profile, which means if you want to script it you need to log in as that user first, set up the S3 profile in S3 Browser, and then make sure you run the tool under that same user account.  That’s a lot of headache I don’t really want to worry about setting up, much less remembering in 6 months when I need to change something.

One great S3 backup tool is CloudBerry.  It’s not free, but it’s relatively inexpensive, and it’s really good for scheduling backups of specific files or folders to S3.  Depending on your wallet tolerance, this may be the best option for you.

But you may want a free option, and you’re probably asking, “why is it so hard to just push a file to S3?  Isn’t that just a few lines of AWSSDK code?”.  Well yeah, it is.  Actually it can be quite a few lines, but yeah, it’s not rocket science.  Luckily there is a great tool on CodePlex that lets you do this.  It’s a simple command line tool with one-line commands like “put” and “auth” to do the most simple tasks.  To push your new file to S3, it would just be:

s3 auth /nogui [AccessKeyID] [SecretAccessKey]
s3 put /nogui [S3BucketName]/[TargetS3Folder]/ [FileToUpload]


So if we wanted to upload our new demodb.dump file to S3, it would look something like this:


And hey, there’s our backup on S3. 



So we now have a 3-line database backup script:

Raven.Smuggler.exe out http://[ServerName]:[Port] [DatabaseName].dump --database=[DatabaseName]
s3 auth /nogui [AccessKeyID] [SecretAccessKey]
s3 put /nogui [S3BucketName]/[TargetS3Folder]/ [DatabaseName].dump


Just put that in a batch file and set a Windows Scheduled Task to run it whenever you want.  Simple enough, eh?
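Since this usually runs from a scheduler, I like date-stamping the dump file so old backups don’t overwrite each other.  Here’s a sketch in shell syntax (a batch file version would be analogous); the bucket and folder names are placeholders, and it echoes the commands instead of running them, so going live just means removing the echoes and filling in real credentials:

```shell
#!/bin/sh
# Sketch of a nightly RavenDB backup script, with a date-stamped dump file.
# Server, database, bucket, and key values are placeholders -- use your own.
SERVER_URL="http://localhost:8080"
DB_NAME="demodb"
S3_BUCKET="my-backup-bucket"
S3_FOLDER="raven-backups"
STAMP=$(date +%Y%m%d)
DUMP_FILE="${DB_NAME}-${STAMP}.dump"

# Echo the three steps so the script can be sanity-checked without
# a live RavenDB server or S3 credentials; remove the echoes to go live.
echo "Raven.Smuggler.exe out $SERVER_URL $DUMP_FILE --database=$DB_NAME"
echo "s3 auth /nogui [AccessKeyID] [SecretAccessKey]"
echo "s3 put /nogui $S3_BUCKET/$S3_FOLDER/ $DUMP_FILE"
```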

Restoring via Smuggler

So now that you have your RavenDB database backed up, what do you do with it?  You’re going to test your restore process regularly, right?  Right?

First you need to get a copy of the file to your machine.  You could easily write a script using the S3 tool to download the file, and I’ll leave that as an exercise for the reader.  I usually just pull it down with S3 Browser whenever I need it.

So once you have it downloaded, you just need to call Smuggler again to import it.  It’s the same call as the export; just change “out” to “in”.  For example, to import our demodb back into our local server, into a new DemoDBRestore database instance, we would say:

Raven.Smuggler.exe in http://localhost:8080 demodb.dump --database=DemoDBRestore

And we would see:


And then we have our restored database up and running in RavenDB:



Now I’m not a backup wizard.  I’m sure there are better ways to do this, with incremental backups and 3-way offsite backups and regular automated restores to a disaster recovery site and all sorts of fancy stuff like that.  The bigger and more critical your application becomes, the more important it becomes to have those solutions in place.  But on day 1, starting out on your project, you need to have something in place, and hopefully this helps you get started.

One of the hot new buzzwords in software development these days is “DevOps”.  It seems that “cloud” and “big data” and “synergy” were getting boring, so now every CTO is trying to get some of that DevOps stuff injected into their process so that they can deploy to production 10 times a day.

But what is DevOps really?  Like most cool trendy buzzword ideas, it grew out of a few smart people with some good ideas that did real, concrete, awesome things, before everyone else saw how successful it was and tried to figure out shortcut formulas to get there.


To me, DevOps is just the next evolution of how successful software teams find better ways to be more effective, and almost all of it is traceable back to the ideas of the Agile Manifesto.

An informal and woefully incomplete list of this evolution would be:

  • For a while, the focus was on getting the complexity of enterprise software under control, and writing that code as quickly and efficiently as possible.  You saw RAD tools, development patterns like object orientation, and design approaches like UML and RUP.  This mostly helped single developers and development teams.
  • Once developers had figured out ways to make individual and independently-developed components easier to build, they had to ensure that their code fit together nicely with everyone else’s, out of which continuous integration was born.  This helped build the stability of interaction and coordination between development teams.
  • As the development teams got faster and more flexible at building stuff, the projects needed a better way to define and manage what the teams should be building and when it could be expected to be done.  Agile project management processes like Scrum filled this need.  This helped improve the communication and effectiveness of the whole project team, from developers to QA to project managers, and even product owners and business users (who were very underrepresented in previous approaches). 
  • Another challenge of building software so quickly, with so many changes along the way, was validating the software.  When you are moving fast and handling changes to the project scope and desired functionality, it’s hard to capture what the actual correct functionality should be, and whether the software you are building meets those requirements.  This brought along several approaches to automated testing, such as unit testing, integration testing, and UI testing.  Developers started using patterns like Test Driven Development, and started working and communicating with the QA team to ensure that there was a shared vision of what the expected quality of the system was.  This increased communication between development and QA resulted in less focus on silly distractions like bug counts and whether something is actually a defect by strict definition, and more focus on everyone working together to build the highest quality system they could.
  • Having semi-conquered many of the problems above, many product teams took the agile ideas a few steps farther to get the business users more involved.  It was always important to make sure that the product being built was what the users actually needed, but more than that, these teams wanted to ensure that they were genuinely solving the user’s problem; this required working with the users, asking questions, getting to the root of the problems, and offering potential solutions, rather than just demanding a requirements document.  To help along this communication and discovery process, ideas such as Behavior Driven Development and Specification By Example were developed to ensure that the business users, the only people who really know what needs to be done, are more involved in the project (serving as Pigs rather than just Chickens, if you will).
  • Now, having handled many of the hard parts of efficiently building software that actually solves the user’s problems, the focus has shifted to how to ship and support that software.  This has involved working with operations teams to streamline the deployment and monitoring of the systems throughout various environments.  And while this “DevOps” approach is solving a long list of long-standing problems in this business, it is, unfortunately, doomed to be the Next Big Thing, the next Game Changer, the next Silver Bullet, and that is a Bad Thing.


[Buzzword]-In-A-Box = Failure-In-A-Box

Notice a pattern there?  It’s like an ever-growing blob, pulling in more people from diverse teams.  It starts with just the developers, and then expands to other development teams, QA, project management, business users, and now operations.  Each step of the process involved building out bridges of communication and cooperation across teams.

But then it goes wrong.  Each of these steps went through a similar buzzword-ification.  Those steps were:

  • Some pioneering teams start to develop new processes to help them succeed with other teams.  Being analytical folks who can react and respond to new information, the most successful processes are developed over time with feedback from the other teams and a critical eye towards continuously improving the process.
  • Other folks notice this success and want to capture it, so the processes start to become more formalized and defined.  While the original processes were very customized, the industry starts to get a better idea of what parts work well in more generic settings.
  • As the patterns become better understood, the repetitive friction points are identified and companies begin to build tools to automate away that friction and give people freedom to focus on the core benefits of the process.
  • More people come looking for the quickest way to get the most value from the concept, and begin to think that the tools and formalized processes are the key to accomplishing that.  You want DevOps?  I'll get you some DevOps!
  • Eventually, large companies are hiring high priced consultants and buying expensive enterprise tools as a part of a corporate-wide initiative to capture that buzzword magic.  This focuses on dogmatically following the process and the tooling.
  • While lip service is paid to the idea of communication, and cross-team meetings are set up, it takes a backseat to the process and tools.  This is because cooperation and communication take a lot of work over time to build, and that is not something you can sell in a box.  In the end, those companies are valuing processes and tools over individuals and interactions, which is the complete reverse of the Agile Manifesto that made these concepts great ideas in the first place.
  • The magic is dead.  Consultants are making money, executives are touting their groundbreaking strategies, and in the trenches the same pathologies remain.  While the masses try to follow the recipe and fail to be effective with it, the groundbreaking folks are off to solve the next problem.


Next Stop: DevOps

So what is this DevOps thing?  In its simplest sense, it is expanding the development team beyond just developers and QA and PMs to include the operations and deployment teams.  The result is what you might call a “product delivery team”.

The first step, like all of the other steps of this evolution, is “stop being a jerk”.  Get yourself OK with that first, and come back when you’re done.  “But I’m not a jerk, some people are just idiots.”  Yeah, that means you’re still a jerk.  As soon as you find yourself saying or even thinking that someone on your team (or worse, one of your users) is an idiot or a moron or useless and their job responsibilities are pointless, you have more deep-seated issues to work out first.  Before you can succeed, you MUST treat everyone on your team and other teams with dignity and respect.  And the hardest part about this is that you can’t just go through the motions; deep down inside you need to actually believe it.  If you think that is too touchy-feely, or you don’t think you can do that, or you don’t want to, or you don’t believe that it’s necessary, that’s fine: go away, and stay the &$@# away from my projects, you miserable person.  The idea of the “smart but difficult-to-work-with developer” is a crock.  If you can’t work effectively with other developers and people on other teams, I don’t care how many books you’ve read, you suck as a developer, in my humble opinion.

OK, the next step is to actually talk to the other people.  Recognize that as hard and important as your job may be, theirs is probably just as hard and just as important, and the only way your team is going to get better is if everyone works to make everyone’s job easier.  So set up a meeting with them, and ask, with a straight face and genuine interest, “what makes your job more difficult, and what can we do to make it easier?”  Then watch their face light up.  I guarantee you they have a list of annoyances and manual steps and remediation steps that waste their time every day, and they will be happy to have an opportunity to gripe about them in a welcoming setting without having to worry about being labeled a “complainer”.  Examples would be “we don’t know when something is deployed or needs to be deployed” or “we don’t know what changes are included in each release” or “every time I have to deploy something I need to copy files all over the place and edit some scripts, and the dev team always forgets to include a certain file”.

Now, you will be tempted to smile and nod and shrug your shoulders and explain that it’s not a perfect system but it has worked so far.  Suppress this urge, write down the concerns, and start talking about them.  Throw around blue sky ideas of possible solutions.  Get an idea of not just what hinders them, but what would actually make them succeed. 

OK, now what is the official title of this part of the DevOps process?  There are probably several names, but I prefer “stop being a jerk, talk to people, and find out how you can solve their problems”.  What tool should you use for this?  Usually Notepad, Evernote/OneNote, or a piece of paper, and an open mind.

Now, before you are done talking to them, pick a few of the most offensive and/or easiest problems and promise to fix them.  In fact, schedule a follow-up meeting before you even leave the room, for a few days or a week or two away, where you will show your half-done solutions and get feedback about whether they actually are going to solve the problem or what you might be missing.  Or, now that you gave them something to visualize, what brand new thing they thought of that would make it 10x better.  Or they may even realize that they were wrong and this is not going to solve the problem, so maybe you need to try something else; this is not a bad thing, so try not to get frustrated.  Instead, repeatedly stress to them that your main goal here is to make their life easier, not just because they want to hear that, but because it’s 100% true.

Sound simple?  It really is.  But that is the core of DevOps.  If you do this first, everything else will start to fall into your lap.  There are a bunch of tools and procedures and techniques that you can leverage to solve the problems, but you need to be sure that you are actually solving the right problems, and to do that you need to build a positive working relationship to help root out and identify the solutions to those problems.  The tools are awesome, but you HAVE to focus on “individuals and interactions over processes and tools”.  But once you have that in place, you can do anything.




OK, so after figuring this all out again for the second time this year, I figured it’s time that I write it down for when I eventually forget again.

So I’m working on adding some changes to the DropkicK library in the Chuck Norris Framework.  DropkicK is an AWESOME tool for deploying just about anything in Windows, and the vast majority of all of the deployment stuff I’ve built over the last year has been heavily based on DropkicK.  Go hear Rob Reynolds talk about the Chuck Norris Framework a lot on Dot Net Rocks and Hanselminutes.

However, while it works great for remotely deploying stuff when your domain account is an administrator on the target server, it doesn’t yet support connecting as a local administrator.  So what needs to be added is the ability to provide a username and password for an administrator on the target machine.

Why? Amazon.  The most common Amazon EC2 setup I encounter with my clients is just a bunch of independent machines, each with their own local user accounts.  Even those that are in a VPC don’t have a domain controller or anything else that would allow the same authenticated user to access multiple machines from the same session.

So that’s something I’m working on now.  Underneath the covers, the DropkicK code is surprisingly straightforward and uses WMI for most things, and those WMI components take an optional user name and password when connecting, so it’s just a matter of exposing the administrator user name and password as deployment parameters, and then threading them through to the WMI objects.  No big deal.

What IS a big deal though, is getting WMI to work with an Amazon VM in the first place.  You’d think it would be pretty easy, but you’d be wrong.  Very wrong.  There are several things to get right, and if you don’t get them right, you’re going to get some of the most useless error messages you have ever seen.

So I just got it working, and here’s how I did it.

Getting Started

First, go create yourself an Amazon VM.  Make it any size you want, but you’ll probably want a Windows base install.

I used Windows 2012 for this, but I went through this same pain earlier in the year with Windows 2008, and it was the same.

Also, when creating your VM, give it a new security group.  Right now you can start with just RDP access, but you’re going to be adding a bunch of firewall exceptions specific to WMI, so you’ll probably want to keep this type of stuff isolated.

OK, so once your VM is up and running, first create yourself an administrative user account, something like “mmooney”.  Then log out as administrator, and go log in as that new user account.  In fact, don’t go back in through the “administrator” account again.  What?  You and every developer on your team likes to use the “administrator” account on every server?  Stop it.  Stop It.  STOP IT.  Bad Bad BAD.

Now go in and install all of the Windows features/roles/whatever-they-call-it-this-year that you normally need (IIS/MSMQ/whatever) and run a Windows Update.


Now, let’s get a baseline of failure.  After a LOT of googling on solving WMI issues, I finally stumbled across a blog post mentioning WBEMTEST, and I was furious that I didn’t know about it sooner.

WBEMTEST is a WMI test client, already installed on your machine.  Go ahead, run wbemtest from a command line, and it will launch it up.

It’s basically the type of thorough test UI that you probably wrote on your third and fourth projects you ever worked on, after you learned it was a useful investment of your time, and you were still young and idealistic enough to spend a few hours building out a cool test tool like this.  But those days are gone; you are now old and slow and lazy, and so many years of custom-built tools have come and been used and gone and been forgotten, washing over you like yet another wave flowing down the endless river of projects that your career has become.  Anyway.  That’s OK, because someone already built this one for you.


So hit Connect, accept the defaults that will point to your own machine, and you’ll get all sorts of fancy options.  Play around with it.  Go ahead.  Play, I say.


Now let’s go to your VM.  Hit connect again, and instead of the default “root\cimv2”, put “\\[YourMachineIP]\root\cimv2”.  Ka BOOM.  Kinda.


Ah, “The RPC server is unavailable”.  Simple enough, clearly accurate, but enormously unhelpful.  Get used to this message folks, it’s going to be following you around for a while.

Amazon Security Settings

The problem here is that you have a few firewalls blocking you from accessing that server.  This is one of those “good” safe-by-default security things, because you can do some nasty stuff if you get WMI access to a machine.  Sure, you’ll still need an administrative username and password, but you really don’t want “guessing some guy’s password” as the only thing between you and p!@wnzge (or whatever those kids call it).

But here we actually want to get in, so first we’ll need to poke a small hole in the Amazon firewall.  And by “small” I mean a giant gaping hole that you could fly a spaceship through.

To access WMI, you will need to open TCP ports 135 and 445.  Oh, and 1024 through 65535.  Yes, that’s right.  WMI will try to connect through one random port in that range, and you can’t easily tell it which one, either from the client or the server.  I spent a lot of time trying several things to get it locked down to a single port or list of ports, but came to the conclusion that it was pretty much not possible.

While you are in here, ask yourself if you will also want to be able to access the machine through a file share (\\[IPAddress]\C$ or something similar).  If so, also add holes for TCP 135-139, UDP 135-139, and UDP 445 (you also want TCP 445, but you did that above).

But PLEASE make sure you restrict the IP range to the servers that you are actually expecting to connect from.  Do NOT leave it open to the whole internet.  That’s just asking for trouble.

When it’s all said and done, your security group should look something like:
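In text form, the inbound rules described above boil down to something like this (the source range here is a made-up example from the documentation IP block; use the range you will actually connect from):

```
TCP  135          203.0.113.0/24   (RPC endpoint mapper)
TCP  445          203.0.113.0/24   (SMB)
TCP  1024-65535   203.0.113.0/24   (WMI dynamic ports)
TCP  135-139      203.0.113.0/24   (file share, optional)
UDP  135-139      203.0.113.0/24   (file share, optional)
UDP  445          203.0.113.0/24   (file share, optional)
```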


Now if you try to telnet to any of these ports, or use WBEMTEST, you’ll probably still get RPC Unavailable, because the Windows Firewall is blocking you.

Windows Settings


So now go into your VM and bring up the firewall settings.  Make sure all of the Windows Management Instrumentation (WMI) rules are enabled for the Public profile (make sure you’re not in Domain or Private, or you’re going to make a bunch of changes and nothing will happen and you’ll be confused and angry and you’ll blame me and my stupid blog post, and that’s no fun, at least not for me). 


Then in Computer Management, drill down to Services and Applications->WMI Control.  Right-click the WMI Control node and select Properties.  On the Security tab, select the Root node and ensure that the Administrators group has access to everything (it probably will).


Then go into the Services window and make sure the Windows Management Instrumentation service is running.


So let’s try again now.


Final Steps, One Crazy Weirdness

Now if we go back to WBEMTEST and try to connect, we get a little farther.


Access Denied is good!  We got through the firewalls and we got a response.  So let’s put in an administrator in the user name and password…


WAT.  “The object exporter specified was not found”.

What the deuce does that mean?  If you look around, you’ll see people having this issue when connecting through a host name, where the solution is to use the IP address to get it to resolve correctly.

But we ARE using the IP, right?  Sort of.  We are using the public IP, not the private IP that the VM would actually use to identify itself. 


In fact, if you go into another Amazon VM and try to connect to this one through its private IP, it actually works.  But that is no help to me when I’m sitting at home in my bunny slippers trying to push a change from my desktop.  To the best of my knowledge you can’t access Amazon VMs through their private IPs from outside the Amazon cloud (at least not without a lot of networking voodoo that is above my pay grade).

BUT, you can also connect to WMI through a host name.  No, not that crazy public DNS host name, that won’t help you any more than the public IP.  Instead you have to use the actual machine name that the machine itself is aware of.


Of course, your local machine is not going to resolve that, but if you add a hosts file entry, you should be all set.  In case you’ve forgotten, that’s at C:\Windows\System32\Drivers\etc\hosts, and you’ll need to open it with a text editor that is running in Administrator mode if you have UAC enabled on your machine.
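The entry itself is just one line: the VM’s public IP followed by its private machine name (both values below are made-up examples; use your VM’s actual public IP and computer name):

```
# C:\Windows\System32\Drivers\etc\hosts
54.210.10.20    WIN-ABC123XYZ
```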


Now if we go back to WBEMTEST and try to connect to “\\[PrivateMachineName]\root\cimv2”, it works!



Well, hopefully that helps some folks.  Or at least helps me again in 6 months when I run into it again.

New To AngularJS?

Not sure what Angular JS is? 

The short version is that it is a client-side Javascript library.  And unlike most of those libraries I’ve fooled around with, it’s very quick to get started with, and makes a lot of stuff really easy.  Pulling data from JSON REST services.  Hash-bang routing.  Two-way data-binding.  All crazy simple.

To quote one fella a few months ago: “It’s like a backbonejs and knockoutjs sandwich covered in awesomesauce.”

The website is here:

The best point-by-point introduction you can get is here:

The best “let’s actually build something” introduction you can get is here:

Even if you’ve worked with Angular JS in the past, go through both sets of videos anyway.  Tons of good info in both of them.

The Problem

Back already?  OK, so now you are familiar with Angular JS, and maybe you’ve built an application or two with it. 

As you build out your applications, you’ll inevitably encounter some scaling problems.  You are probably running into big fat controllers, hundreds of JS and HTML template files, and the like.  And there are plenty of resources just a Google away on how to deal with them. 

But the one problem I have not seen dealt with (to my satisfaction at least) is how to deal with the client-side URLs.  Angular’s router makes it easy to navigate to a URL and have it bind to a specific controller and view.  But what happens when those URLs start to have a few parameters?  What happens when you want to change them?  All of the sample code I’ve seen has the URLs defined in the router config, and in the HTML templates, and maybe in the controllers as well.  Hard-coded, manually-concatenated strings all over the place.  Arg.

I don’t like this, and I want a better solution.

What Are You Talking About?

(The code samples for this are from the Sriracha Deployment System, an open source deployment platform that I’m building for fun, and the latest code is here:

So for example, let’s pretend you want to display project details.  You’ll create an HTML template that looks something like this:

    Project {{project.projectName}} 
    <a ng-href="#/project/edit/{{project.id}}">Edit</a>
    ... more info here ...

And then you would define your router like this:

ngSriracha.config(function ($routeProvider) {
    $routeProvider
        .when("/project/:projectId", {
            templateUrl: "templates/project-view-template.html",
            controller: "ProjectController"
        });
});

And then later, when someone wants to get to that page, you create a link to “/#/project/whateverTheProjectIdIs”:

<tr ng-repeat="project in projectList">
    <a ng-href="/#/project/{{project.id}}">{{project.projectName}}</a>
</tr>


OK, now we have three separate places that we’re referencing the URL, and they are all slightly different, and we have not even really built anything yet.  As this turns into a real application, we’re going to have 20 or 30 or 50 variations of this URL all over the place.  And at least 5 of them will have a typo.  And you will be sad.  Very sad, I say.

If we wanted to change this URL from “/project/:projectId” to “/project/details/:projectId”, it’s going to be a huge pain in the neck.  Couple this with the inevitable hard-to-find bugs that you’re going to encounter because you spelled it “/proiect” instead of “/project”, and you’re wasting all kinds of time.

First Attempt, An Incomplete Solution

So I went through a few attempts at solving this before I settled on something that really worked for me.

First things first, I built a navigation class to store the URLs:

var Sriracha = Sriracha || {};

Sriracha.Navigation = {
    HomeUrl: "/#",
    Home: function () {
        window.location.href = this.HomeUrl;
    },

    GetUrl: function (clientUrl, parameters) {
        var url = clientUrl;
        if (parameters) {
            for (var paramName in parameters) {
                url = url.replace(":" + paramName, parameters[paramName]);
            }
        }
        return "/#" + url;
    },

    GoTo: function (clientUrl, parameters) {
        var url = this.GetUrl(clientUrl, parameters);
        window.location.href = url;
    },

    Project: {
        ListUrl: "/",
        List: function () {
            Sriracha.Navigation.GoTo(this.ListUrl);
        },
        CreateUrl: "/project/create",
        Create: function () {
            Sriracha.Navigation.GoTo(this.CreateUrl);
        },
        ViewUrl: "/project/:projectId",
        View: function (id) {
            Sriracha.Navigation.GoTo(this.ViewUrl, { projectId: id });
        }
    }
};

At least now, when I needed a URL from Javascript, I could just say Sriracha.Navigation.Project.ViewUrl.  When I wanted to actually go to a URL, it would be Sriracha.Navigation.Project.View(  And if I wanted the formatted client-side URL, with the leading /# and the parameters included, I could use the GetUrl() function to format it.  Your router config is also a little less hard-coded:

ngSriracha.config(function ($routeProvider) {
    $routeProvider
        .when(Sriracha.Navigation.Project.ViewUrl, {
            templateUrl: "templates/project-view-template.html",
            controller: "ProjectController"
        });
});
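To make the parameter replacement concrete, here is the GetUrl logic from the navigation object as a standalone function sketch; it just swaps each :name token in the route for its value and tacks on the /# prefix:

```javascript
// Standalone sketch of the GetUrl parameter-replacement logic.
function getUrl(clientUrl, parameters) {
    var url = clientUrl;
    if (parameters) {
        for (var paramName in parameters) {
            // Swap each ":name" token for its value
            url = url.replace(":" + paramName, parameters[paramName]);
        }
    }
    return "/#" + url;
}

// "/project/:projectId" plus { projectId: 42 } gives "/#/project/42"
console.log(getUrl("/project/:projectId", { projectId: 42 }));
```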


This worked pretty well, except for the HTML templates.  That object name is a bit crazy to be burning into your ng-href calls, plus your templates should really only concern themselves with the $scope object.  I’m pretty sure the Dependency Injection Police would come hunt me down with pitchforks if I started calling static global Javascript objects from inside an HTML template.  Instead I ended up with a lot of this:

<tr ng-repeat="project in projectList">
    <a ng-href="{{getViewProjectUrl(project)}}">{{project.projectName}}</a>
</tr>


And then this:

$scope.getViewProjectUrl = function (project) {
    if (project) {
        return Sriracha.Navigation.GetUrl(Sriracha.Navigation.Project.ViewUrl, { projectId: });
    }
};


More Arg.

Better Solution: Navigator Pattern

So I figured what you really want is an object that can be injected into the router configuration and also into the controllers, and then stored in the scope so that it can be referenced by the HTML templates.

I played with Service and Factory and all of that jazz, but really if you want to create an object that is going to get injected into the router config, you need a Provider. 

So I created an object that looks just like the static global Sriracha.Navigation object I had before, but a little more suited to the ways I want to use it.

ngSriracha.provider("SrirachaNavigator", function () {
    this.$get = function () {
        var root = {
            getUrl: function (clientUrl, parameters) {
                var url = clientUrl;
                if (parameters) {
                    for (var paramName in parameters) {
                        url = url.replace(":" + paramName, parameters[paramName]);
                    }
                }
                return "/#" + url;
            },
            goTo: function (clientUrl, parameters) {
                var url = this.getUrl(clientUrl, parameters);
                window.location.href = url;
            }
        };
        root.project = {
            list: {
                url: "/project",
                clientUrl: function () { return root.getUrl(this.url); },
                go: function () { root.goTo(this.url); }
            },
            create: {
                url: "/project/create",
                clientUrl: function () { return root.getUrl(this.url); },
                go: function () { root.goTo(this.url); }
            },
            view: {
                url: "/project/:projectId",
                clientUrl: function (projectId) { return root.getUrl(this.url, { projectId: projectId }); },
                go: function (projectId) { root.goTo(this.url, { projectId: projectId }); }
            }
        };
        return root;
    };
});

Now we can inject this into the router config and use that instead:

ngSriracha.config(function ($routeProvider, SrirachaNavigatorProvider) {
    var navigator = SrirachaNavigatorProvider.$get();
    $routeProvider
        .when(navigator.project.view.url, {
            templateUrl: "templates/project-view-template.html",
            controller: "ProjectController"
        });
});


And then inject it into our controllers:

ngSriracha.controller("ProjectController", function ($scope, $routeParams, SrirachaResource, SrirachaNavigator) {
    $scope.navigator = SrirachaNavigator;
});

And then when we need to reference a URL from our HTML templates:

<tr ng-repeat="project in projectList">
    <a ng-href="{{navigator.project.view.clientUrl(project.id)}}">{{project.projectName}}</a>
</tr>

Much, much nicer in my opinion.
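As a sanity check, here is a stripped-down sketch of just the view route from the navigator above, exercised the same way the template uses it:

```javascript
// Minimal slice of the SrirachaNavigator's project.view route.
var root = {
    getUrl: function (clientUrl, parameters) {
        var url = clientUrl;
        if (parameters) {
            for (var paramName in parameters) {
                url = url.replace(":" + paramName, parameters[paramName]);
            }
        }
        return "/#" + url;
    }
};
root.project = {
    view: {
        url: "/project/:projectId",
        // The same call the template makes: navigator.project.view.clientUrl(project.id)
        clientUrl: function (projectId) { return root.getUrl(this.url, { projectId: projectId }); }
    }
};

console.log(root.project.view.clientUrl(42)); // "/#/project/42"
```

The route string lives in exactly one place (the url property), and the router config, controllers, and templates all read from it.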


But, but but…

I know, you may think this is silly. 

If you don’t have a problem putting your URLs all over the place, good for you, stick with what works for you.

Or, if you think this is an absurd waste of effort because there is a better, generally accepted way to deal with this, awesome.  Please let me know what you’re doing.  I’d love to toss this and use a better way; I just have not been able to find anyone else who’s really solved this yet (to my great surprise).

But until then, if your AngularJS URLs are getting you down, this might work for you.

Good luck.

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub:  GitHub commit checkpoints are referenced throughout the posts.

In the previous post we covered deploying an app to Azure through the Azure web portal and from Visual Studio.  In this post, we’ll show you how to deploy to Azure from PowerShell.  This comes in really handy if you want to be able to deploy right from your build server, and who doesn’t want to do that?

Why now?

So we have not really gotten into much detail about Azure yet, and our app is stupidly simple, so why are we getting into mundane operational gold-plating like automating deployments from a build server?

Because it’s really important to automate your whole build/deploy pipeline as soon as possible.  The later you automate it, the more time you are flushing down the toilet.  Even if you don’t want to deploy automatically from your build server, if you don’t at least boil your whole deployment down to a single one-click script file, you’re stealing from yourself.

When I started out with SportsCommander, I was building all the code locally in Visual Studio and then deploying through the Azure web portal (I know, caveman stuff right?).  Anyhow, pretty soon I got everything built and versioned through a TeamCity build server, and even had the site being FTPed to our shared hosting test server (hello, LFC Hosting), but for production deployments to Azure I would still remote into the build server and upload the latest package from the hard drive to the Azure website.  Part of this was that I wanted to be able to test everything in the test server before deploying to production, and part of this was that I wanted to make sure it didn’t get screwed up, but part of it was also the logical fallacy that I didn’t have time to sit down and spend the time to figure out how to get the Azure deployment working.

And I was wrong.  Way wrong.  Deploying to Azure manually doesn’t take too long, but it adds up.  If it took me 15 minutes to remote into the server, browse to the Azure site, select the package, select the config, and yadda yadda yadda, it only takes a handful of times before you are bleeding whole hours.  If you are deploying several times per week, this can get really expensive.  Not only are you getting fewer fixes and features done, you aren’t even deploying the ones that you do have done, because you don’t have time to deploy and it’s a pain anyway.  Plus, really the only reason we wanted to deploy to the test server first was to smoke test, because deploying again was such a pain that I didn’t want to have to do a whole second deployment to fix a line of code; but if I could fix that line of code and redeploy with one click, I don’t even need to waste time with the test server.

So I didn’t want to spend the time figuring out how to deploy to Azure automatically.  Well, I did want to, but it took me more than 5 minutes to Google it and find the right answer among the plethora of other answers, so it took a while to get done.

Hopefully you found this post in under 5 minutes of Googling so you don’t have any excuses.


If you’ve been Googling around, you may have seen some posts about installing certificates.  Don’t bother.  This approach doesn’t require it, which is good, because that’s no fun.

First, go install the Windows Azure PowerShell Cmdlets.  Go go go.

Second, make sure you can run remote signed scripts in PowerShell.  You only need to do this once, and if you have played around with PowerShell you’ve probably already done this.  Open up PowerShell in Administrator mode (Start Button->type powershell->CTRL+Shift+Enter).  Then type:

Set-ExecutionPolicy RemoteSigned

and hit Enter.  You will get a message along the lines of “OMG Scary Scary Bad Bad Are you Sure!?!?! This is Scary!”.  Hit “Y” to continue.
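If you want to double-check that it took, you can ask PowerShell for the effective policy (built-in cmdlet, nothing to install):

```powershell
# Show the effective execution policy; after the change above this should report RemoteSigned
Get-ExecutionPolicy
```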

Now comes the tricky part.  There is a whole bunch of PowerShell commands and certificate stuff that can get confusing.  Thankfully Scott Kirkland wrote a great blog post and even put a sample script up on GitHub.  I had to make a few tweaks to it to get it working for me, so here goes.

Fire up PowerShell again (it doesn’t need to be Administrator mode any more), browse to your solution directory, and run:

Get-AzurePublishSettingsFile
This will launch a browser window, prompt you to log into your Azure account, and then prompt you to download a file named something fun like “3-Month Free Trial-1-23-2013-credentials.publishsettings”.  Take that file, move it to your solution directory, and name it something less fun like “Azure.publishsettings”.  If you open that fella up, you’ll see something like:

<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
    PublishMethod="AzureServiceManagementAPI"
    Url="https://management.core.windows.net/"
    ManagementCertificate="...">
    <Subscription
      Id="..."
      Name="3-Month Free Trial" />
  </PublishProfile>
</PublishData>

So on to the script.  In the root of your project, create a PowerShell script (just a text file named something like DeployAzure.ps1):

#Modified and simplified version of Scott Kirkland's deployment script (see his blog post and GitHub sample)
$subscription = "3-Month Free Trial" #this is the name from your .publishsettings file
$service = "mmdbazuresample" #this is the name of the cloud service you created
$storageAccount = "mmdbazuresamplestorage" #this is the name of the storage service you created
$slot = "production" #staging or production
$package = "C:\Projects\MMDB.AzureSample\MMDB.AzureSample.Web.Azure\bin\Release\app.publish\MMDB.AzureSample.Azure.cspkg" #path to the package created by the Package step
$configuration = "C:\Projects\MMDB.AzureSample\MMDB.AzureSample.Web.Azure\bin\Release\app.publish\ServiceConfiguration.Cloud.cscfg" #path to the matching config file
$publishSettingsFile = "Azure.publishsettings"
$timeStampFormat = "g"
$deploymentLabel = "PowerShell Deploy to $service"

Write-Output "Subscription: $subscription"
Write-Output "Service: $service"
Write-Output "Storage Account: $storageAccount"
Write-Output "Slot: $slot"
Write-Output "Package: $package"
Write-Output "Configuration: $configuration"

Write-Output "Running Azure Imports"
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
Import-AzurePublishSettingsFile $publishSettingsFile
Set-AzureSubscription -CurrentStorageAccount $storageAccount -SubscriptionName $subscription
Set-AzureService -ServiceName $service -Label $deploymentLabel

function Publish(){
    $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue
    if ($a[0] -ne $null) {
        Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment."
    }
    if ($deployment.Name -ne $null) {
        #Update deployment in place (usually faster, cheaper, won't destroy the VIP)
        Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service.  Upgrading deployment."
        UpgradeDeployment
    } else {
        CreateNewDeployment
    }
}

function CreateNewDeployment(){
    write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"
    $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service
    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid
    write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
}

function UpgradeDeployment(){
    write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"
    $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force
    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid
    write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
}

Write-Output "Create Azure Deployment"
Publish

Run that guy from the PowerShell command line (.\DeployAzure.ps1) and watch the bits fly.  Yes, it will take several minutes to run.

A generic version of this script is up on GitHub.  Again, I borrowed from Scott Kirkland’s version, but his script assumed that your storage account and cloud service had the same name, so I added a separate field for the storage account name.  Also, to alleviate my insanity, I added a little more diagnostic logging.


This was the post that I started out to write, before I decided to backfill with the more beginner stuff.  From here, it’s going to be a little more ad-hoc.

Anyhow, the next post will probably be setting up your own DNS and SSL for your Azure site.

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub.  GitHub commit checkpoints are referenced throughout the posts.

In the previous posts we covered setting up a basic Azure-enabled web site and setting up your Azure account.  In this post, we’ll show you how to deploy that website to Azure.  First we will deploy through the Azure web UI, and then directly from Visual Studio.

Azure Deployment Package

In order to deploy your website to Azure, you’ll need two things, a deployment package and a configuration file:

  • Deployment Package (.cspkg): This is a ZIP file that contains all of your compiled code, plus a bunch of metadata about your application
  • Configuration File (.cscfg): This is an environment-specific configuration file.  We’ll get into this more later, but this lets you pull all of the environment-specific configuration away from your code which is definitely a good idea.
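To make that concrete, here is roughly what a minimal .cscfg looks like (a hand-written sketch; the service name, role name, and setting names are placeholders to adapt to your own project):

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MMDB.AzureSample.Azure"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MMDB.AzureSample.Web">
    <!-- How many VM instances to run for this role -->
    <Instances count="1" />
    <ConfigurationSettings>
      <!-- Environment-specific settings live here, not in your code -->
      <Setting name="ExampleConnectionString" value="..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

You would keep one of these per environment (e.g. ServiceConfiguration.Cloud.cscfg vs. ServiceConfiguration.Local.cscfg), so the same package can be deployed anywhere.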

So go back into the solution that we built in an earlier post (or grab it from GitHub).  To deploy this project, you will need an MMDB.AzureSample.Azure.cspkg file.  But if you search your hard drive, you won’t find one yet.  To create this package, you’ll need to right-click on your Azure project and select “Package”
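If you’d rather not click through Visual Studio every time, the same Package step can also be run from the command line with MSBuild (a sketch; it assumes the Azure SDK is installed, msbuild is on your PATH, and your Azure project uses the SDK’s default targets):

```powershell
# Build the Azure project and drop the .cspkg/.cscfg into bin\Release\app.publish
& msbuild MMDB.AzureSample.Azure.ccproj /t:Publish /p:Configuration=Release
```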



This will create the package, and will even launch a Windows Explorer window at its location (which is helpful, because it can be a little tricky to find):


And there are our two files we need to deploy.

Deploying Through Azure Website

So now that we have our deployment package and config file, let’s go back into the Azure website and log in.


We want to create a new Cloud Services project, so let’s click Create An Item and drill down to Compute->Cloud Service->Quick Create:



We’ll enter our URL, which will end with cloudapp.net, but does not have to be the final name of your website.

We’ll also select a Region/Affinity Group, which is where your servers will be hosted.  Select somewhere closest to where your biggest user base will be.  The different areas in the US don’t make a huge difference, but US vs. Europe vs. Asia can have a big impact.


So then we click Create Cloud Service, and we have ourselves a cloud service:


Now we’ll go ahead and set it up.  Click the little arrow next to the service name, then Upload A New Production Deployment, and start filling in the details:



Enter a name for your deployment.  This is specific to this deployment, and the idea is that it could/should change on subsequent deployments.  I usually try to put the release version number in there.

Also browse for your package and config file, check “Deploy even if one or more roles contain a single instance” (which is in fact the case with our simple little test app), and select Start Deployment:


Click the checkmark button, and then go get yourself a cup of coffee. This is going to take a few minutes.




Oh you’re back already?  Go sneak out for a smoke, we have a few more minutes to go:


This is one of the downsides to Azure: these deployments can take a while, especially for new deployments.  Down the line we’ll show you how to automate this so you can click a button and walk away for a while, but for now just keep watching.

Ok, so after about 5 minutes, it looks like everything is running.  We can click the Instances tab to see more detail:



Back on the Dashboard screen, if you scroll down, you’ll see a whole bunch of useful stuff:


Including the Site URL.  Try clicking that and let’s see what we get:


Hey, there’s our website!  Nice.

Deploying Through Visual Studio

Now that you’re familiar with the Azure website, we can also deploy right from Visual Studio.

Let’s go back into Visual Studio, right click our project, and select Publish:


Now we’ll set up our subscription details.  Under “Choose your subscription”, select “<Manage…>”, and then click the New button, and that will take you to the New Subscription screen:




We’ll need to create a management certificate, so under the dropdown select “<Create…>”, enter a name for the certificate, and click OK.



Now that we have that, upload the certificate to the Azure portal web site.  Click the link to copy the path, and then the link to go to the Azure portal:


Once you’re there, go all the way to the bottom and select Settings->Upload Management Certificate.


Select the certificate (using the path in your clipboard) and click OK.



OK, now that we’ve uploaded our certificate, let’s go back to the New Subscription screen.  Next it’s asking for our subscription ID, which is back in the Azure portal.  Paste that in, give a name for your subscription, and click OK:



And now we’re all the way back to the publish screen.  Now we’ll select that subscription, click Next, and…



…and now it’s asking to create an Azure storage account?  Why? 

The reason is that when you deploy right from the Azure portal website, it does everything all at once on the server.  However, when you deploy from Visual Studio (or from PowerShell, which we’ll get into later), it first uploads the package to the storage account and then tells Azure to deploy the package from there.
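That two-step flow is essentially what happens under the hood; conceptually it boils down to something like this (a simplified sketch using the classic Azure Service Management cmdlets; the account, service, and file names here are placeholders):

```powershell
# Tell the cmdlets which storage account to stage packages in
Set-AzureSubscription -CurrentStorageAccount "mmdbazuresamplestorage" -SubscriptionName "3-Month Free Trial"

# When -Package is a local path, the cmdlet first uploads it to the current
# storage account, then tells Azure to deploy the package from there
New-AzureDeployment -ServiceName "mmdbazuresample" -Slot "production" `
    -Package "MMDB.AzureSample.Azure.cspkg" `
    -Configuration "ServiceConfiguration.Cloud.cscfg" `
    -Label "v1.0"
```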

Let’s enter the name of our new storage account and location (yes, it’s a good idea to use the same location as the Region/Affinity Group you entered above) and click OK.

Now we have some nice default settings to deploy with, and so let’s click Publish:


It may ask you to replace the existing deployment, and that is OK:


Now this will run for a while, and will take about as long as the deployment from the website.


Once that’s done, click the Website URL, and check out your fancy new website:



What to be careful of

  • When deploying through the website, make sure you build and package each time before you deploy.  If you are deploying from Visual Studio, it will automatically package everything for you.  However, if you build your project but forget to package it, and then upload the package to the Azure website, you’ll be uploading an older version of the package.  This is the type of thing that can cause a lot of time wasted trying to figure out why your new changes are not appearing.
  • Also, as a general rule, deploying right from your development environment is bad bad bad.  All of the awful reasons are long enough for another blog post, but in short you really want to have an independent build server which is responsible for pulling the latest code from source control, building it, running any tests it can, and publishing it out to Azure.
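Putting those two points together, the fix for both is a single script that always packages before it deploys, so a stale package is impossible.  A minimal one-click version might look like this (a sketch; the project file name and the DeployAzure.ps1 script from the earlier PowerShell post are assumptions to adapt):

```powershell
# One-click deploy: always rebuild and repackage, then run the deployment script
& msbuild "MMDB.AzureSample.Azure.ccproj" /t:Publish /p:Configuration=Release
if ($LASTEXITCODE -ne 0) { throw "Build/package failed; aborting deployment" }
.\DeployAzure.ps1
```

On a build server, this same script becomes the publish step, which gets you out of the deploy-from-your-dev-box business entirely.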


Next, we’ll cover another key part of Azure deployments, which is deploying via command line.  While deploying from Visual Studio and the Azure portal is easy when you’re getting started, eventually you’re going to want to automate this from a build server.

This is an ongoing series on Windows Azure.  The series starts here, and all code is located on GitHub.  GitHub commit checkpoints are referenced throughout the posts.

In the last post we covered setting up a basic Azure-enabled web site.  In this post, we’ll show you how to set up your Azure account.  Next, we’ll cover deploying the project to Azure.

So head on over to the Windows Azure website and click the Free Trial link:


Sign in with your Microsoft/Live/Passport/Metro/Whatever user ID.  If you don’t have one, create one:


Why yes, I would like the 90-Day Free Trial:


Once you do a little validation of your phone and credit card, they will go ahead and set up your account.


After a minute or two, click the Portal link:


This takes you to the Azure portal, the main place you want to be for managing your Azure application.


And there we are.  Poke around a little bit, there is a lot of interesting stuff in here.  In the next post, we’ll create a Cloud Service and deploy our sample application there.