One of the hot new buzzwords in software development these days is “DevOps”.  It seems that “cloud” and “big data” and “synergy” were getting boring, so now every CTO is trying to get some of that DevOps stuff injected into their process so that they can deploy to production 10 times a day.

But what is DevOps really?  Like most cool trendy buzzword ideas, it grew out of a few smart people with some good ideas who did real, concrete, awesome things, before everyone else saw how successful it was and tried to figure out shortcut formulas to get there.

History

To me, DevOps is just the next evolution of how successful software teams find better ways to be more effective, and almost all of it is traceable back to the ideas of the Agile Manifesto.

An informal and woefully incomplete list of this evolution would be:

  • For a while, the focus was getting the complexity of enterprise software under control, and writing that code as quickly and efficiently as possible.  You saw RAD tools, development patterns like object orientation, and design approaches like UML and RUP.  This mostly helped single developers and development teams.
  • Once developers had figured out ways to make individual and independently-developed components easier to build, they had to ensure that their code fit together nicely with everyone else’s, out of which continuous integration was born.  This helped build the stability of interaction and coordination between development teams.
  • As the development teams got faster and more flexible at building stuff, they needed better ways to define and manage what they should be building, and when it could be expected to be done.  Agile project management processes like Scrum filled this need.  This helped improve the communication and effectiveness of the whole project team, from developers to QA to project managers, and even product owners and business users (who were very underrepresented in previous approaches).
  • Another challenge of building software so quickly with so many changes along the way was validating the software.  When you are moving fast enough, and handling changes to the project scope and desired functionality, it’s hard to capture what the actual correct functionality should be, and whether the software you are building meets those requirements.  This brought along several approaches to automated testing, such as unit testing, integration testing, and UI testing.  Developers started using patterns like Test Driven Development, and started working with and communicating with the QA team to ensure that there was a shared vision of what the expected quality of the system was.  This increased communication between development and QA resulted in less focus on silly distractions like bug counts and whether something is actually a defect by strict definition, and more focus on everyone working together to build the highest quality system they could.
  • Having semi-conquered many of the problems above, many product teams took the agile ideas a few steps farther to get the business users more involved.  It was always important to make sure that the product being built was what the users actually needed, but more importantly, these teams wanted to ensure that they were genuinely solving the user’s problem; this required working with the users, asking questions, getting to the root of the problems, and offering potential solutions, rather than just demanding a requirements document.  To help along this communication and discovery process, ideas such as Behavior Driven Development and Specification By Example were developed to ensure that the business users, the only people who really know what needs to be done, are more involved in the project (serving as Pigs rather than just Chickens, if you will).
  • Now, having handled many of the hard parts of efficiently building software that actually solves the user’s problems, there has been a focus on how to ship and support that software.  This has involved working with operations teams to streamline the deployment and monitoring of the systems throughout various environments.  And while this “DevOps” approach is solving a long list of long-standing problems in this business, it is, unfortunately, doomed to be The Next Big Thing, the next Game Changer, the next Silver Bullet, and that is a Bad Thing.

 

[Buzzword]-In-A-Box = Failure-In-A-Box

Notice a pattern there?  It’s like an ever-growing blob, pulling in more people from diverse teams.  It started with just the developers, and then expanded to other development teams, QA, project management, business users, and now operations.  Each step of the process involved building out bridges of communication and cooperation across teams.

But then it goes wrong.  Each of these steps went through a similar buzzword-ification.  Those steps were:

  • Some pioneering teams start to develop new processes to help them succeed with other teams.  Being analytic folks who can react and respond to new information, the most successful ones are developed over time with feedback from the other teams and a critical eye towards continuously improving the process.
  • Other folks notice this success and want to capture it, so the processes start to become more formalized and defined.  While the original processes were very customized, the industry starts to get a better idea of what parts work well in more generic settings.
  • As the patterns become better understood, the repetitive friction points are identified and companies begin to build tools to automate away that friction and give people freedom to focus on the core benefits of the process.
  • More people come looking for the quickest way to get the most value from the concept, and begin to think that the tools and formalized processes are the key to accomplishing that.  You want DevOps?  I'll get you some DevOps!
  • Eventually, large companies are hiring high priced consultants and buying expensive enterprise tools as a part of a corporate-wide initiative to capture that buzzword magic.  This focuses on dogmatically following the process and the tooling.
  • While lip-service is paid to the idea of communication, and cross-team meetings are set up, it takes a backseat to the process and tools.  This is because cooperation and communication take a lot of work over time to build, and that is not something you can sell in a box.  In the end, those companies are valuing processes and tools over individuals and interactions, which is the complete reverse of the Agile Manifesto values that drove the great ideas in the first place.
  • The magic is dead.  Consultants are making money, executives are touting their groundbreaking strategies, and in the trenches the same pathologies remain.  While the masses try to follow the recipe and fail to be effective with it, the groundbreaking folks are off solving the next problem.

 

Next Stop: DevOps

So what is this DevOps thing?  In its simplest sense, it is expanding the development team beyond just developers and QA and PMs to include the operations and deployment teams.  The result is what you might call a “product delivery team”.

The first step, like all of the other steps of the evolution, is “stop being a jerk”.  Get yourself OK with that first, and come back when you’re done.  “But I’m not a jerk, some people are just idiots”.  Yeah, that means you’re still a jerk.  As soon as you find yourself saying or even thinking that someone on your team (or worse, one of your users) is an idiot or a moron or useless and their job responsibilities are pointless, you have more deep-seated issues to work out first.  Before you can succeed, you MUST treat everyone on your team and other teams with dignity and respect.  And the hardest part about this is that you can’t just go through the motions; deep down inside you need to actually believe it.  If you think that is too touchy-feely, or you don’t think you can do that, or you don’t want to, or you don’t believe that it’s necessary, that’s fine: Go away, and stay the &$@# away from my projects, you miserable person.  The idea of the “smart but difficult-to-work-with developer” is a crock.  If you can’t work effectively with other developers and people on other teams, I don’t care how many books you’ve read, you suck as a developer, in my humble opinion.

OK, the next step is to actually talk to the other people.  Recognize that as hard and important as your job may be, theirs is probably just as hard and just as important, and the only way your team is going to get better is if everyone works to make everyone’s job easier.  So set up a meeting with them, and ask, with a straight face and genuine interest, “what makes your job more difficult, and what can we do to make it easier?”  Then watch their face light up.  I guarantee you they have a list of annoyances and manual steps and remediation steps that waste their time every day, and they will be happy to have an opportunity to gripe about them in a welcoming setting without having to worry about being labeled a “complainer”.  Examples would be “we don’t know when something is deployed or needs to be deployed” or “we don’t know what changes are included in each release” or “every time I have to deploy something I need to copy files all over the place and edit some scripts and the dev team always forgets to include a certain file”.

Now, you will be tempted to smile and nod and shrug your shoulders and explain that it’s not a perfect system but it has worked so far.  Suppress this urge, write down the concerns, and start talking about them.  Throw around blue-sky ideas for possible solutions.  Get an idea of not just what hinders them, but what would actually make them succeed.

OK, now what is the official title of this part of the DevOps process?  There are probably several names, but I prefer “stop being a jerk, talk to people, and find out how you can solve their problems”.  What tool should you use for this?  Usually Notepad, Evernote/OneNote, or a piece of paper, and an open mind.

Now, before you are done talking to them, pick a few of the most offensive and/or easiest problems and promise to fix them.  In fact, schedule a follow-up meeting before you even leave the room, for a few days or a week or two away, where you will show your half-done solutions and get feedback about whether they are actually going to solve the problem, or what you might be missing.  Or, now that you’ve given them something to visualize, what brand new thing they thought of that would make it 10x better.  Or even that they now realize they were wrong, and this is not going to solve the problem, so maybe you need to try something else; this is not a bad thing, so try not to get frustrated.  Instead, repeatedly stress to them that your main goal here is to make their life easier, not just because they want to hear that, but because it’s 100% true.

Sound simple?  It really is.  But that is the core of DevOps.  If you do this first, everything else will start to fall into your lap.  There are a bunch of tools and procedures and techniques that you can leverage to solve the problems, but you need to be sure that you are actually solving the right problems, and to do that you need to build a positive working relationship to help root out and identify the solutions to those problems.  The tools are awesome, but you HAVE to focus on “individuals and interactions over processes and tools”.  But once you have that in place, you can do anything.

 

 

New To AngularJS?

Not sure what Angular JS is? 

The short version is that it is a client-side Javascript library.  And unlike most of those libraries I’ve fooled around with, it’s very quick to get started with, and makes a lot of stuff really easy.  Pulling data from JSON REST services.  Hash-bang routing.  Two-way data-binding.  All crazy simple.

To quote one fella a few months ago: “It’s like a backbonejs and knockoutjs sandwich covered in awesomesauce."

The website is here: http://angularjs.org/

The best point-by-point introduction you can get is here: http://www.egghead.io/

The best “let’s actually build something” introduction you can get is here: http://tekpub.com/products/angular

Even if you’ve worked in Angular JS in the past, go through the egghead.io and TekPub videos anyway.  Tons of good info in both of them.

The Problem

Back already?  OK, so now you are familiar with Angular JS, and maybe you’ve built an application or two with it.

As you build out your applications, you’ll inevitably encounter some scaling problems.  You are probably running into big fat controllers, hundreds of JS and HTML template files, and the like.  And there are plenty of resources just a google away on how to deal with them.

But the one problem I have not seen dealt with (to my satisfaction at least) is how to deal with the client-side URLs.  Angular’s router makes it so easy to navigate to a URL and have it bind to a specific controller and view.  But what happens when those URLs start to have a few parameters?  What happens when you want to change them?  All of the sample code I’ve seen has the URLs defined in the router config, and in the HTML templates, and maybe in the controllers as well.  Hard-coded, manually-concatenated strings all over the place.  Arg.

I don’t like this, and I want a better solution.

What Are You Talking About?

(The code samples for this are from the Sriracha Deployment System, an open source deployment platform that I’m building for fun, and the latest code is here: https://github.com/mmooney/Sriracha.Deploy)

So for example, let’s pretend you want to display project details.  You’ll create an HTML template that looks something like this:

<h2>
    Project {{project.projectName}} 
    <a ng-href="#/project/edit/{{project.id}}">
        (edit)
    </a>
</h2> 
... more info here ...

And then you would define your router like this:

ngSriracha.config(function ($routeProvider) {
    $routeProvider
        .when("/project/:projectId", {
            templateUrl: "templates/project-view-template.html",
            controller: "ProjectController"
        })

And then later, when someone wants to get to that page, you create a link to “/#/project/whateverTheProjectIdIs”:

<tr ng-repeat="project in projectList">
    <td>
        <a ng-href="/#/project/{{project.id}}">
            {{project.projectName}}
        </a>
    </td>
</tr>

 

OK, now we have three separate places that we’re referencing the URL, and they are all slightly different, and we have not even really built anything yet.  As this turns into a real application, we’re going to have 20 or 30 or 50 variations of this URL all over the place.  And at least 5 of them will have a typo.  And you will be sad.  Very sad, I say.

If we wanted to change this URL from “/project/:projectId” to “/project/details/:projectId”, it’s going to be a huge pain in the neck.  Couple this with the inevitable hard-to-find bugs that you’re going to encounter because you spelled it “/proiect” instead of “/project”, and you’re wasting all kinds of time.

First Attempt, An Incomplete Solution

So I went through a few attempts at solving this before I settled on something that really worked for me.

First things first, I built a navigation class to store the URLs:

var Sriracha = Sriracha || {};

Sriracha.Navigation = {
    HomeUrl: "/#",
    Home: function () {
        window.location.href = this.HomeUrl;
    },

    GetUrl: function (clientUrl, parameters) {
        var url = clientUrl;
        if (parameters) {
            for (var paramName in parameters) {
                url = url.replace(":" + paramName, parameters[paramName]);
            }
        }
        return "/#" + url;
    },

    GoTo: function (clientUrl, parameters) {
        var url = this.GetUrl(clientUrl, parameters);
        window.location.href = url;
    },

    Project : {
        ListUrl: "/",
        List: function () {
            Sriracha.Navigation.GoTo(this.ListUrl);
        },
        CreateUrl: "/project/create",
        Create: function () {
            Sriracha.Navigation.GoTo(this.CreateUrl)
        },
        ViewUrl: "/project/:projectId",
        View: function (id) {
            Sriracha.Navigation.GoTo(this.ViewUrl, { projectId: id });
        },

 

At least now when I needed a URL from Javascript, I could just say Sriracha.Navigation.Project.ViewUrl.  When I want to actually go to a URL, it would be Sriracha.Navigation.Project.View(project.id);.  And if I wanted the formatted client-side URL, with the leading /# and the parameters included, I could use the GetUrl() function to format it.  Your router config is a little less hard-coded:

ngSriracha.config(function ($routeProvider) {
    $routeProvider
        .when(Sriracha.Navigation.Project.ViewUrl, {
            templateUrl: "templates/project-view-template.html",
            controller: "ProjectController"
        })

 

This worked pretty well, except for HTML templates.  This is a bit much to be burning into your ng-href calls, plus your templates should really isolate their concerns to the $scope object.  I’m pretty sure the Dependency Injection Police would come hunt me down with pitchforks if I started calling static global Javascript objects from inside an HTML template.  Instead I ended up with a lot of this:

<tr ng-repeat="project in projectList">
    <td>
            <a ng-href="{{getProjectViewUrl(project.id)}}">
                {{project.projectName}}
            </a>
    </td>
</tr>

 

And then this:

$scope.getProjectViewUrl = function (projectId) {
    if (projectId) {
       return Sriracha.Navigation.GetUrl(Sriracha.Navigation.Project.ViewUrl, { projectId: projectId });
    }
}

 

More Arg.

Better Solution: Navigator Pattern

So I figured what you really want is an object that can be injected into the router configuration and also into the controllers, and then stored in the scope so that it can be referenced by the HTML templates.

I played with Service and Factory and all of that jazz, but really if you want to create an object that is going to get injected into the router config, you need a Provider. 

So I created an object that looks just like the static global SrirachaNavigation object I had before, but a little more suited for the ways I want to use it.

ngSriracha.provider("SrirachaNavigator", function () {
    this.$get = function () {
        var root = {
            getUrl: function (clientUrl, parameters) {
                var url = clientUrl;
                if (parameters) {
                    for (var paramName in parameters) {
                        url = url.replace(":" + paramName, parameters[paramName]);
                    }
                }
                return "/#" + url;
            },
            goTo: function (clientUrl, parameters) {
                var url = this.getUrl(clientUrl, parameters);
                window.location.href = url;
            }
        };
        root.project = {
            list: {
                url: "/project",
                clientUrl: function () { return root.getUrl(this.url) },
                go: function() { root.goTo(this.url) }
            },
            create: {
                url: "/project/create",
                clientUrl: function () { return root.getUrl(this.url) },
                go: function() { root.goTo(this.url) }
            },
            view: {
                url: "/project/:projectId",
                clientUrl: function (projectId) { return root.getUrl(this.url, { projectId: projectId }) },
                go: function(projectId) { root.goTo(this.url, { projectId: projectId}) }
            },

Now we can inject this into the router config and use that instead:

ngSriracha.config(function ($routeProvider, SrirachaNavigatorProvider) {
    var navigator = SrirachaNavigatorProvider.$get();
    $routeProvider
        .when(navigator.project.view.url, {
            templateUrl: "templates/project-view-template.html",
            controller: "ProjectController"
        })

 

And then inject it into our controllers:

ngSriracha.controller("ProjectController", function ($scope, $routeParams, SrirachaResource, SrirachaNavigator) {
    $scope.navigator = SrirachaNavigator;

And then when we need to reference a URL from your HTML:

<tr ng-repeat="project in projectList">
    <td>
        <a ng-href="{{navigator.project.view.clientUrl(project.id)}}">
            {{project.projectName}}
        </a>
    </td>
</tr>

Much, much nicer in my opinion.

 

But, but but…

I know, you may think this is silly. 

If you don’t have a problem putting your URLs all over the place, good for you, stick with what works for you.

Or, if you think this is an absurd waste of effort, because there is a better generally accepted way to deal with this, awesome.  Please let me know what you’re doing.  I’d love to toss this and use a better way, I just have not been able to find anyone else who’s really solved this yet (to my great surprise).

But until then, if your AngularJS URLs are getting you down, this might work for you.

Good luck.

If I had a nickel for every time our deployment strategy for a new or different environment was to edit a few config files, run some batch files, edit some more config files, and then watch it all go down in a steaming pile of failure, I would buy a LOT of Sriracha.


(Picture http://theoatmeal.com/)

Here’s a config file.  Let’s say we need to edit that connection string:

<Setting name="ConnectionString" 
value="Data Source=(local); Initial Catalog=SportsCommander; Integrated Security=true;" />

Now let’s say we are deploying to our QA server.  So after we deploy, we fire up our handy Notepad, and edit it:

<Setting name="ConnectionString" 
value="Data Source=SCQAServ; Initial Catalog=SportsCommander; Integrated Security=true;" />

OK, good.  Actually, not good.  The server name is SCQASrv, not SCQAServ.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; Integrated Security=true;" />

OK better.  But wait, integrated security works great in your local dev environment, but in QA we need to use a username and password.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; UserID=qasite; Password=&SE&RW#$" />

OK cool.  Except you can’t put & in an XML file.  So we have to encode that.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; UserID=qasite; Password=&amp;SE&amp;RW#$" />

And you know what?  It’s User ID, not UserID.

<Setting name="ConnectionString" 
value="Data Source=SCQASrv; Initial Catalog=SportsCommander; User ID=qasite; Password=&amp;SE&amp;RW#$" />

OK, that’s all there is to it!  Let’s do it again tomorrow.  Make sure you don’t burn your fingers on this blistering-fast development productivity.

I know this sounds absurd, but the reality is that for a lot of people, this really is their deployment methodology.  They might have production deployments automated, but their lower environments (DEV/QA/etc.) are full of manual steps.  Or better yet, they have automated their lower environments because they deploy there every day, but their production deployments are manual because they only do them once per month.

And you know what I’ve learned, the hard and maddeningly painful way?  Manual processes fail.  Consistently.  And more importantly, it can’t be avoided.

Storytime

A common scenario is a developer or an operations person (but of course never both at the same time, that would ruin the blame game) being charged with deploying an application.  After many iterations, the deployment process has been clearly defined as 17 manual steps.  This has been done enough times that the whole process is fully documented, with a checklist, and the folks running the deployment have done it enough times that they could do it in their sleep.

The only problem is that in the last deployment, one of the files didn’t get copied.  The time before that, the staging file was copied instead of the production file.  And the time before that, they put a typo into the config.

Is the deployer an idiot?  No, as a matter of fact, the reason that he or she was entrusted with such an important role was that he or she was the most experienced and disciplined person on the team and was intimately familiar with the workings of the entire system.

Were the instructions wrong?  Nope, the instructions would have worked fine if they were followed to the letter.

Was the process new?  No again, the same people have been doing this for a year.

At this point, the managers are exasperated, because no matter how much effort we put into formalizing the process, no matter how much documentation and how many checklists, we’re still getting failures.  It’s hard for the managers not to assume that the deployers are morons, and the deployers are faced with the awful reality of going into every deployment knowing that it WILL be painful, and they WILL get blamed.

Note to management: Good people don’t stick around for this kind of abuse.  Some people will put up with it.  But trust me, you don’t want those people.

The lesson

The kick in the pants is, people are human.  They make mistakes.  A LOT of mistakes.  And when you jump down their throat on every mistake, they learn to stop making mistakes by not doing anything.

This leads us to Mooney’s Law Of Guaranteed Failure (TM):

In the software business, every manual process will suffer at least a 10% failure rate, no matter how smart the person executing the process is.  No amount of documentation or formalization will truly fix this; the only resolution is automation.

 

So the next time Jimmy screws up the production deployment, don’t yell at him (or sneer behind his back) “how hard is it to follow the 52-step 28-page instructions!”  Just remember that it is virtually impossible.

Also, step back and look at your day-to-day development process.  Almost everything you do during the day besides writing code is a manual process full of failure (coding is too, but that’s what you’re actually getting paid for).  Like:

  • When you are partially checking in some changes to source control but trying to leave other changes checked out
  • When you need to edit a web.config connection string every time you get latest or check in
  • When you are interactively merging branches
  • When you are doing any deployment that involves editing a config or running certain batch files in order or entering values into an MSI interface, or is anything more than “click the big red button”
  • When you are setting up a new server and creating users or editing folder permissions or creating MSMQ queues or setting up IIS virtual directories.
  • When you are copying your hours from Excel into the ridiculously fancy but still completely unusable timesheet website
  • When, instead of entering your hours into a timesheet website, you are emailing them to somebody
  • When you are trying to figure out which version of “FeatureRequirements_New_Latest_Latest.docx” is actually the “latest”
  • When you are deploying database changes by trying to remember which tables you added to your local database or which scripts have or have not been run against production yet

It’s actually easier to find these things than you think.  The reason is, again, that it is just about everything you do all day besides coding.  It’s all waste.  It’s all manual.  And it’s all guaranteed to fail.  Find a way to take that failure out of your hands and bathe it in the white purifying light of automation.  Sure it takes time, but with a little time investment, you’ll be amazed how much time you have when you are not wasting it on amazingly stupid busywork and guaranteed failure all day.

So your application needs to send emails.  So you start with this:

var message = new MailMessage();
message.From = new MailAddress("donoreply@example.com");
message.To.Add(new MailAddress("jimbob@example.com"));
message.Subject = "Password Reset";
message.Body = "Click here to reset your password: "
                      + "http://example.com/resetpassword?token=" 
                      + resetToken;
smtpClient.Send(message);

 

And that works OK, until you want to send some more detailed, data-driven emails. 

Duh, that’s what StringBuilder is for

var message = new MailMessage();
message.From = new MailAddress("donoreply@example.com");
message.To.Add(new MailAddress("jimbob@example.com"));
message.Subject = "Order Confirmation";

StringBuilder body = new StringBuilder();
body.AppendLine(string.Format("Hello {0} {1},", 
                    customer.FullName));
body.AppendLine();
body.AppendLine(string.Format("Thank you for your order.  "
                    + "Your order number is {0}.", 
                    order.OrderNumber));
body.AppendLine("Your order contained:");
foreach(var item in order.LineItems)
{
    body.AppendLine(string.Format("\t-{0}: {1}x${2:c}=${3:c}",
                    item.ProductName,item.Quanity,
                    item.UnitPrice,item.SubTotal));
}
body.AppendLine(string.Format("Order Total: ${0:c}", 
                    order.OrderTotal));
message.Body = body.ToString();

smtpClient.Send(message);

Yes, this is certainly the wrong way to do it.  It’s not flexible, you have to change code every time the email content changes, and it’s just plain ugly.

On the other hand, much of the time (especially early in a project), this is just fine.  Step 1 is admitting you have a problem, but step 0 is actually having a problem in the first place to admit to.  If this works for you, run with it until it hurts.

I have a lot of code running this way in production right now, and it works swimmingly, because if there’s a content change I can code it, build it, and deploy it to production in 15 minutes.  If your build/deploy cycle is fast enough, there is virtually no difference between content changes and code changes.

 

Back to the real problem please

But let’s say you really do want to be more flexible, and you really do need to be able to update the email content without rebuilding/redeploying the whole world.

How about a tokenized string?  Something like:

string emailTemplate = "Hello {FirstName} {LastName},"
                       + "\r\n\r\nThank you for your order...";

That could work, and I’ve started down this path many times before, but looping messes you up.  If you needed to include a list of order line items, how would you represent that in a template?
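
To make that concrete, here’s a minimal sketch of the token-replacement approach (the FillTemplate helper and the token names are just made up for illustration, and it assumes the usual System.Collections.Generic using).  It’s fine for flat values, but there’s no obvious token that means “repeat this chunk once per order line item”:

static string FillTemplate(string template, IDictionary<string, string> tokens)
{
    // Replace each {TokenName} placeholder with its value
    foreach (var token in tokens)
    {
        template = template.Replace("{" + token.Key + "}", token.Value);
    }
    return template;
}

//...
string body = FillTemplate(emailTemplate, new Dictionary<string, string>
{
    { "FirstName", "Ty" },
    { "LastName", "Webb" }
});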

What else?  If you are in 2003, the obvious answer is to build an XSLT stylesheet.  Serialize that data object as XML, jam it into your XSLT processor, and BAM, you have nicely formatted HTML email content.  Except writing those stylesheets is a nightmare.  Maintaining them is even worse.  If you don’t have interns that you hate, you’re going to be stuck with it.

So yes, of course you could use XSLT.  Or you could just shoot some heroin.  Both will make you feel good very briefly in the beginning, but both will spiral out of control and turn your whole life into a mess.  Honestly, I would not recommend either.

OK, so how about some MVC templatey type things?

The whole templating idea behind XSLT is actually a good one; it’s just the execution that is painful.  We have an object, we have a view that contains some markup and some presentation-specific logic, we put them all into a view engine blender and get back some silky smooth content.

If you are in an ASP.NET MVC web application, you could use the Razor view engine (or the WebForms view engine, if you’re into that kinda thing) to run the object through the view engine and get some HTML, but that plumbing is a little tricky.  Also, what if you are not in an MVC web app, or any web app at all?  If you are looking to offload work from your website to background processes, moving all of your email sending to a background Windows service is a great start, but it’s tough to extract out and pull in that Razory goodness.

Luckily some nice folks extracted all of that out into a standalone library (https://github.com/Antaris/RazorEngine), so you can execute Razor views against objects in a few lines of code:

string view = "Hello @Model.CustomerFirstName @Model.CustomerLastName, thank you for your order.";
var model = new { CustomerFirstName="Ty", CustomerLastName="Webb" };
var html = RazorEngine.Razor.Parse(view, model);
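
And the looping problem that sinks the tokenized-string approach is handled naturally by Razor.  Here’s a rough sketch (the OrderModel and LineItem classes are just made-up examples for illustration, not anything from the library):

public class LineItem
{
    public string ProductName { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

public class OrderModel
{
    public string CustomerFirstName { get; set; }
    public string CustomerLastName { get; set; }
    public List<LineItem> LineItems { get; set; }
}

//...
string view = @"Hello @Model.CustomerFirstName @Model.CustomerLastName, thank you for your order.
@foreach(var item in Model.LineItems) {
    <div>@item.ProductName: @item.Quantity x @item.UnitPrice</div>
}";

var model = new OrderModel
{
    CustomerFirstName = "Ty",
    CustomerLastName = "Webb",
    LineItems = new List<LineItem>
    {
        new LineItem { ProductName = "Golf Balls", Quantity = 2, UnitPrice = 9.99m }
    }
};
var html = RazorEngine.Razor.Parse(view, model);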

 

That is awful pretty.  But of course we still need to wrap that up with some SMTP code, so let’s take it a step farther.
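
Just to spell out what that wrapping looks like, here’s a rough sketch of the glue code you’d otherwise write yourself: render the view with RazorEngine, then hand the HTML to plain old System.Net.Mail (reusing the view and model from above, and assuming your SMTP settings live in app.config/web.config):

// Render the Razor view into HTML
var html = RazorEngine.Razor.Parse(view, model);

// Then send it with the standard SMTP bits
var message = new MailMessage();
message.From = new MailAddress("donoreply@example.com");
message.To.Add(new MailAddress("jimbob@example.com"));
message.Subject = "Order Receipt";
message.Body = html;
message.IsBodyHtml = true;

using (var smtpClient = new SmtpClient())
{
    smtpClient.Send(message);
}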

Razor Chocolate + SMTP Peanut Butter = MMDB.RazorEmail

Let’s say I’m lazy.  Let’s say I just want to do this:

new RazorEmailEngine().SendEmail("Order Receipt", model, view, 
                        new List<string>{"jimbob@example.com"}, 
                        "donoreply@example.com");

I couldn’t find anything that was this simple, so I built one.  Here it is: https://github.com/mmooney/MMDB.RazorEmail

Just import that into your project via NuGet (https://nuget.org/packages/MMDB.RazorEmail/), and you can start sending emails right away. 

It works in both web apps and Windows apps (we wrote it specifically because we needed to support this from a Windows Service).

It can use your app.config/web.config settings, or you can pass in different values.  It also has some rudimentary attachment handling that will need to be improved.

Take a look, try it out, and let me know how it works for you at @mooneydev

Intro

So enums are awesome. They greatly simplify your ability to restrict data fields in a clear and self-descriptive way. The C# implementation of enums has alleviated the need for all of those awful “item codes”, magic numbers, and random unenforced constants that can be the source of so many bugs.

However, nothing is perfect, and so there can be some rough edges when working with enums. Everyone ends up writing a bunch of plumbing code, or taking shortcuts that are not as type-safe as they could be. MMDB encountered this on several projects at the same time, so we put it all in a helper library (yes, this met our rigid definition of worthwhile reusability). Recently we put this up on GitHub (https://github.com/mmooney/MMDB.Shared) and NuGet (http://nuget.org/packages/MMDB.Shared), so please help yourself.

Is this groundbreaking? Absolutely not. In fact, some may not even consider it all that useful.  But it’s been helpful for us, and it’s one of the first NuGet packages I pull into each new application, so hopefully it can help others simplify some of their code.

Anyhow, let’s get down to it. We’ll run through the problems we encountered with enums, and what we’ve done to solve them.

Parsing Enums

One of the most annoying things with enums is trying to parse random input into a strictly typed enum value. Say you have a string that has come from a database or user input, and you think the value should align nicely. You end up with something like this:

string input = "SomeInputValue";
var enumValue = (EnumCustomerType)Enum.Parse
                 (typeof(EnumCustomerType), input);

 

Ugh. First, you have to pass in the type and cast the result, which is ugly, and should never be necessary since generics were introduced in .NET 2.0. Also, you have to hope that “SomeInputValue” is a valid value for the enum, otherwise you get a wonderful System.ArgumentException, with a  message like “Additional information: Requested value ‘SomeInputValue’ was not found.”, which is moderately helpful.

In .NET 4, we finally got a strictly-typed Enum.TryParse:

string input = "SomeInputValue";
EnumCustomerType enumValue;
if(Enum.TryParse<EnumCustomerType>(input, out enumValue))
{
//...
}

This is much better, but can still be a little clunky.  You have to play the temporary variable game that all .NET TryParse methods require, and if you want to actually throw a meaningful exception you are back to calling Enum.Parse or writing the check logic yourself.
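
For reference, here’s roughly the kind of check logic you end up writing by hand every time you want a useful error message (just a sketch, not the actual library code):

static EnumCustomerType ParseCustomerType(string input)
{
    EnumCustomerType value;
    if (!Enum.TryParse<EnumCustomerType>(input, out value))
    {
        // Tell the caller exactly what we were trying to do when it failed
        throw new ArgumentException(string.Format(
            "Unrecognized value for EnumCustomerType: '{0}'", input));
    }
    return value;
}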

So let’s try to simplify this a little with the EnumHelper class in MMDB.Shared.  Our basic Parse is nice and concise:

string input = "SomeInputValue";
var enumValue = EnumHelper.Parse<EnumCustomerType>(input);

And if this is not a valid value, it will throw an exception with the message “Additional information: Unrecognized value for EnumCustomerType: ‘SomeInputValue’”.  I always find it a little more helpful to have exceptions tell you exactly what they were trying to do when they failed, not just that they failed.

Also, it has a strictly typed TryParse method that does not require any temp values.  It returns a nullable enum, with which you can do whatever you like:

string input = "SomeInputValue";
EnumCustomerType? enumValue = 
             EnumHelper.TryParse<EnumCustomerType>(input);
return enumValue.GetValueOrDefault
             (EnumCustomerType.SomeOtherInputValue);

Enum Display Values

The next problem we run into is assigning display values to enums, and more importantly, easily retrieving them.

OK, so this has been done a few times.  In most cases, people have used the System.ComponentModel.DescriptionAttribute class to assign display values to enums:

public enum EnumCustomerType 
{
    [Description("This is some input value")]
    SomeInputValue,
    [Description("This is some other input value")]
    SomeOtherInputValue
}

Also the MVC folks introduced a DisplayAttribute in System.ComponentModel.DataAnnotations:

public enum EnumCustomerType 
{
    [Display(Name = "This is some input value")]
    SomeInputValue,
    [Display(Name = "This is some other input value")]
    SomeOtherInputValue
}

So lots of options there, with lots of dependencies.  Anyhow, to keep it minimal, we created a simple class specifically for enum display values:

public enum EnumCustomerType 
{
	[EnumDisplayValue("This is some input value")]
	SomeInputValue,
	[EnumDisplayValue("This is some other input value")]
	SomeOtherInputValue
}

What’s fun about this is that you can now get your display value in a single line:

public enum EnumCustomerType 
{
	[EnumDisplayValue("This is some input value")]
	SomeInputValue,
	[EnumDisplayValue("This is some other input value")]
	SomeOtherInputValue
}
//...
//Returns "This is some input value"
string displayValue = EnumHelper.GetDisplayValue
                       (EnumCustomerType.SomeInputValue);

And if you don’t have an EnumDisplayValue set, it will default to the stringified version of the enum value:

public enum EnumCustomerType 
{
	[EnumDisplayValue("This is some input value")]
	SomeInputValue,
	[EnumDisplayValue("This is some other input value")]
	SomeOtherInputValue,
	SomeThirdValue
}
//...
//Returns "SomeThirdValue"
string displayValue = EnumHelper.GetDisplayValue
                        (EnumCustomerType.SomeThirdValue);

Databind Enums

Next, let’s do something a little more useful with the enums.  Let’s start databinding them.

Normally if you want to display a dropdown or a radio button list or other type of list control to select an enum, you either have to manually create all of the entries (and then make sure they stay in sync with the enum definition), or write a whole bunch of plumbing code to dynamically generate the list of enum values and bind them to the control.  And if you want to include display values for your enums, it’s even worse, because you have to map the enum values and display values into an object that exposes those fields to the DataTextField and DataValueField of the list control.  Meh.
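
For the curious, that plumbing usually ends up looking something like this (a sketch of what you’d write by hand, with a System.Linq using, not necessarily what the library does internally):

// Project each enum value into something with bindable Text/Value fields
var items = Enum.GetValues(typeof(EnumCustomerType))
                .Cast<EnumCustomerType>()
                .Select(value => new
                {
                    Value = (int)value,
                    Text = EnumHelper.GetDisplayValue(value)
                })
                .ToList();

dropDownList.DataValueField = "Value";
dropDownList.DataTextField = "Text";
dropDownList.DataSource = items;
dropDownList.DataBind();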

Or, you can just do this:

EnumHelper.DataBind(dropDownList, typeof(EnumCustomerType));

This will retrieve the enum values and their display values, put them into a list, and bind it to the control.

I know what you’re going to say.  “But I want to hide the zero enum value because that is really the ‘None’ value”:

EnumHelper.DataBind(comboBox, typeof(EnumCustomerType), 
             EnumHelper.EnumDropdownBindingType.RemoveFirstRecord);

Or, “I want to display the first zero record in my combobox, but I want to clear it out because it’s not a real valid selection, it’s just the default”:

EnumHelper.DataBind(comboBox, typeof(EnumCustomerType), 
             EnumHelper.EnumDropdownBindingType.ClearFirstRecord);

Or even, “yeah I want that blank first record in my combobox, but I don’t have a zero/none value defined in my enum values, so I want to add that when databinding”:

EnumHelper.DataBind(comboBox, typeof(EnumCustomerType), 
             EnumHelper.EnumDropdownBindingType.AddEmptyFirstRecord);

Conclusion

So again, nothing groundbreaking here.  Hey, it may not even be the best way to handle some of this stuff.  But it works great for us, removes a lot of the unnecessary noise from our code, and makes it a lot easier to read our code and get to the intent of what is being done without a whole lot of jibber-jabber about the how.

Hopefully this can make one part of your coding a lot easier.  Any feedback or suggestions welcome, or better yet, submit a patch :)

So a while ago I wrote about my adventures in SQL Azure backups.  At the time, there was very little offered by either Microsoft or tool vendors to provide an easy solution for scheduling SQL Azure backups.  So in the end, I cobbled together a solution involving batch files, Task Scheduler, and most importantly Red Gate SQL Compare and SQL Data Compare.

But much has changed in the past year.  Red Gate released their new SQL Azure Backup product, whose functionality looks freakishly similar to other less polished solutions that people had written about.  The cool part is that while the SQL Compare solution I proposed originally required a purchased copy of the Red Gate SQL tools, Red Gate has been nice enough to release their Azure backup tool for free.

Also, Microsoft has released a CTP version of their SQL Import/Export Service.  This service allows you to back up and restore your database using Azure Blob storage instead of having to download it to a local database server, which is actually what most of us really wanted in the first place anyway.  The latest versions of Red Gate’s Azure Backup also support this functionality, which gives you a lot of options.

So just to close the loop on this, here’s the updated batch script file we’re using for SportsCommander now for doing regular production backups of our database.  We’re opting to use the Import/Export functionality as our primary backup strategy:

SET SqlAzureServerName=[censored]
SET SqlAzureUserName=[censored]
SET SqlAzurePassword=[censored]
SET SqlAzureDatabaseName=[censored]

SET AzureStorageAccount=[censored]
SET AzureStorageKey=[censored]
SET AzureStorageContainer=[censored]

for /f "tokens=1-4 delims=/- " %%a in ('date /t') do set XDate=%%d_%%b_%%c
for /f "tokens=1-2 delims=: " %%a in ('time /t') do set XTime=%%a_%%b

SET BackupName=SportsCommander_Backup_%XDate%_%XTime%


C:\SQLBackups\RedGate.SQLAzureBackupCommandLine.exe /AzureServer:%SqlAzureServerName% /AzureDatabase:%SqlAzureDatabaseName% /AzureUserName:%SqlAzureUserName% /AzurePassword:%SqlAzurePassword% /CreateCopy /StorageAccount:%AzureStorageAccount% /AccessKey:%AzureStorageKey% /Container:%AzureStorageContainer% /Filename:%BackupName%.bacpac

 

A few notes:

- This runs the same Import/Export functionality you can get through the Azure portal.  If you have any problems with the parameters here, you can experiment in the Azure portal

- The AzureStorageAccount parameter is the account name of your storage account.  So if your blob storage URL is http://myawesomeapp.blob.core.windows.net, you would want to use “myawesomeapp” in this parameter

- The /CreateCopy parameter will use SQL Azure’s CREATE DATABASE AS COPY OF method to create a snapshot first and then back that up, instead of just backing up the live database.  This takes a little extra time, but it is important to ensure that you are getting a transactionally consistent backup.

 

Of course, if you still want to copy down a local instance of the database like we did in the previous post, you can easily do that too:

SET SqlAzureServerName=[censored]
SET SqlAzureUserName=[censored]
SET SqlAzurePassword=[censored]
SET SqlAzureDatabaseName=[censored]

SET LocalSqlServerName=[censored]
SET LocalSqlUserName=[censored]
SET LocalSqlPassword=[censored]

for /f "tokens=1-4 delims=/- " %%a in (‘date /t’) do set XDate=%%d_%%b_%%c
for /f "tokens=1-2 delims=: " %%a in (‘time /t’) do set XTime=%%a_%%b

SET BackupName=SportsCommander_Backup_%XDate%_%XTime%

C:\SQLBackups\RedGate.SQLAzureBackupCommandLine.exe /AzureServer:%SqlAzureServerName% /AzureDatabase:%SqlAzureDatabaseName% /AzureUserName:%SqlAzureUserName% /AzurePassword:%SqlAzurePassword% /CreateCopy /LocalServer:%LocalSqlServerName% /LocalDatabase:%BackupName% /LocalUserName:%LocalSqlUserName% /LocalPassword:%LocalSqlPassword% /DropLocal

 

Good luck.

How to use SourceGear DiffMerge in SourceSafe, TFS, and SVN

What is DiffMerge

DiffMerge is yet-another-diff-and-merge-tool from the fine folks at SourceGear.  It’s awesome.  It’s head and shoulders above whatever junky diff tool they provided with your source control platform, unless of course you’re already using Vault.  Eric Sink, the founder of SourceGear, wrote about it here.  By the way, Eric’s blog is easily one of the most valuable I’ve read, and while it doesn’t get much love these days, there’s a lot of great stuff there, and it’s even worth going back and reading from the beginning if you haven’t seen it.

Are there better diff tools out there?  Sure, there probably are.  I’m sure you have your favorite.  If you’re using something already that works for you, great.  DiffMerge is just yet another great option to consider when you’re getting started.

You sound like a sleazy used car salesman

Yeah, I probably do, but I don’t work for SourceGear and have no financial interest in their products.  I’ve just been a very happy user of Vault and DiffMerge for years.  And it if increases Vault adoption, both among development shops and development tool vendors, it will make my life easier.

But when I go to work on long-term contracts for large clients, they already have source control in place that they want me to use, which is OK, but when I need to do some merging, it starts getting painful.  I want it to tell me not just that a line changed, but exactly what in that line changed.  I want it to actually be able to tell me when the only change is whitespace.  I want it to offer me a clean and intuitive interface.  Crazy, I know.

Not a huge problem because DiffMerge is free, and it can plug into just about any source control system, replacing the existing settings.  However those settings can be tricky to figure out, so I figured I’d put together a cheat sheet of how to set it up for various platforms.

Adding DiffMerge To SourceSafe

Let’s start off with those in greatest need, ye old SourceSafe users.  First and foremost, I’m sorry.  We all feel bad that you are in this position.  SourceSafe was great for what it was, 15 years ago when file shares were considered a reliable data interchange format, but nobody should have to suffer through SourceSafe in this day and age.  But don’t worry, adding in DiffMerge can add just enough pretty flowers to your dung heap of a source control system to make it bearable.  Just like getting 1 hour of yard time when you’ve been in the hole for a week, it gives you something to look forward to.

Anywho, let’s get started.  First, whip out your SourceSafe explorer:

DiffMerge_VSS_1

Here’s what we get for a standard VSS diff:

DiffMerge_VSS_2

Ugh.  So go to Tools->Options and go to the Custom Editors tab.  From there, add the following operations:

Operation: File Difference

File Extension: .*

Command:  [DiffMergePath]\diffmerge.exe --title1="original version" --title2="modified version" %1 %2

Operation: File Merge

File Extension: .*

Command: [DiffMergePath]\diffmerge.exe --title1="source branch" --title2="base version" --title3="destination branch" --result=%4 %1 %3 %2

Now here’s our diff, much less painful:

DiffMerge_VSS_3

But merging is where it really shines:

DiffMerge_VSS_4

Thanks to Paul Roub from Source Gear for the details: http://blog.roub.net/2007/11/diffmerge_in_vss.html

Adding DiffMerge To Subversion

Obviously SVN is worlds better than VSS, but some of the standard tools distributed with TortoiseSVN are a little lacking.  You might say “you get what you paid for,” but you’d only say that if you wanted to tick off a lot of smart and helpful people.

So let’s take a look at a standard diff in SVN:

DiffMerge_SVN_1

Oof.  I’ve used SVN on and off for years, and I still don’t understand what is going on here.

So let’s get this a little mo’ better.  Right-click your folder, and select TortoiseSVN->Settings.  Go to the External Programs->Diff Viewer section, and enter this External tool:

 [DiffMergePath]\DiffMerge.exe /t1=Mine /t2=Original %mine %base

DiffMerge_SVN_2

Switch over to the Merge Tool screen, and enter this External Tool:

[DiffMergePath]\DiffMerge.exe /t1=Mine /t2=Base /t3=Theirs /r=%merged %mine %base %theirs

DiffMerge_SVN_3

And now our diffs look a little more familiar:

DiffMerge_SVN_4

Thanks Mark Porter for the details: http://www.manik-software.co.uk/blog/post/TortoiseSVN-and-DiffMerge.aspx

Adding DiffMerge To Team Foundation Server

For years I dreamed of using TFS.  I hoped that someday I would work at a company successful and cool enough to invest the money in a TFS solution.  And then I actually got it, and uh, it seems like a nice enough fella, but its tendencies towards megalomania have really had some negative consequences on the end-user experience.

Given that, after decades of technological advancement in source control, the TFS diff tool is pretty much just the same ugliness as SourceSafe:

DiffMerge_TFS_1

Get your spelunking helmet on, and we’ll go digging for the settings in TFS to change this.

  • Open up Visual Studio and select Tools->Options
  • Expand the Source Control group, and select Visual Studio Team Foundation Server
  • Click the Configure User Tools button

DiffMerge_TFS_2

Enter the following tool configurations:

Operation: Compare

Extension: .*

Command: [DiffMergePath]\DiffMerge.exe

Arguments: /title1=%6 /title2=%7 %1 %2

Operation: Merge

Extension: .*

Command: [DiffMergePath]\DiffMerge.exe

Arguments: /title1=%6 /title2=%7 /title3=%8 /result=%4 %1 %2 %3 (Corrected, thanks to Rune in the comments!)

Thanks to James Manning for the details: http://blogs.msdn.com/b/jmanning/archive/2006/02/20/diff-merge-configuration-in-team-foundation-common-command-and-argument-values.aspx

The End

So that’s all it takes to make your source control life a little bit easier.  Even if you don’t prefer DiffMerge, I’d suggest you find one you do like, because the built-in tools are usually pretty bad.  Diffing and merging is hard enough as it is, don’t waste precious brain cells on substandard tools.

The Error

So if you are working in SQL Azure, you’ve probably learned the hard way that you can’t just script out your DDL changes in your local SQL Management Studio and run them against your Azure database.  Management Studio throws in a whole bunch of extra fancy-pants DBA-y stuff that SQL Azure just doesn’t let you use.

For example, say I throw together a simple table in my local database.  Depending on your SQL source control approach (you have one, right?), you might script it out in SQL Management Studio and get something like this:

CREATE TABLE MyWonderfulSampleAzureTable (
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [GoEagles] [varchar](50) NOT NULL,
    [BeatThemGiants] [varchar](50) NOT NULL,
    [TheCowboysAreAwful] [bit] NOT NULL,
    [AndyReidForPresident] [varchar](50) NULL,
    CONSTRAINT [PK_MyWonderfulSampleAzureTable] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

 

Pretty enough, no?  Sure, it’s full of a lot of gibberish you don’t really care about, like PAD_INDEX=OFF, but hey if it runs, what’s the problem?

Now, let’s run that against our SQL Azure database:

Msg 40517, Level 16, State 1, Line 10
Keyword or statement option ‘pad_index’ is not supported in this version of SQL Server.

 

Oops.  This is a pain to fix when you’re deploying a single script.  However, when you’re running a whole development cycle’s worth of these against your production database at 3 AM and it chokes on one of these scripts, this is absolutely brutal.

Censorship Brings Peace

So why does this happen?  Why can’t SQL Azure handle these types of cool features?  Mostly because they just don’t want to.  Sure, some of the features missing from SQL Azure are because they just haven’t been implemented yet, but some of them are deliberately disabled to prevent unmitigated chaos.

While you may have a DBA managing your on-premise database who is working in your best interest (or at least your company’s interest), SQL Azure has a much bigger problem to solve.  They need to provide a shared SQL environment that does not let any one consumer hose up everyone else.  If you’ve ever hosted a SQL database in a high-traffic shared hosting environment, you’ve probably felt the pain of some joker going cuckoo-bananas with the database resources.

In other words, what you do in the privacy of your own home is all well and good, but if you are going to go play bocce in the public park, you’re certainly going to have to watch your language and act like a civilized person.

And a lot of these features you don’t really have to care about anyway.  No doubt, you are really really smart and know when your indexes should be recompiled, but the reality is that much of the time whatever algorithm the folks on the SQL team came up with is going to be a little bit smarter than you, Mr. SmartyPants.

Anyhow, for your edification, here’s a wealth of information about the stuff you can’t do.

The Manual Workaround

So how do we get our script to run?  My general rule of thumb is to rip out all of the WITH stuff and all of the file group references:

CREATE TABLE MyWonderfulSampleAzureTable (
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [GoEagles] [varchar](50) NOT NULL,
    [BeatThemGiants] [varchar](50) NOT NULL,
    [TheCowboysAreAwful] [bit] NOT NULL,
    [AndyReidForPresident] [varchar](50) NULL,
    CONSTRAINT [PK_MyWonderfulSampleAzureTable] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    )
)
GO

See, I had a general rule of thumb for this because I encountered it a lot.  On just about every DDL script I had to generate.  And I missed a lot of them.  Quite the pain in the neck.

 

The Much Better, Not-So-Manual Workaround

So I was at a Philly.NET user group meeting last night, and Bill Emmert from Microsoft was walking through the options for migrating SQL Server databases, and he showed us this setting that I wish I had known about a year ago:

SQLAzureSettings

If you change this to SQL Azure Database, it will change your environment settings to always create SQL scripts that are compatible with Azure.  No more manual script trimming!  Unless, of course, you are into that kind of thing, in which case you really wasted the last 10 minutes reading this.

Good Luck.

Why?

Last year we launched a new version of SportsCommander.com, which offered volleyball organizations across the country the ability to promote their tournaments and accept registrations for a negligible fee.  Having grown out of our previous hosting company, we tried hosting the platform on Windows Azure, and for the most part it’s been great.  Also, the price was right.

We are also hosting our data in SQL Azure, which again for the most part has been great.  It has performed well enough for our needs, and it abstracts away a lot of the IT/DBA maintenance issues that we would really rather not worry about.

Of course, nothing is perfect.  We’ve had a few snags with Azure, all of which we were able to work around, but it was a headache. 

One of the biggest issues for us was the ability to run regular backups of our data, for both disaster recovery and testing purposes.  SQL Azure does a great job of abstracting away the maintenance details, but one of the things you lose is direct access to the SQL backup and restore functionality.  This was almost a deal-breaker for us.

Microsoft’s response to this issue is that they handle all of the backups and restores for you, so that if something went wrong with the data center, they would handle getting everything up and running again.  Obviously this only solves part of the problem, because many companies want to have their own archive copies of their databases, and personally I think doing a backup before a code deployment should be an absolute requirement.  Their answer has been “if you need your own backups, you need to build your own solution.”

Microsoft is aware of this need, and it has been the top-voted issue on their Azure UserVoice site for a while. 

In poking around the interwebs, I saw some general discussion of how to work around this, but very little concrete detail.  After hacking around for a while, I came up with a solution that has worked serviceably well for us, so I figured I’d share it with y’all.

 

What?

In order to address these concerns, Microsoft introduced the ability to copy a database in SQL Azure.  So, as a limited backup option, you can create a quick copy of your database before a deployment, and quickly restore it back if something fails.  However, this does not allow for archiving or exporting the data from SQL Azure, so all of the data is still trapped in the Azure universe.
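
The copy itself boils down to a single T-SQL statement run against the master database; the full script we ended up with is further down, but the heart of it looks like this:

-- Kicks off a server-side copy; the copy keeps running in the background after this returns
CREATE DATABASE SportsCommanderBackup AS COPY OF SportsCommander
GO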

Apparently another option is SSIS.  Since you can connect to Azure through a standard SQL connection, theoretically you could export the data this way.  Now I am no SSIS ninja, so I was just never able to get this working with Azure, and I was spending far too much time on something that I shouldn’t need to be spending much time on.

I’ve heard rumblings that Microsoft’s Sync Framework could address the issue, but, uh, see the previous point.  Who’s got time for that?

So of course, Red Gate to the rescue.  Generally speaking, their SQL Compare and SQL Data Compare products solve this type of problem beautifully; they are excellent at copying SQL content from one server to another to keep them in sync.  The latest formal release of their products (v8.5 as of this writing) does not support SQL Azure.  However, they do have beta versions of their new v9.0 products, which do support SQL Azure.  Right now you can get time-locked beta versions for free, so get yourself over to http://www.red-gate.com/Azure and see if they are still available.  If you’re reading this after the beta program has expired, just pony up the cash and buy them, they are beyond worth it.

 

How?

OK, so how do we set this all up?  Basically, we create a scheduled task that creates a copy of the database on SQL Azure, downloads the copy to a local SQL Server database, and then creates a zipped backup of that database.

First, you need a local SQL Server database server to copy the data down to.  Then go install the Azure-enabled versions of SQL Compare and SQL Data Compare on it.

Also, go get a copy of 7-Zip, if you have any interest in zipping the backups.

The scheduled task will execute a batch file.  Here’s that batch file:

SET SqlAzureServerName=[censored]
SET SqlAzureUserName=[censored]
SET SqlAzurePassword=[censored]

SET LocalSqlServerName=[censored]
SET LocalSqlUserName=[censored]
SET LocalSqlPassword=[censored]

echo Creating backup on Azure server
sqlcmd -U %SqlAzureUserName%@%SqlAzureServerName% -P %SqlAzurePassword% -S %SqlAzureServerName% -d master -i C:\SQLBackups\DropAndRecreateAzureDatabase.sql

echo Backup on Azure server complete

echo Create local database SportsCommander_NightlyBackup
sqlcmd -U %LocalSqlUserName% -P %LocalSqlPassword% -S %LocalSqlServerName% -d master -i C:\SQLBackups\DropAndRecreateLocalDatabase.sql

echo Synchronizing schema
"C:\Program Files (x86)\Red Gate\SQL Compare 9\SQLCompare.exe" /s1:%SqlAzureServerName% /db1:SportsCommanderBackup /u1:%SqlAzureUserName% /p1:%SqlAzurePassword% /s2:%LocalSqlServerName% /db2:SportsCommander_NightlyBackup /u2:%LocalSqlUserName% /p2:%LocalSqlPassword% /sync

echo Synchronizing data
"C:\Program Files (x86)\Red Gate\SQL Data Compare 9\SQLDataCompare.exe" /s1:%SqlAzureServerName% /db1:SportsCommanderBackup /u1:%SqlAzureUserName% /p1:%SqlAzurePassword% /s2:%LocalSqlServerName% /db2:SportsCommander_NightlyBackup /u2:%LocalSqlUserName% /p2:%LocalSqlPassword% /sync

echo Backup Local Database
for /f "tokens=1-4 delims=/- " %%a in (‘date /t’) do set XDate=%%d_%%b_%%c
for /f "tokens=1-2 delims=: " %%a in (‘time /t’) do set XTime=%%a_%%b
SET BackupName=SportsCommander_Backup_%XDate%_%XTime%
sqlcmd -U %LocalSqlUserName% -P %LocalSqlPassword% -S %LocalSqlServerName% -d master -Q "BACKUP DATABASE SportsCommander_NightlyBackup TO DISK = ‘C:\SQLBackups\%BackupName%.bak’"

"C:\Program Files\7-Zip\7z.exe" a "C:\SQLBackups\%BackupName%.zip" "C:\SQLBackups\%BackupName%.bak"

del /F /Q  "C:\SQLBackups\%BackupName%.bak"

echo Anonymize Database For Test Usage
sqlcmd -U %LocalSqlUserName% -P %LocalSqlPassword% -S %LocalSqlServerName% -d SportsCommander_NightlyBackup -i "C:\SQLBackups\AnonymizeDatabase.sql"

 

The first thing this does is run a SQL script against the SQL Azure server (DropAndRecreateAzureDatabase.sql).  This script will create a backup copy of the database on Azure, using their new copy-database functionality.  Here’s that script:

DROP DATABASE SportsCommanderBackup
GO
CREATE DATABASE SportsCommanderBackup AS COPY OF SportsCommander
GO
DECLARE @intSanityCheck INT
SET @intSanityCheck = 0
WHILE(@intSanityCheck < 100 AND (SELECT state_desc FROM sys.databases WHERE name='SportsCommanderBackup') = 'COPYING')
BEGIN
    -- wait for 10 seconds
    WAITFOR DELAY '00:00:10'
    SET @intSanityCheck = @intSanityCheck + 1
END
GO
DECLARE @vchState VARCHAR(200)
SET @vchState = (SELECT state_desc FROM sys.databases WHERE name='SportsCommanderBackup')
IF(@vchState != 'ONLINE')
BEGIN
    DECLARE @vchError VARCHAR(200)
    SET @vchError = 'Failed to copy database, state = ''' + @vchState + ''''
    RAISERROR (@vchError, 16, 1)
END
GO

 

A few notes here:

  • We are always overwriting the last copy of the backup.  This is not an archive; that will be on the local server.  Instead, this is always the latest copy.  Besides, extra Azure databases are expensive.
  • For some reason SQL Azure won’t let you run a DROP DATABASE command in a batch with other commands, even though SQL 2008 allows it.  As a result, we can’t wrap the DROP DATABASE in an “IF(EXISTS(“ clause.  So, we need to always just drop the database, which means you’ll either have to create an initial copy of the database by hand or skip the drop the first time you run the script.
  • The CREATE DATABASE … AS COPY OF will return almost immediately, and the database will be created, but it is not done copying.  That is actually still running in the background, and it could take a minute or two to complete depending on the size of the database.  Because of that, we sit in a loop and wait for the copy to finish before continuing.  We put a sanity check in there to throw an exception just in case it runs forever.

 

Once that is complete, we create a local database and copy the Azure database down into that.  There are several ways to do this, but we chose to keep a single most-recent version on the server, and then zipped backups as an archive.  This gives a good balance of being able to look at and test against the most recent data, and having access to archived history if we really need it, while using up as little disk space as possible.

In order to create the local database, we run a very similar script (DropAndRecreateLocalDatabase.sql):

IF(EXISTS(SELECT * FROM sys.databases WHERE Name='SportsCommander_NightlyBackup'))
BEGIN
    DROP DATABASE SportsCommander_NightlyBackup
END
CREATE DATABASE SportsCommander_NightlyBackup

 

In this case, we actually can wrap the DROP DATABASE command in an “IF(EXISTS”, which makes me feel all warm and fuzzy.

After that, it’s a matter of calling the SQL Compare command line to copy the schema down to the new database, and then calling SQL Data Compare to copy the data down into the schema.  At this point we have a complete copy of the database exported from SQL Azure.

As some general maintenance, we then call sqlcmd to back up the database to a time-stamped file on the drive, and then call 7-Zip to compress it.  You might want to consider dumping this out to a DropBox folder, and boom-goes-the-dynamite, you’ve got some seriously backed-up databii.

Lastly, we run an AnonymizeDatabase.sql script to clear out and reset all of the email addresses, so that we can use the database in a test environment without fear of accidentally sending bogus test emails out to our users, which I’ve done before and it never reflected well on us.
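
What that anonymize script contains will depend entirely on your own schema, but a minimal sketch is just an UPDATE that points every address at a safe test domain.  The table and column names below (Users, UserID, Email) are placeholders, not the actual SportsCommander schema:

-- AnonymizeDatabase.sql (sketch): rewrite every email address to a unique test address
-- so a test environment can never mail real users.  Table/column names are placeholders.
UPDATE Users
SET Email = 'testuser' + CAST(UserID AS VARCHAR(20)) + '@example.com'
WHERE Email IS NOT NULL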

Run that batch file anytime you want to get a backup, or create a scheduled task in Windows to run it every night.
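
If you go the scheduled task route, creating the task from the command line is a one-liner.  This is just a sketch, and the task name, batch file path, and start time are made up, so adjust them to your own setup:

REM Creates a nightly 2:00 AM task that runs the backup batch file (name and path are examples)
schtasks /create /tn "SqlAzureNightlyBackup" /tr "C:\SQLBackups\BackupSqlAzure.bat" /sc daily /st 02:00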

Anyhoo, that’s about it.  It’s quick, it’s dirty, but it worked for us in a pinch.  Microsoft is just getting rolling on Azure and adding more stuff every month, so I’m sure they will provide a more elegant solution sooner or later, but this will get us by for now.

Have you had a similar experience?  How are you handling SQL Azure backups?

So you have a million passwords, right?  Every site out there requires you to enter a username and password to buy a widget that plugs into your doohickey for whatever silly little hobby you have that is supposed to distract you from writing code in your free time so you don’t feel like a complete loser. 

Or better yet, you need a username and password to buy flowers for your wife.  Or you need a username and password if you’re buying a $3.75 Cozy Coupe Wheel With Capnut, plus $7.95 shipping, when all you really need is the damn capnut anyway, because the first one got slightly bent during assembly, so every now and then the wheel pops off and your horrified toddler is trapped in a plastic car wreck.  Or something like that.

And of course, we all know that you shouldn’t be using the same password everywhere.  At the very least, you should be using a different password for everything important and sensitive, like email and banking and online gambling.

Some say Open ID may be the answer.  It certainly is gaining popularity with many sites in the development community.   But the real test will be if it ever catches on with people who have real lives, and really couldn’t care less about your cool new shared authentication mechanism, and don’t really know or care that they shouldn’t reuse their favorite celebrity’s nickname as their password everywhere.

But even then, even if the world were to become thusly enlightened and a large number of the sites out there started using Open ID as their core authentication, there would still be countless little sites out there written by internal IT departments who have never even heard of Open ID and certainly aren’t going to trust some new-fangled “Web 2.0” technology, when they’ve spent the last 10 years working their way up to “Enterprise Architect” of their little fiefdom, and they are certainly smart enough to build a completely secure authentication system from scratch that is going to be so much better than anything anyone has ever seen, thank you very much.

So, yeah, you’re probably still going to be stuck with a million passwords.  Or maybe just half a million, but it’s still the same problem.  If someone dumps a half-ton of manure on your front lawn, are you really relieved that it wasn’t a full ton?

Password Safe to the rescue

I’ve been using Password Safe for years, and I definitely recommend it.  It’s very easy to add new entries, to quickly generate secure passwords, and to attach additional notes (like the answers to the stupid security questions that don’t have clear and definitive answers). 

Of course, it doesn’t have to be Password Safe, there are plenty of other good and free products out there, but I’m not that familiar with them so I’m going to assume that they make your computer burst into flames.

Another benefit of Password Safe, besides the lack of flames, is that the database is very portable, so you can easily copy it to another computer.  However, what about keeping the databases synchronized across multiple computers, you ask?

DropBox to the rescue of Password Safe’s … rescue, or something

Since the Password Safe database is just a file, it’s actually pretty easy to keep them synchronized across a few separate machines.  As a pathological-job-hopper-turned-consultant, I’ve usually had some new machine for some reason every six months or so, and I end up with a LOT of copies of my password database floating around.  But after a few years of headaches and manually copying/merging password databases, services like DropBox came along and solved the problem for me. 

Since DropBox treats a directory on your machine as a share and automatically syncs that directory across all of your machines through the DropBox cloud (+1 Google buzzword, yay), all you have to do is keep your working copy of the password database on your DropBox share, and voila, you always have your up-to-date passwords at your fingertips.

Well, almost.  Of course, there is one gotcha.  When you have Password Safe open in read/write mode, it locks the file (more specifically, it locks the .plk file).  This will actually block the DropBox sync process and prevent it from synchronizing not just the database file, but also anything else on the share.  If you’re like me, you very rarely make changes to your password list, so I just go into Password Safe and select the option to always open database files as read-only by default, and everyone is happy.

Good luck.