
MSBuild deployment sample for web applications

January 15th, 2010

Check out the code sample

Download the Deployment Code Sample from github and choose to download the zip archive of the source.

Overview of the code



        [these need to be installed on the machines]

Explanation/Overview of files and folders


This holds the major, minor and revision number for the product. It is effectively the marketing name of the product and is changed rarely. This “version”, combined with the revision number from source control, is what is used to version the DLLs.
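As a sketch of how that combination can work in msbuild (the target and property names here are my own illustrations, not necessarily the sample’s actual script), the version file can be read at build time and joined with a revision number passed in from source control:

```xml
<!-- Hypothetical sketch: read the product version from the VERSION file and
     combine it with a Revision property passed in from source control. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="ShowVersion">
  <PropertyGroup>
    <!-- passed in, e.g. msbuild version.proj /p:Revision=4567 -->
    <Revision Condition="'$(Revision)' == ''">0</Revision>
  </PropertyGroup>
  <Target Name="ShowVersion">
    <!-- VERSION contains e.g. "1.2.0" on a single line -->
    <ReadLinesFromFile File="VERSION">
      <Output TaskParameter="Lines" PropertyName="ProductVersion" />
    </ReadLinesFromFile>
    <Message Text="Versioning DLLs as $(ProductVersion).$(Revision)" />
  </Target>
</Project>
```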

build.bat and build.proj

Build.bat is merely a GUI-based, double-click runner for invoking msbuild on build.proj. Build.proj is a manifest file that links through to the real worker tasks in the scripts/ folder.
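To make that concrete, here is a minimal sketch of what such a manifest can look like (the file and target names are illustrative, not necessarily those in the sample):

```xml
<!-- build.proj (sketch): no real work happens here; it just sets a default
     Help target and links through to the worker .tasks files in scripts/. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Help">
  <Import Project="scripts\compile.tasks" />
  <Import Project="scripts\deploy.tasks" />
  <Target Name="Help">
    <Message Text="Usage: msbuild build.proj /t:Compile|Test|Package|Deploy" />
  </Target>
</Project>
```

build.bat itself then needs only to locate msbuild.exe and invoke it, for example `%WINDIR%\Microsoft.NET\Framework\v3.5\MSBuild.exe build.proj %*`.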


As you would expect, this holds all the dependent libraries. I always bundle up my script libraries to ensure they are available in every environment. This project demonstrates the use of three msbuild libraries. Each has its strengths, so I just use them as needed and don’t get too hung up about which one. I often also bundle the Visual Studio teambuild targets because this allows me to avoid installing the correct version of Visual Studio on the build server (this problem has got better over time and I haven’t reviewed it in a while).


This is the focus of the sample. I keep my build scripts in their own folder so as not to clutter up the root, and I keep them out of src/ so that I get better reuse and a cleaner folder. I also keep my initial database creation scripts here. These are the first-time creation scripts, rather than the migration scripts, which live in src/Infrastructure/Database/


Here’s the home of the application code. A couple of conventions: the demo uses domain-driven-design naming conventions (UI, Infrastructure) and a java one too (src cf Source). Importantly, database work such as schema changes and data transformations is tightly coupled with the application code; hence it is part of the application code base. I find this type of versioning far simpler. There is a good blog entry, which I can’t find, that compares the “migration approach” with the database-compare approach. Both work compared with the free-for-all I usually see – I just prefer migrations and have never found their simplicity to cause problems. Plus they allow me to go up and down in migrations. The UI code is there just for illustrative purposes so that there is something to deploy.


This holds all the binary packages that need to be installed on the target machine to make the system work but can’t be (or shouldn’t be) bundled. If my packaging strategy is “download-and-deploy”, my source code strategy is “checkout-and-compile”. So I want as much as possible available with the source so that in two years’ time I don’t have to go looking. It is surprising how this pushes some people’s buttons.

MSBuild files: proj vs tasks vs xml

My assumption is that you know how msbuild works and the structure of its xml files. I use three file extensions to mark out the different types, yet they all follow the msbuild xml schema definition.


proj – a file to be run by msbuild, hopefully the only one in the folder so that you don’t have to specify it


tasks – a specific set of tasks that is included/imported into a proj file, allowing for clarity of purpose and reuse


xml – a set of properties that is imported into a tasks or proj file
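Putting the three together, a hypothetical scripts/compile.tasks might pull its shared settings from a properties .xml file and expose a target for the .proj to call (all names here are illustrative):

```xml
<!-- compile.tasks (sketch): a worker file imported by build.proj.
     Shared settings come from a properties .xml file alongside it. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="properties.xml" />
  <Target Name="Compile">
    <!-- $(SrcDir) and $(Configuration) are assumed to be defined
         in properties.xml -->
    <MSBuild Projects="$(SrcDir)\Application.sln"
             Targets="Rebuild"
             Properties="Configuration=$(Configuration)" />
  </Target>
</Project>
```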

Introduction to using msbuild to manage (mvc) application lifecycles

January 15th, 2010

Introduction: what’s there to really manage?
Part 1: Code Overview
Part 2: Commandline msbuild in action
Part 3: Understanding the packaging
Part 4: Build & Test on your local machine
Part 5: First-time installs and redeployments
Parts 5-7 to come
Summary and References


Ever since … well, since ever … I have needed to manage the lifecycle of my applications. Through-the-GUI approaches quickly hit a limit of manageability, so like most others I use scripting. For mvc applications, I use msbuild. It is already part of the .Net framework and generally accepted. I say generally accepted because it is not used a lot where I work. Only some seniors in the dotnet space actually use this toolset to manage their application lifecycle. This is a pity, because many of these seniors have actually had a life before dotnet, and often before microsoft: they know about commandline tools. One of the problems for me is that this also means we don’t have emerging developers who can think through the lifecycle of their applications, let alone script its phases. So let me get this little rant out of the way: in microsoft’s GUI-based application-lifecycle-management approach, they attempt to make development accessible to a wider audience and speed up development; the wider goal is to commoditise development, which in effect increases the uptake of microsoft products (ie revenues). The effect is that developers are deskilled, and the application lifecycle is fragmented and unreliable across environments. Msbuild, for all its limitations, helps us counter these problems: the build scripts increase the transparency of the lifecycle and allow us to reproduce each step through each environment.

I personally find msbuild quirky and many of its features counter-intuitive. I would really like to add “at first” after “counter-intuitive”, but actually I still find its design that way. Output params, task items and default values for parameters are examples. But it is powerful and can do the job. If I had my way I would head down the Rake road like many others already have. It is cleaner and clearer: simply put, it’s built for purpose. Roll on IronRuby!

So this series of blogs spells out how I go about deploying web apps. There are lots of variations within each step, and I will attempt to avoid distracting you with them. My goal is to illustrate the general approach with specific examples. These examples are taken from the last four projects that I have worked on.

Application lifecycle

The application lifecycle is the set of phases, through environments, that your code goes through. By code I mean your binaries, data, configuration, documentation and setup scripts. Take a look at my provisional/simplified list below:

  • Environments: Local(Dev), Build, Test, Production (perhaps even pre-production as a clone of production)
  • Configurations: Dev, Test, Production (production might be seen as the no-name environment)
  • Phases: Checkout/Update, Build (Compile/ReBuild), Test, Package, Deploy

Looking at the list, you’ll see that I have already added synonyms, suggesting immediate difficulty in agreeing what things are called. For example, the “local” environment is often called “dev”. By local, I mean an environment that notionally allows you to pull out your LAN cable and still set everything up for development and testing. For many, dev is where the developer’s codebase lives but the data they test against lives somewhere else. For me, when pursuing a layered test-automation strategy, not having data under source control is a code smell: it works, but it isn’t ideal.

Around configuration

Another complexity is matching configurations with environments. No one needs to be preached to on the importance of configuration management through the phases and through the environments. Yet this is difficult to get agreement on in dotnet, because in practice microsoft’s out-of-the-box approaches are inadequate and people put up with it. My cynical view is that providing inadequate tools creates the demand for tools in the first place, because we are always seeking a solution to the fundamental problem. I think this is also fuelled by the pleasure people take in this pain. Take, for instance, how difficult it is to securely do a push deploy to a remote machine. By the time I have exhausted all my secure options and opened it up to Network Service, which goes against all my beliefs, I experience relief and pleasure at simply getting the job done. In fact, in some weird way, I am grateful when the job is done regardless. So configuration management is just about always the bone of contention. Below are three approaches. The first two are acceptable approaches for automation; the third is a still-dominant practice that I wish to discount.

Option one: save each environment configuration in source control
Option two: save each environment’s configuration in their environment and pass through at deployment
Option three: manually update environment configuration in an ad hoc way

I prefer option two and often concede to option one. Option one is what is currently in the sample code I will provide: there is a separate web.config for each environment. The big problem with this approach is that to automate well you really should provide a package for each environment. You do this because, as a rule, the configuration settings for one environment should never be available to other environments. Put conversely, production settings should never be known outside the production environment.

So option two is preferable because I can save the configuration of each environment in that environment. I have a range of options for how I store those settings: environment variables, values handed in on the msbuild commandline, or file-based settings (as response files or project files). I don’t use registry settings. Ideally, these environment settings are centrally managed but separate from source control.
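As a sketch of the environment-variable flavour (the property and variable names here are my own illustrations), msbuild surfaces environment variables as properties, so a deploy script can pick up per-environment settings with a safe fallback:

```xml
<!-- Sketch: settings live in the target environment, not in source control.
     DEPLOY_DB is an illustrative environment variable name. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="ShowSettings">
  <PropertyGroup>
    <!-- $(DEPLOY_DB) resolves to the environment variable when it is set -->
    <DatabaseServer>$(DEPLOY_DB)</DatabaseServer>
    <DatabaseServer Condition="'$(DatabaseServer)' == ''">localhost</DatabaseServer>
  </PropertyGroup>
  <Target Name="ShowSettings">
    <Message Text="Deploying against $(DatabaseServer)" />
  </Target>
</Project>
```

The same property can instead be handed in on the commandline (`/p:DatabaseServer=...`) or collected into a per-environment response file (`msbuild deploy.proj @test.rsp`).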

Around phases

You would think that the phases are generally agreed upon. I don’t think so. At best I get agreement on getting the source code (checkout/update), doing something to build it (compile and test in some form), and then sending it somewhere to do something (deploy). The GUI-based tools in Visual Studio do little to help the situation in the local environment. Source control is often managed in ways that leave the inexperienced developer in control. In web applications, they can then Publish, which compiles, publishes and runs a local version of the application/web server. Next, they can publish from the local environment to, say, the test environment – that is, after manually updating the local settings for test. I want better separation, and to know which phases occur in what order and in which environment. Here’s a stab at it based on the scripts I will explain:

- Environment: Local
  - Configuration: Dev
    - Phases: Checkout, Update, Compile, UnitTest, Deploy, Migrations, IntegrationTest, AcceptanceTest
- Environment: Build
  - Configuration: Test
    - Phases: Checkout, Update, Compile, UnitTest, Package, Deploy, IntegrationTest, Notify
- Environment: Test
  - Configuration: Test
    - Phases: GetPackage, Deploy, Migrations, AcceptanceTests
- Environment: Production
  - Configuration: Production
    - Phases: GetPackage, Deploy, Migrations
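The ordering of phases maps naturally onto chained msbuild targets. A sketch (the target names mirror the list above; the empty bodies stand in for the real work):

```xml
<!-- Sketch: each phase depends on the one before it, so the build server
     can simply run "msbuild build.proj /t:Package" and the Compile and
     UnitTest phases run first, automatically. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Compile" />
  <Target Name="UnitTest" DependsOnTargets="Compile" />
  <Target Name="Package" DependsOnTargets="UnitTest" />
  <Target Name="Deploy" DependsOnTargets="Package" />
  <Target Name="IntegrationTest" DependsOnTargets="Deploy" />
</Project>
```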

Let me compare and contrast the phases to see how each environment is subtly different, but together they make up a coherent application lifecycle.

  • the local environment tends to have all phases, except that in practice you don’t create packages for deployment
  • the build server, in contrast, is all about creating the package that will be moved through all of the subsequent environments (so it watches the source code repository and works out whether the package should be released and hence people notified)
  • the test environment is the place that confirms the build server made the correct decision to release the package, and is also the place for verification at the system level. This environment is also where we try out the application and its migrations against the production data. So while the package moves forward through environments, data comes backwards: it is the place where the two get a chance to meet and greet (integrate). We are looking for stability over time in this environment. It may also have acceptance tests that get run to cover the “non-functional” requirements.
  • the production environment should be no different from the prior (test) environment.

First-time cycle of phases versus subsequent cycles

What I have just described are the phases once the application is actually underway. What we forget is that each environment has to be set up in the first place. The classic is the new developer who takes days to get everything going. There are phases in this part of the cycle that can be scripted too.

- Environment: Local
  - Configuration: Dev
    - Phases: Setup Repository, Create Database with user perms, Create Aliases, Create IIS
- Environment: Build
  - Configuration: Test
    - Phases: Create Users, Create Database with user perms, Create Aliases, Create IIS
- Environment: Test
  - Configuration: Test
    - Phases: Create Users, Create Database with user perms, Create Aliases, Create IIS
- Environment: Production
  - Configuration: Production
    - Phases: Create Users, Create Database with user perms, Create Aliases, Create IIS
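These first-time phases can be scripted with Exec tasks. The sketch below is an assumption about the shape of such a script, not the sample’s actual code: the tool invocations and property names are illustrative, and appcmd requires IIS 7.

```xml
<!-- Sketch of a first-time setup: create the database and the IIS site.
     $(DbServer), $(SiteName) and $(DeployDir) are illustrative properties
     handed in or defined in a properties file. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Setup">
  <Target Name="Setup" DependsOnTargets="CreateDatabase;CreateIis" />
  <Target Name="CreateDatabase">
    <!-- the first-time creation script, including user permissions -->
    <Exec Command="sqlcmd -S $(DbServer) -i scripts\create-database.sql" />
  </Target>
  <Target Name="CreateIis">
    <Exec Command="$(windir)\system32\inetsrv\appcmd add site /name:$(SiteName) /physicalPath:$(DeployDir) /bindings:http/*:80:$(SiteName)" />
  </Target>
</Project>
```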

The goals of this approach are:

* the local environment for developers is separated from all other areas of the system
* create packages on the build server that are potentially releasable to production
* deploying a package should aim for “download-and-deploy”
* configurations and dependencies should be setup only once prior to the first deploy
* deployments to test should lag development as little as possible
* the trend of success to test is the indicator of readiness for deployment to production
* changes must flow through environments in order, every time

Understanding the msbuild sample deployment project: a lifecycle overview

  • Check out the code sample
  • Overview of the code
  • Commandline msbuild
  • Help as default target
  • Build.proj as manifest and linking helps
  • build the package
  • unzip and explore
  • understand the versioning
  • Build in dev
  • Run the migrations in dev
  • rebuild
  • setup the db
  • deploy to iis
  • back in dev
  • update migrations
  • build
  • package
  • unzip
  • Script your initial setups
  • Scripted redeployments
  • up migrations
  • up iis
  • view