GitFlow Cheat Sheet

September 8, 2016

Installing GitFlow on Windows

  1. Install cmder. Google it. Make sure you get the git for windows version.
  2. Download the following files and place them in C:\Program Files\Git\bin

a) getopt.exe from util-linux-ng package from the Binaries zip folder found at

b) libintl3.dll from libintl package from the Binaries zip folder found at

c) libiconv2.dll from libiconv2 package from the Binaries zip folder found at

  3. Start cmder, which is a better command prompt console. Change to the folder that you want to install gitflow in. It will install a gitflow folder under this.
cd c:\users\Tony

4. Clone the gitflow repository.

git clone --recursive git://

5. Change to the gitflow folder

cd gitflow
  6. Install gitflow, using the following command:
Contrib\msysgit-install.cmd "c:\Program Files\Git\"


Installing GitFlow in a repository

  1. Create your project folder: mkdir c:\demos\flow
  2. Change directory to the project folder: cd c:\demos\flow
  3. Initialise an empty Git repository: git init
  4. Initialise the repository for GitFlow: git flow init

Choose all the defaults.

The prompt should change to show that you are in the c:\demos\flow (develop) branch.

  5. Checkout the master branch
git checkout master
  6. Make sure you have set up a repository on GitHub. On the GitHub repository page, click on the Clone or download button and then click on the Use SSH link in the top right hand corner. Copy the link. Then execute the git remote command to set up the origin in git:
git remote add origin
  7. Now push the master branch to origin, to establish the GitHub connection
git push -u origin master
  8. Change back to the develop branch
git checkout develop
  9. Now push the develop branch
git push origin develop
  10. If you go back to the repository page, you should now be able to select the two branches from the Branch drop-down.


Creating a Feature Branch

  1. Make sure you have cloned the repository into a destination directory
git clone .

Note the period (.) which is used to force it to be installed in the current directory, not a child of the current directory.

It will put you in the (master) branch

  2. Change to the develop branch
git checkout develop
  3. Initialise gitflow in the new folder if it hasn’t been done already
git flow init

and select all the defaults

  4. Go into GitHub for this repository and select the Issues tab. We want the new feature to be associated with an issue, so add a new issue for the feature. The issue number and issue subject should be part of the new feature name. The issue subject here would be “Users Can Access Single Entries”, for example.
  5. Add a new feature branch
git flow feature start 2-UsersCanAccessSingleEntries

where 2 is the Issue number and UsersCanAccessSingleEntries is a concatenation of the issue subject. The command prompt will now show the feature branch:

c:\demos\tony (feature/2-UsersCanAccessSingleEntries)

  6. This branch hasn’t been pushed back into the repository yet, so there is no tracking in GitHub. If you make changes to code in this folder, the prompt will be highlighted in red, to show changes pending. If you want to see the pending changes, execute:
git status
  7. To add the files into git
git add .
  8. To commit the repository locally, with a message:
git commit -am "Added code to get single entry"

This will change the command prompt folder back to white and commit the changes locally.

  9. To add the feature back onto the central repository
git flow feature publish 2-UsersCanAccessSingleEntries
  10. If you go into GitHub, you can now select the feature branch from the Branch drop-down


Reviewing a feature branch on another machine (or in another folder on the same machine)

  1. To see the list of feature sub-commands in gitflow:
git flow feature help
  2. To pull the feature into your local repository and switch to that branch, but without tracking changes:
git flow feature pull 2-UsersCanAccessSingleEntries
  3. To pull the feature into your local repository, switch to that branch and track changes to that feature:
git flow feature track 2-UsersCanAccessSingleEntries
  4. Make your changes in the tracked folder. In cmder the command prompt will show a red feature folder. Again, you can see the pending changes by executing:
git status
  5. Add the files that have been changed:
git add .
  6. Then commit them to the local repository with a message:
git commit -am "Added exception handling"
  7. Finally, to push them to the central repository:
git push


Get the reviewer’s changes back on the originator’s machine

  1. Check out the feature and pull it.
git pull


Finishing the Feature Branch and merging back into the develop branch

  1. The developer finishes the branch, not the reviewer. The developer would click the Merge pull request button to merge back into the develop branch. The reviewer Closes the pull request, but doesn’t finish it. To finish the feature, the developer executes:
git flow feature finish 2-UsersCanAccessSingleEntries
  2. The feature branch is now deleted both locally and remotely, and you will have been switched back to the develop branch.
  3. Other developers that are using this feature will also need to delete their local branch. That is done by executing:
git checkout develop
git branch -d 2-UsersCanAccessSingleEntries
  4. To check it has been removed
git branch

should no longer be showing the feature branch.


Creating a Release Branch

  1. This is the point in time where a release is ready.
  2. Once the Release Branch is created, it is passed to QA for testing.
  3. Any bugs that are found on the release branch will need to be fixed on the Release Branch and then merged back into the develop branch, so that any future feature branches will pick up those fixes.
  4. The architect creates the Release Branch by executing:
git flow release start sprint1-release
  5. The command prompt will show the new release folder, but this branch is currently only on the local machine. To publish this release so everyone can access it:
git flow release publish sprint1-release
  6. If you go into GitHub and pull down the Branch drop-down, you will see the new Release Branch.


Reviewing a Release Branch

  1. To view and track the release on someone else’s machine, set up a folder on that machine and execute:
git flow release track sprint1-release
  2. If someone makes changes to files in that branch, you can check the changes using
git status
  3. You can then add any changes to the local repository
git add .
  4. Then commit the changes to the local repository
git commit -am "Add logging to exception (example message)"
  5. Then push the change to the remote repository
git push
  6. The changes are now in the Release Branch, but need to be merged back into the develop branch.
git checkout develop
  7. Pull the develop branch
git pull
  8. Then merge it with the release branch
git merge release/sprint1-release
  9. That merge is local, so we now need to push changes back to the develop branch
git push


Cleaning up the Release Branch and pushing to the master branch

  1. This job is done by the Architect, who will change to the Release Branch
git checkout release/sprint1-release
  2. Do a pull to make sure the local machine is up-to-date
git pull
  3. Now finish the release
git flow release finish sprint1-release

This will merge our flow back into the master branch and open an emacs text editor to allow us to enter a more substantial release note. Save and exit the editor. The release will be merged into the master branch and tagged with the release name. The tag is also back-merged into the develop branch. The release branch is deleted both locally and remotely, and you are switched back to the develop branch.

  4. At this time, the develop branch is still checked out, so we need to change back to the master branch to check it all in.
git checkout master

When executing this command, Git will tell you how many commits the local master branch is ahead of or behind the remote.

  5. We need to push all these changes, including all the tags, back to the remote repository.
git push --tags


Creating a HotFix

  1. A Hot Fix is an emergency fix to released code. The Hot Fix is created on the master (production) branch. After making the fix on the Hot Fix Branch, it is then merged back into the master and develop branches.
  2. On the machine where the fix needs to occur, start the hotfix
git flow hotfix start hotfix1

Note that hotfix1 should match an issue in GitHub

  3. You will now be in the hotfix branch. Make changes to this branch; you can see the changes to the branch by typing
git status
  4. Once the hotfix has been finished, the changes need to be committed:
git commit -am "Hot fix changes"
  5. The developer can then finish the hotfix by executing
git flow hotfix finish hotfix1

This will bring up the emacs text editor so you can add a hotfix release note; save and close the editor, and it will merge the hotfix into the master branch, tag it as hotfix1, back-merge that tag into the develop branch, delete the local hotfix1 branch, and switch you to the develop branch.

  6. To see how many commits are outstanding on the develop branch, type
git status
  7. You can also switch to the master branch and see where that is
git checkout master
git status
  8. To push changes back to both remote branches, execute:
git push --all origin

What are Microservices?

August 14, 2016

Microservices sounds like a pretty slick name, but in fact, it isn’t all that complicated. Microservices are, broadly speaking, all about providing APIs and collaborating between those APIs.

Microservices are a specialisation, a refinement and an evolution of Service-Oriented Architecture (SOA). It is a specific approach for how to do Service-Oriented Architecture better. It has arisen, not due to any academic theories, but from an analysis of lots of real world projects, and takes all the best approaches of SOA learnt from that experience.

The reason you don’t hear much about Service-Oriented Architecture any more is that it was actually a big embarrassing failure. For all its promises, very little actually materialised. There was a lack of consensus on how to do SOA well, there was a lack of guidance on service granularity, SOA doesn’t talk about how to ensure services don’t become overly coupled, and there were too many attempts by vendors to lock you in.

Microservices provide architectural guidance to ensure better choices are made when divvying up an application for better maintenance, flexibility, scaling, resilience and reuse. The idea is to break up large, all-encompassing monolithic applications into a whole lot of little services. The smaller the services are, the more the benefits of independence are maximised. It also allows functionality to be consumed in different ways for different purposes. The downside is that extra complexity emerges from having more and more moving parts, so we have to become a lot better at handling those complexities.

Microservices align well with existing software development methodologies, technologies and processes. By splitting an application into a bunch of services, and forcing them to communicate with each other only via network calls, it allows each service to be treated like its own bounded context, a concept from Domain Driven Design. Each service also needs to have a Single Responsibility, to be a completely separate entity, to be able to change independently of the others, and to be deployable by itself, without requiring its consumers to change.

By managing a bunch of separate services, each component can be scaled separately. Too much demand for one service? Spin up another process of that service. They can also be run on multiple separate machines. The system is also much more resilient, as the failure of a single service does not bring the entire system down.
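
One common way to achieve that resilience is to wrap calls to other services so that a failure degrades gracefully instead of propagating. A minimal sketch in JavaScript, with made-up service names and a synchronous call for simplicity:

```javascript
// Minimal sketch of one resilience pattern: call a service, but fall back
// to a sensible default when it fails, so the whole request doesn't die
// with it. The names here are made up for illustration.
function withFallback(serviceCall, fallbackValue) {
    return function () {
        try {
            return serviceCall.apply(null, arguments);
        } catch (err) {
            // The failing service degrades gracefully instead of taking
            // the caller down with it.
            return fallbackValue;
        }
    };
}

// Example: a hypothetical recommendations service that is currently down;
// callers simply see an empty list instead of an error page.
var getRecommendations = withFallback(function (userId) {
    throw new Error("recommendation service unavailable");
}, []);
```

A real system would layer retries, timeouts and circuit breakers on top of this, but the principle is the same: the caller decides what a sensible degraded answer looks like.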

And you are not constrained by the technologies that they run under either. Because there are lots of little services, they work well in the cloud, where the architectural approach can be so closely correlated to an almost immediate cost saving as you reduce compute time for elements that don’t require much resources and increase compute time for the bottlenecks in the application.

Large organisations may have a large number of microservices, and because each microservice is entirely independent, they can be coded in isolation as well. The microservice approach allows us to divvy up the services so that we can hit the sweet spot between team size and productivity. A good starting point for how big a microservice should be is around 2 weeks of work for a team of around 8, so that fits really well with Agile sprint sizes as well. You can also have the entire team for a single microservice collocated, while another team may be working on a complementary microservice collocated elsewhere.

There are also other benefits from using the microservice approach to application construction. Teams that follow that approach are actually quite comfortable with completely rewriting services when required, and choosing alternative technologies with the ability to make more choices on how to solve problems. They also have no problem replacing services that they no longer need. When the code base is only a few hundred lines of code, it becomes difficult to become emotionally attached to it, and the cost of replacing it is pretty small.

Angular 1 is dead. Where to now?

August 7, 2016

Angular 1 has a massive market. It is by far the most widely used JavaScript framework available. It is a very opinionated framework, it has declarative power, and developers tend to lean towards the MV* patterns, which have a whole lot of benefits and with which they are familiar. So Angular itself is not going away any time soon.

The biggest problem with Angular 1 is that it is no longer being actively maintained. The main reasons for this are componentisation, performance and an inability to play well with search engines (SEO), which, incidentally, are the main factors that have made its main competitor, React, so popular. There is also quite a significant learning curve with Angular.

Componentisation enables you to build custom component trees quite easily, and the resulting code is usually much more maintainable. Performance was always a killer in Angular 1 due to watches and the digest cycle, which was basically a system for monitoring every single changing item on your page.

There was a limit of 2000 watches, and as soon as you went over that, IE pages simply ground to a halt. Finally, having a whole lot of script on the page did not make Search Engine Optimisation easy at all. Search engines don’t know what to look at with a single page application. They find it hard to walk the tree of links between your pages, because they aren’t seeing what you are seeing, they need to interpret the script behind the scenes that is being executed.
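
To make the digest idea concrete, here is a toy dirty-checking loop in plain JavaScript. This is an illustration of the mechanism, not Angular’s actual code: every watch re-reads its value on every digest, and the loop repeats until nothing changes, which is why the cost climbs with the watch count.

```javascript
// Toy dirty-checking loop in the spirit of Angular 1's digest cycle.
// Not Angular's real implementation; just an illustration of why a page
// with thousands of watches gets slow.
function createScope() {
    var watchers = [];
    return {
        // watchFn reads the current value; listener fires when it changes.
        $watch: function (watchFn, listener) {
            watchers.push({ watchFn: watchFn, listener: listener, last: undefined });
        },
        // One digest: keep looping over ALL watchers until a full pass
        // sees no changes.
        $digest: function () {
            var dirty;
            do {
                dirty = false;
                watchers.forEach(function (w) {
                    var value = w.watchFn();
                    if (value !== w.last) {
                        w.listener(value, w.last);
                        w.last = value;
                        dirty = true; // this change may affect other watches
                    }
                });
            } while (dirty);
        }
    };
}
```

Every digest touches every watcher at least once, and any change forces another full pass, so thousands of watches can mean thousands of function calls per keystroke.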

So the Angular team announced a complete rewrite of Angular 1, because they found that the structural problems with Angular 1 could not be resolved via a simple upgrade. They gave their own existing product a resounding fail. In doing so, they signed its death warrant.

What do you select then, if you have a whole lot of experience in Angular 1, and need to choose a JavaScript framework for your next project?

Well, after analysing the market, reading a whole stack of analysis and reviews, and having a play around with the technologies, I can say that there’s not a lot in it. Because Angular 2 is so different to Angular 1, you don’t need to automatically choose Angular 2 going forward. That said, because of the strength and size of the Angular 1 market, I don’t see Angular 2 going away any time soon. It may be an easier sell to management, especially given how much was previously invested in Angular 1 training, to go to Angular 2.

Steve Sanderson, from Microsoft, produced the following table, showing the benefits of a few of the frameworks. I really think the server-side pre-rendering is important, especially when one of the major complaints with Angular 1 was the lack of deep-linking and SEO support.

                              Angular 2     Knockout      React         React + Redux
Language                      TypeScript    TypeScript    TypeScript    TypeScript
Build/loader [1]              Webpack       Webpack       Webpack       Webpack
Client-side navigation        Yes           Yes           Yes           Yes
Dev middleware [2]            Yes           Yes           Yes           Yes
Hot module replacement [3]    Yes, limited  Yes, limited  Yes, awesome  Yes, awesome
Server-side prerendering [4]  Yes           No            No            Yes
Lazy-loading [5]              No            Yes           No            No
Efficient prod builds [6]     Yes           Yes           Yes           Yes

There is one framework not shown here that has gained some traction in recent times: Aurelia, which has recently been released (RTM). Aurelia was created by the developer who produced Durandal. He later joined the Angular 2 team, had some input into that, but left the team because he disagreed with some of their decisions. Some of those decisions were probably valid, while others may not have been; such are the egos of developers. Aurelia is supposed to have a more simplified syntax than Angular, but doesn’t currently have the market penetration.

I like to keep things simple. I like to look at what has solid traction, and try to limit my choices based on technical capabilities, maintainability, performance, ease of learning and popularity. This tells me that the two frameworks with the most promise are actually Angular 2 and React+Redux.

Although Angular 2 has only reached RC4, I still consider it a viable choice today, as, remember, by the time your app is released it will most likely have gone to RTM. There are actually a number of significant applications that have been built on Angular 2 release candidates. The strong tooling and support when Angular 2 is finally released is also a consideration, as whatever your choice is, you really will want longevity of your code base, and you certainly don’t want to be embarrassed by making a fringe choice that has potential that never materialises.

Alternatively, you might choose to go with React+Redux, which is also available with ASP.NET Core 1.0 and Visual Studio 2015. React is supported by Facebook, and is part of a more advanced ecosystem. Facebook are also innovating faster to answer any architectural issues related to component-based frameworks. Each framework tries to steal the best bits from the others, and both React and Angular have been doing this.

If it was pure performance I was after, I think I would have to go with React. React is not an Angular killer, however, mainly because of the size of the Angular base and the structure it provides. React is probably a lot simpler to learn, although Angular 2 has become better in this regard. It really comes down to how structured you need your code to be versus how much performance you need to get out of your web servers. With massive cloud-based sites, extra web servers and lower serving capacity cost money, so I’d say they’d probably be better off with React.


Edit: I just found another table that is worth linking to, by Shannon Duncan. It has more attributes compared, which makes it much more interesting:


That article may be found here: Angular2 vs React

Installing Angular 2 to run with ASP.NET Core 1.0 in Visual Studio 2015

August 7, 2016

I initially had a lot of trouble even finding references to people using Angular 2 in Visual Studio 2015. It seemed that no matter what I fiddled with, there were failures at every turn. It ended up being quite tricky to get working. In the end I found that the best way to get going in Visual Studio 2015 was to use yeoman to create your base, and then work backwards to figure out where I had gone wrong.

Yeoman is a scaffolding tool. Basically, smart people put together generator packages with technologies that they think belong together, and submit the packages to yeoman. You can then go to the yeoman site and look up the packages that others have put together.

I initially tried via the yeoman web site, clicked on Discovering Generators, then searched for Angular2, and found the aspnetcore-angular2 package. It was ok, but I had trouble getting it working with ES5.

I recently went to NDC Sydney, and saw a session by Steve Sanderson. He has put together a great yeoman package that works with ASP.NET Core 1.0 in Visual Studio 2015. The package is called generator-aspnetcore-spa, and installation details are available from his web site: Steve Sanderson’s blog. It has been updated to RC4, and the TypeScript target is set to es5, so it will run on most popular modern browsers.

The beauty of Steve Sanderson’s package is that it also supports React, in case you want to give that a try.

ASP.NET Core 1.0 – How to install gulp

July 31, 2016

In a previous blog post, I installed npm, otherwise known as the node package manager. I added an npm configuration file under the wwwroot folder called package.json. There are two problems with this. Firstly, Visual Studio Dependencies haven’t been designed for that scenario, which means adding npm packages won’t update the Dependencies folder at the root level, so you lose a fair bit of control over the packages installed. Secondly, the nature of npm is that there could be a whole bunch of additional files added to the package that could be unrelated to the runtime needs of the package. Having these files added could potentially create a risk.

Now, to approach a better practice, I have decided to go back to putting the packages in the root folder. I right-clicked on the project, add new item, and then selected an npm configuration file, then add. This adds the package.json back into the root folder. I then copied the contents of the original package.json I had under wwwroot into the package.json file in the root folder. After this, I deleted the package.json file from the wwwroot folder and deleted the entire node_modules folder. Why did I do this? Because that is what the state of the folder would have been like under the default scenario of installing npm packages at the top level.

Now, given that any static files that are served to the web site need to reside under wwwroot, I had to come up with a way to relocate the contents of node_modules under wwwroot that didn’t involve putting the package.json file there.

While the most common way to do this is simply to add a static file provider to the Startup.cs file, in the Configure method, as the following code demonstrates

 app.UseStaticFiles(new StaticFileOptions
 {
     FileProvider = new PhysicalFileProvider(Path.Combine(env.ContentRootPath, "node_modules")),
     RequestPath = "/node_modules"
 });

I decided that eventually I will want a lot more control over this.

Well, the way of the future is to use a tool like gulp, which enables you to run tasks via the Task Runner Explorer in Visual Studio 2015.

Now, I have to admit that I have been attempting to get Angular 2 running in ASP.NET Core 1.0, with some success. That will be the subject of a future post. But for now, I have added gulp into the npm package.json file. That now looks like this:

{
  "version": "1.0.0",
  "name": "myfirstaspnetcoreapp",
  "private": true,
  "devDependencies": {},
  "dependencies": {
    "@angular/common": "2.0.0-rc.4",
    "@angular/compiler": "2.0.0-rc.4",
    "@angular/core": "2.0.0-rc.4",
    "@angular/http": "2.0.0-rc.4",
    "@angular/platform-browser": "2.0.0-rc.4",
    "@angular/platform-browser-dynamic": "2.0.0-rc.4",
    "@angular/router": "3.0.0-beta.2",
    "bootstrap": "^3.3.7",
    "core-js": "^2.4.0",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "systemjs": "0.19.27",
    "zone.js": "^0.6.12",
    "gulp": "^3.9.1",
    "rimraf": "^2.5.4"
  }
}

At the bottom of this file are references to gulp and rimraf. Rimraf is the package for doing the equivalent of a unix rm -rf. The gulp package is needed to support gulp tasks in the Task Runner Explorer.

Next I added the gulp configuration file to the top level of my project. Right click on MyFirstAspNetCoreApp and click Add New Item, then select Gulp Configuration File. The Gulp Configuration File is a javascript file called gulpfile.js. Keep that name and click Add.

Open up gulpfile.js, and paste in the following code:

var gulp = require("gulp"),
    rimraf = require("rimraf");

var paths = {
    webroot: "./wwwroot/",
    node_modules: "./node_modules/"
};

paths.libDest = paths.webroot + "node_modules/";

// Delete the copied node_modules folder under wwwroot
gulp.task("clean:node_modules", function (cb) {
    rimraf(paths.libDest, cb);
});

// Copy node_modules into wwwroot, running the clean task first
gulp.task("copy:node_modules", ["clean:node_modules"], function () {

    var node_modules = gulp.src(paths.node_modules + "/**")
        .pipe(gulp.dest(paths.libDest));

    return node_modules;
});
What this code does is copy the entire nested contents of node_modules in the root folder to node_modules under wwwroot. Now, I wouldn’t ordinarily finish here, as you really should be more specific about the content you’re actually copying. But to keep it simple, I have settled on this for now.

Next, open up the Task Runner Explorer. If you can’t see it at the bottom of your screen, it is found under View > Other Windows > Task Runner Explorer.

After building my app, the task runner explorer looks like this for me.


Now I can right-click on the copy:node_modules task and click Run. If you look at gulpfile.js, you’ll see there is a dependency on clean:node_modules, so that will run the clean task as well. You shouldn’t need to run this every time you compile the application; you only need to run it when adding and removing npm packages. Nothing changes in the meantime.

Now, when you go to wwwroot and Show All Files, you should see that the node_modules folder has been copied.

The files within node_modules are now available to be added into your html.

Why you should (almost) always choose an off-the-shelf grid and not build your own.

July 30, 2016

Recently I was in a situation with a whole lot of people who I think should know better. We were building an application and I was not there when the questionable decision was made to build their own grid.

Except in the simplest of cases, there is a whole swag of reasons why you should never build your own grid. Grids can be complicated, and they can require a significant investment to obtain even the simplest of the features that you would otherwise get in an off-the-shelf product.

Features like sorting, filtering, frozen columns, frozen rows, summing, hierarchies, cell editing, data exporting, pagination, etc. For high-volume data, they also include virtual paging, which loads data into the grid page by page, instead of all at once. They can be styled however you want, and they are fully tested. Sure, they can require a little bit of learning to achieve what you need, but the cost of doing this is significantly less than the build-your-own solution. The only time you run into problems is when there is too much bloat, or you are trying to do too much with the grid, a problem you would probably have regardless of which path you took.
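
To give a feel for what just one of those features involves, here is a rough sketch of virtual paging in JavaScript. The names are made up for illustration, and a vendor grid’s real API will differ, but the idea is to fetch rows one page at a time and cache the pages already seen:

```javascript
// Sketch of virtual paging: fetch data one page at a time, on demand,
// caching pages that have already been loaded.
function createPager(loadPage, pageSize) {
    var cache = {}; // pageNumber -> array of rows
    return {
        // Return the rows for a given zero-based page, loading it if needed.
        getPage: function (pageNumber) {
            if (!(pageNumber in cache)) {
                cache[pageNumber] = loadPage(pageNumber * pageSize, pageSize);
            }
            return cache[pageNumber];
        }
    };
}

// Example loader backed by an in-memory array standing in for a server call.
var allRows = [];
for (var i = 0; i < 100; i++) allRows.push({ id: i });
var pager = createPager(function (offset, limit) {
    return allRows.slice(offset, offset + limit);
}, 10);
```

Even this toy version has to think about cache invalidation, sort order and concurrent requests, which is exactly the kind of incidental complexity the off-the-shelf products have already paid for.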

But you don’t need to take my word for it. It is a principle of Domain Driven Design. Eric Evans, the original author of Domain Driven Design, has a Domain Driven Design Navigation Map which clearly states “Avoid over-investing in generic sub-domains.”

A grid is a perfect example of a generic sub-domain. From Eric’s Diagram:

domain driven design navigation map - generic subdomain

So next time someone is absolutely adamant that they need to build their own grid, see through that for what it is, especially if they claim to be Domain Driven Design experts.

ASP.NET Core 1.0 – How to install npm

July 28, 2016

npm is a package manager that installs, publishes and manages node programs.

To install it in an ASP.NET Core 1.0 Visual Studio 2015 application, Right-Click on the wwwroot folder, Add a New Item, then click on Client Side in the left nav, and select npm Configuration File and click Add. It will add the default package.json file to the project.

Within the package.json file, change the name attribute to something specific to your application. In my case, I named it “myfirstaspnetcoreapp”. The name must be lowercase; it seems to accept spaces, but I have avoided them.

By default the attribute name is set to “”. If you don’t change the name, the Dependencies folder won’t get generated. The Dependencies folder is where the npm packages are referenced.

To add bootstrap to my project, I need to add a dependencies attribute group, and specify the package and version number. Here is my package.json file:

{
  "version": "1.0.0",
  "name": "myfirstaspnetcoreapp",
  "private": true,
  "devDependencies": {},
  "dependencies": {
    "bootstrap": "^3.3.7"
  }
}
Note the hat/caret character in the version number for bootstrap. That means: give me the latest version 3 package greater than or equal to 3.3.7 but less than version 4.
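
The caret rule can be sketched in a few lines of JavaScript. This is a simplification of npm’s full semver range grammar; it handles only plain numeric x.y.z versions with a non-zero major (npm treats ^0.x ranges more strictly):

```javascript
// Simplified check of npm's caret range: ^3.3.7 accepts any version
// >= 3.3.7 and < 4.0.0. Handles only plain numeric x.y.z versions.
function satisfiesCaret(base, candidate) {
    var b = base.split(".").map(Number);
    var c = candidate.split(".").map(Number);
    if (c[0] !== b[0]) return false;       // must stay below the next major
    if (c[1] !== b[1]) return c[1] > b[1]; // a newer minor is fine
    return c[2] >= b[2];                   // same minor: patch must not regress
}
```

So ^3.3.7 matches 3.3.7, 3.4.0 and 3.9.1, but not 3.3.6 or 4.0.0.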

The moment you save this file, bootstrap will be loaded and you will see the bootstrap package referenced under the Dependencies > npm folder.

The files themselves will be installed at the root level of the wwwroot folder under a node_modules folder, but that folder will be hidden. You can find it by clicking on the Show All Files icon at the top of Solution Explorer.


Note that a lot of people install the package.json file at the top level of the project. The problem with that is that only files installed under wwwroot are able to be served to the web site. To get around that, those people have to either use gulp to relocate the package files as a post build step, or they use another UseStaticFiles statement in the Startup Configure method and supply the alternative folder in the options collection.

The method I am using seems a bit cleaner to me, so unless someone can tell me why I shouldn’t do this, I’m going with this method.

Now if you build and run the application, you should be able to go directly to the file and it will be served to you:


Because of this, you will now be able to reference the stylesheet from within your header:

  		<link href="/node_modules/bootstrap/dist/css/bootstrap.css" rel="stylesheet" />

Bootstrap has been around for a long time, so I won’t go into how bootstrap works. By default bootstrap works off a division of the horizontal screen space into 12 columns. Here I have put a little bit of styling on one of my pages. col-md-6 is a 6-column division for a medium screen size (to be responsive, you can specify different numbers of columns for different target screen sizes in the same class attribute).

@using System.Security.Claims
@model MyFirstAspNetCoreApp.Entities.Hotel

<link href="/node_modules/bootstrap/dist/css/bootstrap.css" rel="stylesheet" />
<style>
    .col-md-6 {
        border: 2px solid black;
    }
</style>

@if (User.Identity.IsAuthenticated)
{
    <div class="row">
        <div class="col-md-12">@User.Identity.Name</div>
    </div>
    <div class="row">
        <div class="col-md-12">
            <form method="post" asp-controller="Account" asp-action="Logout">
                <input type="submit" value="Logout" />
            </form>
        </div>
    </div>
}
else
{
    <div class="row">
        <div class="col-md-6">
            <a asp-controller="Account" asp-action="Login">Login</a>
        </div>
        <div class="col-md-6">
            <a asp-controller="Account" asp-action="Register">Register</a>
        </div>
    </div>
}

<div class="row">
    <div class="col-md-6">@Model.Id</div>
    <div class="col-md-6">@Model.DisplayName</div>
</div>