Which JavaScript framework should I choose in the Enterprise?

July 16, 2017

There are many reasons why modern application developers should be using JavaScript frameworks when building new applications. The modern browser has become almost a complete application hosting runtime. The added responsiveness, the performance, and the ability to easily request and manipulate data asynchronously and to build different parts of your page progressively and independently are only some of the benefits. If you’re going to build an application in the browser, you need structure, you need to deliver features quickly, you need them to perform, and you need to provide consistency.

JavaScript alone does not provide this. JavaScript doesn’t provide structure, it doesn’t provide custom UI elements, it doesn’t provide data binding or animations, and it doesn’t provide a network communication framework. Everything in JavaScript is done with add-ons, and the time taken to build your own libraries can be prohibitive, getting in the way of actually delivering your own custom application logic.

Even Gartner says that you should be using a JavaScript framework. Gartner’s Research Director Bradley Dayley says that 40% of companies are now using JavaScript frameworks heavily in their projects.

But we need to be careful here. It’s very easy to be swept up in hype. It seems to be a systemic problem in the industry that we have a tendency to jump into new technological choices far too quickly. Don’t be swayed or swept up by the hype.

But there are so many JavaScript frameworks! So which framework should you choose, and what are the criteria you should be looking at?

When selecting a JavaScript framework for the Enterprise, there are a number of factors you really need to consider. These include:

1)     Adoption – Does it have the backing of industry leaders? Who is behind it? Who is using it? How many production deployments are there? Is there community and developer support? Are there many jobs for it? Can I get the developers I need?

2)     Opinionated – does it provide you with a ready-made framework where most of the basic structural problems already have solutions, or do you need to put those solutions together yourself?

3)     Learning curve – How hard is it to learn and is it worth the effort to do so?

4)     Future proof – will this framework be around for a while? Is it using web standards and does it have a path towards future web standards? Is there a roadmap for supporting future web standards?

5)     Feature richness – Can I build awesome sites with it?

6)     Productivity – How easy is it to add features? How easy is it to maintain?

7)     Testable – is there a strategy for testing the application components?

8)     Size – does the framework contain a lot of large files? Is it slow to download? Is it heavy?

9)     Performance – does the framework introduce impediments to performance?

10)  Browser support – does the framework support many browsers including older versions?

11)  Licensing – are there any gotchas?

Adoption

There are only a few JavaScript frameworks that have made it really big. The two biggest are Angular and React. Angular is backed by Google, while React is backed by Facebook. Angular version 1 (AngularJS) was one of the most widespread technologies in use for an exceptionally long time. In fact, the earliest issue in the AngularJS repository was raised in 2011, and its adoption was about 5 times bigger than React’s at the time it was deprecated. At the time of writing, Angular version 1 had 56,400 stars on GitHub. That’s a popular base.

But you wouldn’t start a new project in Angular version 1. There is virtually no one recommending people take that path. Bradley Dayley from Gartner certainly doesn’t recommend it, saying “Angular 2 is a much better option than Angular 1” and “Angular 2 is still one of the better frameworks out there”. Rob Eisenberg of Aurelia (and now Microsoft) doesn’t recommend it. Angular 1 has structural problems that cannot be discounted. If you’re on Angular 1, you really need to move forward. Jeremy Likness, Director of Application Development at iVision, made the comment: “There will continue to be a long tail for Angular 1 apps, but there is a clear path to Angular 2 and I see people taking that path.”

Angular version 2 (now known simply as “Angular”, and currently at version 4) is a complete rewrite of Angular version 1. It has been given the same significant support that Angular 1 was given: Google has backed it all the way and has, in fact, completely rewritten Google AdWords with it. That’s over a million lines of code. Angular 2 was started in 2015, and it now has 25,900 stars on GitHub. And according to Bradley Dayley, “Angular 2 is still one of the better frameworks out there.”

React, on the other hand, has grown significantly. React is used by Facebook on Facebook. It currently has 71,033 stars on GitHub and has been around since 2013. Up until the Angular rewrite was announced, React was making reasonable progress, but after that announcement its popularity shot up, as it was the main beneficiary of the doubts around Angular. People say you can’t compare the two, but that’s rubbish. The reality is that you can compare them, because to use React you will pull in a Router, a Flux implementation, and various libraries. You will build a framework to provide the same functionality that you get from Angular. React probably has the strongest developer advocacy of any of the JavaScript frameworks.

Then, of course, there are other libraries. Backbone and Knockout are on the way out. Aurelia just hasn’t got the take-up, although it certainly has the support of its developers. Polymer is just meh at the moment. Meteor is a bit too rigid and doesn’t have the take-up.

Probably the biggest contender at the moment is Vue.js. It’s new and the fastest growing of all the frameworks. It has 60,100 stars on GitHub already, and it’s only been around since 2015. It’s just like React, though, in that it will require you to put together a whole framework.

In terms of trends, in the last month, Angular 2 has grown by 1016 stars, React has grown by 2,495 stars, but Vue has grown by a staggering 3,546 stars!

For now, though, in the Enterprise, I’d probably say no to Vue.js, but it’s certainly worth keeping an eye on and revisiting in a year or so.

Because of these factors, in this article, I will focus on the two biggest players in the space, React and Angular.

Opinionated

Probably the biggest argument for and against these frameworks is whether or not they are opinionated. An opinionated framework is prescriptive: it is one where the majority of the structural/infrastructure decisions have been made for you. That is, it provides prebuilt boilerplate code that forces you to structure your code in a particular way. And because a lot of those decisions have been made for you, you can get on and focus on your own custom code, rather than spending a ridiculous amount of time building infrastructure before you even get to write your first line of business-related custom application code. Sure, a hand-rolled foundation might be a technically brilliant solution, but apart from some nerd points, who really cares?

Angular 2 is an opinionated framework. Out of the box, you have pretty much everything you need to build an Enterprise application. React, on the other hand, has very few strong opinions. To build something Enterprise-ready requires a significant number of decisions. You need to build a foundation. These days, React provides a Router (it never used to), and you need to select one of the 20+ variations of Flux (you should probably choose the most popular, being Redux). You’ll also need to select an interaction library to get data from an external repository, something like redux-thunk. In fact, I would say that React requires a dog’s breakfast of technologies just to get a foundation up and running. And when putting together such a mix of new technologies, some will be more mature than others, and many are not actually supported by Facebook.
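To make the “build a foundation” point concrete, here is a hand-rolled sketch of the store pattern that sits at the heart of the Flux/Redux piece you end up assembling. The names `createStore` and `Action` mirror Redux, but this is illustrative code, not the Redux library itself (which adds middleware, combineReducers, devtools and so on):

```typescript
// Minimal Flux/Redux-style store: state changes only via dispatched actions,
// and views subscribe to be told when state has changed.
type Action = { type: string; payload?: unknown };
type Reducer<S> = (state: S, action: Action) => S;
type Listener = () => void;

function createStore<S>(reducer: Reducer<S>, initial: S) {
  let state = initial;
  const listeners: Listener[] = [];
  return {
    getState: () => state,
    dispatch(action: Action) {
      state = reducer(state, action); // compute the next state
      listeners.forEach((l) => l()); // notify subscribed views
    },
    subscribe(l: Listener) {
      listeners.push(l);
    },
  };
}

// Example: the classic counter reducer
const counter: Reducer<number> = (state, action) =>
  action.type === "INCREMENT" ? state + 1 : state;

const store = createStore(counter, 0);
store.dispatch({ type: "INCREMENT" });
console.log(store.getState()); // 1
```

That is only the state-management corner of the foundation; routing, async data fetching and the view bindings are further decisions on top.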

So of course, it can be done, and many people are doing it. Between the time when Angular 1 was deprecated and Angular 2 was just getting off the ground, that’s what people were doing, as they didn’t really have another choice, and people were quite angry at what was happening to Angular.

If you’re going down the path of React, I would recommend going with the most popular versions of the React tech stack. Here I recommend that you do a Pluralsight course, put together a framework based on what you learn from it, and build on that learning.

So, in the case of choosing a JavaScript Framework that has everything you need to build an Enterprise application, with all the modules supported, I would have to say Angular 2 wins hands down.

Learning curve

JavaScript frameworks are not the easiest to learn. Both Angular and React require a significant amount of time to get to an intermediate level, which is what you would need to build an Enterprise application. I know there are people out there who claim it will take a few minutes to get going with React and far longer to learn Angular, but they’re not really talking about learning the full framework. With React, they are usually just talking about how quickly they can learn the view layer.

If you want to do something significant and Enterprise ready, my advice is to immerse yourself in a Pluralsight course. Make your entire team do it.

To give you a realistic idea of how long it takes to learn Angular vs React, I suggest that you would need to do both a Beginner and an Intermediate course.

Doing John Papa’s Angular: First Look Beginner’s course takes 4.5 hours. Following up with Joe Eames’ Angular Fundamentals Intermediate course will take a further 10 hours. That’s a 14.5-hour investment.

Doing Cory House’s Building Applications with React and Flux Beginner’s course will take around 5 hours. Following up with Cory House’s Building Applications with React and Redux in ES6 Intermediate course will take a further 6.25 hrs. That’s 11.25 hours.

So, when people tell you React is easier to learn than Angular, my response is: not in the Enterprise it’s not! There’s not that much difference between 11.25 hours and 14.5 hours. If you are learning these frameworks from scratch, the investment in these courses is well worth it. You will get significant productivity benefits from day one. One gotcha here, though: a 10-hour course does not take 10 hours to learn. It takes significantly longer to absorb everything from one of those courses. When I was doing them, I found it could sometimes take me a whole day to get through one hour of course material!

Future proof

We are now past the Angular 1 to 2 hiccup, and both Angular 2 and React will be around for a long time. Here, you need to ask how much each of these frameworks is oriented towards future web standards. Angular is closer to the web standards. React, on the other hand, diverges from web standards and patterns, and that is risky. That’s because React does its own virtual DOM code compilation, which is seen as one of its benefits. Some people say this would make it more difficult to move away from React towards a different framework that conforms better with the standards. But let’s be serious here: no one ever changes framework once they’ve decided, until they are ready to completely rebuild the application, and an application’s lifetime is around 5 years, so it won’t be happening for some time.
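For readers unfamiliar with the virtual DOM idea mentioned above, here is a toy sketch of it. The names `h` and `renderToString` are illustrative, not React’s actual API, and real React diffs two virtual trees and patches only the changed DOM nodes rather than re-rendering strings:

```typescript
// The UI is described as plain objects (a "virtual" tree)...
type VNode = { tag: string; children: (VNode | string)[] };

const h = (tag: string, ...children: (VNode | string)[]): VNode => ({
  tag,
  children,
});

// ...and a render step turns that description into real markup.
function renderToString(node: VNode | string): string {
  if (typeof node === "string") return node;
  const inner = node.children.map(renderToString).join("");
  return `<${node.tag}>${inner}</${node.tag}>`;
}

const tree = h("ul", h("li", "Angular"), h("li", "React"));
console.log(renderToString(tree)); // <ul><li>Angular</li><li>React</li></ul>
```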

Productivity

I have written apps in both React and Angular, and I have yet to form a strong opinion on this one. I rewrote an entire C# MVC web app in Angular 2 recently. It took me 2 weeks (and just about killed me), but I was able to do it efficiently, and once the patterns were in place, it was pretty easy to cut and paste, and add features.

Also, I am actually more impressed with the separation of HTML from code that Angular provides. Because of the way React works, the code and the HTML are all in one file. They say this is a good thing, but I believe that keeping the UI as separate as possible from the code is better, especially if you have designers on the team who might want to modify the layout of the HTML while you are still coding the component.
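To illustrate the separation argument without pulling in either framework, here is a framework-free sketch. The `renderTemplate` helper and `{{name}}` placeholder syntax are made up for illustration, but the structural difference mirrors Angular’s external templates (markup a designer can edit on its own) versus JSX’s markup built inline with the logic:

```typescript
// "Angular-style": the template lives apart from the component logic
// (in Angular it would be an HTML file referenced via templateUrl).
const greetingTemplate = "<h1>Hello, {{name}}!</h1>";

function renderTemplate(template: string, model: Record<string, string>): string {
  // Substitute each {{key}} placeholder with the model value.
  return template.replace(/\{\{(\w+)\}\}/g, (_m: string, key: string) => model[key] ?? "");
}

// "JSX-style": the markup is produced inline, mixed with the logic.
function greetingInline(name: string): string {
  return `<h1>Hello, ${name}!</h1>`;
}

console.log(renderTemplate(greetingTemplate, { name: "Tony" })); // <h1>Hello, Tony!</h1>
console.log(greetingInline("Tony")); // <h1>Hello, Tony!</h1>
```

Both produce the same output; the difference is purely in where the markup lives and who can safely change it.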

Feature richness

Both Angular and React are feature rich and there is support for a massive number of add-ons for both environments.

Size

I found it exceptionally hard to verify the claims that React is significantly smaller than Angular. Anton Vynogradenko produced a page that shows some stats, obtained from a CDN, for various JavaScript framework configurations. They show that Angular 2, minified, weighs in at 566K out of the box, whereas React with React DOM, Redux and React Router is around 191K. That’s a significant difference, and on this measure React wins hands down.

Here’s a link to Anton’s page: https://gist.github.com/Restuta/cda69e50a853aa64912d

The issue with size comes back to how long it takes to transfer the file, and for a large site with significant traffic, it would probably affect data cost as well.
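As a back-of-envelope check on what those sizes cost, transfer time is just bytes times eight over link speed in bits per second. The 2 Mbit/s figure below is an assumed slow mobile link, purely for illustration:

```typescript
// Rough download time for a bundle of a given size over a given link.
function transferSeconds(sizeBytes: number, bitsPerSecond: number): number {
  return (sizeBytes * 8) / bitsPerSecond;
}

// 566 KB Angular bundle vs 191 KB React stack on an assumed 2 Mbit/s link:
console.log(transferSeconds(566 * 1024, 2_000_000).toFixed(2)); // "2.32"
console.log(transferSeconds(191 * 1024, 2_000_000).toFixed(2)); // "0.78"
```

On a fast corporate LAN both round to nothing, which is why the difference matters mainly on public sites.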

Unfortunately, the stats haven’t been kept up to date, and there is no CDN entry for version 4 of Angular. Given that Angular 4 was an optimisation release, it would surprise me considerably if the difference was still that great. Without further investigation, I can only form the opinion that React is probably about 1/3rd of the file size. In a corporate environment, it probably wouldn’t matter much, as you’d just wait the extra second for the page to load, but on a significant public site you might want to investigate this more thoroughly.

Performance

It’s hard to judge performance when all you’ve got to go on is other people’s potentially biased and vested opinions. I don’t need to give you too much of my own opinion here, other than to suggest that you take a look at Stefan Krause’s JavaScript Framework Benchmark site. This site contains a whole lot of benchmark tests for a wide array of JavaScript Frameworks, and he provides a calculated score (slowdown geometric mean) for framework speed. The benchmark was up to Round 6 at time of writing.

According to that site, Angular 2 (version 4) gets a speed score of 1.31 at its most efficient, and React with Redux gets a score of 1.41 at its most efficient. Certainly not enough to make a decision one way or the other.
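For reference, a slowdown geometric mean like the one Stefan reports is just the geometric mean of each framework’s per-benchmark slowdown relative to the fastest implementation. A minimal sketch:

```typescript
// Geometric mean computed in log space to avoid overflow on long inputs.
function geometricMean(values: number[]): number {
  const logSum = values.reduce((sum, v) => sum + Math.log(v), 0);
  return Math.exp(logSum / values.length);
}

// e.g. a framework that is 1.2x, 1.5x and 1.3x slower than the fastest
// on three benchmarks (illustrative numbers, not Stefan's data):
console.log(geometricMean([1.2, 1.5, 1.3]).toFixed(2)); // "1.33"
```

A score of 1.0 would mean the framework was the fastest on every benchmark, which is why 1.31 vs 1.41 is such a small gap.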

In terms of memory, Angular 2 (version 4) and React (with Redux) are also very close. After adding 1,000 rows, Angular 2 (v4) uses around 10.88 MB, while React with Redux uses 10.76 MB.

Stefan’s site is here: http://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts/table.html

Browser support

Both Angular 2 and React take advantage of polyfills. Polyfills are JavaScript libraries that provide backward compatibility with older browsers. But seriously, any organisation that is still on IE9 really should have its head checked. Either way, transpilers these days are exceptional, and you should be able to find a solution to produce what you need for either Angular or React.
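To show what a polyfill actually is, here is a simplified sketch of an `Array.prototype.includes` polyfill of the kind older browsers need. It feature-detects first, and it skips the spec’s NaN and fromIndex handling for brevity, so treat it as illustrative rather than production-ready:

```typescript
// Feature-detect: only fill in the method if the browser lacks it.
if (!(Array.prototype as any).includes) {
  Object.defineProperty(Array.prototype, "includes", {
    value: function (this: unknown[], search: unknown): boolean {
      // Walk the array looking for the element.
      for (let i = 0; i < this.length; i++) {
        if (this[i] === search) return true;
      }
      return false;
    },
  });
}

console.log([1, 2, 3].includes(2)); // true
```

Real projects just pull in a maintained polyfill bundle (e.g. core-js) rather than writing these by hand.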

Maintainability

I don’t really see much difference in the time taken to maintain React or Angular 2. Both environments are similarly componentised, so maintenance shouldn’t be a problem in either.

Testable

Both Angular 2 and React have a similar testing setup. I don’t see much difference here.

Licensing

Angular 2 is released under the MIT licence. You can use it pretty much however you like. You can modify it. You can on-sell it.

React is released under a modified BSD licence: the “Facebook” licence. There is an odd clause in it. Basically, it is a patent retaliation clause. I don’t want to interpret it for you, but you should take a look yourself if you think you might have an issue. Here is the text:

“The license granted hereunder will terminate, automatically and without notice, if you (or any of your subsidiaries, corporate affiliates or agents) initiate directly or indirectly, or take a direct financial interest in, any Patent Assertion: (i) against Facebook or any of its subsidiaries or corporate affiliates, (ii) against any party if such Patent Assertion arises in whole or in part from any software, technology, product or service of Facebook or any of its subsidiaries or corporate affiliates, or (iii) against any party relating to the Software.”

Conclusion

There are a whole lot of reasons why you might choose one of these frameworks over the other, and I hope I have provided you with a few more issues to consider. If your application is small, or you want a more traditional application with only a JavaScript-based view, you might straight away choose React, or even Vue. In small applications, it probably doesn’t really matter.

If you’re building a large application and you have a highly sophisticated team that enjoys investigating and evaluating new technologies, are happy to make the specialised decisions and put in the hard work required to build a foundation, and you’re not worried about the patent clause, then by all means, go with React. But if you’re writing a large application and you need the enforced structure out of the box, I would recommend that you go with Angular.


The failure of npm for Visual Studio in the Enterprise

January 15, 2017

Modern application development is hard. There are simply so many things you have to think about when you are developing, and over time, more and more features are created and many of those need to be integrated into your applications.

This is not without cost. Early on in my career, we used to talk about DLL Hell. DLL Hell was the problem where there were so many versions of a DLL installed in your environment that you never knew which one the application was actually using.

The modern version of this I now call Package Hell. When I open a modern enterprise Visual Studio application, such as an Angular 2 application, I now have a minimum of around 40 packages installed under the Dependencies/npm branch, just to get a simple application up and running, and many of those packages have dependencies too. And that’s only one of the package managers. Other package managers available include NuGet and Bower.

What is supposed to happen these days is that anything Microsoft .NET-related may be found in NuGet, while anything JavaScript-related will be in npm. npm is the Node package manager. It is essentially an online repository for JavaScript packages, not just in the Microsoft world, but for any environment that wants access to those packages over the web. It enables developers to find, share, and reuse packages of code from hundreds of thousands of developers, and assemble them in powerful new ways. Microsoft didn’t invent npm; it decided npm was the tool everyone was using, that it did the job, and so got on board in order to keep up and stay competitive in the development space.

To easily explain the problems with npm, I will compare this to nuget. Why? Because nuget works! Nuget Package Manager is simple, it is visual, it keeps you informed, it’s easy to find packages and keep them up to date, and it’s easy to change versions of packages if you need to. You don’t need to focus on the tooling – you can install packages and focus instead on integrating with your business logic and providing business value.

npm Problem 1: The proxy.

If you want npm to work in an enterprise environment, you will most likely have to go through a proxy server.

With nuget, you open up the package manager screen, type in package names, and it gives you a list of candidate packages. It automatically handles the proxy for you. You don’t have to configure it to work. You don’t have to go to everyone’s machine, modify a configuration file, just to ensure that their login has the credentials to authenticate through the proxy to get to the nuget repository. It is automatic.

npm, in this regard, is a complete failure. In the environment I was in, we couldn’t even get that configuration right: even with all the correct settings, it still failed. The workaround is to install a third-party package called cntlm, which is a service that opens a local port and automatically authenticates through the proxy. All you then have to do is point npm at that port. “Install what?” I hear you say. Yep, exactly. That’s a major fail in a large environment.

Note: you could also use Fiddler, but it’s the same issue. Developers shouldn’t have to spend time configuring or using third-party packages for something that should just work out of the box. It works for nuget. It needs to work for npm.

npm Problem 2: Finding new packages.

When using nuget, you type a keyword into the search bar, and you can see a list of packages come up. Most of the time, this is because you were googling and found a reference to a package that might solve a problem, or you might have come across some cool new feature and want to try it out. In the process of doing that, you might also discover other packages that do the job, because you can easily scroll down the list and see what else is on offer. Nuget makes discovery of new interesting packages easy.

But not with npm. Sure, you might find out about a package by googling, but the exploration in npm just isn’t there. In npm, it involves opening up the package.json file and typing a double quote, and then you get your list of choices in a 9-item scrollable tooltip. It’s rubbish. Not to mention that some packages aren’t even discoverable. That’s right: you can’t actually find any package in the registry that starts with an “@” symbol, such as @angular, because there are special rules for scoped packages.

npm Problem 3: Version control.

With nuget, when you open up the package manager, it looks up the list of installed packages and compares their version number with what’s available on the net. If one of the packages has been upgraded, it shows you in an Updates tab. You can then choose to upgrade if you want. It’s entirely up to you. But at least it has that feature.

With npm, on the other hand, you might have 40+ packages, but there’s nowhere near as much control. Compared to nuget, it really sucks. To tell npm that you want to continually upgrade, you have to manage it in a configuration file (package.json), for example:

"jquery": "^2.2.1",

The caret (^) character tells npm that you are happy for it to install any newer compatible version it finds. Um. Wrong. You should be the one to decide when you want upgrades. Part of the problem is finding out when something needs to be upgraded, and npm fails at that. The second problem is that not every upgrade is a success. In a corporate environment, you don’t upgrade a major package automatically, because it will break stuff and then your whole application is unusable. But you still want the option. You still want to know if there is a package upgrade available. The npm way, though, is to pin a particular version, never mind that you would at least have liked the option to upgrade. The whole concept is flawed.
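For what it’s worth, here is roughly what the caret permits, sketched as code. Real npm uses the semver package for this; the simplified version below ignores prerelease tags and the special rules for 0.x versions:

```typescript
// Does `version` fall inside a caret range like "^2.2.1"?
// Caret means: same major number, and at least the listed minor.patch.
function satisfiesCaret(version: string, range: string): boolean {
  const parse = (v: string) => v.replace(/^\^/, "").split(".").map(Number);
  const [maj, min, pat] = parse(range);
  const [vMaj, vMin, vPat] = parse(version);
  if (vMaj !== maj) return false; // caret never crosses a major version
  if (vMin !== min) return vMin > min; // a higher minor is allowed
  return vPat >= pat; // same minor: need at least the listed patch
}

console.log(satisfiesCaret("2.3.0", "^2.2.1")); // true  - picked up automatically
console.log(satisfiesCaret("3.0.0", "^2.2.1")); // false - major bump is excluded
```

So `"^2.2.1"` will silently pull in 2.3.0 or 2.9.5 on a restore, which is exactly the behaviour being complained about here.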

npm Problem 4: Configuration files and the command line

OK, so we have somehow reverted to using the command line and fiddling with configuration files. It’s all very 1990s. I mean, seriously, who has an ego massive enough to require people to fiddle with JSON configuration files? Is there some hugely nerdy boffin who still believes they are better than everyone else because they can memorise a bunch of command line attributes?

This is the 21st century. I want my people focusing on business logic and producing business value, not working out the correct command they need to type to get some package installed on their machine. Not when a visual tool will provide everything they need to continue with their core function, which is to provide business value.

Those are the 4 major failures, but now for a couple of quirks.

npm The Quirks

Firstly, when I do an npm package restore, it’s often quite difficult to figure out what’s going on or whether it has finished its work. The user interface is still interactive too, and you can right-click and install individual npm packages and click restore, even though a global restore is in progress. Huh?

Secondly, my Dependencies folder is almost permanently set to “Dependencies – not installed”, even though all my packages are installed. What is the point of showing this if the message isn’t helpful? It makes people lose confidence in the tooling.

In our environment, like most corporate environments, introducing new technologies can be quite difficult. It’s a typical catch-22 situation. You can’t introduce a new technology until it’s proven, but on the other hand, you can’t prove it until you’re allowed to introduce it. It’s why so many corporates bypass the architecture teams and build a silo to enable innovation and gain a competitive advantage. It becomes even harder when the tools are problematic.

I was able to get an application up and running within the corporate environment. It was an Angular 2 application running on .NET Core with webpack. Because of my skill level, I could get it going, but to expect others with less experience to fight configuration files, do stuff from the command line, configure the proxy, and install third-party tools just to be able to start their job is ridiculous. “It’s all experience,” I hear you say. Well, no. I don’t buy it. It’s hard enough to move to new technologies without the added complexity of dealing with problems that should not exist.

The result was that after a week of having the team fighting (mostly) npm and all the new technologies, we decided to fail early. The rest of the team was continually struggling with the development infrastructure, and it became a productivity killer. So we have gone back to our old and working development environment. The downside is that there are certain packages that aren’t available on nuget, such as Angular 2. But the upside is that everything else works.

I have to say, I’m disappointed. For all its supposed benefits, the new environment just felt half-baked. The impediment to getting a team running smoothly was just too high. For this to work, npm needs a user interface, and it needs to work automatically through the proxy, much like the far superior experience of nuget. This needs to be fixed for us to be able to move forward; otherwise we will be just as happy, where I am, to stay on the existing tech stack that runs smoothly and virtually without a hitch.

Edit: I have since found out that there is, indeed, a GUI for npm package management. The problem is that it is only available in Node.js applications and not standard ASP.NET applications. What’s also disappointing is that the GUI isn’t really very good. It certainly isn’t up to the standard of nuget; it feels very much like a hack.


GitFlow Cheat Sheet

September 8, 2016

Installing GitFlow on Windows

  1. Install cmder. Google it. Make sure you get the git for windows version.
  2. Download and install in C:\Program Files\Git\bin:

a) getopt.exe from the util-linux-ng package, from the Binaries zip folder found at http://gnuwin32.sourceforge.net/packages/util-linux-ng.htm

b) libintl3.dll from the libintl package, from the Binaries zip folder found at http://gnuwin32.sourceforge.net/packages/libintl.htm

c) libiconv2.dll from the libiconv package, from the Binaries zip folder found at http://gnuwin32.sourceforge.net/packages/libiconv.htm

  3. Start cmder, which is a better command prompt console. Change folder to the one that you want to install gitflow in. It will install a gitflow folder under this.
cd c:\users\Tony

  4. Clone the gitflow repository.

git clone --recursive git://github.com/nvie/gitflow.git

  5. Change to the gitflow folder

cd gitflow

  6. Install gitflow, using the following command:
Contrib\msysgit-install.cmd "c:\Program Files\Git\"

 

Installing GitFlow in a repository

  1. Create your project folder: mkdir c:\demos\flow
  2. Change directory to the project folder: cd c:\demos\flow
  3. Initialise an empty Git repository: git init
  4. Initialise the repository for GitFlow: git flow init

Choose all the defaults.

The prompt should change to show that you are in the c:\demos\flow (develop) branch.

  5. Checkout the master branch
git checkout master
  6. Make sure you have set up a repository on GitHub. On the GitHub repository page, click on the Clone or download button and then click on the Use SSH link in the top right hand corner. Copy the link. Then execute the git remote command to set up the origin in git:
git remote add origin git@github.com:tonywr71/PSGitFlow.git
  7. Now push master to the origin, to establish the GitHub connection
git push -u origin master
  8. Change back to the develop branch
git checkout develop
  9. Now push the develop branch
git push origin develop
  10. If you go back to the repository page, you should now be able to select the two branches from the Branch drop down.

 

Creating a Feature Branch

  1. Make sure you have cloned the repository into a destination directory
git clone git@github.com:tonywr71/PSGitFlow.git .

Note the period (.) which is used to force it to be installed in the current directory, not a child of the current directory.

It will put you in the (master) branch

  2. Change to the develop branch
git checkout develop
  3. Initialise gitflow in the new folder if it hasn’t been done already
git flow init

and select all the defaults

  4. Go into GitHub for this repository and select the Issues tab. We want the new feature to be associated with an issue, so add a new issue for the feature. The issue number and issue subject should be part of the new feature name. The issue subject here would be “Users Can Access Single Entries”, for example.
  5. Add a new feature branch
git flow feature start 2-UsersCanAccessSingleEntries

where 2 is the Issue number and UsersCanAccessSingleEntries is a concatenation of the issue subject. The command prompt will now show the feature branch:

c:\demos\tony (feature/2-UsersCanAccessSingleEntries)

  6. This branch hasn’t been pushed back to the repository yet, so there is no tracking in GitHub. If you make changes to code in this folder, the prompt will be highlighted in red, to show changes pending. If you want to see the pending changes, execute:
git status
  7. To add the files into git
git add .
  8. To commit the repository locally, with a message:
git commit -am "Added code to get single entry"

This will change the command prompt folder back to white and commit the changes locally.

  9. To add the feature back onto the central repository
git flow feature publish 2-UsersCanAccessSingleEntries
  10. If you go into GitHub, you can now select the feature branch from the Branch drop down

 

Reviewing a feature branch on another machine (or in another folder on the same machine)

  1. To see the list of feature sub-commands in gitflow:
git flow feature help
  2. To pull the feature into your local repository and switch to that branch, but without tracking changes:
git flow feature pull 2-UsersCanAccessSingleEntries
  3. To pull the feature into your local repository, switch to that branch and track changes to that feature:
git flow feature track 2-UsersCanAccessSingleEntries
  4. Make your changes in the tracked folder. In Cmder the command prompt will show a red feature folder. Again, you can see the pending changes by executing:
git status
  5. Add the files that have been changed:
git add .
  6. Then commit them to the local repository with a message:
git commit -am "Added exception handling"
  7. Finally, to push them to the central repository:
git push

 

Get the reviewers changes back on the originator’s machine

  1. Check out the feature and pull it.
git pull

 

Finishing the Feature Branch and merging back into the develop branch

  1. The developer, not the reviewer, closes the branch. The developer clicks the Merge pull request button to merge back into the develop branch; the reviewer closes the pull request, but doesn’t finish it. To finish the feature, the developer executes:
git flow feature finish 2-UsersCanAccessSingleEntries
  2. The feature branch is now deleted both locally and remotely, and you will have been switched back to the develop branch.
  3. Other developers that are using this feature will also need to delete their local branch. That is done by executing:
git checkout develop
git branch -d feature/2-UsersCanAccessSingleEntries
  4. To check it has been removed:
git branch

should no longer be showing the feature branch.
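What the finish step does can also be sketched in plain git: merge the feature into develop, then remove the feature branch. A rough, self-contained approximation (throwaway repo, illustrative names):

```shell
# Rough plain-git equivalent of "git flow feature finish".
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -qb develop
git commit -q --allow-empty -m "initial commit"
git checkout -qb feature/2-UsersCanAccessSingleEntries
git commit -q --allow-empty -m "Added code to get single entry"

# finish = merge back into develop, then remove the branch
git checkout -q develop
git merge -q --no-ff -m "Merge feature 2-UsersCanAccessSingleEntries" \
    feature/2-UsersCanAccessSingleEntries
git branch -qd feature/2-UsersCanAccessSingleEntries
git branch                         # only develop remains
```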

 

Creating a Release Branch

  1. This is the point in time where a release is ready.
  2. Once the Release Branch is created, it is passed to QA for testing.
  3. Any bugs that are found on the release branch will need to be fixed on the Release Branch and then merged back into the develop branch, so that any future feature branches will pick up those fixes.
  4. The architect creates the Release Branch by executing:
git flow release start sprint1-release
  5. The command prompt will show the new release branch, but this branch is currently only on the local machine. To publish this release so everyone can access it:
git flow release publish sprint1-release
  6. If you go into GitHub and pull down the Branch drop-down, you will see the new Release Branch.

 

Reviewing a Release Branch

  1. To view and track the release on someone else’s machine, set up a folder on that machine and execute:
git flow release track sprint1-release
  2. If someone makes changes to files in that branch, you can check the changes using:
git status
  3. You can then add any changes to the local repository:
git add .
  4. Then commit the changes to the local repository:
git commit -am "Add logging to exception (example message)"
  5. Then push the change to the remote repository:
git push
  6. The changes are now in the Release Branch, but need to be merged back into the develop branch:
git checkout develop
  7. Pull the develop branch:
git pull
  8. Then merge it with the release branch:
git merge release/sprint1-release
  9. That merge is local, so we now need to push the changes back to the develop branch:
git push

 

Cleaning up the Release Branch and pushing to the master branch

  1. This job is done by the Architect, who will change to the Release Branch:
git checkout release/sprint1-release
  2. Do a pull to make sure the local machine is up to date:
git pull
  3. Now finish the release:
git flow release finish sprint1-release

This will merge the release back into the master branch and open an emacs text editor to allow us to enter a more substantial release note. Save and exit the editor. The release will be merged into the master branch and tagged with the release name. The tag is also back-merged into the develop branch. The release branch is deleted both locally and remotely, and you are switched back to the develop branch.

  4. At this time, the develop branch is still checked out, so we need to change back to the master branch to check it all in:
git checkout master

When executing this command, git will tell you how many commits the master and develop branches differ by.

  5. We need to push all these changes, including all the tags, back to the remote repository:
git push --tags
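The release-finish step bundles several plain git operations: merge the release into master, tag it, back-merge the tag into develop, and delete the release branch. A rough, self-contained sketch of that sequence (throwaway repo, illustrative commit messages):

```shell
# Rough plain-git equivalent of "git flow release finish sprint1-release".
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -qb master
git commit -q --allow-empty -m "initial release"
git checkout -qb develop
git commit -q --allow-empty -m "feature work"
git checkout -qb release/sprint1-release develop
git commit -q --allow-empty -m "release fixes"

# finish the release:
git checkout -q master
git merge -q --no-ff -m "Merge release/sprint1-release" release/sprint1-release
git tag -a sprint1-release -m "Sprint 1 release notes"   # the editor step, via -m
git checkout -q develop
git merge -q --no-ff -m "Back-merge sprint1-release" sprint1-release
git branch -qd release/sprint1-release
git tag                                   # shows sprint1-release
```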

 

Creating a HotFix

  1. A Hot Fix is an emergency fix to released code. The Hot Fix is created on the master (production) branch. After making the fix on the Hot Fix Branch, it is then merged back into the master and develop branches.
  2. On the machine where the fix needs to occur, start the hotfix:
git flow hotfix start hotfix1

Note that hotfix1 should match an issue in GitHub

  3. You will now be in the hotfix branch. Make your changes in this branch; you can see the changes by typing:
git status
  4. Once the hotfix has been finished, the changes need to be committed:
git commit -am "Hot fix changes"
  5. The developer can then finish the hotfix by executing:
git flow hotfix finish hotfix1

This will bring up the emacs text editor so you can add a hotfix release note. Save and close the editor, and it will merge the hotfix into the master branch, tag it as hotfix1, back-merge that tag into the develop branch, delete the local hotfix branch, and switch you to the develop branch.

  6. To see how many commits are outstanding on the develop branch, type:
git status
  7. You can also switch to the master branch and see where that is:
git checkout master
git status
  8. To push changes back to both remote branches, execute:
git push --all origin
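As with releases, the hotfix flow is a short sequence of plain git operations: branch from master, fix, then merge the fix into both master and develop. A rough, self-contained sketch (throwaway repo, illustrative messages):

```shell
# Rough plain-git equivalent of the hotfix flow above.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -qb master
git commit -q --allow-empty -m "release 1.0"
git checkout -qb develop
git commit -q --allow-empty -m "ongoing work"

# "git flow hotfix start hotfix1" is roughly:
git checkout -qb hotfix/hotfix1 master
git commit -q --allow-empty -m "Hot fix changes"

# "git flow hotfix finish hotfix1" is roughly:
git checkout -q master
git merge -q --no-ff -m "Merge hotfix1" hotfix/hotfix1
git tag -a hotfix1 -m "Hotfix release note"
git checkout -q develop
git merge -q --no-ff -m "Back-merge hotfix1" hotfix1
git branch -qd hotfix/hotfix1
```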

What are Microservices?

August 14, 2016

Microservices sounds like a pretty slick name, but in fact, it isn’t all that complicated. Microservices are, broadly speaking, all about providing APIs and collaborating between those APIs.

Microservices are a specialisation, a refinement and an evolution of Service-Oriented Architecture (SOA). It is a specific approach for how to do Service-Oriented Architecture better. It has arisen, not due to any academic theories, but from an analysis of lots of real world projects, and takes all the best approaches of SOA learnt from that experience.

The reason you don’t hear much about Service-Oriented Architecture any more is that it was actually a big embarrassing failure. For all its promises, very little actually materialised. There was a lack of consensus on how to do SOA well, there was a lack of guidance on service granularity, SOA doesn’t talk about how to ensure services don’t become overly coupled, and there were too many attempts by vendors to lock you in.

Microservices provide architectural guidance to ensure better choices are made when divvying up an application for better maintenance, flexibility, scaling, resilience and reuse. The idea is to break up large all-encompassing monolithic applications into a whole lot of little services. The smaller the services are, the more the benefits of independence are maximised. It also allows functionality to be consumed in different ways for different purposes. The downside is that extra complexity emerges from having more and more moving parts, so we have to become a lot better at handling those complexities.

Microservices align well with existing software development methodologies, technologies and processes. By splitting an application into a bunch of services, and forcing them to communicate with each other only via network calls, each service can be treated as its own bounded context, a concept from Domain-Driven Design. Each service also needs to have a single responsibility, to be a completely separate entity, to be able to change independently of the others, and to be deployable by itself, without requiring its consumers to change.

By managing a bunch of separate services, each component can be scaled separately. Too much demand for one service? Spin up another process of that service. They can also be run on multiple separate machines. The system is also much more resilient, as the failure of a single service does not bring the entire system down.

And you are not constrained by the technologies they run under either. Because there are lots of little services, they work well in the cloud, where the architectural approach correlates almost immediately to a cost saving: you reduce compute time for elements that don’t require many resources and increase compute time for the bottlenecks in the application.

Large organisations may have a large number of microservices, and because each microservice is entirely independent, they can be coded in isolation as well. The microservice approach allows us to divvy up the services so that we can hit the sweet spot between team size and productivity. A good starting point for how big a microservice should be is around 2 weeks of work for a team of around 8, which fits really well with typical Agile sprint sizes. You can also have the entire team for a single microservice collocated, while another team works on a complementary microservice collocated elsewhere.

There are also other benefits from using the microservice approach to application construction. Teams that follow that approach are actually quite comfortable with completely rewriting services when required, and choosing alternative technologies with the ability to make more choices on how to solve problems. They also have no problem replacing services that they no longer need. When the code base is only a few hundred lines of code, it becomes difficult to become emotionally attached to it, and the cost of replacing it is pretty small.


Angular 1 is dead. Where to now?

August 7, 2016

Angular 1 has a massive market. It is by far the most widely used JavaScript framework available. It is a very opinionated framework, it has declarative power, and developers tend to lean towards the MV* patterns, which have a whole lot of benefits and with which they are familiar. So Angular itself is not going away any time soon.

The biggest problem with Angular 1 is that it is no longer being actively maintained. The main reasons for this are componentisation, performance and an inability to play well with search engines (SEO), which, incidentally, are the main factors that have made its main competitor, React, so popular. There is also quite a significant learning curve with Angular.

Componentisation enables you to build custom component trees quite easily, and the resulting code is usually much more maintainable. Performance was always a killer in Angular 1 due to watches and the digest cycle, which was basically a system for monitoring every single changing item on your page.

There was a practical limit of around 2000 watches, and as soon as you went over that, IE pages simply ground to a halt. Finally, having a whole lot of script on the page did not make Search Engine Optimisation easy at all. Search engines don’t know what to look at with a single page application. They find it hard to walk the tree of links between your pages, because they aren’t seeing what you are seeing; they need to interpret the script being executed behind the scenes.

So the Angular team announced a complete rewrite of Angular 1, because they found that the structural problems with Angular 1 could not be resolved via a simple upgrade. They gave their own existing product a resounding fail. In doing so, they signed its death warrant.

What do you select then, if you have a whole lot of experience in Angular 1, and need to choose a JavaScript framework for your next project?

Well, after analysing the market, reading a whole stack of analysis and reviews, and having a play around with the technologies, I can say that there’s not a lot in it. Because Angular 2 is so different to Angular 1, you don’t need to automatically choose Angular 2 going forward. That said, because of the strength and size of the Angular 1 market, I don’t see Angular 2 going away any time soon. It may also be an easier sell to management to go to Angular 2, especially given how much was previously invested in Angular 1 training.

Steve Sanderson, from Microsoft, produced the following table, showing the capabilities of a few of the frameworks. I really think the server-side pre-rendering is important, especially when one of the major complaints with Angular 1 was the lack of deep-linking and SEO support.

                              Angular 2     Knockout      React         React + Redux
Language                      TypeScript    TypeScript    TypeScript    TypeScript
Build/loader [1]              Webpack       Webpack       Webpack       Webpack
Client-side navigation        Yes           Yes           Yes           Yes
Dev middleware [2]            Yes           Yes           Yes           Yes
Hot module replacement [3]    Yes, limited  Yes, limited  Yes, awesome  Yes, awesome
Server-side prerendering [4]  Yes           No            No            Yes
Lazy-loading [5]              No            Yes           No            No
Efficient prod builds [6]     Yes           Yes           Yes           Yes

There is one framework not shown here that has gained some traction in recent times, and that is Aurelia, which has recently been released (RTM). Aurelia was created by the developer who produced Durandal. He later joined the Angular 2 team and had some input into it, but left the team because he disagreed with some of their decisions. Some of those decisions were probably valid, while others may not have been; such are the egos of developers. Aurelia is supposed to have a more simplified syntax than Angular, but doesn’t currently have the market penetration.

I like to keep things simple. I like to look at what has solid traction, and try to limit my choices based on what the technical capabilities are, maintainability, performance, ease of learning it and popularity. This tells me that the two frameworks with the most promise are actually Angular 2 and React+Redux.

Although Angular 2 has only reached RC4, I still consider it a viable choice today, as, remember, by the time your app is released it will most likely have gone to RTM. There are actually a number of significant applications that have been built on Angular 2 release candidates. The strong tooling and support when Angular 2 is finally released is also a consideration: whatever your choice is, you really will want longevity for your code base, and you certainly don’t want to be embarrassed by making a fringe choice whose potential never materialises.

Alternatively, you might choose to go with React+Redux, which is also available with ASP.NET Core 1.0 and Visual Studio 2015. React is supported by Facebook, and is part of a more advanced ecosystem. Facebook are also innovating faster to answer any architectural issues related to component-based frameworks. Each framework tries to steal the best bits from the others, and both React and Angular have been doing this.

If it were pure performance I was after, I think I would have to go with React. React is not an Angular killer, however, mainly because of the size of the Angular base and the structure Angular provides. React is probably a lot simpler to learn, although Angular 2 has improved on that front. It really comes down to how structured you need your code to be versus how much performance you need to get out of your web servers. With massive cloud-based sites, extra web servers and serving capacity cost money, so I’d say they’d probably be better off with React.

 

Edit: I just found another table that is worth linking to, by Shannon Duncan. It has more attributes compared, which make it much more interesting:

[Image: angular2-vs-react comparison table]

That article may be found here: Angular2 vs React


Installing Angular 2 to run with ASP.NET Core 1.0 in Visual Studio 2015

August 7, 2016

I initially had a lot of trouble even finding references to people using Angular 2 in Visual Studio 2015. It seemed that no matter what I fiddled with, there were failures at every turn, and it ended up being quite tricky to get working. In the end I found that the best way to get going in Visual Studio 2015 was to use yeoman to create your base, and then work backwards to figure out where I went wrong.

Yeoman is a scaffolding tool. Basically, smart people put together generators with technologies that they think belong together, and publish them to yeoman. You go to yeoman.io and you can look up the generators that others have put together.

I initially tried via the yeoman web site, clicked on Discovering Generators, then searched for Angular2, and found the aspnetcore-angular2 package. It was ok, but I had trouble getting it working with ES5.

I recently went to NDC Sydney, and saw a session by Steve Sanderson. He has put together a great yeoman package that works with ASP.NET Core 1.0 in Visual Studio 2015. The package is called generator-aspnetcore-spa, and installation details are available from his web site: Steve Sanderson’s blog. It has been updated to RC4, and the TypeScript target is set to es5, so it will run on most popular modern browsers.

The beauty of Steve Sanderson’s package is that it also supports React, in case you want to give that a try.


ASP.NET Core 1.0 – How to install gulp

July 31, 2016

In a previous blog post, I installed npm, otherwise known as the node package manager. I added an npm configuration file under the wwwroot folder called package.json. There are two problems with this. Firstly, Visual Studio Dependencies haven’t been designed for that scenario, which means adding npm packages won’t update the Dependencies folder at the root level, so you lose a fair bit of control over the packages installed. Secondly, the nature of npm is that there could be a whole bunch of additional files added to the package that could be unrelated to the runtime needs of the package. Having these files added could potentially create a risk.

Now, to move towards a better practice, I have decided to go back to putting the packages in the root folder. I right-clicked on the project, selected Add New Item, chose npm Configuration File, and clicked Add. This adds package.json back into the root folder. I then copied the contents of the original package.json I had under wwwroot into the package.json file in the root folder. After this, I deleted the package.json file from the wwwroot folder and deleted the entire node_modules folder. Why did I do this? Because that is what the state of the folder would have been under the default scenario of installing npm packages at the top level.

Now, given that any static files that are served to the web site need to reside under wwwroot, I had to come up with a way to relocate the contents of node_modules under wwwroot that didn’t involve putting the package.json file there.

While the most common way to do this was simply to add a static file provider to the Startup.cs file, in the configure method, as the following code demonstrates

 app.UseStaticFiles(new StaticFileOptions
      {
           FileProvider = new PhysicalFileProvider(Path.Combine(env.ContentRootPath, "node_modules")),
           RequestPath = "/node_modules"
      });

I decided that eventually I will want a lot more control over this.

Well, the way of the future is to use a tool like gulp, which enables you to run tasks via the Task Runner Explorer in Visual Studio 2015.

Now, I have to admit that I have been attempting to get Angular 2 running in ASP.NET Core 1.0, with some success. That will be the subject of a future post. But for now, I have added gulp into the npm package.json file. That now looks like this:

{
  "version": "1.0.0",
  "name": "myfirstaspnetcoreapp",
  "private": true,
  "devDependencies": {},
  "dependencies": {
    "@angular/common": "2.0.0-rc.4",
    "@angular/compiler": "2.0.0-rc.4",
    "@angular/core": "2.0.0-rc.4",
    "@angular/http": "2.0.0-rc.4",
    "@angular/platform-browser": "2.0.0-rc.4",
    "@angular/platform-browser-dynamic": "2.0.0-rc.4",
    "@angular/router": "3.0.0-beta.2",
    "bootstrap": "^3.3.7",
    "core-js": "^2.4.0",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "systemjs": "0.19.27",
    "zone.js": "^0.6.12",
    "gulp": "^3.9.1",
    "rimraf": "^2.5.4"
  }
}

At the bottom of this file are references to gulp and rimraf. Rimraf is the package that does the equivalent of a Unix rm -rf. Gulp is needed to support gulp tasks in the Task Runner Explorer.

Next I added the gulp configuration file to the top level of my project. Right click on MyFirstAspNetCoreApp and click Add New Item, then select Gulp Configuration File. The Gulp Configuration File is a javascript file called gulpfile.js. Keep that name and click Add.

Open up gulpfile.js, and paste in the following code:

var gulp = require("gulp"),
    rimraf = require("rimraf");

var paths = {
  webroot: "./wwwroot/",
  node_modules: "./node_modules/"
};

paths.libDest = paths.webroot + "node_modules/";

gulp.task("clean:node_modules", function (cb) {
  rimraf(paths.libDest, cb);
});

gulp.task("copy:node_modules", ["clean:node_modules"], function () {

  var node_modules = gulp.src(paths.node_modules + "/**")
                    .pipe(gulp.dest(paths.libDest));

  return node_modules;
});

What this code does is copy the entire nested contents of node_modules in the root folder to node_modules under wwwroot. Now, I wouldn’t ordinarily finish here, as you really should be more specific about the content you’re actually copying. But to keep it simple, I have settled on this for now.

Next, open up the Task Runner Explorer. If you can’t see it at the bottom of your screen, it is found under View > Other Windows > Task Runner Explorer.

After building my app, the task runner explorer looks like this for me.

[Image: Task Runner Explorer showing the gulpfile.js tasks]

Now I can right-click on the copy:node_modules task and click Run. If you look in gulpfile.js, there is a dependency on clean:node_modules, so that will run the clean task as well. You shouldn’t need to run this every time you compile the application; you only need to run it when adding or removing npm packages, as nothing changes in the meantime.

Now, when you go to wwwroot and Show All Files, you should see that the node_modules folder has been copied.

The files within node_modules are now available to be added into your html.