The failure of npm for Visual Studio in the Enterprise

January 15, 2017

Modern application development is hard. There are simply so many things you have to think about when you are developing, and over time, more and more features are created and many of those need to be integrated into your applications.

This is not without cost. Early in my career, we used to talk about DLL Hell: the problem of having so many versions of a DLL installed in your environment that you never knew which one an application was actually using.

The modern version of this I now call Package Hell. When I open a modern enterprise Visual Studio application, such as an Angular 2 application, I now have a minimum of around 40 packages installed under the Dependencies/npm branch, just to get a simple application up and running, and many of those packages have dependencies too. And that’s only one of the package managers. Other package managers available include nuget and Bower.

What is supposed to happen these days is that anything that is Microsoft .NET related can be found on nuget, while anything that is javascript related will be in npm. npm is the Node package manager. It is essentially an online repository for javascript packages, not just in the Microsoft world, but for any environment that wants access to those packages over the web. It enables developers to find, share, and reuse packages of code from hundreds of thousands of developers, and assemble them in powerful new ways. Microsoft didn’t invent npm. Microsoft decided it was the tool everyone was using, that it did the job, and so decided to get onboard. They needed to do this to keep up, to stay competitive in the development space.
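For anyone who hasn’t used it, adding a package from the command line looks something like this (jquery is just an example package name; the --save flag records it in package.json so it can be restored on other machines):

npm install jquery --save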

To easily explain the problems with npm, I will compare this to nuget. Why? Because nuget works! Nuget Package Manager is simple, it is visual, it keeps you informed, it’s easy to find packages and keep them up to date, and it’s easy to change versions of packages if you need to. You don’t need to focus on the tooling – you can install packages and focus instead on integrating with your business logic and providing business value.

npm Problem 1: The proxy.

If you want npm to work in an enterprise environment, you will most likely have to go through a proxy server.

With nuget, you open up the package manager screen, type in package names, and it gives you a list of candidate packages. It automatically handles the proxy for you. You don’t have to configure it to work. You don’t have to go to everyone’s machine, modify a configuration file, just to ensure that their login has the credentials to authenticate through the proxy to get to the nuget repository. It is automatic.

npm, in this regard, is a complete failure. In the environment I was in, we couldn’t even get that configuration right – even with all the correct settings, it still failed. The workaround is to install a third party tool called cntlm, a local service that opens a port on your machine and automatically authenticates through the corporate proxy. All you then have to do is point npm at that port. “Install what?” I hear you say. Yep, exactly. That’s a major fail in a large environment.

Note: you could also use Fiddler, but it’s the same issue. Developers shouldn’t have to spend time configuring or using third party tools for something that should just work out of the box. It works for nuget. It needs to work for npm.
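For what it’s worth, pointing npm at a local authenticating proxy like cntlm comes down to two settings. This is a sketch only – it assumes cntlm is listening on its default port of 3128, so adjust the port to whatever you configured:

npm config set proxy http://localhost:3128
npm config set https-proxy http://localhost:3128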

npm Problem 2: Finding new packages.

When using nuget, you type a keyword into the search bar, and you can see a list of packages come up. Most of the time, this is because you were googling and found a reference to a package that might solve a problem, or you might have come across some cool new feature and want to try it out. In the process of doing that, you might also discover other packages that do the job, because you can easily scroll down the list and see what else is on offer. Nuget makes discovery of new interesting packages easy.

But not with npm. Sure, you might find out about a package by googling, but the exploration in npm just isn’t there. In Visual Studio, it involves opening up the package.json file, typing a double quote, and then you get your list of choices in a nine-item scrollable tooltip. It’s rubbish. Not to mention that some packages aren’t even discoverable. That’s right, you can’t actually find any package in the registry that starts with an “@” symbol, such as @angular, because there are special rules for scoped packages.
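For completeness, the closest command-line equivalent is npm search (bootstrap below is just an example search term), which is hardly the browsing experience nuget gives you, and at the time of writing the scoped @angular packages didn’t show up in its results either:

npm search bootstrap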

npm Problem 3: Version control.

With nuget, when you open up the package manager, it looks up the list of installed packages and compares their version number with what’s available on the net. If one of the packages has been upgraded, it shows you in an Updates tab. You can then choose to upgrade if you want. It’s entirely up to you. But at least it has that feature.

With npm, on the other hand, you might have 40+ packages, but there’s nowhere near as much control. Compared to nuget, it really sucks. To tell npm that you want to continually upgrade, you have to manage it in a configuration file, for example:

"jquery": "^2.2.1",

The hat (^) character tells npm that you are happy for it to install any newer minor or patch version it finds within that major version. Um. Wrong. You should be the one to decide when you want upgrades. Part of the problem is finding out when something needs to be upgraded, and npm fails at that. The second problem is that not every upgrade is a success. In a corporate environment, you don’t upgrade a major package automatically, because it will break stuff and then your whole application is unusable. But you still want the option. You still want to know if there is a package upgrade available, so the npm way is to pin a particular version. Never mind that you would have at least liked the option to upgrade. The whole concept is flawed.
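For reference, the three version forms you will commonly see in package.json behave like this (the version numbers are illustrative only):

"jquery": "2.2.1" installs exactly 2.2.1 and nothing else.
"jquery": "~2.2.1" allows patch updates only (any 2.2.x).
"jquery": "^2.2.1" allows any minor or patch update within major version 2 (2.x, but never 3.0).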

npm Problem 4: Configuration files and the command line.

Ok, so we have somehow reverted back to using the command line or fiddling with configuration files. It’s all very 1990s. I mean seriously, who has a big enough ego to require people to fiddle with json configuration files? Is there some hugely nerdy boffin who still believes they are better than everyone else because they can memorise a bunch of command line arguments?

This is the 21st century. I want my people focusing on business logic and producing business value, not working out the correct command they need to type to get some package installed on their machine. Not when a visual tool will provide everything they need to continue with their core function, which is to provide business value.

Those are the 4 major failures, but now for a couple of quirks.

npm: The Quirks

Firstly, when I do an npm Restore Packages, it’s often quite difficult to figure out what’s going on or whether it has finished its work. The user interface is still interactive too, and you can right-click and restore individual npm packages even though a global restore is in progress. Huh?

Secondly, my Dependencies folder is almost permanently set to “Dependencies – not installed”, even though all my packages are installed. What is the point of showing this if the message isn’t accurate? It makes people lose confidence in the tooling.

In our environment, like most corporate environments, introducing new technologies can be quite difficult. It’s a typical catch-22 situation. You can’t introduce a new technology until it’s proven, but on the other hand, you can’t prove it until you’re allowed to introduce it. It’s why so many corporates bypass the architecture teams and build a silo where innovation can happen, to gain a competitive advantage. It becomes even harder when the tools are problematic.

I was able to get an application up and running within the corporate environment. It was an Angular 2 application running on .NET Core with webpack. Because of my skill level, I could get it going, but to expect others with less experience to fight configuration files, do stuff from the command line, configure the proxy and install third party tools just to be able to start their job is ridiculous. It’s all experience, I hear you say. Well, no. I don’t buy it. It’s hard enough to move to new technologies without the added complexity of dealing with problems that should not exist.

The result was that after a week of having the team fighting (mostly) npm and all the new technologies, we decided to fail early. The rest of the team was continually struggling with the development infrastructure and it became a productivity killer. So we have gone back to our old and working development environment. The downside is that there are certain packages that aren’t available on nuget, such as Angular 2. But the upside is that everything else works.

I have to say, I’m disappointed. For all its supposed benefits, the new environment just felt half-baked. The impediment to getting a team running smoothly was just too high. For this to work, npm needs a user interface, and it needs to work automatically through the proxy, much like the far superior experience of nuget. This needs to be fixed for us to be able to move forward; otherwise teams like mine will be just as happy to stay on an existing tech stack that runs smoothly and virtually without a hitch.

Edit: I have since found out that there is, indeed, a GUI for npm package management. The problem is that it is only available in Node.js projects and not standard ASP.NET projects. What’s also disappointing is that the GUI isn’t really very good. It certainly isn’t up to the standard of nuget – it feels very much like a hack.


GitFlow Cheat Sheet

September 8, 2016

Installing GitFlow on Windows

  1. Install cmder. Google it. Make sure you get the git for windows version.
  2. Download and install the following into C:\Program Files\Git\bin:

a) getopt.exe from util-linux-ng package from the Binaries zip folder found at http://gnuwin32.sourceforge.net/packages/util-linux-ng.htm

b) libintl3.dll from libintl package from the Binaries zip folder found at http://gnuwin32.sourceforge.net/packages/libintl.htm

c) libiconv2.dll from libiconv2 package from the Binaries zip folder found at http://gnuwin32.sourceforge.net/packages/libiconv.htm

  3. Start cmder, which is a better command prompt console. Change to the folder that you want to install gitflow in. It will install a gitflow folder under this.
cd c:\users\Tony

  4. Clone the gitflow repository.

git clone --recursive git://github.com/nvie/gitflow.git

  5. Change to the gitflow folder

cd gitflow
  6. Install gitflow, using the following command:
Contrib\msysgit-install.cmd "c:\Program Files\Git\"

 

Installing GitFlow in a repository

  1. Create your project folder: mkdir c:\demos\flow
  2. Change directory to the project folder: cd c:\demos\flow
  3. Initialise an empty Git repository: git init
  4. Initialise the repository for GitFlow: git flow init

Choose all the defaults.

The prompt should change to show that you are in the c:\demos\flow (develop) branch.

  5. Check out the master branch
git checkout master
  6. Make sure you have set up a repository on GitHub. On the github repository page, click on the Clone or download button and then click on the Use SSH link in the top right hand corner. Copy the link. Then execute the git remote command to set up the origin in git:
git remote add origin git@github.com:tonywr71/PSGitFlow.git
  7. Now push the master branch to origin, to establish the github connection
git push -u origin master
  8. Change back to the develop branch
git checkout develop
  9. Now push the develop branch
git push origin develop
  10. If you go back to the repository page, you should now be able to select the two branches from the Branch drop down.

 

Creating a Feature Branch

  1. Make sure you have cloned the repository into a destination directory
git clone git@github.com:tonywr71/PSGitFlow.git .

Note the period (.) which is used to force it to be installed in the current directory, not a child of the current directory.

It will put you in the (master) branch

  2. Change to the develop branch
git checkout develop
  3. Initialise gitflow in the new folder if it hasn’t been done already
git flow init

and select all the defaults

  4. Go into github for this repository and select the Issue tab. We want the new feature to be associated with the issue. So add a new issue for the feature. The Issue number and issue subject should be part of the new feature name. The Issue Subject here would be “Users Can Access Single Entries” for example.
  5. Add a new feature branch
git flow feature start 2-UsersCanAccessSingleEntries

where 2 is the Issue number and UsersCanAccessSingleEntries is a concatenation of the issue subject. The command prompt will now show the feature branch:

c:\demos\tony (feature/2-UsersCanAccessSingleEntries)

  6. This branch hasn’t been pushed back into the repository yet, so there is no tracking in github. If you make changes to code in this folder, the prompt will now be highlighted in red, to show changes pending. If you want to see the pending changes, execute:
git status
  7. To add the files into git:
git add .
  8. To commit the repository locally, with a message:
git commit -am "Added code to get single entry"

This will change the command prompt folder back to white and commit the changes locally.

  9. To add the feature back onto the central repository:
git flow feature publish 2-UsersCanAccessSingleEntries
  10. If you go into github, you can now select the feature branch from the Branch drop down

 

Reviewing a feature branch on another machine (or in another folder on the same machine)

  1. To see the list of feature sub-commands in gitflow:
git flow feature help
  2. To pull the feature into your local repository and switch into that branch, but without tracking changes:
git flow feature pull 2-UsersCanAccessSingleEntries
  3. To pull the feature into your local repository, switch into that branch and track changes to that feature:
git flow feature track 2-UsersCanAccessSingleEntries
  4. Make your changes in the tracked folder. In Cmder the command prompt will show a red feature folder. Again, you can see the pending changes by executing:
git status
  5. Add the files that have been changed:
git add .
  6. Then commit them to the local repository with a message:
git commit -am "Added exception handling"
  7. Finally, to push them to the central repository:
git push

 

Get the reviewer’s changes back on the originator’s machine

  1. Check out the feature and pull it.
git pull

 

Finishing the Feature Branch and merging back into the develop branch

  1. The developer closes the branch, not the reviewer. The developer would click the Merge pull request button to merge back with the develop branch. The reviewer closes the pull request, but doesn’t finish it. To finish the feature, the developer executes:
git flow feature finish 2-UsersCanAccessSingleEntries
  2. The feature branch is now deleted both locally and remotely, and you will have been switched back to the develop branch.
  3. Other developers that are using this feature will also need to delete their local branch. That is done by executing:
git checkout develop
git branch -d 2-UsersCanAccessSingleEntries
  4. To check it has been removed:
git branch

should no longer be showing the feature branch.

 

Creating a Release Branch

  1. This is the point in time where a release is ready.
  2. Once the Release Branch is created, it is passed to QA for testing.
  3. Any bugs that are found on the release branch will need to be fixed on the Release Branch and then merged back into the develop branch, so that any future feature branches will pick up those fixes.
  4. The architect creates the Release Branch by executing:
git flow release start sprint1-release
  5. The command prompt will show the new release folder, but this branch is currently only on the local machine. To publish this release so everyone can access it:
git flow release publish sprint1-release
  6. If you go into github and pull down the Branch drop down, you will see the new Release Branch.

 

Reviewing a Release Branch

  1. To view and track the release on someone else’s machine, set up a folder on that machine and execute:
git flow release track sprint1-release
  2. If someone makes changes to files in that branch, you can check the changes using
git status
  3. You can then add any changes to the local repository
git add .
  4. Then commit the changes to the local repository
git commit -am "Add logging to exception (example message)"
  5. Then push the change to the remote repository
git push
  6. The changes are now in the Release Branch, but need to be merged back to the develop branch.
git checkout develop
  7. Pull the develop branch
git pull
  8. Then merge it with the release branch
git merge release/sprint1-release
  9. That merge is local, so we now need to push changes back to the develop branch
git push

 

Cleaning up the Release Branch and pushing to the master branch

  1. This job is done by the Architect, who will change to the Release Branch
git checkout release/sprint1-release
  2. Do a pull to make sure the local machine is up-to-date
git pull
  3. Now finish the release
git flow release finish sprint1-release

This will merge our release back into the master branch and open up an emacs text editor to allow us to enter a more substantial release note. Save and exit the editor. The release will be merged into the master branch and tagged with the release name. The tag is also back-merged into the develop branch. The release branch is then deleted both locally and remotely, and you are switched back to the develop branch.

  4. At this time, the develop branch is still checked out, so we need to change back to the master branch to check it all in.
git checkout master

When executing this command, it will tell you how many commits of difference there are between the master and the develop branches.

  5. We need to push all these changes, including all the tags, back to the remote repository.
git push origin master --tags

 

Creating a HotFix

  1. A Hot Fix is an emergency fix to released code. The Hot Fix is created on the master (production) branch. After making the fix on the Hot Fix Branch, it is then merged back into the master and develop branches.
  2. On the machine where the fix needs to occur, start the hotfix
git flow hotfix start hotfix1

Note that hotfix1 should match an issue in GitHub

  3. You will now be in the hotfix branch. Make changes to this branch and you can see the changes to the branch by typing
git status
  4. Once the hotfix has been finished, the changes need to be committed:
git commit -am "Hot fix changes"
  5. The developer can then finish the hotfix by executing
git flow hotfix finish hotfix1

This will bring up the emacs text editor so you can add a hotfix release note. Then save and close the editor, and it will merge the hotfix into the master, tag it as hotfix1, back-merge that tag into the develop branch, delete the local hotfix1 branch, and switch you to the develop branch.

  6. To see how many commits are outstanding on the develop branch, type
git status
  7. You can also switch to the master branch and see where that is
git checkout master
git status
  8. To push changes back to both remote branches, execute:
git push --all origin

What are Microservices?

August 14, 2016

Microservices sounds like a pretty slick name, but in fact, it isn’t all that complicated. Microservices are, broadly speaking, all about providing APIs and collaborating between those APIs.

Microservices are a specialisation, a refinement and an evolution of Service-Oriented Architecture (SOA). It is a specific approach for how to do Service-Oriented Architecture better. It has arisen, not due to any academic theories, but from an analysis of lots of real world projects, and takes all the best approaches of SOA learnt from that experience.

The reason you don’t hear much about Service-Oriented Architecture any more is that it was actually a big embarrassing failure. For all its promises, very little actually materialised. There was a lack of consensus on how to do SOA well, there was a lack of guidance on service granularity, SOA doesn’t talk about how to ensure services don’t become overly coupled, and there were too many attempts by vendors to lock you in.

The microservices approach provides architectural guidance to ensure better choices are made when divvying up an application, for better maintenance, flexibility, scaling, resilience and reuse. The idea is to break up large all-encompassing monolithic applications into a whole lot of little services. The smaller the services are, the more the benefits around independence are maximised. It also allows functionality to be consumed in different ways for different purposes. The downside is that extra complexity emerges from having more and more moving parts, so we have to become a lot better at handling those complexities.

Microservices align well with existing software development methodologies, technologies and processes. By splitting an application into a bunch of services, and forcing them to communicate with each other only via network calls, each service can be treated like its own bounded context, a concept from Domain Driven Design. Each service also needs to have a single responsibility, to be a completely separate entity, to be able to change independently of the others, and to be deployable on its own, without requiring its consumers to change.

By managing a bunch of separate services, each component can be scaled separately. Too much demand for one service? Spin up another process of that service. They can also be run on multiple separate machines. The system is also much more resilient, as the failure of a single service does not bring the entire system down.

And you are not constrained by the technologies they run under either. Because there are lots of little services, they work well in the cloud, where the architectural approach translates almost immediately into cost savings: you reduce compute for the elements that don’t require many resources and increase compute for the bottlenecks in the application.

Large organisations may have a large number of microservices, and because each microservice is entirely independent, they can be coded in isolation as well. The microservice approach allows us to divvy up the services so that we can hit the sweet spot between team size and productivity. A good starting point for how big a microservice should be is around 2 weeks of work for a team of around 8, which fits really well with a typical Agile sprint as well. You can also have the entire team for a single microservice collocated, while another team works on a complementary microservice collocated elsewhere.

There are also other benefits from using the microservice approach to application construction. Teams that follow that approach are actually quite comfortable with completely rewriting services when required, and choosing alternative technologies with the ability to make more choices on how to solve problems. They also have no problem replacing services that they no longer need. When the code base is only a few hundred lines of code, it becomes difficult to become emotionally attached to it, and the cost of replacing it is pretty small.


Angular 1 is dead. Where to now?

August 7, 2016

Angular 1 has a massive market. It is by far the most widely used JavaScript framework available. It is a very opinionated framework, it has declarative power, and developers tend to lean towards the MV* patterns, which have a whole lot of benefits and with which they are familiar. So Angular itself is not going away any time soon.

The biggest problem with Angular 1 is that it is no longer being actively maintained. The main reasons for this are componentisation, performance and an inability to play well with search engines (SEO), which, incidentally, are the main factors that have made its main competitor, React, so popular. There is also quite a significant learning curve with Angular.

Componentisation enables you to build custom component trees quite easily, and the resulting code is usually much more maintainable. Performance was always a killer in Angular 1 due to watches and the digest cycle, which was basically a system for monitoring every single changing item on your page.
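For anyone who hasn’t worked with Angular 1, a watch is just a registered expression that the framework re-evaluates on every digest cycle. The following is a minimal illustration only, not code from any real application – the module name “app” is assumed:

// An Angular 1 controller that registers an explicit watch. Every {{ binding }}
// in a template adds a watcher too, which is how pages creep up to thousands of watches.
angular.module("app").controller("DemoCtrl", function ($scope) {
    $scope.items = [];

    // Re-evaluated on every digest cycle; the callback fires when the value changes.
    $scope.$watch("items.length", function (newValue, oldValue) {
        console.log("item count changed from " + oldValue + " to " + newValue);
    });
});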

There was a practical limit of around 2000 watches, and as soon as you went over that, IE pages simply ground to a halt. Finally, having a whole lot of script on the page did not make Search Engine Optimisation easy at all. Search engines don’t know what to look at with a single page application. They find it hard to walk the tree of links between your pages, because they aren’t seeing what you are seeing; they would need to interpret the script being executed behind the scenes.

So the Angular team announced a complete rewrite of Angular 1, because they found that the structural problems with Angular 1 could not be resolved via a simple upgrade. They gave their own existing product a resounding fail. In doing so, they signed its death warrant.

What do you select then, if you have a whole lot of experience in Angular 1, and need to choose a JavaScript framework for your next project?

Well, after analysing the market, reading a whole stack of analysis and reviews, and having a play around with the technologies, I can say that there’s not a lot in it. Because Angular 2 is so different to Angular 1, you don’t need to automatically choose Angular 2 going forward. That said, because of the strength and size of the Angular 1 market, I don’t see Angular 2 going away any time soon. It may also be an easier sell to management to go to Angular 2, especially given how much was previously invested in Angular 1 training.

Steve Sanderson, from Microsoft, produced the following table, showing the capabilities of a few of the frameworks. I really think the server side pre-rendering is important, especially when one of the major complaints with Angular 1 was the lack of deep-linking and SEO support.

                               Angular 2      Knockout       React          React + Redux
Language                       TypeScript     TypeScript     TypeScript     TypeScript
Build/loader [1]               Webpack        Webpack        Webpack        Webpack
Client-side navigation         Yes            Yes            Yes            Yes
Dev middleware [2]             Yes            Yes            Yes            Yes
Hot module replacement [3]     Yes, limited   Yes, limited   Yes, awesome   Yes, awesome
Server-side prerendering [4]   Yes            No             No             Yes
Lazy-loading [5]               No             Yes            No             No
Efficient prod builds [6]      Yes            Yes            Yes            Yes

There is one framework not shown here that has gained some traction in recent times, and that is Aurelia, which has recently been released (RTM). Aurelia was created by the developer who produced Durandal. He later joined the Angular 2 team, had some input into it, but left because he disagreed with some of their decisions. Some of those decisions were probably valid, while others may not have been; such are the egos of developers. Aurelia is supposed to have a simpler syntax than Angular, but it doesn’t currently have the market penetration.

I like to keep things simple. I like to look at what has solid traction, and try to limit my choices based on technical capability, maintainability, performance, ease of learning and popularity. This tells me that the two frameworks with the most promise are actually Angular 2 and React+Redux.

Although Angular 2 has only reached RC4, I still consider it a viable choice today because, remember, by the time your app is released it will most likely have gone to RTM. There are actually a number of significant applications that have been built on the Angular 2 release candidates. The strong tooling and support that will come when Angular 2 is finally released is also a consideration: whatever your choice is, you really will want longevity of your code base, and you certainly don’t want to be embarrassed by making a fringe choice whose potential never materialises.

Alternatively, you might choose to go with React+Redux, which is also available with ASP.NET Core 1.0 and Visual Studio 2015. React is supported by Facebook, and is part of a more advanced ecosystem. Facebook is also innovating quickly to answer any architectural issues related to component-based frameworks. Each framework tries to steal the best bits from the others, and both React and Angular have been doing this.

If it was pure performance I was after, I think I would have to go with React. React is not an Angular killer, however, mainly because of the size of the Angular user base and the structure the framework provides. React is probably a lot simpler to learn, although Angular 2 has become better on that front. It really comes down to how structured you need your code to be versus how much performance you need to get out of your web servers. With massive cloud based sites, extra web servers and lower serving capacity cost money, so I’d say those sites would probably be better off with React.

 

Edit: I just found another table that is worth linking to, by Shannon Duncan. It has more attributes compared, which make it much more interesting:

[Image: angular2-vs-react comparison table]

That article may be found here: Angular2 vs React


Installing Angular 2 to run with ASP.NET Core 1.0 in Visual Studio 2015

August 7, 2016

I initially had a lot of trouble even finding references to people using Angular 2 in Visual Studio 2015. It seemed that no matter what I fiddled with, there were failures at every turn, and it ended up being quite tricky to get it working. In the end I found that the best way to get going in Visual Studio 2015 was to use Yeoman to create your base, and then work backwards to figure out where I had gone wrong.

Yeoman is not so much a package manager as a scaffolding tool. Basically, smart people put together generators combining technologies that they think belong together, and submit them to Yeoman. You go to yeoman.io and you can look up the generators that others have put together.

I initially tried via the yeoman web site, clicked on Discovering Generators, then searched for Angular2, and found the aspnetcore-angular2 package. It was ok, but I had trouble getting it working with ES5.

I recently went to NDC Sydney and saw a session by Steve Sanderson. He has put together a great Yeoman generator that works with ASP.NET Core 1.0 in Visual Studio 2015. The package is called generator-aspnetcore-spa, and installation details are available from his web site: Steve Sanderson’s blog. It has been updated to RC4, and the TypeScript target is set to es5, so it will run on most popular modern browsers.
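Going by his post, getting started comes down to something like the following – treat this as a sketch and check his blog for the current instructions. The first command installs Yeoman and the generator globally; the second is run from an empty project folder:

npm install -g yo generator-aspnetcore-spa
yo aspnetcore-spa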

The beauty of Steve Sanderson’s package is that it also supports React as well, in case you want to give that a try.


ASP.NET Core 1.0 – How to install gulp

July 31, 2016

In a previous blog post, I set up npm, otherwise known as the Node package manager. I added an npm configuration file called package.json under the wwwroot folder. There are two problems with this. Firstly, Visual Studio’s Dependencies node hasn’t been designed for that scenario, which means adding npm packages won’t update the Dependencies folder at the root level, so you lose a fair bit of control over the packages installed. Secondly, the nature of npm is that a package can include a whole bunch of additional files that are unrelated to its runtime needs, and serving those files from wwwroot could potentially create a risk.

Now, to move to a better practice, I have decided to go back to putting the packages in the root folder. I right-clicked on the project, chose Add New Item, selected npm Configuration File and clicked Add. This adds the package.json back into the root folder. I then copied the contents of the original package.json I had under wwwroot into the package.json file in the root folder. After this, I deleted the package.json file from the wwwroot folder and deleted the entire node_modules folder under wwwroot. Why did I do this? Because that is the state the project would have been in under the default scenario of installing npm packages at the top level.

Now, given that any static files that are served to the web site need to reside under wwwroot, I had to come up with a way to relocate the contents of node_modules under wwwroot that didn’t involve putting the package.json file there.

While the most common way to do this is simply to add a static file provider in the Configure method of Startup.cs, as the following code demonstrates:

// Requires "using Microsoft.Extensions.FileProviders;" and "using System.IO;" at the top of Startup.cs
app.UseStaticFiles(new StaticFileOptions
{
    // Serve files straight out of the node_modules folder in the project root
    FileProvider = new PhysicalFileProvider(Path.Combine(env.ContentRootPath, "node_modules")),
    RequestPath = "/node_modules"
});

I decided that eventually I will want a lot more control over this.

Well, the way of the future is to use a tool like gulp, which enables you to run tasks via the Task Runner Explorer in Visual Studio 2015.

Now, I have to admit that I have been attempting to get Angular 2 running in ASP.NET Core 1.0, with some success. That will be the subject of a future post. But for now, I have added gulp to the npm package.json file. That now looks like this:

{
  "version": "1.0.0",
  "name": "myfirstaspnetcoreapp",
  "private": true,
  "devDependencies": {},
  "dependencies": {
    "@angular/common": "2.0.0-rc.4",
    "@angular/compiler": "2.0.0-rc.4",
    "@angular/core": "2.0.0-rc.4",
    "@angular/http": "2.0.0-rc.4",
    "@angular/platform-browser": "2.0.0-rc.4",
    "@angular/platform-browser-dynamic": "2.0.0-rc.4",
    "@angular/router": "3.0.0-beta.2",
    "bootstrap": "^3.3.7",
    "core-js": "^2.4.0",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "systemjs": "0.19.27",
    "zone.js": "^0.6.12",
    "gulp": "^3.9.1",
    "rimraf": "^2.5.4"
  }
}

At the bottom of this file are references to gulp and rimraf. Rimraf is the package that does the equivalent of a Unix rm -rf, i.e. recursively delete a folder. The gulp package is needed so that gulp tasks can run in the Task Runner Explorer.

Next I added the gulp configuration file to the top level of my project. Right click on MyFirstAspNetCoreApp and click Add New Item, then select Gulp Configuration File. The Gulp Configuration File is a javascript file called gulpfile.js. Keep that name and click Add.

Open up gulpfile.js, and paste in the following code:

var gulp = require("gulp"),
    rimraf = require("rimraf");

var paths = {
  webroot: "./wwwroot/",
  node_modules: "./node_modules/"
};

paths.libDest = paths.webroot + "node_modules/";

gulp.task("clean:node_modules", function (cb) {
  rimraf(paths.libDest, cb);
});

gulp.task("copy:node_modules", ["clean:node_modules"], function () {

  var node_modules = gulp.src(paths.node_modules + "/**")
                    .pipe(gulp.dest(paths.libDest + ""));

  return node_modules;
});

What this code does is copy the entire nested contents of node_modules in the root folder to node_modules under wwwroot. Now, I wouldn’t ordinarily finish here, as you really should be more specific about the content you’re actually copying. But to keep it simple, I have settled on this for now.
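If you do want to be more selective, a version of the copy task along the following lines would do it. The package globs are examples only (they are not from my actual gulpfile); list whichever packages your pages actually reference:

// A more selective version of the copy task. The globs below are examples only.
var packagesToCopy = [
    "core-js/client/**",
    "systemjs/dist/**",
    "zone.js/dist/**"
];

gulp.task("copy:node_modules", ["clean:node_modules"], function () {
    // The base option keeps each package's folder structure when copying into wwwroot/node_modules
    return gulp.src(packagesToCopy.map(function (p) { return paths.node_modules + p; }),
                    { base: paths.node_modules })
               .pipe(gulp.dest(paths.libDest));
});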

Next, open up the Task Runner Explorer. If you can’t see it at the bottom of your screen, it is found under View > Other Windows > Task Runner Explorer.

After building my app, the task runner explorer looks like this for me.

[Image: the Task Runner Explorer showing the gulp tasks]

Now I can right-click on the copy:node_modules task and click Run. If you look in gulpfile.js, there is a dependency on clean:node_modules, so that will run clean as well. You shouldn’t need to run this every time you compile the application. You only need to run it when adding or removing npm packages. Nothing changes in the meantime.

Now, when you go to wwwroot and Show All Files, you should see that the node_modules folder has been copied.

The files within node_modules are now available to be added into your html.


Why you should (almost) always choose an off-the-shelf grid and not build your own.

July 30, 2016

Recently I was in a situation with a whole lot of people who I think should have known better. We were building an application, and I was not there when the questionable decision was made to build our own grid.

There is a whole swag of reasons why, except in the simplest of cases, you should never build your own grid. Grids can be complicated, and they require a significant investment to obtain even the simplest of features that you would otherwise get in an off-the-shelf product.

Features like sorting, filtering, frozen columns, frozen rows, summing, hierarchies, cell editing, data exporting, pagination and so on. For high volume data, off-the-shelf grids also include virtual paging, which loads data into the grid page by page instead of all at once. They can be styled however you want, and they are fully tested. Sure, they can require a little bit of learning to achieve what you need, but the cost of doing this is significantly less than a build-your-own solution. The only time you run into problems is when there is too much bloat, or you are trying to do too much with the grid, a problem you would probably have regardless of which path you took.

But you don’t need to take my word for it. It is a principle of Domain Driven Design. Eric Evans, the original author of Domain Driven Design, has a Domain Driven Design Navigation Map which clearly states: “Avoid over-investing in generic sub-domains.”

A grid is a perfect example of a generic sub-domain. From Eric’s Diagram:

[Image: Domain Driven Design Navigation Map - generic subdomain]

So next time someone is absolutely adamant that they need to build their own grid, see through that for what it is, especially if they claim to be Domain Driven Design experts.