How to build a modern best practice enterprise application on-premise or in the cloud

May 24, 2012

This is a summary of the key points of my presentation at the Victoria.Net users group, held at Microsoft in Melbourne, Australia, with a few added points thrown in for extra value.

Developing large-scale applications is hard, and never before have we been under so much pressure to be efficient with our time. It is imperative that we spend as little time as possible on unproductive infrastructure code and as much time as possible actually implementing the domain/business services logic.

For some reason, many enterprise architects promote leaving performance to the end of the development process. They decide on an architecture, get all their developers to build it, and then, at the end of the development process, wonder why the application runs so slowly. They haven’t done even a basic performance test to see whether any of their assumptions were correct, and at the end of the process they sometimes discover serious flaws in the architecture of their sites requiring considerable rework, and occasionally a complete rewrite.

In a previous post, I showed statistics produced by Watts Humphrey, often called the father of software quality, showing the cost of discovering and rectifying defects in each project phase. The later you find a defect in the development process, the more expensive it is to rectify. So if you wait until the end of a project to deal with an issue that should have been addressed at the start, it can cost you orders of magnitude more than if you at least make an attempt at dealing with performance from the start. You can find that post here.

Another issue with large-scale web sites is configuration. Most configuration in a .NET application lives in web.config or app.config files. Within a web site, if you modify the web.config file, the web application restarts to pick up the new settings. One environment I worked in had a web farm of 8 web servers. Because of their reliance on web.config settings, and on the Enterprise Library application blocks, which make heavy use of configuration files, they were always running into problems with servers whose configuration was out of sync, and with unstable state during updates. And they still are.

So on Tuesday night, I showed a comprehensive end-to-end application that demonstrates an n-tier, service-oriented architecture that is considered best practice. It also allows on-premise, hybrid and cloud-based tiers. For people who are struggling with the whole “to cloud or not to cloud” argument, this allows you to hedge your bets. If this point has been holding back project development, you don’t need to worry. This architecture allows you to build an on-premise application and then move it to the cloud at a later date, for very little extra cost, if the decision is made to do so.

The features are:

A configuration block. The configuration block is at the heart of the application. It enables you to centralise configuration for all the tiers of your application, and provides a web-based portal for making modifications. When you make changes to configuration, those changes are automatically propagated to every server.
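As a rough illustration of the idea only (this is a hypothetical sketch, not the StockTrader configuration block’s actual API), centralised configuration boils down to every tier reading from one versioned store and picking up changes without a process restart:

```python
class CentralConfigStore:
    """Stands in for the central configuration database/service
    (hypothetical; illustrative only)."""

    def __init__(self, settings):
        self._settings = dict(settings)
        self._version = 1  # bumped on every change

    def update(self, key, value):
        self._settings[key] = value
        self._version += 1

    def snapshot(self):
        return self._version, dict(self._settings)


class ConfigClient:
    """Held by each tier; refreshes from the central store without
    restarting the host process (unlike editing web.config)."""

    def __init__(self, store):
        self._store = store
        self._version, self._settings = store.snapshot()

    def refresh(self):
        version, settings = self._store.snapshot()
        if version != self._version:  # only apply real changes
            self._version, self._settings = version, settings
            return True
        return False

    def get(self, key):
        return self._settings[key]
```

A background timer calling `refresh()` on each tier gives you the “changes propagate automatically” behaviour without the web.config restart problem described earlier.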

Centralised exception handling and errors.

Centralised logging.

A service map. The service map shows a visual representation of all your tiers, where they are hosted, what instances are running and what databases you are running, and it even allows you to drill down into a database and see latency figures and the worst-performing and most CPU-intensive queries. Using the service map, you are able to see any servers that become unavailable, for whatever reason. The architecture actually implements polling of all the component instances, so you’ll know almost straight away if any of the services aren’t available.
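The polling idea itself is straightforward. A minimal sketch, assuming a health-check callable per endpoint (hypothetical names, not the service map’s actual code):

```python
def poll_instances(instances, probe):
    """Poll every registered instance and report availability.
    `instances` maps instance name -> endpoint; `probe` is a callable
    that returns True if the endpoint answered the health check."""
    status = {}
    for name, endpoint in instances.items():
        try:
            status[name] = bool(probe(endpoint))
        except Exception:
            status[name] = False  # unreachable counts as down
    return status
```

In the real application the probe would be an HTTP or WCF health-check call; here it is injected so the sketch stays self-contained.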

A large-scale application project structure that you can use, separating User Interface, Business Services Logic and Database.

A database layer that is optimised for performance. This was at the heart of the performance tests against IBM WebSphere, so if there were a faster way of interacting with the database, I would be absolutely shocked. Microsoft has to push as many transactions per second as possible through this system; they have to outperform IBM WebSphere, which they do dramatically. (IBM WebSphere is six times more expensive for the same grunt.)

A benchmarking tool that allows you to test your architectural choices. What happens with this architecture if you make the wrong choice? Nothing. The application has been designed so that you can change where your tiers are hosted, at runtime, on the fly! So you are protected against making the wrong choice.
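Conceptually, that on-the-fly re-hosting works because the client resolves the business services layer through configuration rather than a hard-coded reference. A hypothetical sketch of the pattern (not the actual StockTrader code):

```python
class InProcBusinessServices:
    """Business services running in the same process as the caller."""

    def get_quote(self, symbol):
        return f"quote for {symbol} (in-proc)"


class RemoteBusinessServices:
    """Business services reached over the network (IIS or cloud hosted)."""

    def __init__(self, endpoint):
        self.endpoint = endpoint

    def get_quote(self, symbol):
        # A real client would make a WCF/HTTP call to self.endpoint here.
        return f"quote for {symbol} (via {self.endpoint})"


def resolve_business_services(config):
    """Pick the implementation from configuration at runtime, so the
    hosting choice can change without recompiling the client."""
    if config["mode"] == "inproc":
        return InProcBusinessServices()
    return RemoteBusinessServices(config["endpoint"])
```

Because the caller only ever sees the resolved object, flipping a configuration value is enough to move a tier from in-proc to IIS to the cloud.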

I should point out that I demonstrated all of this. I started by running the application fully in-proc. Then I moved the business services layer to be IIS hosted. I was able to show this on the visual service map. Then I switched the application over to a business service layer hosted in the cloud. When I showed the service map, it looked awesome.

There are full instructions on how to set up the application, and the installer takes only about 5 to 10 minutes to install everything. It is rich with documentation, including full instructions on how to configure Azure with different tiers, how to install digital certificates, and how to ensure that all the communication channels are encrypted.

I mentioned that if you host a business services layer in Azure, you can supply digital certificates to any company you would like to use that service. Any company can then integrate with the business services layer without you having to worry about them gaining access to your network.

I explained how the load balancing works. You can have as many servers as you like in the web farm, and requests are round-robined across them.
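Round-robin dispatch itself is simple. A minimal sketch (hypothetical names, not the framework’s actual code):

```python
import itertools


def round_robin_dispatcher(servers):
    """Return a function that yields servers in strict rotation,
    so each incoming request goes to the next server in the farm."""
    pool = itertools.cycle(servers)

    def next_server():
        return next(pool)

    return next_server
```

Each call to the returned function hands back the next server, wrapping around when it reaches the end of the list.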

I spoke about how to integrate the configuration block into your own application, and also demonstrated the Visual Studio template provided to generate the basic application layers, which include the configuration block.

The application itself was written by Gregory Leake. Gregory is the Technical Product Manager at Microsoft for the SQL Azure and Azure AppFabric teams. The application was first written around 2006 and is now at version 5. Many thousands of hours have gone into thinking through the mass of scenarios you might need to consider, and into coding this application. At the time of writing, the application has been evolving for six years. The application, including a working demonstration of the running Azure version, may be found here: http://msdn.microsoft.com/stocktrader

I guess my final message here is that in order to be more efficient in your coding, you need to simplify. Forget overcomplicated, tricky designs; all that will happen is that they will be difficult to maintain. Spend as little time as you can on infrastructure code. If you can find a framework such as this, with all these goodies already provided, then embrace it. There is no way you can develop all of this efficiently in a short timeframe, and why would you want to anyway? It’s just reinventing the wheel. Take it on and use it within your environment. Then spend most of your time building your application, not the infrastructure to support it.


What are the security and regulatory issues with cloud computing?

January 30, 2012

There is no easy answer as to whether you will personally be happy to put all your data and intellectual property in the cloud. There are all sorts of issues that you will need to address, and nothing I can say will really allay all of your concerns.

Example issues might be:

  • Is my data too sensitive to put in the cloud?
  • Do I retain legal ownership of my data?
  • What happens to my data if I forget to pay my subscription?
  • Will I have a backup of my data if something happens to the cloud version?
  • Can I easily move my data from one vendor to another or in house?
  • Do I know and accept the privacy laws of the countries that have access to my data?
  • If I’m in a shared environment, and I delete my data, how sure can I be that the data is sanitised?
  • Does the SLA adequately reflect the actual damage caused by a breach of the SLA, such as scheduled down time or data loss?

In Australia, we have a government organisation called the Defence Signals Directorate. It has produced a white paper on what government organisations need to do to ensure they are adequately protecting their data. That document may be found here: Cloud Computing Security Considerations. One interesting thing is that the document contains a whole list of issues for you to consider, such as those above. It is well worth a look.

That document isn’t just good for its discussion of the security implications. It also has a great “Additional Information” section, which provides links to various other bodies that have performed analysis in this area. For example, there is a reference to a comparison of cloud providers.

I think the message to take away from this is that it can be done, and if the military and government are allowing it to be considered, then perhaps you should consider it as well.


How much can I save with Cloud Computing?

January 27, 2012

Well, that depends. How much of your environment are you intending to put in the cloud? Are you intending to just put a toe in the water, or are you going to put everything in?

If you’re talking about moving all your servers into the cloud, Microsoft produced a whitepaper in November 2010 that examined the economics of moving to the cloud. That whitepaper is found here: Microsoft Economics Of The Cloud

To keep it simple: on page 10 of the article there is a graph showing Total Cost of Ownership against the number of servers you have in the cloud. It starts at 100 servers, and as the number of servers increases, the costs drop. The sweet spot is around 1,000 servers, where the real benefits reach their peak. Beyond that, costs do still fall, but not at the rate seen up to the 1,000-server point.

For a mid-sized company, I would expect to see between 20% and 30% reduction in costs. I recently had a conversation with a company with 40,000 employees that has now put everything in the cloud. They claimed a 25% reduction, and I would say their calculations are pretty conservative.

The next issue is private vs. public cloud. Of course, there are many companies that can’t put their data in the public cloud for a variety of reasons. Many organisations can’t share, or even be seen to be sharing, their data with other companies: organisations with privacy obligations such as childcare providers and aged-care facilities, government, and the military. There are organisations with regulatory restrictions on relocating data to offshore data centres (the public cloud for Australia, for example, is located in Singapore). And there are organisations you simply don’t want to hear have gone to the public cloud. This is in spite of the fact that the public cloud is architected with the utmost focus on security and separation of tenants and their data. These people need to choose a private cloud.

A private cloud can be hosted in-house or in a local data centre, and is generally set up for you by a larger organisation, such as Fujitsu. The equipment is sourced from the same batch of hardware that would be found in a public cloud setup, but you don’t share any machines with other organisations; you are the only tenant. Whoever implements your private cloud will also manage the service, and because their employees can manage significantly more servers than your own infrastructure employees can, you still get some of that economy of scale. The only problem with private cloud is the cost, precisely because you are the only tenant.

If you look at the Microsoft Economics of the Cloud whitepaper, on page 15 there is a graph that shows the relative costs of private cloud vs. public cloud. For smaller organisations, such as those with 100 servers, the cost of private cloud is more than 40 times the cost of utilising the public cloud. Of course, the cost of private cloud drops as you add more servers. Looking at the graph, by the time you get to 1,000 servers, the cost of private cloud is 10 times the cost of the public cloud.
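If you want a rough feel for where the ratio sits between those two quoted points, a log-linear interpolation will do. This is purely illustrative and entirely my own construction; the whitepaper’s graph is the real source:

```python
import math


def private_over_public_ratio(servers):
    """Log-linear interpolation between the two points quoted above:
    roughly 40x at 100 servers and roughly 10x at 1,000 servers.
    Illustrative only; read the actual figures off the whitepaper."""
    s1, r1 = 100, 40.0
    s2, r2 = 1000, 10.0
    # position of `servers` between the two anchors, on a log scale
    t = (math.log10(servers) - math.log10(s1)) / (math.log10(s2) - math.log10(s1))
    return r1 * (r2 / r1) ** t
```

By this (admittedly crude) estimate, an organisation with a few hundred servers would still be paying a 20x-plus premium for going private.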

But you really need to ask yourself: do I really need a private cloud? Security in the public cloud is extremely robust, and the architecture prohibits data crossing tenant boundaries. Microsoft and the other providers know that a serious security breach would virtually finish them, as Amazon found recently, and Microsoft must be one of the most attacked companies in history. So when they send their security specialists in to architect a system where companies may want to put their most sensitive trade secrets, they have to make sure it is secure. There are large multinational companies with their entire operations in the public cloud; I’ve met one of them. They wouldn’t have done it if they had real concerns about the safety of their data.



What is Cloud Computing? Is it hype or will it give me real benefits?

January 24, 2012

There is still a lot of misunderstanding about what the cloud is. People think that having their servers hosted in a data centre is the same as having them in the cloud, so they can’t visualise the change. The cloud is about providing economies of scale, and about bulk administration of the servers and services provided.

Firstly, there are the cloud data centres. The cloud providers architect the system so that they can mass-produce a single server type for their data centres, and then they produce it in bulk. The data centres are absolutely massive. The interesting thing is that adding another server costs them in the order of $10. (Feel free to jump in with corrected figures if you have them; the figure I was originally told was actually less.) That means they can pass some of the savings on to you.

Secondly, there are cloud applications: applications hosted in the cloud. Instead of your staff having to buy and build a server, install the software, and configure it, you can simply put in a request to have it provisioned, and a new instance will be created and available either instantly or within a very short period of time. You no longer need staff skilled in building and configuring hardware; you just provision another application and it’s up and running in a timely manner.

Of course, if you’ve already got all the hardware and all the licences, you’re quite possibly not going to consider moving to the cloud. Many people will therefore wait until their hardware or O/S is virtually obsolete, which may cause them to miss an opportunity to reduce or retask headcount.

That’s because the cloud providers supply all the hardware and the provisioning of servers. They have teams that can administer 1,000 servers per person; within your own company, you’d be lucky to have one person who could administer 100 servers. So instead of employing people to manage all that infrastructure, you need fewer people.

Now, I’ll use a lot of figures here to try to make a point. It’s hypothetical. In fact, I’ll say they’re all made up so I don’t have to get into arguments about them. (If you believe you have accurate references to the true costs, by all means, put them forward!)

To a small business, whether you spend $200 a year or $300 a year on a server, there’s not really much difference. But consider a larger organisation with 1,000 servers: the difference becomes $200,000 vs. $300,000 a year.

In the cloud case, that server cost might drop to $100 per server per year.

Say a person with that skillset was costing you, say, $150,000 a year (not what they’re earning, but their cost to the business), and you now needed three fewer staff members because of it. The saving from that is $450,000 a year. Add the $200,000 saving on the servers and that starts looking like a much more interesting $650,000.
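Plugging those (made-up, as stated above) figures into a back-of-the-envelope calculation:

```python
def annual_cloud_saving(servers, onprem_cost, cloud_cost, staff_reduced, staff_cost):
    """Back-of-the-envelope annual saving from the hypothetical figures
    in the text: per-server cost difference plus reduced headcount."""
    server_saving = servers * (onprem_cost - cloud_cost)  # $300 -> $100 per server
    staff_saving = staff_reduced * staff_cost             # 3 staff at $150,000 each
    return server_saving + staff_saving
```

With 1,000 servers dropping from $300 to $100 a year and three staff members at $150,000 each, that comes to $650,000 a year.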

So when companies scoff at moving to the latest “fad” because of their existing environments, you think, yeah, OK. But then consider new companies: fast movers, up-and-comers. They aren’t going to care about all the issues bigger companies have around corporate governance and data safety, because the cloud offering will be an adequate proposition for them; data safety and security will be good enough. Then, when it comes time for these companies to scale, they can just spin up new servers. Old-school companies will move much more slowly, and the new cloud businesses will be the rule breakers, catching up faster than ever before.