Infrastructure as a Service and Platform as a Service promise easy scaling of services. However, scaling in the cloud is not as easy as it seems. If your software architecture isn’t done right, your services and applications might not scale as expected, even if you add new instances. As with most distributed systems, there are a couple of guidelines you should consider. I have summed up the ones I use most often when designing distributed systems.
Design for Failure
As Murphy’s Law states, everything that can fail will fail. So it is very clear that a distributed system will fail at some point, even though cloud computing providers tell us that it is very unlikely. We saw outages of some of the major platforms in the last year [1][2], and there will be more of them. Therefore, your application should be able to deal with an outage of your cloud provider. This can be done with different techniques, such as distributing an application over more than one availability zone (which should be done anyway). Netflix has a very interesting approach to continuously testing their software for failures: they have employed an army of “Chaos Monkeys” [3]. Of course, they are not real monkeys. It is software that randomly takes down different instances. Netflix produces errors on purpose to see how their system reacts and whether it is still performing well. The question is not if there will be another outage; the question is when the next outage will be.
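To illustrate the idea, here is a minimal sketch of a chaos-monkey-style script in Python. The instance list and the terminate_instance function are hypothetical placeholders; a real setup would call your provider’s API and would only run against instances that are safe to kill.

```python
import random
import time

# Hypothetical list of instance IDs that are safe to terminate in a test.
CANDIDATE_INSTANCES = ["i-0001", "i-0002", "i-0003", "i-0004"]

def terminate_instance(instance_id):
    # Placeholder: in a real setup this would call your cloud provider's API.
    print(f"Terminating {instance_id} ...")

def chaos_monkey(interval_seconds=3600, kill_probability=0.25):
    """Randomly take down one instance per interval to test resilience."""
    while True:
        if random.random() < kill_probability:
            victim = random.choice(CANDIDATE_INSTANCES)
            terminate_instance(victim)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    chaos_monkey(interval_seconds=5)  # short interval just for demonstration
```

The point is not the script itself but the habit: if instances disappear regularly on purpose, your architecture has to survive it by design.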
Design for at Least Three Running Systems
For on-premise systems, we always used to do an “N+1” design. This still applies in the cloud: there should always be one more system available than is actually necessary. In the cloud, this can easily be achieved by running your instances in different geographical regions and availability zones. If one region fails, another region takes over. Some platforms offer intelligent routing and can forward traffic to another zone if one zone is down. Beyond that, there is the “rule of three,” which basically says you should have three systems available: one for me, one for the customer, and one in case of a failure. This significantly reduces the risk of an outage.
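The following sketch shows what zone-aware failover could look like at the application level. The three endpoints and the health-check path are assumptions for illustration; in practice a load balancer or DNS-based routing service would usually do this for you.

```python
import urllib.request

# Hypothetical endpoints in three different regions / availability zones.
ENDPOINTS = [
    "https://eu-west.example.com/api",
    "https://us-east.example.com/api",
    "https://ap-south.example.com/api",
]

def is_healthy(endpoint, timeout=2):
    """Very simple health check: does the endpoint answer at all?"""
    try:
        with urllib.request.urlopen(endpoint + "/health", timeout=timeout):
            return True
    except Exception:
        return False

def pick_endpoint():
    """Return the first healthy endpoint; fail only if all three are down."""
    for endpoint in ENDPOINTS:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("All zones are down")
```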
Design for Monitoring
We all need to know what is going on in our datacenters and on our systems. Therefore, monitoring is an important aspect of every application you build. Intelligent monitoring goes beyond I/O performance and other low-level metrics. Ideally, your system can “predict” future load, either from statistical data in your application’s history or from your application’s domain. If your application is about sports betting, you will see high load during major sports events; if it is a social game, load might be higher during the day or when the weather is bad outside. In any case, your system should be monitored continuously and should warn you when a major failure might be coming up.
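As a rough illustration, the sketch below forecasts the next period’s load from a simple moving average over historical request counts and raises an alert when the forecast approaches capacity. The numbers and the alert() function are made up; a real system would feed this from its monitoring backend and use a proper forecasting model.

```python
from collections import deque

def alert(message):
    # Placeholder: hook this into your paging / notification system.
    print("ALERT:", message)

class LoadPredictor:
    """Naive forecast: average of the last `window` measurements."""

    def __init__(self, window=24, capacity=10_000):
        self.history = deque(maxlen=window)
        self.capacity = capacity  # requests/hour the current instances can handle

    def record(self, requests_per_hour):
        self.history.append(requests_per_hour)

    def forecast(self):
        if not self.history:
            return 0.0
        return sum(self.history) / len(self.history)

    def check(self):
        predicted = self.forecast()
        if predicted > 0.8 * self.capacity:
            alert(f"Predicted load {predicted:.0f} req/h is close to capacity")
```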
Design for Rollback
Large systems are typically owned by different teams in your company. This means that a lot of people work on your systems and rollouts happen often. Even with a lot of testing involved, it will still happen that a new feature affects other services of your application. To prevent that, your application should provide an easy rollback mechanism.
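A minimal sketch of what such a mechanism could look like: every rollout keeps previous releases around, and “current” is just a pointer that can be moved back in one step. The release names are hypothetical.

```python
# Keep past releases available and point "current" at one of them,
# so a rollback is just moving the pointer back (hypothetical layout).
releases = ["v1.4.0", "v1.4.1", "v1.5.0"]  # oldest to newest
current = releases[-1]

def deploy(version):
    global current
    releases.append(version)
    current = version
    print("Deployed", version)

def rollback():
    global current
    if len(releases) < 2:
        raise RuntimeError("Nothing to roll back to")
    releases.pop()              # drop the broken release
    current = releases[-1]      # previous release becomes active again
    print("Rolled back to", current)
```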
Design No State
State kills. If you store state on your servers, load balancing becomes much more complicated. State should be eliminated wherever and whenever possible. There are several techniques to reduce or remove state in your application. Modern devices such as tablets or smartphones are powerful enough to store state information on the client. Every service call should be independent, and it shouldn’t be necessary to keep session state on the server. All session state should be transferred to the client, as described by Roy Fielding [4]. Architectural styles such as ROA support this idea and help you make your services stateless. I will dig into REST and ROA in one of my upcoming articles, since they are a great fit for distributed systems.
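A sketch of the difference, using a signed token that the client sends with every request instead of a server-side session. The secret and the payload format are assumptions for illustration; in practice frameworks provide this for you (signed cookies, JWTs and the like).

```python
import hashlib
import hmac
import json

SECRET = b"example-secret"  # assumption: shared by all service instances

def issue_token(session_data):
    """Hand the session state to the client, signed so it cannot be tampered with."""
    payload = json.dumps(session_data, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode(), signature

def handle_request(payload, signature):
    """Stateless handler: any instance can verify the token, no server session needed."""
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("Invalid session token")
    session_data = json.loads(payload)
    return f"Hello {session_data['user']}, no server-side state was used."

payload, sig = issue_token({"user": "alice", "cart": ["book-123"]})
print(handle_request(payload, sig))
```

Because the state travels with the request, any instance behind the load balancer can answer it, which is exactly what makes horizontal scaling simple.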
Design to Disable Services
It should be easy to disable services that are not performing well or that are poisoning the entire application. Therefore, it is important to isolate services from each other, so that a failing service does not affect the entire system’s functionality. Imagine the comment function of Amazon is not working: reviews might help you make up your mind about a book, but their absence wouldn’t prevent you from buying it.
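One way to make this concrete is a simple kill switch around each non-essential service, as in the sketch below. The flag names and the comment service are illustrative; in production the flags would live in a configuration service so a misbehaving feature can be switched off without a redeploy.

```python
# Hypothetical feature flags for non-essential services.
FEATURE_FLAGS = {"comments": True, "recommendations": True}

def fetch_comments_from_service(product_id):
    # Placeholder for the real service call.
    return [f"Great product ({product_id})!"]

def get_comments(product_id):
    if not FEATURE_FLAGS["comments"]:
        return []  # degrade gracefully: the shop still works without comments
    return fetch_comments_from_service(product_id)

# Disabling the comment service must not prevent anyone from buying:
FEATURE_FLAGS["comments"] = False
print(get_comments("book-123"))  # -> []
```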
Design Different Roles
With distributed systems we have a lot of servers involved, and it’s necessary to scale individual services rather than an entire front-end or back-end server. If there is exactly one front-end system that hosts all roles and a specific service experiences high load, why would it be necessary to scale up all services, even those with minor load? You can improve your system by splitting it up into different roles. As described by Bertrand Meyer [5] with Command Query Separation, your application should also be split into different roles. This is basically a key principle for SOA applications; however, I still see that most services are not separated. There should be more separation of concerns between services. Implement some kind of role separation for your application and services to improve scaling.
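The sketch below separates command (write) and query (read) handling into distinct roles so that each side can be scaled independently; the in-memory store and class names are stand-ins for whatever the real services would use.

```python
# Commands (writes) and queries (reads) are handled by separate roles,
# so the read side can be scaled out independently of the write side.

class OrderCommandService:
    """Write role: accepts new orders, changes state, returns nothing."""
    def __init__(self, store):
        self.store = store

    def place_order(self, order_id, item):
        self.store[order_id] = item

class OrderQueryService:
    """Read role: answers questions, never changes state."""
    def __init__(self, store):
        self.store = store

    def get_order(self, order_id):
        return self.store.get(order_id)

shared_store = {}  # stands in for the database both roles talk to
writes = OrderCommandService(shared_store)
reads = OrderQueryService(shared_store)

writes.place_order("42", "book-123")
print(reads.get_order("42"))
```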
There might be additional principles for distributed systems. I see this article as a rather “living” one and will extend it over time. I would be interested in your feedback. What are your thoughts on distributed systems? Email me at mario.mh@cloudvane.com, use the comment section here, or get in touch with me via Twitter at @mario_mh.
[1] Azure Management Outage, Ars Technica
[2] Amazon EC2 Outage, TechCrunch
[3] The Netflix Simian Army, Netflix Tech Blog
[4] Representational State Transfer (REST), Roy Fielding, 2000
[5] Command Query Separation, Wikipedia