

Simply stated, High Availability (HA) is about keeping a system continuously operational for as long as possible.

For sure you have heard of the “99s” percentage rates in Service Level Agreements, 99%, 99.9%, 99.999%… assigned to services around the internet or to the services deployed at your job. These rates refer to the uptime (and downtime) of a service during the year: 99% means a downtime of 3.65 days per year, while the “five nines” rate allows a downtime of only 5.26 minutes out of the 525,600 minutes that comprise a whole year.
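If you want to check that arithmetic yourself, it is a one-liner. A quick back-of-the-envelope check (not part of our stack, just plain Python on the command line):

$ python3 -c "print(525600 * 0.01 / 1440)"     # 99% uptime: days of downtime per year (~3.65)
$ python3 -c "print(525600 * 0.00001)"         # five nines: minutes of downtime per year (~5.26)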

These SLAs are at the same time the pride and joy of solution providers and the source of oganza bisasa moments for business contractors who wonder how such marvels could be achieved.

So, how can such a thing be achieved? Basically, by observing three system design principles:

  1. Elimination of SPOFs (Single Points of Failure);
  2. Reliable Crossover;
  3. Detection of failures, and recovery that goes unnoticed by the user.

These design principles are built into a Java EE solution and define the very reason such a middleware platform exists.

In this series we’ll walk through an alternative approach based on FOSS, virtualization, containerization and Linux infrastructure. Our tech stack for enabling a Java microservices architecture is composed of:

  • Spring Boot
  • Ubuntu
  • Vagrant
  • Ansible
  • HAProxy
  • Keepalived
  • Docker
  • Docker Swarm

But first let’s rapidly review HA with Java EE.

HA with Java EE

An application server like Oracle WebLogic Server (OWS) approaches HA by implementing clusters of instances that provide load balancing and failover features to a traditional Java EE application. To do so, OWS provides a mechanism to tie together independent instances of Web Containers and EJB Containers.

Each server maintains a list of the other servers in a cluster that is logically configured via the administration console. Each server then tracks the status of every other server in the cluster through a heartbeat mechanism: a TCP/IP unicast or UDP multicast message sent to each other instance in the cluster. When heartbeat messages fail to arrive from a node, that node is considered down and is readily removed from the list of active instances.

We can observe that OWS complies with principles 1 and 3 of HA system design. For web applications, the OWS suggested architecture relies on a hardware load balancer or an HTTP server like MS IIS, Apache, or even another OWS instance to provide a sort of reliable crossover for our Java EE application.

If you are a savvy Java EE rat, bingo: I omitted session replication from the OWS equation. That was for the sake of conciseness in this HA series.

No, I’m lying. Stateful session replication is deadly boring, brings doubtful outcomes, and for our stateless Spring Boot REST services it will not be necessary. For a complete reference on OWS clustering, you can refer to this book.

HA with Commodity Software

Achieving HA with a one-stop-shop (well) paid solution can bring comfort and manageability, since paid solutions often offer full administration consoles for deploying clustered environments.

Nevertheless, virtualization, the cloud, NoSQL solutions fronted by microservice APIs and, more recently, process isolation through containerization bring new implementation possibilities to the table, along with new issues.

I have always been a fan of commoditization in technology, and of FOSS, even when it was not accepted in enterprise environments. Commoditization and FOSS mean that we can achieve results comparable to those of giants like Oracle.

In traditional Java EE we talk about Tomcat, Apache and JBoss (WildFly). But in the recent microservices movement built upon virtualized environments, we take commoditization one step further: we bring application development down to the Linux ecosystem.

One advantage of microservices over complex Java EE application servers, from a DevOps standpoint, is that we can now rely on Linux software to unify the tech stack on the ops side. For HA, that means we don’t need a multi-certified professional shaman casting spells on a proprietary Java EE stack just to manage application server magic.

If we are talking about commodity Linux software for HA, we are talking about HAProxy and Keepalived.

But let’s calm down the application server refugees, like myself, who at this point may be wondering how to manage the configuration of several environments without an admin console.

Coding the Infrastructure

One of the advantages of one-stop-shop solutions like (paid) OWS or (free) WildFly is that they offer a unified environment in which to implement HA solutions.

Some developers and Java architects may feel uncomfortable relying only on the Linux stack, mainly because of administration concerns, and that’s a fair point. But in an age in which we have to live with a plethora of fine-grained services, I’m afraid that administrators driving a cozy console UI (ok, not so cozy…), or even supported by Jenkins, would quickly be overwhelmed by an infinitude of new error-prone points, clicks and SSH sessions.

 

Unlike immemorial bash and bat files, infrastructure as code brings new configuration capabilities close to application code. With tools like Vagrant, Docker, and Ansible we can automate virtually anything a sysadmin would, and even more: we can orchestrate infrastructure according to the needs of our architecture, integrating our application architecture with operating system resources.

This infrastructure coding is exactly what we will do here. First things first.

For now, trust me when I tell you that we need at least 4 machines to demonstrate a full HA solution. That comprises a Spring Boot microservice machine and a load balancer machine.

And since we are talking about SPOF elimination, we need to double it all! That can be seen in the next figure.

[Figure: HA topology with two load balancer machines and two Spring Boot microservice machines]

To start coding the infrastructure, we begin with the different machine profiles planned for the architecture. Enter Vagrant.

Vagrant

Vagrant is a tool that simplifies the workflow and reduces the workload necessary to run and operate virtual machines by offering a simple command-line interface to manage VMs. To run a VM and connect to it from your host workstation, you need just two commands:


$ vagrant up

$ vagrant ssh

Under the hood, with these simple commands Vagrant saves me the following steps:

  1. Download the VM image;
  2. Start the VM;
  3. Configure the VM’s resources like RAM, CPUs, shared directories and network interfaces;
  4. Optionally install software within the VM with tools like Puppet, Chef, Ansible and Salt, which we call provisioning.

Vagrant is not a VM tool itself; it works on top of VM solutions like VirtualBox, VMware and Hyper-V. Vagrant unifies the management of these tools, and that’s all the magic.

Installing Vagrant

As we saw, Vagrant relies on well-known VM tech, which in Vagrant parlance is called a VM provider; here we will use the free solution VirtualBox. My host machine is an Ubuntu derivative, and VirtualBox is available in the default apt-get repositories, so I can install it just by issuing:


$ sudo apt-get install virtualbox

If you want a version other than the one available in the repositories, you can follow these steps. To test the installation, just run the following command and wait for the VirtualBox visual console:


$ virtualbox 

Vagrant is also available in the Ubuntu repositories, so you can install it just by issuing:


$ sudo apt-get install vagrant
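
A quick sanity check that both tools are in place:

$ vagrant --version
$ VBoxManage --version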

Our first Vagrant machine

To code our infrastructure with Vagrant we use a Vagrantfile. We use it to describe the type of machine we need and how to configure and provision it. A Vagrantfile is a kind of Ruby DSL that is supposed to be versioned so it can be shared by developers on their individual workstations.

Let’s create a directory named vagrant_springboot_ha and there create a file
named Vagrantfile with the following contents:


VAGRANT_API = "2"

Vagrant.configure(VAGRANT_API) do |config|

  config.vm.box = "ubuntu/trusty64"

end

Above we tell Vagrant that we wish to create a 64-bit Ubuntu VM. This VM image comes from the HashiCorp repository of images, which we can search at https://atlas.hashicorp.com/boxes/search. An image used by Vagrant is called a Vagrant box. To define a custom path to cache these images on our local machine, we can use the following command:


$ export VAGRANT_HOME=/opt/pit/vagrant_home
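
At any moment you can inspect which boxes are already cached locally:

$ vagrant box list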

And finally we can access this machine with the following commands, issued inside the directory containing our Vagrantfile:


$ vagrant up
$ vagrant ssh

That’s it! Now we have a configurable Ubuntu-based virtual machine to use as the development environment for our architecture. If you start VirtualBox, you can see the new machine listed as follows:

[Screenshot: the new Vagrant-created VM listed in the VirtualBox console]
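
If you prefer the terminal over the GUI, the VirtualBox CLI can list the machines it knows about (the exact VM name Vagrant generates may vary):

$ VBoxManage list vms
$ VBoxManage list runningvms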

 

Vagrant Multimachine: simulating our topology with Phoenix Servers

Our topology requires at least four machines. Fortunately, Vagrant supports the configuration of several machines through a mechanism called multimachine, with which we can add as many machines as needed, based on different boxes if necessary.

But before that, we can fearlessly destroy this Phoenix Server with the following commands:


$ vagrant halt
$ vagrant destroy

Why did I call it a Phoenix Server? Simply because, after destroying it, I can create a brand new identical Ubuntu machine instance, based on the standard HashiCorp repository image, just by repeating the steps above.
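
If you want to follow the life cycle of this Phoenix Server, vagrant status shows the machine state before and after the destroy:

$ vagrant status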

To define the four machines specified above, just update our Vagrantfile with the following content:


VAGRANT_API = "2"

Vagrant.configure(VAGRANT_API) do |config|

 config.vm.box = "ubuntu/trusty64"

 config.vm.define "load_balancer_active" do |load_balancer_active|
  load_balancer_active.vm.network "private_network", ip: "192.168.33.10"
 end

 config.vm.define "load_balancer_bkp" do |load_balancer_bkp|
  load_balancer_bkp.vm.network "private_network", ip: "192.168.33.11"
 end

 config.vm.define "hello_service1" do |hello_service1|
  hello_service1.vm.network "private_network", ip: "192.168.33.12"
 end

 config.vm.define "hello_service2" do |hello_service2|
  hello_service2.vm.network "private_network", ip: "192.168.33.13"
 end

end

In the above code we defined a topology of 4 servers: hello_service1, hello_service2, load_balancer_bkp, and load_balancer_active. We can now bring them all up by issuing the commands:

$ vagrant up hello_service1
$ vagrant up hello_service2
$ vagrant up load_balancer_bkp
$ vagrant up load_balancer_active
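
As a shortcut, in a multimachine setup a bare vagrant up brings up every machine at once, and vagrant status lists all four and their current states:

$ vagrant up        # no argument: starts every machine defined in the Vagrantfile
$ vagrant status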

As you can see, we declaratively configured private networking for all machines and assigned a specific IP address to each one. For the disbelievers, we can make a test. In two different terminals (don’t forget to set VAGRANT_HOME) we can open SSH sessions to the machines hello_service1 and hello_service2, and in the first terminal type:

$ vagrant ssh hello_service1
vagrant@vagrant-ubuntu-trusty-64:~$ ping 192.168.33.13

And in the second terminal:

$ vagrant ssh hello_service2
vagrant@vagrant-ubuntu-trusty-64:~$ ping 192.168.33.12

An oganza bisasa moment.

Once we have all the machines to build our topology, we can add some purpose to them. Let’s provision them using Ansible.

Ansible

Ansible is an automation tool. It runs remotely through SSH against machines registered in a machine inventory, and it provisions those machines according to definitions specified in an automation language called playbooks.

For each machine resource type in our architecture, like the load balancer or the microservice, we can use a reusable set of roles, comprising configurations such as Java, Keepalived and HAProxy. Vagrant has native support for Ansible and, through this integration, can seamlessly build the inventory from the machines defined in the Vagrantfile.
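
Once installed (we’ll do that in a moment), Ansible is usually driven against a hand-written inventory file. As a minimal sketch of the model (the hosts.ini file here is hypothetical, just for illustration), an ad-hoc run against every inventoried machine would look like this:

$ ansible all -i hosts.ini -m ping
$ ansible all -i hosts.ini -m command -a "uptime"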

To install Ansible on an Ubuntu-derived system, run the following commands:


$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
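
And verify the installation:

$ ansible --version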

First things first: we’ll provision our microservice and load balancer machines without the concept of Ansible roles for now; in the next posts we will refactor our scripts. To make things clearer, let’s create a subdirectory named provisioning in our Vagrant project.


$ mkdir provisioning

So we’ll create our first playbook for the Java microservice machines, named java-microservices.yml, as follows:


---
- hosts: spring-boot-microservices
  tasks:
    - debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"

    - block:
        - name: Install Oracle Java 8 repository
          apt_repository: repo='ppa:webupd8team/java'

        - name: Accept Java 8 License
          sudo: yes
          debconf: >
            name='oracle-java8-installer'
            question='shared/accepted-oracle-license-v1-1'
            value='true' vtype='select'

        - name: Install Oracle Java 8
          sudo: yes
          apt: name=oracle-java8-installer update_cache=yes state=present force=yes
As we can see, we translated a sequence of bash commands into the YAML-based playbook DSL: we added the Ubuntu PPA repository, used debconf to accept the Oracle Java license and finally installed Java on our microservice machine.

Note that in the hosts directive we specify a group of machines from our inventory, named spring-boot-microservices.
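
Before wiring the playbook into Vagrant, it is worth letting Ansible parse it on its own; a syntax check catches YAML indentation mistakes early (run from the project root, assuming the playbook lives under provisioning/):

$ ansible-playbook provisioning/java-microservices.yml --syntax-check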

Finally, we come back to our Vagrantfile and include the Ansible provisioner configuration for both Spring Boot machines:

 


 config.vm.define "hello_service1" do |hello_service1|
   hello_service1.vm.network "private_network", ip: "192.168.33.12"

   hello_service1.vm.provision "ansible" do |ansible|
     ansible.playbook = "provisioning/java-microservices.yml"
     ansible.sudo = true
     ansible.raw_arguments = ["--connection=paramiko"]

     ansible.groups = {
       "spring-boot-microservices" => ["hello_service1"]
     }
   end
 end

 config.vm.define "hello_service2" do |hello_service2|
   hello_service2.vm.network "private_network", ip: "192.168.33.13"

   hello_service2.vm.provision "ansible" do |ansible|
     ansible.playbook = "provisioning/java-microservices.yml"
     ansible.sudo = true
     ansible.raw_arguments = ["--connection=paramiko"]

     ansible.groups = {
       "spring-boot-microservices" => ["hello_service2"]
     }
   end
 end

Vagrant can generate the inventory file to feed Ansible for us. To do so, we use the ansible.groups parameter in the Vagrantfile. This is how we tell Ansible which inventory group the current Vagrant machine belongs to.
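
With that in place, provisioning runs automatically on the first vagrant up of each machine and can be re-run at will; the inventory that Vagrant generates typically lands under the project’s .vagrant directory, if you want to peek at it.

$ vagrant provision hello_service1
$ vagrant provision hello_service2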

Coming up next…

These are the first steps in microservice HA infrastructure. We have just framed up the necessary elements of an HA solution; in the next installments we’ll flesh out the solution by configuring load balancing, fault tolerance, process isolation and cluster management. You may ask how to monitor all these things, and why I haven’t talked about it until now. Monitoring is a huge topic that I’ll cover in another series of posts…
