In the last installment, we planned the topology and basic stack for our riddance from the Java EE monolith. We coded the infrastructure in Ansible upon a simple Vagrant VM topology, as follows.

(Figure: the Vagrant VM topology, two microservice nodes behind an active/backup proxy pair)

We created four machines: hello-service1 and hello-service2 for our microservice, and proxy-active and proxy-bkp for our load balancer. Now we'll dig into our simple Spring Boot service, and then we'll provision the microservice machines with Java and a standard distribution package for the Debian Linux family, using the Netflix Nebula ospackage plugin for Gradle.

This post wasn't in my original plan, but the Linux standard distribution scheme proved too important to the overall microservices deployment scheme to skip.

A Simple Spring Boot REST Service

There are plenty of examples of Spring Boot microservices around, so we'll stick to the basics, since we're interested in the infrastructure concerns around Spring Boot microservices. I used the Spring Boot REST service guide as a base, so please refer to it for a more comprehensive explanation. Our goodbye service is on GitHub. Here follows its main snippet:

@RestController
@ConfigurationProperties(prefix = "goodbye")
public class GoodbyeController {
    private static final String template = "[%s] Goodbye JavaEE Monolith";
    private final AtomicLong counter = new AtomicLong();

    @Value("${ragna.gooodbye.instance:'NO_INSTANCE_SET'}")
    private String instanceId;

    @RequestMapping("/goodbye")
    public Goodbye goodbye (@RequestParam(value="name", defaultValue = "default node") String name){
        return new Goodbye(instanceId, counter.incrementAndGet(), String.format(template, name));
    }
}
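The Goodbye payload class is omitted from the snippet above. A minimal sketch, consistent with the constructor call above and with the JSON output we'll see later (the exact field layout is an assumption), might look like this:

// Hypothetical sketch of the response payload; only the three fields
// visible in the JSON output are assumed here.
public class Goodbye {
    private final String instanceId;
    private final long id;
    private final String content;

    public Goodbye(String instanceId, long id, String content) {
        this.instanceId = instanceId;
        this.id = id;
        this.content = content;
    }

    // Getters so Jackson can serialize the fields to JSON.
    public String getInstanceId() { return instanceId; }
    public long getId() { return id; }
    public String getContent() { return content; }
}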

The interesting part for us is the use of the @Value annotation on the instanceId attribute, which takes its value from a Java property named ragna.gooodbye.instance (note the triple "o", which must match everywhere the property is set).
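A quick way to see this in action, assuming the executable jar we build below, is to pass the property on the command line, since Spring resolves @Value placeholders from system properties as well as from properties files:

$ java -Dragna.gooodbye.instance=my-test-instance -jar ragna-goodbye.jar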

Executable jar with Spring Boot Plugin for Gradle

The key point in our Spring Boot service is the use of the Spring Boot plugin for Gradle to create an executable jar. Let's highlight the important build.gradle setup for its generation:

buildscript {
   ext { }
   repositories { }
   dependencies {
      classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
      classpath "com.netflix.nebula:gradle-ospackage-plugin:${osPackageVersion}";
   }
}

apply plugin: 'spring-boot'
apply plugin: 'nebula.ospackage'

group = 'ragna'
version = '0.1.0'
mainClassName = 'ragna.goodbye.Application'

dependencies {
   compile("org.springframework.boot:spring-boot-starter-web")
   testCompile("junit:junit")
}

distributions {
    main {
        baseName = 'ragna-goodbye'
        version = "${project.version}"
    }
}

jar {
   baseName = 'ragna-goodbye'
   version = "${project.version}"
    manifest {
        attributes("Implementation-Title": "Ragna Service, "Implementation-Version": "${project.version}")
    }
}

springBoot {
   executable = true
   excludeDevtools = true
}

First, we can see above the buildscript dependencies referring to Netflix Nebula and Spring Boot. We tell Gradle that we are customizing the build by applying the plugins, as we can see for the 'spring-boot' and 'nebula.ospackage' plugin names.

We customize the baseName for the distributions plugin and the jar task (both implicitly imported). This defines the names of the generated zip, tar, and jar packages.

Finally, in the Spring Boot customization, we tell the plugin that we want an executable jar. The plugin instruments the final jar so that we can run it without an explicit call to the java runtime, as follows:

(Screenshot: the ragna-goodbye jar being executed directly from the shell)
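A minimal sketch of that direct execution, assuming the jar was built into Gradle's default build/libs directory with the baseName and version configured above:

# The launch script is embedded in the jar itself, so after granting
# execute permission the file runs like any other binary.
$ chmod +x build/libs/ragna-goodbye-0.1.0.jar
$ ./build/libs/ragna-goodbye-0.1.0.jar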

This facilitates the administration of services in linux environments.

Building a Debian Package with Netflix OSS – Nebula ospackage

Each Linux family (Debian, CentOS, and so on) presents several differences in its packaging system. Here we'll focus on the Debian packaging system, also used by derivatives such as Ubuntu and Mint.

The Linux init system provides us a standard for service administration. To comply with the standard, we need to set up a specific user for the service, initialization and termination scripts, runlevels, and so on. To create a Debian package we'll customize our Netflix Nebula ospackage as follows:

ospackage {
   packageName = 'ragna-goodbye'
   version = "${project.version}"
   release = '1'
   type = BINARY
   os = LINUX

   preInstall file("scripts/rpm/preInstall.sh")
   postInstall file("scripts/rpm/postInstall.sh")
   preUninstall file("scripts/rpm/preUninstall.sh")
   postUninstall file("scripts/rpm/postUninstall.sh")

   into "/opt/local/ragna-goodbye"
   user "ragna-service"
   permissionGroup "ragna-service&"

   from(jar.outputs.files) {
      // Strip the version from the jar filename
      rename { String fileName ->
         fileName.replace("-${project.version}", "")
      }
      fileMode 0500
      into "bin"
   }

   from("install/linux/conf") {
      fileType CONFIG | NOREPLACE
      fileMode 0754
      into "conf"
   }
}

First we define the package name, os, and type. To install the service, we provide custom bash scripts for the following installation events: preInstall, which we'll use to create a custom Linux user and group; postInstall, used to change the log directory ownership to our user; preUninstall, used to stop the service before its removal; and an unused (since our service is simple) postUninstall script. The scripts are placed in the scripts/rpm directory of our application. Here are the snippets:

preInstall.sh:

#!/usr/bin/env bash
echo "Creating group: ragna-service"
/usr/sbin/groupadd -f -r ragna-service 2> /dev/null || :

echo "Creating user: ragna-service"
/usr/sbin/useradd -r -m -c "ragna-service user" ragna-service -g ragna-service 2> /dev/null || :

postInstall.sh:

#!/usr/bin/env bash

chown ragna-service:ragna-service /opt/local/ragna-goodbye/log

preUninstall.sh:

#!/usr/bin/env bash
service ragna-goodbye stop

postUninstall.sh:

#!/usr/bin/env bash

# Nothing here...

Now we set the placement of our jar-packaged microservice to /opt/local/ragna-goodbye via the into attribute, along with the user and permissionGroup. Into the bin target directory we copy the fat jar built by the Spring Boot plugin.

Then we copy the configuration files into the conf target folder, paying attention to the needed file permissions.

We need two files: ragna-goodbye.conf, used to set the bootstrap parameters in bash for the Java service, and ragna-goodbye.properties, the properties file for the Spring Boot service. It is noteworthy that we define the fileType for both, telling Debian that we don't want them replaced in the case of a new installation, since they must contain custom properties for the specific machine. They follow:

ragna-goodbye.conf:

# The name of the folder to put log files in (/var/log by default).
LOG_FOLDER=/opt/local/ragna-goodbye/log

# The arguments to pass to the program (the Spring Boot app).
RUN_ARGS=--spring.config.location=file:/opt/local/ragna-goodbye/conf/ragna-goodbye.properties

ragna-goodbye.properties:

server.port: 9000
server.address: 0.0.0.0
management.port: 9001
management.address: 127.0.0.1

Back in the build.gradle file, we define the creation of the Debian package (remember, Nebula ospackage builds RPMs, too):

 

buildDeb {
   user "ragna-service"
   permissionGroup "ragna-service"
   directory("/opt/local/ragna-goodbye/log", 0755)
   link("/etc/init.d/ragna-goodbye", "/opt/local/ragna-goodbye/bin/ragna-goodbye.jar")
   link("/opt/local/ragna-goodbye/bin/ragna-goodbye.conf", "/opt/local/ragna-goodbye/conf/ragna-goodbye.conf")
}

In the Debian packaging we set up the user and permissionGroup for the service, the log directory, and the link that registers our jar-packaged service in the Linux init system; here we see how handy the Spring Boot executable jar can be. Finally, we set a link for ragna-goodbye.conf in the bin folder, as Spring Boot needs to find it in the same folder as the jar service.

To build the Debian package, we issue the following in our project folder:

$ gradle clean build buildDeb
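The resulting .deb should land under build/distributions (an assumption based on the plugin's default output directory), from where we can install and drive it with the standard Debian tooling:

# Install the package; the init.d link created by buildDeb lets us
# manage it like any other Linux service.
$ sudo dpkg -i build/distributions/ragna-goodbye_0.1.0-1_all.deb
$ sudo service ragna-goodbye start
$ sudo service ragna-goodbye status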

A tip: it's important to pay attention to the generated package names (deb and jar) and the names used in the scripts. At this moment we don't have any validation between the jar and deb names and the scripts used to manage installation and uninstallation.

I misspelled the service name in the preUninstall script and had to manually edit the installed script so it would properly stop the service before issuing sudo apt-get remove. To find it, I used the following snippet:


$ sudo find /var | grep ragna-goodbye

/var/lib/dpkg/info/ragna-goodbye.list
/var/lib/dpkg/info/ragna-goodbye.postinst
/var/lib/dpkg/info/ragna-goodbye.md5sums
/var/lib/dpkg/info/ragna-goodbye.prerm
/var/lib/dpkg/info/ragna-goodbye.postrm
/var/lib/dpkg/info/ragna-goodbye.preinst

Refactoring Java Provisioning using Ansible roles

In the last post we provisioned Oracle Java 8 on our microservice machine using Ansible. Now it's time to install the Debian package containing the Spring Boot service on that machine.

Before provisioning our deb package, we'll refactor the previous provisioning of our machine using Ansible roles. Roles are a modularization mechanism for Ansible that organizes features like tasks, vars, and handlers in a known file structure. We'll create a role for the Java 8 installation tasks included in our microservices.yml file.

First we'll create the following file structure inside the provisioning directory of our Vagrant project:


vagrant_springboot_ha
  - provisioning
     - roles
        - jdk8
           - tasks

We can create it by issuing the following command in the vagrant_springboot_ha folder:


$ mkdir -p provisioning/roles/jdk8/tasks

In the tasks folder we'll create a main.yml file with the contents of the tasks block from microservices.yml, as follows:


---
- name: Install Oracle Java 8 repository
  apt_repository: repo='ppa:webupd8team/java'

- name: Accept Java 8 License
  debconf: >
    name='oracle-java8-installer'
    question='shared/accepted-oracle-license-v1-1' value='true' vtype='select'

- name: Install Oracle Java 8
  apt: name=oracle-java8-installer update_cache=yes state=present force=yes

In the microservices.yml file we remove the JDK installation tasks and include a new roles directive with a sub-item pointing to jdk8, the name of our new role. The file's new content follows:

---
- hosts: spring-boot-microservices
  tasks:
    - debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"

  roles:
    - jdk8

Regardless of the location of the roles clause in the file, the jdk8 role will be provisioned before the execution of any tasks defined in the playbook file.

Provisioning our microservice as a Debian package

To keep the solution simple, I'm leaving aside two important elements of a real solution: Jenkins, as a build pipeline automation tool, and Sonatype Nexus. I'll provide the deb package through GitHub.

We’ll create a new role named ragna-packages with the following command:


$ mkdir -p provisioning/roles/ragna-packages/tasks

There we create the following main.yml file, which installs the ragna-goodbye service using the Ansible apt module. To provide a fully runnable example, I serve the deb file directly from GitHub. As you can see, we have several Ansible variables delimited by "{{" and "}}"; some of them are provided by Ansible's facts gathering, and some will be provided by us. Our package provisioning role follows:


---
- name: download '{{ package_repo }}/{{ package_name }}'
  get_url: url={{ package_repo }}/{{ package_name }} dest=/tmp/{{ package_name }} mode=0440

- name: install '{{ package_name }}' service from '{{ package_repo }}'
  apt: deb=/tmp/{{ package_name }}

- name: placing instance name '{{ inventory_hostname }}' in file '{{ conf_file }}'
  lineinfile: dest={{ conf_file }} line="ragna.gooodbye.instance:{{ inventory_hostname }}"
  notify: restart {{ service_name }}

Above, we download the Debian package we created with Nebula, using Ansible's get_url module. The downloaded package is installed by the apt module, and finally we customize the configuration properties file used by the service: the lineinfile module adds the property ragna.gooodbye.instance, filled with the inventory_hostname Ansible fact.

The notify clause in the above script points to a handler. A handler is an abstraction for service lifecycle handling. The handler for our service is placed in the main.yml file of the provisioning/roles/ragna-packages/handlers directory, which we can create as we did for the tasks file before.


---
- name: restart {{ service_name }}
  service: name={{ service_name }} state=restarted enabled=yes

The notify clause from the lineinfile task refers to the name of the service handler, 'restart {{ service_name }}'. This handler will be notified to restart the service after the configuration property update.

Now we update our microservices.yml, adding the new role with the required parameters: package_repo, package_name, conf_file, and service_name:


---
- hosts: spring-boot-microservices
  tasks:
    - debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"

  roles:
    - jdk8
    - { role: ragna-packages, package_repo: "http://rawgit.com/ragnarokkrr/rgn_vm_containers/master/vagrant_springboot_ha/provisioning", package_name: "ragna-goodbye_0.1.0-1_all.deb", conf_file: "/opt/local/ragna-goodbye/conf/ragna-goodbye.properties", service_name: "ragna-goodbye" }

As you can notice, the ragna-packages role can be reused for any similar Spring Boot service provisioned as a Debian package. I omitted some important steps of a real-world deploy pipeline performed by Jenkins and Sonatype Nexus, but that gap can be filled with a little googling.

Running

To run our project, we go to the vagrant_springboot_ha directory on the host machine and type:

$ vagrant destroy
$ vagrant up hello-service1
$ vagrant ssh hello-service1

Logged in to the hello-service1 machine, we can call the service with wget:
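$ wget localhost:9000/goodbye

This saves a file named goodbye with the following JSON content: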

{"instanceId":"hello-service1",
"id":1,
"content":"Goodbye JavaEE Monolith"}

As you can see, the instanceId is updated with the machine name given by Vagrant multimachine. Here follows the final ragna-goodbye.properties file, modified by Ansible and placed in the /opt/local/ragna-goodbye/conf/ directory, as specified in Nebula:

server.port: 9000
server.address: 0.0.0.0
management.port: 9001
management.address: 127.0.0.1

ragna.gooodbye.instance:hello-service1
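Since the machines sit on the private network defined in the first installment, the same check should work from the host; here I'm assuming hello-service1 kept the 192.168.33.12 address assigned there:

# -qO- prints the response to stdout instead of saving a file.
$ wget -qO- http://192.168.33.12:9000/goodbye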

Concluding…

We saw how to provide a microservice that leverages standard Linux services and administrative features. This is important because we no longer need to rely on proprietary administration tools and GUIs from traditional Java EE application servers, and it establishes Linux as a common, truly standardized platform for the deployment and administration of Java applications.


We can use the facilities of the Ruby-based Vagrantfile DSL to simplify the simulation of a wide topology park.

The typical configuration snippet for a private networked machine follows:


config.vm.define "nginx-1" do |nginx|

nginx.vm.network "forwarded_port", guest: 80, host: 8081
nginx.vm.network "private_network", ip: "192.168.33.10"

end

Since we're talking about an actual programming language, we can just use regular iteration logic to produce several nodes of the same type:


(1..NGINX_INSTANCES).each do |i|
  config.vm.define "nginx-#{i}" do |nginx|
    nginx.vm.network "forwarded_port", guest: 80, host: 8080 + i
    nginx.vm.network "private_network", ip: "192.168.33.1#{i}"

    nginx.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "256"
    end
  end
end

Besides the counter-generated machine names we used, it is necessary to restrict the amount of memory consumed by the NGINX_INSTANCES VMs we spin up, so we limit the memory used by each instance through provider-specific parameters.

In the same Vagrantfile we can provide another sort of machine the same way:


(1..NODE_INSTANCES).each do |i|
  config.vm.define "node-#{i}" do |node|
    node.vm.network "forwarded_port", guest: 80, host: 8090 + i
    node.vm.network "private_network", ip: "192.168.33.2#{i}"

    node.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "256"
    end
  end
end

Finally, to provision these machines we can use Ansible (as shown here) with the Vagrant DSL, dynamically assembling the inventory groups from the same constants:


config.vm.provision "ansible" do |ansible|
  ansible.playbook = "provisioning/entry-playbook.yml"
  ansible.groups = {
    "static_web_servers" => ["nginx-[1:#{NGINX_INSTANCES}]"],
    "application_servers" => ["node-[1:#{NODE_INSTANCES}]"]
  }

  ansible.sudo = true
  ansible.raw_arguments = ["-vvvv"]
end

 

Full project here.


Simply stated, High Availability is about keeping a system continuously operational for an ideally long period of time.

Surely you have heard of the "nines" percentage rates in Service Level Agreements (99%, 99.9%, 99.999%...) assigned to services around the internet or to the services deployed at your job. They refer to the uptime (and downtime) of a service during the year: 99% means a downtime of up to 3.65 days a year, while the "five nines" rate allows a downtime of only 5.26 minutes out of the 525,600 minutes that comprise a whole year.
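A quick shell sanity check of that arithmetic (assuming bc is installed):

$ echo "525600 * (1 - 0.99)" | bc      # 5256 minutes, about 3.65 days
$ echo "525600 * (1 - 0.99999)" | bc   # about 5.26 minutes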

This SLA is at the same time the pride and joy of solution providers and the source of oganza bisasa moments for business contractors, who wonder how such wonders could be achieved.

So, how can such a thing be achieved? Basically, by observing three system design principles:

  1. Elimination of SPOFs (Single Points of Failure);
  2. Reliable crossover;
  3. Detection of failures, and recovery that goes unnoticed by the user.

These design principles are built into a Java EE solution and define the very need for such a middleware platform.

In this series we'll walk through an alternative approach based on FOSS, virtualization, containerization, and Linux infrastructure. Our tech stack for enabling a Java microservices architecture is composed of:

  • Spring Boot
  • Ubuntu
  • Vagrant
  • Ansible
  • HAProxy
  • Keepalived
  • Docker
  • Docker Swarm

But first, let's quickly review HA with Java EE.

Java EE on HA

An application server like Oracle WebLogic Server approaches HA by implementing clusters of instances that provide load balancing and failover features to a traditional Java EE application. To do so, OWS provides a mechanism to tie together independent instances of web containers and EJB containers.

Each server maintains a list of the other servers in a cluster, logically configured via the administration console. Each server then tracks the status of every other server in the cluster through a heartbeat mechanism: a TCP/IP unicast or UDP multicast message sent to each other instance in the cluster. When the messages from a node fail to arrive, that node is considered down and is readily removed from the active instances list.

We can observe that OWS complies with principles 1 and 3 of HA system design. For web applications, the suggested OWS architecture relies on a hardware load balancer or an HTTP server, like MS IIS, Apache, or even another OWS instance, to provide a sort of reliable crossover for our Java EE application.

If you are a savvy Java EE rat: bingo. I omitted session replication from the OWS equation. That was for the sake of conciseness of this HA series.

No, I'm lying. Stateful session replication is deadly boring, brings doubtful outcomes, and for our stateless Spring Boot REST service it will not be necessary. For a complete reference on OWS clustering, you can refer to this book.

HA with Commodity Software

Achieving HA in a one-stop-shop (well) paid solution can bring comfort and manageability, since paid solutions often offer full administration consoles for deploying clustered environments.

Nevertheless, virtualization, the cloud, NoSQL solutions fronted by microservice APIs, and more recently process-isolating containerization bring new implementation possibilities to the table, and new issues.

I have always been a fan of commoditization in technology, and of FOSS, even when it was not accepted in enterprise environments. Commoditization and FOSS mean that we can achieve results comparable to those of giants like Oracle.

In traditional Java EE we talk about Tomcat, Apache, and JBoss (WildFly). But with the recent microservices movement upon virtualized environments, we go one step further in commoditization: we are leveling Application Development down to the Linux ecosystem.

One advantage of microservices over complex Java EE application servers, for the devops movement, is that we can now rely on Linux software to unify the tech stack on the ops side. For HA, that means we don't need a multi-certified professional shaman to cast spells on a proprietary Java EE stack in order to manage application server magic.

If we are talking about Linux commodity software for HA, we are talking about HAProxy and Keepalived.

But let's calm down the Application Server refugees, like myself, who at this point may wonder how to manage the configuration of several environments without an admin console.

Coding the Infrastructure

One of the advantages of one-stop-shop solutions like (paid) OWS or (free) WildFly is that they offer a unified environment to implement HA solutions.

Some developers and Java architects may feel uncomfortable relying only on the Linux stack, mainly because of administration concerns, and that's a fair point. But in an age in which we have to live with a plethora of fine-grained services, I'm afraid that administrators driving a cozy console UI (ok, not so cozy...), even when supported by Jenkins, would quickly be overwhelmed by an infinitude of new error-prone points, clicks, and ssh sessions.

 

Differently from immemorial bash and bat files, infrastructure as code brings configuration capabilities into application code. With tools like Vagrant, Docker, and Ansible we can automate virtually anything that a sysadmin would, and even more: we can orchestrate infrastructure close to the needs of our architecture, integrating our application architecture with operating system resources.

This infrastructure coding is exactly what we will do here. First things first.

For now, trust me when I tell you that we need at least 4 machines to demonstrate a full HA solution: the Spring Boot microservice machines and the load balancer machines.

And as we are talking about SPOF elimination, we need to double it all! That can be seen in the next figure.

(Figure: the four-machine topology, duplicated microservice and load balancer nodes)

To start coding the infrastructure, we begin with all the different machine profiles planned for the architecture. Enter Vagrant.

Vagrant

Vagrant is a tool that simplifies the workflow and reduces the workload necessary to run and operate virtual machines by offering a simple command-line interface to manage VMs. To run a VM and connect to it from your host workstation, you just need two commands:


$ vagrant up

$ vagrant ssh

Under the hood, with these simple commands Vagrant saves me the following steps:

  1. Downloading the VM image;
  2. Starting the VM;
  3. Configuring the VM's resources, like RAM, CPUs, shared directories, and network interfaces;
  4. Optionally installing software within the VM with tools like Puppet, Chef, Ansible, and Salt, which we call provisioning.

Vagrant is not a VM tool itself; it works upon VM solutions like VirtualBox, VMware, and Hyper-V. Vagrant unifies the management of these tools, and that's all the magic.

Installing Vagrant

As we saw, Vagrant relies on well-known VM tech, which in Vagrant parlance we call a VM Provider; we'll use the free solution VirtualBox. My host machine is an Ubuntu derivative, and VirtualBox is available in the default apt-get repositories, so I can install it just by issuing:


$ sudo apt-get install virtualbox

If you want a version other than the one available in the repositories, you can follow these steps. To test the installation, just run the following command and wait for the VirtualBox visual console:


$ virtualbox 

Vagrant is also available in the Ubuntu repositories, so you can install it just by issuing:


$ sudo apt-get install vagrant

Our first Vagrant machine

To code our infrastructure with Vagrant we use a Vagrantfile. We use it to describe the type of machine we need and how to configure and provision it. A Vagrantfile is a kind of Ruby DSL that is supposed to be versioned so it can be shared by developers across individual workstations.

Let's create a directory named vagrant_springboot_ha and there create a file named Vagrantfile with the following contents:


VAGRANT_API = "2"

Vagrant.configure(VAGRANT_API) do |config|

  config.vm.box = "ubuntu/trusty64"

end

Above, we tell Vagrant that we wish to create a 64-bit Ubuntu VM. This image comes from the HashiCorp repository of images, which we can search at https://atlas.hashicorp.com/boxes/search. An image used by Vagrant is called a Vagrant Box. To define a custom path to cache these images on our local machine, we can use the following command:


$ export VAGRANT_HOME=/opt/pit/vagrant_home

And finally, we can access this machine with the following commands, issued inside the directory containing our Vagrantfile:


$ vagrant up
$ vagrant ssh

That's it! Now we have our configurable Ubuntu-based virtual machine to use as the development environment for our architecture. If you open VirtualBox, you can see the new machine, as follows:

(Screenshot: the new VM listed in the VirtualBox console)

 

Vagrant Multimachine: simulating our topology with Phoenix Servers

Our topology requires at least four machines. Fortunately, Vagrant supports the configuration of several machines with a mechanism called multimachine, through which we can add as many machines as needed, based on different boxes if necessary.

Before that, we can fearlessly destroy this Phoenix Server with the following commands:


$ vagrant halt
$ vagrant destroy

Why did I call it a Phoenix Server? Just because, after destroying it, I can create a brand new, identical Ubuntu machine instance based on the standard HashiCorp repository image just by repeating the above steps.

To define the four machines specified above, just update our Vagrantfile with the following content:


VAGRANT_API = "2"

Vagrant.configure(VAGRANT_API) do |config|

 config.vm.box = "ubuntu/trusty64"

 config.vm.define "load_balancer_active" do |load_balancer_active|
  load_balancer.vm.network "private_network", ip: "192.168.33.10"
 end

 config.vm.define "load_balancer_bkp" do |load_balancer_bkp|
  load_balancer.vm.network "private_network", ip: "192.168.33.11"
 end

 config.vm.define "hello_service1" do |hello_service1|
  hello_service1.vm.network "private_network", ip: "192.168.33.12"
 end

 config.vm.define "hello_service2" do |hello_service2|
  hello_service2.vm.network "private_network", ip: "192.168.33.13"
 end

end

In the above code we defined a topology of 4 servers: hello_service1, hello_service2, load_balancer_bkp, and load_balancer_active. We can now bring them all up by issuing the commands:

$ vagrant up hello_service1
$ vagrant up hello_service2
$ vagrant up load_balancer_bkp
$ vagrant up load_balancer_active

As you can see, we declaratively configured private networking for all machines and assigned a specific IP address to each one. For the disbelievers, we can run a test: in two different terminals (don't forget to set VAGRANT_HOME) we open ssh sessions to machines hello_service1 and hello_service2, and in the first terminal type:

$ vagrant ssh hello_service1
vagrant@vagrant-ubuntu-trusty-64:~$ ping 192.168.33.13

And in the second terminal:

$ vagrant ssh hello_service2
vagrant@vagrant-ubuntu-trusty-64:~$ ping 192.168.33.12

An oganza bisasa moment.

Once we have all the machines of our topology, we can add some purpose to them. Let's provision them using Ansible.

Ansible

Ansible is an automation tool. It runs remotely through ssh against machines registered in a machine inventory, provisioning them according to definitions specified in an automation language called playbooks.

For each machine resource type in our architecture, like load balancer or microservice, we can use a reusable set of roles comprising configurations like Java, Keepalived, and HAProxy. Vagrant has native support for Ansible and, through this integration, can seamlessly assemble the inventory based on each machine defined in the Vagrantfile.

To install Ansible on an Ubuntu-derivative system, run the following commands:


$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

First things first: we'll provision our microservice and load balancer machines without the concept of Ansible roles for now; in the next posts we'll refactor our scripts. To make things clearer, let's create a provisioning subdirectory in our Vagrant project:


$ mkdir provisioning

Then we'll create our first playbook for the Java microservices machines, named microservices.yml, as follows:


---
- hosts: spring-boot-microservices
  tasks:
    - debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"

    - name: Install Oracle Java 8 repository
      apt_repository: repo='ppa:webupd8team/java'

    - name: Accept Java 8 License
      sudo: yes
      debconf: >
        name='oracle-java8-installer'
        question='shared/accepted-oracle-license-v1-1' value='true' vtype='select'

    - name: Install Oracle Java 8
      sudo: yes
      apt: name=oracle-java8-installer update_cache=yes state=present force=yes

As we can see, we translated a sequence of bash commands into the YAML-based playbook DSL. We added the Ubuntu PPA repository, used debconf to accept the Oracle Java license, and finally installed Java on our microservice machine.

You can see that in the hosts directive we specify a group of machines from our inventory named spring-boot-microservices.
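Nothing stops us from running this playbook by hand, assuming an inventory file that lists our machines under the spring-boot-microservices group (Vagrant will generate one for us below):

$ ansible-playbook -i inventory provisioning/microservices.yml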

Finally, we come back to our Vagrantfile and include the Ansible provisioner configuration for both Spring Boot machines:

 


 config.vm.define "hello_service1" do |hello_service1|
 hello_service1.vm.network "private_network", ip: "192.168.33.12"

 hello_service1.vm.provision "ansible" do |ansible|
 ansible.playbook = "provisioning/microservices.yml"
 ansible.sudo = true
 ansible.raw_arguments = ["--connection=paramiko"]

 ansible.groups = {
 "spring-boot-microservices" => ["hello_service1"]
 }
 end
 end

 config.vm.define "hello_service2" do |hello_service2|
 hello_service2.vm.network "private_network", ip: "192.168.33.13"

 hello_service2.vm.provision "ansible" do |ansible|
 ansible.playbook = "provisioning/microservices.yml"
 ansible.sudo = true
 ansible.raw_arguments = ["--connection=paramiko"]

 ansible.groups = {
 "spring-boot-microservices" => ["hello_service1"]
 }
 end
 end

Vagrant can generate an inventory file for us and feed it to Ansible. To do so, we use the ansible.groups parameter in the Vagrantfile. This is how we tell Ansible which inventory group the current Vagrant machine belongs to.
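Just to make the mechanism concrete, here is a sketch of what such a generated inventory can look like (the path and the ssh port are assumptions; Vagrant writes the actual file under the project's .vagrant directory):

# .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory (assumed path)
hello_service1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222

[spring-boot-microservices]
hello_service1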

Coming up next…

These are the first steps of our microservice HA infrastructure. We just framed up the necessary elements of an HA solution; in the next installments we'll flesh out the solution by configuring load balancing, fault tolerance, process isolation, and cluster management. You may ask how to monitor all these things, and why I haven't talked about it until now. Monitoring is a huge topic that I'll cover in another series of posts...


In this series I will show how to get rid of one of the myths of the Java EE Application Server era: High Availability.

During the hegemonic run of the Java EE container, we developers were often "protected" from distributed system concerns by those large pieces of enterprise middleware wonder known as containers.

I don't consider this a flaw per se in Java EE, but perhaps a necessary step in the progress of abstraction, one that worked well for a certain time in history and that we are gradually, depending on the shop, leaving behind.

Today, with virtualization and containerization tech, microservices, devops engineering techniques, and commodity Linux tooling, we can implement HA in a more self-evident and flexible way. We'll see how to achieve HA with this "alternative" stack in the following posts:

Part 1 – Coding the infrastructure for Spring Boot Microservices with Ansible and Vagrant

Part 2 – Provisioning Spring Boot Microservices as Debian Package with Ansible and Netflix Nebula OS Package

Part 3 – Load Balancing Spring Boot Microservices with HAProxy

Part 4 – Adding Fault Tolerance to Spring Boot Microservices with VIPs and Keepalived

Part 5 – Isolating Spring Boot Microservices with Docker

Part 6 – Managing Spring Boot Microservices Clusters with Docker Swarm

See you in the first installment.

Update (2016-05-13):

Just included a new Part 2. The Debian packaging topic simply grew too much.


Five Ws

  • Who is it about? The IETF (Internet Engineering Task Force).
  • What happened? The IETF defined a data modeling language for NETCONF, a protocol for managing equipment configuration.
  • When did it take place? Since 2002 (RFC 3535, Overview of the 2002 IAB Network Management Workshop).
  • Where did it take place? The IETF.
  • Why did it happen? In response to the shortcomings of SNMP/SMI network configuration management (lack of backup-restore support, element configuration, transactions, single- or multi-box)...

Outline

NETCONF is a network management protocol designed to support management of configuration, including:

  • Distinction between configuration and state data
  • Multiple configuration data stores (candidate, running, startup)
  • Configuration change validations
  • Configuration change transactions
  • Selective data retrieval and filtering
  • Streaming and playback of event notifications
  • Extensible remote procedure call mechanism

YANG is a data modeling language designed to write data models for the NETCONF protocol, with the following features:

  • Human readable
  • Hierarchical configuration data models
  • Reusable types and groupings (structured types)
  • Extensibility through augmentation mechanisms
  • Support for definitions of operations (RPCs)
  • Formal constraints for configuration validation
  • Data modularity through modules and submodules
  • Well-defined versioning rules

NETCONF Layering

NETCONF Operations

  • <get-config>: retrieves all or part of a configuration from a datastore;
  • <get>: retrieves the running configuration and device state information;
  • <edit-config>: loads all or part of a specified configuration into the specified target configuration;
  • <copy-config>: creates or replaces an entire configuration datastore with the contents of another complete configuration datastore;
  • <delete-config>: deletes a configuration datastore (not applicable to running);
  • <lock>: locks a device;
  • <unlock>: unlocks a device;
  • <close-session>: graceful session termination;
  • <kill-session>: forced session termination.
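To give a flavor of the wire format, here is a minimal sketch of a <get-config> request retrieving the running datastore (the message-id value is arbitrary):

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>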

YANG Features

  • Maps directly to NETCONF (XML) content;
  • Compact C/Java-like syntax focused on readability;
  • Data type system compatible with the next-gen SNMP type system, XML, and XSD;
  • Translatable to DSDL, RelaxNG(!), Schematron, and DSRL... (RFC 6110);
  • Informal translation to W3C XML Schema (pyang, jyang?);
  • Organization
    • Leaf, leaf-list, container, lists, grouping, choice
  • Data model structure
    • Module, submodule, augment, if-feature, when
  • Constraints
    • must, unique, min-elements, max-elements, mandatory
  • Data types
    • Many built-in, sub-typing, restrictions
  • Reusable groupings
    • Grouping, uses

YANG example

module acme-system {
  namespace "http://acme.example.com/system";
  prefix "acme";

  organization "ACME Inc.";
  contact "joe@acme.example.com";
  description
    "The module for entities implementing the ACME system.";

  revision 2007-11-05 {
    description "Initial revision.";
  }

  container system {
    leaf host-name {
      type string;
      description "Hostname for this system";
    }

    list interface {
      key "name";
      description "List of interfaces in the system";

      leaf name {
        type string;
      }
      leaf type {
        type string;
      }
      leaf mtu {
        type int32;
      }
    }
  }
}

NETCONF Open Source

YANG Open Source

A Layered Comparison

Comparing the SNMP, NETCONF, and SOAP-based management stacks layer by layer:

  • Data models: MIBs (SNMP), Modules (NETCONF)
  • Data modeling language: SMI (SNMP), YANG (NETCONF)
  • Management operations: SNMP (SNMP), NETCONF (NETCONF)
  • RPC protocol: BER (SNMP), XML (NETCONF), XML (SOAP)
  • Transport stack: UDP (SNMP); SSH, BEEP, SOAP, TLS (NETCONF); SSL, HTTP, TCP (SOAP)


Balanced Scorecard Sample

For my first post series I chose the Personal Development Plan. Why did I choose it? Simply because, for the last years (after I finished my masters), I've been trying to balance my personal and professional lives. It is tough, at least for me, since I sometimes tend to be a workaholic and even a little... obsessed.

My first attempt was to apply a method roughly similar to the Balanced Scorecard. I was satisfied with the end result, but had the sensation that some gaps in my system were quite disturbing; most specifically, which Key Performance Indicators could I use?

After some digging on the web I found some good material about career planning and its balance with personal life. I reached PDP concepts, then methods and templates, and finally some applied examples. As I progress in my personal tailoring of the process, I will post my advances here.


After an almost two-year hiatus, I've decided to start a new blog.

I won't post anymore on my former Blogspot blog, http://soft-shaman.blogspot.com/ (in Portuguese). Due to personal matters, I couldn't resume my blogging activities in a reasonable time, which resulted in a rapid erosion and then a final loss of interest in resuming them. So I've decided to cease it, take a deep breath, and start this new endeavor in a new home.

Obviously, some of the things I intended in those days have changed, since I have some new projects now. They still comprehend hard and soft skills, and some business and computer science topics, but not necessarily those "old" ones.

My interests by now are to take some time away from Java EE and explore new paradigms, new civilizations (ok, I'll leave this one to Spock Prime, but you know, it's hard to stop the stream). Uhhh, resetting... I intend to take my head off Java in favor of a broader horizon: Erlang, Scala, Software Engineering, some business topics, some soft skills for dealing with Software Architecture, and whatever roars and gnashing of teeth there may be in my disturbed mind.

Besides that, I'll write about some humanities matters too, in a counterpart of this blog, Ragnarokkrr – The Quickening. At last, back to the computer science craft: in the meantime I've become a college teacher, so I'll write about some academic matters regarding algorithms and didactic methods in the courses I teach. I don't know yet if I will write it there or create a specific blog.

So... see ya. Now it's time to tame this WordPress thing.

UPDATE! (02-24-2011)

Some more personal, frequent, and raw material I'll post on my tumblr blog, Nilseu Padilha's Ragnarokkrr.