Single-node Kubernetes deployment

In order to test k8s, you can always deploy a single-node setup locally using minikube. However, minikube is a bit limited if you want to test interactions that require your services to be externally accessible from a mobile or web front-end.

For this reason, I created a basic k8s setup for a single CoreOS node in Azure using https://coreos.com/kubernetes/docs/latest/getting-started.html . Once I had done this, I decided to automate its deployment via a script.

It requires a running CoreOS instance. Connect to it and run:

git clone https://github.com/vtuson/k8single.git k8
cd k8
./kubeform.sh [myip-address]   # the IP associated with your eth interface; you can find it using ifconfig

This will deploy k8s onto the single node, set up kubectl on the node, and deploy the SkyDNS add-on.

It also includes a busybox pod file that can be deployed with:

kubectl create -f files/busybox

This might come in useful for debugging issues with the setup. To execute commands in busybox, run:
kubectl exec busybox -- [command]
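
For example, a common check is whether cluster DNS resolves service names from inside a pod (kubernetes.default exists in any cluster; my-service is a hypothetical service of your own):

kubectl exec busybox -- nslookup kubernetes.default
kubectl exec busybox -- wget -qO- http://my-service:8080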

The script and config files can be accessed at https://github.com/vtuson/k8single

If you hit any issues while deploying k8s on a single node, a few things worth checking are:

sudo systemctl status etcd
sudo systemctl status flanneld
sudo systemctl status docker

It is also worth checking which Docker containers are running and, if necessary, checking their logs:

docker ps -a
docker logs [container-id]

Deploying Heapster to Kubernetes

I recently blogged about deploying Kubernetes in Azure. After doing so, I wanted to keep an eye on the usage of the instances and pods.

Kubernetes recommends Heapster as a cluster aggregator to monitor usage of nodes and pods. This is very handy if you are deploying in Google Compute Engine (GCE), as it has a pre-built dashboard to hook it up to.

Heapster runs on each node, collects system and pod statistics, and pipes them to a storage backend of your choice. A very handy feature of Heapster is that it exports user labels as part of the metadata, which I believe can be used to create custom reports on services across nodes.

[Figure: Heapster monitoring architecture]

If you are not using GCE or just don't want to use their dashboard, you can deploy a combo of InfluxDB and Grafana as a DIY solution. While this seems promising, the documentation is, as usual, pretty short on details.

Start by using the “detailed” guide to deploy the add-on, which basically consists of:

**Wait! Don't run this yet; finish reading the article first.**

git clone https://github.com/kubernetes/heapster.git
cd heapster
kubectl create -f deploy/kube-config/influxdb/

These steps expose Grafana and InfluxDB via the API proxy; you can see them in your deployment by running:

kubectl cluster-info

This didn't quite work for me, and while rummaging in the YAMLs, I found out that this is not really the recommended configuration for live deployments anyway.

So here is what I did:

  1. Remove the env variables from influxdb-grafana-controller.yaml.
  2. Expose the service as NodePort or LoadBalancer, depending on your preference, in grafana-service.yaml. E.g., under the spec section add: type: NodePort (see the sketch after this list).
  3. Now run: kubectl create -f deploy/kube-config/influxdb/
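
As a sketch, this is roughly what grafana-service.yaml looks like after the change in step 2. The service name matches the one used in the describe command below; the port numbers and selector label follow the Heapster manifests of the time and may differ in your checkout:

apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: kube-system
spec:
  type: NodePort        # the added line; use LoadBalancer if you prefer
  ports:
  - port: 80
    targetPort: 3000    # Grafana's default listen port
  selector:
    name: influxGrafana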

You can see the exposed port for Grafana by running:
kubectl --namespace=kube-system describe service grafana-service

In this deployment, all the services, replication controllers, and pods are added under the kube-system namespace, so remember to add the --namespace flag to your kubectl commands.

Now you should be able to access Grafana on any external IP or DNS name on the port listed under NodePort. But I was not able to see any data.

Log in to Grafana as admin (admin:admin by default), select DataSources > influxdb-datasource, and test the connection. The connection is set up as http://monitoring-influxdb:8086, which failed for me.

Since InfluxDB and Grafana are both in the same pod, you can use localhost to access the service. So change the URL to http://localhost:8086, save, and test the connection again. This worked for me, and a minute later I was getting real-time data from nodes and pods.

Proxying Grafana

I run an nginx proxy that terminates HTTPS requests for my domain, and I created an https://mydomain/monitoring/ endpoint as part of it.
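
For reference, a minimal sketch of the relevant nginx snippet, where the node address and NodePort are placeholders for your own values:

location /monitoring/ {
  proxy_pass http://10.0.0.4:30080/;  # placeholder: any node IP plus Grafana's NodePort
  proxy_set_header Host $http_host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
}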

For some reason, Grafana needs to know the root URL format it is being accessed from in order to work properly. This is defined in a config file. While you could change it and rebuild the image, I preferred to override it via an environment variable in the influxdb-grafana-controller.yaml Kubernetes file. Just add this to the Grafana container section:

env:
- name: GF_SERVER_ROOT_URL
  value: "%(protocol)s://%(domain)s:%(http_port)s/monitoring"

You can do this with any of the Grafana config values, which allows you to reuse the official Grafana Docker image straight from the main registry.
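
For example, assuming Grafana's usual GF_<SECTION>_<KEY> mapping between config file settings and environment variables, the default admin password could be overridden the same way:

- name: GF_SECURITY_ADMIN_PASSWORD
  value: "something-better-than-admin"   # hypothetical value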

Deploying Kubernetes on Azure

I recently set out to do my first live deployment of Kubernetes, after having played successfully with minikube.

When trying to deploy Kubernetes in a public cloud, there are a couple of base options: you can start from scratch or use one of the turnkey solutions.

There are two turnkey solutions for Azure, Flannel-based and Weave-based. Basically these are two different networking solutions, but the actual turnkey solutions differ in more than just the networking layer. I tried both and had issues with both, yay!! However, I liked the Flannel solution over Weave's straight away; it seems to configure and use Azure better. For example, it uses VM scale sets for the slave nodes, and configures external IPs and security groups. This might be because the Flannel solution is sponsored by Microsoft. In any case, I ended up focusing on it over Weave's.

The documentation is not bad, but a bit short on some basic details. I did the deployment on both Ubuntu 14.04 and OS X, and it worked on both. The documentation lists jq and docker as the main dependencies. I found issues with the older versions of jq that are part of the Ubuntu 14.04 archive, so make sure to install the latest version from the jq website.

Ultimately, kube-up.sh seems to be a basic configuration wrapper around azkube; a link to it is buried at the end of the Kubernetes doc. Cole Mickens is the main developer of azkube and the turnkey solution. While looking around his GitHub profile, I found this very useful link on the status of support for Kubernetes in Azure. I would hope this eventually lands on the main Kubernetes doc site.

As part of the first install instructions, you will need to provide the subscription and tenant IDs. I found the subscription ID easily enough in the web console, but the tenant ID was a bit more elusive. Although the tenant ID is not required for installations of 1.3, the script failed to execute without it. It seems the best way to find it is the Azure CLI tool, which you can install via npm (Node.js):


npm install -g azure-cli
azure login
azure account show

This will give you all the details that you need to set it up. You can then just go ahead, or you can edit the details in cluster/azure/config-default.sh.

You might want to edit the number of VMs that the operation will create. Once you run kube-up.sh, you should hopefully get a working kubernetes deployment.

If, for any reason, you would like to change the version to be installed, you will need to edit the file called “version” under the kubernetes folder set up by the first installation step.

The deployment comes with a ‘utils’ script that makes it very simple to do a few things. One is copying the SSH key that gives you access to the slaves onto the master:

$ ./util.sh copykey

From the master, you just need to access a slave's internal IP using the “kube” username, specifying your private key for authentication.
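
A minimal sketch, where the slave's internal IP and the key path are placeholders for your own values:

ssh -i ~/.ssh/id_rsa kube@10.240.0.5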

Next, I would suggest configuring your local kubectl and deploying the SkyDNS add-on. You will really need this to easily access services.

$ ./util.sh configure-kubectl
$ kubectl create -f https://raw.githubusercontent.com/colemickens/azkube/v0.0.5/templates/coreos/addons/skydns.yaml

And that is it: if you run kubectl get nodes, you will be able to see the master and the slaves.

Since Azure does not have direct integration for load balancers, any services that you expose will need to be configured with a self-deployed solution. But it seems that version 1.4 of Kubernetes is coming with equivalent support for Azure to what the current versions boast for AWS and co.
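
Until that lands, a simple self-managed option is exposing a service on a NodePort and opening that port in the Azure network security group yourself. A minimal sketch, with hypothetical names and ports:

apiVersion: v1
kind: Service
metadata:
  name: my-web              # hypothetical service name
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080         # must be in the NodePort range (30000-32767 by default)
  selector:
    app: my-web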

Standing up a private Docker Registry

First of all, I wanted to recommend the following recipe from DigitalOcean on how to roll out your own Docker Registry on Ubuntu 14.04. As with most of their stuff, it is super easy to follow.

I also wanted to share a small improvement on the recipe to include a UI front-end to the registry.

Once you have completed the recipe and have a registry secured and running, you can extend your docker-compose file to look like this:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
    - 8080:8080
  links:
    - registry:registry
    - web:web
  volumes:
    - ./nginx/:/etc/nginx/conf.d:ro

web:
  image: hyper/docker-registry-web
  ports:
    - 8000:8080
  links:
    - registry
  environment:
    REGISTRY_HOST: registry

registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data

You will also need to include a configuration file for web in the nginx folder.

file: ~/docker-registry/nginx/web.conf

upstream docker-registry-web {
  server web:8080;
}

server {
  listen 8080;
  server_name [YOUR DOMAIN];

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  location / {
    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;

    proxy_pass http://docker-registry-web;
    proxy_set_header Host $http_host;  # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr;  # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}

Run docker-compose up, and you should have an SSL-secured UI front-end on port 8080 (https://yourdomain:8080/).
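
As a quick sanity check that both the registry and the UI work (assuming the registry itself is still served on port 443 as in the DigitalOcean recipe; yourdomain.com is a placeholder for your own domain), tag and push any local image and then look for it in the web UI:

docker login yourdomain.com    # the basic-auth credentials from the recipe
docker tag busybox yourdomain.com/sanity-check
docker push yourdomain.com/sanity-check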
If you have any improvement tips I am all ears!

[Juju Adventure] Droidcon talk video

Last month I attended DroidCon 2012 and gave a talk about using Juju and Ubuntu to deliver Android applications with a web service back-end. The guys at Skillmatters were kind enough to record and edit the video, and it is now publicly available.


[Juju Adventure] Live testing

If you thought I had concluded my blog series on demonstrating how Ubuntu is the best environment to write up “connected” or “cloud backend” Android Apps, think again!

So far this is what we covered:

  • Proof that you can access a Juju local environment from the Android Emulator – done!
  • Using a few charms from the charm store plus a custom one, set up a MySQL database that can be exposed through a web service with simple commands/steps – done!
  • Develop a TODO list Android app and connect it to the web service, so they talk to each other – done!

The next step is “How to test that it all works in a production environment”. If you have tested both your Android application and your web service to death locally, it is time to check whether they will still work in real life. How do we do this? With a few simple commands, we are going to deploy the same web service into the Amazon cloud, and the application onto a mobile phone. All managed from the comfort of my Ubuntu desktop.

Deploying to Amazon Web Services (AWS)

The only prerequisite here is that you have an AWS account. Once you are logged into the AWS website, you can find the credentials that you will need to set up your Juju environment. You can find a tutorial on how to set up your Elastic Compute Cloud (EC2) environment here.

The required information for Juju is stored in the environments.yaml file in the ~/.juju folder. In the following sample file you can see that two environments have been defined:

  • “local” is the environment that I have been using in my PC to test my web service using LXC containers.
  • “aws” gives Juju the information required to deploy services using my Amazon account.
  • “local” is set as the default. This means that if I just run “juju bootstrap”, the command applies to the local environment. To bootstrap the AWS environment, I would run “juju bootstrap -e aws”.
default: local
environments:
 aws:
  type: ec2
  access-key: YOUR-ACCESS-KEY-GOES-HERE
  secret-key: YOUR-SECRET-KEY-GOES-HERE
  control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
  admin-secret: 81a1e7429e6847c4941fda7591246594
  default-series: precise
  juju-origin: ppa
  ssl-hostname-verification: true
 local:
  type: local
  control-bucket: juju-a14dfae3830142d9ac23c499395c2785999
  admin-secret: 6608267bbd6b447b8c90934167b2a294999
  default-series: precise
  juju-origin: distro
  data-dir: /home/victorp/myjuju_data

With my environments now configured, it's time to deploy my services. The first step is to bootstrap the environment:

juju bootstrap -e aws

With the command completed successfully, I can check the status and see that the Juju control instance is now up and running in Amazon:

juju status -e aws
2012-09-19 11:43:34,248 INFO Connected to environment.
machines:
  0:
    agent-state: running
    dns-name: ec2-75-101-189-208.compute-1.amazonaws.com
    instance-id: i-0e4f7174
    instance-state: running
services: {}
2012-09-19 11:43:35,322 INFO 'status' command finished successfully

Let's continue by deploying the services. As I am only testing, I want to pay the minimum for it, so I will ask Juju to set a constraint to use only micro instances. Then I will deploy a MySQL and a LAMP service:

juju set-constraints instance-type=t1.micro -e aws
juju deploy mysql -e aws
juju deploy --repository ~/mycharm local:lamp -e aws
juju set lamp website-database="android_todo" -e aws
juju set lamp website-bzr="lp:~vtuson/+junk/mytodo_web" -e aws
juju expose lamp -e aws
juju add-relation lamp mysql -e aws

With all my services now running, I can go to the Amazon EC2 instance console and see how they have been deployed as micro instances. I can now also enter the public address for my LAMP service and see the ToDo list table as expected.

Testing the Android App on a real phone

Running juju status, I can retrieve the public URL for the lamp service, and I replace the uri variable in the TodoSourceData class with “ec2-107-22-151-171.compute-1.amazonaws.com/database.php”. The next step is to plug an HTC Desire, set up in debug mode, into my laptop's USB port. The rest is taken care of by the Android Eclipse plug-ins. When I click the run project button, I am presented with a choice of targets:

I just need to press “OK” and my ToDo app is launched on the handset. Opening the menu options and pressing “Sync” fetches the ToDo data from the Amazon services, as expected:

That is all for today! Let me know if you have any suggestions on what else I should cover on this blog series.

[Juju Adventure] Android ToDo and Juju

Time for the next chapter of my blog series about demonstrating how Ubuntu is the best environment to write “connected” or “cloud backend” Android apps. As you might know, the Android SDK allows you to set up a sandboxed environment to develop mobile apps on your desktop; using Juju, you can do the same for cloud apps.

To walk you through how to put these great development tools together, I set out to accomplish:

  • Proof that you can access a Juju local environment from the Android Emulator – done!
  • Using a few charms from the charm store plus a custom one, set up a MySQL database that can be exposed through a web service with simple commands/steps – done!
  • Develop a TODO list Android app
  • Connect the Android app and the web service, so they talk to each other.

Today I am going to cover the bottom two bullet points in one go! For this post, I am going to assume that you know a bit of Android development. If you want a great source of introductory material, check Lars Vogel's website.

I have created a simple ToDo Android application that can store tasks in a local SQLite database and allows you to “star” important items. The code for my Simple Todo app is hosted on Launchpad. I have written my application for Android 2.3, but you can use a later version.

Reading remote data from the MySQL server is confined to a small class that retrieves a JSON object and translates it into a TodoItem object. Equally, the server-side code that prints the content table as a JSON object is extremely simple. Beyond this, you can go crazy and implement a RESTful API to sync the databases.

At the moment, I am just inserting server-side data into the local database and making sure I don't add duplicates. Here is a video that shows how easy it is to work with and test both environments:

Or Click Here for the video.

The same environment should then work if you are running the Android application on an external phone. But that is another blog post 😉