Bottle of OpenShift Express

Finally, Red Hat has jumped on the bandwagon and announced a PaaS platform that promises much: OpenShift. OpenShift provides PaaS in three flavours, OpenShift Express, OpenShift Flex and OpenShift Power. Each of them reads like this.

OpenShift Express
Free and easy cloud deployments
PHP, Python, Ruby
Git push and leave the rest to us

OpenShift Flex
Auto-scale new and existing apps in the cloud
PHP, Java EE, MySQL, MongoDB, Memcache, DFS
Full control over configuration & Built-in monitoring

OpenShift Power*
Complete control over cloud deployments
Custom topologies, root access, multi-tier dependencies
Turn any application into a cloud deployment template

* Coming soon.

Continue reading

What is Cloud?

Part of my job involves inducting people to Cloud, or Cloud Computing. Though most of my work specifically revolves around Amazon Web Services and other IaaS stuff, there is one question that is constantly asked: “What is Cloud?”. It has recurred so many times that I have tried to answer it in just as many ways.

Every time I answer it I actually end up answering “How do YOU define Cloud?” rather than “What is Cloud?”. And I am convinced that “What is Cloud?” cannot actually be answered, just like “What is Divinity?”. Though Cloud is more accessible than Divinity, on today's technological horizon Cloud demands the same status.

Continue reading

GoogleCode: Switching to Mercurial

After many years of committing the same crime with SVN, I have come to realize the world is moving toward more robust, distributed version control systems like git, mercurial, bazaar or fossil. The primary reason for me to switch was the inability to turn around branches and merges quickly. Anyone who has used SVN will agree with me that branching and merging in it is as painful as defining Cloud.

So I had two choices in front of me: I could either move to GitHub, an extremely popular code hosting service built around git, or Bitbucket, an equally popular code hosting service built around mercurial. But I saved either for later and chose to just migrate from SVN to mercurial on Google Code.

Continue reading

Password Forget Hell

Well, I guess everyone forgets some things from time to time, and I hope passwords are not among them for you. But if you are like me, about to pass through the hell of forgetting passwords, you are welcome here.

The downside of a system that hardly ever crashes or misbehaves is that you hardly ever visit it, and hence you forget all the access credentials to it. We run our issue tracking system on one such machine, and it performs so well that we hardly have to fix anything on that box. But the other day it reminded us that it too is a computer system, and it crashed.

All hell broke loose, and all the people connected to this dashboard filled my inbox with “What's happening?” queries. There were two lessons learnt.

  • The need for proactive monitoring
  • The need for central key management for logins
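The first lesson can be sketched minimally: even a script that just checks whether the service port still answers would have caught the crash before the inbox did. This is a hypothetical sketch, not what we actually deployed; host and port would be your own:

```python
import socket

def port_open(host, port, timeout=3):
    # A box that still accepts TCP connections is at least alive;
    # a real monitor would also check that the app answers sanely.
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except OSError:
        return False

# e.g. run this from cron and mail on failure:
# if not port_open('tracker.example.com', 443): send_alert()
```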

Continue reading

EC2 Instances List to CSV

NOTE: This is a post that I had already posted here.

Since we work out of a shared AWS account, it becomes incredibly painful to manage. So I thought it would be good to have a mechanism to export the list of instances to a CSV, so we could send a claim mail to see who claims these instances. So I wrote this

#!/usr/bin/env python

from boto.ec2 import EC2Connection

csv_file = open('instances.csv', 'w+')

def process_instance_list(connection):
    # Walk every reservation in the account
    for reservation in connection.get_all_instances():
        build_instance_list(reservation)

def build_instance_list(reservation):
    # A reservation can carry more than one instance
    for instance in reservation.instances:
        write_instances(instance)

def write_instances(instance):
    # Crude platform guess: our HVM instances are the Windows ones
    if instance.virtualizationType == 'hvm':
        platform = 'Windows'
    else:
        platform = 'Linux'
    csv_file.write('%s,%s,%s,%s,%s,%s\n' % (instance.id, instance.instance_type,
                                            instance.state, instance.placement,
                                            instance.architecture, platform))

if __name__ == "__main__":
    connection = EC2Connection(aws_access_key_id='XXXXXXX',
                               aws_secret_access_key='XXXXXXXX')
    process_instance_list(connection)
    csv_file.close()

This piece of code is pretty straightforward, and any EC2 noob could understand it. What I do is get the list of instances and lay their details out line-wise, comma-separated, into a CSV file. Now this file can easily be imported into an Excel sheet. I need a better mechanism, I know. But for now this works.
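As one small step toward that better mechanism: the stdlib csv module quotes fields that contain commas, which a hand-joined format string silently breaks on. A sketch, with column names of my own choosing rather than anything from the script:

```python
import csv
import io

def write_rows(rows, fileobj):
    # csv.writer handles quoting and escaping, unlike '%s,%s' joins
    writer = csv.writer(fileobj)
    writer.writerow(['id', 'type', 'state', 'zone', 'arch', 'platform'])
    for row in rows:
        writer.writerow(row)

buf = io.StringIO()
write_rows([['i-12345', 'm1.small', 'running', 'us-east-1a', 'x86_64', 'Linux']], buf)
```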

GlusterFS: Distributed Filesystem on Euca Instances

It has been about two months since I have done any serious blogging. Got a little too busy, I guess, both at work and at home. Yes, doing many things, though nothing exceptionally fruitful personally.

For a project requirement I was exploring some distributed filesystems, and hit upon GlusterFS, whose installation documentation said it would work just by doing an apt-get on Ubuntu 10.04. Well, I jumped in my seat, and just did that.

# apt-get install glusterfs-*
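Installing the packages is only half of it; GlusterFS of that era is wired together through volume files rather than a management CLI. Purely as a rough sketch from memory of 3.0-era volfiles (names and paths are made up, and your release's syntax may differ), a server-side export looked something like this:

```
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.posix.allow *
  subvolumes posix
end-volume
```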

Continue reading

Bundling an Euca Instance, euca-bundle-vol: SOLVED

I hope some of you who have been following me are as anxious to know what happened as I was to find the solution to this euca-bundle-vol issue.

Well, it has worked eventually, thanks to this forum post. I know there is enough information there, but I will still write what I did in more detail. And I was surprised that it wasn't documented by now; at least I did not find it till now. So here we go.

I have done this on a Karmic server instance, running on the Karmic UEC. There must be similar steps for CentOS and Fedora, which I guess someone will leave in a comment.

The catch, it seems, is to download the latest euca2ools, and it worked magic. Thanks Kiran for pointing me to the forum post, and I hope this post will help you in your manual preparation for UEC.

Continue reading

Bundling Euca Instance into an EMI: euca-bundle-vol[UNSOLVED]

The urge to do this was more than the need to do it. I am kind of stuck with too many things at the moment, so I could not get my hands to this earlier. I am surprised that there are not many who are bundling a running instance into an EMI. But things are not really rosy at the moment.

The first step was to get the certs and unzip them to a directory, ‘euca’ in my case. And it went the following way.

I bundled the instance excluding the home directory.
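For reference, excluding a directory is a single flag on the bundling command. The values below (destination, prefix, size) are placeholders of mine, not from the post, assuming euca-bundle-vol mirrors the ec2-bundle-vol options:

```
euca-bundle-vol -d /mnt -p my-emi -s 2048 -e /home \
    -c ${EC2_CERT} -k ${EC2_PRIVATE_KEY} -u ${EC2_USER_ID}
```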
Continue reading

From My Archives: Hadoop Install

Well, this is an install document that I am picking from my archives. Hadoop is at v0.20 now, but the one described here is v0.18; I presume the installation would not have changed much in that course.

First I will start with explaining a single-machine install and then extend it with another slave node. Forgive me if this is too raw to digest.

Good luck and enjoy.

Setting it up on a single machine

  • Java 1.5; I did it with a 1.6 install (JAVA_HOME=/opt/SDK/jdk)
  • ssh and rsync must be installed.
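With those prerequisites in place, a 0.18-era single-machine (pseudo-distributed) setup hinged on conf/hadoop-site.xml. This is a sketch from memory; the ports are the conventional defaults, not necessarily what my archived document used:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```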
Continue reading