No Huddle Offense

"Individual commitment to a group effort-that is what makes a team work, a company work, a society work, a civilization work."

SmartStack = SmartOS + OpenStack (Part 2)

February 15th, 2012 • 2 Comments

This is the second post in this series. Part 1 can be found here.

After setting up nova on SmartOS, you should be able to launch the nova-compute service – Andy actually managed to get a nova-compute instance up and running:

./bin/nova-compute --connection_type=fake --sql_connection=mysql://root:admin@192.168.0.189/nova --fake_rabbit
2012-02-15 10:26:32,205 WARNING nova.virt.libvirt.firewall [-] Libvirt module could not be loaded. NWFilterFirewall will not work correctly.
2012-02-15 10:26:32,374 AUDIT nova.service [-] Starting compute node (version 2012.1-LOCALBRANCH:LOCALREVISION)
2012-02-15 10:26:33,099 INFO nova.rpc [-] Connected to AMQP server on localhost:5672

Note that for now we will use a fake RabbitMQ connection. The connection to the MySQL server, however, is mandatory.

This is the first step in our plan to get OpenStack running on SmartOS. The next step will be to look into integrating vmadm as a driver in OpenStack. The overall ideas we have can be found in this slideset on Google Docs.
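To give an idea of the direction: such a driver would essentially wrap the vmadm command line tool, which takes a JSON payload on stdin. A very rough sketch of what that could look like (my illustration, not working driver code; a real payload needs more fields, such as an image_uuid):

import json
import subprocess

def spawn_zone(alias, ram_mb=256):
    '''Sketch: create a SmartOS zone by feeding vmadm a JSON payload.'''
    payload = {
        'alias': alias,
        'brand': 'joyent',              # a native SmartOS zone
        'max_physical_memory': ram_mb,  # in MB
    }
    proc = subprocess.Popen(['vmadm', 'create'], stdin=subprocess.PIPE)
    proc.communicate(json.dumps(payload))
    return proc.returncode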

SmartStack = SmartOS + OpenStack (Part 1)

February 12th, 2012 • 4 Comments

[Update] – The first shot didn’t work – so I updated this post!

This is the first post of hopefully a series of posts about getting OpenStack up and running on SmartOS. There is a blueprint for this effort. In this first post we will focus on setting up SmartOS and try to get some needed packages up and running.

I used the following SmartOS ISO image, which will – when booted – guide you through a small setup routine: smartos-20120210T043623Z.iso. After this you need to configure the pkg repositories as described in this tutorial. I encourage you to have two disks attached – one for the zones and one for the data. Since the setup routine already created the zones pool, I will first create a data pool on the second disk:

zpool create data c0t2d0
zfs create data/ec
zfs set mountpoint=/ec data/ec
zfs mount data/ec
cd / && wget -O- -q http://svn.everycity.co.uk/public/solaris/misc/ \
                    pkg5-smartos-bootstrap-20111221.tar.gz | gtar -zxf-

The major dependencies are Python, GCC and some tools and libraries – and we will also need Git so we can get the source code of nova later on:

/ec/bin/pkg install pkg:/database/mysql-55/client@5.5.16-0.162 \
                    pkg:/library/python/mysqldb@1.2.3-0.162
/ec/bin/pkg install python26 git setuptools-26 gcc-44 libxslt gnu-binutils

The next step is to create a workspace file system using ZFS (optional) – this will allow me to snapshot the workspace and roll back later on during the porting of OpenStack:

zfs create data/workspace
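zfs snapshot data/workspace@clean   # hypothetical: enables a later 'zfs rollback data/workspace@clean'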
cd /data/workspace/

Let’s clone nova and install pip:

export PATH=/ec/bin:$PATH
easy_install pip
git clone git://github.com/tmetsch/nova.git
cd nova/tools
export CFLAGS="-D_XPG6 -std=c99"
pip install -r pip-requires

Now let’s see if we can get the tests up and running:

cd ..
./run_tests.sh

This will do for the first steps – next I will get the dependencies of nova up and running, and will then write the next blog post.

Percent done: 1%

Coffee instead of snakes – Openshift fun (2)

February 8th, 2012 • Comments Off on Coffee instead of snakes – Openshift fun (2)

In my last post I demoed how OpenShift can be used to deploy a WSGI-based Python application. But since OpenShift also supports other languages, including Java, I wanted to give it a shot.

So I developed a very minimalistic application which emulates a RESTful interface. Of course it takes the simplistic – all-time favorite – Hello World approach. Clients can create resources, which, when queried, will return the obligatory ‘Hello <resource name>’.
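Just to illustrate the interaction, here is a little client sketch (my own sketch, not part of the app, using the deployed app’s URL mentioned below; the ‘name’ form field is an assumption, check the servlet for the actual parameters):

import urllib
import urllib2

BASE = 'http://testus-sample.rhcloud.com/users/'

# a POST creates a resource...
urllib2.urlopen(BASE, urllib.urlencode({'name': 'world'}))
# ...a GET on it returns the obligatory greeting
print urllib2.urlopen(BASE + 'world').read()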

To start, create a new Java application in your OpenShift dashboard. When cloning the git repository which was created, you will notice that you basically get a Maven project with some templates in it. Now you can simply use Maven to test and develop your application. When done, do a git push and your application is ready to go.

The implementation is pretty straight forward:

/**
 * Simple REST Hello World Service.
 *
 * @author tmetsch
 *
 */
public class HelloServlet extends HttpServlet {
    [...]
    @Override
    protected final void doGet(final HttpServletRequest req,
            final HttpServletResponse resp) throws ServletException,
            IOException {
            [...]
    }
    [...]

So apart from extending the HttpServlet class, all you need to do is implement the doGet and doPost methods. When done, edit the ‘web.xml’ file in the folder src/main/webapp:

[...]
<servlet>
    <servlet-name>HelloWorld</servlet-name>
    <servlet-class>the.ultimate.test.pkg.HelloServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>HelloWorld</servlet-name>
    <url-pattern>/users/*</url-pattern>
</servlet-mapping>
[...]

This will make sure the application can be found under the right route. If you like you can also add an index.html file in the same folder so people can read a brief introduction when visiting the top level of your application. In my case that would be http://testus-sample.rhcloud.com – the application itself can be found under: http://testus-sample.rhcloud.com/users/.

So I like the approach OpenShift takes here with Maven. You can simply add your dependencies like jMock, JUnit etc. and during deployment everything falls into place automatically. If you write unit tests, Maven will also ensure that your application works – non-functional apps will obviously not be deployed. It’s pretty easy to write unit tests for the HttpServlet class if you use a mocking framework like jMock. You can get the source code at GitHub.

Fun with OpenShift

February 3rd, 2012 • 1 Comment

I’ve been playing around with OpenShift for the past hour. More specifically OpenShift Express, which allows you to quickly develop apps in Java, Ruby, PHP, Perl and Python and run them in the ‘Cloud’.

As a first exercise I wanted to check whether I could get a simple task board up and running. A good tutorial on creating such a simple application with Pyramid can be found here.

After signing into OpenShift you can create a new application in the dashboard. Make sure that before doing so you have your namespace set up and your public ssh key configured in OpenShift. After creating the application a git repository is created for you. Simply clone it so you have a local working copy on your machine.

Now have a look inside your local working copy. You will find the directories data, libs and wsgi, and some files like a README and the Python setuptools-related setup.py. Let’s start with editing the setup.py file:

from setuptools import setup

setup(name='A simple TODO Application to test OpenShift Features.',
      version='1.0',
      description='An OpenShift App',
      author='Thijs Metsch',
      author_email='thijs.metsch@opensolaris.org',
      install_requires=['pyramid'],
      classifiers=[
                     'Operating System :: OS Independent',
                     'Programming Language :: Python',
                    ],
     )

Please note the install_requires parameter. Using this parameter will ensure that during deployment of your app the necessary requirements get installed – in this case Pyramid.

The following steps are pretty straightforward. The file application in the wsgi directory is, as the name states, a simple WSGI app. What we need to do is edit this file and let it call the todo app:

#!/usr/bin/python

import os

virtenv = os.environ['APPDIR'] + '/virtenv/'
os.environ['PYTHON_EGG_CACHE'] = os.path.join(virtenv, 'lib/python2.6/site-packages')
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
    execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
    pass

from todo import application

Please note that the file has no ‘.py’ extension.

Now we are ready to create the final application. Please have a look at this tutorial as the code will be almost identical.

Create a file todo.py and follow the tutorial as described. Make sure your application runs locally. Put the ‘*.mako’ files in the subdirectory static under the wsgi directory. You can place the schema.sql file in the data directory. We will be using a SQLite database which will also be placed in there.

The final step is to alter the todo.py file and remove the __main__ part. Instead we will introduce the following application function:

import os

# these imports should already be in todo.py if you followed the tutorial
from pyramid.config import Configurator
from pyramid.session import UnencryptedCookieSessionFactoryConfig

here = os.environ['OPENSHIFT_APP_DIR']

def application(environ, start_response):
    '''
    A WSGI application.
    
    Configure pyramid, add routes and finally serve the app.

    environ -- the environment.
    start_response -- the response object.
    '''
    settings = {}
    settings['reload_all'] = True
    settings['debug_all'] = True
    settings['mako.directories'] = os.path.join(here, 'static')
    settings['db'] = os.path.join(here, '..', 'data', 'tasks.db')

    session_factory = UnencryptedCookieSessionFactoryConfig('itsaseekreet')

    config = Configurator(settings=settings, session_factory=session_factory)

    config.add_route('list', '/')
    config.add_route('new', '/new')
    config.add_route('close', '/close/{id}')

    config.add_static_view('static', os.path.join(here, 'static'))

    config.scan()

    return config.make_wsgi_app()(environ, start_response)
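(One thing to note about this function: it configures and builds the Pyramid app anew on every request. For a toy app that is fine; caching the result of config.make_wsgi_app() in a module-level variable would avoid the overhead.)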

When done we can push the code to the repository. Afterwards you can visit your application at the given URL.

With this I’ve been able to create the app http://todo-sample.rhcloud.com/ and I also deployed my OCCI example service: http://occi-sample.rhcloud.com/. As you can see: pretty straightforward, fast and of course fun!

What would be fun is to create a simple unit-testing framework for OpenShift. For now you can find the code pieces at GitHub.

Forget static callgraphs – Use Python & DTrace!

October 20th, 2011 • Comments Off on Forget static callgraphs – Use Python & DTrace!

Forget about statically analyzed callgraphs! No more running the code, closing it and then looking at the callgraph. With DTrace you can attach to any (running) process on the (running/production) system and get live, up-to-date information about what the program is doing. No need to restart the application or anything. This works for most programming languages which have DTrace providers (like C, Java and Python :-)). All you need to know is the pid.

Based on the information you get from DTrace (using the Python consumer) you can draw live-updating callgraphs of what is currently happening in the program. Not only is it possible to look at the callgraph, but you can also look at the time it took to reach a certain piece of code, to analyze bottlenecks and the flow of the program:

$ pgrep python # get the pid of the process you want to trace
123456
$ ./callgraph.py 123456 # trace the program and create a callgraph
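Stripped of the GUI, the core of such a tracer boils down to something like the following sketch. For simplicity it shells out to the dtrace CLI instead of using the Python consumer bindings, and it assumes a Python build with the DTrace provider enabled:

import subprocess
import sys

# D script: fire on every Python function entry/return;
# arg0 is the script's file name, arg1 the function name.
D_SCRIPT = '''
python$target:::function-entry
{
    printf("-> %s:%s\\n", copyinstr(arg0), copyinstr(arg1));
}
python$target:::function-return
{
    printf("<- %s:%s\\n", copyinstr(arg0), copyinstr(arg1));
}
'''

if __name__ == '__main__':
    pid = sys.argv[1]  # pid of the (running) process to trace
    subprocess.call(['dtrace', '-q', '-p', pid, '-n', D_SCRIPT])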

So if you had the following Python code:

class A(object):

    def sayHello(self):
        return 'I am A'


class B(object):

    def __init__(self):
        self.a = A()

    def sayHello(self):
        return self.a.sayHello()


if __name__ == '__main__':
    print B().sayHello()

You would get the following live-generated callgraph – the GUI can start, stop and restart tracing and gets live updates as the DTrace probes fire:

[Screenshot: the live-generated callgraph GUI]

The following screenshot was taken while looking into the printer manager:

[Screenshot: callgraph while tracing the printer manager]

DTrace for the win!

[Updated] Updated the screenshots.