No Huddle Offense

"Individual commitment to a group effort-that is what makes a team work, a company work, a society work, a civilization work."

Coffee instead of snakes – Openshift fun (2)

February 8th, 2012 • Comments Off on Coffee instead of snakes – Openshift fun (2)

In my last post I demoed how OpenShift can be used to deploy a WSGI-based Python application. But since OpenShift also supports other languages, including Java, I wanted to give that a shot.

So I developed a very minimalistic application which emulates a RESTful interface. Of course it takes the simplistic – all-time favorite – Hello World approach. Clients can create resources which, when queried, will return the obligatory ‘Hello <resource name>’.

To start, create a new Java application in your OpenShift dashboard. When cloning the git repository that was created for you, you will notice that you basically get a Maven project with some templates in it. Now you can simply use Maven to test and develop your application. When done, do a git push and your application is ready to go.

The implementation is pretty straightforward:

/**
 * Simple REST Hello World Service.
 *
 * @author tmetsch
 *
 */
public class HelloServlet extends HttpServlet {
    [...]
    @Override
    protected final void doGet(final HttpServletRequest req,
            final HttpServletResponse resp) throws ServletException,
            IOException {
            [...]
    }
    [...]

So besides extending the HttpServlet class, all you need to do is implement the doGet and doPost methods. When done, edit the ‘web.xml’ file in the folder src/main/webapp:

[...]
<servlet>
    <servlet-name>HelloWorld</servlet-name>
    <servlet-class>the.ultimate.test.pkg.HelloServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>HelloWorld</servlet-name>
    <url-pattern>/users/*</url-pattern>
</servlet-mapping>
[...]

This will make sure the application can be found under the right route. If you like, you can also add an index.html file in the same folder so people can read a brief introduction when visiting the top level of your application. In my case that would be http://testus-sample.rhcloud.com – the application itself can be found under http://testus-sample.rhcloud.com/users/.
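Just to illustrate the idea, talking to the service from Python could look roughly like the following sketch. Note that this is a hypothetical client: it assumes that a POST to /users/<name> creates the resource, which may differ from the actual servlet implementation.

# Hypothetical client sketch - assumes a POST to /users/<name> creates the
# resource; the actual servlet implementation may handle this differently.
import urllib2

BASE = 'http://testus-sample.rhcloud.com/users/'

# create a resource called 'world'...
urllib2.urlopen(urllib2.Request(BASE + 'world', data='')).read()

# ...and query it - this should return something like 'Hello world'
print urllib2.urlopen(BASE + 'world').read()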

So I like the approach OpenShift takes here with Maven. You can simply add your dependencies like jMock, JUnit etc. and during deployment everything falls into place automatically. If you write unit tests, Maven will also ensure that your application works – broken applications will simply not be deployed, since the tests run as part of the build. It’s pretty easy to write unit tests for the HttpServlet class if you use a mocking framework like jMock. You can get the source code at github.

Fun with OpenShift

February 3rd, 2012 • 1 Comment

I’ve been playing around with OpenShift for the past hour – more specifically OpenShift Express, which allows you to quickly develop apps in Java, Ruby, PHP, Perl and Python and run them in the ‘Cloud’.

As a first exercise I wanted to check whether I was able to get a simple task board up and running. A good tutorial for creating such a simple application with Pyramid can be found here.

After signing into OpenShift you can create a new application in the dashboard. Before doing so, make sure you have your namespace set up and your public ssh key configured in OpenShift. After creating the application a git repository is created for you. Simply clone it so you have a local working copy on your machine.

Now have a look inside your local working copy. You will find the directories data, libs and wsgi as well as some files like a README and the Python setuptools related setup.py. Let’s start with editing the setup.py file:

from setuptools import setup

setup(name='A simple TODO Application to test OpenShift Features.',
      version='1.0',
      description='An OpenShift App',
      author='Thijs Metsch',
      author_email='thijs.metsch@opensolaris.org',
      install_requires=['pyramid'],
      classifiers=[
                     'Operating System :: OS Independent',
                     'Programming Language :: Python',
                    ],
     )

Please note the install_requires parameter. Using this parameter will ensure that during deployment of your app the necessary requirements get installed – in this case Pyramid.

The following steps are pretty straightforward. The file application in the wsgi directory is, as the name states, a simple WSGI app. What we need to do is edit this file and let it call the todo app:

#!/usr/bin/python

import os

virtenv = os.environ['APPDIR'] + '/virtenv/'
os.environ['PYTHON_EGG_CACHE'] = os.path.join(virtenv, 'lib/python2.6/site-packages')
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
    execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
    pass

from todo import application

Please note that the file has no ‘.py’ extension.

Now we are ready to create the actual application. Please have a look at this tutorial as the code will be almost identical.

Create a file todo.py and follow the tutorial as described. Make sure your application runs locally. Put the ‘*.mako’ files in the subdirectory static under the wsgi directory. You can place the schema.sql file in the data directory. We will be using an SQLite database which will also be placed in there.
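In case it helps, initialising the database could look roughly like the sketch below. This is a hypothetical helper (not part of the tutorial); the paths simply mirror the ones used in the application function further down.

# Hypothetical helper - creates the SQLite database from schema.sql on first
# use; the paths mirror the settings used in the application function below.
import os
import sqlite3

here = os.environ['OPENSHIFT_APP_DIR']

def init_db():
    db_path = os.path.join(here, '..', 'data', 'tasks.db')
    if not os.path.exists(db_path):
        schema = open(os.path.join(here, '..', 'data', 'schema.sql')).read()
        conn = sqlite3.connect(db_path)
        conn.executescript(schema)
        conn.commit()
        conn.close()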

The final step is to alter the todo.py file and remove the __main__ part. Instead we will introduce the following application function:

here = os.environ['OPENSHIFT_APP_DIR']

def application(environ, start_response):
    '''
    A WSGI application.
    
    Configure pyramid, add routes and finally serve the app.

    environ -- the environment.
    start_response -- the response object.
    '''
    settings = {}
    settings['reload_all'] = True
    settings['debug_all'] = True
    settings['mako.directories'] = os.path.join(here, 'static')
    settings['db'] = os.path.join(here, '..', 'data', 'tasks.db')

    session_factory = UnencryptedCookieSessionFactoryConfig('itsaseekreet')

    config = Configurator(settings=settings, session_factory=session_factory)

    config.add_route('list', '/')
    config.add_route('new', '/new')
    config.add_route('close', '/close/{id}')

    config.add_static_view('static', os.path.join(here, 'static'))

    config.scan()

    return config.make_wsgi_app()(environ, start_response)

When done we can push the code to the repository. After that you can visit your application at the given URL.

With this I’ve been able to create the app http://todo-sample.rhcloud.com/ and I also deployed my OCCI example service: http://occi-sample.rhcloud.com/. As you can see it’s pretty straightforward, fast and of course fun!

What would also be fun is to create a simple unit testing framework for OpenShift. For now you can find the code pieces at github.

Forget static callgraphs – Use Python & DTrace!

October 20th, 2011 • Comments Off on Forget static callgraphs – Use Python & DTrace!

Forget about statically analyzed callgraphs! No more running the code, closing it and then looking at the callgraph. With DTrace you can attach yourself to any (running) process on the (running/production) system and get live, up-to-date information about what the program is doing. No need to restart the application or anything. This works for most programming languages which have DTrace providers (like C, Java and Python :-)). All you need to know is the pid.

Based on the information you get from DTrace (using the Python consumer) you can draw live updating callgraphs of what is currently happening in the program. Not only is it possible to look at the callgraph, but you can also look at the time it took to reach a certain piece of code to analyze bottlenecks and the flow of the program:

$ pgrep python # get the pid of the process you want to trace
123456
$ ./callgraph.py 123456 # trace the program and create a callgraph
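Under the hood such a tool relies on the probes of the Python DTrace provider – function-entry and function-return report the file, the function name and the line number of every call. A rough sketch of the kind of D script the consumer might compile (hypothetical; the actual callgraph.py may use a different one):

# Hypothetical sketch of the D script a callgraph tool could compile;
# arg0 is the file name, arg1 the function name, arg2 the line number.
SCRIPT = '''
python$target:::function-entry
{
    printf("> %s:%s:%d", copyinstr(arg0), copyinstr(arg1), arg2);
}

python$target:::function-return
{
    printf("< %s:%s:%d", copyinstr(arg0), copyinstr(arg1), arg2);
}
'''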

So if you had the following Python code:

class A(object):

    def sayHello(self):
        return 'I am A'


class B(object):

    def __init__(self):
        self.a = A()

    def sayHello(self):
        return self.a.sayHello()


if __name__ == '__main__':
    print B().sayHello()

You would get the following live generated callgraph – the GUI can start, stop and restart tracing and gets live updates as the DTrace probes fire:

[Screenshot: live generated callgraph]

The following screenshot was taken while looking into the printer manager:

[Screenshot: callgraph of the printer manager]

DTrace for the win!

[Updated] Updated the screenshots.

Python traces Python using DTrace

October 19th, 2011 • Comments Off on Python traces Python using DTrace

Another example of how to use Python as a DTrace consumer. This little program traces a Python program while it runs and shows you the flow of the code. The output is displayed in a Treeview (an indent means that Python called another function – stepping back means that the function returned) and when double-clicking an entry the source code is displayed (it would be nice to open PyDev as well).

[Screenshot: code flow shown in a Treeview]
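The nesting in the Treeview can be derived from the entry and return probes alone – a minimal, hypothetical sketch of the idea (not the actual GUI code):

# Minimal sketch: derive the call depth from function-entry/function-return
# events and indent the output accordingly.
depth = 0

def on_event(probe, func):
    '''Print the function name indented by the current call depth.'''
    global depth
    if probe == 'function-entry':
        print '  ' * depth + func
        depth += 1
    elif probe == 'function-return' and depth > 0:
        depth -= 1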

Another example of Python as a DTrace consumer: this small GUI gives an up-to-date view of the number of syscalls made by each executable. Since this is a live view you can watch the circles appear, grow and become smaller again 🙂

[Screenshot: syscall counts per executable]

Now on to other things…maybe creating live animated callgraphs as your program runs? 😛

Python as a DTrace consumer – Part 2 walk the aggregate

October 8th, 2011 • Comments Off on Python as a DTrace consumer – Part 2 walk the aggregate

Yesterday I blogged about how to use Python as a DTrace consumer with the help of ctypes. The examples in there are very rudimentary and only captured the normal output of DTrace – not the aggregates.

The examples in the last post have been altered and now we let DTrace work for a few seconds and then walk the aggregate:

    # aggregate data for a few sec...
    i = 0
    chew = CHEW_FUNC(chew_func)
    chew_rec = CHEWREC_FUNC(chewrec_func)
    while i < 2:
        LIBRARY.dtrace_sleep(handle)
        LIBRARY.dtrace_work(handle, None, chew, chew_rec, None)

        time.sleep(1)
        i += 1

    LIBRARY.dtrace_stop(handle)

    walk_func = WALK_FUNC(walk)
    # sorting instead of dtrace_aggregate_walk
    if LIBRARY.dtrace_aggregate_walk_valsorted(handle, walk_func, None) != 0:
        txt = LIBRARY.dtrace_errmsg(handle, LIBRARY.dtrace_errno(handle))
        raise Exception(c_char_p(txt).value)

The walk function is right now very simple but does work – please note the TODO 🙂

def walk(data, arg):
    '''
    Aggregate walker.
    '''
    # TODO: pickup the 16 and 272 from offset in dtrace_aggdesc struct...

    tmp = data.contents.dtada_data
    name = cast(tmp + 16, c_char_p).value
    instance = deref(tmp + 272, c_int).value

    print '+--> walking', name, instance

    return 0
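For reference, CHEW_FUNC, CHEWREC_FUNC and WALK_FUNC are ctypes function prototypes matching libdtrace’s callback signatures. A simplified sketch of how they could be declared – the struct pointers are left opaque here, while the actual code on github models dtrace_probedata, dtrace_recdesc and dtrace_aggdata in detail:

from ctypes import CFUNCTYPE, c_int, c_void_p

# Simplified prototypes - the real definitions use proper struct pointers.
# int chew(const dtrace_probedata_t *data, void *arg)
CHEW_FUNC = CFUNCTYPE(c_int, c_void_p, c_void_p)
# int chewrec(const dtrace_probedata_t *data, const dtrace_recdesc_t *rec, void *arg)
CHEWREC_FUNC = CFUNCTYPE(c_int, c_void_p, c_void_p, c_void_p)
# int walk(const dtrace_aggdata_t *data, void *arg)
WALK_FUNC = CFUNCTYPE(c_int, c_void_p, c_void_p)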

When run, the Python script will output the following (it would be fun to run this DTrace script with the help of Python – Python as a DTrace consumer tracing Python as a DTrace provider :-P):

./dtrace.py 
+--> In chew: cpu : 0
  +--> In out:  Hello World
+--> walking updatemanagernot 2
+--> walking mixer_applet2 4
+--> walking gnome-netstatus- 135
+--> walking firefox-bin 139
+--> walking gnome-terminal 299
+--> walking python2.7 545
Error 0

Overall this works pretty smoothly – but it needs a lot of updating before it is production ready. Still, it gives a rough demonstration that Python can be a simple DTrace consumer using ctypes. So now Python can be both consumer and provider for DTrace *happy days* 🙂

The code (examples) has been updated on github.