Primitives


This is a simple video showing the Kittyhawk primitives for acquiring nodes. A rough sketch of the flow appears after the link below.

<click here>
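As a hedged sketch only: khget is the allocation primitive that appears in the Argonne demos further down this page, but the argument convention and the khput/khgo load-and-boot commands below are illustrative assumptions, not the documented CLI.

    # Ask the free pool for a node; khget reports the node it acquired,
    # which then sits at a u-boot prompt waiting for an image.
    khget 1                  # acquire a single node (argument form assumed)

    # Load a kernel image onto the node and boot it (hypothetical names).
    khput node0 kernel.img
    khgo  node0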

 

Demos

Most of these demos are old and pre-date Kittyhawk's open-source life; however, all of this functionality still exists. The videos are just a hint of what is possible. The real fun begins when you go beyond the standard software that these demos show.

appdemo


This is a simple video showing a user constructing a farm of nodes running the Apache web server.

<click here>

fsdemo


This is a video showing a user constructing a more complex environment with multiple networks, workers, a file server, a disk server, and disk nodes.

<click here>

f8demo


Building on the fsdemo environment above, a user boots nodes running full-blown Red Hat Fedora Core 8 PowerPC installations (all self-contained, i.e. no external file servers).

<click here>

wdemo


Open software tricks... idle hands make devils of us all ;-) Scrubbing forward might make it more enjoyable.

<click here>

New Demos from Argonne


These demos were recorded prior to the Science Clouds and FastOS 2010 talks; the older demos appear above. In each demo you are watching a user's desktop, running on a machine outside of Argonne, with three green xterms holding ssh sessions to the Argonne Surveyor login nodes. From there the user interacts with Surveyor allocations running Kittyhawk. The videos generally also show a visualizer that attempts to illustrate the state of the free pool and the configurations of the allocated nodes. The visualizer is only there to help you understand what is going on; it is neither robust nor a real feature of the system.

Bootstrapping KH at Argonne (khqsub, khget, & u-boot)


This is a simple video showing Kittyhawk job submission (khqsub) and basic khget usage, and their interaction with u-boot. A rough command sketch follows the link below.

<click here>
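A minimal sketch of the sequence the video walks through. khqsub and khget appear on screen; treat khqsub as presumably a wrapper around the site's queue submission, and note that the u-boot lines below are just standard u-boot console commands, not Kittyhawk-specific ones:

    # On a Surveyor login node: submit a Kittyhawk allocation to the queue.
    khqsub                  # arguments elided; see the video

    # Once the allocation is up, acquire a node from its free pool.
    khget                   # reports the id of the acquired node

    # The node comes up in u-boot; at its console you can inspect the
    # environment and boot a loaded image by hand.
    printenv
    bootm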

Boot nodes with Linux Appliances Argonne


This is a simple video building on the one above: the user boots the node, then acquires 100 more nodes and boots those as well. A sketch of the scaling step follows the link below.

<click here>
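The scaling step is just the acquisition primitive run at a larger count; the count-argument convention here is an assumption:

    khget 1             # the first node, booted with the Linux appliance
    khget 100           # one hundred more nodes from the free pool
    # each new node is then loaded and booted exactly as the first was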

Boot nodes with L4


This video opens with a user, jappavoo, who has 101 nodes in a communication domain. A second user, l4user, acquires a new node in a private communication domain, then loads and boots it with the L4 microkernel. That user then recompiles L4 and boots a second node with the modified L4 user application. Finally the first user, jappavoo, adds 100 more nodes to his existing communication domain and boots them with L4, yielding a communication domain containing 201 nodes, of which 101 are running Linux and 100 are running L4. A hedged sketch follows the link below.


<click here>
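A hedged sketch of the steps; only khget is shown by name in these demos, so the domain behaviour, the khput/khgo load-and-boot commands, and the image path are illustrative assumptions:

    # l4user: acquire one node; a fresh allocation lands in its own
    # private communication domain (assumed behaviour).
    khget 1

    # Load and boot the L4 microkernel image on it (hypothetical commands).
    khput l4node l4.img
    khgo  l4node

    # jappavoo: grow the existing 101-node domain by 100 more nodes and
    # boot them with L4, mixing Linux and L4 kernels in one domain.
    khget 100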

Boot a node with the Debian Lenny apt-get Appliance


This video shows a user acquiring a node on the external communication domain. The user loads the node with a Debian lenny apt-get appliance and boots it. This appliance contains a minimal system that can fetch and install software from the Debian Linux package repository. Given the node's restricted external network access, an ssh tunnel is used to let it reach ftp.debian.org and fetch packages. The user first updates the package list and then installs several standard packages from ftp.debian.org: xauth, emacs, octave, and xterm. The user then adds a new user, jappavoo, to the node. From the external network the jappavoo user starts an xterm running on the node. A simple 3d.m file containing octave/matlab commands to plot a 3D saddle is copied to the jappavoo user's home directory. The jappavoo user on the node then starts emacs to display the contents of 3d.m. Finally, the jappavoo user starts an instance of octave on the node and creates a couple of 3D plots. All of the above is done using a single Blue Gene node. A hedged command sketch follows the link below.


<click here>
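A hedged sketch of the main steps; the tunnel endpoint, local port, and sources.list line are assumptions (the video does not show the exact configuration), while the package names match the narration:

    # On the node: tunnel a local port to ftp.debian.org through a host
    # that has outside access (endpoint and port are assumptions).
    ssh -f -N -L 8080:ftp.debian.org:80 user@login-host

    # Point apt at the tunnel, then update and install the packages shown.
    echo 'deb http://localhost:8080/debian lenny main' > /etc/apt/sources.list
    apt-get update
    apt-get install xauth emacs octave xterm

    # Add the new user, then, from the external desktop, start an xterm
    # on the node over ssh with X11 forwarding.
    adduser jappavoo
    ssh -X jappavoo@node xterm

    # In that xterm: plot the saddle described by 3d.m.
    octave 3d.m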

Creating Private and Public Clusters


This video shows a user using a simple script, khbootapp, to boot Linux appliances in one of two network configurations: either with all the nodes on the external network, or with all the nodes on a private network reached through a gateway. A 100-node cluster running our standard worker appliance is created for user Dan, with all nodes on the external network.

A ping is done to the nodes to verify that they are alive and accessible. Then a second 100-node cluster is created for user jappavoo; this time the nodes are on a private network, and an additional node is configured as a gateway for the cluster. A public key is specified so that jappavoo can access the gateway to get to his nodes. A hedged sketch of the khbootapp invocations follows the link below.


<click here>
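khbootapp's real flags are not documented on this page, so the invocation shape below is purely an assumption meant to mirror the narration:

    # Public cluster: 100 worker-appliance nodes for dan, all on the
    # external network (all flags are hypothetical).
    khbootapp --nodes 100 --net external --user dan worker

    # Verify the nodes are alive and reachable.
    ping -c 1 <node-address>

    # Private cluster: 100 nodes for jappavoo behind a gateway node;
    # a public key grants access to the gateway (flags are hypothetical).
    khbootapp --nodes 100 --net private --gateway \
              --pubkey ~/.ssh/id_rsa.pub --user jappavoo worker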

AOE Root Demo 1


In this video a more complex, multi-Ethernet environment has already been created. The user adds a new node, which boots a standard Debian lenny root file system from an AoE (ATA over Ethernet) disk hosted on another node. The user then logs in to the node, installs a VNC server, and starts a VNC desktop on it. Connecting to that desktop, the user installs a few more packages and runs some commands to mount the pre-existing file server and to show some facts such as the mounted filesystems, memory statistics, and the process list. A hedged sketch of the AoE setup follows the link below.


<click here>
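For context, AoE exports a block device over raw Ethernet. A minimal sketch of how such a root disk could be served and used, with the interface, device, and shelf/slot numbers as assumptions:

    # On the disk-hosting node: export a block device over AoE
    # (vblade is the standard userspace AoE target).
    vblade 0 0 eth0 /dev/sda1 &

    # On the new node: load the AoE initiator and mount the exported
    # disk, which appears as /dev/etherd/e<shelf>.<slot>.
    modprobe aoe
    mount /dev/etherd/e0.0 /mnt

    # Then install and start a VNC desktop, as in the video (the package
    # name here is an assumption).
    apt-get install vnc4server
    vncserver :1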

Longer AOE Root Demo


Same as above, but the user goes on to install more packages and run them on the node's VNC desktop.


<click here>