KestrelCluster is a set of tools which help set up a diskless cluster. In general, configuring a cluster requires a lot of work, since many services have to be configured (DHCP, TFTP, NFS, and so on).
Clusters are interesting things: at heart, they are simply a group of computers working together. We originally developed KestrelCluster to ease the creation of High Performance Computing (HPC) clusters at our university, so that researchers could use otherwise idle lab computers at night or when the labs were closed.
But clusters can be used for much more: building a giant videowall, putting together a cheap overnight render farm for a small 3D animation company, or demoing some piece of software across many machines without installing it on each one. What these scenarios have in common is that they require sharing a system from one computer with several nodes, and ideally without installing anything on those nodes.
So we created an extensible, template-based system which configures a common Debian system. With KestrelCluster 3.0 we provide several modules for HPC computing, as well as a videowall module based on Xdmx and Chromium.
New templates can be added easily, and every template can be easily overloaded.
The behaviour of templates can be changed through variables.
Modules can be enabled or disabled per image. For example, we may want to install videowall support on a single image only.
Minimal Debian-based systems run on the nodes to reduce memory usage and traffic on the local network (a sketch of the kind of netboot configuration involved follows this list).
Only the required software is installed automatically on each image.
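To give an idea of the services these templates configure, here is a minimal sketch of a netboot setup, assuming dnsmasq handles DHCP and TFTP; the address range and paths are placeholders, not KestrelCluster's actual defaults:

# Sketch only: a minimal dnsmasq stanza for PXE-booting diskless nodes
dhcp-range=192.168.0.100,192.168.0.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/var/lib/tftpboot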
Register new nodes:
kestrel-nodes --register "group1"
Then start up each node you want to add to the cluster via PXE. KestrelCluster will add the node to the group and save its MAC address, so that it can later send a Wake-on-LAN "magic packet" to start the machine.
Start a group of nodes:
kestrel-nodes --wake-group group1
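Under the hood this is standard Wake-on-LAN: a "magic packet" is six 0xFF bytes followed by the node's MAC address repeated sixteen times, broadcast on the local network. You can send one by hand with the wakeonlan Debian package (the MAC address below is a placeholder):

wakeonlan 00:11:22:33:44:55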
Create a new image:
kestrel-image --new "image-1"
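Conceptually this resembles building a minimal Debian tree with debootstrap; a rough manual sketch, with a hypothetical target path:

debootstrap --variant=minbase stable /srv/kestrel/image-1 http://deb.debian.org/debian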
Install or uninstall packages on the image:
kestrel-apt --image "image-1" --install vim --uninstall nano
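This is roughly equivalent to running apt inside the image via chroot, which KestrelCluster wraps for you. A sketch, again with a hypothetical image path:

chroot /srv/kestrel/image-1 apt-get install vim
chroot /srv/kestrel/image-1 apt-get remove nano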
A cron job checks whether each node is alive or frozen (see the sketch after this list).
An extensible RPC daemon listens for node events.
Access to the shared home directory exported through NFS is filtered by MAC address (also sketched below).
Users can modify the images or install software on them without root privileges.
Note: as of KestrelCluster 3.0, this is a work in progress.
Create customized Live CDs easily with kestrel-live
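To picture two of the items above: the liveness check could be as simple as a periodic ping (an /etc/cron.d style sketch; the node name, interval and message are hypothetical, not KestrelCluster's actual job):

*/5 * * * * root ping -c 1 -W 2 node-1 >/dev/null || logger "kestrel: node-1 not responding"

And the MAC-based NFS filtering can be thought of as pinning each registered MAC to a fixed IP in DHCP, then restricting the export to those addresses (illustrative dnsmasq and /etc/exports lines, with placeholder values):

dhcp-host=00:11:22:33:44:55,192.168.0.101
/home 192.168.0.101(rw,sync,no_subtree_check)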
After months of work, we are finally publishing a new stable version.
Best wishes,
KestrelHPC dev. Team
Finally, the second version of KestrelHPC has been released. We have uploaded two packages to our Launchpad server. The installation process and all related information can be found in the documentation section.
Last but not least, this version is still in development: it is functional, but it is a BETA version, so over the coming weeks many changes will be made until a stable version is published.
Best wishes,
KestrelHPC dev. Team
We have created two new mailing lists.
As part of the changes coming to the KestrelHPC project, we have created a new website.
KestrelHPC dev. Team
The first cut of the new version of KestrelHPC has been committed to the SVN server. This first version is not yet fully functional, but it is close to being stable and usable, so stay tuned to follow the development of KestrelHPC.
With version 2 of KestrelHPC we will publish a .deb package on a Launchpad repository created for KestrelHPC.
A new era for KestrelHPC is drawing close. In the past months we, the development team, have been discussing KestrelHPC's flaws and have decided to rework and reimplement it.
In this reimplementation, KestrelHPC will no longer be an installation script, but a package that installs like any normal Debian package. This change will be accompanied by the creation of a Launchpad repository, so that it will be accessible to Ubuntu users.
Apart from other technical changes, which will be documented in due time, some of PelicanHPC's new features are being added, as well as support for other parallel languages.
All these changes are being made in order to have a system that is stable and also more open and modifiable than the previous one, with an eye on applications that monitor and handle the cluster's resources in a one-click fashion.
That is all for now; this major release will land at the end of this month or the beginning of October.
This version fixes some errors in the installation script and changes the way the node images are built.