The Raspberry Pi Cluster (RPiC) Project was initiated in the College of Information and Mathematical Sciences (CIMS) at Clayton State University in January 2014 by Scott Bailey, Michael Dancs, and Jarrett Terry. Currently, Scott Bailey, Michael Dancs, and Jillian Jones are the lead researchers on the project. The focus of this project is to involve students and faculty from STEM disciplines in building a supercomputer cluster from commercially available components. The idea is to link together a group of ordinary computers (nodes) via a network, and distribute computationally intensive tasks across the network in order to leverage the collective computational power of the individual nodes. When built from inexpensive parts, such a system is colloquially known as a Beowulf cluster, named for the Old English hero who had "thirty men's heft of grasp in the gripe of his hand."
In 1994, NASA's Thomas Sterling and Donald Becker developed the first Beowulf cluster. The concept of constructing such a cluster from Raspberry Pi single-board computers is not a new one: computational engineers at the University of Southampton, led by Simon Cox, constructed a 64-node cluster of Raspberry Pi computers, and Joshua Kiepert at Boise State University constructed a 32-node cluster. Each expressed the desire for an inexpensive, powerful, and fully customizable resource for both independent research and educational development.
Supercomputers are currently used in a wide variety of industrial and academic fields, including network security, robotics and artificial intelligence, data mining, bioinformatics, 3D graphics and animation, climatology, and molecular modeling. In cybersecurity, for example, the number of available processors often matters more than the power of any individual processor. The basic principles of a Beowulf cluster are also applicable to computer networking in general, and particularly to the emerging field of "cloud-based" computing.
As with many high-performance supercomputers, we use the open-source Linux operating system and the Message Passing Interface (MPI) protocol in C++ for communication between nodes; students have the opportunity to learn and use both. The MPI standard was designed for portability, so applications developed on the RPiC could easily be transferred to a more sophisticated supercomputer. Students lacking programming experience can instead write code in Python, a widely used language with a relatively shallow learning curve that is still capable of interesting computations.
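Running actual MPI code requires a cluster runtime such as `mpirun`, but the underlying idea of splitting a computationally intensive task across workers and combining the partial results can be sketched in plain Python using only the standard library. The following is an illustrative single-machine analogue (a process pool standing in for networked nodes), not the project's actual MPI code; the function names and sample counts are chosen for the example:

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """Count random points in the unit square that fall inside the quarter-circle."""
    samples, seed = args
    rng = random.Random(seed)  # per-worker generator so workers are independent
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def estimate_pi(total_samples=400_000, workers=4):
    """Divide the sampling among worker processes, then combine the partial counts."""
    per_worker = total_samples // workers
    tasks = [(per_worker, seed) for seed in range(workers)]
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, tasks))  # scatter work, gather results
    return 4.0 * hits / (per_worker * workers)

if __name__ == "__main__":
    print(f"pi is approximately {estimate_pi():.3f}")
```

The scatter-then-gather shape here mirrors the pattern an MPI program would express with calls such as `MPI_Scatter` and `MPI_Reduce`, with each worker doing an equal share of independent work.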
Students gain valuable hands-on experience related to cluster construction and maintenance. They become familiar with all hardware components of the cluster, including power distribution and networking technologies, and learn to troubleshoot the problems which invariably arise. Students then move on to software-related issues, learning to develop computationally intensive applications that can be executed in parallel on the cluster. Students also have the opportunity to work with faculty whose research would benefit from access to high-performance computing, since many faculty do not possess the programming experience needed to leverage such technology.
- Construct a working 16-node prototype by September 1, 2014.
- Construct a working 64-node RPi cluster by December 31, 2015.