What Is The Jane Cluster?
Overview
Jane Option Black is a privately built Beowulf-style clustered computing environment. Jane is a network of dedicated nodes built from COTS (commercial off-the-shelf) hardware and running open-source software.
Hardware Architecture
Nodes comprising the cluster range from 486DX4-100 machines to Pentium-333 machines, with most falling into the Pentium range. Most machines in the cluster serve purely as compute nodes, with some minor exceptions: one machine is also a Sybase SQL server, and another serves as the front end/primary node for the cluster. Nodes on the cluster also expose several directories to the primary node in order to share storage space across the cluster.

The network backplane for Jane is currently a non-switched 100baseTX Ethernet network, with plans to upgrade to a switched network in the near future. There are currently no plans to upgrade Jane to a channel-bonded architecture.
Software Architecture
All machines on the cluster run a slightly augmented Linux-Mandrake 6.0 installation. Kernels on all machines have been upgraded to version 2.2.14 and patched with the MOSIX kernel patch in order to provide kernel-level load balancing within the cluster. In addition, all nodes run MPICH version 1.2.1 as the primary parallel programming environment.
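As a brief, hypothetical illustration (not taken from the Jane configuration itself), the sketch below shows the kind of minimal C program MPICH runs across the nodes: each rank reports its number and the machine it is executing on. With MPICH it would typically be compiled with mpicc and launched with mpirun.

    /* Minimal MPICH "hello world" sketch; file and host names here are
     * illustrative only.  Typical build/run with MPICH:
     *     mpicc hello.c -o hello
     *     mpirun -np 4 hello
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, len;
        char node[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
        MPI_Get_processor_name(node, &len);     /* e.g. "Mandrake02"     */

        printf("Rank %d of %d running on %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }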
Directory Structure
The directory structure for Jane is as follows:
Primary Node (Mandrake01)

Exported:
    /home
    /mpich-1.2.1
    /mnt/cdrom

Imported:
    /, /vol1, /vol2 from every node in the cluster
On the primary node, there is a directory named /cluster, which maps to each of the worker nodes:
/cluster
|___Mandrake01 -- root of Mandrake01 (the primary node, i.e., localhost) is mounted here
|___Mandrake02 -- root of Mandrake02 (worker node 1) is mounted here
|___MandrakeNN -- root of MandrakeNN (worker node NN-1) is mounted here
|___vols
    |___vol1
    |___vol2
    |___vol3
    ...
    |___volN -- volumes exposed by the worker nodes are mounted here
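To make the layout concrete, here is a small hypothetical C sketch (not part of the Jane documentation) that a user on the primary node could run to list the per-node mount points under /cluster; only the /cluster path itself comes from the layout above.

    /* Hypothetical sketch: enumerate the mount points under /cluster
     * on the primary node.  Only the path "/cluster" is taken from
     * the layout above; everything else is illustrative.
     */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *dir = opendir("/cluster");
        struct dirent *entry;

        if (dir == NULL) {
            perror("opendir /cluster");
            return 1;
        }

        while ((entry = readdir(dir)) != NULL) {
            /* Skip "." and ".."; every other entry is either a
             * MandrakeNN node root or the shared "vols" directory. */
            if (strcmp(entry->d_name, ".") == 0 ||
                strcmp(entry->d_name, "..") == 0)
                continue;
            printf("/cluster/%s\n", entry->d_name);
        }

        closedir(dir);
        return 0;
    }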
Worker Nodes (Mandrake02..MandrakeNN)

Exported:
    /
    /vol1
    /vol2

Imported:
    /home        -- /home is mounted from the primary node
    /mpich-1.2.1 -- each node runs MPICH from the primary node
    /usr         -- each node mounts the /usr directory from the primary node
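One practical consequence of this layout, sketched below in a hypothetical C fragment, is that a file written under the shared /home tree by a rank on any worker node is immediately visible on the primary node; the /home/jane/results path is made up for illustration.

    /* Hypothetical sketch: each MPI rank writes its result under the
     * NFS-shared /home tree, so output from every worker node collects
     * in one place on the primary node.  The path below is made up.
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, len;
        char node[MPI_MAX_PROCESSOR_NAME];
        char path[256];
        FILE *out;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_processor_name(node, &len);

        /* One file per rank under the shared home directory. */
        snprintf(path, sizeof(path), "/home/jane/results/rank-%d.txt", rank);
        out = fopen(path, "w");
        if (out != NULL) {
            fprintf(out, "rank %d ran on %s\n", rank, node);
            fclose(out);
        }

        MPI_Finalize();
        return 0;
    }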