10.3 Installing PVM

This section describes how to set up the PVM software package, how to configure a simple virtual machine, and how to compile and run the example programs supplied with PVM. The first part describes the straightforward use of PVM and the most common problems in setting up and running PVM. The latter part describes some of the more advanced options available for customizing your PVM environment.

10.3.1 Setting Up PVM

One of the reasons for PVM's popularity is that it is simple to set up and use. It does not require special privileges to be installed: anyone with a valid login on the hosts can install it. In addition, only one person at an organization needs to get and install PVM for everyone at that organization to use it.

PVM uses two environment variables when starting and running. Each PVM user needs to set these two variables to use PVM. The first variable is PVM_ROOT, which is set to the location of the installed pvm3 directory. The second variable is PVM_ARCH, which tells PVM the architecture of this host and thus what executables to pick from the PVM_ROOT directory.
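For csh users the two variables can be set in '.cshrc'. The install path below is only an example, and pvmgetarch is the helper script the PVM distribution ships in its lib directory for detecting the architecture name:

```csh
# example settings in $HOME/.cshrc -- adjust PVM_ROOT to your installation
setenv PVM_ROOT $HOME/pvm3
setenv PVM_ARCH `$PVM_ROOT/lib/pvmgetarch`
```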

Because of security concerns, many sites no longer allow any of their computers, including those in clusters, to use rsh, which is what PVM uses by default to add hosts to a virtual machine. It is easy to configure PVM to use ssh instead: edit the file 'PVM_ROOT/conf/PVM_ARCH.def', replace rsh with ssh, and then recompile PVM and your applications.
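In the Linux configuration file, for instance, the relevant macro is RSHCOMMAND; the surrounding flags vary by architecture, so the lines below are only a sketch:

```
# in PVM_ROOT/conf/LINUX.def, change
ARCHCFLAGS = -DRSHCOMMAND=\"/usr/bin/rsh\"
# to
ARCHCFLAGS = -DRSHCOMMAND=\"/usr/bin/ssh\"
```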

If PVM is already installed at your site, you can skip ahead to "Creating Your Personal PVM." The PVM source comes with directories and makefiles for Linux and most architectures you are likely to have in your cluster. Building for each architecture type is done automatically by logging on to a host, going into the PVM_ROOT directory, and typing make. The 'makefile' will automatically determine which architecture it is being executed on, create the appropriate subdirectories, and build pvmd3, libpvm3.a, libfpvm3.a, pvmgs, and libgpvm3.a. It places all these files in 'PVM_ROOT/lib/PVM_ARCH', with the exception of pvmgs, which is placed in 'PVM_ROOT/bin/PVM_ARCH'.

Setup Summary

  • Set PVM_ROOT and PVM_ARCH in your '.cshrc' file.

  • Build PVM for each architecture type.

  • Create an '.rhosts' file on each host listing all the hosts.

  • Create a '$HOME/.xpvm_hosts' file listing all the hosts prepended by an "&".

10.3.2 Creating Your Personal PVM

Before we go over the steps to compile and run parallel PVM programs, you should be sure you can start up PVM and configure a virtual machine. On any host on which PVM has been installed you can type

% pvm

and you should get back a PVM console prompt signifying that PVM is now running on this host. You can add hosts to your virtual machine by typing at the console prompt

pvm> add hostname

You also can delete hosts (except the one you are on) from your virtual machine by typing

pvm> delete hostname

If you get the message "Can't Start pvmd," PVM will run autodiagnostics and report the reason found.

To see what the present virtual machine looks like, you can type

pvm> conf

To see what PVM tasks are running on the virtual machine, you can type

pvm> ps -a

Of course, you don't have any tasks running yet. If you type "quit" at the console prompt, the console will quit, but your virtual machine and tasks will continue to run. At any command prompt on any host in the virtual machine, you can type

% pvm

and you will get the message "pvm already running" and the console prompt. When you are finished with the virtual machine you should type

pvm> halt

This command kills any PVM tasks, shuts down the virtual machine, and exits the console. This is the recommended method to stop PVM because it makes sure that the virtual machine shuts down cleanly.

You should practice starting and stopping and adding hosts to PVM until you are comfortable with the PVM console. A full description of the PVM console and its many command options is given in the documentation that comes with the PVM software.

If you don't wish to type in a bunch of hostnames each time, there is a hostfile option. You can list the hostnames in a file one per line and then type

% pvm hostfile

PVM will then add all the listed hosts simultaneously before the console prompt appears. Several options can be specified on a per host basis in the hostfile, if you wish to customize your virtual machine for a particular application or environment.
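A hostfile might look like the following. The host names are examples; the per-host options shown (lo= for a remote login name, sp= for a relative speed rating, ep= for the executable search path) are a few of those PVM accepts, and an "&" prefix records a host's options without adding the host at startup:

```
# sample PVM hostfile -- one host per line, options after the name
node1
node2   lo=paul  sp=2000
&node3  ep=$HOME/pvm3/bin/$PVM_ARCH
```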

PVM may also be started in other ways. The functions of the console and a performance monitor have been combined in a graphical user interface called XPVM, which is available from the PVM Web site. If XPVM has been installed at your site, then it can be used to start PVM. To start PVM with this interface, type

% xpvm

The menu button labeled "hosts" will pull down a list of hosts you can add. If you click on a hostname, it is added, and an icon of the machine appears in an animation of the virtual machine. A host is deleted if you click on a hostname that is already in the virtual machine. On startup XPVM reads the file '$HOME/.xpvm_hosts', which is a list of hosts to display in this menu. Hosts without a leading "&" are added all at once at startup.

The quit and halt buttons work just like the PVM console. If you quit XPVM and then restart it, XPVM will automatically display what the running virtual machine looks like. Practice starting and stopping and adding hosts with XPVM. If any errors occur, they should appear in the window where you started XPVM.

10.3.3 Running PVM Programs

This section describes how to compile and run the example programs supplied with the PVM software. These example programs make useful templates on which to base your own PVM programs.

The first step is to copy the example programs into your own area:

% cp -r $PVM_ROOT/examples $HOME/pvm3/examples
% cd $HOME/pvm3/examples

The examples directory contains a 'Makefile.aimk' and 'Readme' file that describe how to build the examples. PVM supplies an architecture-independent make, aimk, that automatically determines PVM_ARCH and links any operating-system-specific libraries to your application. When you placed the 'cshrc.stub' in your '.cshrc' file, aimk was automatically added to your $PATH. Using aimk allows you to leave the source code and makefile unchanged as you compile across different architectures.

The master/worker programming model is the most popular model used in cluster computing. To compile the master/slave C example, type

% aimk master slave

If you prefer to work with Fortran, compile the Fortran version with

% aimk fmaster fslave

Depending on the location of PVM_ROOT, the INCLUDE statement at the top of the Fortran examples may need to be changed. If PVM_ROOT is not '$HOME/pvm3', then change the include to point to '$PVM_ROOT/include/fpvm3.h'. Note that PVM_ROOT is not expanded inside Fortran, so you must insert the actual path.
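For example, if PVM were installed under '/usr/local/pvm3' (a hypothetical path), the line would become:

```fortran
      INCLUDE '/usr/local/pvm3/include/fpvm3.h'
```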

The makefile moves the executables to '$HOME/pvm3/bin/PVM_ARCH', which is the default location where PVM will look for them on all hosts. If your file system is not shared across all your cluster nodes, then you will have to copy these executables to every node.

From one window start up PVM and configure some hosts. These examples are designed to run on any number of hosts, including one. In another window, cd to the location of the PVM executables and type

% master

The program will ask for the number of tasks. In these examples, this number does not have to match the number of hosts. Try several combinations.

The first example illustrates the ability to run a PVM program from a prompt on any host in the virtual machine. This is how you would run a serial a.out program on the front console of a cluster. The next example, a master/slave model called hitc, shows how to spawn PVM jobs from the PVM console and also from XPVM.

The model hitc illustrates dynamic load balancing using the pool-of-tasks paradigm. In this paradigm, the master program manages a large queue of tasks, always sending idle slave programs more work to do until the queue is empty. This paradigm is effective in situations where the hosts have very different computational powers, because the least-loaded or most powerful hosts do more of the work and all the hosts stay busy until the end of the problem. To compile hitc, type

% aimk hitc hitc_slave

Since hitc does not require any user input, it can be spawned directly from the PVM console. Start the PVM console, and add some cluster nodes. At the PVM console prompt, type

pvm> spawn -> hitc

The "->" spawn option causes all the print statements in hitc and in the slaves to appear in the console window. This can be a useful feature when debugging your first few PVM programs. You may wish to experiment with this option by placing print statements in 'hitc.f' and 'hitc_slave.f' and recompiling.
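PVM aside, the pool-of-tasks scheduling idea that hitc implements can be sketched in a few lines of Python; run_pool and its arguments are illustrative names, not part of PVM or its examples:

```python
# Sketch of the pool-of-tasks paradigm: a master keeps a queue of work
# units, and idle workers (the "slaves") pull the next unit as soon as
# they finish one, so faster workers end up doing more of the work.
import queue
import threading

def run_pool(tasks, num_workers, work):
    """Apply work() to every task using num_workers workers; return results."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()   # an idle worker grabs the next task
            except queue.Empty:
                return               # queue drained: this worker is done
            r = work(t)
            with lock:               # collect the result, as the master would
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

if __name__ == "__main__":
    print(sorted(run_pool(range(10), 3, lambda x: x * x)))
```

Because each worker returns to the queue only when it finishes its current task, no explicit speed ratings are needed; the load balances itself.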

10.3.4 PVM Console Details

The PVM console, called pvm, is a standalone PVM task that allows you to interactively start, query, and modify the virtual machine. The console may be started and stopped multiple times on any of the hosts in the virtual machine without affecting PVM or any applications that may be running.

When the console is started, pvm determines whether PVM is already running and, if not, automatically executes pvmd on this host, passing pvmd the command-line options and hostfile. Thus, PVM need not be running to start the console.

        pvm [-n<hostname>] [hostfile]

The -n option is useful for specifying another name for the master pvmd (in case the hostname doesn't match the IP address you want). This feature becomes very useful with Beowulf clusters because the nodes of the cluster are sometimes on their own network. In this case the front-end node will have two hostnames: one for the cluster and one for the external network. The -n option lets you specify the cluster name directly during PVM startup.
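For example, if the front-end's cluster-side hostname were node0 (a hypothetical name), the master pvmd could be bound to it with:

```
% pvm -nnode0 hostfile
```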

Once started, the console prints the prompt

pvm>

and accepts commands from standard input. The available commands are as follows:

  • add followed by one or more hostnames (cluster nodes), adds these hosts to the virtual machine.

  • alias defines or lists command aliases.

  • conf lists the configuration of the virtual machine including hostname, pvmd task ID, architecture type, and a relative speed rating.

  • delete followed by one or more hostnames, deletes these hosts from the virtual machine. PVM processes still running on these hosts are lost.

  • echo echoes arguments.

  • halt kills all PVM processes including console and then shuts down PVM. All daemons exit.

  • help can be used to get information about any of the interactive commands. The help command may be followed by a command name that will list options and flags available for this command.

  • id prints the console task ID.

  • jobs lists running jobs.

  • kill can be used to terminate any PVM process.

  • mstat shows status of specified hosts.

  • ps -a lists all processes currently on the virtual machine, their locations, their task IDs, and their parents' task IDs.

  • pstat shows status of a single PVM process.

  • quit exits the console, leaving daemons and PVM jobs running.

  • reset kills all PVM processes except consoles, and resets all the internal PVM tables and message queues. The daemons are left in an idle state.

  • setenv displays or sets environment variables.

  • sig followed by a signal number and tid, sends the signal to the task.

  • spawn starts a PVM application. Options include the following:

    • -count specifies the number of tasks to start; default is 1

    • -(host) spawn on the named host; default is any

    • -(PVM_ARCH) spawn on hosts of type PVM_ARCH

    • -? enable debugging

    • -> redirect task output to console

    • ->file redirect task output to file

    • ->>file redirect task output append to file

    • -@ trace job; display output on console

    • -@file trace job; output to file

  • unalias undefines a command alias.

  • version prints version of PVM being used.

PVM supports the use of multiple consoles. It is possible to run a console on any host in an existing virtual machine and even multiple consoles on the same machine. It is possible to start up a console in the middle of a PVM application and check on its progress.

Part III: Managing Clusters