A single line in the /etc/hosts file on my Ubuntu Linux machine wasted my whole precious afternoon. I have some assumptions about the cause of this problem, but I don't want to spend too much time investigating what really caused the trouble. Anyway, here are the listings of what worked and what didn't:
/etc/hostname
ubuntu
mpd.hosts.cluster
master
worker-01
Bad /etc/hosts
127.0.0.1 localhost localhost.localdomain ubuntu
192.168.200.128 master
192.168.200.129 worker-01
Result
thitiv@ubuntu:~$ mpdboot -n 2 -f mpd.hosts.cluster
thitiv@master's password:
mpdboot_ubuntu (handle_mpd_output 359): failed to ping mpd on master; recvd output={}
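A quick way to see what the bad file does (just a sanity check; getent hosts reads the same /etc/hosts entries the resolver uses) is to ask what the node's own name maps to:
thitiv@ubuntu:~$ getent hosts ubuntu
127.0.0.1       localhost localhost.localdomain ubuntu
With this file the local hostname 'ubuntu' maps to the loopback address, so the MPD started here most likely identifies itself by an address the other node cannot reach.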
Good /etc/hosts
127.0.0.1 localhost localhost.localdomain
192.168.200.128 ubuntu master
192.168.200.129 worker-01
Result
thitiv@ubuntu:~$ mpdboot -n 2 -f mpd.hosts.cluster
thitiv@worker-01's password:
thitiv@ubuntu:~$
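To double-check that the ring is really up, mpdtrace and a trivial mpiexec run are handy (host names here are the ones from this post):
thitiv@ubuntu:~$ mpdtrace
(should list both nodes of the ring)
thitiv@ubuntu:~$ mpiexec -n 2 hostname
(should print each node's hostname once)
mpdtrace lists the hosts participating in the MPD ring, and the hostname run confirms that processes can actually be launched on both of them.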
At this point, with the /etc/hosts now fixed, the MPICH2 cluster could be booted up successfully.
I don't want to invest my time trying to understand this right now, but I would really appreciate anyone explaining to me what went wrong.
Keywords: parallel-computing, mpich2, ubuntu-linux, problems, solutions, troubleshooting
Comments
Usually, you have to make sure that all of the hosts are reachable from all of the machines on the cluster's internal network.
Is ubuntu an alias for one of the machines in the hosts file?
With your original /etc/hosts file:
127.0.0.1 localhost localhost.localdomain ubuntu
192.168.200.128 master
192.168.200.129 worker-01
If you share that hosts file on all of the machines, the name 'ubuntu' always refers to the machine you're on. That is, regardless of whether you're on master or worker-01, each machine thinks it's 'ubuntu' as well.
You can get serious identity issues when the MPDs try to ping each other if things are inconsistent. It helps to make sure that every node name resolves to an interface on the same cluster subnet, which is what you did by moving the name.
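A quick per-node sanity check (assuming getent is available; a one-line Python socket.gethostbyname call works just as well) is:
hostname
getent hosts $(hostname)
If the second command prints 127.0.0.1 rather than the node's cluster address, the MPD on that node will identify itself by the loopback address and the other MPDs won't be able to ping it.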
Matthew
Running 'ssh hostname date' requires a password.
Yes, I was running Ubuntu on all nodes. I used apt-get to install the OpenSSH package supplied with the Ubuntu distribution. I also compiled MPICH-2 from the latest source.
Unlike the MPICH-1 version that I used many years ago, I didn't have to set up an rlogin file for MPICH-2. I think MPICH-2 has switched from rsh/rlogin to SSH, so basically we should make sure that SSH itself works before we start installing MPICH-2.
I tried to simplify things a little bit by creating the same login names and passwords on all machines.
I would suggest you consult an SSH how-to document and try to SSH between the machines before you proceed with the MPICH-2 installation.
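For example, a minimal passwordless-SSH setup with OpenSSH looks roughly like this (using the host names from this post; adjust to your own):
thitiv@ubuntu:~$ ssh-keygen -t rsa
thitiv@ubuntu:~$ ssh-copy-id thitiv@worker-01
thitiv@ubuntu:~$ ssh worker-01 date
If the last command prints the date without asking for a password, mpdboot should be able to start the remote MPDs non-interactively.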
Thiti.
kanibal@kubuntu:~$ mpiexec -n 2 cpi
problem with execution of cpi on nodo2: [Errno 2] No such file or directory
problem with execution of cpi on kabuntu: [Errno 2] No such file or directory
Do you know a possible answer?
kanibal
kanibalv@gmail(NOSPAM).com
Something might have gone wrong with your shared file system; perhaps the NFS shared directory was not properly mounted. I would suggest you log on to each machine manually and verify that 'cpi' is accessible from each node.
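A quick check (the path below is only a placeholder for wherever you built the examples) would be:
kanibal@kubuntu:~$ ls -l /path/to/cpi
kanibal@kubuntu:~$ ssh nodo2 ls -l /path/to/cpi
If the second command reports 'No such file or directory', the shared directory is not visible on that node, which matches the error mpiexec printed.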
--
Thiti.
I was thinking, if it is not too much trouble, that a small tutorial for MPICH2 on Ubuntu would be useful.
Thanks for your motivation.
kanibal.-
Hi, I made a small tutorial; please tell me how it is... (especially my English).
the link is:
http://kanibalv.blogspot.com/
"Installation and configuration of MPICH2 for a Beowulf Cluster".
Thanks for all...
kanibal..