...
- Qlogin sessions should not be used to "reserve" nodes or slots.
- Please only launch the number of processes that you requested. For instance, if you request 1 slot on a machine, use only 1 processor core on that machine. Your qlogin session may well share the node with another job, and oversubscribing the node would have an adverse impact on someone else. If you need more slots, request them.
- Please logout of your interactive session when you are done to free up the resources.
- There is a default wall clock time limit of 24 hours. This can be overridden by specifying a value via `-l h_rt`, but please keep it reasonable.
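For example, a session with a longer wall clock limit could be requested as follows (a sketch; the 48-hour value is illustrative, not a recommendation):

```shell
# Request an interactive session with a 48-hour wall clock limit
# instead of the 24-hour default (the time value here is illustrative):
qlogin -l h_rt=48:00:00
```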
**Info:** It is possible to run X11 programs via qlogin. First, log in to the cluster with X11 forwarding enabled: `ssh -Y argon.hpc.uiowa.edu`. Then launch your qlogin session. Once it is established, you can run programs that need X11. For performance reasons, this is likely only usable from an on-campus connection. If you use FastX, X11 forwarding is already in place.
It is important to remember that a qlogin job is just like any other job that is submitted via SGE. You can request resources in exactly the same way, including parallel environments with the appropriate number of slots. However, in the end, the qlogin session is an ssh connection to one of the compute nodes. This means that the environment at the shell prompt is a "fresh" environment: none of the special SGE variables that are set during a normal batch job are present in the qlogin environment. Probably the most important of these is the `$PE_HOSTFILE` environment variable, which contains the path to the list of hosts selected by SGE for a parallel job. To be clear, the hostfile created by SGE still exists, but the environment variable that points to it is not present in the interactive session, so MPI implementations will not be able to detect that they are running under the queue system.
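The difference is easy to see from the shell prompt. In a batch job script `$PE_HOSTFILE` holds a path, but in the fresh environment of an interactive session it expands to an empty string:

```shell
# Sketch: in an interactive session the batch-job environment is not
# inherited, so $PE_HOSTFILE expands to an empty string (in a batch job
# script it would hold the path to the SGE-generated hostfile).
echo "PE_HOSTFILE is: '$PE_HOSTFILE'"
```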
As such, the environment variables that a job needs must be handled explicitly in an interactive session, just as one would do on the login host. For single-slot interactive jobs, or even multi-slot single-node jobs, it is simply a matter of setting up the environment that you need for your session. However, qlogin jobs that span several hosts and need to manage those hosts require a bit more work. As an example, say that an Open MPI job is being debugged in a qlogin session. The qlogin session could be started as:
...
The two bits of information that are important in the output are the job number on line 2 and the masterq host on the last line. To find out the names of the hosts allocated by SGE in the above example, one could do:
```
cat /opt/gridengine/default/spool/compute-3-41/active_jobs/665450.1/pe_hostfile
```
This would return something like:
...
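Once the hostfile has been located, its contents can be turned into something an MPI launcher understands. The sketch below assumes the standard SGE pe_hostfile format of `hostname slots queue processor-range` per line and converts it into an Open MPI-style hostfile; the sample file contents are illustrative, not real output:

```shell
# Create an illustrative pe_hostfile (the hostnames and slot counts
# are made up for this sketch; real contents come from SGE's spool dir):
cat > pe_hostfile.sample <<'EOF'
compute-3-41 2 UI@compute-3-41 UNDEFINED
compute-3-42 2 UI@compute-3-42 UNDEFINED
EOF

# Keep the hostname (field 1) and slot count (field 2), rewritten in
# the "host slots=N" form that Open MPI's --hostfile option accepts:
awk '{print $1, "slots=" $2}' pe_hostfile.sample > hostfile
cat hostfile
# compute-3-41 slots=2
# compute-3-42 slots=2
```

The resulting file could then be passed to the launcher explicitly, e.g. `mpirun --hostfile hostfile -np 4 ./my_program`, since the launcher cannot discover the hosts on its own inside the interactive session.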