Visualization and Virtual Network Computing (VNC) Sessions
Frontera uses Intel's Cascade Lake (CLX) processors for all visualization and rendering operations. We use the Intel OpenSWR library to render raster graphics with OpenGL, and the Intel OSPRay framework for ray-traced images inside visualization software. OpenSWR can be loaded by executing "module load swr".
Frontera currently has no separate visualization queue. All visualization apps are available on all nodes. VNC and DCV sessions are available on any queue, either through the command line or via the TACC Visualization Portal. We recommend submitting to Frontera's
development queue for interactive sessions. If you are interested in an application that is not yet available, please submit a help desk ticket through the Frontera Portal.
Remote Desktop Access
Remote desktop access to Frontera is provided through a DCV or VNC connection to one or more compute nodes. Users must first connect to a Frontera login node (see Accessing the System) and submit a special interactive batch job that:
- allocates a set of Frontera compute nodes
- starts a vncserver remote desktop process on the first allocated node
- sets up a tunnel through the login node to the dcvserver or vncserver access port
Once the remote desktop process is running on the compute node and a tunnel through the login node is created, an output message identifies the access port for connecting a remote desktop viewer. A remote desktop viewer application is run on the user's remote system and presents the desktop to the user.
Note: If this is your first time connecting to Frontera using VNC, you must run vncpasswd to create a password for your VNC servers. This should NOT be your login password! This mechanism only deters unauthorized connections; it is not fully secure, as only the first eight characters of the password are saved. All VNC connections are tunneled through SSH for extra security, as described below.
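A first-time setup might look like the following sketch (the exact prompts depend on the VNC version installed; a view-only password is optional):
login1$ vncpasswd
Password:
Verify:
Would you like to enter a view-only password (y/n)? n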
Follow the steps below to start an interactive session.
Start a Remote Desktop
TACC has provided a DCV job script (/share/doc/slurm/job.dcv), a VNC job script (/share/doc/slurm/job.vnc), and a combined job script that prefers DCV and fails over to VNC if a DCV license is not available (/share/doc/slurm/job.dcv2vnc). Each script requests one node in the development queue for two hours, creating a remote desktop session, either DCV or VNC.
login1$ sbatch /share/doc/slurm/job.vnc
login1$ sbatch /share/doc/slurm/job.dcv
login1$ sbatch /share/doc/slurm/job.dcv2vnc
You may modify or overwrite script defaults with sbatch command-line options (note: the command options must come between sbatch and the script):
- -t hours:minutes:seconds modifies the job runtime
- -A projectnumber specifies the project/allocation to be charged
- -N nodes specifies the number of nodes needed
- -p partition specifies an alternate queue
See Table 6 for more sbatch options.
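For example, to request a one-hour, single-node DCV session charged to a specific allocation (the project name below is a placeholder):
login1$ sbatch -t 01:00:00 -N 1 -A MYPROJECT /share/doc/slurm/job.dcv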
All arguments after the job script name are sent to the vncserver command. For example, to set the desktop resolution to 1440x900, use:
login1$ sbatch /share/doc/slurm/job.vnc -geometry 1440x900
vnc.job" script starts a
vncserverprocess and writes to the output file, "
vncserver.out" in the job submission directory, with the connect port for the vncviewer.
Note that the DCV viewer adjusts desktop resolution to your browser or DCV client, so desktop resolution does not need to be specified.
Watch for the "To connect" message at the end of the output file, or watch the output stream in a separate window with the commands:
login1$ touch vncserver.out ; tail -f vncserver.out
login1$ touch dcvserver.out ; tail -f dcvserver.out
The lightweight window manager, xfce, is the default DCV and VNC desktop and is recommended for remote performance. Gnome is available; to use gnome, open the "~/.vnc/xstartup" file (created after your first VNC session) and replace "startxfce4" with "gnome-session". Note that gnome may lag over slow internet connections.
Create an SSH Tunnel to Frontera
DCV connections are encrypted via TLS and are secure. For VNC connections, TACC requires users to create an SSH tunnel from the local system to the Frontera login node to assure that the connection is secure. The tunnels created for the VNC job operate only on the localhost interface, so you must use localhost in the port forward argument, not the Frontera hostname. On a Unix or Linux system, execute the following command once the port has been opened on the Frontera login node:
localhost$ ssh -f -N -L xxxx:localhost:yyyy username@frontera.tacc.utexas.edu
- yyyy is the port number given by the vncserver batch job
- xxxx is a port on the remote system. Generally, the port number specified on the Frontera login node, yyyy, is a good choice to use on your local system as well
- -f puts the ssh command into the background after connecting
- -N instructs SSH to only forward ports, not to execute a remote command
- -L forwards the port
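As a concrete sketch, if the batch job reports port 5902, the tunnel command might look like this (the port and username are placeholders):
localhost$ ssh -f -N -L 5902:localhost:5902 username@frontera.tacc.utexas.edu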
On Windows systems, find the menu in the Windows SSH client where tunnels can be specified, enter the local and remote ports as required, then ssh to Frontera.
Connecting the vncviewer
Once the SSH tunnel has been established, use a VNC client to connect to the local port you created, which will then be tunneled to your VNC server on Frontera. Connect to localhost:xxxx, where xxxx is the local port you used for your tunnel. In the examples above, we would connect the VNC client to localhost::xxxx. (Some VNC clients accept localhost:xxxx as well.)
We recommend the TigerVNC VNC Client, a platform independent client/server application.
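If you use TigerVNC, the connection can also be made from the command line; a sketch, where xxxx is the local tunnel port from above:
localhost$ vncviewer localhost:xxxx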
Once the desktop has been established, two initial xterm windows are presented (which may be overlapping). One, which is white-on-black, manages the lifetime of the VNC server process. Killing this window (typically by typing "exit" or "ctrl-D" at the prompt) will cause the vncserver to terminate and the original batch job to end. Because of this, we recommend that this window not be used for other purposes; it is just too easy to accidentally kill it and terminate the session.
The other xterm window is black-on-white, and can be used to start either serial programs running on the node hosting the vncserver process, or parallel jobs running across the set of cores associated with the original batch job. Additional xterm windows can be created using the window-manager left-button menu.
Running Applications on the Remote Desktop
From an interactive desktop, applications can be run from icons or from xterm command prompts. Two special cases arise: running parallel applications, and running applications that use OpenGL.
Running Parallel Applications from the Desktop
Parallel applications are run on the desktop using the same ibrun wrapper described above (see Running). The command:
c101-001$ ibrun [ibrun options] application [application options]
will run application on the associated nodes, as modified by the ibrun options.
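For example, a hypothetical MPI executable could be launched from the desktop like so (the program name is a placeholder; with no options, ibrun uses the task layout from the original batch job):
c101-001$ ibrun ./my_mpi_app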
Running OpenGL/X Applications On The Desktop
Frontera uses the OpenSWR OpenGL library to perform efficient rendering. At present, the compute nodes on Frontera do not support native X instances. All windowing environments should use a DCV desktop launched via the job script in /share/doc/slurm/job.dcv, a VNC desktop launched via the job script in /share/doc/slurm/job.vnc, or the TACC Vis portal.
swr: To access the accelerated OpenSWR OpenGL library, it is necessary to use the swr module to point to the swr OpenGL implementation and configure the number of threads to allocate to rendering.
c101-001$ module load swr
c101-001$ swr [options] application [application args]
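As a quick sanity check (a sketch assuming the glxinfo utility is available in your desktop session), you can confirm that the OpenSWR renderer is active:
c101-001$ swr glxinfo | grep -i renderer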
Parallel VisIt on Frontera
VisIt was compiled with the Intel compiler and the mvapich2 MPI stack.
After connecting to a VNC server on Frontera, as described above, load the VisIt module at the beginning of your interactive session before launching the VisIt application:
c101-001$ module load swr visit
c101-001$ swr visit
VisIt first loads a dataset and presents a dialog allowing for selecting either a serial or parallel engine. Select the parallel engine. Note that this dialog will also present options for the number of processes to start and the number of nodes to use; these options are actually ignored in favor of the options specified when the VNC server job was started.
Preparing data for Parallel Visit
In order to take advantage of parallel processing, VisIt input data must be partitioned and distributed across the cooperating processes. This requires that the input data be explicitly partitioned into independent subsets at the time it is input to VisIt. VisIt supports SILO data, which incorporates a parallel, partitioned representation. Otherwise, VisIt supports a metadata file (with a
.visit extension) that lists multiple data files of any supported format that are to be associated into a single logical dataset. In addition, VisIt supports a "brick of values" format, also using the .visit metadata file, which enables single files containing data defined on rectilinear grids to be partitioned and imported in parallel. Note that VisIt does not support VTK parallel XML formats (
.pvts). For more information on importing data into VisIt, see Getting Data Into VisIt though this documentation refers to VisIt version 2.0, it appears to be the most current available.
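As an illustrative sketch, a .visit metadata file that groups four pre-partitioned files into one logical dataset typically uses the !NBLOCKS directive (the filenames below are placeholders):
!NBLOCKS 4
part0.silo
part1.silo
part2.silo
part3.silo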
Parallel ParaView on Frontera
After connecting to a VNC server on Frontera, as described above, do the following:
Set up your environment with the necessary modules. Load the paraview modules in this order:
c101-001$ module load swr qt5 ospray paraview
Launch ParaView with the swr wrapper:
c101-001$ swr -p 1 paraview [paraview client options]
Click the "Connect" button, or select File -> Connect
Select the "auto" configuration, then press "Connect". In the Paraview Output Messages window, you'll see what appears to be an 'lmod' error, but can be ignored. Then you'll see the parallel servers being spawned and the connection established.