Last Updated: 2020-11-04
In this codelab, you are going to deploy an auto-scaling HPC cluster on Google Cloud that comes with OpenFOAM, Paraview, and mesh generation tools. You will use this infrastructure to connect your local Paraview client to a Paraview server deployed on ephemeral compute nodes of the Cloud CFD cluster.
This setup will allow you to leverage Google Cloud Platform as a Paraview render farm for visualization and post-processing of scientific data.
In HPC, there are clear distinctions between system administrators and system users. System administrators generally have "root access", enabling them to manage and operate compute resources. System users are generally researchers, scientists, and application engineers who only need to leverage the resources to execute jobs.
On Google Cloud Platform, the OS Login API provisions POSIX user information from G Suite, Cloud Identity, and Gmail accounts. Additionally, OS Login integrates with GCP's Identity and Access Management (IAM) system to determine whether users should be allowed to escalate privileges on Linux systems.
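If OS Login is not already enabled on your project, a minimal sketch of enabling it project-wide with the gcloud CLI is shown below. The Cloud CFD deployment may already handle this for you, so treat this as an optional, assumed step rather than a required part of the codelab.
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE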
In this tutorial, we assume you are filling the system administrator and compute engine administrator roles. We will configure IAM policies to give you sufficient permissions to complete the tasks in this codelab.
To give yourself the necessary IAM roles to complete this tutorial, add the appropriate role bindings to your account from the IAM & Admin page in the Google Cloud Console or with the gcloud command-line tool.
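As a sketch of granting a role from the command line, the example below uses a placeholder project ID and user email, and roles/compute.osAdminLogin as a representative role; the exact set of roles required for this codelab may differ.
gcloud projects add-iam-policy-binding PROJECT_ID --member="user:you@example.com" --role="roles/compute.osAdminLogin"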
In this section you will configure your firewall rules in Google Cloud Platform to permit a reverse SSH connection from Paraview server to your local Paraview Client.
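As a hedged illustration of the kind of rule involved, the command below creates an ingress rule that allows SSH (tcp:22) from a single workstation IP on the default network. The rule name, network, and source range are placeholders, and the ports your setup needs (for example, Paraview's default port 11111) may differ from what is shown here.
gcloud compute firewall-rules create allow-ssh-from-workstation --direction=INGRESS --action=ALLOW --rules=tcp:22 --source-ranges=203.0.113.10/32 --network=default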
In this section, you will deploy the Cloud CFD solution, an auto-scaling HPC cluster with the Slurm job scheduler and software that supports computational fluid dynamics workflows, including Paraview.
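Once the deployment finishes, you can sanity-check that the cluster's VM instances exist with the gcloud CLI. The filter pattern below is an assumption about the login node's name and may need adjusting for your deployment.
gcloud compute instances list --filter="name~login"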
In this section, we will access the cluster's login node to configure Slurm accounting, so that you can submit jobs using the Slurm job scheduler.
On the login node, become the root user:
sudo su
Save the current cluster configuration to a file called config.yaml:
cluster-services list all > config.yaml
Append a sample slurm_accounts block to the config.yaml file:
cluster-services sample slurm_accounts >> config.yaml
Edit config.yaml so that your user belongs to a Slurm account that can submit jobs to the openfoam partition. Make sure you remove the empty slurm_accounts: [] entry that is pre-populated in the cluster-configuration file. The slurm_accounts configuration below will create a Slurm account called cfd with the user joe added to it. Users in this account will be allowed to submit jobs to the meshing, openfoam, and paraview partitions.
slurm_accounts:
- allowed_partitions:
  - meshing
  - openfoam
  - paraview
  name: cfd
  users:
  - joe
Preview the update to slurm_accounts and verify that you have entered the Slurm accounting information correctly:
cluster-services update slurm_accounts --config=config.yaml --preview
Apply the update:
cluster-services update slurm_accounts --config=config.yaml
When the update completes, exit the root shell:
exit
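As an optional check, assuming the cfd account and joe user from the example above, you can confirm the Slurm associations and run a quick test job from the login node; these are standard Slurm commands rather than part of the codelab itself. Note that the srun command will trigger an auto-scaled compute node, so it may take a few minutes to return.
sacctmgr show assoc format=account,user,partition
srun -A cfd -p openfoam -n 1 hostname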
In this section, you will use Paraview on your local workstation to connect to a Paraview server deployed to compute nodes on your cluster.
On your local workstation, create a directory called paraview-pvsc/:
mkdir paraview-pvsc
Copy the paraview-gcp.pvsc file from your login node to paraview-pvsc/:
scp USERNAME@LOGIN-IP:/apps/share/paraview-gcp.pvsc ./paraview-pvsc/
Launch Paraview:
paraview &
In Paraview, open File > Connect, load the paraview-gcp.pvsc server configuration, select it, and click Connect. From here, your Paraview client will launch an Xterm window. Within this window, a series of commands are run automatically for you, and you will be able to monitor the status of the node configuration.
Once the job starts and the Paraview server is connected, you will be able to open files in your Paraview client that are located on your Cloud CFD cluster.
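If you want to watch the Paraview job from the login node, standard Slurm commands work here; the paraview partition name matches the configuration above, and USERNAME is a placeholder for your cluster username.
squeue -u USERNAME -p paraview
sinfo -p paraview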
In this codelab, you created a cloud-native HPC cluster and connected your local Paraview client to a Paraview server running on auto-scaling compute nodes on Google Cloud Platform!