Last Updated: 2021-07-23
In this codelab, you will deploy a compute-optimized Google Compute Engine (GCE) instance on Google Cloud with FEOTS pre-installed. You will use Terraform to deploy the instance, its VPC network, firewall rules, and service account. Then, you will use this infrastructure to run the Argentine Basin test case.
In HPC, there are clear distinctions between system administrators and system users. System administrators generally have "root access" enabling them to manage and operate compute resources. System users are generally researchers, scientists, and application engineers who only need to use the resources to execute jobs.
On Google Cloud, the OS Login API provisions POSIX user information from Google Workspace, Cloud Identity, and Gmail accounts. Additionally, OS Login integrates with GCP's Identity and Access Management (IAM) system to determine if users should be allowed to escalate privileges on Linux systems.
In this tutorial, we assume you are filling the system administrator and compute engine administrator roles. We will configure IAM policies to give you sufficient permissions to accomplish the following tasks:
To give yourself the necessary IAM roles to complete this tutorial, in the Google Cloud Console:
Your login now has the permissions required to initiate the creation of the HPC cluster.
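If you prefer the command line to the Console, the equivalent role grants can be made with gcloud projects add-iam-policy-binding. The sketch below only prints the commands rather than executing them; YOUR_PROJECT and EMAIL_ADDRESS are placeholders you must replace.

```shell
# Print (but do not execute) the gcloud commands that grant the
# three IAM roles this tutorial requires. Replace YOUR_PROJECT and
# EMAIL_ADDRESS before running the printed commands.
print_role_grants() {
  local project="$1" email="$2"
  for role in roles/compute.osLogin roles/iam.serviceAccountUser roles/compute.admin; do
    echo "gcloud projects add-iam-policy-binding $project --member=user:$email --role=$role"
  done
}

print_role_grants YOUR_PROJECT EMAIL_ADDRESS
```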
To verify you have assigned the correct roles, open your Cloud Shell and run the following command, replacing YOUR_PROJECT and EMAIL_ADDRESS with your project and email address.
$ gcloud projects get-iam-policy YOUR_PROJECT --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members=user:EMAIL_ADDRESS"
This command will yield the output:

ROLE
roles/compute.osLogin
roles/iam.serviceAccountUser
roles/compute.admin
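If you want to script this verification step, you can check the command's output for each required role. A minimal sketch, with the gcloud output mocked as the table shown above (in practice you would capture the real command's output into ROLE_OUTPUT):

```shell
# Check that all three required roles appear in the get-iam-policy
# output. ROLE_OUTPUT is mocked here for illustration; replace it
# with the captured output of the gcloud command above.
ROLE_OUTPUT="ROLE
roles/compute.osLogin
roles/iam.serviceAccountUser
roles/compute.admin"

missing=0
for role in roles/compute.osLogin roles/iam.serviceAccountUser roles/compute.admin; do
  echo "$ROLE_OUTPUT" | grep -q "^$role$" || { echo "missing: $role"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required roles assigned"
```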
In this section, you will deploy a single compute-optimized GCE instance that will be used to run FEOTS. This instance will have 16 Intel Cascade Lake vCPUs.
cd ~
git clone https://github.com/FluidNumerics/rcc-apps.git
Navigate to the feots terraform directory:

cd ~/rcc-apps/feots/tf/gce_cluster
Create the infrastructure plan:

make plan

Review the plan, then deploy the instance:

make apply
Option: This pair of gcloud commands will determine the login node name and SSH into it:
export CLUSTER_LOGIN_NODE=$(gcloud compute instances list --zones us-west1-b --filter="name ~ feots.*" --format="value(name)" | head -n1)
gcloud compute ssh ${CLUSTER_LOGIN_NODE} --zone us-west1-b
$ feots --help

 FEOTS (feots) Command Line Interface
 Copyright Los Alamos National Laboratory (2017-2020)
 Licensed for use under 3-Clause BSD License

 For support related issues, https://github.com/lanl/feots/issues/new

 A program for performing creating impulse functions, diagnosing transport
 operators from POP IRFs, and conducting offline tracer simulations using
 diagnosed transport operators.

  feots [tool] [options]

 [tool] can be :

   impulse
     Use a POP-Mesh, with land-mask, and a chosen advection-difussion
     stencil to create impulse fields for capturing impulse response
     functions.

   popmesh
     Extract POP-Mesh information from POP standard output.

   genmask
     Create a regional FEOTS mask using lat-lon domain bounds

   operator-diagnosis
     Diagnose transport operators using impulse fields and POP IRF output.
     You must specify the IRF file using the --irf option.

   region-extraction
     Create regional transport operators from global transport operators.
     Regional operators are stored in the --regional-db directory.

   genmaps
     Create a mappings.regional file from a valid mask file. The
     mappings.regional file is stored in the --out directory.

   initialize
     Use the built in initialization routines to create tracer initial
     conditions

   integrate
     Run the offline tracer simulation in a forward integration mode

   equilibrate
     Run the offline tracer simulation using JFNK to find the equilibrated
     tracer field

 [options] can be :

   --help
     Display this help message

   --param-file /path/to/param/file
     Specifies the full path to a file with namelist settings for the
     feots application. If not provided, runtime.params in your current
     directory is assumed.

   --pop-file /path/to/irf-file
     Specifies the full path to a netcdf file with standard POP output
     (For popmesh)

   --irf /path/to/irf-file
     Specifies the full path to a netcdf file with IRFs (For operator
     diagnosis and regional extraction)

   --oplevel 0
     Specifies the index of the operator in the operator sequence.
     This option determines the time level encoded to _advect.{oplevel}.data/conn

   --dbroot /path/to/feot/db
     Specifies the path to a FEOTS database

   --out /path/to/output/directory
     Specifies the path to write model output. Defaults to ./

   --no-vertical-mixing
     Disables the vertical mixing operator for forward integration and
     equilibration

   --regional-db /path/to/regional-database/directory
     Specifies the path to read/write regional operators. Defaults to ./
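Putting the tool and option syntax together, a forward-integration invocation might look like the following. This is an illustrative sketch assembled from the options documented in the help text above; the paths are placeholders, not files shipped with the image.

```shell
# Illustrative only: assemble a forward-integration command line
# using the "integrate" tool and options from the help text above.
# The paths below are placeholders, not files shipped with the image.
cmd=(feots integrate
     --param-file ./runtime.params
     --regional-db ./regional-db/
     --out ./output/)

# Show the command that would run; on the instance you would
# execute it with: "${cmd[@]}"
echo "${cmd[@]}"
```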
The example directory /opt/feots/examples/zapiola has the contents listed below.

$ ls /opt/feots/examples/zapiola
bash  demo.sh  FEOTSInitialize.f90  FEOTSInitialize.o  genmask  GenMask.f90
GenMask.o  init  irfs  makefile  README.md  runtime.params  slurm
To run the Argentine Basin demo, you will execute the provided demo.sh script.
The input decks for this example are included in the FEOTS VM image under /opt/feots-db. This includes a global mesh file and the regionally extracted transport and vertical mixing operators for the Argentine Basin simulation.
For this section, you must be connected via SSH to the feots-0 node created in the previous section.
$ bash /opt/feots/examples/zapiola/demo.sh
Wait for the job to complete.
When the job completes, you will have a directory feots/ that contains the simulation output. The directory will contain NetCDF output for each dye tracer for every 200 iterations (2.5 simulation days).

$ ls feots/
genmask  gmon.out  init  mappings.regional  mask.nc  mesh.nc  runtime.params
Tracer.00000.init.nc        Tracer.00000.0000000200.nc  Tracer.00000.0000000400.nc
Tracer.00000.0000000600.nc  Tracer.00000.0000000800.nc  Tracer.00000.0000001000.nc
Tracer.00000.0000001200.nc  Tracer.00000.0000001400.nc  Tracer.00000.0000001600.nc
Tracer.00001.init.nc        Tracer.00001.0000000200.nc  Tracer.00001.0000000400.nc
Tracer.00001.0000000600.nc  Tracer.00001.0000000800.nc  Tracer.00001.0000001000.nc
Tracer.00001.0000001200.nc  Tracer.00001.0000001400.nc  Tracer.00001.0000001600.nc
Tracer.00002.init.nc        Tracer.00002.0000000200.nc  Tracer.00002.0000000400.nc
Tracer.00002.0000000600.nc  Tracer.00002.0000000800.nc  Tracer.00002.0000001000.nc
Tracer.00002.0000001200.nc  Tracer.00002.0000001400.nc  Tracer.00002.0000001600.nc
Tracer.00003.init.nc        Tracer.00003.0000000200.nc  Tracer.00003.0000000400.nc
Tracer.00003.0000000600.nc  Tracer.00003.0000000800.nc  Tracer.00003.0000001000.nc
Tracer.00003.0000001200.nc  Tracer.00003.0000001400.nc  Tracer.00003.0000001600.nc
Tracer.00004.init.nc        Tracer.00004.0000000200.nc  Tracer.00004.0000000400.nc
Tracer.00004.0000000600.nc  Tracer.00004.0000000800.nc  Tracer.00004.0000001000.nc
Tracer.00004.0000001200.nc  Tracer.00004.0000001400.nc  Tracer.00004.0000001600.nc
Tracer.00005.init.nc        Tracer.00005.0000000200.nc  Tracer.00005.0000000400.nc
Tracer.00005.0000000600.nc  Tracer.00005.0000000800.nc  Tracer.00005.0000001000.nc
Tracer.00005.0000001200.nc  Tracer.00005.0000001400.nc  Tracer.00005.0000001600.nc
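Since output is written every 200 iterations and 200 iterations span 2.5 simulation days, the iteration number embedded in each tracer filename maps directly to simulation time. A small sketch of that conversion, assuming the filename pattern shown above:

```shell
# Convert the iteration number in a tracer output filename to
# simulation days, assuming (per the text above) that 200
# iterations correspond to 2.5 simulation days.
iteration_to_days() {
  local f="$1"
  # Extract the zero-padded iteration count, e.g. 0000001600 -> 1600
  local iter
  iter=$(echo "$f" | sed -E 's/Tracer\.[0-9]+\.0*([0-9]+)\.nc/\1/')
  # 2.5 days per 200 iterations = 0.0125 days per iteration
  echo "$iter 0.0125" | awk '{printf "%.1f\n", $1 * $2}'
}

iteration_to_days "Tracer.00000.0000001600.nc"   # final output: 20.0 days
```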
In this codelab, you created a compute optimized GCE instance on Google Cloud and ran an offline tracer simulation using FEOTS and ocean transport operators generated from a state-of-the-art climate simulation.
To avoid incurring charges to your Google Cloud account for the resources used in this codelab:
The easiest way to eliminate billing is to delete the project you created for the codelab.
Caution: Deleting a project has the following effects:
If you plan to explore multiple codelabs and quickstarts, reusing projects can help you avoid exceeding project quota limits.
Navigate to the feots/tf/gce_cluster example directory:

cd ~/rcc-apps/feots/tf/gce_cluster
make destroy