Last Updated: 2021-07-23

What you will build

In this codelab, you are going to deploy a compute-optimized Google Compute Engine (GCE) instance on Google Cloud with FEOTS pre-installed. You will use Terraform to deploy the instance, its VPC network, firewall rules, and service account. Then, you will use this infrastructure to run the Argentine Basin test case.

What you will learn

What you will need

Set IAM Policies

In HPC, there are clear distinctions between system administrators and system users. System administrators generally have "root access" enabling them to manage and operate compute resources. System users are generally researchers, scientists, and application engineers who only need to use the resources to execute jobs.

On Google Cloud, the OS Login API provisions POSIX user information from Google Workspace, Cloud Identity, and Gmail accounts. Additionally, OS Login integrates with GCP's Identity and Access Management (IAM) system to determine if users should be allowed to escalate privileges on Linux systems.
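As a hedged sketch (the `YOUR_PROJECT` value is a placeholder, not part of this tutorial's deployment), OS Login can be enabled for every VM in a project by setting a single metadata key:

```shell
# Enable OS Login project-wide by setting the enable-oslogin metadata key.
# Requires an authenticated gcloud session; YOUR_PROJECT is a placeholder.
PROJECT_ID="YOUR_PROJECT"

gcloud compute project-info add-metadata \
    --project "${PROJECT_ID}" \
    --metadata enable-oslogin=TRUE
```

With this metadata set, IAM roles such as roles/compute.osLogin (login only) and roles/compute.osAdminLogin (sudo-capable) determine who may log in and escalate privileges on the instances.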

In this tutorial, we assume you are filling the system administrator and compute engine administrator roles. We will configure IAM policies to give you sufficient permissions to accomplish these tasks.

To give yourself the necessary IAM roles to complete this tutorial, in the Google Cloud Console:

  1. Navigate to IAM & Admin > IAM in the Products and Services menu.
  2. Click "+Add" near the top of the page.
  3. Type in your Google Workspace account, Cloud Identity account, or Gmail account under "New members".
  4. Add the following roles: Compute Admin, Compute OS Login, and Service Account User.
  5. Click Save.

Your login now has the permissions required to initiate the creation of the HPC cluster.
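If you prefer the command line, the same grants can be sketched with gcloud (assumptions: `YOUR_PROJECT` and `EMAIL_ADDRESS` are placeholders for your project ID and account, as in the verification command below):

```shell
# Grant the three roles used in this tutorial to a single user account.
# YOUR_PROJECT and EMAIL_ADDRESS are placeholders; requires authenticated gcloud.
PROJECT_ID="YOUR_PROJECT"
MEMBER="user:EMAIL_ADDRESS"

# Bind each role to the member, one policy binding per role.
for ROLE in roles/compute.admin roles/compute.osLogin roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
      --member="${MEMBER}" --role="${ROLE}"
done
```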

To verify you have assigned the correct roles, open your Cloud Shell, and run the following command, replacing YOUR_PROJECT and EMAIL_ADDRESS with your project and email address.

$ gcloud projects get-iam-policy YOUR_PROJECT --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members=user:EMAIL_ADDRESS"

This command will yield the output:

ROLE
roles/compute.osLogin
roles/iam.serviceAccountUser
roles/compute.admin

In this section, you will deploy a single compute-optimized GCE instance that will be used to run FEOTS. This instance will have 16 Intel Cascade Lake vCPUs.

  1. Open your Cloud Shell on GCP.
  2. Clone the FluidNumerics/rcc-apps repository:
cd ~
git clone https://github.com/FluidNumerics/rcc-apps.git
  3. Change to the feots terraform directory:
cd ~/rcc-apps/feots/tf/gce_cluster
  4. Create the plan with the make command, which will generate a `fluid.auto.tfvars` file for you and run `terraform init && terraform plan`.
make plan
  5. Deploy the cluster. The setup process only takes a few minutes since FEOTS and its dependencies come pre-installed on your cluster.
make apply
  6. SSH to the compute node created in the previous step (probably named feots-0). You can do this by clicking the SSH button next to the instance under Compute Engine > VM instances in the console.

Optionally, this pair of gcloud commands will determine the login node name and SSH into it:

export CLUSTER_LOGIN_NODE=$(gcloud compute instances list --zones us-west1-b --filter="name ~ feots.*" --format="value(name)" | head -n1)
gcloud compute ssh ${CLUSTER_LOGIN_NODE} --zone us-west1-b
  7. Once you are connected to the login node, verify your cluster setup by checking that FEOTS is installed:
$ feots --help
 FEOTS (feots) Command Line Interface
  Copyright Los Alamos National Laboratory (2017-2020)
  Licensed for use under 3-Clause BSD License
  
  For support related issues, https://github.com/lanl/feots/issues/new
  
  A program for performing creating impulse functions, diagnosing transport
  operators from POP IRFs, and conducting offline tracer simulations using 
  diagnosed transport operators.
  
   feots [tool] [options]
  
  [tool] can be :
  
    impulse
      Use a POP-Mesh, with land-mask, and a chosen advection-difussion stencil
      to create impulse fields for capturing impulse response functions.
  
    popmesh
      Extract POP-Mesh information from POP standard output.
  
    genmask
      Create a regional FEOTS mask using lat-lon domain bounds
  
    operator-diagnosis
      Diagnose transport operators using impulse fields and POP IRF output.
      You must specify the IRF file using the --irf option.
  
    region-extraction
      Create regional transport operators from global transport operators. Regional
      operators are stored in the --regional-db directory.
  
    genmaps
      Create a mappings.regional file from a valid mask file. The mappings.regional
      file is stored in the --out directory.
  
    initialize
      Use the built in initialization routines to create tracer initial conditions
  
    integrate
      Run the offline tracer simulation in a forward integration mode
  
    equilibrate
      Run the offline tracer simulation using JFNK to find the equilibrated tracer field
  
   [options] can be :
  
    --help
      Display this help message
  
     --param-file /path/to/param/file
        Specifies the full path to a file with namelist settings for
        the feots application. If not provided, runtime.params in  
        your current directory is assumed.                          
  
     --pop-file /path/to/irf-file
        Specifies the full path to a netcdf file with standard POP output
        (For popmesh)
  
     --irf /path/to/irf-file
        Specifies the full path to a netcdf file with IRFs
        (For operator diagnosis and regional extraction)
  
     --oplevel 0
        Specifies the index of the operator in the operator sequence
        This option determines the time level encoded to _advect.{oplevel}.data/conn
  
     --dbroot /path/to/feot/db
        Specifies the path to a FEOTS database
  
     --out /path/to/output/directory
        Specifies the path to write model output. Defaults to ./
  
     --no-vertical-mixing
        Disables the vertical mixing operator for forward integration and equilibration
  
     --regional-db /path/to/regional-database/directory
        Specifies the path to read/write regional operators. Defaults to ./
  8. Verify that /opt/feots/examples/zapiola has the contents listed below.
$ ls /opt/feots/examples/zapiola
bash  demo.sh  FEOTSInitialize.f90  FEOTSInitialize.o  genmask  GenMask.f90  GenMask.o  init  irfs  makefile  README.md  runtime.params  slurm

To run the Argentine Basin demo, you will execute a provided script that runs the simulation for you.

The input decks for this example are included in the FEOTS VM image under /opt/feots-db. This includes a global mesh file and the regional extracted transport and vertical mixing operators for the Argentine Basin simulation.
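As a quick sanity check (a sketch; the exact file names may vary between image versions), you can list the database directory before launching the demo:

```shell
# List the FEOTS database shipped with the VM image; the global mesh file and
# the regional Argentine Basin operators should live under this path.
DB_ROOT="/opt/feots-db"
ls "${DB_ROOT}"
```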

For this section, you must be connected via SSH to the feots-0 node created in the previous section.

  1. Run the Argentine Basin (Zapiola Rise) demo using the provided script.
$ bash /opt/feots/examples/zapiola/demo.sh

Wait for the job to complete.

  2. When the job completes, you will see a directory called feots/ that contains the simulation output. The directory contains NetCDF output for each dye tracer at every 200 iterations (2.5 simulation days):
$ ls feots/
genmask                     Tracer.00000.0000001000.nc  Tracer.00001.0000001400.nc  Tracer.00002.init.nc        Tracer.00004.0000000400.nc  Tracer.00005.0000000800.nc
gmon.out                    Tracer.00000.0000001200.nc  Tracer.00001.0000001600.nc  Tracer.00003.0000000200.nc  Tracer.00004.0000000600.nc  Tracer.00005.0000001000.nc
init                        Tracer.00000.0000001400.nc  Tracer.00001.init.nc        Tracer.00003.0000000400.nc  Tracer.00004.0000000800.nc  Tracer.00005.0000001200.nc
mappings.regional           Tracer.00000.0000001600.nc  Tracer.00002.0000000200.nc  Tracer.00003.0000000600.nc  Tracer.00004.0000001000.nc  Tracer.00005.0000001400.nc
mask.nc                     Tracer.00000.init.nc        Tracer.00002.0000000400.nc  Tracer.00003.0000000800.nc  Tracer.00004.0000001200.nc  Tracer.00005.0000001600.nc
mesh.nc                     Tracer.00001.0000000200.nc  Tracer.00002.0000000600.nc  Tracer.00003.0000001000.nc  Tracer.00004.0000001400.nc  Tracer.00005.init.nc
runtime.params              Tracer.00001.0000000400.nc  Tracer.00002.0000000800.nc  Tracer.00003.0000001200.nc  Tracer.00004.0000001600.nc
Tracer.00000.0000000200.nc  Tracer.00001.0000000600.nc  Tracer.00002.0000001000.nc  Tracer.00003.0000001400.nc  Tracer.00004.init.nc
Tracer.00000.0000000400.nc  Tracer.00001.0000000800.nc  Tracer.00002.0000001200.nc  Tracer.00003.0000001600.nc  Tracer.00005.0000000200.nc
Tracer.00000.0000000600.nc  Tracer.00001.0000001000.nc  Tracer.00002.0000001400.nc  Tracer.00003.init.nc        Tracer.00005.0000000400.nc
Tracer.00000.0000000800.nc  Tracer.00001.0000001200.nc  Tracer.00002.0000001600.nc  Tracer.00004.0000000200.nc  Tracer.00005.0000000600.nc
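To peek inside one of the tracer files, the standard NetCDF utilities can be used (a sketch; assumes `ncdump` is available on the instance, with the file name taken from the listing above):

```shell
# Print only the header (dimensions, variables, attributes) of one tracer
# file, without dumping the data arrays themselves.
TRACER_FILE="feots/Tracer.00000.init.nc"
ncdump -h "${TRACER_FILE}"
```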

In this codelab, you created a compute-optimized GCE instance on Google Cloud and ran an offline tracer simulation using FEOTS and ocean transport operators generated from a state-of-the-art climate simulation.

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this codelab:

Delete the project

The easiest way to eliminate billing is to delete the project you created for the codelab.

Caution: Deleting a project has the following effects:

If you plan to explore multiple codelabs and quickstarts, reusing projects can help you avoid exceeding project quota limits.

  1. In the Cloud Console, go to the Manage resources page.
    Go to the Manage resources page
  2. In the project list, select the project that you want to delete and then click Delete.
  3. In the dialog, type the project ID and then click Shut down to delete the project.

Delete the individual resources

  1. Open your Cloud Shell and navigate to the feots/tf/gce_cluster example directory:
cd ~/rcc-apps/feots/tf/gce_cluster
  2. Run make destroy to delete all of the resources:
make destroy
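To confirm the teardown worked, you can list any remaining instances (a sketch reusing the zone from the SSH step earlier in this codelab); after `make destroy` completes, this should return nothing:

```shell
# List any instances still matching the feots name prefix; an empty result
# means the deployment was fully destroyed. Requires authenticated gcloud.
ZONE="us-west1-b"
gcloud compute instances list --zones "${ZONE}" --filter="name ~ feots.*"
```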