Last updated: 2019-07-31

workflowr checks:
  • R Markdown file: up-to-date

    Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

  • Environment: empty

    Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

  • Seed: set.seed(20190513)

    The command set.seed(20190513) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

  • Session information: recorded

    Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

  • Repository version: 498909b

    Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

    Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
    
    Ignored files:
        Ignored:    .Rhistory
        Ignored:    .Rproj.user/
        Ignored:    data/ALL_anom.Rda
        Ignored:    data/ERA5_lhf.Rda
        Ignored:    data/ERA5_lwr.Rda
        Ignored:    data/ERA5_qnet.Rda
        Ignored:    data/ERA5_qnet_anom.Rda
        Ignored:    data/ERA5_qnet_clim.Rda
        Ignored:    data/ERA5_shf.Rda
        Ignored:    data/ERA5_swr.Rda
        Ignored:    data/ERA5_t2m.Rda
        Ignored:    data/ERA5_t2m_anom.Rda
        Ignored:    data/ERA5_t2m_clim.Rda
        Ignored:    data/ERA5_u.Rda
        Ignored:    data/ERA5_u_anom.Rda
        Ignored:    data/ERA5_u_clim.Rda
        Ignored:    data/ERA5_v.Rda
        Ignored:    data/ERA5_v_anom.Rda
        Ignored:    data/ERA5_v_clim.Rda
        Ignored:    data/GLORYS_mld.Rda
        Ignored:    data/GLORYS_mld_anom.Rda
        Ignored:    data/GLORYS_mld_clim.Rda
        Ignored:    data/GLORYS_u.Rda
        Ignored:    data/GLORYS_u_anom.Rda
        Ignored:    data/GLORYS_u_clim.Rda
        Ignored:    data/GLORYS_v.Rda
        Ignored:    data/GLORYS_v_anom.Rda
        Ignored:    data/GLORYS_v_clim.Rda
        Ignored:    data/NAPA_clim_U.Rda
        Ignored:    data/NAPA_clim_V.Rda
        Ignored:    data/NAPA_clim_W.Rda
        Ignored:    data/NAPA_clim_emp_ice.Rda
        Ignored:    data/NAPA_clim_emp_oce.Rda
        Ignored:    data/NAPA_clim_fmmflx.Rda
        Ignored:    data/NAPA_clim_mldkz5.Rda
        Ignored:    data/NAPA_clim_mldr10_1.Rda
        Ignored:    data/NAPA_clim_qemp_oce.Rda
        Ignored:    data/NAPA_clim_qla_oce.Rda
        Ignored:    data/NAPA_clim_qns.Rda
        Ignored:    data/NAPA_clim_qsb_oce.Rda
        Ignored:    data/NAPA_clim_qt.Rda
        Ignored:    data/NAPA_clim_runoffs.Rda
        Ignored:    data/NAPA_clim_ssh.Rda
        Ignored:    data/NAPA_clim_sss.Rda
        Ignored:    data/NAPA_clim_sst.Rda
        Ignored:    data/NAPA_clim_taum.Rda
        Ignored:    data/NAPA_clim_vars.Rda
        Ignored:    data/NAPA_clim_vecs.Rda
        Ignored:    data/OAFlux.Rda
        Ignored:    data/OISST_sst.Rda
        Ignored:    data/OISST_sst_anom.Rda
        Ignored:    data/OISST_sst_clim.Rda
        Ignored:    data/node_mean_all_anom.Rda
        Ignored:    data/packet_all.Rda
        Ignored:    data/packet_all_anom.Rda
        Ignored:    data/packet_nolab.Rda
        Ignored:    data/packet_nolabgsl.Rda
        Ignored:    data/som_all_anom.Rda
        Ignored:    data/som_nolab.Rda
        Ignored:    data/som_nolabgsl.Rda
        Ignored:    data/synoptic_states.Rda
        Ignored:    data/synoptic_vec_states.Rda
    
    Untracked files:
        Untracked:  data/node_mean_nolab.Rda
        Untracked:  data/node_mean_nolabgsl.Rda
        Untracked:  output/som_plot_mld_anom.pdf
        Untracked:  output/som_plot_qnet_anom.pdf
        Untracked:  output/som_plot_sst_anom_nogsl.pdf
        Untracked:  output/som_plot_t2m_anom.pdf
        Untracked:  output/som_plot_u10_anom.pdf
        Untracked:  output/som_plot_u_anom.pdf
        Untracked:  output/som_plot_v10_anom.pdf
        Untracked:  output/som_plot_v_anom.pdf
    
    Unstaged changes:
        Modified:   code/functions.R
        Modified:   code/workflow.R
        Modified:   data/.gitignore
        Modified:   output/som_plot_sst_anom.pdf
        Deleted:    output/som_plot_uoce_anom.pdf
    
    
    Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
Past versions:
    File Version Author Date Message
    Rmd 498909b robwschlegel 2019-07-31 Re-publish entire site.
    Rmd 256a7f1 robwschlegel 2019-07-31 Completed re-run on ERA 5 data. Working on combining anomalies.
    Rmd 3b805e4 robwschlegel 2019-07-30 Completed anomaly runs on GLORYS variables
    Rmd bd987b4 robwschlegel 2019-07-29 Completed run on anomalies with the exception of the three in the GLORYS product
    Rmd 51ed681 robwschlegel 2019-07-25 Completed anoms for OISST
    Rmd 0b6f065 robwschlegel 2019-07-25 Push before beginning to write code for loading entire obs/reanalysis products into memory for clim calculations
    Rmd 04a6bf3 robwschlegel 2019-07-24 Creating new download script for obs/reanalysis products
    html 81e961d robwschlegel 2019-07-09 Build site.
    Rmd 7ff9b8b robwschlegel 2019-06-17 More work on the talk
    html c23c50b robwschlegel 2019-06-10 Build site.
    html 028d3cc robwschlegel 2019-06-10 Build site.
    html c61a15f robwschlegel 2019-06-06 Build site.
    Rmd 49c6105 robwschlegel 2019-06-06 Finished vector data packet pipeline
    Rmd 44ac335 robwschlegel 2019-06-06 Working on inclusion of vectors into SOM pipeline
    html 6dd6da8 robwschlegel 2019-06-06 Build site.
    Rmd 990693a robwschlegel 2019-06-05 First SOM result visuals
    Rmd 4838cc8 robwschlegel 2019-06-04 Working on SOM functions
    Rmd 94ce8f6 robwschlegel 2019-06-04 Functions for creating data packets are up and running
    Rmd 65301ed robwschlegel 2019-05-30 Push before getting rid of some testing structure
    Rmd 0717c84 robwschlegel 2019-05-29 Working towards getting the variable climatologies in order
    Rmd 2c3f68c robwschlegel 2019-05-28 Working on the variable prep vignette
    html c09b4f7 robwschlegel 2019-05-24 Build site.
    Rmd 5dc8bd9 robwschlegel 2019-05-24 Finished initial creation of SST prep vignette.

Introduction

This vignette will walk through the steps needed to create mean ‘whatever’ states during all of the MHWs detected in the previous SST preparation vignette. These ‘whatever’ states are any of the abiotic variables that appear in the ocean heat budget equation and that have been deemed relevant to the forcing of extreme ocean surface temperatures.

After first going through this entire process with the NAPA model output it became clear that this analysis should be run first with obs/reanalysis data, and the NAPA model results may be compared against this as desired. To this end we will now use the following variables from the following data products:

  • NOAA OISST: SST
  • GLORYS: Mixed layer depth, U & V current vectors (~0.5m)
  • ERA 5: U & V wind vectors (10m), air temperature (2m)
    • Net heat flux term: Latent & sensible heat flux, shortwave and longwave radiation

All of these variables have a daily resolution covering at least 1993 – 2018. The spatial resolution is a standard 1/4 degree for everything except the hi-res (1/12 degree) GLORYS data from 2016 – 2018. The OISST, ERA 5, and GLORYS (1993 – 2015) data are natively at 1/4 degree, though the coordinates are slightly offset between OISST and ERA 5 + GLORYS. The ERA 5 data are also hourly, so they will need to be binned into mean daily values. To balance efficiency and accuracy we will convert everything to the 1/4 degree grid of the GLORYS/ERA 5 data.
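The hourly-to-daily binning described above can be sketched as follows. This is a minimal illustration with a made-up two-day hourly series for a single pixel (the data frame and its values are hypothetical; the actual binning is done in code/workflow.R):

```r
library(dplyr)

# Hypothetical hourly ERA 5 series for one pixel
ERA5_hourly <- data.frame(
  lon = -60.0, lat = 45.0,
  time = seq(as.POSIXct("1993-01-01 00:00", tz = "UTC"),
             by = "hour", length.out = 48),
  t2m = 280 + sin(seq_len(48) / 4)
)

# Bin the hourly values into mean daily values
ERA5_daily <- ERA5_hourly %>%
  mutate(t = as.Date(time)) %>%
  group_by(lon, lat, t) %>%
  summarise(t2m = mean(t2m, na.rm = TRUE)) %>%
  ungroup()
```

The 48 hourly values collapse into two daily means, one per calendar day.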

In order to compare the OISST dataset to the other 1/4 degree products, 0.125 will be added to the OISST latitude values and 0.125 subtracted from the longitude values. This is because the OISST coordinates give the centre of each pixel, whereas the other products give the top-left (north-west) corner.
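The shift itself is a one-line mutate. As a minimal sketch (the toy OISST_grid here is hypothetical; the real grid is loaded from data/OISST_grid.Rda below):

```r
library(dplyr)

# Toy OISST grid (pixel-centre coordinates)
OISST_grid <- data.frame(lon = c(-60.125, -59.875),
                         lat = c(45.125, 45.375))

# Shift pixel centres to the north-west corner convention
# used by the ERA 5 + GLORYS grids
OISST_grid_shift <- OISST_grid %>%
  mutate(lat = lat + 0.125,
         lon = lon - 0.125)
```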

# Packages used in this vignette
library(jsonlite, lib.loc = "../R-packages/")
library(tidyverse) # Base suite of functions
library(tidync, lib.loc = "../R-packages/")
library(lubridate) # For convenient date manipulation

# Set number of cores
doMC::registerDoMC(cores = 50)

# Disable scientific notation for numeric values
  # I just find it annoying
options(scipen = 999)

# Corners of the study area
NWA_corners <- readRDS("data/NWA_corners.Rda")

# The region pixel lon/lat values
NWA_info <- readRDS("data/NWA_info.Rda")

# Load the OISST grid
OISST_grid <- readRDS("data/OISST_grid.Rda")

# Load the OISST grid trimmed to the region coords
OISST_grid_region <- readRDS("data/OISST_grid_region.Rda")

# Load MHW results
OISST_region_MHW <- readRDS("data/OISST_region_MHW.Rda")

# MHW events
# The nested event output contains two data frames per region
# (climatology and event); keep every second element to retain
# only the event data, then unnest again for the event metrics
OISST_MHW_event <- OISST_region_MHW %>%
  select(-cats) %>%
  unnest(events) %>%
  filter(row_number() %% 2 == 0) %>%
  unnest(events)

For the upcoming variable prep we are also going to want the OISST coordinates that are within our chosen study area as seen with NWA_corners.

# The OISST coordinates for the study area only
OISST_grid_study <- OISST_grid %>% 
  filter(lon >= NWA_corners[1], lon <= NWA_corners[2],
           lat >= NWA_corners[3], lat <= NWA_corners[4])
# saveRDS(OISST_grid_study, "data/OISST_grid_study.Rda")

Base datasets

Rather than go about performing all of the calculations below on piecemeal bits of data, it should be possible to load the entire datasets for each variable into memory at once. So in this section we will create functions that load each of the variables for the different data products. We will then create and save these complete datasets for ease of use later on.

# Load the functions used below
  # I've chosen to house a lot of code here to keep this vignette tidier
source("code/functions.R")

## SST
# The files with data in the study area
OISST_sst_files <- data.frame(files = dir("../../data/OISST", full.names = T),
                              lon = c(seq(0.125, 179.875, by = 0.25), seq(-179.875, -0.125, by = 0.25))) %>% 
  filter(lon >= NWA_corners[1]+0.125, lon <= NWA_corners[2]+0.125) %>% 
  mutate(files = as.character(files))
OISST_sst_files <- as.vector(OISST_sst_files$files)
# Aggregate data
system.time(
OISST_sst <- load_all_OISST(OISST_sst_files)
) # xxx seconds
# save
saveRDS(OISST_sst, "data/OISST_sst.Rda")

# The code used to create the rest of the base files may be seen in code/workflow.R

Net heat flux

This is the one variable I was not able to source. The OAFlux product does have a net heat flux layer, but it ends in 2009, which is not long enough. So we are going to create our own net heat flux layer by adding together the latent & sensible heat flux layers and the short & long wave radiation layers, all from the ERA 5 product. When doing so we must ensure that the directions of the terms are matched correctly (i.e. that they all represent positive downward flux).
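As a minimal sketch of the summation (the data frame and its flux values are hypothetical, and the four components are assumed to have already been signed as positive-downward; the real conversion is in code/workflow.R):

```r
library(dplyr)

# Hypothetical joined data frame of the four ERA 5 flux components,
# all in W/m^2 and all signed positive-downward
flux_components <- data.frame(
  lon = -60, lat = 45, t = as.Date("1993-01-01"),
  lhf = -100, shf = -20, swr = 180, lwr = -50
)

# Net heat flux = latent + sensible + shortwave + longwave
ERA5_qnet <- flux_components %>%
  mutate(qnet = lhf + shf + swr + lwr) %>%
  select(lon, lat, t, qnet)
```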

# The code used to perform these steps may be seen in code/workflow.R

Climatologies

In the data packets we need to create for the SOMs (see below) we will need to include anomaly values. To do this we must first create daily climatology values for each variable at each pixel. This means loading all of the files and then calculating the seasonal (daily) climatologies pixel-wise, using the same function (ts2clm()) that is used for the MHW climatologies. One-by-one we will load an entire dataset (created above) into memory so that we can perform the necessary calculations. Hold on to your hats, this is going to be RAM heavy…

# Load functions required below
source("code/functions.R")

## SST
# Load
OISST_sst <- readRDS("data/OISST_sst.Rda")
# Calculate clims
system.time(
OISST_sst_clim <- OISST_sst %>% 
  group_by(lon, lat) %>% 
  nest() %>% 
  mutate(clims = map(data, ts2clm,
                     climatologyPeriod = c("1993-01-01", "2018-12-31"),
                     clmOnly = T, roundClm = FALSE)) %>% 
  select(-data) %>% 
  unnest()
) # 1139 seconds
OISST_sst_clim$thresh <- NULL
# Save
saveRDS(OISST_sst_clim, "data/OISST_sst_clim.Rda")

# The code used to create the other climatologies may be found in code/workflow.R

Anomalies

The last step before we can begin creating our data packets is to subtract our climatology data from our base data for each of the variables used in this study. We will then save these large anomaly cubes to allow for easy synoptic state creation.

## SST
# Load
OISST_sst <- readRDS("data/OISST_sst.Rda") %>% 
  mutate(doy = format(t, "%m-%d"))
OISST_sst_clim <- readRDS("data/OISST_sst_clim.Rda") %>% 
  left_join(doy_index, by = c("doy" = "doy_int")) %>% 
  select(-doy) %>% 
  dplyr::rename(doy = doy.y)
# Join and calculate anomalies
system.time(
OISST_sst_anom <- left_join(OISST_sst, OISST_sst_clim, by = c("lon", "lat", "doy")) %>% 
  mutate(sst_anom = round(temp-seas, 2)) %>% 
  select(lon, lat, t, sst_anom)
) # 43 seconds
# Save
saveRDS(OISST_sst_anom, "data/OISST_sst_anom.Rda")

# The code used to create the rest of the anomalies may be found in code/workflow.R

Synoptic states

The next step is to create mean synoptic states for each variable during each of the MHWs detected in each region. To do this we simply load the combined anomaly dataset of all of our chosen variables and slice off only the bits we need during each of the observed MHWs. These slices are then averaged, pixel-wise, to create the synoptic states during each event.

One thing to note here is that during this process, before the slices are created, each pixel of the mixed layer depth anomaly is divided by the maximum absolute value observed at that pixel, scaling the values to a range of -1 to 1. This is done because the range of MLD can vary dramatically over the Labrador Sea and we want to prevent this from unduly influencing the results.

# Load anomalies and scale MLD
system.time(
  if(!exists("ALL_anom")) ALL_anom <- readRDS("data/ALL_anom.Rda") %>%
    group_by(lon, lat) %>%
    mutate(mld_anom = mld_anom/max(abs(mld_anom), na.rm = T)) %>%
    ungroup()
) # 99 seconds

# Function for creating packets
data_packet <- function(event_sub){

  # Filter the anomaly data to the event's date range
  packet_base <- ALL_anom %>%
    filter(t >= event_sub$date_start, t <= event_sub$date_end)

  # Create mean synoptic values
  packet_mean <- packet_base %>%
    # select(-doy) %>%
    group_by(lon, lat) %>%
    summarise_all(mean, na.rm = T) %>%
    arrange(lon, lat) %>%
    ungroup()
  # Replace any NaN produced by all-NA pixels with NA
  packet_mean[is.na(packet_mean)] <- NA
  packet_mean <- nest(packet_mean, .key = "synoptic")

  # Combine results with MHW dataframe
  packet_res <- cbind(event_sub, packet_mean)

  # Exit
  return(packet_res)
}

# Set number of cores
doMC::registerDoMC(cores = 50)

# Create one big packet
system.time(
synoptic_states <- plyr::ddply(OISST_MHW_event, c("region", "event_no"), data_packet, .parallel = T)
) # 3 seconds for first event, 50 for all events

# Save
saveRDS(synoptic_states, "data/synoptic_states.Rda")

With all of our mean synoptic states created it is now time to feed them to the Self-organising map (SOM) analysis.
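As a rough illustration of that next step, a SOM could be trained on the flattened synoptic states with the kohonen package. Everything here is illustrative, not the project's actual SOM code: the matrix dimensions and grid size are made up, and the real analysis would unnest the synoptic columns from synoptic_states first.

```r
library(kohonen)

# Hypothetical input: one row per MHW event,
# one column per pixel-variable anomaly
synoptic_matrix <- matrix(rnorm(100 * 20), nrow = 100)

# Seed for reproducibility, as elsewhere in this project
set.seed(20190513)

# Train a small hexagonal SOM
som_grid <- somgrid(xdim = 4, ydim = 3, topo = "hexagonal")
som_model <- som(synoptic_matrix, grid = som_grid, rlen = 100)

# Node assignment for each event
head(som_model$unit.classif)
```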

Session information

sessionInfo()
R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.5 LTS

Matrix products: default
BLAS:   /usr/lib/openblas-base/libblas.so.3
LAPACK: /usr/lib/libopenblasp-r0.2.18.so

locale:
 [1] LC_CTYPE=en_CA.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_CA.UTF-8        LC_COLLATE=en_CA.UTF-8    
 [5] LC_MONETARY=en_CA.UTF-8    LC_MESSAGES=en_CA.UTF-8   
 [7] LC_PAPER=en_CA.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_CA.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] bindrcpp_0.2.2  lubridate_1.7.4 tidync_0.2.1    forcats_0.3.0  
 [5] stringr_1.3.1   dplyr_0.7.6     purrr_0.2.5     readr_1.1.1    
 [9] tidyr_0.8.1     tibble_1.4.2    ggplot2_3.0.0   tidyverse_1.2.1
[13] jsonlite_1.6   

loaded via a namespace (and not attached):
 [1] tidyselect_0.2.4  haven_1.1.2       lattice_0.20-35  
 [4] colorspace_1.3-2  htmltools_0.3.6   yaml_2.2.0       
 [7] rlang_0.2.2       R.oo_1.22.0       pillar_1.3.0     
[10] glue_1.3.0        withr_2.1.2       R.utils_2.7.0    
[13] RNetCDF_1.9-1     doMC_1.3.5        modelr_0.1.2     
[16] readxl_1.1.0      foreach_1.4.4     bindr_0.1.1      
[19] plyr_1.8.4        munsell_0.5.0     gtable_0.2.0     
[22] workflowr_1.1.1   cellranger_1.1.0  rvest_0.3.2      
[25] R.methodsS3_1.7.1 codetools_0.2-15  evaluate_0.11    
[28] knitr_1.20        parallel_3.6.1    broom_0.5.0      
[31] Rcpp_0.12.18      backports_1.1.2   scales_1.0.0     
[34] hms_0.4.2         digest_0.6.16     stringi_1.2.4    
[37] ncdf4_1.16.1      grid_3.6.1        rprojroot_1.3-2  
[40] cli_1.0.0         tools_3.6.1       magrittr_1.5     
[43] lazyeval_0.2.1    crayon_1.3.4      whisker_0.3-2    
[46] pkgconfig_2.0.2   xml2_1.2.0        iterators_1.0.10 
[49] assertthat_0.2.0  rmarkdown_1.10    httr_1.3.1       
[52] rstudioapi_0.7    R6_2.2.2          ncmeta_0.0.4     
[55] nlme_3.1-137      git2r_0.23.0      compiler_3.6.1   

This reproducible R Markdown analysis was created with workflowr 1.1.1