🌍 Analyzing Sea Level Rise Using Earth Data in the Cloud¶
This notebook is entirely based on Jinbo Wang's tutorial¶
We are going to reproduce the plot from this infographic. Source: NASA-led Study Reveals the Causes of Sea Level Rise Since 1900
import earthaccess
print(f"using earthaccess v{earthaccess.__version__}")
auth = earthaccess.login()
# are we authenticated?
if not auth.authenticated:
    # ask for credentials and persist them in a .netrc file
    auth.login(strategy="interactive", persist=True)
using earthaccess v0.11.0
Why earthaccess?¶
earthaccess is a Python library that simplifies discovery of and access to NASA Earth science data. It provides an abstraction layer over NASA’s Common Metadata Repository (CMR) Search API, so that searching for data can be done with a simpler notation instead of low-level HTTP queries.
Authentication in the cloud¶
If a collection is cloud-hosted, we can print its summary()
and get the S3 credentials endpoint. These S3 credentials are temporary, and we can use them with third-party libraries such as s3fs, boto3, or the awscli.
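As a minimal sketch (assuming we are running in AWS us-west-2 and that our earthaccess version exposes auth.get_s3_credentials(); the key names follow the Earthdata S3 credentials response), those temporary credentials could be passed to s3fs like this:
import s3fs

# request temporary S3 credentials for the PO.DAAC endpoint (they expire after ~1 hour)
creds = auth.get_s3_credentials(daac="PODAAC")

# hand them to s3fs; direct S3 access only works from within AWS us-west-2
fs = s3fs.S3FileSystem(
    key=creds["accessKeyId"],
    secret=creds["secretAccessKey"],
    token=creds["sessionToken"],
)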
from pprint import pprint
# We'll get 4 collections that match our keyword
collections = earthaccess.search_datasets(
    keyword="SEA SURFACE HEIGHT",
    cloud_hosted=True,
    count=4,
)

# Let's print 2 collections
for collection in collections[0:2]:
    # pprint(collection.summary())
    print(
        pprint(collection.summary()),
        collection.abstract(),
        "\n",
        collection["umm"]["DOI"],
        "\n\n",
    )
{'cloud-info': {'Region': 'us-west-2', 'S3BucketAndObjectPrefixNames': ['podaac-swot-ops-cumulus-protected/SWOT_L2_LR_SSH_2.0/', 'podaac-swot-ops-cumulus-public/SWOT_L2_LR_SSH_2.0/'], 'S3CredentialsAPIDocumentationURL': 'https://archive.swot.podaac.earthdata.nasa.gov/s3credentialsREADME', 'S3CredentialsAPIEndpoint': 'https://archive.swot.podaac.earthdata.nasa.gov/s3credentials'}, 'concept-id': 'C2799438306-POCLOUD', 'file-type': "[{'FormatType': 'Native', 'Format': 'netCDF-4'}]", 'get-data': ['https://cmr.earthdata.nasa.gov/virtual-directory/collections/C2799438306-POCLOUD', 'https://search.earthdata.nasa.gov/search/granules?p=C2799438306-POCLOUD', 'https://search.earthdata.nasa.gov/search/granules?p=C2799438306-POCLOUD&pg[0][id]=*PIC0*', 'https://search.earthdata.nasa.gov/search/granules?p=C2799438306-POCLOUD&pg[0][id]=*PGC0*'], 'short-name': 'SWOT_L2_LR_SSH_2.0', 'version': '2.0'} None The SWOT Level 2 KaRIn Low Rate Sea Surface Height Data Product from the Surface Water Ocean Topography (SWOT) mission provides global sea surface height and significant wave height observations derived from low rate (LR) measurements from the Ka-band Radar Interferometer (KaRIn). SWOT launched on December 16, 2022 from Vandenberg Air Force Base in California into a 1-day repeat orbit for the "calibration" or "fast-sampling" phase of the mission, which completed in early July 2023. After the calibration phase, SWOT entered a 21-day repeat orbit in August 2023 to start the "science" phase of the mission, which is expected to continue through 2025. <br> The L2 sea surface height data product is distributed in one netCDF-4 file per pass (half-orbit) covering the full KaRIn swath width, which spans 10-60km on each side of the nadir track. Sea surface height, sea surface height anomaly, wind speed, significant waveheight, and related parameters are provided on a geographically fixed, swath-aligned 2x2 km2 grid (Basic, Expert, Windwave). The sea surface height data are also provided on a finer 250x250 m2 "native" grid with minimal smoothing applied (Unsmoothed).<br> Please note that this collection contains SWOT Version C science data products. <br> This dataset is the parent collection to the following sub-collections: <br> https://podaac.jpl.nasa.gov/dataset/SWOT_L2_LR_SSH_Basic_2.0 <br> https://podaac.jpl.nasa.gov/dataset/SWOT_L2_LR_SSH_WindWave_2.0 <br> https://podaac.jpl.nasa.gov/dataset/SWOT_L2_LR_SSH_Expert_2.0 <br> https://podaac.jpl.nasa.gov/dataset/SWOT_L2_LR_SSH_Unsmoothed_2.0 <br> {'DOI': '10.5067/SWOT-SSH-2.0'} {'cloud-info': {'Region': 'us-west-2', 'S3BucketAndObjectPrefixNames': ['podaac-ops-cumulus-public/SEA_SURFACE_HEIGHT_ALT_GRIDS_L4_2SATS_5DAY_6THDEG_V_JPL2205/', 'podaac-ops-cumulus-protected/SEA_SURFACE_HEIGHT_ALT_GRIDS_L4_2SATS_5DAY_6THDEG_V_JPL2205/'], 'S3CredentialsAPIDocumentationURL': 'https://archive.podaac.earthdata.nasa.gov/s3credentialsREADME', 'S3CredentialsAPIEndpoint': 'https://archive.podaac.earthdata.nasa.gov/s3credentials'}, 'concept-id': 'C2270392799-POCLOUD', 'file-type': "[{'Format': 'netCDF-4', 'FormatType': 'Native', " "'AverageFileSize': 9.7, 'AverageFileSizeUnit': 'MB'}]", 'get-data': ['https://cmr.earthdata.nasa.gov/virtual-directory/collections/C2270392799-POCLOUD', 'https://search.earthdata.nasa.gov/search/granules?p=C2270392799-POCLOUD'], 'short-name': 'SEA_SURFACE_HEIGHT_ALT_GRIDS_L4_2SATS_5DAY_6THDEG_V_JPL2205', 'version': '2205'} None This dataset provides gridded Sea Surface Height Anomalies (SSHA) above a mean sea surface, on a 1/6th degree grid every 5 days. 
It contains the fully corrected heights, with a delay of up to 3 months. The gridded data are derived from the along-track SSHA data of TOPEX/Poseidon, Jason-1, Jason-2, Jason-3 and Jason-CS (Sentinel-6) as reference data from the level 2 along-track data found at https://podaac.jpl.nasa.gov/dataset/MERGED_TP_J1_OSTM_OST_CYCLES_V51, plus ERS-1, ERS-2, Envisat, SARAL-AltiKa, CryoSat-2, Sentinel-3A, Sentinel-3B, depending on the date, from the RADS database. The date given in the grid files is the center of the 5-day window. The grids were produced from altimeter data using Kriging interpolation, which gives best linear prediction based upon prior knowledge of covariance. {'DOI': '10.5067/SLREF-CDRV3', 'Authority': 'https://doi.org'}
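Note that the S3 credentials endpoint mentioned above is part of each collection's summary, so it can be read straight out of the dictionary:
# pull the S3 credentials endpoint out of the first collection's summary
cloud_info = collections[0].summary()["cloud-info"]
print(cloud_info["S3CredentialsAPIEndpoint"])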
A year of data¶
Things to keep in mind:
- this particular dataset has data until 2019
- this is a global dataset, each granule represents the whole world
- temporal coverage is 1 granule every 5 days
granules = earthaccess.search_data(
    short_name="SEA_SURFACE_HEIGHT_ALT_GRIDS_L4_2SATS_5DAY_6THDEG_V_JPL2205",
    count=1,
)
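Before deciding how to access the result we can peek at its metadata. A small sketch (the size() helper and the UMM field shown are assumptions based on what recent earthaccess versions and this collection expose):
granule = granules[0]

# the time range this granule covers, straight from the UMM metadata
print(granule["umm"]["TemporalExtent"])

# approximate file size (in MB) as reported by CMR
print(f"~{granule.size()} MB")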
Working with the URLs directly¶
If we choose, we can use earthaccess
to grab the files' URLs and then download them with another library (provided we have a .netrc
file configured with NASA's EDL credentials).
Getting the links to our data is quite simple with the data_links()
method on each of the results:
# the collection is cloud-hosted, but we can access it outside AWS with the regular HTTPS URL
granules[0].data_links(access="out_of_region")
['https://archive.podaac.earthdata.nasa.gov/podaac-ops-cumulus-protected/SEA_SURFACE_HEIGHT_ALT_GRIDS_L4_2SATS_5DAY_6THDEG_V_JPL2205/ssh_grids_v2205_1992101012.nc']
granules[0].data_links(access="direct")
['s3://podaac-ops-cumulus-protected/SEA_SURFACE_HEIGHT_ALT_GRIDS_L4_2SATS_5DAY_6THDEG_V_JPL2205/ssh_grids_v2205_1992101012.nc']
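With a .netrc entry for urs.earthdata.nasa.gov in place, the HTTPS URL above can be fetched with any HTTP client. A rough sketch using requests (earthaccess also ships its own download() helper, which does essentially this for us):
import requests

url = granules[0].data_links(access="out_of_region")[0]

# requests picks up the EDL credentials from ~/.netrc when it gets
# redirected to urs.earthdata.nasa.gov during authentication
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with open(url.split("/")[-1], "wb") as f:
        for chunk in r.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)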
POC: streaming into xarray¶
We can use earthaccess
to stream files directly into xarray. This will work well for cases where:
- Data is in NetCDF/HDF5/Zarr format
  - xarray can read bytes directly for remote datasets only with the h5netcdf and scipy backends; if we deal with a format these two backends don't recognize, xarray will raise an exception (see the sketch after this list).
- Data is not big data (multi-TB)
  - not fully tested with Dask distributed
- Data is gridded
  - xarray works better with homogeneous coordinates; working with swath data will be cumbersome.
- Data is chunked using reasonably large sizes (1 MB or more)
  - If our files are chunked in small pieces, the access time will be orders of magnitude larger than just downloading the data and accessing it locally.
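As a quick illustration of the backend point above, a single remote granule can be opened on its own with the h5netcdf engine forced explicitly (just a sketch; the cells below let earthaccess and xarray handle this for the full set of granules):
import xarray as xr

# open one remote granule and force the HDF5-capable backend
single = earthaccess.open(granules[:1])
ds_one = xr.open_dataset(single[0], engine="h5netcdf")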
Opening a year of SSH (SEA_SURFACE_HEIGHT_ALT_GRIDS_L4_2SATS_5DAY_6THDEG_V_JPL1812) data (approx. 1.1 GB) can take up to 5 minutes when streaming the data out of region (not in AWS). The reason is not that the data transfer itself is an order of magnitude slower, but that the client libraries do not fetch data concurrently and that the metadata of HDF files is usually not consolidated as it is in Zarr, hence h5netcdf has to issue a lot of requests to get the information it needs.
Note: we are looping through each year and getting the metadata for the first granule in May
# storing the resulting granule metadata
granules = []
# we just grab 1 granule from May for each year of the dataset
for year in range(1999, 2019):
print(f"Retrieving data for year: {year}")
results = earthaccess.search_data(
doi="10.5067/SLREF-CDRV3",
temporal=(f"{year}-05", f"{year}-05"),
count=1,
)
granules.append(results[0])
print(f"Total granules: {len(granules)}")
Retrieving data for year: 1999
Retrieving data for year: 2000
Retrieving data for year: 2001
Retrieving data for year: 2002
Retrieving data for year: 2003
Retrieving data for year: 2004
Retrieving data for year: 2005
Retrieving data for year: 2006
Retrieving data for year: 2007
Retrieving data for year: 2008
Retrieving data for year: 2009
Retrieving data for year: 2010
Retrieving data for year: 2011
Retrieving data for year: 2012
Retrieving data for year: 2013
Retrieving data for year: 2014
Retrieving data for year: 2015
Retrieving data for year: 2016
Retrieving data for year: 2017
Retrieving data for year: 2018
Total granules: 20
What does earthaccess.open() do?¶
earthaccess.open()
takes a list of results from earthaccess.search_data()
or a list of URLs and creates a list of Python file-like objects that can be used in our code as if the remote files were local. When executed in AWS the file system used is S3FS; when we open files outside of AWS we get a regular HTTPS session.
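A small sketch of what "file-like" means in practice: the returned objects support the usual read() and seek() calls, so we can, for example, read the first four bytes, which for a netCDF-4/HDF5 file are the HDF5 signature:
files = earthaccess.open(granules[:1])

# the remote file behaves like a local one: read(), seek(), tell() all work
print(files[0].read(4))  # b'\x89HDF' for a netCDF-4/HDF5 file
files[0].seek(0)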
%%time
import xarray as xr
fileset = earthaccess.open(granules)
print(f" Using {type(fileset[0])} filesystem")
ds = xr.open_mfdataset(fileset, chunks={})
ds
Using <class 'earthaccess.store.EarthAccessFile'> filesystem
CPU times: user 2.68 s, sys: 360 ms, total: 3.04 s
Wall time: 1min 6s
<xarray.Dataset> Size: 332MB
Dimensions:      (Time: 20, Longitude: 2160, nv: 2, Latitude: 960)
Coordinates:
  * Longitude    (Longitude) float32 9kB 0.08333 0.25 0.4167 ... 359.8 359.9
  * Latitude     (Latitude) float32 4kB -79.92 -79.75 -79.58 ... 79.75 79.92
  * Time         (Time) datetime64[ns] 160B 1999-05-02T12:00:00 ... 2018-05-0...
Dimensions without coordinates: nv
Data variables:
    Lon_bounds   (Time, Longitude, nv) float32 346kB dask.array<chunksize=(1, 2160, 2), meta=np.ndarray>
    Lat_bounds   (Time, Latitude, nv) float32 154kB dask.array<chunksize=(1, 960, 2), meta=np.ndarray>
    Time_bounds  (Time, nv) datetime64[ns] 320B dask.array<chunksize=(1, 2), meta=np.ndarray>
    SLA          (Time, Latitude, Longitude) float32 166MB dask.array<chunksize=(1, 960, 2160), meta=np.ndarray>
    SLA_ERR      (Time, Latitude, Longitude) float32 166MB dask.array<chunksize=(1, 960, 2160), meta=np.ndarray>
Attributes: (12/21)
    Conventions:            CF-1.6
    ncei_template_version:  NCEI_NetCDF_Grid_Template_v2.0
    Institution:            Jet Propulsion Laboratory
    geospatial_lat_min:     -79.916664
    geospatial_lat_max:     79.916664
    geospatial_lon_min:     0.083333336
    ...                     ...
    version_number:         2205
    Data_Pnts_Each_Sat:     {"16": 764589, "1002": 637661}
    source_version:         commit 7ba4d2404171bab4068e03a666a335c7a44c5f09
    SLA_Global_MEAN:        -0.012298334728518514
    SLA_Global_STD:         0.09172898644796465
    latency:                final
Plotting¶
(
    ds.SLA.where((ds.SLA >= 0) & (ds.SLA < 10))
    .std("Time")
    .plot(figsize=(14, 6), x="Longitude", y="Latitude")
)
/home/docs/checkouts/readthedocs.org/user_builds/earthaccess/envs/latest/lib/python3.10/site-packages/dask/array/numpy_compat.py:57: RuntimeWarning: invalid value encountered in divide
  x = np.divide(x1, x2, out)
<matplotlib.collections.QuadMesh at 0x7f5d2e7d0580>
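To get closer to the sea level rise curve from the infographic, we can also reduce SLA to a single global number per time step. A rough, unweighted sketch (a faithful reproduction would weight each grid cell by the cosine of its latitude; SLA is assumed to be in meters, consistent with the global mean/std attributes above):
import matplotlib.pyplot as plt

# unweighted global mean sea level anomaly for each of the 20 May snapshots
global_mean_sla = ds.SLA.mean(["Latitude", "Longitude"])

global_mean_sla.plot(figsize=(10, 4), marker="o")
plt.ylabel("Mean SLA (m)")
plt.title("Global mean sea level anomaly, one granule per May (1999-2018)")
plt.show()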