Python Library for NASA Earthdata APIs
This release fixes #212 and implements more testing for the Auth and S3Credentials endpoints. Eventually all DAACs are going to support bearer tokens, but only ASF does at the moment.
* S3Credentials
* python_magic removed from core dependencies (will fix Windows for conda)

```python
auth = earthaccess.login()
profile = auth.user_profile
email = profile["email_address"]
```
This release fixes some bugs and brings new capabilities to the top-level API.
```python
import earthaccess

auth = earthaccess.login()
```

earthaccess.login() will automatically try all strategies; there is no need to specify one. If our credentials are not found, it will ask the user to provide them interactively.
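If we do want to pick a strategy explicitly, here is a minimal sketch; it assumes the strategy names used elsewhere in these notes ("netrc" and "interactive") and that persist is accepted at the top level just like in Auth.login further down:

```python
import earthaccess

# try the ~/.netrc file explicitly
auth = earthaccess.login(strategy="netrc")

# if that did not work, ask for credentials interactively and persist them
# (persist=True mirrors the Auth.login example later in these notes)
if not auth.authenticated:
    auth = earthaccess.login(strategy="interactive", persist=True)
```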
```python
s3_credentials = earthaccess.get_s3_credentials(daac="PODAAC")
# use them with your fav library, e.g. boto3
```
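For instance, a sketch of plugging those short-term credentials into boto3; the dictionary keys ("accessKeyId", "secretAccessKey", "sessionToken") and the bucket name are assumptions based on the DAAC S3 credentials endpoints, so check the actual response from your DAAC:

```python
import boto3

# the in-region buckets live in us-west-2
s3 = boto3.client(
    "s3",
    aws_access_key_id=s3_credentials["accessKeyId"],
    aws_secret_access_key=s3_credentials["secretAccessKey"],
    aws_session_token=s3_credentials["sessionToken"],
    region_name="us-west-2",
)

# list a few objects from an illustrative PO.DAAC bucket
response = s3.list_objects_v2(Bucket="podaac-ops-cumulus-protected", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])
```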
```python
# another thing we can do with our auth instance is to refresh our EDL tokens
auth.refresh_tokens()
```
We can also get authenticated fsspec sessions:
```python
url = "https://data.lpdaac.earthdatacloud.nasa.gov/lp-prod-protected/EMITL2ARFL.001/EMIT_L2A_RFL_001_20220903T163129_2224611_012/EMIT_L2A_RFL_001_20220903T163129_2224611_012.nc"

fs = earthaccess.get_fsspec_https_session()
with fs.open(url) as f:
    data = f.read(10)
data
```
Or we can use them in tandem with xarray/rioxarray:
```python
import xarray as xr

ds = xr.open_mfdataset(earthaccess.open([url]))
ds
```
This release fixes #195 and #187 and completes #167.
Bug fixes:
* us-west-2
This is the first formal release under the new name. Version 0.4.6 will be available on both PyPI and conda-forge.
The first thing to mention is the new API notation, which should evolve to support all the use cases:
```python
import earthaccess

earthaccess.login(strategy="netrc")
granules = earthaccess.search_data(params)
earthaccess.download(granules, local_path="./test")
```
is equivalent to
```python
from earthdata import Store, Auth, DataGranules

auth = Auth()
auth.login(strategy="netrc")
store = Store(auth)
granules = DataGranules().params(params).get()
store.get(granules, local_path="./test")
```
We can still use the classes the same way, but eventually only the module-level API should be supported.
Features:
```python
datasets = earthaccess.search_datasets(
    doi="10.5067/AQR50-3Q7CS",
    cloud_hosted=True
)
```
Searching by DOI should usually return only one dataset, but the same data may also be available in the cloud; to be sure, we can use the cloud_hosted parameter if we want to operate on the AWS-hosted version.
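As a follow-up sketch, the matched collection could be fed into a granule search; concept_id() as a result accessor is an assumption here, and the date range and count are only illustrative:

```python
# take the first (and usually only) matching collection
collection = datasets[0]

# assumption: results expose their CMR concept id via concept_id()
granules = earthaccess.search_data(
    concept_id=collection.concept_id(),
    temporal=("2020-01-01", "2020-01-31"),
    count=10,
)
```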
The documentation has started to be updated, and soon we should have a "gallery" with more examples of how to use the library.
First release under the new name. PyPI was updated and the current earthaccess package installs v0.4.5; conda-forge is still pending.
The old notation is still supported; we can import the classes and instantiate them the same way, but having a simpler notation is probably a better idea. From now on we can do the following:
```python
import earthaccess

earthaccess.login(strategy="netrc")
granules = earthaccess.search_data(params)
earthaccess.download(granules, local_path="./test")
```
and voila!
This is still beta, and the thought is that we can have a stable package starting with v0.5.0. We need to add more tests and deal with EULAs, as they represent a big obstacle for programmatic access, especially for new NASA accounts.
This is a minor release with some bug fixes, and the last one under the old name. The next release will come under the earthaccess name.
* store.get() had a bug when used with empty lists.
* earthdata can now persist the user's credentials into a .netrc file.
```python
from earthdata import Auth, DataCollections, DataGranules, Store

auth = Auth().login(strategy="netrc")
# are we authenticated?
if not auth.authenticated:
    # ask for credentials and persist them in a .netrc file
    auth.login(strategy="interactive", persist=True)
```
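As a quick sanity check we can read the persisted entry back with the standard library; urs.earthdata.nasa.gov is the Earthdata Login host the credentials are assumed to be stored under:

```python
import netrc

# look up the Earthdata Login host in ~/.netrc
entry = netrc.netrc().authenticators("urs.earthdata.nasa.gov")
if entry is not None:
    login, _, _ = entry
    print(f"found persisted credentials for {login}")
```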
We can also renew our CMR token to make sure our authenticated queries work:
```python
auth.refresh_token()
collections = DataCollections(auth).concept_id("c-some-restricted-dataset").get()
```
We can get authenticated fsspec file sessions (closes #41):
```python
store = Store(auth)
fs = store.get_https_session()
# we can use fsspec to get any granule from any DAAC!
fs.get("https://DAAC/granule", "./data")
```
We can use Store to get our files from a list of URLs (closes #43):
```python
store = Store(auth)
files = store.get(["https://GRANULE_URL"], "./data/")
```
Lastly, we can stream certain datasets directly into xarray (even if we are not in AWS):
```python
%%time
import xarray as xr

query_results = (
    DataGranules()
    .concept_id("C2036880672-POCLOUD")
    .temporal("2018-01-01", "2018-12-31")
    .get()
)
ds = xr.open_mfdataset(store.open(query_results))
ds
```
This is probably the first usable version of earthdata.
New features:
python-cmr:
Documentation:
Authentication:
Core features are now working, making the library usable.
New features and improvements
* .netrc file or environment variables
* debug(True)
* .data_links(direct_s3=False)
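For context, a sketch of how the data_links(direct_s3=False) call listed above might be used on a granule result; the query reuses the PO.DAAC collection shown earlier, and data_links() living on the granule objects is an assumption:

```python
from earthdata import Auth, DataGranules

auth = Auth().login(strategy="netrc")

# illustrative query, reusing the PO.DAAC collection from the example above
granules = DataGranules(auth).concept_id("C2036880672-POCLOUD").get(10)

# HTTPS links with direct_s3=False; direct_s3=True would return in-region S3 links
https_links = granules[0].data_links(direct_s3=False)
```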
Bug fixes
Initial beta release of earthdata, a client library for NASA CMR and EDL.
New features and improvements
Acknowledgments