iris.fileformats.netcdf.loader#
Support loading Iris cubes from NetCDF files using the CF conventions for metadata interpretation.
See: NetCDF User's Guide and the netCDF4 Python module.
Also: CF Conventions.
- iris.fileformats.netcdf.loader.CHUNK_CONTROL: ChunkControl = <iris.fileformats.netcdf.loader.ChunkControl object>#
The global ChunkControl object providing user control of Dask chunking when Iris loads NetCDF files.
- class iris.fileformats.netcdf.loader.ChunkControl(var_dim_chunksizes=None)[source]#
Bases: _local
Provide user control of Dask chunking.
The NetCDF loader is controlled by the single instance of this class: the CHUNK_CONTROL object.
A chunk size can be set for a specific (named) file dimension, when loading specific (named) variables, or for all variables.
When a selected variable is a CF data-variable, which loads as a Cube, then the given dimension chunk size is also fixed for all variables which are components of that Cube, i.e. any Coord, CellMeasure, AncillaryVariable, etc. This can be overridden, if required, by variable-specific settings.
For this purpose, MeshCoord and Connectivity are not Cube components, and chunk control on a Cube data-variable will not affect them.
- class Modes(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
Bases: Enum
Enumeration of chunk-control modes.
- AS_DASK = 3#
- DEFAULT = 1#
- FROM_FILE = 2#
- classmethod __contains__(value)#
Return True if value is in cls.
value is in cls if: 1) value is a member of cls, 2) value is the value of one of cls's members, or 3) value is a pseudo-member (flags).
- classmethod __getitem__(name)#
Return the member matching name.
- classmethod __iter__()#
Return members in definition order.
- classmethod __len__()#
Return the number of members (no aliases).
- as_dask()[source]#
Rely on Dask Array to control chunk sizes.
Notes
This function acts as a context manager, for use in a with block.
- Return type:
Iterator[None]
- from_file()[source]#
Ensure the chunk sizes are loaded in from NetCDF file variables.
- Raises:
KeyError – If any NetCDF data variables (those that become a Cube) do not specify chunk sizes.
- Return type:
Iterator[None]
Notes
This function acts as a context manager, for use in a with block.
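Both as_dask() and from_file() temporarily switch the controller into an alternative mode for the duration of a with block, restoring the previous mode on exit. The pattern can be illustrated with a minimal stdlib-only sketch; ChunkControlSketch and _switch_mode are illustrative names, not the actual Iris implementation:

```python
from contextlib import contextmanager
from enum import Enum


class Modes(Enum):
    # Mirrors the three modes documented above.
    DEFAULT = 1
    FROM_FILE = 2
    AS_DASK = 3


class ChunkControlSketch:
    """Illustrative stand-in for ChunkControl (not the Iris implementation)."""

    def __init__(self):
        self.mode = Modes.DEFAULT

    @contextmanager
    def _switch_mode(self, mode):
        # Temporarily switch mode, restoring the previous one on exit,
        # even if the `with` body raises.
        previous = self.mode
        self.mode = mode
        try:
            yield
        finally:
            self.mode = previous

    def as_dask(self):
        return self._switch_mode(Modes.AS_DASK)

    def from_file(self):
        return self._switch_mode(Modes.FROM_FILE)


ctrl = ChunkControlSketch()
with ctrl.as_dask():
    inside = ctrl.mode  # Modes.AS_DASK while inside the block
after = ctrl.mode       # restored to Modes.DEFAULT afterwards
```

The try/finally around the yield is what makes the mode restoration reliable even when loading raises an exception inside the with block.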
- set(var_names=None, **dimension_chunksizes)[source]#
Control the Dask chunk sizes applied to NetCDF variables during loading.
This function can also be used to provide a size hint for the unknown array lengths when loading “variable-length” NetCDF data types. See https://unidata.github.io/netcdf4-python/#netCDF4.Dataset.vltypes
- Parameters:
var_names (str or list of str, default=None) – Apply the dimension_chunksizes controls only to these variables, or when building a Cube from these data variables. If None, settings apply to all loaded variables.
**dimension_chunksizes (dict of {str: int}) – Kwargs specifying chunk sizes for dimensions of file variables. Each key-value pair defines a chunk size for a named file dimension, e.g. {'time': 10, 'model_levels': 1}. A value of -1 locks the chunk size to the full size of that dimension. To specify a size hint for "variable-length" data types, use the special name _vl_hint.
- Return type:
Iterator[None]
Notes
This function acts as a context manager, for use in a with block.
>>> import iris
>>> from iris.fileformats.netcdf.loader import CHUNK_CONTROL
>>> with CHUNK_CONTROL.set("air_temperature", time=180, latitude=-1):
...     cube = iris.load(iris.sample_data_path("E1_north_america.nc"))[0]
When var_names is present, the chunk size adjustments are applied only to the selected variables. However, for a CF data variable, this extends to all components of the (raw) Cube created from it.
Un-adjusted dimensions have chunk sizes set in the 'usual' way. That is, according to the normal behaviour of iris._lazy_data.as_lazy_data(), which is: the chunk size is based on the file variable chunking, or the full variable shape; this is scaled up or down by integer factors to best match the Dask default chunk size, i.e. the setting configured by dask.config.set({'array.chunk-size': '250MiB'}).
For variable-length data types, the size of the variable (or "ragged") dimension of the individual array elements cannot be known without reading the data. This can make it difficult for Iris to determine whether to load the data lazily or not. If the user has some a priori knowledge of the mean variable array length, this can be passed as a size hint via the special _vl_hint name. For example, a hint that a variable-length string array contains 4-character experiment identifiers:
CHUNK_CONTROL.set("expver", _vl_hint=4)
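To make the -1 semantics concrete, here is a hypothetical helper (resolve_chunks is an illustrative name, not part of the Iris API) showing how per-dimension overrides like those passed to set() would translate into chunk sizes:

```python
def resolve_chunks(dim_sizes, overrides):
    """Illustrative helper (not Iris API): apply per-dimension chunk-size
    overrides, where -1 locks a dimension's chunks to its full size."""
    chunks = {}
    for name, size in dim_sizes.items():
        if name in overrides:
            requested = overrides[name]
            # -1 means "one chunk spanning the whole dimension".
            chunks[name] = size if requested == -1 else min(requested, size)
        else:
            # Un-adjusted dimensions are left to the default policy (file
            # chunking / Dask chunk-size heuristics); here we simply keep
            # the full dimension size as a placeholder.
            chunks[name] = size
    return chunks


# Dimensions of a hypothetical file variable.
dims = {"time": 240, "latitude": 37, "longitude": 49}
resolve_chunks(dims, {"time": 180, "latitude": -1})
# -> {'time': 180, 'latitude': 37, 'longitude': 49}
```

This mirrors the doctest above: time is chunked at 180, latitude is locked to its full extent, and longitude is left to the default policy.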
- iris.fileformats.netcdf.loader.load_cubes(file_sources, callback=None, constraints=None)[source]#
Load cubes from a list of NetCDF filenames/OPeNDAP URLs.
- Parameters:
file_sources (str or list) – One or more NetCDF filenames/OPeNDAP URLs to load from, or open datasets.
callback (function, optional) – Function which can be passed on to iris.io.run_callback().
constraints (optional)
- Return type:
Generator of loaded NetCDF iris.cube.Cube.
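Because load_cubes returns a generator, cubes are produced one at a time as sources are consumed, and a callback can adjust or reject each one. A simplified stdlib-only sketch of that generator-plus-callback pattern (load_cubes_sketch is illustrative; the real Iris callback is invoked via iris.io.run_callback() and receives more arguments):

```python
def load_cubes_sketch(file_sources, callback=None):
    """Illustrative sketch (not the Iris implementation) of loading
    cubes lazily from multiple sources with an optional callback."""
    if isinstance(file_sources, str):
        # Accept a single source as well as a list of sources.
        file_sources = [file_sources]
    for source in file_sources:
        # In Iris this step would parse the NetCDF source into cubes;
        # here a placeholder record stands in for a cube.
        cube = {"source": source}
        if callback is not None:
            # The callback may modify the cube, or (in this sketch)
            # return None to skip it entirely.
            cube = callback(cube)
            if cube is None:
                continue
        yield cube


# Usage: a callback that filters out one source.
cubes = list(
    load_cubes_sketch(
        ["a.nc", "b.nc"],
        callback=lambda cube: None if cube["source"] == "b.nc" else cube,
    )
)
# -> [{'source': 'a.nc'}]
```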