iris#

A package for handling multi-dimensional data and associated metadata.

Note

The Iris documentation has further usage information, including a user guide which should be the first port of call for new users.

The functions in this module provide the main way to load and/or save your data.

The load() function provides a simple way to explore data from the interactive Python prompt. It will convert the source data into Cubes, and combine those cubes into higher-dimensional cubes where possible.

The load_cube() and load_cubes() functions are similar to load(), but they raise an exception if the number of cubes is not what was expected. They are more useful in scripts, where they can provide an early sanity check on incoming data.

The load_raw() function is provided for those occasions where the automatic combination of cubes into higher-dimensional cubes is undesirable. However, it is intended as a tool of last resort! If you experience a problem with the automatic combination process then please raise an issue with the Iris developers.

To persist a cube to the file-system, use the save() function.

All the load functions share very similar arguments:

  • uris:

    Either a single filename/URI expressed as a string or pathlib.PurePath, or an iterable of filenames/URIs.

    Filenames can contain ~ or ~user abbreviations, and/or Unix shell-style wildcards (e.g. * and ?). See the standard library function os.path.expanduser() and module fnmatch for more details.

    Warning

    If supplying a URL, only OPeNDAP Data Sources are supported.
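
    The wildcard expansion described above can be sketched with the standard library alone (a simplified stand-in for Iris's own URI handling; the filenames below are invented for illustration):

    ```python
    import fnmatch
    import os.path

    def expand_pattern(pattern, available):
        # Expand ~ abbreviations, then match shell-style wildcards
        # against a known list of filenames.
        expanded = os.path.expanduser(pattern)
        return sorted(name for name in available
                      if fnmatch.fnmatch(name, expanded))

    files = ["run01.pp", "run02.pp", "run02.nc", "notes.txt"]
    print(expand_pattern("run0?.pp", files))  # ['run01.pp', 'run02.pp']
    print(expand_pattern("*.nc", files))      # ['run02.nc']
    ```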

  • constraints:

    Either a single constraint, or an iterable of constraints. Each constraint can be either a string, an instance of iris.Constraint, or an instance of iris.AttributeConstraint. If the constraint is a string it will be used to match against cube.name().

    For example:

    # Load air temperature data.
    load_cube(uri, 'air_temperature')
    
    # Load data with a specific model level number.
    load_cube(uri, iris.Constraint(model_level_number=1))
    
    # Load data with a specific STASH code.
    load_cube(uri, iris.AttributeConstraint(STASH='m01s00i004'))
    
  • callback:

    A function to add metadata from the originating field and/or URI, which must obey the following rules:

    1. Function signature must be: (cube, field, filename).

    2. Modifies the given cube in place, unless a new cube is returned by the function.

    3. If the cube is to be rejected the callback must raise an iris.exceptions.IgnoreCubeException.

    For example:

    def callback(cube, field, filename):
        # Extract ID from filenames given as: <prefix>__<exp_id>
        experiment_id = filename.split('__')[1]
        experiment_coord = iris.coords.AuxCoord(
            experiment_id, long_name='experiment_id')
        cube.add_aux_coord(experiment_coord)
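
    The three callback rules can be sketched without Iris; below, a plain dict stands in for a cube and IgnoreCubeException is a local stand-in for iris.exceptions.IgnoreCubeException (this is not Iris's actual loading code):

    ```python
    class IgnoreCubeException(Exception):
        """Local stand-in for iris.exceptions.IgnoreCubeException."""

    def apply_callback(callback, cube, field, filename):
        # Rule 2: the cube is modified in place unless the callback
        # returns a new cube.  Rule 3: raising IgnoreCubeException
        # rejects the cube (modelled here as returning None).
        try:
            result = callback(cube, field, filename)
        except IgnoreCubeException:
            return None
        return cube if result is None else result

    def callback(cube, field, filename):
        # Reject cubes from files lacking the <prefix>__<exp_id> pattern.
        if '__' not in filename:
            raise IgnoreCubeException()
        cube['experiment_id'] = filename.split('__')[1]

    print(apply_callback(callback, {}, None, "hadgem3__expA"))  # {'experiment_id': 'expA'}
    print(apply_callback(callback, {}, None, "badname"))        # None
    ```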
    
class iris.AttributeConstraint(**attributes)[source]#

Bases: Constraint

Provides a simple Cube-attribute based Constraint.

Example usage:

iris.AttributeConstraint(STASH='m01s16i004')

iris.AttributeConstraint(
    STASH=lambda stash: str(stash).endswith('i005'))

Note

Attribute constraint names are case sensitive.

extract(cube)#

Return the subset of the given cube which matches this constraint, else return None.

class iris.Constraint(name=None, cube_func=None, coord_values=None, **kwargs)[source]#

Bases: object

Constraints are the mechanism by which cubes can be pattern matched and filtered according to specific criteria.

Once a constraint has been defined, it can be applied to cubes using the Constraint.extract() method.

Creates a new instance of a Constraint which can be used for filtering cube loading or cube list extraction.

Parameters:
  • name (str or None, optional) – If a string, it is used as the name to match against the iris.cube.Cube.names property.

  • cube_func (callable or None, optional) – If a callable, it must accept a Cube as its first and only argument and return either True or False.

  • coord_values (dict or None, optional) – If a dict, it must map coordinate name to the condition on the associated coordinate.

  • **kwargs (dict, optional) –

    The remaining keyword arguments are converted to coordinate constraints. The name of the argument gives the name of a coordinate, and the value of the argument is the condition to meet on that coordinate:

    Constraint(model_level_number=10)
    

    Coordinate level constraints can be of several types:

    • string, int or float - the value of the coordinate to match. e.g. model_level_number=10

    • list of values - the possible values that the coordinate may have to match. e.g. model_level_number=[10, 12]

    • callable - a function which accepts an iris.coords.Cell instance as its first and only argument, returning True if the value of the Cell is desired and False otherwise. e.g. model_level_number=lambda cell: 5 < cell < 10
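
    The three condition types can be sketched in plain Python (a simplified matcher; real Iris compares iris.coords.Cell objects rather than bare numbers):

    ```python
    def cell_matches(cell, condition):
        # Callable: treat as a predicate on the cell.
        if callable(condition):
            return bool(condition(cell))
        # List of values: match any one of them.
        if isinstance(condition, (list, tuple)):
            return cell in condition
        # Single string/int/float: match the value exactly.
        return cell == condition

    print(cell_matches(10, 10))                          # True (single value)
    print(cell_matches(12, [10, 12]))                    # True (list of values)
    print(cell_matches(7, lambda cell: 5 < cell < 10))   # True (callable)
    ```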

Examples

The user guide covers much of cube constraining in detail; however, an example which uses all of the features of this class is given here for completeness:

(Constraint(name='air_potential_temperature',
            cube_func=lambda cube: cube.units == 'kelvin',
            coord_values={'latitude': lambda cell: 0 < cell < 90},
            model_level_number=[10, 12])
 & Constraint(ensemble_member=2))

Note

Whilst & is supported, the | that might reasonably be expected is not. This is because each constraint describes a boxlike region, and thus the intersection of these constraints (obtained with &) will also describe a boxlike region. Allowing the union of two constraints (with the | symbol) would allow the description of a non-boxlike region. These are difficult to describe with cubes and so it would be ambiguous what should be extracted.

To generate multiple cubes, each constrained to a different range of the same coordinate, use iris.load_cubes() or iris.cube.CubeList.extract_cubes().

A cube can be constrained to multiple ranges within the same coordinate using something like the following constraint:

def latitude_bands(cell):
    return (0 < cell < 30) or (60 < cell < 90)

Constraint(cube_func=latitude_bands)

Constraint filtering is performed at the cell level. For further details on how cell comparisons are performed see iris.coords.Cell.

extract(cube)[source]#

Return the subset of the given cube which matches this constraint, else return None.

iris.FUTURE = Future(datum_support=False, pandas_ndim=False, save_split_attrs=False)#

Object containing all the Iris run-time options.

class iris.Future(datum_support=False, pandas_ndim=False, save_split_attrs=False)[source]#

Bases: _local

Run-time configuration controller.

To adjust the values simply update the relevant attribute from within your code. For example:

# example_future_flag is a fictional example.
iris.FUTURE.example_future_flag = False

If Iris code is executed with multiple threads, note the values of these options are thread-specific.

Parameters:
  • datum_support (bool, default=False) – Opts in to loading coordinate system datum information from NetCDF files into CoordSystem, wherever this information is present.

  • pandas_ndim (bool, default=False) – See iris.pandas.as_data_frame() for details - opts in to the newer n-dimensional behaviour.

  • save_split_attrs (bool, default=False) – Save “global” and “local” cube attributes to netcdf in appropriately different ways : “global” ones are saved as dataset attributes, where possible, while “local” ones are saved as data-variable attributes. See iris.fileformats.netcdf.saver.save().

context(**kwargs)[source]#

Return a context manager for temporary modification of option values for the active thread.

On entry to the with statement, all keyword arguments are applied to the Future object. On exit from the with statement, the previous state is restored.

For example:

# example_future_flag is a fictional example.
with iris.FUTURE.context(example_future_flag=False):
    ...  # code that expects some past behaviour
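
The save-and-restore behaviour of context() can be sketched with a minimal stand-in class (not Iris's actual implementation; example_future_flag remains a fictional option):

```python
from contextlib import contextmanager

class FakeFuture:
    def __init__(self):
        self.example_future_flag = True  # fictional option

    @contextmanager
    def context(self, **kwargs):
        # On entry, apply the keyword arguments; on exit, restore the
        # previous values, even if an exception was raised.
        previous = {name: getattr(self, name) for name in kwargs}
        try:
            for name, value in kwargs.items():
                setattr(self, name, value)
            yield self
        finally:
            for name, value in previous.items():
                setattr(self, name, value)

future = FakeFuture()
with future.context(example_future_flag=False):
    print(future.example_future_flag)  # False inside the block
print(future.example_future_flag)      # True again afterwards
```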
deprecated_options = {}#
exception iris.IrisDeprecation[source]#

Bases: UserWarning

An Iris deprecation warning.

Note this subclasses UserWarning for backwards compatibility with Iris’ original deprecation warnings. Should subclass DeprecationWarning at the next major release.

add_note()#

Exception.add_note(note) – add a note to the exception

args#
with_traceback()#

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

class iris.NameConstraint(standard_name='none', long_name='none', var_name='none', STASH='none')[source]#

Bases: Constraint

Provides a simple Cube name based Constraint that matches against each of the names provided; these may be the standard name, long name, NetCDF variable name and/or the STASH code from the attributes dictionary.

The name constraint will only succeed if all of the provided names match.

Parameters:
  • standard_name (optional) – A string or callable representing the standard name to match against.

  • long_name (optional) – A string or callable representing the long name to match against.

  • var_name (optional) – A string or callable representing the NetCDF variable name to match against.

  • STASH (optional) – A string or callable representing the UM STASH code to match against.

Notes

The default value of each of the keyword arguments is the string “none”, rather than the singleton None, as None may be a legitimate value to be matched against. For example, to constrain against all cubes where the standard_name is not set, use standard_name=None.
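
The sentinel distinction can be sketched as follows (a simplified single-name matcher, not Iris's implementation):

```python
def name_matches(actual, constraint="none"):
    # "none" (the string) means the argument was not supplied,
    # so this name is not constrained at all.
    if constraint == "none":
        return True
    # A callable is used as a predicate on the name.
    if callable(constraint):
        return bool(constraint(actual))
    # Otherwise compare directly; note constraint=None matches
    # only cubes where the name is genuinely unset.
    return actual == constraint

print(name_matches("air_temperature"))                  # True (unconstrained)
print(name_matches(None, None))                         # True (explicitly unset)
print(name_matches("air_temp", lambda n: "temp" in n))  # True (callable)
```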

Examples

Example usage:

iris.NameConstraint(long_name='air temp', var_name=None)

iris.NameConstraint(long_name=lambda name: 'temp' in name)

iris.NameConstraint(standard_name='air_temperature',
                    STASH=lambda stash: stash.item == 203)
extract(cube)#

Return the subset of the given cube which matches this constraint, else return None.

iris.load(uris, constraints=None, callback=None)[source]#

Load any number of Cubes for each constraint.

For a full description of the arguments, please see the module documentation for iris.

Parameters:
  • uris (str or pathlib.PurePath) – One or more filenames/URIs, as a string or pathlib.PurePath. If supplying a URL, only OPeNDAP Data Sources are supported.

  • constraints (optional) – One or more constraints.

  • callback (optional) – A modifier/filter function.

Returns:

An iris.cube.CubeList. Note that there is no inherent order to this iris.cube.CubeList and it should be treated as if it were random.

Return type:

iris.cube.CubeList

iris.load_cube(uris, constraint=None, callback=None)[source]#

Load a single cube.

For a full description of the arguments, please see the module documentation for iris.

Parameters:
  • uris – One or more filenames/URIs, as a string or pathlib.PurePath. If supplying a URL, only OPeNDAP Data Sources are supported.

  • constraints (optional) – A constraint.

  • callback (optional) – A modifier/filter function.

Return type:

iris.cube.Cube

iris.load_cubes(uris, constraints=None, callback=None)[source]#

Load exactly one Cube for each constraint.

For a full description of the arguments, please see the module documentation for iris.

Parameters:
  • uris – One or more filenames/URIs, as a string or pathlib.PurePath. If supplying a URL, only OPeNDAP Data Sources are supported.

  • constraints (optional) – One or more constraints.

  • callback (optional) – A modifier/filter function.

Returns:

An iris.cube.CubeList. Note that there is no inherent order to this iris.cube.CubeList and it should be treated as if it were random.

Return type:

iris.cube.CubeList

iris.load_raw(uris, constraints=None, callback=None)[source]#

Load non-merged cubes.

This function is provided for those occasions where the automatic combination of cubes into higher-dimensional cubes is undesirable. However, it is intended as a tool of last resort! If you experience a problem with the automatic combination process then please raise an issue with the Iris developers.

For a full description of the arguments, please see the module documentation for iris.

Parameters:
  • uris – One or more filenames/URIs, as a string or pathlib.PurePath. If supplying a URL, only OPeNDAP Data Sources are supported.

  • constraints (optional) – One or more constraints.

  • callback (optional) – A modifier/filter function.

Return type:

iris.cube.CubeList

iris.sample_data_path(*path_to_join)[source]#

Given the sample data resource, return the full path to the file.

Note

This function is only for locating files in the iris sample data collection (installed separately from iris). It is not needed or appropriate for general file access.

iris.save(source, target, saver=None, **kwargs)[source]#

Save one or more Cubes to file (or other writeable).

Iris currently supports three file formats for saving, which it can recognise by filename extension:

  • netCDF - the Unidata network Common Data Format: see iris.fileformats.netcdf.save()

  • GRIB2 - the WMO GRIdded Binary data format: see iris_grib.save_grib2()

  • PP - the Met Office PP format: see iris.fileformats.pp.save()

A custom saver can be provided to the function to write to a different file format.

Parameters:
  • source (iris.cube.Cube or iris.cube.CubeList)

  • target (str or pathlib.PurePath or io.TextIOWrapper) – When given a filename or file, Iris can determine the file format.

  • saver (str or function, optional) –

    Specifies the file format to save. If omitted, Iris will attempt to determine the format. If a string, this is the recognised filename extension (where the actual filename may not have it).

    Otherwise the value is a saver function, of the form: my_saver(cube, target) plus any custom keywords. It is assumed that a saver will accept an append keyword if its file format can handle multiple cubes. See also iris.io.add_saver().

  • **kwargs (dict, optional) – All other keywords are passed through to the saver function; see the relevant saver documentation for more information on keyword arguments.
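
    A custom saver of the form described above might look like this (my_text_saver and its header keyword are hypothetical, not part of Iris):

    ```python
    def my_text_saver(cube, target, header="# saved by my_text_saver"):
        # A saver receives the cube and the target, plus any custom
        # keywords passed through from iris.save(); this one writes a
        # plain-text summary.
        with open(target, "w") as handle:
            handle.write(header + "\n")
            handle.write(repr(cube) + "\n")

    # Usage would then be, for example:
    #     iris.save(cube, "summary.txt", saver=my_text_saver)
    ```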

Warning

Saving a cube whose data has been loaded lazily (if cube.has_lazy_data() returns True) to the same file it expects to load data from will cause both the data in-memory and the data on disk to be lost.

cube = iris.load_cube("somefile.nc")
# The next line causes data loss in 'somefile.nc' and the cube.
iris.save(cube, "somefile.nc")

In general, overwriting a file which is the source for any lazily loaded data can result in corruption. Users should proceed with caution when attempting to overwrite an existing file.
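
The hazard can be sketched with plain file I/O, where a deferred read stands in for a cube's lazy data (no Iris involved; the filename is invented):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "somefile.txt")
with open(path, "w") as f:
    f.write("original data")

# A "lazy" reader: opened now, but the data is not read yet.
reader = open(path)

# Overwriting the file before the deferred read has happened...
with open(path, "w") as f:
    f.write("")

# ...loses the original data both on disk and for the reader.
contents = reader.read()
reader.close()
print(repr(contents))  # ''
```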

Examples

>>> # Setting up
>>> import iris
>>> my_cube = iris.load_cube(iris.sample_data_path('air_temp.pp'))
>>> my_cube_list = iris.load(iris.sample_data_path('space_weather.nc'))
>>> # Save a cube to PP
>>> iris.save(my_cube, "myfile.pp")
>>> # Save a cube list to a PP file, appending to the contents of the file
>>> # if it already exists
>>> iris.save(my_cube_list, "myfile.pp", append=True)
>>> # Save a cube to netCDF, defaults to NETCDF4 file format
>>> iris.save(my_cube, "myfile.nc")
>>> # Save a cube list to netCDF, using the NETCDF3_CLASSIC storage option
>>> iris.save(my_cube_list, "myfile.nc", netcdf_format="NETCDF3_CLASSIC")

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

iris.site_configuration = {}#

Iris site configuration dictionary.

iris.use_plugin(plugin_name)[source]#

Import a plugin.

Parameters:

plugin_name (str) – Name of plugin.

Examples

The following:

use_plugin("my_plugin")

is equivalent to:

import iris.plugins.my_plugin

This is useful for plugins that are not used directly, but instead do all their setup on import. In this case, style checkers would not know the significance of the import statement and warn that it is an unused import.
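
The equivalence can be sketched with importlib from the standard library ("my_plugin" is the hypothetical name from the example above; the demonstration at the end uses a module that actually exists):

```python
import importlib

def use_plugin(plugin_name):
    # use_plugin("my_plugin") is roughly: import iris.plugins.my_plugin
    return importlib.import_module("iris.plugins." + plugin_name)

# The same mechanism, shown with a real standard-library module:
module = importlib.import_module("json")
print(module.dumps({"ok": True}))  # {"ok": true}
```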

Subpackages#

Submodules#