Iris API

A package for handling multi-dimensional data and associated metadata.

Note

The Iris documentation has further usage information, including a user guide which should be the first port of call for new users.

The functions in this module provide the main way to load and/or save your data.

The load() function provides a simple way to explore data from the interactive Python prompt. It will convert the source data into Cubes, and combine those cubes into higher-dimensional cubes where possible.

The load_cube() and load_cubes() functions are similar to load(), but they raise an exception if the number of cubes is not what was expected. They are more useful in scripts, where they can provide an early sanity check on incoming data.

The load_raw() function is provided for those occasions where the automatic combination of cubes into higher-dimensional cubes is undesirable. However, it is intended as a tool of last resort! If you experience a problem with the automatic combination process then please raise an issue with the Iris developers.

To persist a cube to the file-system, use the save() function.
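
For example, a minimal load-and-save round trip might look like the following sketch (the filenames here are hypothetical):

import iris

# Load every cube found in the source file ...
cubes = iris.load('my_data.pp')

# ... and persist them all to a single netCDF file.
iris.save(cubes, 'my_data.nc')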

All the load functions share very similar arguments:

  • uris:

    Either a single filename/URI expressed as a string or pathlib.PurePath, or an iterable of filenames/URIs.

    Filenames can contain ~ or ~user abbreviations, and/or Unix shell-style wildcards (e.g. * and ?). See the standard library function os.path.expanduser() and module fnmatch for more details.

    Warning

    If supplying a URL, only OPeNDAP Data Sources are supported.

  • constraints:

    Either a single constraint, or an iterable of constraints. Each constraint can be either a string, an instance of iris.Constraint, or an instance of iris.AttributeConstraint. If the constraint is a string it will be used to match against cube.name().

    For example:

    # Load air temperature data.
    load_cube(uri, 'air_temperature')
    
    # Load data with a specific model level number.
    load_cube(uri, iris.Constraint(model_level_number=1))
    
    # Load data with a specific STASH code.
    load_cube(uri, iris.AttributeConstraint(STASH='m01s00i004'))
    
  • callback:

    A function to add metadata from the originating field and/or URI, which must obey the following rules:

    1. Function signature must be: (cube, field, filename).

    2. Modifies the given cube in place, unless a new cube is returned by the function.

    3. If the cube is to be rejected the callback must raise an iris.exceptions.IgnoreCubeException.

    For example:

    def callback(cube, field, filename):
        # Extract ID from filenames given as: <prefix>__<exp_id>
        experiment_id = filename.split('__')[1]
        experiment_coord = iris.coords.AuxCoord(
            experiment_id, long_name='experiment_id')
        cube.add_aux_coord(experiment_coord)
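
    This callback can then be passed to any of the load functions; a brief usage sketch (the filename here is hypothetical):

    cubes = iris.load('experiment__ABC123.nc', callback=callback)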
    

In this module:

Provides a simple Cube-attribute based Constraint.

class iris.AttributeConstraint(**attributes)[source]

Example usage:

iris.AttributeConstraint(STASH='m01s16i004')

iris.AttributeConstraint(
    STASH=lambda stash: str(stash).endswith('i005'))

Note

Attribute constraint names are case sensitive.

extract(cube)

Return the subset of the given cube which matches this constraint, else return None.
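
For example, a constraint built this way can be applied either at load time or to a cube already in memory; a brief sketch (the filename is hypothetical):

constraint = iris.AttributeConstraint(STASH='m01s16i004')

# Apply the constraint while loading ...
cube = iris.load_cube('my_data.pp', constraint)

# ... or extract from an existing cube (returns None if nothing matches).
subset = constraint.extract(cube)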

Constraints are the mechanism by which cubes can be pattern matched and filtered according to specific criteria.

Once a constraint has been defined, it can be applied to cubes using the Constraint.extract() method.

class iris.Constraint(name=None, cube_func=None, coord_values=None, **kwargs)[source]

Creates a new instance of a Constraint which can be used for filtering cube loading or cube list extraction.

Args:

  • name: string or None

    If a string, it is used as the name to match against the iris.cube.Cube.names property.

  • cube_func: callable or None

    If a callable, it must accept a Cube as its first and only argument and return either True or False.

  • coord_values: dict or None

    If a dict, it must map each coordinate name to the condition on the associated coordinate.

  • **kwargs:

    The remaining keyword arguments are converted to coordinate constraints. The name of the argument gives the name of a coordinate, and the value of the argument is the condition to meet on that coordinate:

    Constraint(model_level_number=10)
    

    Coordinate level constraints can be of several types:

    • string, int or float - the value of the coordinate to match. e.g. model_level_number=10

    • list of values - the possible values that the coordinate may have to match. e.g. model_level_number=[10, 12]

    • callable - a function which accepts an iris.coords.Cell instance as its first and only argument and returns True if the value of the Cell is desired, else False. e.g. model_level_number=lambda cell: 5 < cell < 10

The user guide covers cube constraining in much more detail; however, an example which uses all of the features of this class is given here for completeness:

Constraint(name='air_potential_temperature',
           cube_func=lambda cube: cube.units == 'kelvin',
           coord_values={'latitude': lambda cell: 0 < cell < 90},
           model_level_number=[10, 12]) & Constraint(ensemble_member=2)

Note

Whilst & is supported, the | that might reasonably be expected is not. This is because each constraint describes a boxlike region, and thus the intersection of these constraints (obtained with &) will also describe a boxlike region. Allowing the union of two constraints (with the | symbol) would allow the description of a non-boxlike region. These are difficult to describe with cubes and so it would be ambiguous what should be extracted.

To generate multiple cubes, each constrained to a different range of the same coordinate, use iris.load_cubes() or iris.cube.CubeList.extract_cubes().

A cube can be constrained to multiple ranges within the same coordinate using something like the following constraint:

def latitude_bands(cell):
    return (0 < cell < 30) or (60 < cell < 90)

Constraint(latitude=latitude_bands)
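
As a sketch of the iris.load_cubes() approach mentioned above, each band can instead be written as its own constraint, so that exactly one cube is returned per band (the filename is hypothetical):

tropics = Constraint(latitude=lambda cell: 0 < cell < 30)
high_latitudes = Constraint(latitude=lambda cell: 60 < cell < 90)

# Exactly one cube is returned for each constraint.
cubes = iris.load_cubes('my_data.nc', [tropics, high_latitudes])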

Constraint filtering is performed at the cell level. For further details on how cell comparisons are performed see iris.coords.Cell.

extract(cube)[source]

Return the subset of the given cube which matches this constraint, else return None.

iris.FUTURE

Object containing all the Iris run-time options.

Run-time configuration controller.

class iris.Future[source]

A container for run-time option controls.

To adjust the values simply update the relevant attribute from within your code. For example:

iris.FUTURE.example_future_flag = False

If Iris code is executed with multiple threads, note the values of these options are thread-specific.

Note

iris.FUTURE.example_future_flag does not exist. It is provided as an example because there are currently no flags in iris.Future.

context(**kwargs)[source]

Return a context manager which allows temporary modification of the option values for the active thread.

On entry to the with statement, all keyword arguments are applied to the Future object. On exit from the with statement, the previous state is restored.

For example:

with iris.FUTURE.context(example_future_flag=False):
    # ... code that expects some past behaviour

Note

iris.FUTURE.example_future_flag does not exist and is provided only as an example since there are currently no flags in Future.

deprecated_options = {}

An Iris deprecation warning.

class iris.IrisDeprecation[source]

An Iris deprecation warning.

with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

args

Provides a simple Cube name based Constraint.

class iris.NameConstraint(standard_name='none', long_name='none', var_name='none', STASH='none')[source]

Provides a simple Cube name based Constraint, which matches against each of the names provided; these may be the standard name, long name, NetCDF variable name and/or the STASH code from the attributes dictionary.

The name constraint will only succeed if all of the provided names match.

Kwargs:

  • standard_name:

    A string or callable representing the standard name to match against.

  • long_name:

    A string or callable representing the long name to match against.

  • var_name:

    A string or callable representing the NetCDF variable name to match against.

  • STASH:

    A string or callable representing the UM STASH code to match against.

Note

The default value of each of the keyword arguments is the string “none”, rather than the singleton None, as None may be a legitimate value to be matched against. For example, to constrain against all cubes where the standard_name is not set, use standard_name=None.

Returns:

  • Boolean

Example usage:

iris.NameConstraint(long_name='air temp', var_name=None)

iris.NameConstraint(long_name=lambda name: 'temp' in name)

iris.NameConstraint(standard_name='air_temperature',
                    STASH=lambda stash: stash.item == 203)

extract(cube)

Return the subset of the given cube which matches this constraint, else return None.
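
For example, a name constraint is typically applied at load time; a brief sketch (the filename is hypothetical):

# Match on long_name while requiring that var_name is not set.
cube = iris.load_cube('my_data.nc',
                      iris.NameConstraint(long_name='air temp', var_name=None))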

iris.load(uris, constraints=None, callback=None)[source]

Loads any number of Cubes for each constraint.

For a full description of the arguments, please see the module documentation for iris.

Args:

  • uris:

    One or more filenames/URIs, as a string or pathlib.PurePath. If supplying a URL, only OPeNDAP Data Sources are supported.

Kwargs:

  • constraints:

    One or more constraints.

  • callback:

    A modifier/filter function.

Returns

An iris.cube.CubeList. Note that there is no inherent order to this iris.cube.CubeList and it should be treated as if it were random.
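
Since the order is arbitrary, one way to obtain a repeatable ordering after loading is to sort the result yourself, as in this sketch (the filename is hypothetical):

from iris.cube import CubeList

cubes = iris.load('my_data.nc')

# Impose a deterministic order by sorting on cube name.
cubes = CubeList(sorted(cubes, key=lambda cube: cube.name()))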

iris.load_cube(uris, constraint=None, callback=None)[source]

Loads a single cube.

For a full description of the arguments, please see the module documentation for iris.

Args:

  • uris:

    One or more filenames/URIs, as a string or pathlib.PurePath. If supplying a URL, only OPeNDAP Data Sources are supported.

Kwargs:

  • constraint:

    A constraint.

  • callback:

    A modifier/filter function.

Returns

An iris.cube.Cube.
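
For example, a guarded call in a script might look like the sketch below (the filename and name are hypothetical, and iris.exceptions.ConstraintMismatchError is assumed here to be the exception raised when the match is not exactly one cube):

import iris
import iris.exceptions

try:
    # Fails unless the file yields exactly one matching cube.
    air_temp = iris.load_cube('my_data.nc', 'air_temperature')
except iris.exceptions.ConstraintMismatchError:
    raise SystemExit('expected exactly one air_temperature cube')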

iris.load_cubes(uris, constraints=None, callback=None)[source]

Loads exactly one Cube for each constraint.

For a full description of the arguments, please see the module documentation for iris.

Args:

  • uris:

    One or more filenames/URIs, as a string or pathlib.PurePath. If supplying a URL, only OPeNDAP Data Sources are supported.

Kwargs:

  • constraints:

    One or more constraints.

  • callback:

    A modifier/filter function.

Returns

An iris.cube.CubeList. Note that there is no inherent order to this iris.cube.CubeList and it should be treated as if it were random.
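
Because the result order is not guaranteed, one approach is to look the cubes up by name afterwards, as in this sketch (the filename and names are hypothetical):

constraints = ['air_temperature', 'relative_humidity']

# Exactly one cube is returned per constraint; a mismatch raises an exception.
cubes = iris.load_cubes('my_data.nc', constraints)

# Index the (unordered) result by cube name.
by_name = {cube.name(): cube for cube in cubes}
air_temp = by_name['air_temperature']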

iris.load_raw(uris, constraints=None, callback=None)[source]

Loads non-merged cubes.

This function is provided for those occasions where the automatic combination of cubes into higher-dimensional cubes is undesirable. However, it is intended as a tool of last resort! If you experience a problem with the automatic combination process then please raise an issue with the Iris developers.

For a full description of the arguments, please see the module documentation for iris.

Args:

  • uris:

    One or more filenames/URIs, as a string or pathlib.PurePath. If supplying a URL, only OPeNDAP Data Sources are supported.

Kwargs:

  • constraints:

    One or more constraints.

  • callback:

    A modifier/filter function.

Returns

An iris.cube.CubeList.
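
A brief sketch contrasting load_raw() with load() (the filename is hypothetical):

raw_cubes = iris.load_raw('my_data.pp')
combined_cubes = iris.load('my_data.pp')

# The raw result typically contains many more (uncombined) cubes, e.g. one
# per individual field, than the combined result from iris.load().
print(len(raw_cubes), len(combined_cubes))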

iris.sample_data_path(*path_to_join)[source]

Given the sample data resource, returns the full path to the file.

Note

This function is only for locating files in the iris sample data collection (installed separately from iris). It is not needed or appropriate for general file access.
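
For example, assuming the separate Iris sample data package is installed (the filename shown is illustrative):

import iris

# Resolve the full path of a file in the sample data collection, then load it.
fname = iris.sample_data_path('air_temp.pp')
cube = iris.load_cube(fname)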

iris.save(source, target, saver=None, **kwargs)[source]

Save one or more Cubes to file (or other writeable).

Iris currently supports three file formats for saving, which it can recognise by filename extension.

A custom saver can be provided to the function to write to a different file format.

Args:

  • source:

    iris.cube.Cube, iris.cube.CubeList or sequence of cubes.

  • target:

    A filename (or writeable, depending on file format). When given a filename or file, Iris can determine the file format. The filename can be given as a string or pathlib.PurePath.

Kwargs:

  • saver:

    Optional. Specifies the file format to save. If omitted, Iris will attempt to determine the format.

    If a string, this is the recognised filename extension (where the actual filename may not have it). Otherwise the value is a saver function, of the form: my_saver(cube, target) plus any custom keywords. It is assumed that a saver will accept an append keyword if its file format can handle multiple cubes. See also iris.io.add_saver().

All other keywords are passed through to the saver function; see the relevant saver documentation for more information on keyword arguments.

Examples:

# Save a cube to PP
iris.save(my_cube, "myfile.pp")

# Save a cube list to a PP file, appending to the contents of the file
# if it already exists
iris.save(my_cube_list, "myfile.pp", append=True)

# Save a cube to netCDF, defaults to NETCDF4 file format
iris.save(my_cube, "myfile.nc")

# Save a cube list to netCDF, using the NETCDF3_CLASSIC storage option
iris.save(my_cube_list, "myfile.nc", netcdf_format="NETCDF3_CLASSIC")

Warning

Saving a cube whose data has been loaded lazily (if cube.has_lazy_data() returns True) to the same file it expects to load data from will cause both the data in-memory and the data on disk to be lost.

cube = iris.load_cube("somefile.nc")
# The next line causes data loss in 'somefile.nc' and the cube.
iris.save(cube, "somefile.nc")

In general, overwriting a file which is the source for any lazily loaded data can result in corruption. Users should proceed with caution when attempting to overwrite an existing file.
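
One possible way to avoid this is to realise the lazy data into memory before overwriting the source file, as in the sketch below (a workaround sketch, assuming that accessing cube.data realises the lazy data):

cube = iris.load_cube("somefile.nc")

# Accessing cube.data realises the lazy data, so the cube no longer
# depends on 'somefile.nc' for its values ...
_ = cube.data

# ... and the file can now be overwritten without losing those values.
iris.save(cube, "somefile.nc")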

iris.site_configuration

Iris site configuration dictionary.