iris.util#

Miscellaneous utility functions.

iris.util.approx_equal(a, b, max_absolute_error=1e-10, max_relative_error=1e-10)[source]#

Check if two numbers are almost equal.

Returns whether two numbers are almost equal, allowing for the finite precision of floating point numbers.

Notes

This function does maintain laziness when called; it doesn’t realise data. See more at Real and Lazy Data.

Deprecated since version 3.2.0: Instead use math.isclose(). For example, rather than calling approx_equal(a, b, max_abs, max_rel), replace with math.isclose(a, b, rel_tol=max_rel, abs_tol=max_abs) (the tolerance arguments of isclose() are keyword-only). Note that isclose() will return True if the actual error equals the maximum, whereas util.approx_equal() will return False.
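
For instance, a minimal sketch of the suggested replacement using the standard-library math.isclose():

import math

# The difference (5e-11) is within both tolerances, so the result is True.
math.isclose(1.0, 1.0 + 5e-11, rel_tol=1e-10, abs_tol=1e-10)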

iris.util.array_equal(array1, array2, withnans=False)[source]#

Return whether two arrays have the same shape and elements.

Parameters:
  • array1 (arraylike) – Args to be compared, normalised if necessary with np.asarray().

  • array2 (arraylike) – Args to be compared, normalised if necessary with np.asarray().

  • withnans (bool, default=False) – When unset (default), the result is False if either input contains NaN points. This is the normal floating-point arithmetic result. When set, return True if inputs contain the same value in all elements, including any NaN values.

Notes

This provides much the same functionality as numpy.array_equal(), but with additional support for arrays of strings and NaN-tolerant operation.

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
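
A brief illustrative sketch; the expected results (shown as comments) follow the behaviour described above:

import numpy as np
from iris.util import array_equal

a = np.array([1.0, np.nan, 3.0])
array_equal(a, a.copy())                 # False: NaN points never compare equal
array_equal(a, a.copy(), withnans=True)  # True: matching NaN positions accepted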

iris.util.between(lh, rh, lh_inclusive=True, rh_inclusive=True)[source]#

Provide a convenient way of defining a 3-element inequality.

Such as a < number < b.

Parameters:
  • lh – The left hand element of the inequality.

  • rh – The right hand element of the inequality.

  • lh_inclusive (bool, default=True) – Affects the left hand comparison operator to use in the inequality. True for <=, False for <. Defaults to True.

  • rh_inclusive (bool, default=True) – Same as lh_inclusive but for right hand operator.

Examples

between_3_and_6 = between(3, 6)
for i in range(10):
   print(i, between_3_and_6(i))

between_3_and_6 = between(3, 6, rh_inclusive=False)
for i in range(10):
   print(i, between_3_and_6(i))

Notes

This function does maintain laziness when called; it doesn’t realise data. See more at Real and Lazy Data.

iris.util.broadcast_to_shape(array, shape, dim_map, chunks=None)[source]#

Broadcast an array to a given shape.

Each dimension of the array must correspond to a dimension in the given shape. The result is a read-only view (see numpy.broadcast_to()). If you need to write to the resulting array, make a copy first.

Parameters:
  • array (numpy.ndarray-like) – An array to broadcast.

  • shape (list, tuple etc) – The shape the array should be broadcast to.

  • dim_map (list, tuple etc) – A mapping of the dimensions of array to their corresponding element in shape. dim_map must be the same length as the number of dimensions in array. Each element of dim_map corresponds to a dimension of array and its value provides the index in shape which the dimension of array corresponds to, so the first element of dim_map gives the index of shape that corresponds to the first dimension of array etc.

  • chunks (tuple, optional) – If the source array is a dask.array.Array and a value is provided, then the result will use these chunks instead of the same chunks as the source array. Setting chunks explicitly as part of broadcast_to_shape is more efficient than rechunking afterwards. See also dask.array.broadcast_to(). Note that the values provided here will only be used along dimensions that are new on the result or have size 1 on the source array.

Examples

Broadcasting an array of shape (2, 3) to the shape (5, 2, 6, 3) where the first dimension of the array corresponds to the second element of the desired shape and the second dimension of the array corresponds to the fourth element of the desired shape:

a = np.array([[1, 2, 3], [4, 5, 6]])
b = broadcast_to_shape(a, (5, 2, 6, 3), (1, 3))

Broadcasting an array of shape (48, 96) to the shape (96, 48, 12):

# a is an array of shape (48, 96)
result = broadcast_to_shape(a, (96, 48, 12), (1, 0))

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

iris.util.clip_string(the_str, clip_length=70, rider='...')[source]#

Return clipped version of the string based on the specified clip length.

Return a clipped version of the string based on the specified clip length and whether or not any graceful clip points can be found.

If the string to be clipped is shorter than the specified clip length, the original string is returned.

If the string is longer than the clip length, a graceful point (a space character) after the clip length is searched for. If a graceful point is found the string is clipped at this point and the rider is added. If no graceful point can be found, then the string is clipped exactly where the user requested and the rider is added.

Parameters:
  • the_str (str) – The string to be clipped.

  • clip_length (int, default=70) – The length in characters that the input string should be clipped to. Defaults to a preconfigured value if not specified.

  • rider (str, default="...") – A series of characters appended at the end of the returned string to show it has been clipped. Defaults to a preconfigured value if not specified.

Returns:

The string clipped to the required length with a rider appended. If the clip length was greater than the original string, the original string is returned unaltered.

Return type:

str

Notes

This function does maintain laziness when called; it doesn’t realise data. See more at Real and Lazy Data.
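
A short illustrative sketch; the exact clip point depends on where a space falls after the requested length:

from iris.util import clip_string

clip_string("short title", clip_length=70)   # returned unaltered: shorter than 70
long_str = "a very long description " * 10
clip_string(long_str, clip_length=40)        # clipped at a space after 40 chars, ends with "..."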

iris.util.column_slices_generator(full_slice, ndims)[source]#

Return a dictionary mapping old data dimensions to new.

Given a full slice that may contain tuples, return a dictionary mapping old data dimensions to new, and a generator which gives the successive slices needed to index correctly (across columns).

This routine deals with the special functionality for tuple-based indexing, e.g. [0, (3, 5), :, (1, 6, 8)], by first providing a slice which takes out the non-tuple slices, i.e. [0, :, :, :], then subsequently iterating through each of the tuples, taking out the appropriate slices, i.e. [(3, 5), :, :] followed by [:, :, (1, 6, 8)].

This method was developed as numpy does not support the direct approach of [(3, 5), : , (1, 6, 8)] for column based indexing.

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
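
A hedged usage sketch, assuming the two return values unpack as the dimension mapping and the slice generator described above:

from iris.util import column_slices_generator

# Tuple-based indexing of a 4-dimensional array, as in the description above.
full_slice = (0, (3, 5), slice(None), (1, 6, 8))
dim_map, slices = column_slices_generator(full_slice, 4)
for sub_slice in slices:
    print(sub_slice)   # the successive slices to apply, one after another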

iris.util.create_temp_filename(suffix='')[source]#

Return a temporary file name.

Parameters:

suffix (str, optional, default="") – Filename extension.

iris.util.delta(ndarray, dimension, circular=False)[source]#

Calculate the difference between values along a given dimension.

Parameters:
  • ndarray – The array over which to do the difference.

  • dimension – The dimension over which to do the difference on ndarray.

  • circular (bool, default=False) –

    If not False, return n results in the requested dimension, with the delta between the last and first element included in the result; otherwise the result will be of length n-1 (where n is the length of ndarray in the given dimension’s direction).

    If circular is numeric then the value of circular will be added to the last element of the given dimension if the last element is negative, otherwise the value of circular will be subtracted from the last element.

    The example below illustrates the process:

    original array              -180, -90,  0,    90
    delta (with circular=360):    90,  90, 90, -270+360
    

Notes

Note

The difference algorithm implemented is forward difference:

>>> import numpy as np
>>> import iris.util
>>> original = np.array([-180, -90, 0, 90])
>>> iris.util.delta(original, 0)
array([90, 90, 90])
>>> iris.util.delta(original, 0, circular=360)
array([90, 90, 90, 90])

Note

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

iris.util.demote_dim_coord_to_aux_coord(cube, name_or_coord)[source]#

Demote a dimension coordinate on the cube to an auxiliary coordinate.

The DimCoord is demoted to an auxiliary coordinate on the cube. The dimension of the cube that was associated with the DimCoord becomes anonymous. The class of the coordinate is left as DimCoord; it is not recast as an AuxCoord instance.

Parameters:
  • cube (iris.cube.Cube) – The cube containing the dimension coordinate.

  • name_or_coord – The coordinate, or the name of the coordinate, to demote.

Examples

>>> print(cube)
air_temperature / (K)               (time: 240; latitude: 37; longitude: 49)
    Dimension coordinates:
        time                             x              -              -
        latitude                         -              x              -
        longitude                        -              -              x
    Auxiliary coordinates:
        forecast_period                  x              -              -
        year                             x              -              -
>>> demote_dim_coord_to_aux_coord(cube, "time")
>>> print(cube)
air_temperature / (K)               (-- : 240; latitude: 37; longitude: 49)
    Dimension coordinates:
        latitude                        -              x              -
        longitude                       -              -              x
    Auxiliary coordinates:
        forecast_period                 x              -              -
        time                            x              -              -
        year                            x              -              -

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

iris.util.describe_diff(cube_a, cube_b, output_file=None)[source]#

Print the differences that prevent compatibility between two cubes.

Print the differences that prevent compatibility between two cubes, as defined by iris.cube.Cube.is_compatible().

Parameters:
  • cube_a – An instance of iris.cube.Cube or iris.cube.CubeMetadata.

  • cube_b – An instance of iris.cube.Cube or iris.cube.CubeMetadata.

  • output_file (optional) – A file or file-like object to receive output. Defaults to sys.stdout.

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

Note

Compatibility does not guarantee that two cubes can be merged. Instead, this function is designed to provide a verbose description of the differences in metadata between two cubes. Determining whether two cubes will merge requires additional logic that is beyond the scope of this function.

See also

iris.cube.Cube.is_compatible

Check if a Cube is compatible with another.
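
A minimal usage sketch, assuming cube_a and cube_b are two cubes already loaded:

import sys
from iris.util import describe_diff

# Print any metadata differences preventing compatibility to stdout.
describe_diff(cube_a, cube_b)

# Or direct the report to another file-like object.
describe_diff(cube_a, cube_b, output_file=sys.stderr)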

iris.util.equalise_attributes(cubes)[source]#

Delete cube attributes that are not identical over all cubes in a group.

This function deletes any attributes which are not the same for all the given cubes. The cubes will then have identical attributes, and the removed attributes are returned. The given cubes are modified in-place.

Parameters:

cubes (iterable of iris.cube.Cube) – A collection of cubes to compare and adjust.

Returns:

A list of dicts holding the removed attributes.

Return type:

list

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
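
A brief sketch, assuming cubes is an iterable of cubes whose attributes differ only in, say, a 'history' entry:

from iris.util import equalise_attributes

removed = equalise_attributes(cubes)   # the cubes are modified in place
# `removed` is a list of dicts, one per cube, holding the deleted attributes.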

iris.util.file_is_newer_than(result_path, source_paths)[source]#

Determine if the ‘result’ file was modified last.

Return whether the ‘result’ file has a later modification time than all of the ‘source’ files.

If a stored result depends entirely on known ‘sources’, it need only be re-built when one of them changes. This function can be used to test that by comparing file timestamps.

Parameters:
  • result_path (str) – The filepath of a file containing some derived result data.

  • source_paths (str or iterable of str) – The path(s) to the original datafiles used to make the result. May include wildcards and ‘~’ expansions (like Iris load paths), but not URIs.

Returns:

True if all the sources are older than the result, else False. If any of the file paths describes no existing files, an exception will be raised.

Return type:

bool

Notes

Note

There are obvious caveats to using file timestamps for this, as correct usage depends on how the sources might change. For example, a file could be replaced by one of the same name, but an older timestamp.

If wildcards and ‘~’ expansions are used, this introduces even more uncertainty, as then you cannot even be sure that the resulting list of file names is the same as the originals. For example, some files may have been deleted or others added.

Note

The result file may often be a pickle file. In that case, it also depends on the relevant module sources, so extra caution is required. Ideally, an additional check on iris.__version__ is advised.
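
An illustrative sketch with hypothetical file paths; rebuild_result() stands in for whatever regenerates the derived data:

from iris.util import file_is_newer_than

# Rebuild the cached result only if any of the source files have changed.
if not file_is_newer_than("result.nc", ["sources/*.pp", "~/extra_source.pp"]):
    rebuild_result()   # hypothetical rebuild step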

iris.util.find_discontiguities(cube, rel_tol=1e-05, abs_tol=1e-08)[source]#

Identify spatial discontiguities.

Searches the ‘x’ and ‘y’ coords on the cube for discontiguities in the bounds array, returned as a boolean array (True for all cells which are discontiguous with the cell immediately above them or to their right).

Parameters:
  • cube (iris.cube.Cube) – The cube to be checked for discontinuities in its ‘x’ and ‘y’ coordinates. These coordinates must be 2D.

  • rel_tol (float, default=1e-5) – The relative equality tolerance to apply in coordinate bounds checking.

  • abs_tol (float, default=1e-8) – The absolute value tolerance to apply in coordinate bounds checking.

Returns:

Boolean True/false map of which cells in the cube XY grid have discontiguities in the coordinate points array.

This can be used as the input array for iris.util.mask_cube().

Return type:

numpy.ndarray

Examples

# Find any unknown discontiguities in your cube's x and y arrays:
discontiguities = iris.util.find_discontiguities(cube)

# Pass the resultant boolean array to `iris.util.mask_cube`
# with a cube slice; this will use the boolean array to mask
# any discontiguous data points before plotting:
masked_cube_slice = iris.util.mask_cube(cube[0], discontiguities)

# Plot the masked cube slice:
iplt.pcolormesh(masked_cube_slice)

Notes

This function does not maintain laziness when called; it realises data. See more at Real and Lazy Data.

iris.util.format_array(arr)[source]#

Return the given array formatted as a string.

Returns the given array as a string, using the python builtin str function on a piecewise basis.

Useful for xml representation of arrays.

For customisations, use the numpy.core.arrayprint directly.

Notes

This function does maintain laziness when called; it doesn’t realise data. See more at Real and Lazy Data.

iris.util.guess_coord_axis(coord)[source]#

Return a “best guess” axis name of the coordinate.

Heuristic categorisation of the coordinate into either label ‘T’, ‘Z’, ‘Y’, ‘X’ or None.

Parameters:

coord – The iris.coords.Coord.

Return type:

{‘T’, ‘Z’, ‘Y’, ‘X’} or None.

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

The guess_coord_axis behaviour can be skipped by setting the ignore_axis property on coord to True.
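
For instance, a short sketch using a latitude coordinate; the guessed label follows the heuristics applied to the coordinate's metadata:

from iris.coords import DimCoord
from iris.util import guess_coord_axis

latitude = DimCoord([0.0, 10.0, 20.0], standard_name="latitude", units="degrees")
guess_coord_axis(latitude)   # 'Y'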

iris.util.is_masked(array)[source]#

Equivalent to numpy.ma.is_masked(), but works for both lazy AND realised arrays.

Parameters:

array (numpy.ndarray or dask.array.Array) – The array to be checked for masks.

Returns:

Whether or not the array has any masks.

Return type:

bool

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
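
A brief illustration; expected results are shown as comments:

import numpy as np
from iris.util import is_masked

is_masked(np.ma.masked_array([1, 2, 3], mask=[False, True, False]))   # True
is_masked(np.arange(3))                                               # False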

iris.util.is_regular(coord)[source]#

Determine if the given coord is regular.

Notes

This function does not maintain laziness when called; it realises data. See more at Real and Lazy Data.

iris.util.mask_cube(cube, points_to_mask, in_place=False, dim=None)[source]#

Mask any cells in the cube’s data array.

Masks any cells in the cube’s data array which correspond to cells marked True (or non zero) in points_to_mask. points_to_mask may be specified as a numpy.ndarray, dask.array.Array, iris.coords.Coord or iris.cube.Cube, following the same broadcasting approach as cube arithmetic (see Cube Maths).

Parameters:
  • cube (iris.cube.Cube) – Cube containing data that requires masking.

  • points_to_mask (numpy.ndarray, dask.array.Array, iris.coords.Coord or iris.cube.Cube) – Specifies booleans (or ones and zeros) indicating which points will be masked.

  • in_place (bool, default=False) – If True, masking is applied to the input cube. Otherwise a copy is masked and returned.

  • dim (int, optional) – If points_to_mask is a coord which does not exist on the cube, specify the dimension to which it should be mapped.

Returns:

A cube whose data array is masked at points specified by points_to_mask.

Return type:

iris.cube.Cube

Notes

If either cube or points_to_mask is lazy, the result will be lazy.

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
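
A small usage sketch, assuming cube is a cube that has already been loaded; the boolean array is built from the cube's own data purely for illustration:

from iris.util import mask_cube

# Mask every negative data point; points_to_mask broadcasts like cube arithmetic.
points_to_mask = cube.data < 0
masked = mask_cube(cube, points_to_mask)           # returns a masked copy
mask_cube(cube, points_to_mask, in_place=True)     # or mask the original in place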

iris.util.mask_cube_from_shapefile(cube, shape, minimum_weight=0.0, in_place=False)[source]#

Take a shape object and mask all points in a cube that do not touch it.

Finds the overlap between the shape and the cube in 2D xy space and masks out any cells whose percentage overlap with the shape is below the set minimum. The default behaviour is to count any overlap between shape and cell as valid.

Parameters:
  • cube (Cube object) – The Cube object to be masked. Must be singular, rather than a CubeList.

  • shape (Shapely.Geometry object) – A single shape of the area to remain unmasked on the cube. If it is a line object of some kind, then minimum_weight will be ignored, because you cannot compare the area of a 1D line with a 2D cell.

  • minimum_weight (float, default=0.0) – A number between 0 and 1 describing the fraction of a cube cell’s area that the shape must overlap for the cell to remain unmasked.

  • in_place (bool, default=False) – Whether to mask the cube in-place or return a newly masked cube. Defaults to False.

Returns:

A masked version of the input cube, if in_place is False.

Return type:

iris.Cube

See also

mask_cube()

Mask any cells in the cube’s data array.

Notes

This function allows masking a cube with any cartopy projection by a shape object, most commonly from Natural Earth Shapefiles via cartopy. To mask a cube from a shapefile, both must first be on the same coordinate system. Shapefiles are mostly on a lat/lon grid with a projection very similar to GeogCS. The shapefile is projected to the coord system of the cube using cartopy, then each cell is compared to the shapefile to determine overlap and populate a True/False array. This array is then used to mask the cube using the iris.util.mask_cube() function. This uses numpy arithmetic logic for broadcasting, so you may encounter unexpected results if your cube has other dimensions of the same length as the x/y dimensions.

Examples

>>> import shapely
>>> from iris.util import mask_cube_from_shapefile
>>> cube = iris.load_cube(iris.sample_data_path("E1_north_america.nc"))
>>> shape = shapely.geometry.box(-100,30, -80,40) # box between 30N-40N 100W-80W
>>> masked_cube = mask_cube_from_shapefile(cube, shape)

iris.util.monotonic(array, strict=False, return_direction=False)[source]#

Return whether the given 1d array is monotonic.

Note that the array must not contain missing data.

Parameters:
  • strict (bool, default=False) – Flag to enable strict monotonic checking.

  • return_direction (bool, default=False) – Flag to change return behaviour to return (monotonic_status, direction). Direction will be 1 for positive or -1 for negative. The direction is meaningless if the array is not monotonic.

Returns:

Whether the array was monotonic. If the return_direction flag was given then the returned value will be: (monotonic_status, direction).

Return type:

bool

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
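
A brief illustration; expected results are shown as comments:

import numpy as np
from iris.util import monotonic

monotonic(np.array([1, 2, 2, 3]))                       # True: non-decreasing
monotonic(np.array([1, 2, 2, 3]), strict=True)          # False: a repeated value
monotonic(np.array([3, 2, 1]), return_direction=True)   # (True, -1)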

iris.util.new_axis(src_cube, scalar_coord=None, expand_extras=())[source]#

Create a new axis as the leading dimension of the cube.

Create a new axis as the leading dimension of the cube, promoting a scalar coordinate if specified.

Parameters:
  • src_cube (iris.cube.Cube) – Source cube on which to generate a new axis.

  • scalar_coord (iris.coord.Coord or str, optional) – Scalar coordinate to promote to a dimension coordinate.

  • expand_extras (iterable, optional) – Auxiliary coordinates, ancillary variables and cell measures which will be expanded so that they map to the new dimension as well as the existing dimensions.

Returns:

A new iris.cube.Cube instance with one extra leading dimension (length 1). Chosen auxiliary coordinates, cell measures and ancillary variables will also be given an additional dimension, associated with the leading dimension of the cube.

Return type:

iris.cube.Cube

Examples

>>> cube.shape
(360, 360)
>>> ncube = iris.util.new_axis(cube, 'time')
>>> ncube.shape
(1, 360, 360)

Notes

This function does maintain laziness when called; it doesn’t realise data. See more at Real and Lazy Data.

iris.util.points_step(points)[source]#

Determine whether points has a regular step.

Parameters:

points (numeric, array-like) – The sequence of values to check for a regular difference.

Returns:

A tuple containing the average difference between values, and whether the difference is regular.

Return type:

numeric, bool

Notes

This function does not maintain laziness when called; it realises data. See more at Real and Lazy Data.
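
A brief illustration; expected results are shown as comments:

import numpy as np
from iris.util import points_step

points_step(np.array([1.0, 2.0, 3.0, 4.0]))   # (1.0, True): constant step of 1.0
points_step(np.array([1.0, 2.0, 4.0]))        # (1.5, False): step is not constant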

iris.util.promote_aux_coord_to_dim_coord(cube, name_or_coord)[source]#

Promote an auxiliary coordinate to a dimension coordinate on the cube.

This AuxCoord must be associated with a single cube dimension. If the AuxCoord is associated with a dimension that already has a DimCoord, that DimCoord gets demoted to an AuxCoord.

Parameters:
  • cube (iris.cube.Cube) – The cube containing the auxiliary coordinate.

  • name_or_coord – The coordinate, or the name of the coordinate, to promote.

Examples

>>> print(cube)
air_temperature / (K)               (time: 240; latitude: 37; longitude: 49)
    Dimension coordinates:
        time                             x              -              -
        latitude                         -              x              -
        longitude                        -              -              x
    Auxiliary coordinates:
        forecast_period                  x              -              -
        year                             x              -              -
>>> promote_aux_coord_to_dim_coord(cube, "year")
>>> print(cube)
air_temperature / (K)               (year: 240; latitude: 37; longitude: 49)
    Dimension coordinates:
        year                             x              -              -
        latitude                         -              x              -
        longitude                        -              -              x
    Auxiliary coordinates:
        forecast_period                  x              -              -
        time                             x              -              -

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

iris.util.regular_points(zeroth, step, count)[source]#

Make an array of regular points.

Create an array of count points, starting at zeroth + step and adding step each time. The result is float32 if this gives a sufficiently regular array (as tested with points_step), and float64 if not.

Parameters:
  • zeroth (number) – The value prior to the first point value.

  • step (number) – The numeric difference between successive point values.

  • count (number) – The number of point values.

Notes

This function does maintain laziness when called; it doesn’t realise data. See more at Real and Lazy Data.
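
A short sketch; the expected values follow the description above:

from iris.util import regular_points

# Ten regularly spaced values, starting one step above zeroth: 10.0, 20.0, ..., 100.0
regular_points(0.0, 10.0, 10)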

iris.util.regular_step(coord)[source]#

Return the regular step from a coord or fail.

Notes

This function does not maintain laziness when called; it realises data. See more at Real and Lazy Data.
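
A brief sketch illustrating this together with is_regular() above; expected results are shown as comments:

from iris.coords import DimCoord
from iris.util import is_regular, regular_step

coord = DimCoord([10.0, 20.0, 30.0], standard_name="latitude", units="degrees")
is_regular(coord)     # True: the points are evenly spaced
regular_step(coord)   # 10.0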

iris.util.reverse(cube_or_array, coords_or_dims)[source]#

Reverse the cube or array along the given dimensions.

Parameters:
  • cube_or_array (iris.cube.Cube or numpy.ndarray) – The cube or array to reverse.

  • coords_or_dims (int, str, iris.coords.Coord or sequence of these) – Identify one or more dimensions to reverse. If cube_or_array is a numpy array, use int or a sequence of ints, as in the examples below. If cube_or_array is a Cube, a Coord or coordinate name (or sequence of these) may be specified instead.

Examples

>>> import numpy as np
>>> a = np.arange(24).reshape(2, 3, 4)
>>> print(a)
[[[ 0  1  2  3]
  [ 4  5  6  7]
  [ 8  9 10 11]]

 [[12 13 14 15]
  [16 17 18 19]
  [20 21 22 23]]]
>>> print(reverse(a, 1))
[[[ 8  9 10 11]
  [ 4  5  6  7]
  [ 0  1  2  3]]

 [[20 21 22 23]
  [16 17 18 19]
  [12 13 14 15]]]
>>> print(reverse(a, [1, 2]))
[[[11 10  9  8]
  [ 7  6  5  4]
  [ 3  2  1  0]]

 [[23 22 21 20]
  [19 18 17 16]
  [15 14 13 12]]]

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

iris.util.rolling_window(a, window=1, step=1, axis=-1)[source]#

Make an ndarray with a rolling window of the last dimension.

Parameters:
  • a (array_like) – Array to add rolling window to.

  • window (int, default=1) – Size of rolling window.

  • step (int, default=1) – Size of step between rolling windows.

  • axis (int, default=-1) – Axis to take the rolling window over.

Returns:

Array that is a view of the original array with an added dimension of the size of the given window at axis + 1.

Return type:

array

Examples

>>> x = np.arange(10).reshape((2, 5))
>>> rolling_window(x, 3)
array([[[0, 1, 2], [1, 2, 3], [2, 3, 4]],
       [[5, 6, 7], [6, 7, 8], [7, 8, 9]]])

Calculate rolling mean of last dimension:

>>> np.mean(rolling_window(x, 3), -1)
array([[ 1.,  2.,  3.],
       [ 6.,  7.,  8.]])

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

iris.util.squeeze(cube)[source]#

Remove any dimension of length one.

Remove any dimension of length one. If it has an associated DimCoord or AuxCoord, this becomes a scalar coord.

Parameters:

cube (iris.cube.Cube) – Source cube to remove length 1 dimension(s) from.

Returns:

A new iris.cube.Cube instance without any dimensions of length 1.

Return type:

iris.cube.Cube

Examples

For example:

>>> cube.shape
(1, 360, 360)
>>> ncube = iris.util.squeeze(cube)
>>> ncube.shape
(360, 360)

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.

iris.util.unify_time_units(cubes)[source]#

Perform an in-place conversion of the time units.

Perform an in-place conversion of the time units of all time coords in the cubes in a given iterable. One common epoch is defined for each calendar found in the cubes to prevent units being defined with inconsistencies between epoch and calendar. During this process, all time coordinates have their data type converted to 64-bit floats to allow for smooth concatenation.

Each epoch is defined from the first suitable time coordinate found in the input cubes.

Parameters:

cubes – An iterable containing iris.cube.Cube instances.

Notes

This function maintains laziness when called; it does not realise data. See more at Real and Lazy Data.
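
A typical usage sketch, assuming the files matched by the hypothetical pattern contain time-dependent cubes that otherwise concatenate cleanly:

import iris
from iris.util import unify_time_units

cubes = iris.load("my_data_*.nc")       # hypothetical file pattern
unify_time_units(cubes)                 # in place: one common epoch per calendar
result = cubes.concatenate_cube()       # time units no longer block concatenation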