v1.6 (26 Jan 2014)
This document explains the changes made to Iris for this release.
Showcase Feature - Back to the future …
>>> import iris
>>> from iris.coords import DimCoord
>>> iris.FUTURE.cell_datetime_objects = True
>>> coord = DimCoord([1, 2, 3], "time", units="hours since epoch")
>>> print([str(cell) for cell in coord.cells()])
['1970-01-01 01:00:00', '1970-01-01 02:00:00', '1970-01-01 03:00:00']
Note that either a datetime.datetime instance or a
netcdftime.datetime instance will be returned, depending on
the calendar of the time reference coordinate.
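For the standard (Gregorian) calendar case, the cell values shown above can be reproduced with the standard library alone. This is a sketch of the underlying conversion, not Iris code:

```python
from datetime import datetime, timedelta

# Reconstruct the datetime points from the example above for a
# standard (Gregorian) calendar: "hours since epoch".
epoch = datetime(1970, 1, 1)
points = [1, 2, 3]
cells = [epoch + timedelta(hours=p) for p in points]
print([str(cell) for cell in cells])
# ['1970-01-01 01:00:00', '1970-01-01 02:00:00', '1970-01-01 03:00:00']
```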
This capability makes it possible to express time constraints more naturally when the cell represents a datetime-like object.
# Ignore the 1st of January.
iris.Constraint(time=lambda cell: not (cell.point.month == 1 and cell.point.day == 1))
>>> print(iris.FUTURE)
Future(cell_datetime_objects=False)
>>> with iris.FUTURE.context(cell_datetime_objects=True):
...     # Code that expects to deal with datetime-like objects.
...     print(iris.FUTURE)
...
Future(cell_datetime_objects=True)
>>> print(iris.FUTURE)
Future(cell_datetime_objects=False)
Showcase Feature - Partial date/time …
The year, month, day, hour, minute, second and microsecond attributes of an
iris.time.PartialDateTime object may be fully or partially
specified for any given comparison.
This is particularly useful for time-based constraints when
iris.FUTURE.cell_datetime_objects is enabled; see the showcase
feature above for further details.
from iris.time import PartialDateTime

# Ignore the 1st of January.
iris.Constraint(time=lambda cell: cell != PartialDateTime(month=1, day=1))

# Constrain by a specific year.
iris.Constraint(time=PartialDateTime(year=2013))
Also see the User Guide Constraining on Time section for further commentary.
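The comparison semantics can be sketched in plain Python. The `PartialDate` class below is a hypothetical stand-in for iris.time.PartialDateTime, assuming that equality means "every specified field matches":

```python
from datetime import datetime

class PartialDate(object):
    """Hypothetical stand-in for iris.time.PartialDateTime: only the
    fields given to the constructor take part in comparisons."""

    def __init__(self, **fields):
        self.fields = fields

    def __eq__(self, other):
        # Equal when every specified field matches the other object.
        return all(getattr(other, name) == value
                   for name, value in self.fields.items())

    def __ne__(self, other):
        return not self == other

# New Year's Day matches (month=1, day=1) regardless of the year.
print(PartialDate(month=1, day=1) == datetime(2013, 1, 1))   # True
print(PartialDate(month=1, day=1) == datetime(2013, 1, 2))   # False
```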
GRIB loading supports latitude/longitude or Gaussian reduced grids for version 1 and version 2.
NAME loading supports vertical coordinates.
UM land/sea mask de-compression for Fieldsfiles and PP files.
Lateral boundary condition Fieldsfile support.
Staggered grid support for Fieldsfiles extended to type 6 (Arakawa C grid with v at poles).
Extended support for Fieldsfiles with grid codes 11, 26, 27, 28 and 29.
Interpreting cell methods from NAME.
GRIB2 export without forecast_period, enabling NAME to GRIB2.
Loading height levels from GRIB2.
iris.coord_categorisation.add_categorised_coord() now supports multi-dimensional coordinate categorisation.
Fieldsfiles and PP support for loading and saving of air potential temperature.
iris.experimental.regrid.regrid_weighted_curvilinear_to_rectilinear() regrids curvilinear point data to a target rectilinear grid using associated area weights.
Extended the capability of the NetCDF saver
iris.fileformats.netcdf.Saver.write() for fine-tuned control of a
netCDF4.Variable. It also allows multiple dimensions to be nominated as unlimited.
A new utility function to assist with caching, iris.util.file_is_newer_than(), described further below.
A new utility function to promote a scalar coordinate, iris.util.new_axis(), described further below.
Iris tests can now be run on systems where directory write permissions previously did not allow it. This is achieved by writing to the current working directory in such cases.
Support for 365 day calendar PP fields.
Added phenomenon translation between CF and GRIB2 for wind (from) direction.
PP files now retain the LBFC value on save, derived from the STASH attribute.
A New Utility Function to Assist With Caching
To assist with management of caching results to file, the new utility function
iris.util.file_is_newer_than() may be used to easily determine whether
a given cache file is newer (by modification time) than one or more other files.
Typically, caching is a means to circumvent the cost of repeating time-consuming processing, or to reap the benefit of fast-loading a pickled cube.
import cPickle

import iris
import iris.util

# Determine whether to load from the cache or source.
if iris.util.file_is_newer_than(cache_file, source_file):
    with open(cache_file, "rb") as fh:
        cube = cPickle.load(fh)
else:
    cube = iris.load_cube(source_file)
    # Perhaps perform some intensive processing ...
    # Create the cube cache.
    with open(cache_file, "wb") as fh:
        cPickle.dump(cube, fh)
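The semantics of the check can be sketched with the standard library. The `newer_than` function below is a hypothetical stand-in, assuming iris.util.file_is_newer_than() compares modification times as described above:

```python
import os

def newer_than(result_path, source_paths):
    """Hypothetical stand-in for iris.util.file_is_newer_than():
    True if result_path exists and its modification time is newer
    than that of every source file."""
    if not os.path.exists(result_path):
        return False
    result_mtime = os.path.getmtime(result_path)
    return all(os.path.getmtime(path) < result_mtime
               for path in source_paths)
```

With this check in place, the cache file is regenerated only when any of its source files has changed since the cache was written.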
The RMS Aggregator Supports Weights
The iris.analysis.RMS aggregator has been extended to allow the use of
weights via the new keyword argument weights.
For example, an RMS weighted cube collapse is performed as follows:
from iris.analysis import RMS

collapsed_cube = cube.collapsed("height", RMS, weights=weights)
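The arithmetic behind a weighted RMS can be sketched with NumPy, assuming the usual definition sqrt(sum(w * x**2) / sum(w)); iris.analysis.RMS is the supported implementation:

```python
import numpy as np

def weighted_rms(data, weights, axis=None):
    """Sketch of a weighted root-mean-square, assuming the usual
    definition sqrt(sum(w * x**2) / sum(w))."""
    data = np.asarray(data, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sqrt((weights * data ** 2).sum(axis=axis)
                   / weights.sum(axis=axis))

print(weighted_rms([3.0, 4.0], [1.0, 1.0]))  # sqrt(25 / 2)
```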
Equalise Cube Attributes
To assist with
iris.cube.Cube merging, the new experimental in-place function
iris.experimental.equalise_cubes.equalise_attributes() ensures
that a sequence of cubes contains a common set of attributes.
This attempts to smooth the merging process by ensuring that all candidate cubes have the same attributes.
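The equalisation can be sketched on plain attribute dictionaries. This is an assumed reading of the semantics (keep only attributes present with identical values in every cube), not the Iris implementation:

```python
def equalise_attributes(attribute_dicts):
    """Sketch of attribute equalisation: remove, in place, every
    attribute that is not present with an identical value in all
    of the given dicts."""
    common = dict(attribute_dicts[0])
    for attrs in attribute_dicts[1:]:
        common = {key: value for key, value in common.items()
                  if attrs.get(key) == value}
    for attrs in attribute_dicts:
        for key in list(attrs):
            if key not in common:
                del attrs[key]

a = {"source": "model", "run": "1"}
b = {"source": "model", "run": "2"}
equalise_attributes([a, b])
print(a, b)  # both reduced to {'source': 'model'}
```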
Masking a Collapsed Result by Missing-Data Tolerance
The result from collapsing masked cube data may now be completely
masked by providing an
mdtol missing-data tolerance keyword
argument to iris.cube.Cube.collapsed().
This tolerance provides a threshold that will completely mask the collapsed result whenever the fraction of missing data exceeds the provided tolerance.
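The masking behaviour can be sketched with NumPy masked arrays. This is an assumed reading of the mdtol semantics (mask a collapsed cell when its fraction of missing input exceeds the tolerance), not the Iris implementation:

```python
import numpy as np
import numpy.ma as ma

def collapse_mean_with_mdtol(data, axis, mdtol):
    """Sketch of mdtol behaviour: mask each collapsed cell whose
    fraction of missing input data exceeds mdtol."""
    result = ma.mean(data, axis=axis)
    missing_fraction = (ma.getmaskarray(data).sum(axis=axis)
                        / float(data.shape[axis]))
    return ma.masked_where(missing_fraction > mdtol, result)

data = ma.masked_invalid([[1.0, np.nan], [3.0, 4.0]])
strict = collapse_mean_with_mdtol(data, axis=0, mdtol=0.0)
lenient = collapse_mean_with_mdtol(data, axis=0, mdtol=0.5)
print(strict)   # the column containing missing data is masked
print(lenient)  # tolerated: mean of the remaining value
```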
Promote a Scalar Coordinate
The new utility function
iris.util.new_axis() creates a new cube with
a new leading dimension of size unity. If a scalar coordinate is provided, then
the scalar coordinate is promoted to be the dimension coordinate for the new
leading dimension.
Note that this function will load the data payload of the cube.
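The effect on the data payload can be sketched with NumPy alone (iris.util.new_axis additionally moves the scalar coordinate onto the new dimension):

```python
import numpy as np

# Sketch of what promotion adds: a new leading dimension of size one
# on the data payload.
data = np.arange(6).reshape(2, 3)
promoted = data[np.newaxis, ...]
print(data.shape, promoted.shape)  # (2, 3) (1, 2, 3)
```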
A New PEAK Aggregator Providing Spline Interpolation
For example, to calculate the peak over the time dimension:
from iris.analysis import PEAK

collapsed_cube = cube.collapsed("time", PEAK)
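The idea of interpolating for a peak that falls between samples can be illustrated with a simplified analogue: fitting a parabola through the three samples around the maximum. Iris's PEAK aggregator uses spline interpolation; this sketch is not its implementation:

```python
import numpy as np

def peak_estimate(x, y):
    """Simplified PEAK-style aggregation: estimate the peak value by
    fitting a parabola through the three samples around the maximum."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    i = int(np.argmax(y))
    if i == 0 or i == len(y) - 1:
        return y[i]                      # peak lies at the boundary
    a, b, c = np.polyfit(x[i - 1:i + 2], y[i - 1:i + 2], 2)
    x_peak = -b / (2.0 * a)
    return a * x_peak ** 2 + b * x_peak + c

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = -(x - 1.5) ** 2 + 5.0                # true peak value is 5.0
print(peak_estimate(x, y))               # ~5.0, between the samples
```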
iris.cube.Cube.rolling_window() has been extended to support masked arrays.
iris.cube.Cube.collapsed() now handles string coordinates.
Default LBUSER(2) to -99 for Fieldsfile and PP saving.
iris.util.monotonic() returns the correct direction.
File loaders correctly parse filenames containing colons.
ABF loader now correctly loads the ABF data payload once.
Support for 1D array
GRIB bounded level saving fix.
iris.analysis.cartography.project() now associates a coordinate system with the resulting target cube, where applicable.
iris.analysis.interpolate.linear() now retains a mask in the resulting cube.
iris.util.rolling_window() handling of masked arrays (degenerate masks) fixed.
An exception is no longer raised for any ellipsoid definition in NIMROD loading.
iris.cube.Cube.extract_by_trajectory() has been removed. Instead, use
iris.analysis.trajectory.interpolate().
iris.coords.Coord.cos() and iris.coords.Coord.sin() have been removed.
iris.coords.Coord.unit_converted() has been removed. Instead, make a copy of the coordinate using
iris.coords.Coord.copy() and then call the
iris.coords.Coord.convert_units() method of the new coordinate.
Deprecated iris.unit.Unit methods/properties have been removed.
As a result of deprecating
iris.cube.Cube.add_history() and removing the automatic appending of history by operations such as cube arithmetic, collapsing, and aggregating, the signatures of a number of functions within
iris.analysis.maths have been modified, along with that of iris.analysis.Aggregator.
The experimental ABF and ABL functionality has now been promoted to core functionality in iris.fileformats.abf.
Deprecated functions in iris.coord_categorisation have been removed.
When a cube is loaded from PP or GRIB and it has both time and forecast period coordinates, and the time coordinate has bounds, the forecast period coordinate will now also have bounds. These bounds will be aligned with the bounds of the time coordinate taking into account the forecast reference time. Also, the forecast period point will now be aligned with the time point.
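The alignment described above amounts to simple arithmetic; here is a minimal sketch with a hypothetical helper and times expressed in hours, not the Iris loading code:

```python
# Derive forecast-period bounds from time bounds and the forecast
# reference time: forecast_period = time - forecast_reference_time.
def forecast_period_bounds(time_bounds, reference_time):
    return [(lower - reference_time, upper - reference_time)
            for lower, upper in time_bounds]

time_bounds = [(6.0, 12.0), (12.0, 18.0)]   # hours since epoch
reference_time = 6.0                        # forecast reference time
print(forecast_period_bounds(time_bounds, reference_time))
# [(0.0, 6.0), (6.0, 12.0)]
```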
iris.cube.Cube.add_history() has been deprecated in favour of users modifying/creating the history metadata directly. This is because the automatic behaviour did not deliver a sufficiently complete, auditable history and often prevented the merging of cubes.
iris.util.broadcast_weights() has been deprecated and replaced by the new utility function iris.util.broadcast_to_shape().
The callback mechanism iris.run_callback has had its deprecation of return values revoked. The callback can now return cube instances as well as making in-place changes to the cube.