
iris.fileformats.pp

Provides UK Met Office Post Process (PP) format specific capabilities.

In this module:

iris.fileformats.pp.EARTH_RADIUS

A float constant giving the Earth radius, in metres, assumed when building coordinate systems for PP data.


class iris.fileformats.pp.PPField(header=None)[source]

A generic class for PP fields - not specific to a particular header release number.

A PPField instance can easily access the PP header “words” as attributes with some added useful capabilities:

for field in iris.fileformats.pp.load(filename):
    print(field.lbyr)
    print(field.lbuser)
    print(field.lbuser[0])
    print(field.lbtim)
    print(field.lbtim.ia)
    print(field.t1)
__getattr__(key)[source]

This method supports deferred attribute creation, a significant loading optimisation: attributes are only created on the instance when they are actually referenced.

When an ‘ordinary’ HEADER_DICT attribute is required, its associated header offset is used to look up the value(s) in the combined header longs-and-floats data cache. The attribute is then set to this value on the instance, so future lookups for it bypass the __getattr__ mechanism entirely.

When a ‘special’ HEADER_DICT attribute (leading underscore) is required, the header offset of its associated ‘ordinary’ (no leading underscore) counterpart is used to look up the value(s) in the same cache, and the ‘ordinary’ attribute is set on the instance. This is necessary because ‘special’ attributes provide convenience property functionality based on the attribute value, e.g. see ‘lbpack’ and ‘lbtim’. Note that, for ‘special’ attributes, the interface is via the ‘ordinary’ attribute, but the underlying value is stored within the ‘special’ attribute.
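
The deferred-creation mechanism can be sketched in plain Python. This is an illustrative toy, not the real PPField implementation; the HEADER_DICT contents and the raw-cache layout here are invented for the example:

```python
class LazyHeader:
    """Sketch of deferred attribute creation via __getattr__.

    Header words live in a raw cache; an attribute is only materialised
    (and set on the instance) the first time it is requested, so later
    accesses bypass __getattr__ entirely.
    """

    # Maps attribute name -> offset into the raw header cache (invented values).
    HEADER_DICT = {"lbyr": 0, "lbmon": 1}

    def __init__(self, raw_header):
        self._raw = raw_header

    def __getattr__(self, key):
        # Only called when normal attribute lookup fails.
        try:
            offset = self.HEADER_DICT[key]
        except KeyError:
            raise AttributeError(key) from None
        value = self._raw[offset]
        # Cache on the instance: future lookups skip __getattr__.
        setattr(self, key, value)
        return value

field = LazyHeader([2024, 7])
field.lbyr  # 2024, resolved via __getattr__ then cached
field.lbyr  # 2024, now found as a plain instance attribute
```

After the first access the value lives in the instance `__dict__`, so Python's normal attribute lookup finds it without calling `__getattr__` again.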

__repr__()[source]

Return a string representation of the PP field.

coord_system()[source]

Return a CoordSystem for this PPField.

Returns

Currently, a GeogCS or RotatedGeogCS.

copy()[source]

Returns a deep copy of this PPField.

Returns

A deep copy of this PPField.

core_data()[source]
save(file_handle)[source]

Save the PPField to the given file object (typically created with open()).

# to append the field to a file
with open(filename, 'ab') as fh:
    a_pp_field.save(fh)

# to overwrite/create a file
with open(filename, 'wb') as fh:
    a_pp_field.save(fh)

Note

The fields which are automatically calculated are: ‘lbext’, ‘lblrec’ and ‘lbuser[0]’. Some fields are not currently populated; these are: ‘lbegin’, ‘lbnrec’, ‘lbuser[1]’.

time_unit(time_unit, epoch='epoch')[source]
property calendar

Return the calendar of the field.

property data

The numpy.ndarray representing the multidimensional data of the PP file.

property lbcode
property lbpack
property lbproc
property lbtim
property stash

A property giving access to the associated STASH object, which supports comparison via __eq__.

abstract property t1
abstract property t2
property x_bounds
property y_bounds


A class to hold a single STASH code.

Create instances using:
>>> model = 1
>>> section = 2
>>> item = 3
>>> my_stash = iris.fileformats.pp.STASH(model, section, item)
Access the sub-components via:
>>> my_stash.model
1
>>> my_stash.section
2
>>> my_stash.item
3
String conversion results in the MSI format:
>>> print(iris.fileformats.pp.STASH(1, 16, 203))
m01s16i203

A STASH object can be compared directly to its string representation:
>>> iris.fileformats.pp.STASH(1, 0, 4) == 'm01s0i004'
True
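
The MSI formatting and string comparison can be mimicked with a small namedtuple subclass. This is a sketch only: the real STASH class also handles None components, validation, and loosely padded MSI strings (as in the 'm01s0i004' comparison above), which this toy does not:

```python
from collections import namedtuple

class MiniStash(namedtuple("MiniStash", "model section item")):
    """Sketch of MSI formatting and string comparison for a STASH-like code."""

    def __str__(self):
        # MSI format: m<model:2>s<section:2>i<item:3>, zero-padded.
        return "m{:02d}s{:02d}i{:03d}".format(self.model, self.section, self.item)

    def __eq__(self, other):
        # Allow exact-format comparison against the MSI string form.
        if isinstance(other, str):
            return str(self) == other
        return super().__eq__(other)

    # Defining __eq__ suppresses the inherited hash; restore tuple hashing.
    __hash__ = tuple.__hash__

str(MiniStash(1, 16, 203))            # 'm01s16i203'
MiniStash(1, 16, 203) == "m01s16i203" # True
```

Building on namedtuple keeps the `model`/`section`/`item` field aliases and tuple behaviour (indexing, `count`, `index`) that the real class exposes.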

class iris.fileformats.pp.STASH(model, section, item)[source]

Args:

  • model

    A positive integer less than 100, or None.

  • section

    A non-negative integer less than 100, or None.

  • item

    A positive integer less than 1000, or None.

static __new__(cls, model, section, item)[source]

Args:

  • model

    A positive integer less than 100, or None.

  • section

    A non-negative integer less than 100, or None.

  • item

    A positive integer less than 1000, or None.

count(value, /)

Return number of occurrences of value.

static from_msi(msi)[source]

Convert a STASH code MSI string to a STASH instance.

index(value, start=0, stop=9223372036854775807, /)

Return first index of value.

Raises ValueError if the value is not present.

lbuser3()[source]

Return the lbuser[3] value that this stash represents.

lbuser6()[source]

Return the lbuser[6] value that this stash represents.
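
Assuming the conventional PP header packing — lbuser[3] combining section and item as 1000*section + item, and lbuser[6] holding the model — the encoding these two methods expose can be sketched as follows (verify the exact convention against the iris source before relying on it):

```python
def stash_to_lbuser(model, section, item):
    """Sketch: encode a STASH (model, section, item) into the two PP header
    words exposed by STASH.lbuser3() and STASH.lbuser6().

    Assumes the conventional layout: lbuser[3] = 1000*section + item,
    lbuser[6] = model.
    """
    return 1000 * section + item, model

stash_to_lbuser(1, 16, 203)  # (16203, 1)
```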

property is_valid
item

Alias for field number 2

model

Alias for field number 0

section

Alias for field number 1


iris.fileformats.pp.as_fields(cube, field_coords=None, target=None)[source]

Use the PP saving rules (and any user rules) to convert a cube to an iterable of PP fields.

Args:

  • cube:

    An iris.cube.Cube.

Kwargs:

  • field_coords:

    List of 2 coords or coord names which are to be used for reducing the given cube into 2d slices, which will ultimately determine the x and y coordinates of the resulting fields. If None, the final two dimensions are chosen for slicing.

  • target:

    A filename or open file handle.
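
The field_coords reduction can be pictured without iris: every combination of leading-dimension indices selects one 2D (y, x) slice. A schematic sketch:

```python
from itertools import product

def index_2d_slices(shape):
    """Yield index tuples that each select one 2D slice over the final two
    dimensions of an array with the given shape, mirroring how a cube is
    reduced to 2D fields."""
    leading = shape[:-2]
    for idx in product(*(range(n) for n in leading)):
        yield idx + (slice(None), slice(None))

# A (time=2, level=3, y=4, x=5) cube yields 2 * 3 = 6 fields.
indices = list(index_2d_slices((2, 3, 4, 5)))
len(indices)  # 6
```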


iris.fileformats.pp.load(filename, read_data=False, little_ended=False)[source]

Return an iterator of PPFields given a filename.

Args:

  • filename - string of the filename to load.

Kwargs:

  • read_data - boolean

Flag whether or not the data should be read; if False, an empty data manager is provided which can subsequently load the data on demand. Default is False.

  • little_ended - boolean

    If True, file contains all little-ended words (header and data).

To iterate through all of the fields in a pp file:

for field in iris.fileformats.pp.load(filename):
    print(field)


iris.fileformats.pp.load_cubes(filenames, callback=None, constraints=None)[source]

Loads cubes from a list of pp filenames.

Args:

  • filenames - list of pp filenames to load

Kwargs:

  • constraints - a list of Iris constraints

  • callback - a function which can be passed on to

    iris.io.run_callback()

Note

The resultant cubes may not be in the order that they are in the file (order is not preserved when there is a field with orography references)


iris.fileformats.pp.load_pairs_from_fields(pp_fields)[source]

Convert an iterable of PP fields into an iterable of (iris.cube.Cube, PPField) pairs.

Args:

  • pp_fields:

    An iterable of PP fields.

Returns

An iterable of (iris.cube.Cube, PPField) pairs.

This capability can be used to filter out fields before they are passed to the load pipeline, and amend the cubes once they are created, using PP metadata conditions. Where this filtering removes a significant number of fields, the speed up to load can be significant:

>>> import iris
>>> from iris.fileformats.pp import load_pairs_from_fields
>>> filename = iris.sample_data_path('E1.2098.pp')
>>> filtered_fields = []
>>> for field in iris.fileformats.pp.load(filename):
...     if field.lbproc == 128:
...         filtered_fields.append(field)
>>> cube_field_pairs = load_pairs_from_fields(filtered_fields)
>>> for cube, field in cube_field_pairs:
...     cube.attributes['lbproc'] = field.lbproc
...     print(cube.attributes['lbproc'])
128

This capability can also be used to alter fields before they are passed to the load pipeline. Fields with out-of-specification header elements can be cleaned up this way before the cubes are created:

>>> filename = iris.sample_data_path('E1.2098.pp')
>>> cleaned_fields = list(iris.fileformats.pp.load(filename))
>>> for field in cleaned_fields:
...     if field.lbrel == 0:
...         field.lbrel = 3
>>> cubes_field_pairs = list(load_pairs_from_fields(cleaned_fields))


iris.fileformats.pp.save(cube, target, append=False, field_coords=None)[source]

Use the PP saving rules (and any user rules) to save a cube to a PP file.

Args:

  • cube:

    An iris.cube.Cube.

  • target:

    A filename or open file handle.

Kwargs:

  • append - Whether to start a new file afresh or add the cube(s)

    to the end of the file. Only applicable when target is a filename, not a file handle. Default is False.

  • field_coords - list of 2 coords or coord names which are to be used

    for reducing the given cube into 2d slices, which will ultimately determine the x and y coordinates of the resulting fields. If None, the final two dimensions are chosen for slicing.

See also iris.io.save(). Note that iris.save() is the preferred method of saving; it allows an iris.cube.CubeList or a sequence of cubes to be saved to a PP file.


iris.fileformats.pp.save_fields(fields, target, append=False)[source]

Save an iterable of PP fields to a PP file.

Args:

  • fields:

    An iterable of PP fields.

  • target:

    A filename or open file handle.

Kwargs:

  • append:

    Whether to start a new file afresh or add the cube(s) to the end of the file. Only applicable when target is a filename, not a file handle. Default is False.

See also iris.io.save().


iris.fileformats.pp.save_pairs_from_cube(cube, field_coords=None, target=None)[source]

Use the PP saving rules to convert a cube or iterable of cubes to an iterable of (2D cube, PP field) pairs.

Args:

  • cube:

    An iris.cube.Cube, or an iterable of cubes.

Kwargs:

  • field_coords:

    List of 2 coords or coord names which are to be used for reducing the given cube into 2d slices, which will ultimately determine the x and y coordinates of the resulting fields. If None, the final two dimensions are chosen for slicing.

  • target:

    A filename or open file handle.