equimage package

Image processing module.

Submodules

equimage.helpers module

Image processing helpers.

class equimage.helpers.Container

Bases: object

An empty container class.

equimage.helpers.fpepsilon(dtype)

Return the distance between 1 and the nearest representable number of the input float class (the machine epsilon).

Parameters:

dtype (class) – A float class (numpy.float32 or numpy.float64).

Returns:

The distance between 1 and the nearest representable number of the input float class (the machine epsilon).

Return type:

float
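
Example (a minimal usage sketch, assuming equimage is installed):

    import numpy as np
    from equimage.helpers import fpepsilon

    # Machine epsilon: the gap between 1 and the next representable float.
    print(fpepsilon(np.float32))  # ~1.19e-07
    print(fpepsilon(np.float64))  # ~2.22e-16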

equimage.helpers.failsafe_divide(A, B)

Return A/B, ignoring errors (division by zero, …).

Parameters:
  • A (numpy.ndarray) – The numerator array.

  • B (numpy.ndarray) – The denominator array.

Returns:

The element-wise division A/B.

Return type:

numpy.ndarray
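
Example (a sketch; the value assigned to the zero-denominator element depends on the implementation, so only the finite elements are shown):

    import numpy as np
    from equimage.helpers import failsafe_divide

    A = np.array([1.0, 2.0, 3.0])
    B = np.array([2.0, 0.0, 4.0])
    C = failsafe_divide(A, B)  # No warning despite the division by zero.
    print(C[0], C[2])          # 0.5 0.75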

equimage.helpers.scale_pixels(image, source, target, cutoff=None)

Scale all pixels of the input image by the ratio target/source.

Wherever abs(source) < cutoff, set all channels to target.

Parameters:
  • image (numpy.ndarray) – The input image.

  • source (numpy.ndarray) – The source values for scaling (must be the same size as the input image).

  • target (numpy.ndarray) – The target values for scaling (must be the same size as the input image).

  • cutoff (float, optional) – Threshold for scaling. If None, defaults to equimage.helpers.fpepsilon(source.dtype).

Returns:

The scaled image.

Return type:

numpy.ndarray
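
Example (a minimal sketch; per the documentation, source and target are full-size arrays, so the per-pixel values are broadcast to the shape of the image):

    import numpy as np
    from equimage.helpers import scale_pixels

    rng = np.random.default_rng(0)
    image = rng.random((3, 16, 16))               # A small random RGB image.
    value = image.max(axis=0)                     # HSV value of each pixel.
    source = np.broadcast_to(value, image.shape)
    target = np.broadcast_to(np.sqrt(value), image.shape)  # A stretched value.

    # Scale each pixel by target/source; wherever abs(source) < cutoff,
    # all channels are set to target.
    scaled = scale_pixels(image, source, target)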

equimage.helpers.lookup(x, xlut, ylut, slut, nlut)

Linearly interpolate y = f(x) between the values ylut = f(xlut) of an evenly spaced look-up table.

Parameters:
  • x (float) – The input abscissa for interpolation.

  • xlut (numpy.ndarray) – The x values of the look-up table (must be evenly spaced).

  • ylut (numpy.ndarray) – The y values of the look-up table ylut = f(xlut).

  • slut (numpy.ndarray) – The slopes (ylut[1:]-ylut[:-1])/(xlut[1:]-xlut[:-1]) used for linear interpolation between the elements of ylut.

  • nlut (int) – The number of elements in the look-up table.

Returns:

The interpolated value y = f(x).

Return type:

float
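
Example (a minimal sketch for f(x) = x**2, with the slopes built as documented):

    import numpy as np
    from equimage.helpers import lookup

    nlut = 11
    xlut = np.linspace(0., 1., nlut)  # Evenly spaced abscissas.
    ylut = xlut**2                    # Tabulated f(x) = x**2.
    slut = (ylut[1:]-ylut[:-1])/(xlut[1:]-xlut[:-1])  # Interpolation slopes.

    y = lookup(0.25, xlut, ylut, slut, nlut)
    print(y)  # 0.065 (linear interpolation between f(0.2) and f(0.3)).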

equimage.helpers.at_least_3D(x)

Return a view on the input array with at least 3 dimensions (by prepending extra dimensions).

For example, for an input array x with shape (230, 450), returns a view with shape (1, 230, 450).

Parameters:

x (numpy.ndarray) – The input array.

Returns:

A view on the input array with at least 3 dimensions.

Return type:

numpy.ndarray
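
Example (a minimal sketch):

    import numpy as np
    from equimage.helpers import at_least_3D

    x = np.zeros((230, 450))  # A 2D array (e.g., a grayscale image).
    y = at_least_3D(x)        # A view with a prepended axis.
    print(y.shape)            # (1, 230, 450)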

equimage.image module

Image class.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“Image”.

class equimage.image.Image(image, channels=0, colorspace='sRGB', colormodel='RGB')

Bases: NDArrayOperatorsMixin and the MixinImage classes of the equimage submodules.

Image class.

The image is stored as self.image, a numpy.ndarray with dtype numpy.float32 or numpy.float64. Color images are represented as arrays with shape (3, height, width) and grayscale images as arrays with shape (1, height, width). The leading axis spans the color channels, and the last two the height and width of the image.

The class embeds colorspace and colormodel attributes for the color space and model of the image.

The colorspace attribute can be:

  • “lRGB” for the linear RGB color space.

  • “sRGB” for the sRGB color space.

  • “CIELab” for the CIELab color space.

  • “CIELuv” for the CIELuv color space.

In the lRGB and sRGB color spaces, the colormodel attribute can be:

  • “gray”: grayscale image with one single channel within [0, 1].

  • “RGB”: the 3 channels of the image are the red, green, and blue levels within [0, 1].

  • “HSV”: the 3 channels of the image are the HSV hue, saturation and value within [0, 1].

  • “HSL”: the 3 channels of the image are the HSL hue, saturation and lightness within [0, 1].

In the CIELab color space, the colormodel attribute can be:

  • “Lab”: the 3 channels of the image are the CIELab components L*/100, a*/100 and b*/100. The lightness L*/100 fits within [0, 1], but a* and b* are signed and not bounded.

  • “Lch”: the 3 channels of the image are the CIELab components L*/100, c*/100 and h*/(2π). The lightness L*/100 and the reduced hue angle h*/(2π) fit within [0, 1], but the chroma c* is not bounded by 1.

In the CIELuv color space, the colormodel attribute can be:

  • “Luv”: the 3 channels of the image are the CIELuv components L*/100, u*/100 and v*/100. The lightness L*/100 fits within [0, 1], but u* and v* are signed and not bounded.

  • “Lch”: the 3 channels of the image are the CIELuv components L*/100, c*/100 and h*/(2π). The lightness L*/100 and the reduced hue angle h*/(2π) fit within [0, 1], but the chroma c* is not bounded by 1.

  • “Lsh”: the 3 channels of the image are the CIELuv components L*/100, s*/100 and h*/(2π). The lightness L*/100 and the reduced hue angle h*/(2π) fit within [0, 1], but the saturation s* = c*/L* is not bounded by 1.

The default color space is sRGB and the default color model is RGB.

The dtype of the images (numpy.float32 or numpy.float64) can be set with params.set_image_type().

__init__(image, channels=0, colorspace='sRGB', colormodel='RGB')

Initialize a new Image object with the input image.

Parameters:
  • image (numpy.ndarray or Image) – The input image.

  • channels (int, optional) – The position of the channel axis for color images (default 0).

  • colorspace (str, optional) – The color space of the image (default “sRGB”). Can be “lRGB” (linear RGB color space), “sRGB” (sRGB color space), “CIELab” (CIELab colorspace), or “CIELuv” (CIELuv color space).

  • colormodel (str, optional) – The color model of the image (default “RGB”). In the lRGB/sRGB color spaces, can be “RGB” (RGB color model), “HSV” (HSV color model), “HSL” (HSL color model) or “gray” (grayscale image). In the CIELab color space, can be “Lab” (L*a*b* color model) or “Lch” (L*c*h* color model). In the CIELuv color space, can be “Luv” (L*u*v* color model), “Lch” (L*c*h* color model) or “Lsh” (L*s*h* color model).
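
Example (a minimal construction sketch; the array values are illustrative, and channels = -1 assumes numpy-style negative indices; use channels = 2 otherwise):

    import numpy as np
    from equimage import Image

    # A small red image with the channel axis last (height, width, 3).
    array = np.zeros((32, 48, 3), dtype=np.float32)
    array[..., 0] = 1.

    im = Image(array, channels=-1)  # Tell the constructor where the channels sit.
    print(im.get_shape())           # (3, 32, 48): stored channel-first.
    print(im.get_color_space(), im.get_color_model())  # sRGB RGB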

newImage(image, **kwargs)

Return a new Image object with the input image (with, by default, the same color space and color model as self).

Parameters:
  • image (numpy.ndarray) – The input image.

  • colorspace (str, optional) – The color space of the image (default self.colorspace). Can be “lRGB” (linear RGB color space), “sRGB” (sRGB color space), “CIELab” (CIELab color space), or “CIELuv” (CIELuv color space).

  • colormodel (str, optional) – The color model of the image (default self.colormodel). In the lRGB/sRGB color spaces, can be “RGB” (RGB color model), “HSV” (HSV color model), “HSL” (HSL color model) or “gray” (grayscale image). In the CIELab color space, can be “Lab” (L*a*b* color model) or “Lch” (L*c*h* color model). In the CIELuv color space, can be “Luv” (L*u*v* color model), “Lch” (L*c*h* color model) or “Lsh” (L*s*h* color model).

Returns:

The new Image object.

Return type:

Image

copy()

Return a copy of the object.

Returns:

A copy of the object.

Return type:

Image

get_image(channels=0, copy=False)

Return the image data.

Parameters:
  • channels (int, optional) – The position of the channel axis (default 0).

  • copy (bool, optional) – If True, return a copy of the image data; if False (default), return a view.

Returns:

The image data.

Return type:

numpy.ndarray

get_shape()

Return the shape of the image data.

Returns:

(number of channels, height of the image in pixels, width of the image in pixels).

Return type:

tuple

get_size()

Return the width and height of the image.

Returns:

(width, height) of the image in pixels.

Return type:

tuple

get_nc()

Return the number of channels of the image.

Returns:

The number of channels of the image.

Return type:

int

get_color_space()

Return the color space of the image.

Returns:

The color space of the image.

Return type:

str

get_color_model()

Return the color model of the image.

Returns:

The color model of the image.

Return type:

str

int8()

Return the image as an array of 8-bit integers with shape (height, width, channels).

Warning

This method maps [0., 1.] onto [0, 255]. Not suitable for the CIELab and CIELuv color spaces!

Returns:

The image as an array of 8-bit integers with shape (height, width, channels).

Return type:

numpy.ndarray

int16()

Return the image as an array of 16-bit integers with shape (height, width, channels).

Warning

This method maps [0., 1.] onto [0, 65535]. Not suitable for the CIELab and CIELuv color spaces!

Returns:

The image as an array of 16-bit integers with shape (height, width, channels).

Return type:

numpy.ndarray

int32()

Return the image as an array of 32-bit integers with shape (height, width, channels).

Warning

This method maps [0., 1.] onto [0, 4294967295]. Not suitable for the CIELab and CIELuv color spaces!

Returns:

The image as an array of 32-bit integers with shape (height, width, channels).

Return type:

numpy.ndarray
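
Example (a sketch: export a [0, 1] image as 8-bit data, e.g. for saving with an external library; the exact integer dtype is left to the implementation):

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(0).random((3, 32, 48)))
    arr8 = im.int8()   # [0., 1.] mapped onto [0, 255].
    print(arr8.shape)  # (32, 48, 3): channel axis last.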

equimage.image_colors module

Color management.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“HSV_wheel”.

equimage.image_colors.HSV_wheel()

Return a HSV wheel as an Image object, to test color transformations.

Returns:

An Image object with a HSV wheel.

Return type:

Image

equimage.image_colors.parse_hue_kwargs(D, kwargs)

Parse hue keywords in the kwargs.

This function looks for the keywords ‘R’ (red, hue H = 0), ‘Y’ (yellow, H = 1/6), ‘G’ (green, H = 1/3), ‘C’ (cyan, H = 1/2), ‘B’ (blue, H = 2/3) and ‘M’ (magenta, H = 5/6) in the kwargs and returns the grid of hues and the corresponding values of the kwargs as numpy arrays. Whenever a keyword is missing, its value is replaced by the default, D. Additional points may be inserted in the grid by providing the keywords ‘RY’ (red-yellow, H = 1/12), ‘YG’ (yellow-green, H = 1/4), ‘GC’ (green-cyan, H = 5/12), ‘CB’ (cyan-blue, H = 7/12), ‘BM’ (blue-magenta, H = 3/4) and ‘MR’ (magenta-red, H = 11/12).

Parameters:
  • D (float) – The default value for the R/Y/G/C/B/M hues.

  • kwargs (dict) – The dictionary of kwargs.

Returns:

The grid of hues (numpy.ndarray), the corresponding keyword values (numpy.ndarray), and the remaining kwargs (with the used keys removed).

equimage.image_colors.interpolate_hue(hue, hgrid, param, interpolation)

Interpolate a parameter param defined on a grid of hues.

Parameters:
  • hue (numpy.ndarray) – The input hues at which the parameter must be interpolated.

  • hgrid (numpy.ndarray) – The grid of hues on which the parameter is defined.

  • param (numpy.ndarray) – The parameter on the grid of hues.

  • interpolation (str, optional) –

    The interpolation method:

    • ”nearest”: Nearest neighbor interpolation.

    • ”linear”: Linear spline interpolation.

    • ”cubic”: Cubic spline interpolation.

    • ”akima”: Akima spline interpolation (default).

Returns:

The parameter interpolated for all input hues.

Return type:

numpy.ndarray

class equimage.image_colors.MixinImage

Bases: object

To be included in the Image class.

is_grayscale_RGB()

Return True if a RGB image is actually a grayscale (same RGB channels), False otherwise.

Returns:

True if the RGB image is a grayscale (same RGB channels), False otherwise.

Return type:

bool

negative()

Return the negative of a RGB or grayscale image.

Returns:

The negative of the image.

Return type:

Image

grayscale(channel='L*', RGB=False)

Convert the selected channel of a RGB image into a grayscale image.

Parameters:
  • channel (str, optional) – The converted channel (“V” for the HSV value, “L’” for the HSL lightness, “L” for the luma, “Y” or “L*” for the luminance/lightness). Namely, the output grayscale image has the same value, HSL lightness, luma, or luminance/lightness as the original RGB image, depending on the selected channel.

  • RGB (bool, optional) – If True, return the grayscale as a RGB image (with identical R/G/B channels). If False (default), return the grayscale as a single channel image.

Returns:

The grayscale image.

Return type:

Image

neutralize_background(source, neutral=None, mode='additive')

Neutralize the background of a RGB image (turn a background color into gray).

Given a source background color (Rs, Gs, Bs), and a target neutral level N, this method transforms the RGB channels as:

  • R ← R-Rs+N

  • G ← G-Gs+N

  • B ← B-Bs+N

if mode = “additive”, or as:

  • R ← R*N/Rs

  • G ← G*N/Gs

  • B ← B*N/Bs

if mode = “multiplicative”, with N = max(Rs, Gs, Bs) by default. On output, the source background color becomes, therefore, the gray (N, N, N) color in both cases.

Parameters:
  • source (array_like) – The source background color (tuple/list/array of the Rs, Gs, Bs levels).

  • neutral (float, optional) – The target neutral level [max(Rs, Gs, Bs) if None (default)].

  • mode (str, optional) – The neutralization mode [“additive” (default) or “multiplicative”].

Returns:

The processed image.

Return type:

Image
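
Example (a sketch; the background color is illustrative). With the default neutral level N = max(0.12, 0.10, 0.08) = 0.12, the background color becomes the gray (0.12, 0.12, 0.12):

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(1).random((3, 32, 48)))
    out = im.neutralize_background(source=(0.12, 0.10, 0.08), mode="additive")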

color_balance(red=1.0, green=1.0, blue=1.0, neutral=0.0)

Adjust the color balance of a RGB image.

This method linearly scales the RGB channels as:

  • R ← red*(R-neutral)+neutral

  • G ← green*(G-neutral)+neutral

  • B ← blue*(B-neutral)+neutral

The neutral color thus remains unchanged.

Parameters:
  • red (float, optional) – The multiplier for the red channel (default 1).

  • green (float, optional) – The multiplier for the green channel (default 1).

  • blue (float, optional) – The multiplier for the blue channel (default 1).

  • neutral (float, optional) – The neutral level or color (default 0). Can be a scalar or a tuple/list/array of neutral (R, G, B) levels.

Returns:

The processed image.

Return type:

Image
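
Example (a sketch; the multipliers are illustrative). Warm the image by boosting the red and damping the blue channels; with neutral = 0 this is a plain per-channel multiplication:

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(2).random((3, 32, 48)))
    out = im.color_balance(red=1.05, green=1., blue=0.95)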

match_RGB(source, target, neutral=0.0)

Adjust the color balance of a RGB image to transform a source into a target color.

Given a source color (Rs, Gs, Bs), a target color (Rt, Gt, Bt), and a neutral level N, this method linearly scales the RGB channels as:

  • R ← (Rt-N)/(Rs-N)*(R-N)+N

  • G ← (Gt-N)/(Gs-N)*(G-N)+N

  • B ← (Bt-N)/(Bs-N)*(B-N)+N

On output, the source color becomes the target color, while the neutral color remains unchanged.

Parameters:
  • source (array_like) – The source color (tuple/list/array of the Rs, Gs, Bs levels).

  • target (array_like) – The target color (tuple/list/array of the Rt, Gt, Bt levels).

  • neutral (float, optional) – The neutral level or color (default 0). Can be a scalar or a tuple/list/array of neutral (R, G, B) levels.

Returns:

The processed image.

Return type:

Image

mix_RGB(M, neutral=0.0)

Mix RGB channels.

Transforms each pixel P = (R, G, B) of the image as:

P ← M@(P-neutral)+neutral,

with M a 3x3 mixing matrix. The neutral color thus remains unchanged.

Parameters:
  • M (numpy.ndarray) – The mixing matrix.

  • neutral (float, optional) – The neutral level or color (default 0). Can be a scalar or a tuple/list/array of neutral (R, G, B) levels.

Returns:

The processed image.

Return type:

Image
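
Example (a sketch): swap the red and blue channels with a permutation matrix (the neutral color, here 0, is left unchanged):

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(3).random((3, 32, 48)))
    M = np.array([[0., 0., 1.],
                  [0., 1., 0.],
                  [1., 0., 0.]])  # R <-> B permutation.
    out = im.mix_RGB(M)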

color_temperature(T, T0=6650.0, lightness=False)

Adjust the color temperature of a RGB image.

Adjusts the color balance assuming that the scene is (or is lit by) a black body source whose temperature is changed from T0 (default 6650K) to T. Setting T < T0 casts a red tint on the image, while setting T > T0 casts a blue tint. This is not a rigorous transformation and is intended for “cosmetic” purposes. The colors are balanced in the linear RGB color space.

Parameters:
  • T (float) – The target temperature between 1000K and 40000K.

  • T0 (float, optional) – The initial temperature between 1000K and 40000K (default 6650K).

  • lightness (bool, optional) – If True, preserve the lightness L* of the original image. Note that this may result in some out-of-range pixels. Default is False.

Returns:

The processed image.

Return type:

Image

HSX_color_saturation(D=0.0, mode='midsat', colormodel='HSV', colorspace=None, interpolation='akima', lightness=False, trans=True, **kwargs)

Adjust color saturation in the HSV or HSL color models.

The image is converted (if needed) to the HSV or HSL color model, then the color saturation S is transformed according to the mode kwarg:

  • “addsat”: Shift the saturation S ← S+delta.

  • “mulsat”: Scale the saturation S ← S*(1+delta).

  • “midsat”: Apply a midtone stretch function S ← f(S) = (m-1)S/((2m-1)S-m) with midtone m = (1-delta)/2. This function increases monotonically from f(0) = 0 through f(m) = 1/2 to f(1) = 1, and thus leaves the saturation of the least/most saturated pixels unchanged.

The image is then converted back to the original color model after this operation. delta is expected to be > -1, and to be < 1 in the “midsat” mode. Whatever the mode, delta = 0 leaves the image unchanged, delta > 0 saturates the colors, and delta < 0 turns the image into a grayscale.

delta is set for the red (‘R’), yellow (‘Y’), green (‘G’), cyan (‘C’), blue (‘B’) and magenta (‘M’) hues by the corresponding kwarg (delta = D if missing). It is interpolated for arbitrary hues using nearest neighbor, linear, cubic or Akima spline interpolation according to the interpolation kwarg. Midpoint deltas may also be specified for finer interpolation by providing the kwargs ‘RY’ (red-yellow), ‘YG’ (yellow-green), ‘GC’ (green-cyan), ‘CB’ (cyan-blue), ‘BM’ (blue-magenta) and ‘MR’ (magenta-red).

Parameters:
  • D (float, optional) – The delta for all hues (default 0).

  • R (float, optional) – The red delta (default D).

  • Y (float, optional) – The yellow delta (default D).

  • G (float, optional) – The green delta (default D).

  • C (float, optional) – The cyan delta (default D).

  • B (float, optional) – The blue delta (default D).

  • M (float, optional) – The magenta delta (default D).

  • mode (str, optional) – The saturation mode [“addsat”, “mulsat” or “midsat” (default)].

  • colormodel (str, optional) – The color model for saturation [“HSV” (default) or “HSL”].

  • colorspace (str, optional) – The color space for saturation [“lRGB”, “sRGB”, or None (default) to use the color space of the image].

  • interpolation (str, optional) –

    The interpolation method for delta(hue):

    • ”nearest”: Nearest neighbor interpolation.

    • ”linear”: Linear spline interpolation.

    • ”cubic”: Cubic spline interpolation.

    • ”akima”: Akima spline interpolation (default).

  • lightness (bool, optional) – If True, preserve the lightness L* of the original image. Note that this may result in some out-of-range pixels. Default is False.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The processed image.

Return type:

Image
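
Example (a sketch; the deltas are illustrative): boost the overall saturation while slightly desaturating the greens, in the default “midsat” mode:

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(4).random((3, 32, 48)))
    out = im.HSX_color_saturation(D=.2, G=-.1, mode="midsat", colormodel="HSV")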

CIE_chroma_saturation(D=0.0, mode='midsat', colormodel='Lsh', interpolation='akima', ref=None, trans=True, **kwargs)

Adjust color chroma or saturation in the CIELab or CIELuv color spaces.

The image is converted (if needed) to the CIELab or CIELuv color space, then the CIELab chroma CS = c* = sqrt(a*^2+b*^2) (colormodel = “Lab”), or the CIELuv chroma CS = c* = sqrt(u*^2+v*^2) (colormodel = “Luv”), or the CIELuv saturation CS = s* = c*/L* (colormodel = “Lsh”) is transformed according to the mode kwarg:

  • “addsat”: Shift the chroma/saturation CS ← CS+delta.

  • “mulsat”: Scale the chroma/saturation CS ← CS*(1+delta).

  • “midsat”: Apply a midtone stretch function CS ← f(CS) = (m-1)CS/((2m-1)CS/ref-m) with midtone m = (1-delta)/2. This function increases monotonically from f(0) = 0 through f(m*ref) = ref/2 to f(ref) = ref, where ref is a reference chroma/saturation (ref = max(CS) by default).

The image is then converted back to the original color space and model after this operation. delta is expected to be > -1, and to be < 1 in the “midsat” mode. Whatever the mode, delta = 0 leaves the image unchanged, delta > 0 saturates the colors, and delta < 0 turns the image into a grayscale. However, please keep in mind that the chroma/saturation in the CIELab/CIELuv color spaces is not bounded by 1 as it is in the lRGB and sRGB color spaces (HSV and HSL color models). The choice of the reference can, therefore, be critical in the “midsat” mode. In particular, pixels with chroma/saturation > ref get desaturated if delta > 0, and oversaturated if delta < 0 (with a possible singularity at CS = -ref*(1-delta)/(2*delta)).

delta is set for the red (‘R’), yellow (‘Y’), green (‘G’), cyan (‘C’), blue (‘B’) and magenta (‘M’) hues by the corresponding kwarg (delta = D if missing). It is interpolated for arbitrary hues using nearest neighbor, linear, cubic or Akima spline interpolation according to the interpolation kwarg. Midpoint deltas may also be specified for finer interpolation by providing the kwargs ‘RY’ (red-yellow), ‘YG’ (yellow-green), ‘GC’ (green-cyan), ‘CB’ (cyan-blue), ‘BM’ (blue-magenta) and ‘MR’ (magenta-red).

Contrary to the saturation of HSV or HSL images, chroma/saturation transformations in the CIELab and CIELuv color spaces preserve the lightness by design. They may, however, result in out-of-range RGB pixels (as not all points of these color spaces correspond to physical RGB colors).

Note

Chroma and saturation are related, but different quantities (s* = c*/L* in the CIELuv color space). There is no rigorous definition of saturation in the CIELab color space.

Parameters:
  • D (float, optional) – The delta for all hues (default 0).

  • R (float, optional) – The red delta (default D).

  • Y (float, optional) – The yellow delta (default D).

  • G (float, optional) – The green delta (default D).

  • C (float, optional) – The cyan delta (default D).

  • B (float, optional) – The blue delta (default D).

  • M (float, optional) – The magenta delta (default D).

  • mode (str, optional) – The saturation mode [“addsat”, “mulsat” or “midsat” (default)].

  • colormodel (str, optional) – The color model for saturation [“Lab”, “Luv” or “Lsh” (default)].

  • interpolation (str, optional) –

    The interpolation method for delta(hue angle):

    • ”nearest”: Nearest neighbor interpolation.

    • ”linear”: Linear spline interpolation.

    • ”cubic”: Cubic spline interpolation.

    • ”akima”: Akima spline interpolation (default).

  • ref (float, optional) – The reference chroma/saturation for the “midsat” mode. If None, defaults to the maximum chroma/saturation of the input image.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The processed image.

Return type:

Image

rotate_HSX_hue(D=0.0, colorspace=None, interpolation='akima', lightness=False, trans=True, **kwargs)

Rotate color hues in the HSV/HSL color models.

The image is converted (if RGB) to the HSV color model, and the hue H is rotated:

H ← (H+delta)%1.

The image is then converted back to the original color model after this operation. delta is set for the original red (‘R’), yellow (‘Y’), green (‘G’), cyan (‘C’), blue (‘B’) and magenta (‘M’) hues by the corresponding kwarg (delta = D if missing). It is interpolated for arbitrary hues using nearest neighbor, linear, cubic or Akima spline interpolation according to the interpolation kwarg. Midpoint deltas may also be specified for finer interpolation by providing the kwargs ‘RY’ (red-yellow), ‘YG’ (yellow-green), ‘GC’ (green-cyan), ‘CB’ (cyan-blue), ‘BM’ (blue-magenta) and ‘MR’ (magenta-red).

Note

H(red) = 0, H(yellow) = 1/6, H(green) = 1/3, H(cyan) = 1/2, H(blue) = 2/3, and H(magenta) = 5/6. A uniform rotation D = 1/6 converts red → yellow, yellow → green, green → cyan, cyan → blue, blue → magenta, and magenta → red. A uniform rotation D = -1/6 converts red → magenta, yellow → red, green → yellow, cyan → green, blue → cyan, and magenta → blue.

Parameters:
  • D (float, optional) – The delta for all hues (default 0).

  • R (float, optional) – The red delta (default D).

  • Y (float, optional) – The yellow delta (default D).

  • G (float, optional) – The green delta (default D).

  • C (float, optional) – The cyan delta (default D).

  • B (float, optional) – The blue delta (default D).

  • M (float, optional) – The magenta delta (default D).

  • colorspace (str, optional) – The color space for saturation [“lRGB”, “sRGB”, or None (default) to use the color space of the image].

  • interpolation (str, optional) –

    The interpolation method for delta(hue):

    • ”nearest”: Nearest neighbor interpolation.

    • ”linear”: Linear spline interpolation.

    • ”cubic”: Cubic spline interpolation.

    • ”akima”: Akima spline interpolation (default).

  • lightness (bool, optional) – If True, preserve the lightness L* of the original image. Note that this may result in some out-of-range pixels. Default is False.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The processed image.

Return type:

Image
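
Example (a sketch): the uniform rotation D = 1/6 of the Note above, which maps red → yellow, yellow → green, etc.:

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(5).random((3, 32, 48)))
    out = im.rotate_HSX_hue(D=1/6)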

rotate_CIE_hue(D=0.0, colorspace='CIELab', interpolation='akima', trans=True, **kwargs)

Rotate color hues in the CIELab or CIELuv color space.

The image is converted (if needed) to the CIELab or CIELuv color space, and the reduced hue angle h* (within [0, 1]) is rotated:

h* ← (h*+delta)%1.

The image is then converted back to the original color model after this operation. delta is set for the original red (‘R’), yellow (‘Y’), green (‘G’), cyan (‘C’), blue (‘B’) and magenta (‘M’) hues by the corresponding kwarg (delta = D if missing). It is interpolated for arbitrary hues using nearest neighbor, linear, cubic or Akima spline interpolation according to the interpolation kwarg. Midpoint deltas may also be specified for finer interpolation by providing the kwargs ‘RY’ (red-yellow), ‘YG’ (yellow-green), ‘GC’ (green-cyan), ‘CB’ (cyan-blue), ‘BM’ (blue-magenta) and ‘MR’ (magenta-red).

Contrary to the rotation of HSV or HSL images, rotations in the CIELab and CIELuv color spaces preserve the lightness by design. They may, however, result in out-of-range RGB pixels (as not all points of these color spaces correspond to physical RGB colors).

Note

h*(red) ~ 0, h*(yellow) ~ 1/6, h*(green) ~ 1/3, h*(cyan) ~ 1/2, h*(blue) ~ 2/3, and h*(magenta) ~ 5/6. A uniform rotation D = 1/6 converts red → yellow, yellow → green, green → cyan, cyan → blue, blue → magenta, and magenta → red. A uniform rotation D = -1/6 converts red → magenta, yellow → red, green → yellow, cyan → green, blue → cyan, and magenta → blue.

Parameters:
  • D (float, optional) – The delta for all hues (default 0).

  • R (float, optional) – The red delta (default D).

  • Y (float, optional) – The yellow delta (default D).

  • G (float, optional) – The green delta (default D).

  • C (float, optional) – The cyan delta (default D).

  • B (float, optional) – The blue delta (default D).

  • M (float, optional) – The magenta delta (default D).

  • colorspace (str, optional) – The color space for rotation [“CIELab” (default) or “CIELuv”].

  • interpolation (str, optional) –

    The interpolation method for delta(hue angle):

    • ”nearest”: Nearest neighbor interpolation.

    • ”linear”: Linear spline interpolation.

    • ”cubic”: Cubic spline interpolation.

    • ”akima”: Akima spline interpolation (default).

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The processed image.

Return type:

Image

SCNR(hue='green', protection='avgneutral', amount=1.0, colorspace=None, lightness=True)

Subtractive Chromatic Noise Reduction of a given hue of a RGB image.

The input hue is reduced according to the protection kwarg. For the green hue, for example:

  • G ← min(G, C) with C = (R+B)/2 for average neutral protection (protection = “avgneutral”).

  • G ← min(G, C) with C = max(R, B) for maximum neutral protection (protection = “maxneutral”).

  • G ← G*[(1-A)+C*A] with C = (R+B)/2 for additive mask protection (protection = “addmask”).

  • G ← G*[(1-A)+C*A] with C = max(R, B) for maximum mask protection (protection = “maxmask”).

The parameter A in [0, 1] controls the strength of the mask protection.

Parameters:
  • hue (str, optional) – The hue to be reduced [“red” alias “R”, “yellow” alias “Y”, “green” alias “G” (default), “cyan” alias “C”, “blue” alias “B”, or “magenta” alias “M”].

  • protection (str, optional) – The protection mode [“avgneutral” (default), “maxneutral”, “addmask” or “maxmask”].

  • amount (float, optional) – The parameter A for mask protection (protection = “addmask” or “maxmask”, default 1).

  • colorspace (str, optional) – The color space for SCNR [“lRGB”, “sRGB”, or None (default) to use the color space of the image].

  • lightness (bool, optional) – If True (default), preserve the lightness L* of the original image. Note that this may result in some out-of-range pixels.

Returns:

The processed image.

Return type:

Image
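
Example (a sketch): the classic green noise reduction G ← min(G, (R+B)/2), with the lightness L* preserved by default:

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(6).random((3, 32, 48)))
    out = im.SCNR(hue="green", protection="avgneutral")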

equimage.image_colorspaces module

Color spaces and models management.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“luma”, “lRGB_lightness”, “sRGB_lightness”.

equimage.image_colorspaces.lRGB_to_sRGB(image)

Convert the input linear RGB image into a sRGB image.

See also

The reciprocal sRGB_to_lRGB() function.

Parameters:

image (numpy.ndarray) – The input lRGB image.

Returns:

The converted sRGB image.

Return type:

numpy.ndarray

equimage.image_colorspaces.sRGB_to_lRGB(image)

Convert the input sRGB image into a linear RGB image.

See also

The reciprocal lRGB_to_sRGB() function.

Parameters:

image (numpy.ndarray) – The input sRGB image.

Returns:

The converted lRGB image.

Return type:

numpy.ndarray

equimage.image_colorspaces.lRGB_to_CIELab(image)

Convert the input linear RGB image into a CIELab image.

Note that the CIE lightness L* is conventionally defined within [0, 100], and that the CIE chromatic components a*, b* are signed. This function actually returns L*/100, a*/100 and b*/100.

See also

The reciprocal CIELab_to_lRGB() function.

Parameters:

image (numpy.ndarray) – The input lRGB image.

Returns:

The converted CIELab image.

Return type:

numpy.ndarray

equimage.image_colorspaces.CIELab_to_lRGB(image)

Convert the input CIELab image into a linear RGB image.

See also

The reciprocal lRGB_to_CIELab() function.

Parameters:

image (numpy.ndarray) – The input CIELab image.

Returns:

The converted lRGB image.

Return type:

numpy.ndarray

equimage.image_colorspaces.lRGB_to_CIELuv(image)

Convert the input linear RGB image into a CIELuv image.

Note that the CIE lightness L* is conventionally defined within [0, 100], and that the CIE chromatic components u*, v* are signed. This function actually returns L*/100, u*/100 and v*/100.

See also

The reciprocal CIELuv_to_lRGB() function.

Parameters:

image (numpy.ndarray) – The input lRGB image.

Returns:

The converted CIELuv image.

Return type:

numpy.ndarray

equimage.image_colorspaces.CIELuv_to_lRGB(image)

Convert the input CIELuv image into a linear RGB image.

See also

The reciprocal lRGB_to_CIELuv() function.

Parameters:

image (numpy.ndarray) – The input CIELuv image.

Returns:

The converted lRGB image.

Return type:

numpy.ndarray

equimage.image_colorspaces.sRGB_to_CIELab(image)

Convert the input sRGB image into a CIELab image.

Note that the CIE lightness L* is conventionally defined within [0, 100], and that the CIE chromatic components a*, b* are signed. This function actually returns L*/100, a*/100 and b*/100.

See also

The reciprocal CIELab_to_sRGB() function.

Parameters:

image (numpy.ndarray) – The input sRGB image.

Returns:

The converted CIELab image.

Return type:

numpy.ndarray

equimage.image_colorspaces.CIELab_to_sRGB(image)

Convert the input CIELab image into a sRGB image.

See also

The reciprocal sRGB_to_CIELab() function.

Parameters:

image (numpy.ndarray) – The input CIELab image.

Returns:

The converted sRGB image.

Return type:

numpy.ndarray

equimage.image_colorspaces.sRGB_to_CIELuv(image)

Convert the input sRGB image into a CIELuv image.

Note that the CIE lightness L* is conventionally defined within [0, 100], and that the CIE chromatic components u*, v* are signed. This function actually returns L*/100, u*/100 and v*/100.

See also

The reciprocal CIELuv_to_sRGB() function.

Parameters:

image (numpy.ndarray) – The input sRGB image.

Returns:

The converted CIELuv image.

Return type:

numpy.ndarray

equimage.image_colorspaces.CIELuv_to_sRGB(image)

Convert the input CIELuv image into a sRGB image.

See also

The reciprocal sRGB_to_CIELuv() function.

Parameters:

image (numpy.ndarray) – The input CIELuv image.

Returns:

The converted sRGB image.

Return type:

numpy.ndarray

equimage.image_colorspaces.RGB_to_HSV(image)

Convert the input RGB image into a HSV image.

See also

The reciprocal HSV_to_RGB() function.

Note

This function clips the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input RGB image.

Returns:

The converted HSV image.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSV_to_RGB(image)

Convert the input HSV image into a RGB image.

See also

The reciprocal RGB_to_HSV() function.

Note

This function clips the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input HSV image.

Returns:

The converted RGB image.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSV_to_HSL(image)

Convert the input HSV image into a HSL image.

See also

The reciprocal HSL_to_HSV() function.

Note

This function clips the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input HSV image.

Returns:

The converted HSL image.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSL_to_HSV(image)

Convert the input HSL image into a HSV image.

See also

The reciprocal HSV_to_HSL() function.

Note

This function clips the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input HSL image.

Returns:

The converted HSV image.

Return type:

numpy.ndarray

equimage.image_colorspaces.RGB_to_HSL(image)

Convert the input RGB image into a HSL image.

See also

The reciprocal HSL_to_RGB() function.

Note

This function clips the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input RGB image.

Returns:

The converted HSL image.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSL_to_RGB(image)

Convert the input HSL image into a RGB image.

See also

The reciprocal RGB_to_HSL() function.

Note

This function clips the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input HSL image.

Returns:

The converted RGB image.

Return type:

numpy.ndarray

equimage.image_colorspaces.Lxx_to_Lch(image)

Convert the input Lab/Luv image into a Lch image.

See also

The reciprocal Lch_to_Lxx() function.

Parameters:

image (numpy.ndarray) – The input Lab/Luv image.

Returns:

The converted Lch image.

Return type:

numpy.ndarray

equimage.image_colorspaces.Lch_to_Lxx(image)

Convert the input Lch image into a Lab/Luv image.

See also

The reciprocal Lxx_to_Lch() function.

Parameters:

image (numpy.ndarray) – The input Lch image.

Returns:

The converted Lab/Luv image.

Return type:

numpy.ndarray

equimage.image_colorspaces.Lch_to_Lsh(image)

Convert the input Lch image into a Lsh image.

See also

The reciprocal Lsh_to_Lch() function.

Parameters:

image (numpy.ndarray) – The input Lch image.

Returns:

The converted Lsh image.

Return type:

numpy.ndarray

equimage.image_colorspaces.Lsh_to_Lch(image)

Convert the input Lsh image into a Lch image.

See also

The reciprocal Lch_to_Lsh() function.

Parameters:

image (numpy.ndarray) – The input Lsh image.

Returns:

The converted Lch image.

Return type:

numpy.ndarray

equimage.image_colorspaces.Luv_to_Lsh(image)

Convert the input Luv image into a Lsh image.

See also

The reciprocal Lsh_to_Luv() function.

Parameters:

image (numpy.ndarray) – The input Luv image.

Returns:

The converted Lsh image.

Return type:

numpy.ndarray

equimage.image_colorspaces.Lsh_to_Luv(image)

Convert the input Lsh image into a Luv image.

See also

The reciprocal Luv_to_Lsh() function.

Parameters:

image (numpy.ndarray) – The input Lsh image.

Returns:

The converted Luv image.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSX_hue(image)

Return the HSV/HSL hue of the input RGB image.

Note

This function clips the input image below 0.

Parameters:

image (numpy.ndarray) – The input RGB image.

Returns:

The HSV/HSL hue.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSV_value(image)

Return the HSV value V = max(RGB) of the input RGB image.

Note

Compatible with single channel grayscale images. This function clips the input image below 0.

Parameters:

image (numpy.ndarray) – The input RGB image.

Returns:

The HSV value V.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSV_saturation(image)

Return the HSV saturation S = 1-min(RGB)/max(RGB) of the input RGB image.

Note

Compatible with single channel grayscale images. This function clips the input image below 0.

Parameters:

image (numpy.ndarray) – The input RGB image.

Returns:

The HSV saturation S.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSL_lightness(image)

Return the HSL lightness L’ = (max(RGB)+min(RGB))/2 of the input RGB image.

Note

Compatible with single channel grayscale images. This function clips the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input RGB image.

Returns:

The HSL lightness L’.

Return type:

numpy.ndarray

equimage.image_colorspaces.HSL_saturation(image)

Return the HSL saturation S’ = (max(RGB)-min(RGB))/(1-abs(max(RGB)+min(RGB)-1)) of the input RGB image.

Note

Compatible with single channel grayscale images. This function clips the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input RGB image.

Returns:

The HSL saturation S’.

Return type:

numpy.ndarray

equimage.image_colorspaces.luma(image)

Return the luma L of the input RGB image.

The luma L is the average of the RGB components weighted by rgbluma = get_RGB_luma():

L = rgbluma[0]*image[0]+rgbluma[1]*image[1]+rgbluma[2]*image[2].

Note

Compatible with single channel grayscale images.

Parameters:

image (numpy.ndarray) – The input RGB image.

Returns:

The luma L.

Return type:

numpy.ndarray

equimage.image_colorspaces.luminance_to_lightness(Y)

Compute the CIE lightness L* from the lRGB luminance Y.

The CIE lightness L* is defined from the lRGB luminance Y as:

L* = 116*Y**(1/3)-16 if Y > 0.008856 and L* = 903.3*Y if Y < 0.008856.

Note that L* is conventionally defined within [0, 100]. However, this function returns the scaled lightness L*/100 within [0, 1].

See also

The reciprocal lightness_to_luminance() function.

Parameters:

Y (numpy.ndarray) – The luminance Y.

Returns:

The CIE lightness L*/100.

Return type:

numpy.ndarray
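
The documented piecewise formula, as a standalone numpy sketch (not the library implementation):

    import numpy as np

    def lightness_from_luminance(Y):
        """Return the CIE lightness L*/100 from the luminance Y."""
        Y = np.asarray(Y)
        Lstar = np.where(Y > 0.008856, 116.*np.cbrt(Y)-16., 903.3*Y)
        return Lstar/100.

    print(lightness_from_luminance(0.18))  # ~0.495: 18% gray has L* ~ 50.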

equimage.image_colorspaces.lightness_to_luminance(Lstar)

Compute the lRGB luminance Y from the CIE lightness L*.

See also

The reciprocal luminance_to_lightness() function.

Parameters:

Lstar (numpy.ndarray) – The CIE lightness L*/100.

Returns:

The luminance Y.

Return type:

numpy.ndarray

equimage.image_colorspaces.lRGB_luminance(image)

Return the luminance Y of the input linear RGB image.

The luminance Y of a lRGB image is defined as:

Y = 0.212671*R+0.715160*G+0.072169*B

It is equivalently the luma of the lRGB image for RGB weights (0.212671, 0.715160, 0.072169), and is the basic ingredient of the perceptual lightness L* in the CIELab and CIELuv color spaces.

Note

Compatible with single channel grayscale images.

Parameters:

image (numpy.ndarray) – The input lRGB image.

Returns:

The luminance Y.

Return type:

numpy.ndarray

equimage.image_colorspaces.lRGB_lightness(image)

Return the CIE lightness L* of the input linear RGB image.

The CIE lightness L* is defined from the lRGB luminance Y as:

L* = 116*Y**(1/3)-16 if Y > 0.008856 and L* = 903.3*Y if Y < 0.008856.

It is a measure of the perceptual lightness of the image. Note that L* is conventionally defined within [0, 100]. However, this function returns the scaled lightness L*/100 within [0, 1].

Note

Compatible with single channel grayscale images.

Parameters:

image (numpy.ndarray) – The input lRGB image.

Returns:

The CIE lightness L*/100.

Return type:

numpy.ndarray

equimage.image_colorspaces.sRGB_luminance(image)

Return the luminance Y of the input sRGB image.

The image is converted to the lRGB color space to compute the luminance Y.

Note

Although they have the same functional forms, the luma and luminance are different concepts for sRGB images: the luma is computed in the sRGB color space as a substitute for the perceptual lightness, whereas the luminance is computed after conversion in the lRGB color space and is the basic ingredient of the genuine perceptual lightness (see lRGB_lightness).

Note

Compatible with single channel grayscale images.

Parameters:

image (numpy.ndarray) – The input sRGB image.

Returns:

The luminance Y.

Return type:

numpy.ndarray

equimage.image_colorspaces.sRGB_lightness(image)

Return the CIE lightness L* of the input sRGB image.

The image is converted to the lRGB color space to compute the CIE lightness L*. L* is a measure of the perceptual lightness of the image. Note that L* is conventionally defined within [0, 100]. However, this function returns the scaled lightness L*/100 within [0, 1].

Note

Compatible with single channel grayscale images. This function does not clip the input image to the [0, 1] range.

Parameters:

image (numpy.ndarray) – The input sRGB image.

Returns:

The CIE lightness L*/100.

Return type:

numpy.ndarray

class equimage.image_colorspaces.MixinImage

Bases: object

To be included in the Image class.

color_space_error()

Raise a color space error.

color_model_error()

Raise a color model error.

check_color_space(*colorspaces)

Raise a color space error if the color space of the image is not in the arguments.

check_color_model(*colormodels)

Raise a color model error if the color model of the image is not in the arguments.

check_is_not_gray()

Raise a color model error if the image is a grayscale.

lRGB()

Convert the image to the linear RGB color space.

Warning

Does not apply to the HSV, HSL, Lch and Lsh color models.

Returns:

The converted lRGB image (a copy of the original image if already lRGB).

Return type:

Image

sRGB()

Convert the image to the sRGB color space.

Warning

Does not apply to the HSV, HSL, Lch and Lsh color models.

Returns:

The converted sRGB image (a copy of the original image if already sRGB).

Return type:

Image

CIELab()

Convert the image to the CIELab color space.

Note that the CIE lightness L* is conventionally defined within [0, 100], and that the CIE chromatic components a*, b* are signed. This function actually returns L*/100, a*/100 and b*/100.

Warning

Does not apply to the HSV, HSL, Lch and Lsh color models, and to grayscale images.

Returns:

The converted CIELab image (a copy of the original image if already CIELab).

Return type:

Image

CIELuv()

Convert the image to the CIELuv color space.

Note that the CIE lightness L* is conventionally defined within [0, 100], and that the CIE chromatic components u*, v* are signed. This function actually returns L*/100, u*/100 and v*/100.

Warning

Does not apply to the HSV, HSL, Lch and Lsh color models, and to grayscale images.

Returns:

The converted CIELuv image (a copy of the original image if already CIELuv).

Return type:

Image

RGB()

Convert the image to the RGB color model.

Warning

Only applies in the lRGB and sRGB color spaces.

Note

This method clips HSV and HSL images to the [0, 1] range.

Returns:

The converted RGB image (a copy of the original image if already RGB).

Return type:

Image

HSV()

Convert the image to the HSV color model.

Warning

Only applies in the lRGB and sRGB color spaces. The conversion from a grayscale to a HSV image is ill-defined (no hue).

Note

This method clips RGB and HSL images to the [0, 1] range.

Returns:

The converted HSV image (a copy of the original image if already HSV).

Return type:

Image

HSL()

Convert the image to the HSL color model.

Warning

Only applies in the lRGB and sRGB color spaces. The conversion from a grayscale to a HSL image is ill-defined (no hue).

Note

This method clips RGB and HSV images to the [0, 1] range.

Returns:

The converted HSL image (a copy of the original image if already HSL).

Return type:

Image

Lab()

Convert the image to the Lab color model.

Warning

Only applies in the CIELab color space.

Returns:

The converted Lab image (a copy of the original image if already Lab).

Return type:

Image

Luv()

Convert the image to the Luv color model.

Warning

Only applies in the CIELuv color space.

Returns:

The converted Luv image (a copy of the original image if already Luv).

Return type:

Image

Lch()

Convert the image to the Lch color model.

Warning

Only applies in the CIELab and CIELuv color spaces.

Returns:

The converted Lch image (a copy of the original image if already Lch).

Return type:

Image

Lsh()

Convert the image to the Lsh color model.

Warning

Only applies in the CIELuv color space.

Returns:

The converted Lsh image (a copy of the original image if already Lsh).

Return type:

Image

convert(colorspace=None, colormodel=None, copy=True)

Convert the image to the target color space and color model.

This method is more versatile than the dedicated conversion methods such as Image.sRGB(), Image.HSV(), etc. In particular, it can chain conversions to reach the target color space and model. For example, (sRGB, HSV) → (lRGB, RGB) is performed as (sRGB, HSV) → (sRGB, RGB) → (lRGB, RGB).

Parameters:
  • colorspace (str, optional) – The target color space (“lRGB”, “sRGB”, “CIELab” or “CIELuv”). If None (default), keep the original color space.

  • colormodel (str, optional) – The target color model (“RGB”, “HSV” or “HSL” in the lRGB and sRGB color spaces, “Lab” or “Lch” in the CIELab color space, “Luv”, “Lch” or “Lsh” in the CIELuv color space). If None (default), keep (if possible) the original color model.

  • copy (bool, optional) – If True (default), return a copy of the original image if already in the target color space and color model. If False, return the original image.

Returns:

The converted image.

Return type:

Image
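
Example (a sketch): chain conversions from the default (sRGB, RGB) to (CIELuv, Lsh) and back:

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(7).random((3, 32, 48)))  # sRGB, RGB.
    lsh = im.convert(colorspace="CIELuv", colormodel="Lsh")
    print(lsh.get_color_space(), lsh.get_color_model())       # CIELuv Lsh
    rgb = lsh.convert(colorspace="sRGB", colormodel="RGB")    # And back.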

HSX_hue()

Return the HSV/HSL hue of the image.

Warning

Available only for RGB, HSV and HSL images.

Returns:

The HSV/HSL hue H.

Return type:

numpy.ndarray

HSV_value()

Return the HSV value V = max(RGB) of the image.

Warning

Available only for RGB, HSV, and grayscale images.

Returns:

The HSV value V.

Return type:

numpy.ndarray

HSV_saturation()

Return the HSV saturation S = 1-min(RGB)/max(RGB) of the image.

Warning

Available only for RGB, HSV, and grayscale images.

Returns:

The HSV saturation S.

Return type:

numpy.ndarray

HSL_lightness()

Return the HSL lightness L’ = (max(RGB)+min(RGB))/2 of the image.

Warning

Available only for RGB, HSL, and grayscale images.

Returns:

The HSL lightness L’.

Return type:

numpy.ndarray

HSL_saturation()

Return the HSL saturation S’ = (max(RGB)-min(RGB))/(1-abs(max(RGB)+min(RGB)-1)) of the image.

Warning

Available only for RGB, HSL, and grayscale images.

Returns:

The HSL saturation S’.

Return type:

numpy.ndarray

luma()

Return the luma L of the image.

The luma L is the average of the RGB components weighted by rgbluma = get_RGB_luma():

L = rgbluma[0]*image[0]+rgbluma[1]*image[1]+rgbluma[2]*image[2].

Warning

Available only for RGB and grayscale images.

Returns:

The luma L.

Return type:

numpy.ndarray

luminance()

Return the luminance Y of the image.

Warning

Available only for RGB, grayscale, CIELab and CIELuv images.

Returns:

The luminance Y.

Return type:

numpy.ndarray

lightness()

Return the CIE lightness L* of the image.

L* is a measure of the perceptual lightness of the image. Note that L* is conventionally defined within [0, 100]. However, this method returns the scaled lightness L*/100 within [0, 1].

Warning

Available only for RGB, grayscale, CIELab and CIELuv images.

Returns:

The CIE lightness L*/100.

Return type:

numpy.ndarray

CIE_chroma()

Return the CIE chroma c* of a CIELab or CIELuv image.

The CIE chroma is c* = sqrt(a*^2+b*^2) in the CIELab color space and c* = sqrt(u*^2+v*^2) in the CIELuv color space. The values of the CIE chroma thus differ in the two color spaces. This method actually returns the scaled CIE chroma c*/100.

Warning

Available only for CIELab and CIELuv images.

Returns:

The CIE chroma c*/100.

Return type:

numpy.ndarray

CIE_saturation()

Return the CIE saturation s* of a CIELuv image.

The CIE saturation is s* = c*/L* = sqrt(u*^2+v*^2)/L* in the CIELuv color space. This method actually returns the scaled CIE saturation s*/100.

Warning

Available only for CIELuv images.

Returns:

The CIE saturation s*/100.

Return type:

numpy.ndarray

CIE_hue()

Return the hue angle h* of a CIELab or CIELuv image.

The hue angle is h* = atan2(b*, a*) in the CIELab color space and h* = atan2(v*, u*) in the CIELuv color space. The values of the hue angle thus differ in the two color spaces. This method actually returns the reduced hue angle h*/(2π) within [0, 1].

Warning

Available only for CIELab and CIELuv images.

Returns:

The reduced hue angle h*/(2π).

Return type:

numpy.ndarray

get_channel(channel)

Return the selected channel of the image.

Parameters:

channel (str) –

The selected channel:

  • ”1”, “2”, “3” (or equivalently “R”, “G”, “B” for RGB images): The first/second/third channel (all images).

  • ”V”: The HSV value (RGB, HSV and grayscale images).

  • ”S”: The HSV saturation (RGB, HSV and grayscale images).

  • ”L’”: The HSL lightness (RGB, HSL and grayscale images).

  • ”S’”: The HSL saturation (RGB, HSL and grayscale images).

  • ”H”: The HSV/HSL hue (RGB, HSV and HSL images).

  • ”L”: The luma (RGB and grayscale images).

  • ”Y”: The luminance (RGB, grayscale, CIELab and CIELuv images).

  • ”L*”: The CIE lightness L* (RGB, grayscale, CIELab and CIELuv images).

  • ”c*”: The CIE chroma c* (CIELab and CIELuv images).

  • ”s*”: The CIE saturation s* (CIELuv images).

  • ”h*”: The CIE hue angle h* (CIELab and CIELuv images).

Also see:

The magic method Image.__getitem__(), which returns self.image[ic] as self[ic], with ic the Python channel index (0, 1 or 2).

Returns:

The selected channel.

Return type:

numpy.ndarray

set_channel(channel, data, inplace=False)

Update the selected channel of the image.

Parameters:
  • channel (str) –

    The updated channel:

    • ”1”, “2”, “3” (or equivalently “R”, “G”, “B” for RGB images): Update the first/second/third channel (all images).

    • ”V”: Update the HSV value (RGB, HSV and grayscale images).

    • ”S”: Update the HSV saturation (RGB and HSV images).

    • ”L’”: Update the HSL lightness (RGB, HSL and grayscale images).

    • ”S’”: Update the HSL saturation (RGB and HSL images).

    • ”L”: Update the luma (RGB and grayscale images).

    • ”L*”: Update the CIE lightness L* (CIELab and CIELuv images; equivalent to “L*ab” for lRGB and sRGB images).

    • ”L*ab”: Update the CIE lightness L* in the CIELab/Lab color space and model (CIELab, lRGB and sRGB images).

    • ”L*uv”: Update the CIE lightness L* in the CIELuv/Luv color space and model (CIELuv, lRGB and sRGB images).

    • ”L*sh”: Update the CIE lightness L* in the CIELuv/Lsh color space and model (CIELuv, lRGB and sRGB images).

    • ”c*”: Update the CIE chroma c* (CIELab and CIELuv images).

    • ”s*”: Update the CIE saturation s* (CIELuv images).

  • data (numpy.ndarray) – The updated channel data, as a 2D array with the same width and height as the image.

  • inplace (bool, optional) – If True, update the image “in place”; if False (default), return a new image.

Also see:

The magic method Image.__setitem__(), which implements self.image[ic] = data as self[ic] = data, with ic the Python channel index (0, 1 or 2).

Returns:

The updated image.

Return type:

Image

apply_channels(f, channels, multi=True, trans=False)

Apply the operation f(channel) to selected channels of the image.

Note

When applying an operation to the luma, the RGB components of the image are rescaled by the ratio f(luma)/luma. This preserves the hue and HSV saturation, but may result in some out-of-range RGB components even though f(luma) fits within [0, 1]. These out-of-range components can be regularized with three highlights protection methods:

  • “Desaturation”: The out-of-range pixels are desaturated at constant hue and luma (namely, the out-of-range components are decreased while the in-range components are increased so that the hue and luma are preserved). This tends to bleach the out-of-range pixels. f(luma) must fit within [0, 1] to make use of this highlights protection method.

  • “Blending”: The out-of-range pixels are blended with f(RGB) (the same operation applied to the RGB channels). This tends to bleach the out-of-range pixels too. f(RGB) must fit within [0, 1] to make use of this highlights protection method.

  • “Normalization”: The whole output image is rescaled so that all pixels fit in the [0, 1] range (output → output/max(1., numpy.max(output))).

Alternatively, applying the operation to the HSV value V also preserves the hue and HSV saturation and cannot produce out-of-range pixels if f([0, 1]) fits within [0, 1]. However, this may strongly affect the balance of the image, the HSV value being a very poor approximation to the perceptual lightness L*.

Parameters:
  • f (function) – The function f(numpy.ndarray) → numpy.ndarray applied to the selected channels.

  • channels (str) –

    The selected channels:

    • An empty string: Apply the operation to all channels (all images).

    • A combination of “1”, “2”, “3” (or equivalently “R”, “G”, “B” for RGB images): Apply the operation to the first/second/third channel (all images).

    • ”V”: Apply the operation to the HSV value (RGB, HSV and grayscale images).

    • ”S”: Apply the operation to the HSV saturation (RGB and HSV images).

    • ”L’”: Apply the operation to the HSL lightness (RGB, HSL and grayscale images).

    • ”S’”: Apply the operation to the HSL saturation (RGB and HSL images).

    • ”L”: Apply the operation to the luma (RGB and grayscale images).

    • ”Ls”: Apply the operation to the luma, and protect highlights by desaturation (after the operation, the out-of-range pixels are desaturated at constant luma).

    • ”Lb”: Apply the operation to the luma, and protect highlights by blending (after the operation, the out-of-range pixels are blended with f(RGB)).

    • ”Ln”: Apply the operation to the luma, and protect highlights by normalization (after the operation, the image is normalized so that all pixels fall back in the [0, 1] range).

    • ”L*”: Apply the operation to the CIE lightness L* (CIELab and CIELuv images; equivalent to “L*ab” for lRGB and sRGB images).

    • ”L*ab”: Apply the operation to the CIE lightness L* in the CIELab/Lab color space and model (CIELab, lRGB and sRGB images).

    • ”L*uv”: Apply the operation to the CIE lightness L* in the CIELuv/Luv color space and model (CIELuv, lRGB and sRGB images).

    • ”L*sh”: Apply the operation to the CIE lightness L* in the CIELuv/Lsh color space and model (CIELuv, lRGB and sRGB images).

    • ”c*”: Apply the operation to the CIE chroma c* (CIELab and CIELuv images).

    • ”s*”: Apply the operation to the CIE saturation s* (CIELuv images).

  • multi (bool, optional) – If True (default), the operation can be applied to the whole image at once; if False, the operation must be applied one channel at a time.

  • trans (bool, optional) –

    If True (default False), embed the transformation y = f(x), x in [0, 1], in the output image as output.trans, where:

    • output.trans.type = “hist”.

    • output.trans.input is a reference to the input image (self).

    • output.trans.channels are the channels selected for the transformation.

    • output.trans.x is a mesh of the [0, 1] interval.

    • output.trans.y = f(output.trans.x).

    • output.trans.ylabel is a label for output.trans.y.

    • output.trans.xticks is a list of remarkable x values for this transformation (if any).

    trans shall be set True only for local histogram transformations f.

Returns:

The processed image.

Return type:

Image
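
Example (a sketch): apply the stretch f(x) = sqrt(x) to the luma, protecting the highlights by desaturation (“Ls”); f maps [0, 1] onto [0, 1], as required by this protection method:

    import numpy as np
    from equimage import Image

    im = Image(np.random.default_rng(8).random((3, 32, 48)))
    out = im.apply_channels(np.sqrt, "Ls")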

clip_channels(channels, vmin=0.0, vmax=1.0)

Clip selected channels of the image in the range [vmin, vmax].

Parameters:
  • channels (str) – The selected channels (see Image.apply_channels()).

  • vmin (float, optional) – The lower clipping bound (default 0).

  • vmax (float, optional) – The upper clipping bound (default 1).

Returns:

The clipped image.

Return type:

Image

protect_highlights_saturation()

Normalize out-of-range pixels with HSV value > 1 by adjusting the saturation at constant hue and luma.

The out-of-range RGB components of the pixels are decreased while the in-range RGB components are increased so that the hue and luma are conserved. This desaturates (whitens) the pixels with out-of-range components. This aims at protecting the highlights from overflowing when stretching the luma.

Warning

The luma must be <= 1 even though some pixels have HSV value > 1.

Returns:

The processed image.

Return type:

Image

protect_highlights_blend(inrange)

Normalize out-of-range pixels with HSV value > 1 by blending with an “in-range” image with HSV values <= 1.

Each pixel of the image with out-of-range RGB components is brought back in the [0, 1] range by blending with the corresponding pixel of the input “in-range” image. This aims at protecting the highlights from overflowing when stretching the luma.

Parameters:

inrange (Image) – The “in-range” image to blend with. All pixels must have HSV values <= 1.

Returns:

The processed image.

Return type:

Image

equimage.image_editors module

External image editors.

class equimage.image_editors.MixinImage

Bases: object

To be included in the Image class.

edit_with(command, export='tiff', depth=16, script=None, editor='<Editor>', interactive=True, cwd=None)

Edit the image with an external tool.

The image is saved on disk; the editor is then run on this file, which is finally reloaded in eQuimageLab and returned.

The user/editor must simply overwrite the edited file on exit.

Parameters:
  • command (str) – The editor command to be run (e.g., “gimp -n $FILE$”). Any “$FILE$” is replaced by the name of the image file to be opened by the editor. Any “$SCRIPT$” is replaced by the name of the script file to be run by the editor.

  • export (str, optional) –

    The format used to export the image. Can be:

    • ”png”: PNG file with depth = 8 or 16 bits integer per channel.

    • ”tiff” (default): TIFF file with depth = 8, 16 or 32 bits integer per channel.

    • ”fits”: FITS file with depth = 32 bits float per channel.

  • depth (int, optional) – The color depth (bits per channel) used to export the image (see above; default 16).

  • script (str, optional) – A string containing a script to be executed by the editor (default None). Any “$FILE$” is replaced by the name of the image file.

  • editor (str, optional) – The name of the editor (for pretty print; default “<Editor>”).

  • interactive (bool, optional) – If True (default), the editor is interactive (awaits commands from the user); if False, the editor processes the image autonomously and does not require inputs from the user.

  • cwd (str, optional) – If not None (default), change to this working directory before running the editor.

Returns:

The edited image.

Return type:

Image
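
As an illustration, a hedged usage sketch (assumes the gimp command is available in the PATH; the options are illustrative):

edited = image.edit_with("gimp -n $FILE$", export = "tiff", depth = 16, editor = "Gimp") # Open the image in Gimp.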

edit_with_gimp(export='tiff', depth=16)

Edit the image with Gimp.

The image is saved on disk; Gimp is then run on this file, which is finally reloaded in eQuimageLab and returned.

The user must simply overwrite the edited file when leaving Gimp.

Warning

The command “gimp” must be in the PATH.

Parameters:
  • export (str, optional) –

    The format used to export the image. Can be:

    • ”png”: PNG file with depth = 8 or 16 bits integer per channel.

    • ”tiff” (default): TIFF file with depth = 8, 16 or 32 bits integer per channel.

    • ”fits”: FITS file with depth = 32 bits float per channel.

  • depth (int, optional) – The color depth (bits per channel) used to export the image (see above; default 16).

Returns:

The edited image.

Return type:

Image

edit_with_siril()

Edit the image with Siril.

The image is saved as a FITS file (32 bits float per channel); Siril is then run on this file, which is finally reloaded in eQuimageLab and returned.

The user must simply overwrite the edited file when leaving Siril.

Warning

The command “siril” must be in the PATH.

Returns:

The edited image.

Return type:

Image

equimage.image_filters module

Image filters.

class equimage.image_filters.MixinImage

Bases: object

To be included in the Image class.

remove_hot_pixels(ratio, mode='reflect', channels='')

Remove hot pixels in selected channels of the image.

All pixels of a selected channel greater than ratio times the eight nearest-neighbors average are replaced by this average.

Parameters:
  • ratio (float) – The threshold for hot pixels detection.

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

Returns:

The processed image.

Return type:

Image

remove_hot_cold_pixels(hot_ratio, cold_ratio, mode='reflect', channels='')

Remove hot and cold pixels in selected channels of the image.

All pixels of a selected channel greater than A*hot_ratio or smaller than A/cold_ratio, with A the eight nearest-neighbors average, are replaced by this average.

Parameters:
  • hot_ratio (float) – The threshold for hot pixels detection.

  • cold_ratio (float) – The threshold for cold pixels detection.

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

Returns:

The processed image.

Return type:

Image
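
A minimal usage sketch for both methods (assuming image is an Image object; the ratios are illustrative):

cleaned = image.remove_hot_pixels(2.) # Replace pixels brighter than twice the local average.
cleaned = image.remove_hot_cold_pixels(2., 2.) # Also replace abnormally dark pixels.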

sharpen(mode='reflect', channels='')

Apply a sharpening (Laplacian) convolution filter to selected channels of the image.

Parameters:
  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

Returns:

The sharpened image.

Return type:

Image

LDBS(sigma, amount, threshold, channels='L*', mode='reflect', full_output=False)

Light-dependent blur & sharpen (LDBS).

Blurs low-brightness and sharpens high-brightness areas.

The background of astronomical images usually remains noisy. This is the Poisson (photon counting) noise typical of low-brightness areas. We may want to “blur” this background by applying a “low-pass” filter that softens small-scale features, such as a convolution with a gaussian:

blurred = image.gaussian_filter(sigma = 5) # Gaussian blur with a std dev of 5 pixels.

Yet this operation would also blur the objects of interest (the galaxy, nebula…)!

As a matter of fact, most of these objects already lack sharpness… We may thus want, on the contrary, to apply a “high-pass” filter that enhances small-scale features. Since the convolution with a gaussian is a low-pass filter, the following operation:

sharpened = (1 + q) * image - q * blurred, q > 0

is a high-pass filter known as an “unsharp mask”. We can also rewrite this operation as a conventional blend with a mixing coefficient m > 1:

sharpened = (1 - m) * blurred + m * image, m > 1

Yet such an unsharp mask would also enhance the noise in the background!

We can meet both requirements by making m dependent on the lightness. Namely, we want m ~ 0 where the lightness is “small” (the background), and m > 1 where the lightness is “large” (the object of interest). We may use as a starting point:

m = (1 + a) * image

where a > 0 controls image sharpening in the bright areas. In practice, we gain flexibility by stretching the image to control how fast we switch from blurring to sharpening, e.g.:

m = (1 + a) * hms(image, D)

where hms is the harmonic stretch with strength D. The latter can be calculated to switch (m = 1) at a given threshold.

Application of the LDBS to all channels (as in the above equations) can lead to significant color spilling. It is preferable to apply LDBS to the lightness L* or luma L (i.e. setting image = L* or L and updating that channel with the output of the LDBS).

Parameters:
  • sigma (float) – The standard deviation of the gaussian blur (pixels).

  • amount (float) – The full strength of the unsharp mask (must be > 0).

  • threshold (float) – The threshold for sharpening (expected in ]0, 1[). The image is blurred below the threshold, and sharpened above.

  • channels (str, optional) – The channel(s) for LDBS. Can be “” (for all channels), “V”, “L’”, “L”, “L*” (default), “L*ab”, “L*uv” or “L*sh”. See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • mode (str, optional) –

    How to extend the image across its boundaries (for the gaussian blur):

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

  • full_output (bool, optional) –

    If False (default), only return the processed image. If True, return the processed image, as well as:

    • The blurred image if channels = “”.

    • The input, blurred, and output channels as grayscale images otherwise.

Returns:

The processed image(s) (see the full_output argument).

Return type:

Image
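
A usage sketch (assuming image is an Image object; the parameter values are illustrative):

processed = image.LDBS(sigma = 5., amount = 1.5, threshold = .1) # Blur below lightness .1, sharpen above.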

equimage.image_geometry module

Image geometry management.

class equimage.image_geometry.MixinImage

Bases: object

To be included in the Image class.

flipud()

Flip the image upside down.

Returns:

The flipped image.

Return type:

Image

fliplr()

Flip the image left/right.

Returns:

The flipped image.

Return type:

Image

rot90(n=1)

Rotate the image by (a multiple of) 90°.

Parameters:

n (int, optional) – The number of 90° rotations (positive for counter-clockwise rotations, negative for clockwise rotations; default 1).

Returns:

The rotated image.

Return type:

Image

resize(width, height, method='lanczos')

Resize the image.

Parameters:
  • width (int) – New image width (pixels).

  • height (int) – New image height (pixels).

  • method (str, optional) –

    Resampling method:

    • ”nearest”: Nearest neighbor interpolation.

    • ”bilinear”: Linear interpolation.

    • ”bicubic”: Cubic spline interpolation.

    • ”lanczos”: Lanczos (truncated sinc) filter (default).

    • ”box”: Box average (equivalent to “nearest” for upscaling).

    • ”hamming”: Hamming (cosine bell) filter.

Returns:

The resized image.

Return type:

Image

rescale(scale, method='lanczos')

Rescale the image.

Parameters:
  • scale (float) – Scaling factor.

  • method (str, optional) –

    Resampling method:

    • ”nearest”: Nearest neighbor interpolation.

    • ”bilinear”: Linear interpolation.

    • ”bicubic”: Cubic spline interpolation.

    • ”lanczos”: Lanczos (truncated sinc) filter (default).

    • ”box”: Box average (equivalent to “nearest” for upscaling).

    • ”hamming”: Hamming (cosine bell) filter.

Returns:

The rescaled image.

Return type:

Image

crop(xmin, xmax, ymin, ymax)

Crop the image.

Parameters:
  • xmin (float) – Crop from x = xmin to x = xmax (along the width).

  • xmax (float) – Crop from x = xmin to x = xmax (along the width).

  • ymin (float) – Crop from y = ymin to y = ymax (along the height).

  • ymax (float) – Crop from y = ymin to y = ymax (along the height).

Returns:

The cropped image.

Return type:

Image
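
A hedged sketch chaining some of these operations (assuming image is an Image object; the coordinates and scale are illustrative):

flipped = image.flipud() # Flip the image upside down.
rotated = image.rot90(-1) # Rotate the image by 90° clockwise.
small = image.rescale(.5) # Halve the image size.
cropped = image.crop(100, 600, 50, 450) # Keep x in [100, 600] and y in [50, 450].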

equimage.image_io module

Image I/O management.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“load_image”, “save_image”.

equimage.image_io.load_image_as_array(filename, verbose=True)

Load an RGB or grayscale image from a file.

Parameters:
  • filename (str) – The file name.

  • verbose (bool, optional) – If True (default), print information about the image.

Returns:

The image as numpy.ndarray and the file meta-data (including exif if available) as a dictionary.

equimage.image_io.load_image(filename, colorspace='sRGB', verbose=True)

Load an RGB or grayscale image from a file.

Parameters:
  • filename (str) – The file name.

  • colorspace (str, optional) – The colorspace of the image [either “sRGB” (default) or “lRGB” for linear RGB images].

  • verbose (bool, optional) – If True (default), print information about the image.

Returns:

The image as an Image object and the file meta-data (including exif if available) as a dictionary.

equimage.image_io.save_image(image, filename, depth=8, compress=6, verbose=True)

Save an RGB or grayscale image as a file.

Note: The color space is not embedded in the file at present.

Parameters:
  • image (Image) – The image.

  • filename (str) –

    The file name. The file format is chosen according to the extension:

    • .png: PNG file with depth = 8 or 16 bits integer per channel.

    • .tif, .tiff: TIFF file with depth = 8, 16 or 32 bits integer per channel.

    • .fit, .fits, .fts: FITS file with 32 bits float per channel (irrespective of depth).

  • depth (int, optional) – The color depth of the file in bits per channel (default 8).

  • compress (int, optional) – The compression level for TIFF files (Default 6; 0 = no zlib compression; 9 = maximum zlib compression).

  • verbose (bool, optional) – If True (default), print information about the file.
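
A minimal round-trip sketch (the file names are illustrative):

from equimage import load_image, save_image
image, meta = load_image("input.tif") # Returns the Image object and the file meta-data.
save_image(image, "output.png", depth = 16) # Save as a 16 bits/channel PNG file.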

class equimage.image_io.MixinImage

Bases: object

To be included in the Image class.

save(filename, depth=8, compress=6, verbose=True)

Save image as a file.

Note: The color model must be “RGB” or “gray”, but the color space is not embedded in the file at present.

Parameters:
  • filename (str) –

    The file name. The file format is chosen according to the extension:

    • .png: PNG file with depth = 8 or 16 bits integer per channel.

    • .tif, .tiff: TIFF file with depth = 8, 16 or 32 bits integer per channel.

    • .fit, .fits, .fts: FITS file with 32 bits float per channel (irrespective of depth).

  • depth (int, optional) – The color depth of the file in bits per channel (default 8).

  • compress (int, optional) – The compression level for TIFF files (Default 6; 0 = no zlib compression; 9 = maximum zlib compression).

  • verbose (bool, optional) – If True (default), print information about the file.

equimage.image_masks module

Image masks.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“float_mask”, “extend_bmask”, “smooth_mask”, “threshold_bmask”, “threshold_fmask”, “shape_bmask”.

equimage.image_masks.float_mask(mask)

Convert a binary mask into a float mask.

Parameters:

mask (numpy.ndarray) – The input binary mask.

Returns:

A float mask with datatype equimage.params.imagetype and values 1 where mask is True and 0 where mask is False. If already a float array, the input mask is returned as is.

Return type:

numpy.ndarray

equimage.image_masks.extend_bmask(mask, extend)

Extend or erode a binary mask.

Parameters:
  • mask (bool numpy.ndarray) – The input binary mask.

  • extend (int) – The number of pixels by which the mask is extended. The mask is extended if extend > 0, and eroded if extend < 0.

Returns:

The extended binary mask.

Return type:

numpy.ndarray

equimage.image_masks.smooth_mask(mask, radius, mode='zero')

Smooth a binary or float mask.

The input mask is converted into a float mask and convolved with a disk of the given radius.

Parameters:
  • mask (numpy.ndarray) – The input binary or float mask.

  • radius (float) – The smoothing radius (pixels). The edges of the output float mask get smoothed over 2*radius pixels.

  • mode (str, optional) –

    How to extend the mask across its boundaries for the convolution:

    • ”reflect”: the mask is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the mask is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the mask is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero” (default): the mask is padded with zeros (abcd → 0000|abcd|0000).

    • ”one”: the mask is padded with ones (abcd → 1111|abcd|1111).

Returns:

The smoothed, float mask.

Return type:

numpy.ndarray

equimage.image_masks.threshold_bmask(filtered, threshold, extend=0)

Set up a threshold binary mask.

Returns the pixels of the image such that filtered >= threshold as a binary mask.

Parameters:
  • filtered (numpy.ndarray) – The output of a filter (e.g., local average, …) applied to the image (see Image.filter()).

  • threshold (float) – The threshold for the mask. The mask is True wherever filtered >= threshold, and False elsewhere.

  • extend (int, optional) – Once computed, the mask can be extended/eroded by extend pixels (default 0). The mask is extended if extend > 0, and eroded if extend < 0.

Returns:

The mask as a boolean array with the same shape as filtered.

Return type:

numpy.ndarray

equimage.image_masks.threshold_fmask(filtered, threshold, extend=0, smooth=0.0, mode='zero')

Set up a threshold float mask.

Returns the pixels of the image such that filtered >= threshold as a float mask.

Parameters:
  • filtered (numpy.ndarray) – The output of a filter (e.g., local average, …) applied to the image (see Image.filter()).

  • threshold (float) – The threshold for the mask. The mask is 1 wherever filtered >= threshold, and 0 elsewhere.

  • extend (int, optional) – Once computed, the mask can be extended/eroded by extend pixels (default 0). The mask is extended if extend > 0, and eroded if extend < 0.

  • smooth (float, optional) – Once extended, the edges of the mask can be smoothed over 2*smooth pixels (default 0; see smooth_mask()).

  • mode (str, optional) –

    How to extend the mask across its boundaries for smoothing:

    • ”reflect”: the mask is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the mask is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the mask is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero” (default): the mask is padded with zeros (abcd → 0000|abcd|0000).

    • ”one”: the mask is padded with ones (abcd → 1111|abcd|1111).

Returns:

The mask as a float array with the same shape as filtered.

Return type:

numpy.ndarray

equimage.image_masks.shape_bmask(shape, x, y, width, height)

Return a binary mask defined by the input shape.

Parameters:
  • shape (str) – Either “rectangle” for a rectangle, “ellipse” for an ellipse, or “polygon” for a polygon.

  • x (tuple, list or numpy.ndarray) – The x coordinates of the shape (pixels along the width).

  • y (tuple, list or numpy.ndarray) –

    The y coordinates of the shape (pixels along the height):

    • If shape = “rectangle”, x = (x1, x2) and y = (y1, y2) define the coordinates of two opposite corners C1 = (x1, y1) and C2 = (x2, y2) of the rectangle.

    • If shape = “ellipse”, x = (x1, x2) and y = (y1, y2) define the coordinates of two opposite corners C1 = (x1, y1) and C2 = (x2, y2) of the rectangle that bounds the ellipse.

    • If shape = “polygon”, the points P[n] = (x[n], y[n]) (0 <= n < len(x)) are the vertices of the polygon.

  • width (int) – The width of the mask (pixels).

  • height (int) – The height of the mask (pixels).

Returns:

A boolean array with shape (height, width) and values True in the shape and False outside.

Return type:

numpy.ndarray

class equimage.image_masks.MixinImage

Bases: object

To be included in the Image class.

filter(channel, filter, radius, mode='reflect')

Apply a spatial filter to a selected channel of the image.

The main purpose of this method is to prepare masks for image processing.

Parameters:
  • channel (str) –

    The selected channel:

    • ”1”, “2”, “3” (or equivalently “R”, “G”, “B” for RGB images): The first/second/third channel (all images).

    • ”H”: The HSV/HSL hue (RGB, HSV and HSL images).

    • ”V”: The HSV value (RGB, HSV and grayscale images).

    • ”S”: The HSV saturation (RGB, HSV and grayscale images).

    • ”L’”: The HSL lightness (RGB, HSL and grayscale images).

    • ”S’”: The HSL saturation (RGB, HSL and grayscale images).

    • ”L”: The luma (RGB and grayscale images).

    • ”L*”: The CIE lightness L* (RGB, grayscale, CIELab and CIELuv images).

    • ”c*”: The CIE chroma c* (CIELab and CIELuv images).

    • ”s*”: The CIE saturation s* (CIELuv images).

  • filter (str) –

    The filter:

    • ”mean”: Return the average of the channel within a disk around each pixel.

    • ”median”: Return the median of the channel within a disk around each pixel.

    • ”gaussian”: Return the gaussian average of the channel around each pixel.

    • ”maximum”: Return the maximum of the channel within a disk around each pixel.

  • radius (float) – The radius of the disk (pixels). The standard deviation for gaussian average is radius/3.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

Returns:

The output of the filter as an array with shape (image height, image width).

Return type:

numpy.ndarray
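
A hedged sketch of a typical mask workflow combining this method with the module functions above (assuming image is an Image object; the parameter values are illustrative):

from equimage import threshold_fmask
filtered = image.filter("L", "median", radius = 8) # Median of the luma around each pixel.
mask = threshold_fmask(filtered, .25, extend = 4, smooth = 8.) # Float mask where filtered >= .25, extended then smoothed.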

shape_bmask(shape, x, y)

Return a binary mask defined by the input shape.

Parameters:
  • shape (str) – Either “rectangle” for a rectangle, “ellipse” for an ellipse, or “polygon” for a polygon.

  • x (tuple, list or numpy.ndarray) – The x coordinates of the shape (pixels along the width).

  • y (tuple, list or numpy.ndarray) –

    The y coordinates of the shape (pixels along the height):

    • If shape = “rectangle”, x = (x1, x2) and y = (y1, y2) define the coordinates of two opposite corners C1 = (x1, y1) and C2 = (x2, y2) of the rectangle.

    • If shape = “ellipse”, x = (x1, x2) and y = (y1, y2) define the coordinates of two opposite corners C1 = (x1, y1) and C2 = (x2, y2) of the rectangle that bounds the ellipse.

    • If shape = “polygon”, the points P[n] = (x[n], y[n]) (0 <= n < len(x)) are the vertices of the polygon.

Returns:

A boolean array with shape (image height, image width), and values True in the shape and False outside.

Return type:

numpy.ndarray

equimage.image_multiscale module

Multiscale transforms.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“dwt”, “swt”, “slt”, “anscombe”, “inverse_anscombe”.

equimage.image_multiscale.std_centered(data, std, **kwargs)

Return the standard deviation of a centered data set.

Parameters:
  • data (numpy.ndarray) – The data set (whose average must be zero).

  • std (str) –

    The method used to compute the standard deviation:

    • ”variance”: std_centered = sqrt(numpy.mean(data**2))

    • ”median”: std_centered = numpy.median(abs(data))/0.6744897501960817. This estimate is more robust to outliers.

  • kwargs – Optional keyword arguments are passed to the numpy.mean (std = “variance”) or numpy.median (std = “median”) functions.

Returns:

The standard deviation of data.

Return type:

float

equimage.image_multiscale.anscombe(data, gain=1.0, average=0.0, sigma=0.0)

Return the generalized Anscombe transform (gAt) of the input data.

The gAt transforms the sum data = gain*P+N of a white Poisson noise P and a white Gaussian noise N (characterized by its average and standard deviation sigma) into an approximate white Gaussian noise with variance 1.

For gain = 1, average = 0 and sigma = 0 (default), the gAt is the original Anscombe transform T(data) = 2*sqrt(data+3/8).

Parameters:
  • data (numpy.ndarray) – The input data.

  • gain (float, optional) – The gain (default 1).

  • average (float, optional) – The average of the Gaussian noise (default 0).

  • sigma (float, optional) – The standard deviation of the Gaussian noise (default 0).

Returns:

The generalized Anscombe transform of the input data.

Return type:

numpy.ndarray

equimage.image_multiscale.inverse_anscombe(data, gain=1.0, average=0.0, sigma=0.0)

Return the inverse generalized Anscombe transform of the input data.

See also

anscombe()

Parameters:
  • data (numpy.ndarray) – The input data.

  • gain (float, optional) – The gain (default 1).

  • average (float, optional) – The average of the Gaussian noise (default 0).

  • sigma (float, optional) – The standard deviation of the Gaussian noise (default 0).

Returns:

The inverse generalized Anscombe transform of the input data.

Return type:

numpy.ndarray
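
A minimal round-trip sketch with pure Poisson noise (numpy is used here only to generate illustrative data):

import numpy as np
from equimage import anscombe, inverse_anscombe
data = np.random.poisson(10., size = (256, 256)).astype(np.float64) # Poisson-distributed data.
stabilized = anscombe(data) # Approximate white Gaussian noise with variance 1.
restored = inverse_anscombe(stabilized) # Back to the original scale.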

class equimage.image_multiscale.WaveletTransform

Bases: object

Wavelet transform class.

iwt(asarray=False)

Inverse wavelet transform.

Parameters:

asarray (bool, optional) – If True, return the inverse wavelet transform as a numpy.ndarray. If False (default), return the inverse wavelet transform as an Image object if the original image was an Image object, and as a numpy.ndarray otherwise.

Returns:

The inverse wavelet transform of the object.

Return type:

Image or numpy.ndarray

copy()

Return a (deep) copy of the object.

Returns:

A copy of the object.

Return type:

WaveletTransform

apply_same_transform(image)

Apply the wavelet transform of the object to the input image.

Parameters:

image (Image or numpy.ndarray) – The input image.

Returns:

The wavelet transform of the input image.

Return type:

WaveletTransform

scale_levels(mult, inplace=False)

Scale wavelet levels.

Parameters:
  • mult (numpy.ndarray or dict) – The scaling factor for each wavelet level. Level 0 is the smallest scale. If a dictionary, must be of the form {level: scaling factor, …} (e.g. {0: .8, 1: 1.5}). Default scaling factor is 1 for all unspecified wavelet levels.

  • inplace (bool, optional) – If True, update the object “in place”; if False (default), return a new WaveletTransform object.

Returns:

The updated WaveletTransform object.

Return type:

WaveletTransform

threshold_levels(threshold, mode='soft', inplace=False)

Threshold wavelet levels.

Parameters:
  • threshold (numpy.ndarray or dict) – The threshold for each wavelet level. Level 0 is the smallest scale. Can be a 1D array [threshold for each level], a 2D array [threshold for each level (rows) & channel (columns)], or a dictionary of the form {level: threshold, …} or of the form {level: (threshold channel #1, threshold channel #2, …), …} (e.g., {0: 1.e-2, 1: 1.e-3}). Default threshold is 0 for all unspecified wavelet levels.

  • mode (string, optional) –

    The thresholding mode:

    • ”soft” (default): Wavelet coefficients with absolute value < threshold are replaced by 0, while those with absolute value >= threshold are shrunk toward 0 by the value of threshold.

    • ”hard”: Wavelet coefficients with absolute value < threshold are replaced by 0, while those with absolute value >= threshold are left unchanged.

    • ”garrote”: Non-negative Garrote threshold (soft for small wavelet coefficients, and hard for large wavelet coefficients).

    • ”greater”: Wavelet coefficients < threshold are replaced by 0.

    • ”less”: Wavelet coefficients > threshold are replaced by 0.

  • inplace (bool, optional) – If True, update the object “in place”; if False (default), return a new WaveletTransform object.

Returns:

The updated WaveletTransform object.

Return type:

WaveletTransform

threshold_firm_levels(thresholds, inplace=False)

Firm threshold of wavelet levels.

Firm thresholding behaves the same as soft-thresholding for wavelet coefficients with absolute values below threshold_low, and the same as hard-thresholding for wavelet coefficients with absolute values above threshold_high. For intermediate values, the outcome is in between soft and hard thresholding.

See also

WaveletTransform.threshold()

Parameters:
  • thresholds (numpy.ndarray or dict) – The thresholds for each wavelet level. Level 0 is the smallest scale. Can be a 2D array [threshold_low (column 1) and threshold_high (column 2) for each level (rows)], a 3D array [threshold_low and threshold_high (second axis) for each level (first axis) & channel (third axis)], or a dictionary of the form {level: (threshold_low, threshold_high), …} or of the form {level: ((threshold_low channel #1, threshold_low channel #2, …), (threshold_high channel #1, threshold_high channel #2, …)), …} (e.g. {0: (1.e-2, 5e-2), 1: (1.e-3, 5e-3)}). Default thresholds are (threshold_low = 0, threshold_high = 0) for all unspecified wavelet levels.

  • inplace (bool, optional) – If True, update the object “in place”; if False (default), return a new WaveletTransform object.

Returns:

The updated WaveletTransform object.

Return type:

WaveletTransform

noise_scale_factors(std='median', numerical=False, size=None, samples=1)

Compute the standard deviation of a white gaussian noise with variance 1 at all wavelet levels.

This method returns the partition of a white gaussian noise with variance 1 across all wavelet levels. It does so analytically when the distribution of the variance is known for the object's transform and wavelet. If not, it does so numerically, by transforming random images with white gaussian noise and computing the standard deviation of the wavelet coefficients at all scales.

Parameters:
  • std (str, optional) – The method used to compute standard deviations. Can be “variance” or “median” (default). See std_centered() for details.

  • numerical (bool, optional) – If False (default), use analytical results when known. If True, always compute the standard deviations numerically.

  • size (tuple of int, optional) – The size (height, width) of the random images used to compute the standard deviations numerically. If None, defaults to the object image size.

  • samples (int, optional) – The number of random images used to compute the standard deviations numerically. The standard deviations of all random images are averaged at each scale.

Returns:

The standard deviation of a white gaussian noise with variance 1 at all wavelet levels. Level #0 is the smallest scale.

Return type:

numpy.ndarray

estimate_noise0(std='median', clip=None, eps=0.001, maxit=8)

Estimate noise as the standard deviation of the wavelet coefficients at the smallest scale.

This method estimates the noise of the image as the standard deviation sigma0 of the (diagonal) wavelet coefficients at the smallest scale (level #0). If the clip kwarg is provided, it then excludes wavelets whose absolute coefficients are greater than clip*sigma0, and iterates until sigma0 is converged.

Parameters:
  • std (str, optional) – The method used to compute standard deviations. Can be “variance” or “median” (default). See std_centered() for details.

  • clip (float, optional) – If not None (default), exclude wavelets whose absolute coefficients are greater than clip*sigma0 and iterate until sigma0 is converged (see the eps and maxit kwargs).

  • eps (float, optional) – If clip is not None, iterate until abs(delta sigma0) < eps*sigma0, where delta sigma0 is the variation of sigma0 between two successive iterations. Default is 1e-3.

  • maxit (int, optional) – Maximum number of iterations if clip is not None. Default is 8.

Returns:

The noise sigma0 in each channel.

Return type:

numpy.ndarray

estimate_noise(std='median', clip=None, eps=0.001, maxit=8, scale_factors=None)

Estimate noise at each wavelet level.

This method first estimates the noise at the smallest scale as the standard deviation sigma0 of the (diagonal) wavelet coefficients at level #0. It then extrapolates sigma0 to all wavelet levels assuming the noise is white and gaussian.

Parameters:
  • std (str, optional) – The method used to compute standard deviations. Can be “variance” or “median” (default). See std_centered() for details.

  • clip (float, optional) – If not None (default), exclude level #0 wavelets whose absolute coefficients are greater than clip*sigma0 and iterate until sigma0 is converged (see the eps and maxit kwargs).

  • eps (float, optional) – If clip is not None, iterate until abs(delta sigma0) < eps*sigma0, where delta sigma0 is the variation of sigma0 between two successive iterations. Default is 1e-3.

  • maxit (int, optional) – Maximum number of iterations if clip is not None. Default is 8.

  • scale_factors (numpy.ndarray) – The expected standard deviation of a white gaussian noise with variance 1 at each wavelet level. If None (default), this method calls WaveletTransform.noise_scale_factors() to compute these factors.

Returns:

The noise in each channel (columns) and wavelet level (rows), and the total noise in each channel.

Return type:

numpy.ndarray, numpy.ndarray

VisuShrink_clip()

Return the VisuShrink clip factor.

The VisuShrink method computes the thresholds for the wavelet coefficients from the standard deviation sigma of the noise in each level as threshold = clip*sigma, with clip = sqrt(2*log(npixels)) and npixels the number of pixels in the image.

Note

Borrowed from scikit-image. See D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage”, Biometrika 81, 425 (1994) (DOI:10.1093/biomet/81.3.425).

Returns:

The VisuShrink clip factor clip = sqrt(2*log(npixels)).

Return type:

float

VisuShrink(sigmas, mode='soft', inplace=False)

Threshold wavelet coefficients using the VisuShrink method.

The VisuShrink method computes the thresholds for the wavelet coefficients from the standard deviations sigma of the noise in each level as threshold = clip*sigma, with clip = sqrt(2*log(npixels)) and npixels the number of pixels in the image.

VisuShrink produces softer images than BayesShrink (see WaveletTransform.BayesShrink()), but may oversmooth and lose many details.

Note

Borrowed from scikit-image. See D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage”, Biometrika 81, 425 (1994) (DOI:10.1093/biomet/81.3.425).

Parameters:
  • sigmas (numpy.ndarray) – The noise in each channel (columns) and wavelet level (rows).

  • mode (string, optional) –

    The thresholding mode:

    • ”soft” (default): Wavelet coefficients with absolute value < threshold are replaced by 0, while those with absolute value >= threshold are shrunk toward 0 by the value of threshold.

    • ”hard”: Wavelet coefficients with absolute value < threshold are replaced by 0, while those with absolute value >= threshold are left unchanged.

    • ”garrote”: Non-negative Garrote threshold (soft for small wavelet coefficients, and hard for large wavelet coefficients).

  • inplace (bool, optional) – If True, update the object “in place”; if False (default), return a new WaveletTransform object.

Returns:

The updated WaveletTransform object.

Return type:

WaveletTransform

BayesShrink(sigmas, std='median', mode='soft', inplace=False)

Threshold wavelet coefficients using the BayesShrink method.

This method computes the thresholds for the wavelet coefficients from the standard deviation sigma of the noise in each level as threshold = sigma²/sqrt(<c²>-sigma²), where <c²> is the variance of the wavelet coefficients.

This level-dependent strategy preserves more details than the VisuShrink method (see WaveletTransform.VisuShrink()).

Note

Borrowed from scikit-image. See Chang, S. Grace, Bin Yu, and Martin Vetterli. “Adaptive wavelet thresholding for image denoising and compression”, IEEE Transactions on Image Processing 9, 1532 (2000) (DOI:10.1109/83.862633).

Parameters:
  • sigmas (numpy.ndarray) – The noise in each channel (columns) and wavelet level (rows).

  • std (str, optional) – The method used to compute standard deviations. Can be “variance” or “median” (default). See std_centered() for details.

  • mode (string, optional) –

    The thresholding mode:

    • ”soft” (default): Wavelet coefficients with absolute value < threshold are replaced by 0, while those with absolute value >= threshold are shrunk toward 0 by the value of threshold.

    • ”hard”: Wavelet coefficients with absolute value < threshold are replaced by 0, while those with absolute value >= threshold are left unchanged.

    • ”garrote”: Non-negative Garrote threshold (soft for small wavelet coefficients, and hard for large wavelet coefficients).

  • inplace (bool, optional) – If True, update the object “in place”; if False (default), return a new WaveletTransform object.

Returns:

The updated WaveletTransform object.

Return type:

WaveletTransform
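
These thresholding methods combine into a simple denoising pipeline, sketched below (assuming image is an Image object; the number of levels is illustrative):

wt = image.swt(4) # Stationary wavelet transform over 4 levels.
sigmas, total = wt.estimate_noise() # Noise in each channel and wavelet level.
denoised = wt.BayesShrink(sigmas).iwt() # Threshold the wavelet coefficients and invert the transform.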

iterative_noise_reduction(std='median', clip=3.0, eps=0.001, maxit=8, scale_factors=None)

Iterative noise reduction.

This method first estimates the noise sigma in each channel and wavelet level (using WaveletTransform.estimate_noise()), then clips the wavelet coefficients whose absolute values are smaller than clip*sigma. It then computes the inverse wavelet transform I0 and the difference D0 = I-I0 with the original image I.

It next computes the wavelet transform of D0, estimates the noise sigma_D in each channel and wavelet level, clips the wavelet coefficients whose absolute values are smaller than clip*sigma_D, calculates the inverse wavelet transform dD0, and a new image I1 = I0+dD0 that contains the significant residual structures thus identified in D0.

It then repeats this procedure with D1 = I-I1, D2 = I-I2… until sigma_D is converged (which means that no further residual structure can be identified in Dn).

The method returns the denoised image In and the noise Dn = I-In. Dn shall hence be (almost) structureless.

Note

See: Image processing and data analysis: The multiscale approach, Jean-Luc Starck, Fionn Murtagh, and Albert Bijaoui, Cambridge University Press (1998). https://www.researchgate.net/publication/220688988_Image_Processing_and_Data_Analysis_The_Multiscale_Approach

Parameters:
  • std (str, optional) – The method used to compute standard deviations. Can be “variance” or “median” (default). See std_centered() for details.

  • clip (float, optional) – Clip wavelets whose absolute coefficients are smaller than clip*sigma, where sigma is the estimated noise at that wavelet level. Default is 3.

  • eps (float, optional) – Iterate until abs(delta sigma_D) < eps*sigma_D, where delta sigma_D is the variation of sigma_D between two successive iterations. Default is 1e-3.

  • maxit (int, optional) – Maximum number of iterations. Default is 8.

  • scale_factors (numpy.ndarray) – The expected standard deviation of a white gaussian noise with variance 1 at each wavelet level. If None (default), this method calls WaveletTransform.noise_scale_factors() to compute these factors.

Returns:

The denoised image In and the noise Dn = I-In.

Return type:

Image or numpy.ndarray

enhance_details(alphas, betas=1.0, thresholds=0.0, alphaA=1.0, betaA=1.0, inplace=False)

Enhance the detail coefficients of a starlet transformation.

This method (only implemented for starlet transformations at present) enhances the detail coefficients c → f(abs(c))*c of each wavelet level, with:

  • f(x) = 1 if x <= threshold,

  • f(x) = (cmax/x)*((x-c0)/(cmax-c0))**alpha if x > threshold,

where cmax = beta*max(abs(c)) and c0 is computed to ensure continuity at x = threshold.

With alpha < 1, this transformation enhances the detail coefficients whose absolute values are within [threshold, cmax], and softens the detail coefficients whose absolute values are above cmax (dynamic range compression).

Parameters:
  • alphas (float) – The alpha exponent for each wavelet level (expected < 1). Can be a scalar (same alpha for all scales) or a list/tuple/array (level #0 is the smallest scale). If alpha = 1, the wavelet level is not enhanced.

  • betas (float, optional) – The beta factor for each wavelet level (expected < 1). Can be a scalar (same beta for all scales) or a list/tuple/array (level #0 is the smallest scale).

  • thresholds (float, optional) – The threshold for each wavelet level. Can be a scalar (same threshold for all scales) or a list/tuple/array (level #0 is the smallest scale).

  • alphaA (float, optional) – The alpha exponent for the approximation coefficients (default 1 = not enhanced).

  • betaA (float, optional) – The beta factor for the approximation coefficients (default 1).

  • inplace (bool, optional) – If True, update the object “in place”; if False (default), return a new WaveletTransform object.

Returns:

The updated WaveletTransform object.

Return type:

WaveletTransform
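
A usage sketch for starlet detail enhancement (assuming image is an Image object; the parameter values are illustrative):

wt = image.slt(5) # Starlet transform over 5 levels.
enhanced = wt.enhance_details(alphas = .5, thresholds = .01).iwt() # Enhance details above the threshold and invert the transform.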

equimage.image_multiscale.dwt(image, levels, wavelet='default', mode='reflect')

Discrete wavelet transform of an image.

Parameters:
  • image (Image or numpy.ndarray) – The input image.

  • levels (int) – The number of wavelet levels.

  • wavelet (string or pywt.Wavelet object, optional) – The wavelet used for the transformation. Default is “default”, a shortcut for equimage.params.defwavelet.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

    • ”wrap”: the image is periodized (abcd → abcd|abcd|abcd).

Returns:

The discrete wavelet transform of the input image.

Return type:

WaveletTransform

equimage.image_multiscale.swt(image, levels, wavelet='default', mode='reflect', start=0)

Stationary wavelet transform (also known as undecimated or “à trous” transform) of an image.

Parameters:
  • image (Image or numpy.ndarray) – The input image.

  • levels (int) – The number of wavelet levels.

  • wavelet (string or pywt.Wavelet object, optional) – The wavelet used for the transformation. Default is “default”, a shortcut for equimage.params.defwavelet.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

    • ”wrap”: the image is periodized (abcd → abcd|abcd|abcd).

Returns:

The stationary wavelet transform of the input image.

Return type:

WaveletTransform

equimage.image_multiscale.slt(image, levels, starlet='cubic', mode='reflect')

Starlet (isotropic undecimated wavelet) transform of the input image.

Note

See: Image processing and data analysis: The multiscale approach, Jean-Luc Starck, Fionn Murtagh, and Albert Bijaoui, Cambridge University Press (1998). https://www.researchgate.net/publication/220688988_Image_Processing_and_Data_Analysis_The_Multiscale_Approach

Parameters:
  • image (Image or numpy.ndarray) – The input image.

  • levels (int) – The number of starlet levels.

  • starlet (string, optional) – The starlet used for the transformation (“linear” for the 3x3 linear spline or “cubic” for the 5x5 cubic spline). Default is “cubic”.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”wrap”: the image is periodized (abcd → abcd|abcd|abcd).

Returns:

The starlet transform of the input image.

Return type:

WaveletTransform

class equimage.image_multiscale.MixinImage

Bases: object

To be included in the Image class.

dwt(levels, wavelet='default', mode='reflect')

Discrete wavelet transform of the image.

Parameters:
  • levels (int) – The number of wavelet levels.

  • wavelet (string or pywt.Wavelet object, optional) – The wavelet used for the transformation. Default is “default”, a shortcut for equimage.params.defwavelet.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

    • ”wrap”: the image is periodized (abcd → abcd|abcd|abcd).

Returns:

The discrete wavelet transform of the image.

Return type:

WaveletTransform

swt(levels, wavelet='default', mode='reflect', start=0)

Stationary wavelet transform (also known as undecimated or “à trous” transform) of the image.

Parameters:
  • levels (int) – The number of wavelet levels.

  • wavelet (string or pywt.Wavelet object, optional) – The wavelet used for the transformation. Default is “default”, a shortcut for equimage.params.defwavelet.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

    • ”wrap”: the image is periodized (abcd → abcd|abcd|abcd).

Returns:

The stationary wavelet transform of the image.

Return type:

WaveletTransform

slt(levels, starlet='cubic', mode='reflect')

Starlet (isotropic undecimated wavelet) transform of the image.

Note

See: Image processing and data analysis: The multiscale approach, Jean-Luc Starck, Fionn Murtagh, and Albert Bijaoui, Cambridge University Press (1998). https://www.researchgate.net/publication/220688988_Image_Processing_and_Data_Analysis_The_Multiscale_Approach

Parameters:
  • levels (int) – The number of starlet levels.

  • starlet (string, optional) – The starlet used for the transformation (“linear” for the 3x3 linear spline or “cubic” for the 5x5 cubic spline). Default is “cubic”.

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”wrap”: the image is periodized (abcd → abcd|abcd|abcd).

Returns:

The starlet transform of the image.

Return type:

WaveletTransform

anscombe(gain=1.0, average=0.0, sigma=0.0)

Return the generalized Anscombe transform (gAt) of the image.

The gAt transforms the sum gain*P+N of a white Poisson noise P and a white Gaussian noise N (characterized by its average and standard deviation sigma) into an approximate white Gaussian noise with variance 1.

For gain = 1, average = 0 and sigma = 0 (default), the gAt is the original Anscombe transform T(image) = 2*sqrt(image+3/8).

Parameters:
  • gain (float, optional) – The gain (default 1).

  • average (float, optional) – The average of the Gaussian noise (default 0).

  • sigma (float, optional) – The standard deviation of the Gaussian noise (default 0).

Returns:

The generalized Anscombe transform of the image.

Return type:

Image

inverse_anscombe(gain=1.0, average=0.0, sigma=0.0)

Return the inverse generalized Anscombe transform of the image.

See also

Image.anscombe

Parameters:
  • gain (float, optional) – The gain (default 1).

  • average (float, optional) – The average of the Gaussian noise (default 0).

  • sigma (float, optional) – The standard deviation of the Gaussian noise (default 0).

Returns:

The inverse generalized Anscombe transform of the image.

Return type:

Image

equimage.image_skimage module

Interface with scikit-image.

class equimage.image_skimage.MixinImage

Bases: object

To be included in the Image class.

gaussian_filter(sigma, mode='reflect', channels='')

Convolve (blur) selected channels of the image with a gaussian.

Parameters:
  • sigma (float) – The standard deviation of the gaussian (pixels).

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

Returns:

The processed image.

Return type:

Image

See also

skimage.filters.gaussian()

butterworth_filter(cutoff, order=2, padding=0, channels='')

Apply a Butterworth low-pass filter to selected channels of the image.

The Butterworth filter reads, in the frequency domain:

\[H(f) = 1/(1+(f/f_c)^{2n})\]

where \(n\) is the order of the filter and \(f_c\) is the cut-off frequency. The data are Fast-Fourier Transformed back and forth to apply the filter.

Parameters:
  • cutoff (float) – The normalized cutoff frequency in [0, 1]. Namely, fc = (1-cutoff)fs/2 with fs the FFT sampling frequency.

  • order (int, optional) – The order of the filter (default 2).

  • padding (int, optional) – Number of pixels to pad the image with (default 0; increase if the filter leaves visible artifacts on the edges).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

Returns:

The processed image.

Return type:

Image

See also

skimage.filters.butterworth()

unsharp_mask(sigma, strength, channels='')

Apply an unsharp mask to selected channels of the image.

Given a channel \(C_{in}\), returns

\[C_{out} = C_{in}+\mathrm{strength}[C_{in}-\mathrm{BLUR}(C_{in})]\]

where BLUR(\(C_{in}\)) is the convolution of \(C_{in}\) with a gaussian of standard deviation sigma. As BLUR(\(C_{in}\)) is a low-pass filter, \(C_{in}\)-BLUR(\(C_{in}\)) is a high-pass filter whose output is admixed in the original image. This enhances details; the larger the mixing strength, the sharper the image, at the expense of noise and fidelity.

Parameters:
  • sigma (float) – The standard deviation of the gaussian (pixels).

  • strength (float) – The mixing strength of the unsharp mask.

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

Returns:

The processed image.

Return type:

Image

See also

skimage.filters.unsharp_mask()

estimate_noise()

Estimate the rms noise of the image, averaged over all channels.

Returns:

The rms noise of the image, averaged over the channels.

Return type:

float

See also

skimage.restoration.estimate_sigma()

To do:

Estimate the noise in arbitrary channels.

wavelets_filter(sigma, wavelet='coif4', mode='soft', method='BayesShrink', shifts=0, channels='L')

Wavelets filter for denoising selected channels of the image.

Performs a wavelets transform on the selected channels and filters the wavelets to reduce noise.

Parameters:
  • sigma (float) – The estimated noise standard deviation used to compute the wavelets filter threshold. The larger sigma, the smoother the output image.

  • wavelet (str, optional) –

    The wavelets used to decompose the image (default “coif4”). Can be any of the orthogonal wavelets of pywavelets.wavelist. Recommended wavelets are:

    • Daubechies wavelets (“db1”…”db8”),

    • Symlets (“sym2”…”sym8”),

    • Coiflets (“coif1”…”coif8”).

  • mode (str, optional) – Denoising method [either “soft” (default) or “hard”].

  • method (str, optional) – Thresholding method [either “BayesShrink” (default) or “VisuShrink”]. Separate thresholds are applied to the wavelets bands for “BayesShrink”, whereas a single threshold is applied for “VisuShrink” (best in principle for Gaussian noise, but may appear overly smooth).

  • shifts (int, optional) – Number of spin cycles (default 0). The wavelets transform is not shift-invariant. To mimic a shift-invariant transform as closely as possible, the output image is an average of the original image shifted shifts times in each direction, filtered, then shifted back to the original position.

  • channels (str, optional) – The selected channels (default “L” = luma). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

Returns:

The processed image.

Return type:

Image

See also

skimage.restoration.denoise_wavelet(), skimage.restoration.cycle_spin()
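
A hedged sketch pairing this filter with the noise estimate above (assuming image is an Image object; the number of shifts is illustrative):

sigma = image.estimate_noise() # RMS noise of the image, averaged over the channels.
denoised = image.wavelets_filter(sigma, shifts = 3, channels = "L") # Denoise the luma.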

bilateral_filter(sigma_space, sigma_color=0.1, mode='reflect', channels='')

Bilateral filter for denoising selected channels of the image.

The bilateral filter convolves the selected channel(s) \(C_{in}\) with a gaussian \(g_s\) of standard deviation sigma_space weighted by a gaussian \(g_c\) in color space (with standard deviation sigma_color):

\[C_{out}(\mathbf{r}) \propto \sum_{\mathbf{r}'} C_{in}(\mathbf{r}') g_s(|\mathbf{r}-\mathbf{r}'|) g_c(|C_{in}(\mathbf{r})-C_{in}(\mathbf{r}')|)\]

Therefore, the bilateral filter averages the neighboring pixels whose colors are sufficiently similar. It tends to produce cartoon-like (piecewise-constant) images.

Parameters:
  • sigma_space (float) – The standard deviation of the gaussian in real space (pixels).

  • sigma_color (float, optional) – The standard deviation of the gaussian in color space (default 0.1).

  • mode (str, optional) –

    How to extend the image across its boundaries:

    • ”reflect” (default): the image is reflected about the edge of the last pixel (abcd → dcba|abcd|dcba).

    • ”mirror”: the image is reflected about the center of the last pixel (abcd → dcb|abcd|cba).

    • ”nearest”: the image is padded with the value of the last pixel (abcd → aaaa|abcd|dddd).

    • ”zero”: the image is padded with zeros (abcd → 0000|abcd|0000).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

Returns:

The processed image.

Return type:

Image

See also

skimage.restoration.denoise_bilateral()

total_variation(weight=0.1, algorithm='Chambolle', channels='')

Total variation denoising of selected channels of the image.

Given a noisy channel \(C_{in}\), the total variation filter finds an image \(C_{out}\) with less total variation than \(C_{in}\) under the constraint that \(C_{out}\) remains similar to \(C_{in}\). This can be expressed as the Rudin–Osher–Fatemi (ROF) minimization problem:

\[\text{Minimize} \sum_{\mathbf{r}} |\nabla C_{out}(\mathbf{r})| + (\lambda/2)[C_{out}(\mathbf{r})-C_{in}(\mathbf{r})]^2\]

where the weight \(1/\lambda\) controls denoising (the larger the weight, the stronger the denoising at the expense of image fidelity). The minimization can either be performed with the Chambolle or split Bregman algorithm. Total variation denoising tends to produce cartoon-like (piecewise-constant) images.

Parameters:
  • weight (float, optional) – The denoising weight 1/λ (default 0.1). The larger the weight, the stronger the denoising, at the expense of image fidelity.

  • algorithm (str, optional) – The minimization algorithm [either “Chambolle” (default) or “Bregman” for the split Bregman algorithm].

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

Returns:

The processed image.

Return type:

Image

See also

skimage.restoration.denoise_tv_chambolle(), skimage.restoration.denoise_tv_bregman()

non_local_means(size=7, dist=11, h=0.01, sigma=0.0, fast=True, channels='')

Non-local means filter for denoising selected channels of the image.

Given a channel \(C_{in}\), returns

\[C_{out}(\mathbf{r}) \propto \sum_{\mathbf{r}'} f(\mathbf{r},\mathbf{r}') C_{in}(\mathbf{r}')\]

where:

\[f(\mathbf{r},\mathbf{r}') = \exp[-(M(\mathbf{r})-M(\mathbf{r}'))^2/h^2]\text{ for }|\mathbf{r}-\mathbf{r}'| < d\]

and \(M(\mathbf{r})\) is an average of \(C_{in}\) in a patch around \(\mathbf{r}\). Therefore, the non-local means filter averages the neighboring pixels whose patches (texture) are sufficiently similar. The non-local means filter can restore textures that would be blurred by other denoising algorithms such as the bilateral and total variation filters.

Parameters:
  • size (int, optional) – The size of the (square) patch used to compute M(r) (default 7).

  • dist (int, optional) – The maximal distance d between the patches (default 11).

  • h (float, optional) – The cut-off in gray levels (default 0.01; the filter is applied to all channels independently).

  • sigma (float, optional) – The standard deviation of the noise (if known), subtracted out when computing f(r, r’). This can lead to a modest improvement in image quality (keep default 0 if unknown).

  • fast (bool, optional) – If True (default), the pixels within the patch are averaged uniformly. If False, they are weighted by a gaussian (better but slower).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

Returns:

The processed image.

Return type:

Image

See also

skimage.restoration.denoise_nl_means()
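
For example (sketch; the cut-off h is illustrative):

  # image is the Image object set up in the total_variation example above.
  # Denoise the HSV value channel only, comparing 7x7 patches
  # up to a distance of 11 pixels.
  denoised = image.non_local_means(size=7, dist=11, h=0.02, channels="V")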

CLAHE(size=None, clip=0.01, nbins=256, channels='')

Contrast Limited Adaptive Histogram Equalization (CLAHE) of selected channels of the image.

See https://en.wikipedia.org/wiki/Adaptive_histogram_equalization.

Parameters:
  • size (int or pair of int, optional) – The size of the tiles (in pixels) used to sample local histograms, given as a single integer or as a pair of integers (width, height of the tiles). If None (default), the tile size defaults to 1/8 of the image width and height.

  • clip (float, optional) – The clip limit used to control contrast enhancement (default 0.01).

  • nbins (int, optional) – The number of bins in the local histograms (default 256).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html. CLAHE works best for the “V”, “L” and “L*” channels.

Returns:

The processed image.

Return type:

Image

See also

skimage.exposure.equalize_adapthist()
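
For example (sketch; the clip limit is illustrative):

  # image is the Image object set up in the total_variation example above.
  # Equalize the CIE lightness channel; clip limits the contrast enhancement.
  equalized = image.CLAHE(clip=0.02, channels="L*")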

equimage.image_stars module

Stars transformations.

class equimage.image_stars.MixinImage

Bases: object

To be included in the Image class.

starnet(midtone=0.5, starmask=False)

Remove the stars from the image with StarNet++.

See: https://www.starnetastro.com/

The image is saved as a TIFF file (16-bit integers per channel); the stars are removed from this file with StarNet++, and the starless image is finally reloaded in eQuimageLab and returned.

Warning

The command “starnet++” must be in the PATH.

Parameters:
  • midtone (float, optional) – If different from 0.5 (default), apply a midtone stretch to the input image before running StarNet++, then apply the inverse stretch to the output starless image. This can help StarNet++ find stars on low-contrast, linear RGB images. See Image.midtone_stretch(); midtone can either be “auto” (for an automatic stretch) or a float in ]0, 1[.

  • starmask (bool, optional) – If True, return both the starless image and the star mask. If False (default), only return the starless image [the star mask being the difference between the original image (self) and the starless].

Returns:

The starless image if starmask is False, and both the starless image and star mask if starmask is True.

Return type:

Image

resynthetize_stars_siril()

Resynthesize stars with Siril.

This method saves the image as a FITS file (32-bit floats per channel), then runs Siril to find the stars and resynthesize them with “perfect” Gaussian or Moffat shapes. It returns a synthetic star mask that must be blended with the original or starless image. This can be used to correct coma and other aberrations.

Note

Star resynthesis works best on the star mask produced by Image.starnet().

Warning

The command “siril-cli” must be in the PATH.

Returns:

The synthetic star mask produced by Siril.

Return type:

Image

reduce_stars(amount, starless=None)

Reduce the size of the stars on the image.

This method uses the starless image produced by StarNet++ to identify and unstretch the stars, effectively reducing their apparent diameter. It shall be applied to a stretched (non-linear) image.

Note

Inspired by https://gitlab.com/free-astro/siril-scripts/-/blob/main/processing/DSA-Star_Reduction.py by Rich Stevenson (Deep Space Astro).

See also

Image.starnet()

Parameters:
  • amount (float) – The strength of star reduction, expected in ]-1, 1[. amount < 0 reduces star size, while amount > 0 increases star size.

  • starless (Image, optional) – The starless image. If None (default), the starless image is computed with StarNet++. The command “starnet++” must then be in the PATH.

Returns:

The edited image, with the stars reduced.

Return type:

Image
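
A possible workflow combining these methods (sketch; it requires the starnet++ executable in the PATH and a stretched input image, and the amount is illustrative):

  # image is a stretched Image object.
  # Split the image into a starless image and a star mask,
  # then shrink the stars.
  starless, starmask = image.starnet(midtone="auto", starmask=True)
  reduced = image.reduce_stars(amount=-0.5, starless=starless)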

equimage.image_stats module

Image statistics & histograms.

equimage.image_stats.parse_channels(channels, errors=True)

Parse channel keys.

Parameters:
  • channels (str) –

    A combination of channel keys:

    • ”1”, “2”, “3” (or equivalently “R”, “G”, “B” for RGB images): The first/second/third channel (all images).

    • ”V”: The HSV value (RGB, HSV and grayscale images).

    • ”S”: The HSV saturation (RGB, HSV and grayscale images).

    • ”L’”: The HSL lightness (RGB, HSL and grayscale images).

    • ”S’”: The HSL saturation (RGB, HSL and grayscale images).

    • ”H”: The HSV hue (RGB, HSV and grayscale images).

    • ”L”: The luma (RGB and grayscale images).

    • ”L*”: The CIE lightness L* (RGB, grayscale, CIELab and CIELuv images).

    • ”c*”: The CIE chroma c* (CIELab and CIELuv images).

    • ”s*”: The CIE saturation s* (CIELuv images).

    • ”h*”: The CIE hue angle h* (CIELab and CIELuv images).

  • errors (bool, optional) – If False, discard unknown channel keys; if True (default), raise a ValueError on unknown keys.

Returns:

The list of channel keys.

Return type:

list
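
For example (sketch; the exact keys returned are assumptions based on the list above):

  from equimage.image_stats import parse_channels

  print(parse_channels("VS"))                # e.g. ['V', 'S']
  print(parse_channels("VX", errors=False))  # the unknown key "X" is discarded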

class equimage.image_stats.MixinImage

Bases: object

To be included in the Image class.

histograms(channels=None, nbins=None, recompute=False)

Compute histograms of selected channels of the image.

The histograms are both returned and embedded in the object as self.hists. Histograms already registered in self.hists are not recomputed unless required.

Parameters:
  • channels (str, optional) –

    A combination of keys for the selected channels:

    • ”1”, “2”, “3” (or equivalently “R”, “G”, “B” for RGB images): The first/second/third channel (all images).

    • ”V”: The HSV value (RGB, HSV and grayscale images).

    • ”S”: The HSV saturation (RGB, HSV and grayscale images).

    • ”L’”: The HSL lightness (RGB, HSL and grayscale images).

    • ”S’”: The HSL saturation (RGB, HSL and grayscale images).

    • ”H”: The HSV hue (RGB, HSV and grayscale images).

    • ”L”: The luma (RGB and grayscale images).

    • ”L*”: The CIE lightness L* (RGB, grayscale, CIELab and CIELuv images).

    • ”c*”: The CIE chroma c* (CIELab and CIELuv images).

    • ”s*”: The CIE saturation s* (CIELuv images).

    • ”h*”: The CIE hue angle h* (CIELab and CIELuv images).

    If channels ends with a “+”, the keys already registered in self.hists are appended to it. Default (if None) is “RGBL” for RGB images, “VS” for HSV images, “L’S’” for HSL images, “L” for grayscale images, “L*c*” for CIELab and “L*s*” for CIELuv images.

  • nbins (int, optional) – Number of bins within [0, 1] in the histograms. Set to equimage.params.maxhistbins if negative, and computed from the image statistics using Scott’s rule if zero. If None (default), set to equimage.params.defhistbins.

  • recompute (bool, optional) – If False (default), the histograms already registered in self.hists are not recomputed (provided they match nbins). If True, all histograms are recomputed.

Returns:

hists[key] for key in channels, with:

  • hists[key].name = channel name (provided for convenience).

  • hists[key].nbins = number of bins within [0, 1].

  • hists[key].edges = histogram bins edges.

  • hists[key].counts = histogram bins counts.

  • hists[key].color = suggested line color for histogram plots.

Return type:

dict
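
For example (sketch; image is an Image object, e.g. as set up in the total_variation example above):

  # Compute the histograms of the RGB channels and luma, with 256 bins.
  hists = image.histograms(channels="RGBL", nbins=256)
  for key in hists:
    print(hists[key].name, hists[key].nbins, hists[key].counts.sum())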

statistics(channels=None, exclude01=None, recompute=False)

Compute statistics of selected channels of the image.

The statistics are both returned and embedded in the object as self.stats. Statistics already registered in self.stats are not recomputed unless required.

Parameters:
  • channels (str, optional) –

    A combination of keys for the selected channels:

    • ”1”, “2”, “3” (or equivalently “R”, “G”, “B” for RGB images): The first/second/third channel (all images).

    • ”V”: The HSV value (RGB, HSV and grayscale images).

    • ”S”: The HSV saturation (RGB, HSV and grayscale images).

    • ”L’”: The HSL lightness (RGB, HSL and grayscale images).

    • ”S’”: The HSL saturation (RGB, HSL and grayscale images).

    • ”H”: The HSV hue (RGB, HSV and grayscale images).

    • ”L”: The luma (RGB and grayscale images).

    • ”L*”: The CIE lightness L* (RGB, grayscale, CIELab and CIELuv images).

    • ”c*”: The CIE chroma c* (CIELab and CIELuv images).

    • ”s*”: The CIE saturation s* (CIELuv images).

    • ”h*”: The CIE hue angle h* (CIELab and CIELuv images).

    If channels ends with a “+”, the keys already registered in self.stats are appended to it. Default (if None) is “RGBL” for RGB images, “VS” for HSV images, “L’S’” for HSL images, “L” for grayscale images, “L*c*” for CIELab and “L*s*” for CIELuv images.

  • exclude01 (bool, optional) – If True, exclude pixels <= 0 or >= 1 from the median and percentiles. Defaults to equimage.params.exclude01 if None.

  • recompute (bool, optional) – If False (default), the statistics already registered in self.stats are not recomputed. If True, all statistics are recomputed.

Returns:

stats[key] for key in channels, with:

  • stats[key].name = channel name (provided for convenience).

  • stats[key].width = image width (provided for convenience).

  • stats[key].height = image height (provided for convenience).

  • stats[key].npixels = number of image pixels = image width*image height (provided for convenience).

  • stats[key].minimum = minimum level.

  • stats[key].maximum = maximum level.

  • stats[key].percentiles = (pr25, pr50, pr75) = the 25th, 50th and 75th percentiles.

  • stats[key].median = pr50 = median level.

  • stats[key].zerocount = number of pixels <= 0.

  • stats[key].outcount = number of pixels > 1 (out-of-range).

  • stats[key].exclude01 = True if pixels <= 0 or >= 1 have been excluded from the median and percentiles, False otherwise.

  • stats[key].color = suggested text color for display.

Return type:

dict
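
For example (sketch; image is an Image object, e.g. as set up in the total_variation example above):

  # Print the median and out-of-range pixel counts of the RGB channels and luma.
  stats = image.statistics(channels="RGBL")
  for key in stats:
    s = stats[key]
    print(s.name, s.median, s.zerocount, s.outcount)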

equimage.image_stretch module

Histogram stretch.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“hms”, “mts”, “ghs”, “Dharmonic_through”.

equimage.image_stretch.hms(image, D)

Apply a harmonic stretch function to the input image.

The harmonic stretch function defined as

f(x) = (D+1)*x/(D*x+1)

is a rational interpolation from f(0) = 0 to f(1) = 1 with f’(0) = D+1.

Parameters:
  • image (numpy.ndarray) – The input image.

  • D (float) – The stretch parameter (expected > -1).

Returns:

The stretched image.

Return type:

numpy.ndarray
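
For example (sketch): with D = 2, f(x) = 3x/(2x+1), which maps 0.25 to 0.5:

  import numpy as np
  from equimage import hms

  x = np.linspace(0., 1., 5)
  y = hms(x, 2.)   # f(x) = 3x/(2x+1): f(0) = 0, f(0.25) = 0.5, f(1) = 1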

equimage.image_stretch.mts(image, midtone)

Apply a midtone stretch function to the input image.

The midtone stretch function defined as

f(x) = (midtone-1)*x/((2*midtone-1)*x-midtone)

is a rational interpolation from f(0) = 0 to f(1) = 1 with f(midtone) = 0.5. It is simply the harmonic stretch function with D = 1/midtone-2.

See also

hms()

Parameters:
  • image (numpy.ndarray) – The input image.

  • midtone (float) – The midtone level (expected in ]0, 1[).

Returns:

The stretched image.

Return type:

numpy.ndarray

equimage.image_stretch.ghs(image, lnD1, b, SYP, SPP=0.0, HPP=1.0)

Apply a generalized hyperbolic stretch function to the input image.

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Parameters:
  • image (numpy.ndarray) – The input image.

  • lnD1 (float) – The global stretch parameter ln(D+1) (must be >= 0).

  • b (float) – The local stretch parameter.

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float, optional) – The shadow protection point (default 0; expected in [0, SYP]).

  • HPP (float, optional) – The highlight protection point (default 1; expected in [SYP, 1]).

Returns:

The stretched image.

Return type:

numpy.ndarray

equimage.image_stretch.Dharmonic_through(x, y)

Return the stretch parameter D such that f(x) = y, with f the harmonic stretch function.

The harmonic stretch function defined as

f(x) = (D+1)*x/(D*x+1)

is a rational interpolation from f(0) = 0 to f(1) = 1 with f’(0) = D+1.

This function provides an alternative parametrization of f. It returns D such that f(x) = y.

See also

hms()

Parameters:
  • x (float) – The target input level (expected in ]0, 1[).

  • y (float) – The target output level (expected in ]0, 1[).

Returns:

The stretch parameter D such that f(x) = y.

Return type:

float
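
For example (sketch): solving f(x) = y for D from the definition gives D = (y-x)/(x(1-y)), so the harmonic stretch through (0.25, 0.5) has D = 2:

  import numpy as np
  from equimage import hms, Dharmonic_through

  D = Dharmonic_through(0.25, 0.5)   # D = (y-x)/(x*(1-y)) = 2
  y = hms(np.array([0.25]), D)       # -> array([0.5])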

class equimage.image_stretch.MixinImage

Bases: object

To be included in the Image class.

set_black_point(shadow, channels='', trans=True)

Set the black (shadow) level in selected channels of the image.

The selected channels are clipped below shadow and linearly stretched to map [shadow, 1] onto [0, 1]. The stretched output channels therefore fit in the [0, ∞[ range.

Parameters:
  • shadow (float) – The shadow level (expected < 1).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The processed image.

Return type:

Image

set_shadow_highlight(shadow, highlight, channels='', trans=True)

Set shadow and highlight levels in selected channels of the image.

The selected channels are clipped below shadow and above highlight and linearly stretched to map [shadow, highlight] onto [0, 1]. The stretched output channels therefore fit in the [0, 1] range.

Parameters:
  • shadow (float) – The shadow level (expected < 1).

  • highlight (float) – The highlight level (expected > shadow).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The processed image.

Return type:

Image

set_dynamic_range(fr, to, channels='', trans=True)

Set the dynamic range of selected channels of the image.

The selected channels are linearly stretched to map [fr[0], fr[1]] onto [to[0], to[1]], and clipped outside the [to[0], to[1]] range.

Parameters:
  • fr (a tuple or list of two floats) – The input range.

  • to (a tuple or list of two floats) – The output range.

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The processed image.

Return type:

Image

harmonic_stretch(D, inverse=False, channels='', trans=True)

Apply a harmonic stretch to selected channels of the image.

The harmonic stretch function defined as

f(x) = (D+1)*x/(D*x+1)

is a rational interpolation from f(0) = 0 to f(1) = 1 with f’(0) = D+1.

Parameters:
  • D (float) – The stretch parameter (expected > -1).

  • inverse (bool, optional) – Return the inverse transformation if True (default False).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

gharmonic_stretch(D, SYP=0.0, SPP=0.0, HPP=1.0, inverse=False, channels='', trans=True)

Apply a generalized harmonic stretch to selected channels of the image.

The generalized harmonic stretch function f is defined as:

  • f(x) = b1*x when x <= SPP,

  • f(x) = a2+b2/(1-D*(x-SYP)) when SPP <= x <= SYP,

  • f(x) = a3+b3/(1+D*(x-SYP)) when SYP <= x <= HPP,

  • f(x) = a4+b4*x when x >= HPP.

The coefficients a and b are computed so that f is continuous and differentiable. SYP is the “symmetry point”; SPP is the “shadow protection point” and HPP is the “highlight protection point”. SPP and HPP can be tuned to preserve contrast in the low and high brightness areas, respectively.

f(x) falls back to the “usual” harmonic stretch function

f(x) = (D+1)*x/(D*x+1)

when SPP = SYP = 0 and HPP = 1 (the defaults).

Moreover, the generalized hyperbolic stretch function with local stretch parameter b = 1 reduces to the generalized harmonic stretch function.

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Note

Code adapted from https://github.com/mikec1485/GHS/blob/main/src/scripts/GeneralisedHyperbolicStretch/lib/GHSStretch.js (published by Mike Cranfield under GNU GPL license).

Parameters:
  • D (float) – The stretch parameter (must be >= 0).

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float, optional) – The shadow protection point (default 0, must be <= SYP).

  • HPP (float, optional) – The highlight protection point (default 1, must be >= SYP).

  • inverse (bool, optional) – Return the inverse transformation if True (default False).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image
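
For example (sketch; the parameter values are illustrative):

  # image is the Image object set up in the total_variation example above.
  # Strong global stretch with the symmetry point at 0.02
  # and shadow protection below 0.01.
  stretched = image.gharmonic_stretch(D=10., SYP=0.02, SPP=0.01)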

midtone_stretch(midtone, inverse=False, channels='', trans=True)

Apply a midtone stretch to selected channels of the image.

The midtone stretch function defined as

f(x) = (midtone-1)*x/((2*midtone-1)*x-midtone)

is a rational interpolation from f(0) = 0 to f(1) = 1 with f(midtone) = 0.5. It is simply the harmonic stretch function with D = 1/midtone-2.

Parameters:
  • midtone (float) – The midtone level (expected in ]0, 1[).

  • inverse (bool, optional) – Return the inverse transformation if True (default False).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

midtone_transfer(midtone, shadow=0.0, highlight=1.0, low=0.0, high=1.0, channels='', trans=True)

Apply the shadow/midtone/highlight/low/high levels transfer function to selected channels of the image.

This method:

  1. Clips the input data in the [shadow, highlight] range and maps [shadow, highlight] onto [0, 1].

  2. Applies the midtone stretch function f(x) = (m-1)*x/((2*m-1)*x-m), with m = (midtone-shadow)/(highlight-shadow) the remapped midtone.

  3. Maps [low, high] onto [0, 1] and clips the output data outside the [0, 1] range.

Parameters:
  • midtone (float) – The input midtone level (expected in ]0, 1[).

  • shadow (float, optional) – The input shadow level (default 0; expected < midtone).

  • highlight (float, optional) – The input highlight level (default 1; expected > midtone).

  • low (float, optional) – The “low” output level (default 0; expected <= 0).

  • high (float, optional) – The “high” output level (default 1; expected >= 1).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

garcsinh_stretch(D, SYP=0.0, SPP=0.0, HPP=1.0, inverse=False, channels='', trans=True)

Apply a generalized arcsinh stretch to selected channels of the image.

The generalized arcsinh stretch function f is defined as:

  • f(x) = b1*x when x <= SPP,

  • f(x) = a2+b2*arcsinh(-D*(x-SYP)) when SPP <= x <= SYP,

  • f(x) = a3+b3*arcsinh( D*(x-SYP)) when SYP <= x <= HPP,

  • f(x) = a4+b4*x when x >= HPP.

The coefficients a and b are computed so that f is continuous and differentiable. SYP is the “symmetry point”; SPP is the “shadow protection point” and HPP is the “highlight protection point”. SPP and HPP can be tuned to preserve contrast in the low and high brightness areas, respectively.

f(x) falls back to the “standard” arcsinh stretch function

f(x) = arcsinh(D*x)/arcsinh(D)

when SPP = SYP = 0 and HPP = 1 (the defaults).

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Note

Code adapted from https://github.com/mikec1485/GHS/blob/main/src/scripts/GeneralisedHyperbolicStretch/lib/GHSStretch.js (published by Mike Cranfield under GNU GPL license).

Parameters:
  • D (float) – The stretch parameter (must be >= 0).

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float, optional) – The shadow protection point (default 0, must be <= SYP).

  • HPP (float, optional) – The highlight protection point (default 1, must be >= SYP).

  • inverse (bool, optional) – Return the inverse transformation if True (default False).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

ghyperbolic_stretch(lnD1, b, SYP, SPP=0.0, HPP=1.0, inverse=False, channels='', trans=True)

Apply a generalized hyperbolic stretch to selected channels of the image.

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Parameters:
  • lnD1 (float) – The global stretch parameter ln(D+1) (must be >= 0).

  • b (float) – The local stretch parameter.

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float, optional) – The shadow protection point (default 0, must be <= SYP).

  • HPP (float, optional) – The highlight protection point (default 1, must be >= SYP).

  • inverse (bool, optional) – Return the inverse transformation if True (default False).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

gpowerlaw_stretch(D, SYP, SPP=0.0, HPP=1.0, inverse=False, channels='', trans=True)

Apply a generalized power law stretch to selected channels of the image.

The generalized power law stretch function f is defined as:

  • f(x) = b1*x when x <= SPP,

  • f(x) = a2+b2*(1+(x-SYP))**(D+1) when SPP <= x <= SYP,

  • f(x) = a3+b3*(1-(x-SYP))**(D+1) when SYP <= x <= HPP,

  • f(x) = a4+b4*x when x >= HPP.

The coefficients a and b are computed so that f is continuous and differentiable. SYP is the “symmetry point”; SPP is the “shadow protection point” and HPP is the “highlight protection point”. SPP and HPP can be tuned to preserve contrast in the low and high brightness areas, respectively.

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Parameters:
  • D (float) – The stretch parameter (must be >= 0).

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float, optional) – The shadow protection point (default 0, must be <= SYP).

  • HPP (float, optional) – The highlight protection point (default 1, must be >= SYP).

  • inverse (bool, optional) – Return the inverse transformation if True (default False).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

gamma_stretch(gamma, channels='', trans=True)

Apply a gamma stretch to selected channels of the image.

The gamma stretch function is defined as:

f(x) = x**gamma

This method clips the selected channels below 0 before stretching.

Parameters:
  • gamma (float) – The stretch exponent (expected > 0).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

curve_stretch(f, channels='', trans=True)

Apply a curve stretch, defined by an arbitrary function f, to selected channels of the image.

f may be, e.g., an explicit function or a spline interpolator. It must be defined over the whole range spanned by the channel(s).

Note

This is practically a wrapper for Image.apply_channels().

Parameters:
  • f (function) – The stretch function applied to the selected channels.

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

spline_stretch(x, y, spline='akima', channels='', trans=True)

Apply a spline curve stretch to selected channels of the image.

The spline must be defined over the whole range spanned by the channel(s).

Parameters:
  • x (numpy.ndarray) – A sampling of the function y = f(x) interpolated by the spline.

  • y (numpy.ndarray) – A sampling of the function y = f(x) interpolated by the spline.

  • spline (int or str, optional) – The spline type. Either an integer (the order) for a B-spline, or the string “akima” (for an Akima spline, default).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image

statistical_stretch(median, boost=0.0, maxiter=5, accuracy=0.001, channels='', trans=True)

Statistical stretch of selected channels of the image.

This method:

  1. Applies a series of harmonic stretches to the selected channels in order to bring the average median of these channels to the target level.

  2. Optionally, boosts contrast above the target median with a specially designed curve stretch.

It is recommended to set the black point of the image before statistical stretch.

Note

This is a Python implementation of the statistical stretch algorithm of Seti Astro, published by Franklin Marek under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/). See: https://www.setiastro.com/statistical-stretch.

Hint

You can apply the harmonic stretches and the final contrast boost separately by calling this method twice with the same target median, first with boost = 0, then with boost > 0. As the average median of the image already matches the target median, no harmonic stretch will be applied on the second call.

Parameters:
  • median (float) – The target median (expected in ]0, 1[).

  • boost (float, optional) – The contrast boost (expected >= 0; default 0 = no boost).

  • maxiter (int, optional) – The maximum number of harmonic stretches applied to reach the target median (default 5). For a single channel, the algorithm actually converges in a single iteration.

  • accuracy (float, optional) – The target accuracy of the median (default 0.001).

  • channels (str, optional) – The selected channels (default “” = all channels). See Image.apply_channels() or https://astro.ymniquet.fr/codes/equimagelab/docs/channels.html.

  • trans (bool, optional) – If True (default), embed the transformation in the output image as output.trans (see Image.apply_channels()).

Returns:

The stretched image.

Return type:

Image
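
For example, following the hint above (sketch; the target median and boost are illustrative):

  # image is a linear Image object.
  # Set the black point, bring the average median to 0.25,
  # then boost the contrast above the target median.
  image = image.set_black_point(shadow=0.001)
  image = image.statistical_stretch(median=0.25)
  image = image.statistical_stretch(median=0.25, boost=0.5)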

equimage.image_utils module

Image utils.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“is_valid_image”, “clip”, “blend”.

equimage.image_utils.is_valid_image(image)

Return True if the input array is a valid image candidate, False otherwise.

Parameters:

image (numpy.ndarray) – The image candidate.

Returns:

True if the input array is a valid image candidate, False otherwise.

Return type:

bool

equimage.image_utils.clip(image, vmin=0.0, vmax=1.0)

Clip the input image in the range [vmin, vmax].

Parameters:
  • image (numpy.ndarray) – The input image.

  • vmin (float, optional) – The lower clip bound (default 0).

  • vmax (float, optional) – The upper clip bound (default 1).

Returns:

The clipped image.

Return type:

numpy.ndarray

equimage.image_utils.blend(image1, image2, mixing)

Blend two images.

Returns image1*(1-mixing)+image2*mixing.

Parameters:
  • image1 (numpy.ndarray) – The first image.

  • image2 (numpy.ndarray) – The second image.

  • mixing (float or numpy.ndarray for pixel-dependent mixing) – The mixing coefficient(s).

Returns:

The blended image image1*(1-mixing)+image2*mixing.

Return type:

numpy.ndarray
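
For example (sketch): mixing can be a scalar or broadcast pixel-wise:

  import numpy as np
  from equimage import blend

  image1 = np.zeros((3, 64, 64), dtype=np.float32)
  image2 = np.ones((3, 64, 64), dtype=np.float32)
  half = blend(image1, image2, 0.5)  # uniform 50/50 mix
  # Pixel-dependent mixing: a left-to-right ramp broadcast along the width.
  ramp = blend(image1, image2, np.linspace(0., 1., 64, dtype=np.float32))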

class equimage.image_utils.MixinImage

Bases: object

To be included in the Image class.

is_out_of_range()

Return True if the image is out-of-range (data < 0 or > 1 in any channel), False otherwise.

Returns:

True if the image is out-of-range, False otherwise.

Return type:

bool

empty()

Return an empty image with the same size as the object.

Returns:

An empty image with the same size as self.

Return type:

Image

black()

Return a black image with the same size as the object.

Returns:

A black image with the same size as self.

Return type:

Image

clip(vmin=0.0, vmax=1.0)

Clip the image in the range [vmin, vmax].

See also

Image.clip_channels() to clip specific channels.

Parameters:
  • vmin (float, optional) – The lower clip bound (default 0).

  • vmax (float, optional) – The upper clip bound (default 1).

Returns:

The clipped image.

Return type:

Image

scale_pixels(source, target, cutoff=None)

Scale all pixels of the image by the ratio target/source.

Wherever abs(source) < cutoff, set all channels to target.

Parameters:
  • source (numpy.ndarray) – The source values for scaling (must be the same size as the image).

  • target (numpy.ndarray) – The target values for scaling (must be the same size as the image).

  • cutoff (float, optional) – Threshold for scaling. If None, defaults to equimage.helpers.fpepsilon(source.dtype).

Returns:

The scaled image.

Return type:

Image

blend(image, mixing)

Blend the object with the input image.

Returns self*(1-mixing)+image*mixing. The images must share the same shape, color space and color model.

Parameters:
  • image (Image) – The image to blend with.

  • mixing (float or numpy.ndarray for pixel-dependent mixing) – The mixing coefficient(s).

Returns:

The blended image self*(1-mixing)+image*mixing.

Return type:

Image

equimage.imports module

eQuimage top-level symbols.

This imports relevant symbols from the submodules into the equimage/equimagelab namespace. These symbols are defined by the __all__ list (if any) of each submodule, and listed in their docstring.

Also, the methods of the MixinImage class of each submodule are imported in the Image class.

equimage.params module

Image processing parameters.

The following symbols are imported in the equimage/equimagelab namespaces for convenience:

“get_RGB_luma”, “set_RGB_luma”.

equimage.params.get_image_type()

Return the image type.

Returns:

The image type, either “float32” (for 32-bit floats) or “float64” (for 64-bit floats).

Return type:

str

equimage.params.set_image_type(dtype)

Set image type.

Parameters:

dtype (str) – The image type. Can be either “float32” (for 32-bit floats) or “float64” (for 64-bit floats).

equimage.params.get_CIE_params()

Return the CIE illuminant and observer.

Returns:

The CIE illuminant and observer as a tuple of strings.

Return type:

str, str

equimage.params.set_CIE_params(illuminant, observer)

Set CIE illuminant and observer.

Parameters:
  • illuminant (str) – The name of the standard illuminant. Can be “A”, “B”, “C”, “D50”, “D55”, “D65”, “D75”, or “E”. See https://en.wikipedia.org/wiki/Standard_illuminant.

  • observer (str) – The name of the observer. Can be “2” (2-degree observer) or “10” (10-degree observer).

equimage.params.get_RGB_luma()

Return the RGB weights rgbluma of the luma.

The luma L of an image is the average of the RGB components weighted by rgbluma:

L = rgbluma[0]*image[0]+rgbluma[1]*image[1]+rgbluma[2]*image[2]

Returns:

The red, green, blue weights rgbluma of the luma.

Return type:

float, float, float

equimage.params.set_RGB_luma(rgb, verbose=True)

Set the RGB weights of the luma.

Parameters:
  • rgb

    The RGB weights of the luma as:

    • a tuple, list or array of the (red, green, blue) weights. They will be normalized so that their sum is 1.

    • the string “uniform”: the RGB weights are set to (1/3, 1/3, 1/3).

    • the string “human”: the RGB weights are set to (.212671, .715160, .072169). The luma is then the luminance for lRGB images, and an approximate substitute for the lightness for sRGB images.

  • verbose (bool, optional) – If True (default), print the updated definition of the luma.
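
For example (sketch):

  from equimage import get_RGB_luma, set_RGB_luma

  set_RGB_luma("human")  # L = .212671*R+.715160*G+.072169*B
  print(get_RGB_luma())  # -> the (red, green, blue) weights of the luma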

equimage.params.set_max_hist_bins(n)

Set the maximum number of bins in the histograms.

Parameters:

n (int) – The maximum number of bins within [0, 1].

equimage.params.set_default_hist_bins(n)

Set the default number of bins in the histograms.

Parameters:

n (int) – If strictly positive, the default number of bins within [0, 1] (practically limited to equimage.params.maxhistbins). If zero, the number of bins is computed according to the statistics of each image. If strictly negative, the number of bins is set to equimage.params.maxhistbins.

equimage.stretchfunctions module

Histogram stretch functions.

equimage.stretchfunctions.shadow_stretch_function(x, shadow)

Return the linear shadow stretch function f(x).

The input data x is clipped below shadow and linearly stretched to map [shadow, 1] onto [0, 1]. The stretched output data therefore fits in the [0, ∞[ range.

Parameters:
  • x (numpy.ndarray) – The input data.

  • shadow (float) – The shadow level (expected < 1).

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.shadow_highlight_stretch_function(x, shadow, highlight)

Return the linear shadow/highlight stretch function f(x).

The input data x is clipped below shadow and above highlight and linearly stretched to map [shadow, highlight] onto [0, 1]. The stretched output data therefore fits in the [0, 1] range.

Parameters:
  • x (numpy.ndarray) – The input data.

  • shadow (float) – The shadow level (expected < 1).

  • highlight (float) – The highlight level (expected > shadow).

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.dynamic_range_stretch_function(x, fr, to)

Return the linear dynamic range stretch function f(x).

The input data x is linearly stretched to map [fr[0], fr[1]] onto [to[0], to[1]], then clipped outside the [to[0], to[1]] range.

Parameters:
  • x (numpy.ndarray) – The input data.

  • fr (a tuple or list of two floats) – The input range.

  • to (a tuple or list of two floats) – The output range.

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.harmonic_stretch_function(x, D, inverse)

Return the harmonic stretch function f(x).

The harmonic stretch function defined as

f(x) = (D+1)*x/(D*x+1)

is a rational interpolation from f(0) = 0 to f(1) = 1 with f’(0) = D+1.

Parameters:
  • x (numpy.ndarray) – The input data.

  • D (float) – The stretch parameter (expected > -1).

  • inverse (bool) – Return the inverse stretch function if True.

Returns:

The stretched data.

Return type:

numpy.ndarray
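
For example (sketch): the inverse function undoes the stretch:

  import numpy as np
  from equimage.stretchfunctions import harmonic_stretch_function

  x = np.linspace(0., 1., 5)
  y = harmonic_stretch_function(x, 2., False)
  xback = harmonic_stretch_function(y, 2., True)   # recovers x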

equimage.stretchfunctions.gharmonic_stretch_function(x, D, SYP, SPP, HPP, inverse)

Return the generalized harmonic stretch function f(x).

The generalized harmonic stretch function is defined as:

  • f(x) = b1*x when x <= SPP,

  • f(x) = a2+b2/(1-D*(x-SYP)) when SPP <= x <= SYP,

  • f(x) = a3+b3/(1+D*(x-SYP)) when SYP <= x <= HPP,

  • f(x) = a4+b4*x when x >= HPP.

The coefficients a and b are computed so that f is continuous and differentiable.

f(x) falls back to the “usual” harmonic stretch function

f(x) = (D+1)*x/(D*x+1)

when SPP = SYP = 0 and HPP = 1.

Moreover, the generalized hyperbolic stretch function with local stretch parameter b = 1 reduces to the generalized harmonic stretch function.

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Note

Code adapted from https://github.com/mikec1485/GHS/blob/main/src/scripts/GeneralisedHyperbolicStretch/lib/GHSStretch.js (published by Mike Cranfield under GNU GPL license).

Parameters:
  • x (numpy.ndarray) – The input data.

  • D (float) – The stretch parameter (expected >= 0).

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float) – The shadow protection point (expected in [0, SYP]).

  • HPP (float) – The highlight protection point (expected in [SYP, 1]).

  • inverse (bool) – Return the inverse stretch function if True.

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.midtone_stretch_function(x, midtone, inverse)

Return the midtone stretch function f(x).

The midtone stretch function defined as

f(x) = (midtone-1)*x/((2*midtone-1)*x-midtone)

is a rational interpolation from f(0) = 0 to f(1) = 1 with f(midtone) = 0.5. It is simply the harmonic stretch function with D = 1/midtone-2.

Parameters:
  • x (numpy.ndarray) – The input data.

  • midtone (float) – The midtone level (expected in ]0, 1[).

  • inverse (bool) – Return the inverse stretch function if True.

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.midtone_transfer_function(x, shadow, midtone, highlight, low, high)

Return the shadow/midtone/highlight/low/high levels transfer function f(x).

This function:

  1. Clips the input data in the [shadow, highlight] range and maps [shadow, highlight] onto [0, 1].

  2. Applies the midtone stretch function f(x) = (m-1)*x/((2*m-1)*x-m), with m = (midtone-shadow)/(highlight-shadow) the remapped midtone.

  3. Maps [low, high] onto [0, 1] and clips the output data outside the [0, 1] range.

Parameters:
  • x (numpy.ndarray) – The input data.

  • midtone (float) – The input midtone level (expected in ]0, 1[).

  • shadow (float) – The input shadow level (expected < midtone).

  • highlight (float) – The input highlight level (expected > midtone).

  • low (float) – The “low” output level (expected <= 0).

  • high (float) – The “high” output level (expected >= 1).

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.arcsinh_stretch_function(x, D)

Return the arcsinh stretch function f(x).

The arcsinh stretch function is defined as:

f(x) = arcsinh(D*x)/arcsinh(D)

Parameters:
  • x (numpy.ndarray) – The input data.

  • D (float) – The stretch parameter (expected >= 0).

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.garcsinh_stretch_function(x, D, SYP, SPP, HPP, inverse)

Return the generalized arcsinh stretch function f(x).

This is a generalization of the arcsinh stretch function, defined as:

  • f(x) = b1*x when x <= SPP,

  • f(x) = a2+b2*arcsinh(-D*(x-SYP)) when SPP <= x <= SYP,

  • f(x) = a3+b3*arcsinh( D*(x-SYP)) when SYP <= x <= HPP,

  • f(x) = a4+b4*x when x >= HPP.

The coefficients a and b are computed so that f is continuous and differentiable.

f(x) falls back to the “usual” arcsinh stretch function

f(x) = arcsinh(D*x)/arcsinh(D)

when SPP = SYP = 0 and HPP = 1.

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Note

Code adapted from https://github.com/mikec1485/GHS/blob/main/src/scripts/GeneralisedHyperbolicStretch/lib/GHSStretch.js (published by Mike Cranfield under GNU GPL license).

Parameters:
  • x (numpy.ndarray) – The input data.

  • D (float) – The stretch parameter (expected >= 0).

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float) – The shadow protection point (expected in [0, SYP]).

  • HPP (float) – The highlight protection point (expected in [SYP, 1]).

  • inverse (bool) – Return the inverse stretch function if True.

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.ghyperbolic_stretch_function(x, logD1, b, SYP, SPP, HPP, inverse)

Return the generalized hyperbolic stretch function f(x).

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Note

Code adapted from https://github.com/mikec1485/GHS/blob/main/src/scripts/GeneralisedHyperbolicStretch/lib/GHSStretch.js (published by Mike Cranfield under GNU GPL license).

Parameters:
  • x (numpy.ndarray) – The input data.

  • logD1 (float) – The global stretch parameter ln(D+1) (expected >= 0).

  • b (float) – The local stretch parameter.

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float) – The shadow protection point (expected in [0, SYP]).

  • HPP (float) – The highlight protection point (expected in [SYP, 1]).

  • inverse (bool) – Return the inverse stretch function if True.

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.gpowerlaw_stretch_function(x, D, SYP, SPP, HPP, inverse)

Return the generalized power law stretch function f(x).

The generalized power law stretch function is defined as:

  • f(x) = b1*x when x <= SPP,

  • f(x) = a2+b2*(1+(x-SYP))**(D+1) when SPP <= x <= SYP,

  • f(x) = a3+b3*(1-(x-SYP))**(D+1) when SYP <= x <= HPP,

  • f(x) = a4+b4*x when x >= HPP.

The coefficients a and b are computed so that f is continuous and differentiable.

For details about generalized hyperbolic stretches, see: https://ghsastro.co.uk/.

Parameters:
  • x (numpy.ndarray) – The input data.

  • D (float) – The stretch parameter (expected >= 0).

  • SYP (float) – The symmetry point (expected in [0, 1]).

  • SPP (float) – The shadow protection point (expected in [0, SYP]).

  • HPP (float) – The highlight protection point (expected in [SYP, 1]).

  • inverse (bool) – Return the inverse stretch function if True.

Returns:

The stretched data.

Return type:

numpy.ndarray

equimage.stretchfunctions.gamma_stretch_function(x, gamma)

Return the gamma stretch function f(x).

The gamma stretch function is defined as:

f(x) = x**gamma

This function clips the input data x below 0 before stretching.

Parameters:
  • x (numpy.ndarray) – The input data.

  • gamma (float) – The stretch exponent (expected > 0).

Returns:

The stretched data.

Return type:

numpy.ndarray