RBA's PixInsight Processes Reference Guide [Pre-Release ed.]


Table of contents:
RBA'S PixInsight Processes Reference Guide
About this Reference Guide
How is this Reference Guide structured
ACDNR
Process > NoiseReduction
When to use ACDNR
Parameters
ACDNR Filter
Edge Protection
Lightness Mask
ATrousWaveletTransform
Process > Compatibility
When to use ATrousWaveletTransform
Parameters
Wavelet Layers
Detail Layer
Noise Reduction
K-Sigma Noise Thresholding
Deringing
Large-Scale Transfer Function
Dynamic Range Extension
AdaptiveStretch
Process > IntensityTransformations
When to use AdaptiveStretch
Parameters
Region of interest
Annotation
Process > Painting
When to use Annotation
Parameters
ArcsinhStretch
Process > IntensityTransformations
When to use ArcsinhStretch
Parameters
AssignICCProfile
Process > ColorManagement
When to use AssignICCProfile
Parameters
AssistedColorCalibration
Process > ColorCalibration
When to use AssistedColorCalibration
Parameters
AutoHistogram
Process > IntensityTransformations
When to use AutoHistogram
Parameters
AutomaticBackgroundExtractor
Process > BackgroundModelization
When to use ABE
Parameters
Sample Generation
Global Rejection
Local Rejection
Interpolation and Output
Target Image Correction
B3Estimator
Process > Flux
When to use B3Estimator
Parameters
Background References (1 and 2)
BackgroundNeutralization
Process > ColorCalibration
When to use BackgroundNeutralization
Parameters
Binarize
Process > IntensityTransformations
When to use Binarize
Parameters
Blink
Process > ImageInspection
When to use Blink
ChannelCombination
Process > ChannelManagement
When to use ChannelCombination
Parameters
ChannelExtraction
Process > ChannelManagement
When to use ChannelExtraction
Parameters
ChannelMatch
Process > Geometry
When to use ChannelMatch
Parameters
CloneStamp
Process > Painting
When to use CloneStamp
Parameters
ColorCalibration
Process > ColorCalibration
When to use ColorCalibration
Parameters
White Reference
Background Reference
ColorManagementSetup
Process > ColorManagement
When to use ColorManagementSetup
Parameters
Monitor Profile
System Settings
Default Profiles
Default Policies
Color Proofing
Global Options
ColorSaturation
Process > IntensityTransformations
When to use ColorSaturation
Parameters
CometAlignment
Process > ImageRegistration
When to use CometAlignment
Parameters
Target Frames
Format Hints
Output
Parameters
Subtract
ConvertToGrayscale
Process > ColorSpaceConversion
ConvertToRGBColor
Process > ColorSpaceConversion
Convolution
Process > Convolution
When to use Convolution
Parameters
Parametric
Library
Image
CosmeticCorrection
Process > ImageCalibration
When to use CosmeticCorrection
Parameters
Target Frames
Output
Use Master Dark
Use Auto detect
Use Defect List
Real Time Preview
CreateAlphaChannels
Process > ChannelManagement
When to use CreateAlphaChannels
Parameters
Crop
Process > Geometry
When to use Crop
Parameters
Margins/Anchors
Dimensions
Resolution
Process Mode
Fill Color
CurvesTransformation
Process > IntensityTransformations
When to use CurvesTransformation
Parameters
Curve Channels
Debayer
Process > Preprocessing
When to use Debayer
Parameters
Target Images
Format Hints
Output
Deconvolution
Process > Deconvolution
When to use Deconvolution
Parameters
PSF
Algorithms
Deringing
Wavelet regularization
Dynamic Range Extension
DefectMap
Process > ImageCalibration
When to use DefectMap
Parameters
DigitalDevelopment
Process > Obsolete
When to use DigitalDevelopment
Parameters
DDP Filter
DDP Color Emphasis
Divide
Process > Obsolete
When to use Divide
Parameters
DrizzleIntegration
Process > ImageIntegration
When to use DrizzleIntegration
Parameters
Input Data
Format Hints
Drizzle
Region of Interest
DynamicAlignment
Process > ImageRegistration
When to use DynamicAlignment
Parameters
Source and Target views / Selected Sample: x of z
Reference generation
Aligned Images
Registered Image
DynamicBackgroundExtraction
Process > BackgroundModelization
When to use DynamicBackgroundExtraction
Parameters
Target View / Selected Sample: x of z
Symmetries
Model Parameters (1)
Model Parameters (2)
Sample Generation
Model Image
Target Image Correction
DynamicCrop
Process > Geometry
When to use DynamicCrop
Parameters
Size/Position
Rotation
Scale
Interpolation
Fill Color
DynamicPSF
Process > Image
When to use DynamicPSF
Table columns
PSF Model Functions
Star Detection
Image Scale
ExponentialTransformation
Process > IntensityTransformations
When to use ExponentialTransformation
Parameters
ExtractAlphaChannels
Process > ColorManagement
When to use ExtractAlphaChannels
Parameters
Channels
Mode
FITSHeader
Process > Image
When to use FITSHeader
Parameters
FastRotation
Process > Geometry
When to use FastRotation
Parameters
FluxCalibration
Process > Flux
When to use FluxCalibration
Parameters
FourierTransform
Process > Fourier
When to use FourierTransform
Parameters
GradientHDRComposition
Process > GradientDomain
When to use GradientHDRComposition
Parameters
Target Frames
Parameters
GradientHDRCompression
Process > GradientDomain
When to use GradientHDRCompression
Parameters
GradientMergeMosaic
Process > GradientDomain
When to use GradientMergeMosaic
Parameters
Target Frames
Parameters
GREYCstoration
Process > NoiseReduction
When to use GREYCstoration
Parameters
HDRComposition
Process > ImageIntegration
When to use HDRComposition
Parameters
Input Images
Format Hints
HDR Composition
Fitting Region
HDRMultiscaleTransform
Process > MultiscaleProcessing
When to use HDRMultiscaleTransform
Parameters
Deringing
Midtones Balance
HistogramTransformation
Process > IntensityTransformations
When to use HistogramTransformation
Parameters
ICCProfileTransformation
Process > ColorManagement
When to use ICCProfileTransformation
Parameters
ImageCalibration
Process > ImageCalibration
When to use ImageCalibration
Parameters
Target Frames
Format Hints
Output Files
Pedestal
Overscan
Master Bias
Master Dark
Master Flat
ImageIdentifier
Process > Image
ImageIntegration
Process > ImageIntegration
When to use ImageIntegration
Parameters
Input Images
Format Hints
Image Integration
Pixel Rejection (1)
Pixel Rejection (2)
Pixel Rejection (3)
Large-Scale Pixel Rejection
Region of Interest
IntegerResample
Process > Geometry
When to use IntegerResample
Parameters
Dimensions
Resolution
InverseFourierTransform
Process > Fourier
When to use InverseFourierTransform
Parameters
Invert
Process > IntensityTransformations
When to use Invert
LRGBCombination
Process > ColorSpaces
When to use LRGBCombination
Parameters
Channel Weights
Transfer Functions
Chrominance Noise Reduction
LarsonSekanina
Process > Convolution
When to use LarsonSekanina
Parameters
Filter Parameters
Filter Application
Dynamic Range Extension
LinearFit
Process > ColorCalibration
When to use LinearFit
Parameters
LocalHistogramEqualization
Process > IntensityTransformations
When to use LocalHistogramEqualization
Parameters
LocalNormalization
Process > ImageCalibration
When to use LocalNormalization
Parameters
Outlier rejection
Support Files / Normalization
Target Images
Format Hints
Output Files
MaskedStretch
Process > IntensityTransformations
When to use MaskedStretch
Parameters
Region of Interest
MergeCFA
Process > Preprocessing
When to use MergeCFA
Parameters
MorphologicalTransformation
Process > Morphology
When to use MorphologicalTransformation
Parameters
Morphological Filter
Structuring Element
Thresholds
MultiscaleLinearTransform
Process > MultiscaleProcessing
Layers and Scales
When to use MultiscaleLinearTransform
Parameters
Layers
Detail Layer A/B
Noise Reduction
Linear Mask
K-Sigma Noise Thresholding
Deringing
Large-Scale Transfer Function
Dynamic Range Extension
MultiscaleMedianTransform
Process > MultiscaleProcessing
When to use MultiscaleMedianTransform
Parameters
Layers
Detail Layer A/B
Noise Reduction
Linear Mask
Dynamic Range Extension
NewImage
Process > Image
When to use NewImage
Parameters
Image Parameters
Initial Values
NoiseGenerator
Process > NoiseGeneration
When to use NoiseGenerator
Parameters
Distribution
PhotometricColorCalibration
Process > Photometry
When to use PhotometricColorCalibration
Parameters
Process Parameters
Image Parameters
Plate Solving Parameters
Advanced Plate Solving Parameters
Photometry Parameters
Background Neutralization
PixelMath
Process > PixelMath
When to use PixelMath
Parameters
Expressions
Destination
Pixel Math Expression Editor
RGBWorkingSpace
Process > ColorSpaces
When to use RGBWorkingSpace
Parameters
RangeSelection
Process > MaskGeneration
When to use RangeSelection
Parameters
ReadoutOptions
Process > Global
When to use ReadoutOptions
Parameters
Resample
Process > Geometry
When to use Resample
Parameters
Dimensions
Resolution
Process Mode
Rescale
Process > IntensityTransformations
When to use Rescale
Parameters
RestorationFilter
Process > Deconvolution
When to use RestorationFilter
Parameters
PSF
Noise Estimation
Filter Parameters
Deringing
Dynamic Range Extension
Rotation
Process > Geometry
When to use Rotation
Parameters
Interpolation
Fill Color
SampleFormatConversion
Process > Image
When to use SampleFormatConversion
ScreenTransferFunction
Process > IntensityTransformations
When to use ScreenTransferFunction
Parameters
SCNR
Process > NoiseReduction
When to use SCNR
Parameters
SimplexNoise
Process > NoiseGeneration
When to use SimplexNoise
Parameters
SplitCFA
Process > Preprocessing
When to use SplitCFA
Parameters
Target Frames
Output
StarAlignment
Process > ImageRegistration
When to use StarAlignment
Target Images
Format Hints
Output Images
Star Detection
Star Matching
Interpolation
StarGenerator
Process > Render
When to use StarGenerator
Parameters
StarMask
Process > MaskGeneration
When to use StarMask
Parameters
Structure Growth
Mask Generation
Mask Preprocessing
Statistics
Process > Image
When to use Statistics
Parameters
SubframeSelector
Process > Preprocessing
When to use SubframeSelector
Parameters
SubframeSelector Window
Subframes
System Parameters
Star Detector Parameters
Region of Interest
Format Hints
Output Images
Measurements Window
Parameters
Measurements Table
Measurements Graph
Expressions Window
Parameters
Superbias
Process > Preprocessing
When to use Superbias
Parameters
TGVDenoise
Process > NoiseReduction
When to use TGVDenoise
Parameters
Local Support
UnsharpMask
Process > Convolution
When to use UnsharpMask
Parameters
USM Filter
Deringing
Dynamic Range Extension


MASTERING PIXINSIGHT AND THE ART OF ASTROIMAGE PROCESSING

RBA's PixInsight Processes Reference Guide

Rogelio Bernal Andreo (Pre-Release April 2020)


PixInsight Processes Reference Guide (Annex to “PixInsight and the art of Astroimage Processing”)

Author: Rogelio Bernal Andreo, aka RBA
Published by: Rogelio Bernal Andreo, Sunnyvale, CA 94086, USA

© 2020 Rogelio Bernal Andreo, All Rights Reserved. No part of this publication may be used or reproduced or transmitted in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.

Adobe®, Adobe® Photoshop® and Adobe® Lightroom® are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. Microsoft®, Microsoft® Excel® and Microsoft® Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are the property of their respective owners.

For more information about the book “Mastering PixInsight”: http://www.deepskycolors.com/mastering-pixinsight.html
For deep-sky or nightscape workshops and astro-camps: http://www.deepskycolors.com/workshops.html
For speaking arrangements and assignments: Rogelio Bernal Andreo, [email protected]
For the latest news, images and updates: http://www.facebook.com/DeepSkyColors/ and @deepskycolors on Instagram


Table of Contents

About this Reference Guide ..... 1
How is this Reference Guide structured ..... 1
ACDNR ..... 3
ATrousWaveletTransform ..... 7
AdaptiveStretch ..... 13
Annotation ..... 15
ArcsinhStretch ..... 16
AssignICCProfile ..... 17
AssistedColorCalibration ..... 18
AutoHistogram ..... 20
AutomaticBackgroundExtractor ..... 22
B3Estimator ..... 25
BackgroundNeutralization ..... 27
Binarize ..... 29
Blink ..... 30
ChannelCombination ..... 31
ChannelExtraction ..... 32
ChannelMatch ..... 33
CloneStamp ..... 34
ColorCalibration ..... 36
ColorManagementSetup ..... 38
ColorSaturation ..... 42
CometAlignment ..... 44
ConvertToGrayscale ..... 48
ConvertToRGBColor ..... 48
Convolution ..... 49
CosmeticCorrection ..... 52
CreateAlphaChannels ..... 57
Crop ..... 58
CurvesTransformation ..... 60
Debayer ..... 63
Deconvolution ..... 66
DefectMap ..... 71
DigitalDevelopment ..... 72
Divide ..... 74
DrizzleIntegration ..... 76
DynamicAlignment ..... 80
DynamicBackgroundExtraction ..... 83
DynamicCrop ..... 88
DynamicPSF ..... 91
ExponentialTransformation ..... 95
ExtractAlphaChannels ..... 96
FITSHeader ..... 97
FastRotation ..... 98
FluxCalibration ..... 99
FourierTransform ..... 100
GradientHDRComposition ..... 101
GradientHDRCompression ..... 103
GradientMergeMosaic ..... 104
GREYCstoration ..... 106
HDRComposition ..... 109
HDRMultiscaleTransform ..... 111
HistogramTransformation ..... 113
ICCProfileTransformation ..... 119
ImageCalibration ..... 120
ImageIdentifier ..... 126
ImageIntegration ..... 126
IntegerResample ..... 136
InverseFourierTransform ..... 138
Invert ..... 139
LRGBCombination ..... 139
LarsonSekanina ..... 142
LinearFit ..... 144
LocalHistogramEqualization ..... 145
LocalNormalization ..... 146
MaskedStretch ..... 153
MergeCFA ..... 156
MorphologicalTransformation ..... 157
MultiscaleLinearTransform ..... 161
MultiscaleMedianTransform ..... 171
NewImage ..... 176
NoiseGenerator ..... 178
PhotometricColorCalibration ..... 179
PixelMath ..... 188
RGBWorkingSpace ..... 193
RangeSelection ..... 196
ReadoutOptions ..... 197
Resample ..... 201
Rescale ..... 204
RestorationFilter ..... 205
Rotation ..... 209
SampleFormatConversion ..... 210
ScreenTransferFunction ..... 211
SCNR ..... 213
SimplexNoise ..... 215
SplitCFA ..... 216
StarAlignment ..... 218
StarGenerator ..... 228
StarMask ..... 231
Statistics ..... 235
SubframeSelector ..... 236
Superbias ..... 248
TGVDenoise ..... 250
UnsharpMask ..... 253

RBA's PixInsight Processes Reference Guide

About this Reference Guide

This guide is aimed at being useful to novice, intermediate and advanced image processors. This means the guide has to reach up to an advanced level, which may make it a bit hard to read for those fairly new to astroimage processing. Novice users will benefit more from the main book, where workflows, techniques and many practical examples illustrated with real data are explained at a pace that's easy to follow. However, the guide can still be very useful to beginners when they're using some particular processing tool and would like to learn more about it, even if just the behavior of one particular parameter.

What this Reference Guide will not do:

1. It does not teach how to process your images.
2. It does not show practical examples or tutorials.
3. It does not explain how PixInsight's user interface works.

For all of the above and more, refer to the main book.

What the reference will (hopefully) do for you:

1. Provide a comprehensive description of each of the nearly 100 processes in PixInsight: what they are, how they work, when to use them and what they can do for us.
2. Describe every parameter, some extensively, as well as the effects of adjusting many of them, particularly those that offer tangible benefits for the process at hand.
3. Offer suggestions about the use of a particular tool within a processing workflow, when to use it, etc.

How is this Reference Guide structured

The reference guide is structured in a very simple way: alphabetical order. If you know the name of the process you're interested in, there is no need to guess under which category it was included.


Each process comes with an introduction that sometimes is just a few lines, while in other cases it extends to more than one page. The introduction is followed by a “When to use...” section that outlines when and why we should use each process, again, sometimes extensively. A list of all parameters and their explanation follows. Even though the guide describes how to use each process, recommended values, etc., all dialog window screenshots show the process with its default values. For illustrated practical examples, please refer to the main book.

The digital version of the guide is a dynamic/interactive PDF. The first time a process is mentioned within the documentation of another process, clicking on it will take you to its documentation.

Most processes are described inclusively, but without going too far. That is, the guide is designed so we can navigate to any process directly and have right there all the information we need. When describing all processes in PixInsight this can lead to continuous repetition, since many parameters are very similar or even identical among many processes. Not only that, there are many concepts particular to PixInsight that need to be understood when working with certain processes, yet an explanation of those concepts in every process that relates to them would probably be overkill. As a result, I aimed at a practical balance between repetition and referring to other parts of the book, while still keeping each process's documentation as inclusive as possible. All parameters are always described, even common or basic ones such as “Add Images”, “Toggle” or “Upper/Lower Limits”, while concepts or topics that require extended explanations may offer pointers about where to continue reading on the topic within the book.


ACDNR
Process > NoiseReduction

ACDNR stands for Adaptive Contrast-Driven Noise Reduction. It is a very flexible implementation of a noise reduction algorithm based on multiscale and mathematical morphology techniques. The idea behind ACDNR is, as with any good noise reduction tool, to perform efficient noise reduction while preserving details. ACDNR includes two mechanisms that work cooperatively: a special low-pass filter and an edge protection device. The low-pass filter smooths the image by removing or attenuating small-scale structures, and the edge protection mechanism prevents details and image structures from being damaged during low-pass filtering. ACDNR offers two identical sets of parameters: one for the lightness and another for the chrominance of color images. Chrominance parameters are applied to the CIE a* and b* components in the CIE L*a*b* color space, while lightness parameters are applied to the L component for color images, and to the nominal channel of grayscale images. In general, chrominance parameters are much less critical and we can define a stronger noise reduction than for the lightness.
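To make the lightness/chrominance split concrete, here is a minimal Python sketch of the general idea: denoise the L component more gently than the a* and b* components in CIE L*a*b*. This is not ACDNR's actual algorithm; a plain Gaussian low-pass (scipy) stands in for ACDNR's filter and edge protection, and the skimage color-conversion calls are assumptions about tooling, not anything PixInsight exposes.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import color

    def denoise_lab(rgb, sigma_lightness=1.5, sigma_chroma=3.0):
        """rgb: float array in [0, 1], shape (h, w, 3). Conceptual sketch only."""
        lab = color.rgb2lab(rgb)
        # Lightness (L) gets a gentler low-pass than chrominance (a*, b*),
        # mirroring the advice that chrominance tolerates stronger smoothing.
        lab[..., 0] = gaussian_filter(lab[..., 0], sigma_lightness)
        lab[..., 1] = gaussian_filter(lab[..., 1], sigma_chroma)
        lab[..., 2] = gaussian_filter(lab[..., 2], sigma_chroma)
        return np.clip(color.lab2rgb(lab), 0.0, 1.0)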

When to use ACDNR

ACDNR is a decent choice when we're trying to either tone down noise in an image or when we're trying to soften an image or a mask. Because we can apply noise reduction to the details (lightness) and the color (chrominance) separately, it also comes in handy when we're trying to limit noise reduction to just one of the two.


While ACDNR can be applied to linear and nonlinear images whenever noise reduction is desired, it is not usually the best choice for applying noise reduction to linear images. That said, ACDNR was one of the original processes that started PixInsight back around 2003, and today it is competing against other tools in PixInsight that offer more advanced noise reduction methods, such as TGVDenoise, MultiscaleLinearTransform or MultiscaleMedianTransform.

Parameters

ACDNR Filter

Apply: Since the ACDNR interface offers dual functionality by allowing us to define noise reduction for lightness and chrominance separately, we enable (or disable) this check box to apply (or not) the noise reduction to the lightness – should we have the Lightness tab active – or the chrominance – if the active tab is Chrominance.

Lightness mask: Enable/disable using the lightness mask for the lightness or chrominance noise reduction, depending on which tab we're on. Read below for more information about the lightness mask.

Std.Dev.: Standard deviation of the low-pass filter (in pixels). The low-pass filter is a mathematical function that is discretized on a small square matrix known as a kernel in image processing jargon. This parameter controls the size in pixels of the kernel used. The kernel size directly defines the sizes of the image structures that the low-pass filter will tend to remove. For example, standard deviations between 1 and 1.5 pixels are appropriate to remove the high-frequency noise that dominates most CCD images. Standard deviations between 2 and 3 pixels are quite usual when dealing with film images. Larger deviations, up to 4 or 6 pixels, can be used to smooth low-SNR regions of astronomical images (such as the sky background) with the help of protection masks.

Amount: This value, in the range from 0.1 to one, defines how the denoised and the original image are combined. A zero amount value would leave the image unchanged, and an amount value of one would replace the image with its denoised version completely. This parameter is especially useful when the ACDNR filter is used repeatedly (see the Iterations parameter below). At each iteration, amount can be used to re-inject a small fraction of the image resulting from the preceding iteration. This leads to a recursive procedure that can help in fine-tuning and stabilizing the overall process.
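As a rough illustration of how Amount mixes the smoothed result with the previous image at each pass, here is a tiny numpy sketch. The acdnr_like function and the Gaussian low-pass are hypothetical stand-ins, not ACDNR's real filter or edge protection.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def acdnr_like(image, std_dev=1.5, amount=0.8, iterations=3):
        """Illustrative only: Amount blends each smoothed pass with the previous result."""
        result = image.astype(np.float64)
        for _ in range(iterations):
            smoothed = gaussian_filter(result, std_dev)  # stand-in for ACDNR's low-pass filter
            result = (1.0 - amount) * result + amount * smoothed
        return result

With amount near zero the image is left mostly untouched; with amount equal to one each iteration is replaced entirely by its smoothed version.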


Iterations: This is the number of times that the low-pass filter is applied. The ACDNR filter is much more efficient when applied iteratively. A relatively small filter (with a low standard deviation) applied several times is in general preferable to a larger, more aggressive filter applied once. When three or more iterations are used, ACDNR's edge protection is usually much more efficient and yields more robust results. The Amount parameter (see above) can also be used along with iterations to turn ACDNR filtering into a recursive procedure, mixing the original and processed images.

Prefilter: If necessary, ACDNR can apply an initial filtering process to remove small-scale structures from the image. This can help achieve a more robust edge protection for the reasons explained above. Two prefiltering methods have been implemented: Multiscale and Multiscale Recursive. Both methods employ special wavelet-based routines to remove all bright and dark image structures smaller than two pixels. The recursive method is extremely efficient. This feature should only be used in the presence of huge amounts of noise, when all significant image structures have sizes well above the two-pixel limit.

Robustness: When ACDNR's edge protection has to operate in the presence of strong small-scale noise, it may have a hard time defining accurate edges of significant structures. For example, isolated noisy pixels can be very bright or dark, and their contributions to the definition of protected edges can be relevant. Robustness refers here to the ability of ACDNR to become immune to small-scale noise when discriminating significant image structures. Three robustness enforcing methods have been implemented: Weighted average, Unweighted average and Morphological median. In these three methods, a neighborhood is defined for each pixel and a reference value is calculated from the neighbor pixels, which is then used to command the edge protection device. Each method has its strong points. The method based on the morphological median is especially good at preserving sharp edges. On the other hand, the weighted average method can yield more natural-looking images. We can try both of them and see which is best for us, according to our preferences.

Structure size: Minimum structure size to be considered by the noise reduction algorithm.

Symmetry: When enabled, use the same threshold and overdrive parameters for both dark and bright side edge protection.

Edge Protection

We define an edge as a brightness variation that the edge protection mechanism tries to preserve (protect) from the actual noise reduction. If we consider an edge as the locus of a brightness change, then for each edge there is a dark side and a bright side. ACDNR's edge protection gives separate control over dark and bright sides of edges. For each side, there are two identical parameters, threshold and overdrive.

Threshold: This parameter defines the relative brightness difference that triggers the edge protection mechanism. For example, a threshold value of 0.05 means that the edge protection device will try to protect image structures defined by brightness changes equal to or greater than 5% with respect to their surrounding areas. Higher thresholds are less protective. Too high of a threshold value can allow excessive low-pass filtering, and thus lead to destruction of significant image features. Lower thresholds are more protective, but too low of a threshold can lead to poor noise reduction results. In general, protection thresholds are critical and require some trial and error work.

Overdrive: This parameter controls the strength of edge protection. When overdrive is zero (its default value), edge protection just tries to preserve the existing pixel values of protected edges. When overdrive is greater than zero, the edge protection mechanism tends to be more aggressive, exaggerating the contrast of protected edges. This parameter can be useful because it may allow a larger threshold value, which in turn gives better noise reduction, while still protecting significant edges. However, overdrive is an advanced parameter that requires experience and must always be used with care: an incorrect overdrive dosage can easily generate undesirable artifacts.

Star protection: When enabled, a protection mechanism is activated, in combination with the general edge protection mechanism, to prevent the low-pass filter (the noise reduction) from damaging stars. The Star threshold parameter then becomes available.

Star threshold: As a complement to the edge protection for bright sides, Star threshold allows us to define a threshold for star edge protection. As with the more general threshold parameter above, higher thresholds are less protective and stars may become softer or even disappear.

Lightness Mask

To improve ACDNR's flexibility, we can use an inverse lightness mask that modulates the noise reduction work. Where the mask is black, original (unprocessed) pixels are fully preserved; where the mask is white, noise reduction acts completely. Intermediate (gray) mask levels define a proportional mixture of unprocessed and processed pixel values. This mask can be useful to protect high SNR (signal to noise ratio) regions, while applying a strong noise reduction to low SNR regions. A typical example of this is smoothing the background of a deep-sky image while leaving bright regions intact.
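The blending idea behind the lightness mask can be summarized in a few lines of numpy. ACDNR builds and applies its mask internally; this sketch only shows how a mask in [0, 1] would mix unprocessed and denoised pixels, with the function name and broadcasting details being my own assumptions.

    import numpy as np

    def blend_with_lightness_mask(original, denoised, mask):
        """mask in [0, 1]: 0 = keep the original pixel, 1 = take the denoised pixel."""
        mask = np.clip(mask, 0.0, 1.0)
        if original.ndim == 3 and mask.ndim == 2:
            mask = mask[..., None]  # broadcast a single-channel mask over RGB
        return (1.0 - mask) * original + mask * denoised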


Removed wavelet layers: To create an effective mask, wavelets are used to soften the mask. Here we define the number of wavelet layers (starting from one) to be removed in order to build the mask.

Midtones/Shadows/Highlights: The ACDNR mask is generated and controlled with these three parameters. They define a simple histogram transform that is applied to a copy of the lightness that is used to mask the noise reduction process. Take into account that an inverse mask is always generated, which means that we must reverse our logic when varying these histogram parameters. Increasing the midtones tends to remove protection and lowering them causes the opposite effect, while increasing the shadows will remove protection very fast. Lowering the highlights will add protection, but very low values usually protect way too much.

Preview: To help achieve a correct mask with minimal effort, the ACDNR interface includes a special mask preview mode. When this mode is enabled, the ACDNR process simply generates the mask, copies it to the target image, and terminates execution. When used along with the Real-Time Preview window, this mask preview mode is particularly useful.

ATrousWaveletTransform
Process > Compatibility

ATrousWaveletTransform (often abbreviated as ATWT) is a rich and flexible processing tool that can be used to perform a wide variety of noise reduction and detail enhancement tasks. The à trous (with holes) algorithm is a powerful tool for multiscale image analysis. With ATWT we can perform a hierarchical decomposition of an image into a series of scale layers, also known as wavelet planes. Each layer contains only structures within a given range of characteristic dimensional scales in the space of a scaling function. The decomposition is done through a number of detail layers defined at growing characteristic scales, plus a final residual layer, which contains the rest of the unresolved structures. This concept is explained in more detail when we discuss the MultiscaleLinearTransform process, so please review it if you're new to these concepts. This multiscale approach offers many advantages, among which is that by isolating significant image structures within specific detail layers, detail enhancement can be carried out with high accuracy at any given scale. Similarly, if noise occurs at some specific dimensional scales in the image, as usually happens, by isolating it into appropriate detail layers we can reduce or remove the noise at the scales where it usually resides, without affecting significant structures or any details at different scales.
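For readers who want to see the decomposition idea in code, below is a compact numpy/scipy sketch of an à trous ("with holes") decomposition using the separable 5×5 B3 spline kernel described later in this chapter, with dyadic scales. It is a conceptual illustration under those assumptions, not PixInsight's implementation, and the function name is hypothetical.

    import numpy as np
    from scipy.ndimage import convolve1d

    # 1-D B3 spline scaling function; the 2-D kernel is its separable outer product.
    B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

    def atrous_decompose(image, n_layers=4):
        """Return [detail_1, ..., detail_n, residual] for a 2-D image (dyadic scales 1, 2, 4, ...)."""
        layers = []
        current = image.astype(np.float64)
        for j in range(n_layers):
            # Dilate the kernel by inserting 2**j - 1 zeros between taps ("with holes").
            kernel = np.zeros(4 * 2**j + 1)
            kernel[:: 2**j] = B3
            smooth = convolve1d(current, kernel, axis=0, mode='reflect')
            smooth = convolve1d(smooth, kernel, axis=1, mode='reflect')
            layers.append(current - smooth)  # structures near the 2**j pixel scale
            current = smooth
        layers.append(current)               # residual layer: everything larger
        return layers

Because each detail layer is simply the difference between two successive smoothings, summing all detail layers plus the residual reconstructs the original image exactly.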

When to use ATrousWaveletTransform

ATrousWaveletTransform was for many years the go-to tool in PixInsight to process images at different scales. When the MultiscaleLinearTransform tool was developed, ATWT became irrelevant and is now only included in PixInsight for compatibility with old scripts and such. The reason MultiscaleLinearTransform took over ATWT is that MultiscaleLinearTransform not only offers an improved version of everything ATWT does, but much more. Therefore, while a description of ATWT as well as its uses is included here for reference, we should not need to use ATWT, and should use MultiscaleLinearTransform (or MultiscaleMedianTransform) instead.

ATWT was traditionally used in many different situations, whether applied to an actual astroimage or a mask. It was often used to separate small structures from the image (such as stars), or to create smooth images that mostly emphasize the larger structures in the image.

Parameters

ATWT comprises two main sets of parameters, the first to define the layered decomposition process and the second for the scaling function used for wavelet transforms.

Wavelet Layers

Dyadic: Detail layers are generated for a growing scaling sequence of powers of two. The layers are generated for scales of 1, 2, 4, 8... pixels. For example, the fourth layer contains structures with characteristic scales between 5 and 8 pixels. This sequencing style should be selected if noise thresholding is being used.


Linear: When Linear is selected, the Scaling Sequence parameter is the constant difference in pixels between characteristic scales of two successive detail layers. Linear sequencing can be defined from one to sixteen pixels. For example, when Linear 1 is selected, detail layers are generated for the scaling sequence 1, 2, 3, ... Similarly, Linear 5 would generate the sequence 1, 6, 11, ...

Layers: This is the total number of generated detail layers. This number does not include the final residual layer (R), which is always generated. In PixInsight we can work with up to sixteen wavelet layers, which allows us to handle structures at really huge dimensional scales. Modifying large-scale structures can be very useful when processing many deep-sky images.

Scaling Function: Selecting the most appropriate scaling function is important because by appropriately tuning the shape and levels of the scaling function, we gain full control over how precisely the different dimensional scales are separated. In general, a smooth, slowly varying scaling function works well to isolate large scales, but it may not provide enough resolution to decompose images at smaller characteristic scales. Conversely, a sharp, peak-wise scaling function may be very good at isolating small-scale image features such as high-frequency noise, faint stars or tiny planetary and lunar details, but quite likely it will be useless for work at larger scales, such as the global shape of a galaxy or large Milky Way structures. In PixInsight, à trous wavelet scaling functions are defined as odd-sized square kernels. Filter elements are real numbers. Most usual scaling functions are defined as 3×3 or 5×5 kernels. A kernel in this context is a square grid where discrete filter values are specified as single numeric elements. Here's a more detailed description of the different scaling functions offered in ATWT:

3×3 Linear Interpolation: This linear function is a good compromise for isolation of both relatively large and relatively small scales, and it is also the default scaling function on start-up. It does a better job on the first 4 layers or so.



5×5 B3 Spline: This function works very well to isolate large-scale image structures. For example, if we want to enhance structures like galaxy arms or large nebular features, we'd use this function. However, if we want to work at smaller scales, e.g. for noise reduction purposes, or for detail enhancement of planetary, lunar or stellar images, this function is a bad choice.



3x3 Gaussian: This is a peaked function that works better at isolating small-scale structures, so it can be used to control a smoothing effect, among other things.



5x5 Gaussian: Same as the 3x3 Gaussian but using a 5x5 kernel.



3×3 Small-Scale: A peak-wise, sharp function that works quite well for reduction of high-frequency noise and enhancement of image structures at very small characteristic scales. Good for lunar and planetary work, for strict noise reduction tasks, and to sharpen stellar objects a bit. For deep-sky images, use this function with caution. The main difference between the 5 different 3x3 Small Scale functions ATrousWaveletTransform provides is in the strength/value of the central element of the 3x3 kernel: 4, 8, 16, 32 or 48.



5x5 Peaked: The more pronounced the peak of a scale function is, the more surgical it will be on small scale structures and the less suitable it'll be to isolate large scale structures. As its name indicates, the 5x5 peaked function uses a rather pointy 5x5 kernel.



7x7 Peaked: The two 7x7 kernels (1 and 0.5) provide even more “pointiness” than the previous kernels.

See below a 3D plot comparison between the 5x5 Gaussian, 5x5 Peaked and both 7x7 Peaked kernels, as defined in PixInsight.

The window below the pull-down option to define the scaling function will show the generated layers. Individual layers can be enabled or disabled. To enable/disable a layer, double-click anywhere on the layer's row. When a layer is enabled, this is indicated by a green check mark. Disabled layers are denoted by red 'x' marks. The last layer, R, is the residual layer, that is, the layer containing all structures of scales larger than the largest of the generated layers. In addition to the layer and scale, an abbreviation of the parameters specific to each layer – if defined – is also displayed.


Detail Layer

Bias: This is a real number ranging from –1 to +15. The bias parameter value defines a linear, multiplicative factor for a specific layer. Negative biases decrease the relative weight of the layer in the final processed image. Positive bias values give more relevance to the structures contained in the layer. Very high values are not recommended for most purposes.
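One simple way to read a multiplicative per-layer bias is sketched below, reusing the decomposition from the earlier example: each detail layer is weighted before the layers are summed back together. This is an assumption about how such a factor can be applied, not the exact scaling PixInsight uses.

    import numpy as np

    def apply_biases(layers, biases):
        """layers: [detail_1, ..., detail_n, residual]; biases: one value per detail layer.
        A positive bias boosts a layer's structures; a negative bias attenuates them."""
        details, residual = layers[:-1], layers[-1]
        result = residual.copy()
        for detail, bias in zip(details, biases):
            result += (1.0 + bias) * detail
        return result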

Noise Reduction

For each detail layer, specific sets of noise reduction and detail enhancement parameters can be defined and applied simultaneously.

Filter: Only available in ATrousWaveletTransformV1, here we define the type of noise reduction filter. Of the three options available, the default Recursive Multiscale Noise Reduction often yields the best balance between noise reduction and detail preservation. A Morphological Median Filter would often generate slightly less noisy images at the expense of details.

Threshold: The higher the threshold value, the more pixels will be treated as noise for that particular scale.

Amount: When this parameter is nonzero and Noise Reduction has been enabled, a special smoothing process is applied to the layer's contents after biasing. The Amount parameter controls how much of this smoothing is used.

Iterations: This parameter governs how many smoothing iterations are applied. Extensive trial work is always advisable, but recursive filtering with two, three or four iterations and a relatively low amount value is generally preferable to trying to achieve the whole noise reduction goal with a single, strong iteration.

Kernel size: Only available in ATrousWaveletTransformV1, when Directional Multiway Median Filter or Morphological Median Filter is selected, here we define the kernel size in pixels. A value of 4 would define a 4x4 kernel.

K-Sigma Noise Thresholding

When activated, K-Sigma Noise Thresholding is applied to the first four detail layers. This technique will work just as intended if we select the dyadic layering sequence. The higher the threshold value, the more pixels will be treated as noise for the characteristic scale of the smaller wavelet layers.

Threshold: Defines the noise threshold. This is the “k” in the k-sigma method. Anything below this value will receive the noise reduction defined by the rest of the parameters.

Amount: Strength of the attenuation applied to the thresholded coefficients.

Soft thresholding: Not available in ATrousWaveletTransformV1, when enabled, it will apply a soft thresholding of wavelet coefficients instead of the default, harder thresholding. That's the recommended setting for most cases.

Use multiresolution support: Not available in ATrousWaveletTransformV1, enable this option to compute the noise standard deviation of the target image. If disabled, ATWT will take less time to complete at the expense of accuracy.
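A rough numpy sketch of the difference between hard and soft thresholding of one detail layer is shown below. The MAD-based noise estimate stands in for the multiresolution support, and the way Amount blends the attenuated and original coefficients here is my own assumption, not a description of ATWT's internals.

    import numpy as np

    def ksigma_threshold(detail, k=3.0, amount=1.0, soft=True):
        """Attenuate wavelet coefficients whose magnitude falls below k * sigma_noise."""
        sigma = np.median(np.abs(detail)) / 0.6745      # robust noise estimate for this layer
        t = k * sigma
        if soft:
            shrunk = np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)
        else:
            shrunk = np.where(np.abs(detail) >= t, detail, 0.0)
        # Amount < 1 only partially removes the sub-threshold coefficients.
        return amount * shrunk + (1.0 - amount) * detail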

Deringing

When we use ATWT for detail enhancement, what we are applying is essentially a high-pass filtering process. High-pass filters suffer from the Gibbs effect, which generates the unpopular ringing artifacts. For more detailed information about ringing artifacts and deringing, please review the documentation about the topic in MultiscaleLinearTransform. ATrousWaveletTransform includes a procedure to fix the ringing problem. It can be used for enhancement of any kind of images, including deep-sky and planetary.

Dark: Deringing regularization strength for dark ringing artifacts. Increase to apply a stronger correction to dark ringing artifacts. The best strategy is to find the lowest value that effectively corrects the ringing, without overdoing it.

Bright: Deringing regularization strength for bright ringing artifacts. It works exactly as Dark but for bright ringing artifacts. Since each image is different, the right amount varies from image to image. It is recommended to start with a low value – such as 0.1 – and increase as needed before over-correction becomes obvious.

Output deringing maps: Generate an image window for each deringing map image. New image windows will be created for the dark and bright deringing maps, if the corresponding amount parameters are nonzero.

Large-Scale Transfer Function

ATWT lets us define a specific transfer function for the residual layer.


Hyperbolic: A hyperbolic curve is similar to a multiplication by a positive factor slightly less than one, which usually will improve color saturation by darkening the luminance. The break point for the hyperbolic curve can be defined in the slider to the right.



Natural logarithm: The natural logarithm function will generally produce a stronger darkening of the luminance.



Base-10 logarithm: The base-10 logarithm function will result in a much stronger darkening of the luminance than the natural logarithm or hyperbolic functions.

Dynamic Range Extension

Several operations executed during a wavelet transformation – such as a bias parameter – may result in some areas reaching the upper or lower limits of the available dynamic range. The dynamic range extension works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result. We can control both the low and high range extension values independently.

Low range: If we increase the low range extension parameter, the final image will be brighter, but it will have fewer black-saturated pixels.

High range: If we increase the high range extension parameter, the final image will be globally darker, but fewer white-saturated pixels will occur.

Any of these parameters can be set to zero (the default setting) to disable extension at the corresponding end of the dynamic range.

Target: Whether ATWT should be applied to only the lightness, the luminance, the chrominance, or all RGB components.
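One plausible reading of the two range-extension sliders, expressed as a worked numpy example: instead of clipping, the interval [-low_range, 1 + high_range] is mapped back onto [0, 1]. The exact mapping PixInsight applies may differ; this is only a sketch consistent with the brighter/darker behavior described above.

    import numpy as np

    def extend_dynamic_range(image, low_range=0.0, high_range=0.0):
        """Rescale [-low_range, 1 + high_range] to [0, 1] instead of clipping out-of-range pixels."""
        rescaled = (image + low_range) / (1.0 + low_range + high_range)
        return np.clip(rescaled, 0.0, 1.0)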

AdaptiveStretch
Process > IntensityTransformations

AdaptiveStretch is a nonlinear contrast and brightness adjustment tool in PixInsight that mostly depends on adjusting a single noise threshold parameter. Despite this being a simple definition, AdaptiveStretch does offer some significant advantages when compared to other brightness and contrast tools. For example, AdaptiveStretch not only tries to maximize contrast, but it does so without clipping any data (no pixel values will become either zero or one). Of course, pixels that were clipped before applying AdaptiveStretch will continue being clipped after the process is done.

When to use AdaptiveStretch

AdaptiveStretch is a simple tool that can achieve decent results quickly; however, it is not as versatile as many other image intensity processes in PixInsight. Therefore, the recommendation is to use AdaptiveStretch when we're looking for quick and decent results without clipping data.

Parameters

Noise threshold: Brightness differences below the noise threshold are assumed to be caused by noise, these being the areas that are attenuated by the process, while brightness differences above the noise threshold will tend to be enhanced. The lower the value, the more aggressive the stretch will be. Adjust this parameter along with Contrast Protection for best results.

Contrast protection: This parameter is used to constrain the stretch effect on areas that are either very bright or very dark. The higher the value, the more protection. The checkbox to the right allows us to completely disable this parameter, which can be useful to compare the results with and without contrast protection.

Maximum curve points: Here we indicate the maximum number of points in the transformation curve. The computed values will depend on the bit depth of the image. For example, for 8-bit and 16-bit integer images, AdaptiveStretch would only process up to 256 and 65536 points respectively. Normally we wouldn't need to modify this parameter, and we would only aim for more points in very specific cases with images displaying a very large dynamic range.

Real-time curve graph: Enabling this option will open a window that, when the Real-Time Preview mode is enabled, will show the curve being applied as we adjust the parameters. This window includes two buttons: one depicting a photo camera, which generates an 8-bit image of the actual graph, and another displaying a graph that, when clicked, opens the CurvesTransformation process with the current transformation defined in it. This can be very useful not only to evaluate and understand the transformation being applied, but also as a learning tool.


Region of interest

This common set of parameters allows us to restrict the analysis to a specific area in the image. Oftentimes we define a region of interest (ROI) instead of a preview so that the entire processing workflow can be recreated automatically or saved as a process icon.

Annotation
Process > Painting

Annotation is a tool to add simple text sentences (one line at a time) to an image. It also allows us to include a leader line, that is, a line that goes from the text itself to a point that we define dynamically. Annotation is a dynamic process, which, among other things, means that the text is not actually rendered into image pixels until the process is executed. First, we define the text and parameters, then we click anywhere in the image where we want to add the annotation, after which we can reposition the text and the leader line using the mouse.

When to use Annotation

Annotation is used when we wish to label certain areas in an image, usually after the image is considered final and has already been saved without annotations. Some people prefer using other image editing applications that offer more versatility when it comes to adding text and labels, although for basic annotation and labeling, Annotation should suffice. Being a dynamic process, we can modify our annotation any way we like until we execute the process, at which point the annotation will be rendered into image pixels permanently – unless we undo it later, of course. If we see our annotations collide or overlap, be aware that there is a known problem at the time of this writing (PixInsight 1.8.8-3) that may cause this behavior.


Parameters

Text: Enter here the string that will be displayed.

Show leader: Display a “leader” line. Move the mouse over the area the line is pointing at, and drag it around to change its position.

Font Properties: Here we can define the font properties: font, size, style, color and opacity.

ArcsinhStretch
Process > IntensityTransformations

ArcsinhStretch is a tool that stretches image intensity, without affecting color information. It does so by means of applying an inverse hyperbolic sine function.
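The color-preserving idea can be illustrated with a short numpy sketch: every channel is multiplied by the same per-pixel factor derived from an arcsinh of the luminance, so channel ratios (and therefore hue) are left alone. This is a conceptual example with equal channel weights, not necessarily identical to PixInsight's implementation; the function name and parameter defaults are assumptions.

    import numpy as np

    def arcsinh_stretch(rgb, stretch=50.0, black_point=0.0):
        """rgb: linear image in [0, 1]; stretch must be > 0.
        All channels are scaled by the same factor, so hue is preserved."""
        x = np.clip(rgb - black_point, 0.0, None)
        lum = x.mean(axis=-1, keepdims=True)   # simple equal 1/3 channel weights
        factor = np.arcsinh(stretch * lum) / (np.arcsinh(stretch) * np.maximum(lum, 1e-12))
        return np.clip(x * factor, 0.0, 1.0)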

When to use ArcsinhStretch

ArcsinhStretch is best used when we're ready to bring an RGB image from linear to nonlinear, meaning we have already performed, at the very least, color calibration and gradient correction (background extraction). Although it can be used with monochrome images, it is often used on color images when other stretch functions seem to destroy color information in stars.

Parameters

Stretch factor: Increase or decrease the intensity of the stretch. Because the stretch factor has a logarithmic response, it works well across the entire range of possible values. Best results are obtained by combining this stretch with the Black point value, described below.

Black point: Here we set the black point. The higher the stretch factor, the more sensitive black point adjustments become – this being the reason the parameter offers two different sliders: the top slider for coarse adjustments, and the bottom slider for very fine adjustments.


Protect highlights: In cases where the stretch we're performing results in saturated pixels, checking this parameter rescales the entire image so that such pixels don't become saturated.

Use RGB working space: If disabled, ArcsinhStretch assumes all R, G, and B values have an equal weight of 1/3 each when calculating the luminance. If enabled, the process will use the weights defined in the current RGB Working Space (see the process RGBWorkingSpace).

Estimate Black Point: Clicking this button will automatically set the black point to a value that is often a good starting point. More specifically, it sets the black point at a level where 2% of pixels would be clipped to zero. Further adjustments are often needed, especially if the Stretch factor is readjusted. This option is only available when the Real-Time Preview is active.

Highlight values clipped to zero: Display in white (in the Real-Time Preview) all pixels that are clipped to a value of zero by the process.
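For intuition, here is a minimal NumPy sketch of a common arcsinh-style stretch that scales luminance and preserves R:G:B ratios; it is only an illustration under those assumptions, not necessarily the exact formulation ArcsinhStretch uses.

import numpy as np

def arcsinh_stretch(img, stretch=50.0, black_point=0.0):
    """Stretch luminance with asinh while preserving R:G:B ratios (sketch only)."""
    x = np.clip(img - black_point, 0.0, 1.0)
    # Equal-weight luminance; the tool can instead use RGB working-space weights.
    lum = x.mean(axis=2, keepdims=True)
    # asinh(stretch * L) / asinh(stretch) maps [0, 1] onto [0, 1], boosting faint values.
    scale = np.arcsinh(stretch * lum) / (np.arcsinh(stretch) * np.maximum(lum, 1e-12))
    return np.clip(x * scale, 0.0, 1.0)

stretched = arcsinh_stretch(np.random.rand(32, 32, 3) * 0.05, stretch=100.0)  # stand-in data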

AssignICCProfile Process > ColorManagement

AssignICCProfile is used to assign an ICC profile to an image. An ICC profile is a set of data that characterizes a color input or output device, or a color space, according to ICC (International Color Consortium) standards.

When to use AssignICCProfile

Assigning an ICC profile to an image allows the image to be displayed correctly across different devices, so it's always a good idea to assign an ICC profile to our images, particularly if we're going to share the images with others (say, on the web) or create prints. Because different ICC profiles change the appearance of our image on our screen, it is recommended to get this done as one of our first steps during processing. There are instances where we may want to make an ICC profile change later on.


Parameters

Current Profile: Shows the description of the ICC profile assigned to the image selected in the view selection combo box, or the default RGB ICC profile (as selected in the ColorManagementSetup dialog) if no ICC profile is currently assigned to the image. We cannot change the profile description.

New Profile: Lets us assign an ICC profile to the target image.

Assign the default profile: Assigns the default ICC profile to the target image, as selected in the ColorManagementSetup dialog.

Leave the image untagged: Don't add an ICC profile to the image. In this case, the default profile will be used to manage its color.

Assign profile: Here is where we can assign a different ICC profile to the image. We use the pulldown menu to select the profile, type it manually in the text box, or click on the looping green arrows icon (bottom-right) to refresh the profile list.

AssistedColorCalibration Process > ColorCalibration

AssistedColorCalibration allows us to find, manually, the proper color balance (or calibration) of a linear image before applying background neutralization or gradient subtraction. Because these other processes will alter the color information in the image, by finding the right white balance before any of them is applied, we can better determine the actual RGB coefficients for our image. The caveat is that we need to dial the Red, Green and Blue sliders manually, in a trial-and-error fashion. It is not an automatically calculated color calibration tool. The "assistance" comes in the form of being able to preview the results after applying a histogram stretch and color saturation adjustments – but just for the previews – which can be more flexible than simply applying an STF (ScreenTransferFunction).


To use AssistedColorCalibration we first need to define two previews: one that will be good for a background reference and another one that we'll use to evaluate the results – before applying the process to the actual image. The background reference preview should contain mostly background free of nebulosity or galaxies, while the “results” preview should target an area rich in color features so we can better evaluate the results.

When to use AssistedColorCalibration

AssistedColorCalibration is just one of the many tools we can use to balance (calibrate) the color in our images with PixInsight. While it is recommended to use other processes – ones that depend more on the actual image rather than on us dialing the RGB sliders to our liking – AssistedColorCalibration comes in handy when either none of the more automated processes seem to work, or when we simply would like to make the RGB adjustments manually while our image is still linear. AssistedColorCalibration may also be used to help us determine the color coefficients of our camera, this also being a reason why the adjustments are recommended to be done prior to background extraction or other processes that alter the original signal. However, a process that depends on personal adjustments, such as AssistedColorCalibration, lacks the rigorousness that other processes offer.

Parameters

White Balance: This is where we modify the actual weights of each channel, manually. Prior to making adjustments, we need to fine-tune the remaining parameters. Do note that these weights are the only values that will be applied to the image. The remaining parameters in this dialog box (all explained below) only affect the preview instances; they're not applied to the final image.

Background Reference: In this parameter we select the preview that will be used as background reference.


Histogram Transformation: Use the sliders at the bottom of the graph (or enter values manually in the corresponding text boxes) to stretch the results in our sample preview – that is, the preview we use to evaluate results, not the one used for background reference.

Saturation: Just like the Histogram Transformation parameter above, this parameter is used to give a boost to our "results" preview, except in this case we're boosting color saturation, mostly to help us preview the final colors in the image, so we can determine whether we're close to the results we want or we need to continue tweaking the RGB sliders.

AutoHistogram Process > IntensityTransformations

This process applies an automatic histogram transform to each RGB channel to achieve prescribed median values, along with a quick histogram clipping feature.

When to use AutoHistogram

AutoHistogram is one of the many tools available in PixInsight for stretching our data, and using it (versus other processes) is often a matter of choice. While the tool itself does some thinking for us – unlike, say, HistogramTransformation or CurvesTransformation – it's somewhat limited to applying a specific median value to the entire image, plus some clipping for enhanced contrast. Its ease of use is often the reason it is chosen for bringing an image from the linear to the nonlinear stage.

Parameters

Histogram Clipping: We enable this option if we want to perform a histogram clipping. While better results can be achieved more easily with other tools, careful clipping adjustments can improve the results obtained with AutoHistogram. We should adjust the clipping value(s) only after having made the adjustments on the Target Median Values (defined below) and previewed the results without clipping.

• Joint RGB/K channels: Select this option to apply the clipping equally to all RGB/grayscale channels. If selected, only the R/K values under Shadows/Highlights Clipping need to be entered.

• Individual RGB/K channels: Select this option to apply the clipping differently for each RGB channel. If selected, the values for each channel can be modified individually. Since AutoHistogram isn't much of a color balancing tool, it is usually better not to use this option.

• Shadows Clipping: The "black point" clipping values. In most cases, this is the only clipping value that may need adjustment.

• Highlights Clipping: The clipping values for the highlights. Rarely used in astronomical images.

Target Median Values: Enable this option to perform a transform on each RGB channel to achieve prescribed median values. Disable to skip it.

• Stretch Method: Here we select one out of three typical stretch algorithms: Gamma (a classic exponential transform), Logarithmic, and Rational Interpolation (MTF). The last method, Rational Interpolation, is a midtones transfer function that usually gives the most contrast, so it's often the preferred method. For a softer stretch, we would use either of the other two options (a minimal sketch of the Gamma idea appears after this list).

• Joint/Individual RGB/K Channels: Selecting one option or the other depends on whether we want to perform the transform on all RGB/grayscale channels equally or individually.

• Set As Active Image: When clicked, the parameters in the AutoHistogram window will be populated with the corresponding data from the active image.

• Capture readouts: When enabled, image readouts will have AutoHistogram recalculate the target median values. We perform an image readout by clicking (and optionally dragging) on the image.
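As a hedged illustration of the Gamma method mentioned above (not AutoHistogram's actual code), the exponent can be chosen so that the current median of a normalized channel lands on the prescribed target median:

import numpy as np

def gamma_to_target_median(channel, target_median=0.25):
    """Solve m**gamma == target_median, i.e. gamma = log(target) / log(m)."""
    m = np.median(channel)
    gamma = np.log(target_median) / np.log(max(m, 1e-12))
    return np.clip(channel, 0.0, 1.0) ** gamma

stretched = gamma_to_target_median(np.random.rand(64, 64) * 0.02, target_median=0.25)  # stand-in data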


AutomaticBackgroundExtractor Process > BackgroundModelization

AutomaticBackgroundExtractor – also known as ABE in PixInsight jargon – is one of PixInsight's background modelization tools of choice. As its name says, ABE does its work in a completely automatic fashion: we provide a source image and a number of parameters controlling ABE's behavior, and we get another image with the generated background model. That model can also be applied to the original image by subtraction or division, depending on the type of uneven illumination problem to correct. Additive phenomena like light pollution gradients should be corrected by subtraction. Multiplicative effects like vignetting should be fixed by division, though in that case applying correct flat-field calibration is the proper procedure. Except for the Correction parameter – which needs to be specified if we want ABE to correct our image – the default values are often a good starting point for general purposes. ABE works by sampling the background in typically small samples across the image at fixed sizes and intervals. With this information, ABE can then create a background model with acceptable accuracy.


When to use ABE

Whenever our (linear) image displays a noticeable gradient, vignetting or any other uneven illumination adding unwanted signal, ABE offers a quick way to obtain results without having to deal with "sample generation" (a key element when using ABE's manual counterpart, DynamicBackgroundExtractor or DBE). Applying ABE over our image may quickly correct these defects. If ABE fails, we then try DBE instead. It is not recommended to use ABE in difficult situations or in cases where we want to perform a very careful background modeling.

Parameters

Sample Generation

Box Size: Length in pixels of all background sample boxes. Large sample boxes may fail to capture local background variations, while small sample boxes may detect small-scale variations that should be ignored, such as noise and stars.

Box separation: Distance in pixels between two adjacent background samples. A large distance (fewer sample boxes) can help make a smoother background model. The more samples we use (smaller box separation), the longer it will take to build the model.

Global Rejection

Deviation: Tolerance of global sample rejection, in sigma units. This value indicates how far background samples can be from the median background of the target image in order to be considered when building the background model. We can decrease this value to exclude more background samples that differ too much from the mean background. This can be useful to avoid mistakenly including large-scale structures – such as large nebulae – in the generated background model.

Unbalance: Shadows relaxation factor. Higher values will result in more dark pixels in the generated background model, while lower values can be used to reject bright pixels.

Use Brightness Limits: Enable this option to set high and low limits that determine what is and is not a background pixel. When enabled, Shadows indicates the minimum value of background pixels, while Highlights determines the maximum value allowed for background pixels.

Local Rejection

Tolerance: Tolerance of local sample rejection, in sigma units. We can decrease this value to reject more outlier pixels with respect to the median of each background sample. This is useful to protect background samples from noise and small-scale bright structures, such as small stars.


Minimum valid fraction: This parameter sets the minimum fraction of accepted pixels in a valid sample. The smaller the value, the more restrictive ABE will be when accepting background samples.

Draw sample boxes: When selected, ABE will draw the background sample boxes on a new 16-bit image. This can be very useful when adjusting ABE parameters. In a sample boxes image, each sample box is drawn with a pixel value proportional to the inverse of the corresponding sample background value.

Just try samples: When selected, ABE will stop after extracting the set of background samples, without generating the background model. Normally we would select this option along with Draw sample boxes. That way, ABE will create a sample boxes image that we can use to evaluate the suitability of the current ABE parameters.

Interpolation and Output

Function degree: Degree of the interpolation polynomials. ABE uses a linear least squares fitting procedure to compute a set of polynomials that interpolate the background model. In general, the default value (4th degree) is appropriate in most cases. For very complex cases, increasing this value may be necessary to reproduce local background variations more accurately.

Downsampling factor: Downsampling ratio of the background model image. This is a value between one (no downsampling) and eight. Background models are very smooth images, meaning that they can usually be generated with downsampling ratios between 2 and 8 without problems, depending on the variations of the sampled background.

Model sample format: This parameter defines the bit depth of the background model.

Evaluate background function: When enabled, ABE generates a 16-bit comparison image that we can use to evaluate the suitability of the background model. The comparison image is a copy of the target image from which the background model has been subtracted. The Comparison factor parameter is a multiplying factor applied to emphasize possible inconsistencies in the comparison image.

Target Image Correction

Correction: Here, we decide whether we would like to apply the background model to produce a corrected version of the target image. Again, additive effects, such as gradients caused by light pollution should be subtracted, while multiplicative effects, such as differential atmospheric absorption or vignetting should be divided.


Normalize: When enabled, the median value of the target image will be applied after background model correction: if the background model was subtracted, the median is added, and if it was divided, the resulting image will be multiplied by the median. Enabling this option tends to recover the original color balance of the background. If this option is not selected (the default), the above median normalization is not applied, which usually results in the corrected image acquiring a more neutral background.

Discard background model: If enabled, ABE does not create an image with the background model after correcting the target image. If disabled, the generated background model will be provided as a new image with the suffix ABE_background.

Replace target image: We enable this parameter if we want the correction to be performed directly on the target image. When disabled, a new corrected image is created, leaving the original target image unmodified.

Identifier: If we wish to give the corrected image a unique identifier, we enter it here. Otherwise ABE will create a new identifier, adding _ABE to the identifier of the target image.

Sample format: Define the format (bit depth) of the corrected image.
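To make the Correction and Normalize ideas concrete, here is a hedged NumPy sketch of applying a background model (not ABE's internal code); it assumes a normalized image and a model of the same size, and uses simple truncation at the end.

import numpy as np

def apply_background_model(image, model, mode="subtract", normalize=False):
    med = np.median(image)
    if mode == "subtract":                      # additive effects, e.g. light-pollution gradients
        corrected = image - model
        if normalize:
            corrected = corrected + med         # add the original median back in
    else:                                       # "divide": multiplicative effects, e.g. vignetting
        corrected = image / np.maximum(model, 1e-12)
        if normalize:
            corrected = corrected * med         # multiply by the original median
    return np.clip(corrected, 0.0, 1.0)         # crude truncation for the sketch

img = np.random.rand(64, 64) * 0.2 + 0.1                      # stand-in data
model = np.linspace(0.0, 0.05, 64)[None, :] + np.zeros((64, 1))  # synthetic left-to-right gradient
flat = apply_background_model(img, model, mode="subtract", normalize=True)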

B3Estimator Process > Flux

B3Estimator is a process that creates a synthetic image as an estimate of flux at the specified output wavelength or frequency, using two images as source. Alternatively, it can be used to generate a thermal map as an estimate of temperature, using the laws of black body radiation. In other words, given two images at different wavelengths (spectral inputs), B3Estimator can calculate the temperature of black bodies in these two images, and synthesize a third image at another wavelength.


When to use B3Estimator B3Estimator can be used for different purposes, both scientific and aesthetic. Two that seem to be fairly popular are creating a synthetic channel (for a missing filter, for example, say, H-Alpha or Green) and enhancing features in our image that may not be too obvious at first, from non-black body objects or black body emissions. Remember however that B3Estimator relies on specific wavelengths and works better when used on black body targets. It is not a cosmetic tool. In order to produce accurate results, B3Estimator needs flux-calibrated images as the two source images. This can be done by using the FluxCalibration tool. Alternatively, we can skip using FluxCalibration as long as the source images are well equalized. Also, since all filters have a bandwidth but B3Estimator needs a single wavelength, a common practice is to divide the source images by the bandwidth of the filter used for that image, and then multiply the resulting image for the bandwidth we're targeting in our synthetic image. For example, if we used two source images, one captured with a R (red) filter with a middle point wavelength of 680 nm and bandwidth of 70 nm, and the other one used a G (green) filter of 540 nm and a bandwidth of 60 nm, and we aimed at producing a synthetic B (blue) image, we'd divide the R image by 70, the G image by 60, set our target to a central wavelength of, say, 450 nm and then multiply the resulting image by, say 75 nm if we assume our blue filter would have a bandwidth of 75 nm.
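The following is a rough, hedged sketch of the black-body idea combined with the bandwidth bookkeeping above, working on a single pixel value for clarity. It is not B3Estimator's actual code: it assumes energy flux per unit wavelength (the tool can also work with photon or frequency units), and the pixel and filter numbers simply follow the example in the text.

import numpy as np
from scipy.optimize import brentq

H, C, K = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_nm, T):
    """Black-body spectral radiance (per unit wavelength) at wl_nm nanometres."""
    wl = wl_nm * 1e-9
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * K * T))

def synthesize(f1, f2, wl1, wl2, wl_out):
    """Estimate T from the flux ratio f1/f2, then predict the flux at wl_out."""
    ratio = f1 / f2
    T = brentq(lambda t: planck(wl1, t) / planck(wl2, t) - ratio, 500.0, 1e6)
    return f1 * planck(wl_out, T) / planck(wl1, T), T

# Example pixel: divide by the source bandwidths, synthesize, multiply by the target bandwidth.
r_per_nm = 0.42 / 70.0          # R filter: ~680 nm centre, 70 nm bandwidth (values assumed)
g_per_nm = 0.35 / 60.0          # G filter: ~540 nm centre, 60 nm bandwidth (values assumed)
b_per_nm, temp = synthesize(r_per_nm, g_per_nm, 680.0, 540.0, 450.0)
b_value = b_per_nm * 75.0       # assume a 75 nm-wide blue filter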

Parameters Input image 1 & 2: As indicated above, B3Estimator needs two (grayscale) source images – views actually, that is, images already opened in PixInsight.


Input wavelength 1 & 2 (nm): Here, we enter the wavelength in nanometers corresponding to each of the input images.

Output wavelength: Only needed when generating a synthetic image (see below); here we indicate, also in nanometers, the desired wavelength of the synthetic image. This assumes that every pixel in the image behaves as a black body.

Intensity units: Here, we determine what we want the pixels in the source images to represent. The default Photons/Wavelength value is the most common situation, where the pixels represent the number of photons in wavelength units. We can also treat the pixels as a measure of energy. Whether photons or energy, we can have these measured in wavelength or frequency units.

Output image/s: Indicate whether to generate a synthetic image, a thermal map, or both.

Background References (1 and 2)

Each input image can be associated with an image that would act as a background reference for the algorithm. We can define such images here, or leave these sections blank, in which case, the input images will act as their own background references. We can also limit each background reference image to a specific region of interest (ROI). Reference image: Here, we select the image we will use as a background reference. The image must be opened (a view) in PixInsight's workspace. Lower limit: We can select a range of valid pixel values for the purpose of evaluating the background. Pixels with a value lower than or equal to this will not be considered when calculating the mean background. Upper limit: This is the upper limit of the valid range of pixels. Pixel values equal or above this amount will not be considered when calculating the mean background.

BackgroundNeutralization Process > ColorCalibration

The BackgroundNeutralization tool makes the global color adjustments required to neutralize the background color of an image. This requires a good background reference.

When to use BackgroundNeutralization

BackgroundNeutralization is a very popular tool in PixInsight that works best on linear images as, theoretically, any color balancing is better performed while our image is still linear. It is recommended to use it after removing gradients, if any, from the image (using ABE or DBE). For the most part, the default parameters work well, but depending on our image, a small adjustment to the Upper limit parameter may be desirable.

Parameters

Reference image: BackgroundNeutralization will use this image to calculate an initial mean background level for each channel. If left blank, the target image will be used as background reference. We should specify a view that represents as much of the true background of the image as possible, avoiding nebulosity, galaxies and other signal that might interfere with the readout from the pixels within the specified limits. A typical example involves defining a small preview over an area of the target image that is mostly sky, and selecting it here as the background reference image.

Lower limit: Pixels with values less than or equal to this value will be ignored when calculating the mean background values. Since the minimum allowed value for this parameter is zero, black pixels are never considered background data.

Upper limit: Pixels above this value will be ignored when calculating the mean background levels.

Working mode: Use this option to select a background neutralization mode.




• Target background: BackgroundNeutralization will force the target image to have the specified mean background value for all RGB channels. Any out-of-range values after neutralization will be truncated, which can produce some minimal data clipping.

• Rescale: The target image is always rescaled after neutralization – rescaled here means that all pixel values are recalculated so they all stay within the available dynamic range of the image, which also means that no data clipping can occur. In this mode, besides avoiding data clipping, the neutralized image maximizes usage of the dynamic range; however, we can't control the resulting mean background values.

• Rescale as needed: This mode is similar to Rescale, except that the image is only rescaled if there are out-of-range values after neutralization. This is the default value.

• Truncate: All out-of-range pixels after the neutralization process are truncated, usually clipping a large number of pixels. This mode is useful to perform a background subtraction on a working image used for an intermediate analysis or processing step, but it rarely produces usable results in itself.

Target background: Only available when the working mode is Target background, here we define the final mean background level that will be imposed to the target image.
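As a hedged sketch of the Target background idea (not the tool's exact math), each channel can be shifted so that the mean of the background-reference pixels within the limits reaches a common target value, with simple stand-ins for the Truncate and Rescale behaviors:

import numpy as np

def neutralize(img, bg_ref, target=0.05, lower=0.0, upper=0.1, mode="truncate"):
    out = img.astype(np.float64).copy()
    for c in range(3):
        ref = bg_ref[..., c]
        sample = ref[(ref > lower) & (ref < upper)]   # pixels treated as background
        out[..., c] += target - sample.mean()         # force a common background mean
    if mode == "truncate":
        return np.clip(out, 0.0, 1.0)                 # clip out-of-range values
    return (out - out.min()) / (out.max() - out.min())  # "rescale": no clipping

img = np.random.rand(64, 64, 3) * 0.04 + np.array([0.06, 0.04, 0.05])  # colour-cast sky (stand-in)
neutral = neutralize(img, bg_ref=img[:16, :16], target=0.05)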

Binarize Process > IntensityTransformations

The Binarize process transforms all pixels in the image to either pure black (zero) or pure white (one). Binarize's threshold parameter also allows us to isolate stars cleanly in a mask by fine-tuning which structures end up included.

When to use Binarize

Binarize is mostly used to generate masks, whether star masks or more complex masks also based on strong signal in our images. While there are other similar but more flexible tools for these tasks, like RangeSelection, Binarize has the ability to define a different threshold for each RGB channel.

Parameters

Joint RGB/K channels: Use a single threshold applied to all RGB/grayscale channels.

Individual RGB/K channels: Use a different threshold for each RGB/grayscale channel.
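A minimal NumPy sketch of the operation (assuming an image normalized to [0, 1]; whether the exact threshold value itself maps to black or white is an implementation detail we gloss over here):

import numpy as np

def binarize(img, threshold=0.5):
    # threshold can be a scalar (Joint RGB/K) or a length-3 sequence (Individual RGB/K)
    return (img >= np.asarray(threshold)).astype(np.float32)

rgb_image = np.random.rand(64, 64, 3)                           # stand-in data
star_mask = binarize(rgb_image, threshold=[0.25, 0.25, 0.30])   # hypothetical per-channel thresholds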


Blink Process > ImageInspection

Blink creates an animation from several images, played sequentially, which we can also save as a video file. Once loaded, we can enable/disable each image in the sequence by clicking on its checkbox. We define the master blink image by double-clicking on it. The animation is then played in a new window, named BlinkScreen.

When to use Blink

Blink can be used in any situation where we have several images of the same pixel height and width (also usually the same FOV, whether aligned or not), and we'd like to either spot differences between the images, evaluate our subframes (SubframeSelector is a much more advanced tool for that task), or create an animation/timelapse based on the input images.

The only parameters in Blink are the input files; all other actions are performed via icon buttons:

• Calculate a per-channel histogram stretch for each of the input images. The stretch is not applied to the images, only calculated and applied to the BlinkScreen image, which is, therefore, nonlinear. Color images will appear more neutral.

• Calculate an auto-STF (see the process ScreenTransferFunction) based on the selected image, and apply this auto-STF to all images as they're displayed in the BlinkScreen window. Unlike with the previous option, what we see in the BlinkScreen is linear but "screen-stretched". Color images will preserve their original color casts.

• Controls to play the animation or step forward/backward one frame at a time.

• Click here to select the image files to be used.

• Close selected images. An image is selected when it appears highlighted. The check mark to the left lets us know which images are to be included in the animation sequence.

• Close all images and start over.

• Save all selected files (again, selected meaning highlighted) to a specific location.

• Move all selected files to a specified location.

• Crop all selected files – we can define the crop area by defining a preview first – and save them to a specified location.

• Display statistical data about the input images. A Statistics dialog box appears that allows us to specify the image for which we'd like to see the stats and metadata, as well as some controls to define the range (either the normalized [0,1] range or a classic 16-bit range [0,65535]), the precision or number of decimals in the stats, whether we want the results sorted by channel, whether we want to limit the stats to a particular area (crop), and whether we want the results written to a text file. If the Write text file option is left disabled, the results are written to the Process Console.

• Create a video file of the animation. Prior to using this function, we must have installed ffmpeg on our computer, a command-line tool that can encode and convert audio and video files. Due to the large number of parameters ffmpeg can use, we will not discuss it here (refer to the extensive documentation at https://www.ffmpeg.org/).

ChannelCombination Process > ChannelManagement

ChannelCombination is used to combine single channel images into a new image containing all the channels. This is useful for example to combine previously calibrated and aligned RGB channels of an image into one single color image. For combinations that include Lightness and RGB channels, see the LRGBCombination tool. Note that ChannelCombination can be applied to an existing image (New Instance icon) or in the global context (Apply Global icon). When applied to an existing color image, the channels defined in the dialog box will be applied to that image. When applied in the global context, ChannelCombination will create a brand new color image.


When to use ChannelCombination

When exactly in the workflow we should combine three separate channels into a single color image depends on a number of things. Regardless, it is best done when the images are still linear and all three have been previously aligned. This is also regardless of whether we're combining broadband or narrowband data. Sometimes during our processing we may want to split the image into different channels (not necessarily just using the RGB color space) to later recombine them. The recombination is done with ChannelCombination, while the split is done with ChannelExtraction, explained next.

Parameters

Color Space: Select the color space to be used for the combination. Depending on the color space used, the channel/source image descriptors will change accordingly.

Channels / Source Images: Once we have selected the color space, we enter in each of the boxes the images corresponding to each of the channels we wish to combine. For example, if we selected the RGB color space, we need to enter an image for the red channel, one for the green channel and another one for the blue channel. We can enable or disable each channel as needed. If a text box is left at its default value, PixInsight will try to use an image with the same name as the target image plus the suffix _X, where X corresponds to the abbreviation for that particular channel (_R for red, etc.).

ChannelExtraction Process > ChannelManagement

The purpose of the ChannelExtraction tool is to create individual single channel images from a source color image, where each of the newly created images contains information about only one of the channels from the source image. For obvious reasons, ChannelExtraction cannot work on grayscale images.


When to use ChannelExtraction ChannelExtraction can be used anytime we want to process image channels separately, which can be desirable for a number of reasons at any time during our processing workflow. In addition to the popular RGB channel split, CIE L*a*b* and CIE XYZ are often used in astroimage processing to extract lightness or luminance separate from color data.

Parameters

Color Space: Select the color space that contains the channels we want to extract.

Channels / Target Images: Once we have selected the color space, we use the checkboxes here to indicate which channels we want to extract. We can also enter the image identifiers if we wish, or leave the default value, in which case PixInsight will assume the identifiers are the same name as the source image plus the suffix _X, where X corresponds to the abbreviation for that particular channel (_R for red, etc.).

Sample Format: We can select Same as source to produce individual images with the same format (bit depth) as the source image, or specify a different format.
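For intuition, this is essentially the round trip the two tools perform on an RGB array; a minimal NumPy sketch (not PixInsight code), assuming the RGB color space:

import numpy as np

rgb = np.random.rand(64, 64, 3)             # stand-in for a linear colour image

r, g, b = (rgb[..., i] for i in range(3))   # "extraction": three grayscale images

# ...each channel could now be processed separately (aligned, denoised, etc.)...

recombined = np.stack([r, g, b], axis=-1)   # "combination" back into one RGB image
assert np.allclose(recombined, rgb)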

ChannelMatch Process > Geometry

ChannelMatch is intended to manually align individual RGB channels in a color image by means of entering the offset values between channels.

When to use ChannelMatch

Because ChannelMatch is used to align the RGB channels of an image, very much like we align different frames before stacking them, this tool is rarely used for deep-sky imaging, as the alignment between frames is usually taken care of with much more advanced alignment tools like StarAlignment. In some rare cases, manual alignment between channels may be needed, as in severe cases of chromatic dispersion that generate misaligned star halos. ChannelMatch is, however, useful in planetary imaging, where the alignment between RGB channels cannot yet be solved via StarAlignment or DynamicAlignment.

Parameters

RGB: Select/deselect the channels to be aligned.

X-Offset / Y-Offset: Defines the x/y coordinate offset for the given channel. Integer numbers will result in a pixel-by-pixel operation, while if non-integer values are indicated, ChannelMatch will perform sub-pixel translations (interpolation). Whenever possible, interpolation should be avoided, especially at initial processing stages.

Linear Correction Factors: Assign a linear (multiplicative) correction factor to each channel. A value of one will not apply any correction.
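A hedged sketch of what an integer offset plus a linear correction factor amounts to for one channel (NumPy only; edges wrap around here, unlike the real tool, and sub-pixel interpolation is omitted):

import numpy as np

def match_channel(channel, dx=0, dy=0, factor=1.0):
    shifted = np.roll(channel, shift=(dy, dx), axis=(0, 1))  # integer pixel offset
    return shifted * factor                                   # multiplicative correction

rgb = np.random.rand(64, 64, 3)                               # stand-in data
rgb[..., 2] = match_channel(rgb[..., 2], dx=1, dy=-1, factor=1.02)  # nudge and scale the blue channel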

CloneStamp Process > Painting

CloneStamp is PixInsight's implementation of this well-known image editing tool. It is an interactive dynamic tool, just like DBE or DynamicCrop, for example. We can create process icons and scripts with CloneStamp instances, exactly as we can do for any other processes, and apply them to any number of images without restrictions. We open the CloneStamp interface and click on an image to start a new session. That image will be the “clone stamp target,” to which all clone stamp actions will write pixels. Then we Ctrl/Cmd+click on any point of an open image (including the target image of course) to define a first “source point,” click on the target image to define a first “target point,” and start dragging with the mouse to perform cloning actions. We can start a new action by clicking again and dragging, and define a new source/target point pair with Ctrl/Cmd+click / click at any moment.


The Ctrl/Cmd+Z and Ctrl/Cmd+Y keys can be used while the target image is active to undo/redo clone stamp actions. If we cancel the process (red cross icon on the interface's control bar), the initial state of the image is restored. If we apply the process (green check mark icon), all clone stamp actions (except those that have been undone) are applied, just as any other process.

When to use CloneStamp

The main use of CloneStamp is to remove small artifacts from our image that could not be removed or corrected by other means. For that reason, it is normally used late in the processing stage. However, in some situations it may be better to "clone out" a given artifact early in the process, so that the artifact does not become even harder to remove after other processes have made it more obvious. CloneStamp is also sometimes used to make "corrections" on a mask.

Parameters

Radius: Radius of the clone stamp brush.

Softness: Modify to define softer or coarser brush edges.

Opacity: Define the opacity (strength) of the cloning action.

Copy brush: Copy the current action's brush parameters.

Show bounds: Draws a box around the currently cloned area.

Navigation controls: The CloneStamp interface includes a local history that can be used to undo/redo/delete performed cloning actions. This allows us to revisit any cloning action done during the cloning session by clicking on the blue arrows (from left to right: first, previous, next and last cloning actions), and also to delete any given action by navigating to it and clicking on the X to the left of the navigation arrows.


ColorCalibration Process > ColorCalibration

The principle behind ColorCalibration is to calibrate the image by sampling a high number of stars in the image and using those stars as a white reference. The tool can, however, be used in many different ways. ColorCalibration can also work with a view as its white reference image. This is particularly useful to calibrate an image using a nearby galaxy, for example. The integrated light of a nearby galaxy is a plausible white reference, since it contains large samples of all star populations and its redshift is negligible. We can also use a single star as our white reference – G2V stars are a favorite among many astrophotographers – or even just a few stars if we wanted, depending on what we're after. While much has been written about color balance criteria, the takeaway from tools like ColorCalibration is that we are in control of the criteria that we want to use at any given time.

When to use ColorCalibration A good color calibration is performed when the image has been accurately calibrated (no pun intended), particularly flat-field corrected, and the image is still linear and has a uniform illumination (no gradients). Preferably, the mean background value should be neutral, something that can be done with BackgroundNeutralization.

Parameters

White Reference

Reference image: White reference image. ColorCalibration will use this image to calculate three color calibration factors, one per channel. If unspecified, the target image will be used as the white reference image.

Lower limit: Lower bound of the set of white reference pixels. Pixels with values equal to or smaller than this value will be ignored. Since the minimum allowed value is zero, black pixels are always rejected.


Upper limit: Upper bound of the set of white reference pixels. Pixels with values greater than or equal to this value will be ignored. When set to one, only white pixels are rejected, but by lowering it a bit, we can also reject pixels with very high values that are not yet saturated.

Region of Interest: Define the rectangular area in the image to be sampled by ColorCalibration. Although defining previews to be used as white references is quicker, this parameter comes in handy when we want to reuse the process – say, by creating an instance of it.

Structure detection: Detects significant structures at small dimensional scales prior to evaluation of the color calibration factors. When this option is selected, ColorCalibration uses a multiscale detection routine to isolate bright structures within a specified range of dimensional scales (see the next two parameters). We can use this feature to perform a color calibration based on the star(s) that appear in the white reference image.

Structure layers: Number of small-scale wavelet layers used for star detection. The higher the number of layers, the larger the stars being considered for color calibration would be.

Noise layers: Number of wavelet layers used for noise reduction. Use this parameter to prevent detection of bright noise structures, hot pixels and/or cosmic rays. Additionally, we can also use this parameter to have ColorCalibration ignore the smallest detected stars. A value of one would remove the smallest of scales (usually where most noise resides). The higher the value, the more stars would be ignored.

Manual white balance: Perform a white balance by manually specifying the three color correction factors. If we select this option, no automatic color calibration will be applied.

Output white reference mask: When selected, ColorCalibration will create a new image with a white reference mask, where white means pixels that were used to calculate the color correction values, and black represents pixels that were ignored. Examining this mask can be useful to check whether the Lower limit and Upper limit parameters defined adequate limits for our image.

Background Reference

Reference image: Background reference image. ColorCalibration will use this image to calculate an initial mean background level for each color channel. If undefined, the target image will be used as the background reference image. We should try to define a view that is a good representation of the true background of our image, that is, an area where most pixels represent the sky background of the target image.

Lower limit: Lower bound of the background reference pixels. Pixels below this value will be ignored when calculating the background mean values.

Upper limit: Upper bound of the background pixel limits. Pixels above this value will be ignored.

Output background reference mask: When selected, ColorCalibration will create a new image with a background reference mask where, just like for the Output white reference mask, white means pixels that were used to calculate the mean background values, and black the pixels that were ignored. Likewise, examining this mask can be useful to check whether the Lower limit and Upper limit parameters worked as expected.
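To illustrate the general idea of white-reference calibration (a hedged, generic sketch, not ColorCalibration's exact algorithm): measure the per-channel mean of the white-reference pixels after removing the background level, and scale the channels so those means equalize. All data below are synthetic stand-ins.

import numpy as np

def calibrate(img, white_ref, bg_ref):
    bg = bg_ref.reshape(-1, 3).mean(axis=0)            # per-channel background level
    white = white_ref.reshape(-1, 3).mean(axis=0) - bg  # background-subtracted white reference
    factors = white.mean() / white                       # one calibration factor per channel
    return (img - bg) * factors + bg                     # apply the factors to the image

rgb = np.random.rand(64, 64, 3) * 0.02 + 0.05            # faint, slightly noisy background
rgb[10:20, 10:20] += np.array([0.50, 0.42, 0.46])        # a patch of "star" signal
calibrated = calibrate(rgb, white_ref=rgb[10:20, 10:20], bg_ref=rgb[40:48, 40:48])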

ColorManagementSetup Process > ColorManagement

Color management is a simple concept that defines a complex problem: making sure that our images are being displayed consistently on different output devices, like different monitors or printers. Here, rather than trying to explain color management, profiles, etc. we'll focus on how to use the ColorManagementSetup dialog box in PixInsight.


When to use ColorManagementSetup

Ideally, we would use ColorManagementSetup the first time we use PixInsight, or anytime we either change our output devices (monitors, printers, etc.) or calibrate such devices. Not doing so may still display our images nicely on our monitor; however, we lose control over how the images will display on other monitors, so it is very important that we visit this dialog and adjust it properly. We may also need to use ColorManagementSetup the first time we do some soft-proofing (reviewing how an image would look when sent for printing, for example), particularly if we have never set a profile for proofing before.

Parameters Monitor Profile

This panel shows the file path and description for the ICC profile currently assigned to the monitor in our system. These values are obtained from the operating system and should indicate either the default values set by the operating system or a profile created during a monitor calibration process – with a calibration tool such as the Spyder or X-Rite's.

Rendering intent: Here we specify the transformation strategy for screen color rendition. The default perceptual rendering intent is usually the best option for astroimages.

• Perceptual: Maps all colors in the image space to the output device space.

• Saturation: Maximizes color saturation at the expense of color accuracy. Good, sometimes, for graphics and illustrations.

• Relative colorimetric: The mapping is done by matching the white point in the image with the white point in the color space. This may be desirable for printing, as pure white is then not printed at all, meaning the color comes from the paper itself. Out-of-range (gamut) colors are replaced by the nearest available colors.

• Absolute colorimetric: Same as relative colorimetric, except no white point match is done at all, meaning that a mismatch in the white point between the image and the output device may result in a color cast. That said, this is a good choice if later we're going to do some numerical color calculations.

System Settings

Here we can change the current monitor profile, which will be active after restarting PixInsight.

Default Profiles

Here we indicate the default ICC profile for newly created images or images that do not have an ICC profile assigned (depending on the policies, documented below), being able to define a different ICC profile for either grayscale or color images.

Default Policies

Here we tell PixInsight what to do whenever there's a conflict with the profiles or there's no profile at all in the image. Each of the options is fairly self-explanatory, although it's recommended to understand the pros and cons of each option.

On Profile Mismatch: This happens when we open an image that has an ICC profile that is not our default profile.




• Ask what to do: PixInsight will ask us what to do whenever this conflict arises. We will then be presented with the remaining options listed below.

• Keep embedded profiles: The profile embedded in the image will be used. This is the default and usually the best option.

• Convert to the default profile: The pixel values in the image are recalculated from the embedded profile to the default profile, then the embedded profile is discarded. It is recommended not to use this option unless we know what we're doing and why.

• Discard mismatching profiles: Ignore the embedded profile and treat the image as if it was defined in the default profile space. Again, not a commonly used option.

• Disable color management: No color management whatsoever will take place when displaying the image on the screen. Not recommended.

On Missing Profile: This applies to images that simply don't have any ICC profile assigned at all.

• Ask what to do: PixInsight will ask us what to do whenever an image is found to not have any ICC profile assigned. We will be presented with the remaining options listed below.

• Assign the default profile: The default profile will be assigned to the image, and the image will behave as if that was the profile originally assigned.

• Leave the image untagged: Treat the image as if the default profile was used, but do not assign the default profile (or any other profile) to the image. This is the default value.

• Disable color management: As before, no color management will be done for the image. Not recommended.

Color Proofing

Here we select the ICC profile to be used for color proofing and several related parameters. Color proofing can be enabled from the menu IMAGE > Color Management > Enable Color Proofing.

Proofing Profile: Set this parameter to the profile to be used for color proofing.

Proofing intent: This parameter works exactly like the Rendering intent parameter, except that in this case we're applying it to our proofing profile. Relative colorimetric is often the selection of choice for print proofing.

Use black point compensation: When enabled, the maximum black levels in our images are adjusted to the black capabilities of the output device. While this may help preserve details in the shadows of our images, it's worth reviewing the results with it enabled and disabled.

Default proofing enabled: When enabled, currently opened and any new images will have proofing enabled by default.

Default Gamut check enabled: When enabled, currently opened and any new images will have Gamut check enabled by default.

Gamut Warning: When color proofing and Gamut check are enabled, this is the color that will be used to highlight out-of-gamut values.


Global Options

Enable color management: Color management can be enabled or disabled on individual images, but here we define whether it is enabled or disabled by default in the PixInsight application.

Use low-resolution CLUTs: Enable this option to use a less detailed color lookup table (CLUT) when color management is enabled. This is to improve rendering speed at the expense of color accuracy, something most people probably won't need.

Embed ICC profiles in RGB images: Enable to include the ICC profile in newly generated RGB color images by default.

Embed ICC profiles in grayscale images: Enable to include the ICC profile in newly generated grayscale images by default.

Refresh Profiles: Reload the list of ICC profiles from the operating system.

Load Current Settings: Populate all parameters with the current global color management settings.

ColorSaturation Process > IntensityTransformations

ColorSaturation allows us to modify color saturation as a function of the image's hue. This means that color saturation can be increased or decreased for a range of selected colors, as opposed to the saturation curve in CurvesTransformation, which varies saturation as a function of itself. ColorSaturation works internally in a colorimetrically-defined HSVL space, which prevents noise transference from chrominance to lightness, and ensures full preservation of color balance.

The ColorSaturation interface is very similar to the interface in other processes such as CurvesTransformation. The ColorSaturation curve is defined with respect to a horizontal axis that covers the entire range of hue angles from –pi to +pi. Vertical curve values are saturation biases. Each point in a ColorSaturation curve can take saturation bias values (that is, vertical, or Y-axis values) in the range from –10 to +10, including zero. Positive saturation biases increase color saturation, while negative bias values desaturate colors.

When to use ColorSaturation ColorSaturation should be used on nonlinear images, preferably after we are done with any noise reduction processes, assuming noise reduction was needed. Since CurvesTransformation offers a simple and effective way to increase color saturation globally on all colors in the image, ColorSaturation can be used just when we want to adjust saturation differently depending on the hue – for example, increasing color saturation for blue hues but decreasing green hues and maintaining red hues. ColorSaturation is also often used with masks to adjust color saturation only on certain areas of the image. For example, we could use a star mask to increase color saturation on the stars, or a background mask to decrease color saturation in the background.

Parameters

Curve editing buttons: These are the buttons right under the curve editing panel, on the left, that we may see in other curve-editing processes. From left to right: Edit – to create new points or move existing points; Select – to select a point without moving it; Delete – to delete any point we click; Zoom In/Out – to zoom in or out on the curve editing panel; Pan – to scroll the editing panel; Zoom Factor – to enter the zoom factor manually (from one to 99).

Range: By increasing this value we rescale the Y axis of the editing panel to increase the maximum saturation range. For example, with a range of one, the Y axis of the editing panel represents values from -1 to +1; a range of two would have the Y axis define values from -2 to +2, and so on.

Store/Restore Curve buttons: We can save the current curve at any time by clicking the Store Curve button. It will be saved for the entire PixInsight session or until we store a new curve. Click the Restore Curve button to restore the last saved curve.

Invert Curve: Flip the current curve around its X axis.


Curve Interpolation buttons: Define the interpolation algorithm for the curve.

• Akima Subspline: This algorithm allows us to move curve points so that only the adjacent segments of the curve are affected, leaving the rest untouched. The Akima Subspline algorithm will only be used if four or more points are defined; otherwise ColorSaturation will use Cubic Spline. This is the default value.

• Cubic Spline: This algorithm produces the smoothest curve of all, which makes it also a very good choice. Small point variations may generate very different curves, though, so we need to be careful. Also, straight segments between points are not possible due to the nature of the algorithm, so if corner points are desired in the curve, Akima Subspline is preferable.

• Linear: Selecting this option will create, not a curve, but straight segments between points. For most cases, Linear interpolation should not be used, due to its coarseness and "jumpy" behavior.

Hue / Saturation: These two parameters indicate the hue and saturation values for any point added to redefine the curve. The small blue triangle icons to the right of these two parameters allow us to navigate through all the selected curve redefinition points.

Hue Shift: An offset value that simply moves the origin of the hue axis. By varying hue shift, we can modify the ranges of colors a given saturation curve acts on, without changing any curve points. For example, if we change hue shift from its default zero value to 0.5, what the original saturation curve was applying to blue/cyan colors will now be applied to orange/red colors. In this way we have a great deal of freedom to fine-tune color saturation transforms with very little effort.
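To illustrate the concept of a hue-dependent saturation bias, here is a hedged, slow, per-pixel sketch in a plain HSV space (not the colorimetric HSVL space the tool uses); the hue ranges and bias values are arbitrary assumptions.

import numpy as np
import colorsys

def bias_of_hue(h):
    """Boost saturation near blue hues, cut it near green hues (assumed ranges)."""
    if 0.50 < h < 0.75:
        return 0.5
    if 0.25 < h < 0.45:
        return -0.3
    return 0.0

def adjust_saturation(rgb):
    out = np.empty_like(rgb)
    for idx in np.ndindex(rgb.shape[:2]):
        h, s, v = colorsys.rgb_to_hsv(*rgb[idx])
        s = float(np.clip(s * (1.0 + bias_of_hue(h)), 0.0, 1.0))
        out[idx] = colorsys.hsv_to_rgb(h, s, v)
    return out

boosted = adjust_saturation(np.random.rand(32, 32, 3))   # stand-in data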

CometAlignment Process > ImageRegistration

CometAlignment is a process that can assist us in aligning an image sequence on a comet, almost automatically. It works on a list of individual light images, assumed to be captured sequentially and to contain a comet which, assuming we captured the images while guiding on a star (not on the comet), will appear at different coordinates on each subframe. The tool allows us to define the location of the comet in the first and last frames, and takes care of the rest.

When to use CometAlignment

Obviously, we need a sequence of single images capturing a comet cruising through the cosmos. CometAlignment is useful whether we want to produce an image just aligned on the comet while displaying trailing stars, or a composite where both stars and comet are aligned. Being an alignment operation, it is highly recommended to use CometAlignment when the images are still linear, prior to any image integration process.

Parameters

Target Frames

Use the button "Add Files" or drag the files and drop them over the table.

#: Each image in the sequence is numbered. We can also double-click here to set the reference image. If no reference image is set, the first image will be used as the reference.

?: We can enable/disable any given image in the sequence by double-clicking on this icon.

File: The name of each file in the sequence. When a file has drizzle data associated with it, an indicator will be displayed before the filename.

DATE-OBS: This value is obtained from the DATE-OBS header of each file, a keyword usually found in FITS and XISF files that documents the time and date the image was acquired. When importing RAW files that don't include the DATE-OBS keyword, PixInsight internally simulates the DATE-OBS keyword, allowing such files to be included. Otherwise, if CometAlignment does not find this keyword, the file is rejected. If the file should have the keyword or had it at an earlier processing stage, find the process that removed it and circumvent it, if possible.

X/Y: Image coordinates in pixels of the comet's centroid (or core) for each image in the sequence. We can modify these values by either opening the image (double-click on its name in the Target Frames table) and clicking on the centroid, or by manually entering the values in the Parameters section below.

dSec: Time difference, in seconds, between the current image and the reference image.

dX/dY: Difference, in pixels, between the (x,y) coordinates of the current image and the reference image.

Add Files: CometAlignment only works with files, not views. Click here to add the files from our storage device.

Select All: Mark all files in the sequence as selected.

Add Drizzle Files: Click this button if, during "regular" image registration, we generated drizzle files (files with a .xdrz suffix that can be generated by StarAlignment) to add the corresponding .xdrz files. Note that CometAlignment will not recognize drizzle data created with the BatchPreProcessing script.

Clear Drizzle Files: Remove the association (not the actual files) between the drizzle data files and the target images.

Set Reference: Set the currently selected file as the reference image to which all other images will be aligned.

Invert Selection: Mark as selected all not-selected images and vice-versa.

Toggle Selected: Enable or disable the currently selected file(s) in the sequence.

Remove Selected: Completely remove the selected image(s) from the sequence.

Clear: Completely remove all images from the sequence. Useful to start over.

Full Paths: When enabled, the File column will display not only the file name but also the complete path on our storage device.

Format Hints

We can use format hints in CometAlignment to change the way files are loaded (input hints) or written (output hints).


Output

When executed, CometAlignment generates aligned copies of the input images. Here we specify where and how these files are created. By default, the newly created files will have the same filename as their source files.

Output directory: The folder where the newly created files will be saved.

Prefix: Insert a string of text at the beginning (prefix) of each filename. Default is blank: no prefix. The string must only contain characters that are valid in filenames for the operating system our computer is running.

Postfix: Add a string of text at the end of the filename, prior to the file extension (.xisf, .fits, etc.). The default is "_ca", as a hint that these are images created with CometAlignment.

Overwrite: When enabled, if a file with the same name already exists in the output directory, it is overwritten. If the same situation arises when this option is disabled, the new filename will have an underscore character followed by a numeric index appended: _1 the first time the same filename is found, _2 should it happen a second time, and so on.

Parameters

Here we can see and modify the coordinates for the first and last images in the sequence.

X/Y (first row): See or enter/modify the (x,y) coordinates of the comet's centroid for the first image in the sequence.

X/Y (second row): See or enter/modify the (x,y) coordinates of the comet's centroid for the last image in the sequence.

Show: We can click on either of the two Show buttons to bring up on the workspace the first or last image in the sequence.

dX/dY: Comet's velocity in pixels per hour for the X and Y coordinates respectively (a small sketch of the idea follows).
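A hedged sketch of what the first/last coordinates give us: a velocity in pixels per hour and, by linear interpolation over time, the per-frame shift needed to keep the comet on the reference position. All coordinates and timestamps below are made-up stand-ins.

import numpy as np

t_first, t_last = 0.0, 3.0                 # hours since the first exposure (assumed)
xy_first = np.array([512.0, 300.0])        # comet centroid in the first frame (assumed)
xy_last = np.array([548.0, 282.0])         # comet centroid in the last frame (assumed)

velocity = (xy_last - xy_first) / (t_last - t_first)   # roughly the dX/dY values, pixels per hour

def comet_offset(t_hours):
    """Shift (dx, dy) that moves this frame's comet back onto the reference position."""
    return -velocity * t_hours

print(velocity, comet_offset(1.5))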

Subtract

This section allows us to produce better starless or comet-less images by subtracting an image, which is usually the first integrated image we produce with CometAlignment without any subtraction. Therefore, during the first use of CometAlignment for a given data set, these parameters are usually left blank.

Operand image: This is the image that will be subtracted.

ImageIntegration: This option should be selected unless we're aligning drizzled data.

DrizzleIntegration: Select this option when aligning drizzled data.

Operand is – Stars aligned: The operand image is first subtracted and then aligned to the comet per the reference image. This is the desired option when the operand image is mostly star field data.

Operand is – Comet aligned: The image to be subtracted will first be aligned to the comet in the target image. Normally we select this option when the operand image only contains comet data.

Drizzle save – Stars aligned: If this option is enabled, when the operand image also has drizzled data associated with it, in addition to the (not aligned) images and drizzle files being created, CometAlignment will produce a set of images that are star aligned. Files in this set will have the "_r" postfix added to their filenames.

Drizzle save – Comet aligned: If this option is enabled, when using drizzled data, in addition to the (not aligned) drizzle image being created, CometAlignment will produce a set of images that are comet aligned. These files will have the Output's Postfix string added to their filenames.

ConvertToGrayscale Process > ColorSpaceConversion

This process does not contain a dialog box – that is, it's immediate and it's applied to the last active view. It will convert the last active view to grayscale. It can only be applied to color images.

ConvertToRGBColor Process > ColorSpaceConversion

Like ConvertToGrayscale, this process does not contain a dialog box and it's immediately applied to the last active view, converting it to RGB color. It can only be applied to grayscale images.


Convolution Process > Convolution

Convolution is a mathematical operation that, although often associated with softening our image data, can be used in many different image processing applications.

When to use Convolution The Convolution process is often used whenever we need to apply a low-pass or a high-pass filter to our image (often on a mask rather than on actual image data), but it can also be used for other purposes, such as edge detection, creating synthetic PSF functions or even multi-scale processing.

Parameters Convolution offers three different ways to define the convolution response function.

Parametric

Define a convolution, usually a Gaussian function, via parameters. StdDev: Standard deviation in pixels of the low-pass filter. Increasing the value of this parameter will produce a larger filter, making the convolution act at larger scales. Note that this parameter utilizes the dual high-precision sliders also found in other processes, where the upper slider is used to make broad adjustments from 10 to 250 and the lower slider is used for fine adjustments from 0.10 to 10. Shape: Here, we define the filter function distribution. A value of 2 produces a classic Gaussian convolution. Values smaller than 2 produce a sharper distribution, while values larger than 2 produce a flatter distribution.


Aspect ratio: Modify the aspect ratio of the function vertically. When this value is different from one, the Rotation parameter becomes available. Rotation: Rotation angle in degrees of the filter function.
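To make the roles of these parameters more concrete, here is a minimal sketch (plain Python with NumPy, not PixInsight's actual implementation) of how a parametric low-pass kernel driven by a standard deviation, shape, aspect ratio and rotation could be built; the function name and exact profile are illustrative assumptions:

    import numpy as np

    def parametric_kernel(size, std_dev, shape=2.0, aspect=1.0, rotation_deg=0.0):
        """Rough sketch of a parametric low-pass kernel. shape == 2 gives a
        Gaussian profile; smaller values give a sharper peak, larger values
        a flatter one. Hypothetical helper, not PixInsight code."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        t = np.deg2rad(rotation_deg)
        xr = x * np.cos(t) + y * np.sin(t)          # rotate the coordinate system
        yr = -x * np.sin(t) + y * np.cos(t)
        r = np.sqrt(xr ** 2 + (yr / aspect) ** 2)   # squash one axis for the aspect ratio
        k = np.exp(-0.5 * (r / std_dev) ** shape)
        return k / k.sum()                          # normalize so the filter preserves flux

    kernel = parametric_kernel(size=15, std_dev=2.0, shape=2.0, aspect=0.7, rotation_deg=30.0)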

Library

Convolution's Library mode acts as a filter library database where we can use stored filters, edit or remove them, or define new filters as well as new libraries. Here, filters are defined as typical kernel or separable filters. New filters are entered as text using a very specific syntax, starting by declaring whether the filter is a kernel or a separable filter. Kernel filters are very small matrices of filter coefficients, such as:

     0  1  0
    -1  0  1
     0 -1  0

Their syntax is as follows:

    KernelFilter {
       name { Edge Diagonal (3) }
       coefficients {
           0  1  0
          -1  0  1
           0 -1  0
       }
    }

Separable filters are defined as a row vector and a column vector that, when multiplied, produce the filter's kernel. For example, the following filter:


    -1  0  1
    -1  0  1
    -1  0  1

can be separated into the following row and column vectors:

    Row vector:     1.106682  0.000000 -1.106682
    Column vector: -0.903602 -0.903602 -0.903602

It would then be defined as:

    SeparableFilter {
       name { Prewitt Edge West (3) }
       row-vector {  1.106682  0.000000 -1.106682 }
       col-vector { -0.903602 -0.903602 -0.903602 }
    }
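As a quick sanity check of the separable form, the outer product of the column and row vectors above should reproduce the 3x3 kernel to within rounding. A small sketch, using NumPy outside of PixInsight:

    import numpy as np

    # Row and column vectors of the Prewitt Edge West (3) example above.
    row = np.array([1.106682, 0.000000, -1.106682])
    col = np.array([-0.903602, -0.903602, -0.903602])

    # A separable filter's kernel is the outer product of column x row.
    kernel = np.outer(col, row)
    print(np.round(kernel, 3))
    # Expected (approximately):
    # [[-1.  0.  1.]
    #  [-1.  0.  1.]
    #  [-1.  0.  1.]]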

All filters must have an odd size equal to or greater than 3 (so that there is always a single centered coefficient) and must be square. Library file: Path and file name of the currently loaded filter library. Save as: Save the current filter library under a new filename. New: Create a new, empty filter library. Default: Load the default filter library, which is usually found under /library/default.filters. Filter: This pull-down option lists all available filters in the current filter library. High-pass rescaling: A convolution often produces out-of-range pixel values, generally negative values (clipped) but sometimes also values above one (saturated). When this option is left disabled (the default), all out-of-range values are truncated. When enabled, if the resulting image has out-of-range values, it will be rescaled to the [0,1] range. This will produce a more defined convolution at the expense of less contrast. Coefficients: Display the kernel coefficients of the currently selected filter in a list format. Edit: Edit the current filter, using the syntax indicated earlier. New: Create a new filter. Remember to use the syntax explained above.

Remove: Remove the selected filter from the current library. This action cannot be undone.

Image

The Image section allows us to use an image to define the convolution filter. The image can be a single image or a preview. This method is often used for analysis and simulation purposes, whereas the other sections (Parametric and Library) are usually applied during regular image processing sessions.

CosmeticCorrection Process > ImageCalibration

This module replaces hot and cold pixels with averaged values from neighboring pixels. CosmeticCorrection works best with a map image of defective pixels or a master dark frame, although an auto-detect mode is also provided to clean any remaining hot or cold pixels, as well as a way to manually enter defective rows or columns, or just sections of them.

When to use CosmeticCorrection CosmeticCorrection should be applied after image calibration. Although it's not a mandatory process, it can definitely help in effectively removing any remaining “bad” pixels that are sometimes still present after proper calibration. We should examine our calibrated frames up close and make an assessment as to whether we should apply CosmeticCorrection.

Parameters

Target Frames

Add Files: CosmeticCorrection only works with files. Click here to add the calibrated (but not aligned) files to be corrected. Select All: Mark all files in the list as selected. Invert selection: Mark as selected all not-selected images and vice-versa. Toggle Selected: Enable or disable the currently selected file from the list.

Remove Selected: Completely remove the selected image(s) from the list. Clear: Completely remove all images from the list. Useful to start over. Full Paths: When Full paths is enabled, the File column will not only display the file name but also the complete path in our storage device.

Output

When executed, CosmeticCorrection will generate the corrected copies of the input images in the directory specified here. By default, newly created files will have the same filename as their source files. Output directory: The folder where the newly created files will be saved. Prefix: Insert a string of text at the beginning (prefix) of each filename. Default is blank: no prefix. The string must only contain characters that can be used in filenames by the operating system our computer is running. Postfix: Add a string of text at the end of the filename, prior to the file extension (.xisf, .fits, etc.). The default is "_cc", to remind us these are images created with CosmeticCorrection. CFA: Enable this option if the images to be corrected are color images (CFA, OSC). Leave disabled otherwise. Overwrite: When enabled, if a file already exists in the output directory with the same name, overwrite it. If the same situation arises when this option is disabled, an underscore character followed by a numeric index is appended to the new filename: _1 the first time a duplicate filename is found, _2 should it happen a second time, etc. Amount: How much, on a scale from zero to one, the correction will be applied to the "bad" pixels, zero meaning no correction is applied at all, and one indicating that a full correction will be applied.

Use Master Dark

CosmeticCorrection can use a master dark frame to identify where hot and cold pixels are. In this section we select the master dark frame as well as adjust, if necessary, the thresholds for the hot and cold pixels. Hot Pixels Threshold Once the dark frame has been defined, the parameters in this subsection will be populated with approximately ideal values for a proper correction, although it's always a good idea to fine-tune the parameters via experimentation. Enable: Enable detection of hot pixels from the master dark frame. When adjusting any of the following three parameters (Level, Sigma and Qty), the other two are recalculated automatically. Level: In the [0,1] range, define the pixel clipping value. Anything equal or above this value will be considered a hot pixel. Sigma: This parameter indicates the standard deviation from the mean of the selected Level (above) threshold. In order to reach the ideal value, it's better to define a Preview on one of the images, make that Preview visible (click on the Preview's image tab), activate Real-Time Preview and start decreasing Sigma's value from its default value of 50 until we are happy. Be careful, as too low Sigma values may “correct” more pixels than necessary. Qty: This is a count of the number of hot pixels that will be corrected. Cold Pixels Threshold Same as above, but for cold pixels. Enable: Enable detection of cold pixels from the master dark frame.


Level: In the [0,1] range, define the pixel clipping value for cold pixels. Anything equal to or below this value will be considered a cold pixel. Sigma: As its "hot pixel" counterpart, Sigma here indicates the standard deviation from the mean of the selected Level threshold. Qty: This is a count of the number of cold pixels being corrected. Don't be surprised if the count of cold pixels is radically smaller than the count of hot pixels, as hot pixels are a much more common occurrence in astronomical images.

Use Auto detect

The Auto detect mode allows us to identify bad pixels based on the clipping values of the target images, instead of relying on a master dark frame. Hot Sigma: When enabled, we can define numerically how different a pixel value must be from its surrounding pixels to be considered a hot pixel. The lower the value, the more aggressive CosmeticCorrection is at detecting and removing hot pixels. Cold Sigma: When enabled, we can define numerically how different a pixel value must be from its surrounding pixels to be considered a cold pixel. Here, too, a lower value will attempt to detect and correct more cold pixels.
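For intuition only, the following sketch (NumPy/SciPy, hypothetical thresholds and helper name, not CosmeticCorrection's actual algorithm) flags pixels that deviate from their local neighborhood by more than a given number of sigmas and replaces them with the neighborhood median:

    import numpy as np
    from scipy.ndimage import median_filter

    def auto_correct(img, hot_sigma=3.0, cold_sigma=4.0):
        """Sketch of sigma-based hot/cold pixel cleanup on a 2-D float image."""
        local_med = median_filter(img, size=5)       # neighborhood estimate
        residual = img - local_med
        sigma = np.std(residual)                     # crude global noise estimate
        hot = residual > hot_sigma * sigma           # much brighter than neighbors
        cold = residual < -cold_sigma * sigma        # much darker than neighbors
        out = img.copy()
        out[hot | cold] = local_med[hot | cold]      # replace with the local median
        return out, int(hot.sum()), int(cold.sum())

    # Example on random data with one injected hot pixel:
    rng = np.random.default_rng(0)
    frame = rng.normal(0.1, 0.01, (64, 64))
    frame[32, 32] = 1.0
    cleaned, n_hot, n_cold = auto_correct(frame)
    print(n_hot, n_cold)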

Use Defect List

This section allows us to accurately define known, previously saved sensor defects: pixels, columns, rows and sections of columns or rows. Once we have defined a number of defects, we can save them in a text file that can later be loaded for future use. The format for a defect list file is quite simple. Each line defines a defect, starting with either Col or Row, followed by the Defect coordinate (explained below). If the defect includes a Limit (also explained below), the two numbers defining the limit follow. For example:

    Col 1322 25 50
    Row 475

The above sample defines two defects. The first one occurs in column number 1322, but only on the (Y) pixels 25 to 50. The second defect marks the entire row number 475 as defective. Load: Load a defect list from a file. The file should have been previously created and saved with CosmeticCorrection – unless we understand the syntax and decide to edit it ourselves, knowing what we're doing.

Save: Save the current list of defects to a text file. Select All: Mark all defects in the list as selected. Invert selection: Mark as selected all not-selected defects and vice-versa. Toggle Selected: Enable or disable the currently selected defects from the list. Remove Selected: Completely remove the selected defect(s) from the list. Clear: Completely remove all defects from the list. Useful to start over. Defect coordinate: This is where we enter the X (if a column) or Y (if a row) sensor coordinate where the defect occurs. Col/Row: Select whether we're adding a column defect (more common) or a defect in a row. Limit: If the defect does not span the entire column/row, enable this option and, in the next two text boxes, enter the first and last pixels (Y coordinates if a column, X if a row) that are defective. Add defect: Add the defect defined in the above parameters to the list of defects.

Real Time Preview

When we open the Real-Time Preview window, we can look at this section to get a count of the number of pixels being affected. This can be useful when the Real-Time Preview is, in fact, a preview rather than the complete image. Show map: When enabled, rather than the Real-Time Preview displaying our corrected image, it will display a map showing the pixels being detected as bad. Snapshot: Create a new view (image) displaying the current contents in the Real-Time Preview window.


CreateAlphaChannels Process > ChannelManagement

The CreateAlphaChannels process will add a new alpha channel with the luminance of the target image, or with a constant transparency value. Optionally one can use this process to replace an existing alpha channel.

When to use CreateAlphaChannels Alpha channels are channels additional to the nominal channels of an image, generally used to define the transparency of the image. CreateAlphaChannels is therefore useful when we want to define a particular transparency for a given image. See also ExtractAlphaChannels, which allows us to extract the alpha channel(s) from an image.

Parameters From Image: When enabled, the alpha channel is created from the luminance of the image specified in the view selection list. Invert: When enabled, the alpha channel will be created from the inverted luminance of the source image. Close source: If enabled, close the image used as the source image once the alpha channel has been created. From Transparency: When enabled, the alpha channel is defined as a constant transparency value, specified in the box/slider below. Replace existing alpha channels: When enabled, the newly created alpha channel will replace any existing alpha channel.


Crop Process > Geometry

Crop is used to perform a fixed crop to an image with detailed precision. To perform a more visually defined crop (i.e. using the mouse to draw the cropping area over the image), use the DynamicCrop module (a different process described later). Before performing the crop, we should select the view we will be cropping, using the view selector in the Crop dialog (which by default indicates that no view is selected).

When to use Crop Cropping an image is something that can arise at different points of almost any image processing workflow. In astronomical images, cropping is of utmost importance after having aligned (and perhaps integrated) a set of single, calibrated images, in order to remove unwanted edges that are almost always generated after star alignment. Cropping may also be one of the last things we do to an image, as we define its final framing after the image has been fully processed. That said, while the Crop tool offers unique features – such as cropping based on margins and anchors – and is probably a better choice for scripts that require cropping, the dynamic behavior of DynamicCrop is often preferred, as it's a lot more intuitive and yet very precise.

Parameters

Margins/Anchors

We can enter the margins for the crop in the provided boxes. To define a margin on the left, enter the amount in the left box. To define a margin on the top, enter the amount in the top box, and so on. Then, we can choose between eight different anchors to indicate in which direction the image will shift after the crop. Use the arrow icons to change the margin distribution on each side of the image.

Dimensions

Height/Width: In these boxes (different measurement units provided) we enter the dimensions of the cropped area.

Resolution

Horizontal/Vertical: Horizontal and vertical resolution of the target image in pixels per inch/cm. Centimeters/Inches: Select centimeters if the resolution entered in the Horizontal and Vertical parameters is in pixels per centimeter; select inches if it is in pixels per inch. Force Resolution: When selected, this option also changes the resolution and resolution unit of the target image.

Process Mode

Crop Mode: Four options are available:

•	Relative margins: The margins indicated in the Margins/Anchors section are assumed to be relative to the target image.
•	Absolute margins in pixels: The margins indicated in the Margins/Anchors section are assumed to be absolute to the target image, and measured in pixels.
•	Absolute margins in centimeters: Same as above, but measured in centimeters.
•	Absolute margins in inches: Same as above, but measured in inches.
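As a quick illustration of the difference between relative and absolute margins (a sketch with hypothetical numbers, assuming relative margins are expressed as a fraction of the corresponding image dimension; this is not the Crop implementation itself), consider a 4000 x 3000 pixel image:

    # Hypothetical example: effect of Crop margins on a 4000 x 3000 image.
    width, height = 4000, 3000

    # Absolute margins in pixels: a margin of -100 on every side crops
    # 100 pixels off each edge.
    abs_margin = -100
    abs_result = (width + 2 * abs_margin, height + 2 * abs_margin)    # (3800, 2800)

    # Relative margins: the same -0.025 margin is interpreted here as a
    # fraction of the image dimensions, removing 2.5% of width/height per side.
    rel_margin = -0.025
    rel_result = (round(width * (1 + 2 * rel_margin)),
                  round(height * (1 + 2 * rel_margin)))               # (3800, 2850)

    print(abs_result, rel_result)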

Fill Color

When the cropped image requires expansion beyond the limits of the source image – that is, we indicated dimensions that cover an area that does not exist in the source image – new pixels are added at the corresponding sides with the color specified in this section (RGB and Alpha values).


CurvesTransformation Process > IntensityTransformations

CurvesTransformation implements a set of transfer curves that can be applied to selected channels of images. A transfer curve in PixInsight is an interpolated function applied to each pixel of an image. We define input and output values for a set of points. Then for each pixel, the current pixel value is used to interpolate a new value from the set of given points, and the interpolated value replaces the original.

CurvesTransformation is a particularly powerful implementation for curve editing, as it can adjust up to ten different channels, properly isolated from the rest via internal color space transformations. It also includes a zooming feature (to edit the curve with as much precision as we need) and other useful features, described in the Parameters section.

When to use CurvesTransformation Curve editing is a fundamental part of image processing in general, and astroimage processing is no exception. Curves are most often used when our image is no longer linear; however, curves can also be applied to linear images, whether to delinearize them or to adjust the black and white points while maintaining image linearity. Although curves (and processes based on curve adjustments) have in the past been one of the methods of choice for performing the first nonlinear stretch in astronomical images, today there are many other tools that can do a much better job, leaving CurvesTransformation for nonlinear adjustments of one or several of the many channels offered. The reasons leading to using CurvesTransformation are countless, whether applied to a working image, a mask, an auxiliary image, etc., but most of the time CurvesTransformation is used to adjust:

•	All of the nominal RGB channels (at once) or the lightness component (from CIE L*a*b*), with the purpose of adjusting brightness and contrast.
•	RGB channels individually. This will change the color appearance of our image; it was a popular method to color balance an image many years ago, when other tools were not available.
•	Saturation (from HSVL*), as it provides an easy and effective way to adjust overall color saturation in a color image.

Adjustments to the other provided channel components are not as common.

Parameters Current curve: The curve being modified. We can easily spot it because only the current (active) curve will display the locations of its points. Current point: When we modify a curve, we do so by defining one or more curve points. Other modified curves: We can visualize other modified curves, or keep them out of the way (see Display Options below). Point edit controls: Edit, Select and Delete. Edit mode allows us to add new points and move existing points. Select allows us to select a different current point – we cannot do anything to it, but it's useful for example to examine its coordinates. To delete a point, we click on the Delete control, move the mouse over the point we want to delete, then click on it. Channel selectors/Component Selection: PixInsight's CurvesTransformation allows us to modify many different channels, discussed below. Current point coordinates: On a scale from zero to one, it shows the current point coordinates. Point selector: The small arrows allow us to travel from one point to another.


Curves cursor: As we move the mouse over the curve editing area, the cursor informs us where we are. Zoom edit controls: The first control (4 arrows pointing out) allows us to zoom in the curve editing area. We click on the control, move the mouse over the curve editing area and click to zoom in. The second control (4 arrows pointing in) allows us to zoom out. The third control allows us to move (by clicking and dragging over the curve editing area) over a zoomed-in editing area. The fourth control allows us to directly enter the zoom factor, from one to 99. Zoom reset button: By clicking this button, we reset the curve editing area back to scale one. Display options: The button on the left switches on/off the display of all modified curves, except for the current curve being edited. This is useful for comparison and reference purposes, for example. The button on the right toggles on/off the background grid. Curve options: The button on the left allows us to temporarily save a curve. The next button will restore the curve to the saved position. The third button will reverse the curve, and the fourth button will reset the curve to its original position. Interpolation options: Select which interpolation option we wish PixInsight to apply when reading our curve modifications. The Akima Subspline interpolation is considered the closest to how a person would draw a curve, only taking the nearest data points into account when the curve is determined at a certain position. The Cubic Spline interpolation still produces a smooth curve function, but may require additional points to force smooth segments in some situations. The Linear interpolation may require a lot of points to be properly defined and is not usually the choice for most curve editing needs.
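To get a feel for how the three interpolation options read the same set of curve points differently, here is a small sketch using SciPy's general-purpose interpolators as stand-ins (these are not PixInsight's own implementations, and the curve points are hypothetical):

    import numpy as np
    from scipy.interpolate import Akima1DInterpolator, CubicSpline

    # A hypothetical transfer curve defined by a few (input, output) points.
    x_pts = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
    y_pts = np.array([0.00, 0.40, 0.70, 0.90, 1.00])

    akima = Akima1DInterpolator(x_pts, y_pts)   # local, "hand-drawn" behavior
    cubic = CubicSpline(x_pts, y_pts)           # smooth, influenced by all points
    pixels = np.linspace(0.0, 1.0, 5)           # sample pixel values in [0,1]

    print("akima :", np.round(akima(pixels), 3))
    print("cubic :", np.round(cubic(pixels), 3))
    print("linear:", np.round(np.interp(pixels, x_pts, y_pts), 3))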

Curve Channels

To edit the curve of a particular channel, select it from the available channel selector buttons. R, G and B: The red, green and blue channels, respectively. RGB/K: All three RGB (or grayscale) channels. A: The alpha channel.


L: The CIE L* component (luminance) from the CIE L*a*b* color space. Luminance curves can be very helpful when uniformly illuminated areas with little hue variations hide image detail. a: The CIE a* component. b: The CIE b* component. c: The CIE c* component. H: Hue. Because the hue curve is performed by transforming each pixel to the HSV space, and the HSV space is somewhat color blind, hue curves must be used with some care to avoid wild luminance/chrominance variations. S: Saturation. The saturation curve in CurvesTransformation varies saturation as a function of itself, meaning we can achieve the effect of increasing saturation for unsaturated pixels without modifying already saturated ones. The effect of a saturation curve is quite smooth, controllable, and is guaranteed to preserve color balance.

Debayer Process > Preprocessing

The Debayer tool is intended to be used with images captured with one-shot color (OSC) devices. It will debayer a list of OSC RGB images (all pattern types supported), applying bilinear interpolation, super-pixel, or VNG demosaicing methods.

When to use Debayer Obviously, we would only use Debayer on data captured with a color (OSC) camera. Debayering should only be applied on images that have already been fully calibrated and cosmetically corrected. Once applied, we can continue our processing treating our newly debayered images as regular RGB images.

Parameters Bayer/Mosaic Pattern: Different OSC cameras have different patterns. Here we indicate the pattern that corresponds to the camera used to capture the image to be debayered.

Demosaicing method:

•	Bilinear: This method interpolates green and red/blue pixels, generating a high-quality full-sized image (no resolution/size loss). It uses a 3x3 pixel matrix.
•	SuperPixel: This method takes a 2x2 matrix (4 pixels) and uses them as the RGB values for a single pixel. This creates half-sized images of good quality and works faster.
•	VNG: The Variable Number of Gradients method (default) uses a 5x5 pixel matrix around each pixel in the image. In practical terms, VNG generates images where edges are better preserved than when using the Bilinear method, for example, and with less color noise and artifacts.
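To illustrate the idea behind the SuperPixel method (a sketch assuming an RGGB Bayer pattern and NumPy; this is not the Debayer module's code), each 2x2 cell of the mosaic becomes one RGB pixel, averaging the two green samples:

    import numpy as np

    def superpixel_rggb(cfa):
        """Sketch of SuperPixel demosaicing for an RGGB mosaic.
        cfa is a 2-D array with even dimensions; returns a half-sized RGB image."""
        r  = cfa[0::2, 0::2]                   # red samples
        g1 = cfa[0::2, 1::2]                   # first green sample
        g2 = cfa[1::2, 0::2]                   # second green sample
        b  = cfa[1::2, 1::2]                   # blue samples
        return np.dstack([r, (g1 + g2) / 2.0, b])

    mosaic = np.random.default_rng(1).random((8, 8))
    rgb = superpixel_rggb(mosaic)
    print(rgb.shape)   # (4, 4, 3): half the size in each dimension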

Evaluate noise: When enabled, Debayer will calculate noise estimates for each demosaiced image and store these values as metadata in the demosaiced image, via the NOISExxx FITS keywords. Noise evaluation: Define the noise evaluation algorithm. Debayer currently offers two algorithms:

•	Multiresolution Support: MRS is the default algorithm and should be the better choice for most cases. It only looks at the first four wavelet layers, where noise is likely to reside.
•	Iterative K-Sigma Clipping: Use K-Sigma only on images that have virtually no small-scale noise and that, therefore, cannot be properly evaluated via MRS.

Save as Default: Saves the selected pattern and method options so that future uses of the Debayer tool will have them as defaults. Restore from Default: Restores the pattern and method options back to the saved defaults, in case we have changed them and would like to go back.


Target Images

Add Files: Debayer can operate on a list of files, rather than just one at a time. Click here to add the files to be debayered. Select All: Mark all images in the list as selected. Invert selection: Mark as selected all not-selected images and vice-versa. Toggle Selected: Enable or disable the currently selected image from the list. Remove Selected: Completely remove the selected image(s) from the list. Clear: Completely remove all images from the list. Full Paths: When Full paths is enabled, the File column will not only display the file name but also the complete path in our storage device.

Format Hints

We can use format hints in Debayer to change how the target images are loaded (input hints) or how the debayered files are written (output hints).

Output

When executed, Debayer will generate the debayered copies of the input images in the directory specified here. By default, newly created files will have the same filename as their source files. Output directory: The folder where the newly created files will be saved. Prefix: Insert a string of text at the beginning (prefix) of each filename. Default is blank: no prefix. Postfix: Add a string of text at the end of the filename, prior to the file extension (.xisf, .fits, etc.). The default is "_d", as a reminder that these are debayered files. Overwrite: When enabled, if a file already exists in the output directory with the same name, overwrite it. If the same situation arises when this option is disabled, an underscore character followed by a numeric index is appended to the new filename: _1 the first time a duplicate filename is found, _2 should it happen a second time, etc. On error: What should Debayer do if it encounters an error while processing the target images? Continue (ignore the error and proceed with the next image), Ask User whether to continue or not, or directly Abort.

Deconvolution Process > Deconvolution

In the imaging world, deconvolution is the process of reversing the optical distortion that takes place during data acquisition, thus creating clearer, sharper images. Deconvolution works by undoing the smearing effect caused to an image by a previous convolution with a given PSF (Point Spread Function). The Deconvolution tool is PixInsight's implementation of Richardson-Lucy and Van Cittert deconvolution algorithms, complemented with wavelet-based regularization and deringing algorithms. Regularized deconvolution works by separating significant image structures from the noise at each deconvolution iteration. Significant structures are kept and the noise is discarded or attenuated. This allows for simultaneous deconvolution and noise reduction, which leads to strong deconvolution procedures that yield greatly improved results when compared to traditional or less sophisticated implementations. Unless used for a purpose other than true deconvolution, the Deconvolution tool should only be used on linear images.

When to use Deconvolution Although deconvolution is a process usually associated with effects like sharpening, it is important to understand that deconvolution is a reconstructive process, not a detail-enhancement process. For that reason, deconvolution needs to be performed on linear data, fully calibrated, aligned and integrated, preferably on data with relatively high SNR, so that the deconvolution algorithms act on data that responds well to the PSF being applied. Sometimes deconvolution is applied after a linear noise reduction, to avoid deconvolving noise, although the Deconvolution process in PixInsight has ways to avoid deconvolving noise, mainly via regularization, explained below. When deconvolving the luminance of a color image, Deconvolution uses the Y component of CIE XYZ as luminance. Due to this color space conversion to extract the luminance, a linear RGB working space (RGBWS) is required. Therefore, prior to the deconvolution, RGBWorkingSpace should be applied to the image with the Gamma value set to one, unless those are already our default RGBWS values. See RGBWorkingSpace for more information on RGB working spaces. In addition to these rather rigorous applications, the Deconvolution process can also be used later in the workflow, when the image is no longer linear. While this will not execute a true deconvolution, and it may be argued that the tool is being underused, it can act as a decent sharpening tool later in the process, although mostly for cosmetic or visual purposes. Deconvolution is most effective when applied to high-signal areas in the image. Even though the regularization algorithms do a good job at protecting non-significant structures, it is still recommended to use a linear mask.

Parameters

PSF

Deconvolution provides three ways to define the type of PSF for the deconvolution algorithms: Parametric PSF This was the most commonly used deconvolution method until the process DynamicPSF came along. Here, we define the PSF parametrically, that is, by adjusting the value of a few parameters. This method attempts to deconvolve the most common convolution distortions found in astronomical images, such as those caused by atmospheric turbulence. StdDev.: The value for the standard deviation of the PSF distribution. Increase to apply deconvolution at larger structure scales. Shape: Controls the peak sharpness of the PSF profile. When this value is smaller than 2, the PSF has a prominent central peak. When it's greater than 2, the PSF has a much flatter profile. When the value is equal to 2, we have a pure normal (or Gaussian) distribution. Aspect ratio: Aspect ratio of the PSF (vertical/horizontal aspect ratio).


Rotation: Rotation angle of the distorted PSF in degrees. It is only active when the value for the aspect ratio is smaller than one. Motion Blur PSF We can use the Motion Blur PSF in cases where we have some tracking errors, or similar situations that generate unidirectional motion distortions. Length: Value of the PSF motion length, in pixels. Angle: Rotation angle of the PSF motion length in degrees. External PSF We use this option when we want to define the PSF based on an existing image. For best results, we should use an image defined by the process DynamicPSF, this being the preferred way to use Deconvolution. In theory, the image of a star could be used, but in practice, the results may not be good. Also, it is important that the star is very well centered on the image to be used as PSF, or the deconvoluted image will be shifted. When using a star image created by DynamicPSF, the star will already be centered. View Identifier: The view (image) selected to define the external PSF. Algorithms

In this section we define the deconvolution algorithm we wish to apply. Deconvolution provides two options and their regularized versions: Richardson-Lucy: In general, Richardson-Lucy is the algorithm of choice for deconvolution of deep-sky images. Van Cittert: The Van Cittert algorithm is extremely efficient for deconvolution of high-resolution lunar and planetary images due to its ability to enhance very small image structures. Regularized Richardson-Lucy: Regularized version of the Richardson-Lucy algorithm (read the regularization section below to learn more about regularization). Regularized Van Cittert: Regularized version of the Van Cittert algorithm (again, read below to learn more about regularization). Iterations: Maximum number of deconvolution iterations. See the parameter Convergence below, in the Regularization section.

Target: Apply the deconvolution either to the luminance of the target image only, or to the RGB components. If luminance is selected when deconvolving a color image, a color space conversion to the CIE XYZ color space is performed, and CIE Y (the luminance component in the CIE XYZ color space) is used.
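For readers who want to see what a Richardson-Lucy iteration looks like in its plain, unregularized textbook form, here is a compact sketch (NumPy/SciPy on synthetic data; it deliberately omits the regularization, deringing and convergence logic described below and is not PixInsight's implementation):

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, iterations=20):
        """Plain Richardson-Lucy deconvolution (no regularization)."""
        psf = psf / psf.sum()
        psf_flipped = psf[::-1, ::-1]
        estimate = np.full_like(observed, observed.mean())
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)   # avoid division by zero
            estimate *= fftconvolve(ratio, psf_flipped, mode="same")
        return estimate

    # Tiny synthetic test: blur a point source and try to recover it.
    truth = np.zeros((64, 64)); truth[32, 32] = 1.0
    y, x = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
    blurred = fftconvolve(truth, psf / psf.sum(), mode="same")
    restored = richardson_lucy(blurred, psf, iterations=30)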

Deringing

For more detailed information about ringing artifacts and deringing, please review the documentation about the topic in MultiscaleLinearTransform. Deconvolution offers two deringing methods: global and local deringing. Global deringing is similar to the deringing features used in other process tools such as ATrousWaveletTransform, UnsharpMask and RestorationFilter, and it usually degrades the result of deconvolution. Local deringing improves protection around small-scale, high-contrast features and requires a deringing support image, which is basically a star mask, except that it works differently than if we applied the mask directly to the image: the mask is used individually at every iteration of the deconvolution, as opposed to simply controlling the overall transparency of the deconvolution. Global dark: Global deringing regularization strength to correct dark ringing artifacts. Increase to apply stronger protection. Global bright: Global deringing regularization strength to correct bright ringing artifacts. Local deringing: Enable this option to apply deringing by using a deringing support image, usually a star mask. Local support: Specify the identifier of an existing view (opened image) to be used as the deringing support image. It must be a grayscale image with the same dimensions as the target image. The deringing support is optional; if we don't specify it, the global deringing algorithm will be applied uniformly to the whole image. The deringing support allows us to use a star mask to drive a local deringing algorithm that can enhance protection of stars and other high-contrast, small-scale image structures. Local amount: Local deringing regularization strength. This value multiplies the deringing support image (internally only; the support image is not modified at all). This way, we can modulate the local deringing effect.


Wavelet regularization

These parameters define how the algorithms perform the separation between significant image structures and the noise at each deconvolution iteration, and how noise is controlled and suppressed during the whole procedure. Noise model: The regularization algorithms assume a dominant statistical distribution of the noise in the image. By default, Gaussian white noise is assumed, but we can select a Poisson distribution. In general, we'll see little difference, if any, between the results obtained under both noise models. Wavelet layers: This is the number of regularization wavelet layers used to decompose the data at each deconvolution iteration. This parameter can vary between one and four layers, but we should keep it as low as possible to cut noise propagation well without destroying significant structures. In most cases the default value of two layers is appropriate. Next to the wavelet layers parameter, we can specify the wavelet scaling function to be used. This identifies a low-pass kernel filter used to perform wavelet transforms. The default B3 Spline function is the best choice in most cases. A sharper function, such as Linear, can be used to gain more control over small-scale noise, if necessary. The Small-Scale function is mostly experimental. Noise threshold: Regularization thresholds in sigma units. In other words, here we specify a limiting value such that only those pixels with lower values can be considered as pertaining to the noise in a given wavelet layer. The higher the threshold value, the more pixels will be treated as noise for the characteristic scale of the wavelet layer in question (either 1, 2, 4, 8 or 16 pixels); that is, larger thresholds will apply noise reduction to more structures at each wavelet scale. Each row indicates the noise threshold values for 1, 2, 3, 4 and 5 pixel layer structures, respectively. Only the rows indicated by the Wavelet layers parameter are available. Noise reduction: Regularization strength per iteration. These values represent the strength of the noise reduction procedure that is applied to noisy structures in each wavelet layer. A value of one means that all noise structures will be completely removed. Smaller values will attenuate but not remove them. A value of zero means no noise reduction at all. Each row indicates the noise reduction values for 1, 2, 3, 4 and 5 pixel layer structures, respectively. Only the rows indicated by the Wavelet layers parameter are applicable. Convergence: Automatic convergence limit in differential sigma units. A property of regularized deconvolution is that the standard deviation of the deconvolved image tends to decrease during the whole process. When the difference in standard deviation between two successive iterations is smaller than the convergence parameter value, or when the maximum number of iterations is reached – whichever happens first – the deconvolution procedure terminates. When this parameter is zero (the default value), there is no convergence limit and the deconvolution process will perform the specified maximum number of iterations, regardless. Disabled: Disable automatic convergence, that is, set the value of Convergence to zero. In that case, Deconvolution will perform the specified maximum number of iterations regardless.

Use these sliders to increase the range of values that are kept and rescaled to the [0,1] standard range, and adjust for saturation during the deconvolution process. Low Range: Shadows dynamic range extension. High Range: Highlights dynamic range extension.

DefectMap Process > ImageCalibration

DefectMap is a simple tool to replace defective pixels with values derived from the values of the pixel's neighboring pixels. The defective pixels are defined in an image, which acts as our defect map reference, where black pixels (pixels with a value of 0) define defective pixels, and other values are interpreted differently depending on the Operation parameter, explained below.

When to use DefectMap Anytime we notice that our calibrated images still display hot or cold pixels. While CosmeticCorrection is a much more versatile tool for this procedure, DefectMap can be used when we're after a simple pixel correction operation.

Parameters Defect map: This is the view that will act as a defect map.


Operation: Define how the new pixel value will be calculated.

•	Mean: The average value of neighboring pixels.
•	Gaussian: Similar to the Mean operation in that pixels are averaged, but following a Gaussian distribution.
•	Minimum: Use the minimum value found among the neighboring pixels.
•	Maximum: Use the maximum value found among the neighboring pixels.
•	Median: The median value of neighboring pixels.

Structure: Define the shape of the area around the defective pixel.

•	Square: The neighborhood pixel area is of a square shape, around the defective pixel.
•	Circular: The neighborhood area is of circular shape.
•	Horizontal: Only pixels in the same row are considered to be neighbors.
•	Vertical: Only pixels in the same column are considered to be neighbors.
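The following sketch (NumPy/SciPy, hypothetical helper, not the DefectMap module itself) shows the core idea: wherever the defect map is black, replace the pixel with a statistic of its neighborhood, here the median over a square neighborhood:

    import numpy as np
    from scipy.ndimage import median_filter

    def apply_defect_map(image, defect_map, size=3):
        """Replace pixels marked as defective (defect_map == 0) with the median
        of a size x size square neighborhood. Sketch only; a real implementation
        would exclude the defective pixels themselves from the statistic."""
        neighborhood = median_filter(image, size=size)
        out = image.copy()
        bad = defect_map == 0
        out[bad] = neighborhood[bad]
        return out

    img = np.random.default_rng(2).random((16, 16))
    dmap = np.ones_like(img)
    dmap[5, 7] = 0          # mark one defective pixel
    fixed = apply_defect_map(img, dmap)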

CFA images: Enable if the image to which we're going to apply the process is a color image.

DigitalDevelopment Process > Obsolete

DigitalDevelopment or DDP works by compressing the range of brightness between the bright and dim portions of an image. PixInsight's DDP implementation works like this: a low-pass Gaussian filter is applied to the image, and a pedestal acting as a break point is added to every pixel of this filtered image. The value of the original image is then divided by this value, and a more general pedestal is added. After this is done, a high-pass filter is applied (edge emphasis), and finally a mask-based color emphasis technique is applied to correct the loss of chrominance caused by the dynamic range compression performed by the DigitalDevelopment process.
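As a rough sketch of the hyperbolic DDP transfer just described (Python/NumPy with a Gaussian blur from SciPy; the parameter names and values are illustrative, not the module's actual defaults, and the edge and color emphasis steps are omitted):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ddp_sketch(image, break_point=0.01, base_pedestal=0.0, blur_sigma=2.0):
        """Very rough DDP-style stretch: divide the image by a pedestal-shifted
        low-pass version of itself."""
        lowpass = gaussian_filter(image, sigma=blur_sigma)
        out = image / (lowpass + break_point) + base_pedestal
        return out / out.max()          # rescale to [0,1] for display

    linear = np.random.default_rng(3).random((32, 32)) ** 4   # fake linear data
    stretched = ddp_sketch(linear)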

When to use DigitalDevelopment PixInsight offers several tools that are more powerful and versatile than DigitalDevelopment for dealing with this common problem in astroimages, which is why the process is included under the Obsolete process group. If we would still like to use DigitalDevelopment, it is generally used to perform the first nonlinear stretch of an image.

Parameters

DDP Filter

Curve Break Point: The break point is a pedestal that acts as a breaking or turning point in the hyperbolic function defined by DDP. Curve Base Pedestal: An additive element to the DDP operation. Use higher values to generate a brighter image. Edge Emphasis: Standard deviation of Gaussian filter. In more practical terms, this parameter defines the intensity of a very rough high-pass filter applied after the initial computation of the DDP is done. DDP Color Emphasis

When we apply DigitalDevelopment to an image, the dynamic range is highly compressed. This means that the chrominance is compressed too, and therefore the color contrast is decreased. The color emphasis operation helps compensate for this. It works by assigning to each of the RGB channels a mask based on another (or the same) channel. Red Mask: The pull-down control allows us to select the channel to which the red mask will be applied. When the luminance is used as the mask, no color compensation is done. Green Mask: Likewise, but for the green mask. Blue Mask: Same as above, for the blue mask in this case.


RGB/rgb Masks: Assign the RGB/rgb combination. This is the same as saying that the red mask will be applied to the red channel, the green mask to the green channel, and the blue mask to the blue channel. RGB/bgr Masks: Assign the RGB/bgr combination. This is equivalent to saying that the red mask will be applied to the blue channel, the green mask to the green channel, and the blue mask to the red channel. Luminance Masks: Assigns the three color masks to the luminance, so no color emphasis is, in fact, done.

Divide Process > Obsolete

Divide is a tool that now resides in the Obsolete process group with one single, very specific purpose: to apply flat-fielding to film images. In film astrophotography, dividing by a flat-field image usually yields incorrect results. This is due to the fact that film response is not linear, so when applying the flat-field correction we must know, as accurately as possible, the particular response functions of the film used. Since film response depends on a myriad of factors (temperature, humidity, development, digitization of the image, etc.), this is very difficult to do. Divide assists us, if not in applying a perfect flat-field correction, at least in making decent approximations by internally creating a synthetic flat-field image and dividing the target image by it.


When to use Divide As explained, Divide was developed as a method to apply flat correction to images captured with photographic film. We can still use it in the unlikely event that we capture images with film and decide to try this technique. However, the fact that film photography is virtually no longer used for astronomical photography, and that in the rare case we do use film there are other solutions that work just as well (if not better), such as background modeling via DBE, earned this process its place in the Obsolete category.

Parameters Operand Image: This is the image to be flat-field corrected. We can type the view identifier or select it from the pull-down menu. Operation: Divide offers three different workflows:

•	Plain Division: A simple division of the target image by the flat image is performed.
•	Image Linearization: When selected, the target image is linearized prior to the division, by applying an initial gamma function to both the original image and the flat image, and then delinearized after the division by using a symmetric gamma function.
•	Nonlinear Division: Using a nonlinear method is much more complex, but Divide attempts to attack the problem by allowing us to define the linear zone and continuity in our film image. When this option is selected, the three tabs below become available. Linear Zone: Only available when Nonlinear Division is selected; here we define the range of pixel values that define the linear zone. The linear zone is an approximate range within the available dynamic range where the film density is assumed to respond proportionally to the number of photons. We can set the lower and upper limits in the [0,1] range, or use statistical width limits with values from 0 to 10. Continuity/Amount: Continuity is the continuity degree (polynomial order) and Amount is the decay rate of the film linearity. Both values model the decay rate. To better understand these concepts: if the amount of photons exciting the film is very small, there is a decay that produces a nonlinear response; and if the amount of photons is very large, it is equivalent to reaching the saturation point, and again there is a nonlinear response right before full saturation.

Plot: A simple graph showing the response function. The integral plot is how CCD response graphs are typically shown. Normalization Factor: Here we define the normalization factor for the flat field. Generally, the Median or Mean value of the operand image is used, although we can also select the Minimum or Maximum value found in the operand image, as well as a fixed value (Custom Factor).

DrizzleIntegration Process > ImageIntegration

Drizzle is a technique – also known as variable-pixel linear reconstruction and originally developed for the Hubble Deep Field observations – that can perform a linear reconstruction of undersampled data, removing the effects of geometric distortion while preserving surface and absolute photometry and resolution. DrizzleIntegration is, as its name indicates, a process used to integrate drizzle data. It is not a substitute for ImageIntegration, but rather an additional step when we're aiming at producing a final drizzle integrated image.

When to use DrizzleIntegration DrizzleIntegration needs to be used after a number of other processes have been completed. Such processes are what produce the original drizzle data that will ultimately be used by DrizzleIntegration. Before using DrizzleIntegration on a data set, we must start with fully calibrated files and align them using StarAlignment with the Generate drizzle data option enabled. Then, the registered images need to be integrated using ImageIntegration, adding the drizzle (.xdrz) files to it and again enabling "Generate drizzle data". Now, finally, the .xdrz files are ready for DrizzleIntegration. Without this workflow, DrizzleIntegration won't work as intended, or it simply won't be able to work at all. As for whether we should drizzle our images or not, it highly depends on our goals for any given image, but there are a few requirements for drizzle to work:


1. First, dithered data is a requirement for the drizzle algorithms to work well. Therefore, the decision starts before capturing our data (unless we always dither... as we should!). 2. The data must be undersampled. This is why high resolution images of 1000 mm and larger focal lengths often benefit more from drizzling our data than images captured at shorter focal lengths. 3. Last, drizzle does add noise to our image, so a large number of subframes is also required.

Parameters

Input Data

Add Files: Click here to add the drizzle data files to the list. Add L.Norm. Files: Associate the drizzled input images with local normalization data files. These are files created with the LocalNormalization tool (.xnml). Clear L.Norm. Files: Remove all .xnml files from being associated with the drizzled data files. Select All: Mark all files in the list as selected. Invert selection: Mark as selected all not-selected files, and vice-versa. Toggle Selected: Enable or disable the currently selected image(s) from the list. Remove Selected: Completely remove the selected file(s) from the list. Clear: Completely remove all files from the list.


Static data targets: Use the full file path to each data file when associating local normalization files with drizzled data files, as opposed to just the filename. This may come in handy when dealing with files having the same filename but stored in different directories, but it has the disadvantage that future associations will depend on the files being at exactly the same complete file path. Full Paths: When Full paths is enabled, the File column will not only display the file name but also the complete path in our storage device.

Format Hints

As usual, whenever available, we can use format hints to change the way files are loaded (input hints) or written (output hints).

Drizzle

Scale: Factor multiplying the image dimensions. For example, setting this value to 2 would perform a "drizzle x2" integration, producing an image with four times the area of the input images. Drop shrink: Pixel reduction factor. This small pixel reduction often produces sharper results due to the smaller PSF during the convolution process, usually at the expense of SNR. Common effective values range from 0.7 to 1.0. Kernel function: Pixel data in the source images is regarded as "drops of data" that will "rain" down on a larger pixel grid (our new image scale), hence the drizzle analogy. This parameter defines the shape of such drops. Square- and circle-shaped kernels work well with most images. The Gaussian and the different VariableShape options are preferred when we have a very large set of nicely dithered source images. Grid size: When a Gaussian or VariableShape kernel function has been selected, here we can specify the size of the grid of values that will be computed to integrate the selected kernel function. Enable CFA drizzle: When enabled, DrizzleIntegration will understand that the drizzle data was calculated on images captured with a color camera – technically, monochrome images with a CFA/Bayer filter – and it will generate a drizzled color RGB image as a result.


How Drizzle maps input pixels onto the output image

CFA pattern: Select the CFA pattern to use, which describes the position of the R, G and B "pixels" in the Bayer matrix. When Auto is selected (recommended), DrizzleIntegration will try to obtain this information from the input drizzle .xdrz files. Indeed, CFA pattern information is also stored in .xdrz files by processes that create drizzle files, like StarAlignment or CometAlignment. Enable pixel rejection: When enabled, DrizzleIntegration will read pixel rejection data from the .xdrz files (created by the ImageIntegration tool during the previous mandatory step before using DrizzleIntegration) and perform such pixel rejection for the integrated image being created. This parameter is enabled by default, being the preferred choice. Enable image weighting: When enabled (default value), if the drizzled data files include image weighting data, use it. This is the recommended value. Weight data is added to the drizzle files by the ImageIntegration process. Enable surface splines: When enabled (default and usually desired value), if the .xdrz files include surface splines for image registration, use them. If disabled, projective transformations are used instead of surface splines. This data is added to the drizzle files by the StarAlignment process. Enable local distortion: When enabled (default value), if the drizzled data files include local distortion models for image registration, use them. This data is added to the drizzle files by the StarAlignment process. Enable local normalization: By default (and with this option disabled), DrizzleIntegration applies a scale + zero offset global normalization to the output image. If we have associated local normalization data files, we can enable this option to apply the local normalization instead. Close previous images: Enable this option to close existing drizzle integration and weight images before running a new DrizzleIntegration process.
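To make the "drops raining onto a finer grid" picture more concrete, here is a deliberately simplified sketch (NumPy, square kernel, nearest-cell deposit, hypothetical offsets; real drizzle distributes each drop's flux over all output cells it overlaps and handles weights and rejection far more carefully):

    import numpy as np

    def naive_drizzle(frames, offsets, scale=2, drop_shrink=0.9):
        """Toy drizzle: deposit each input pixel ('drop') onto a scale-x finer
        grid at its dithered position, accumulating flux and weights."""
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        wgt = np.zeros_like(acc)
        drop_area = drop_shrink ** 2                 # a shrunken drop weighs less
        for frame, (dy, dx) in zip(frames, offsets):
            ys, xs = np.mgrid[0:h, 0:w]
            # sub-pixel dither offsets map to different output cells
            oy = np.clip(((ys + dy) * scale).astype(int), 0, h * scale - 1)
            ox = np.clip(((xs + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (oy, ox), frame * drop_area)
            np.add.at(wgt, (oy, ox), drop_area)
        return np.divide(acc, wgt, out=np.zeros_like(acc), where=wgt > 0)

    frames = [np.random.default_rng(i).random((32, 32)) for i in range(4)]
    offsets = [(0.0, 0.0), (0.25, 0.5), (0.5, 0.25), (0.75, 0.75)]
    result = naive_drizzle(frames, offsets)
    print(result.shape)    # (64, 64)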

Region of Interest

Define a limited area within the input images (not in the output, generated image) on which to execute the process, as opposed to acting on the entire image data. This is mostly useful for faster testing sessions.


DynamicAlignment Process > ImageRegistration

DynamicAlignment is a semi-automatic, easy to use image registration system for deep-sky images. After we've opened the DynamicAlignment dialog, we click on an image (source image), then on a second image (target image). The goal is to have the target image registered to match the source image. After selecting both images, we define a set of alignment points, where stars are usually the reference. DynamicAlignment's interface includes an adaptive star-searching algorithm that works perfectly well on linear raw images (use ScreenTransferFunction to see the stars if necessary). DynamicAlignment also includes a useful prediction system: starting from the second star, it will predict the target position of every other source star. Every time a new alignment point is defined, it is added to a (dynamic) list managed by DynamicAlignment.

When to use DynamicAlignment When processing astroimages, every registration/alignment procedure should happen when the images have just been fully calibrated and cosmetically corrected, and the use of DynamicAlignment is no exception. Although DynamicAlignment is too "manual" and limited in comparison to StarAlignment, there may be some difficult cases where StarAlignment is just not able to compute a valid registration that the more manual procedure offered by DynamicAlignment could resolve. More often than not, however, DynamicAlignment is used to align two master light images – rather than a whole set of sub-exposures to create a master light, the latter being a situation where StarAlignment is definitely the tool of choice. That said, DynamicAlignment may be required over StarAlignment in a number of different workflows, such as having to create a registration distortion model for a particular lens with the ManualImageSolver script (see StarAlignment's Distortion model parameter).

Parameters

Source and Target views / Selected Sample: x of z

Ref#: Index of the currently selected alignment point. Navigation toolbar: Select the first alignment point. Select the previous alignment point. Select the next alignment point. Select the last alignment point. Invert the current alignment point. Normal (non inverted) points look for bright stars over a dark background. An inverted point behaves just the opposite way: dark star over a bright background. By inverting points we can select arbitrary image structures, not only stars, as alignment features. For example, we can select dark alignment features on lunar and planetary images using this option. Load the current alignment point on the source and target views. Delete the selected alignment point(s). Track alignment points on the source and target views.

X/Y: These are the horizontal and vertical coordinates of the star center for the currently selected alignment point in the source image.


Rs/Rt: Source and target star radius. Half-size of the currently selected alignment point in the source (Rs) and target (Rt) images. This value is computed automatically. eX/eY: Error in the last predicted target position, (x,y) coordinates. Difference in pixels between the horizontal and vertical coordinates of the computed and predicted point in the target image. Once two alignment points are defined, DynamicAlignment will predict target star positions for newly selected stars. To force a new prediction of coordinates for any existing reference, just select and move it slightly with the mouse. In this way the alignment point will be recalculated, along with its predicted coordinates. dX/dY: Differential star position (target minus source), (x,y) coordinates. This is the difference in pixels between the horizontal and vertical coordinates of the target and source positions for the current point. It measures the displacement of the target image with respect to the source image for the current alignment point.

Reference generation

Source search radius: Initial search radius in the source image. This parameter determines the size in pixels of the initial search box used to detect valid alignment points on the source image. Increase it to favor detection of larger structures. Decrease it to facilitate finding relatively small features, for example in dense star fields.
Target search radius: Initial search radius in the target image. Everything said about the source search radius applies to the target search radius as well.
Removed wavelet layers: Number of wavelet layers used for noise reduction. Noise reduction is used to prevent DynamicAlignment from mistakenly assuming that noise structures, hot pixels or cosmic rays are stars. This parameter can also be used to control the sizes of the smallest detected stars (increase it to exclude more stars).
Background threshold: Threshold value for rejection of background pixels. This is a limiting value, expressed in sigma units, below which image pixels are considered part of the sky background and ignored when detecting stars. The threshold is determined by calculating the standard deviation of the search area (see the search radius parameters above).
Colors: Alignment points are overlaid on the source and target images (they are not actually drawn into the pixel data). These are the colors used to display them. We can change them by clicking on the color square, then defining a new RGB value.


Aligned Images

Source/Target: The identifiers of the source and target images. Registered Image

Identifier: Enter here the identifier for the registered image. If the default is selected, the identifier will be the same as the target image plus the suffix _registered. Sample format: The format (bit depth) of the registered target image.

DynamicBackgroundExtraction Process > BackgroundModelization

DynamicBackgroundExtraction or DBE is a dynamic PixInsight process. Dynamic processes allow user interaction over the image as a way to define process parameters. In the case of DBE, the user defines a number of samples over free sky background areas, and the DBE process builds a background model by three-dimensional interpolation of the sampled data.

When to use DynamicBackgroundExtraction DBE's main purpose is to correct unwanted signal in the image that is causing different brightness levels across the image that do not correspond to real brightness levels. The two most popular uses of DBE are correcting for gradients and vignetting, but it can be used for correcting other defects such as amp-glow. This correction can be performed on a per-subframe basis, after subframe calibration but before aligning and integrating all subframes. However, for most purposes, it often suffices to apply DBE on master light images that have already been integrated and cropped to remove the unwanted edges that are often found in newly integrated master lights. When we have a tricolor set, say R, G and B, we need to decide whether to combine all three images into a single RGB color image and then apply DBE (if needed), or apply DBE individually to each channel image, then combine them, already corrected, into a single RGB. Applying DBE individually to each channel image gives us more control and sometimes is highly desirable. If our image does not need much correction, or we're not after very detailed results, combining all three files into a single RGB color image and applying DBE to that image can still produce very effective results. In any event, whenever needed, DBE should be applied to linear images.

Parameters Target View / Selected Sample: x of z

This section provides data and parameters related to individual samples. Sample #: Current sample index. Anchor X: Horizontal coordinate of the current sample's center. Anchor Y: Vertical coordinate of the current sample's center. Radius: The radius of the current sample, in pixels. R/K, G, B: RGB values for the current sample. Fixed: Enable to force a constant value for the current DBE sample. This value will be used as the median of the sample pixel values for each image channel. Wr, Wg, Wb: Statistical sample weight for the red (or gray, if a monochrome image), green and blue channels respectively. A weight of one indicates that the current sample is fully representative of the image background. A value of zero means that the sample will be ignored. Intermediate weights correspond to how much a sample represents the background. Symmetries

DBE offers a feature that allows for symmetrical behavior of samples, either horizontally, vertically or diagonally. When a sample has one of these options enabled, it will automatically generate duplicates around the center of the image. This function is particularly useful when, for example, we have an image with vignetting but we cannot access the background pixels because they are “covered” by pixels defining a celestial object, such as a nebula.


The user can then define a sample where there is no background available and use the symmetry properties to add identical background values around the center of the image, assumed to be the symmetry center of the vignetting. While less than ideal (a better solution would be nice), it's the only tool available in DBE to “tell” a sample at one location to behave as a sample at another location.
H, V, D: Enabling any of these parameters will enable horizontal, vertical or diagonal symmetry.
Axial: Enables axial symmetry. The value indicates the number of axes. Activating this button will show the active symmetries for all samples, not just the selected sample.
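To illustrate the geometry involved (a conceptual sketch only, not DBE's actual code), the mirrored duplicates of a sample can be computed by reflecting its coordinates around the assumed symmetry center; the function and coordinates below are hypothetical examples.

# Conceptual sketch: mirroring a sample position around an assumed
# symmetry center, as the H/V/D symmetries conceptually do.
def mirrored_positions(x, y, cx, cy):
    """Return horizontally, vertically and diagonally mirrored duplicates
    of a sample at (x, y) around the symmetry center (cx, cy)."""
    h = (2 * cx - x, y)            # horizontal mirror
    v = (x, 2 * cy - y)            # vertical mirror
    d = (2 * cx - x, 2 * cy - y)   # diagonal (point) mirror
    return h, v, d

# Example: a sample at (100, 150) in a 4000 x 3000 image whose symmetry
# center is assumed to be the image center at (2000, 1500).
print(mirrored_positions(100, 150, 2000, 1500))
# -> ((3900, 150), (100, 2850), (3900, 2850))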

Model Parameters (1)

This section provides the global parameters that control how the background model is built.
Tolerance: This parameter is expressed in sigma units, with respect to the mean background value of the image. Higher tolerance values will accept brighter pixels as defining the background, including more pixels in the background model, but at the risk of also including pixels that do not define true background. Decreasing the tolerance will cause a more restrictive pixel rejection; however, tolerance values that are too low will lead to poorly sampled background models.


Shadows relaxation: Increasing this parameter allows for the inclusion of more dark pixels in the generated background model, while more restrictive criteria can be applied to reject bright pixels (as specified by the tolerance parameter). This helps create a better background model without including somewhat bright and large objects.
Smoothing factor: This parameter controls the adaptability of the 2-D surface modeling algorithm used to build the background model. With a smoothness value of 0, a pure interpolating surface spline will be used, which will reproduce the values of all DBE samples exactly at their locations. Moderate smoothness values are usually desirable; excessive smoothness can lead to erroneous models being created.
Unweighted: By selecting this option, all statistical sample weights will be assumed to have a value of one, regardless of their actual values. This can be useful in difficult or unusual cases, where DBE's automatic pixel rejection algorithms may fail due to excessively strong gradients. In such cases, we can manually define a (usually small) set of samples at strategic locations and, by enabling this option, the background modeling routines will assume we know what we're doing.
Model Parameters (2)

Symmetry center X / Symmetry center Y: As explained in the Symmetries section above, sample symmetries can be useful to deal with illumination irregularities that possess symmetric distributions. These two parameters define the horizontal and vertical coordinates of the center of symmetry, respectively, in image coordinates.
Minimum sample fraction: This parameter indicates the minimum fraction of non-rejected pixels in a valid sample. No sample with less than the specified fraction of background pixels will be generated. Set it to zero to take into account all samples regardless of how much each sample is calculated to represent the background.
Continuity order: This value is the derivative order of continuity for the 2-D surface spline used to generate the background model. Higher values can produce more accurate models (better adaptability to local variations). However, higher values may also lead to instabilities and rippling effects. The recommended and default value is 2.
Sample Generation

This section is used to configure and trigger the automatic generation of background samples.
Default sample radius: The radius for newly created background samples, in pixels.


Resize All: Click here to resize all existing background samples to the value specified in the “Default sample radius” box. Samples per row: Number of samples in a row when generating samples automatically. Generate: Click here to automatically generate samples across the image based on the parameters in this section. Minimum sample weight: No samples will be generated with statistical weights below this value. This parameter only applies to automatically generated samples. Sample color, Selected sample color, Bad sample color: These are the colors used to draw the sample boxes on the target image. We can change each color by clicking on it and redefining its RGB component values. Model Image

Identifier: If we wish to give the background model image a unique name, we enter it here. Otherwise PixInsight will create a new image name, usually adding _background to the name of the target image.
Downsample: This parameter specifies a downsampling ratio for generation of the background model image. For example, a downsampling value of 2 means that the model will be created at one half the size of the target image. Background models are by definition extremely smooth functions. For this reason, a background model can usually be generated with downsampling ratios between 2 and 8 without problems, depending on the variations of the sampled background. A downsampled model greatly reduces the calculation times required for background model interpolation.
Sample format: This parameter defines the format (bit depth) of the background model.
Target Image Correction

Correction: Here we indicate what correction will be performed. Subtraction is the preferred choice for most cases, in particular gradient correction and any other additive effects. Division is used for multiplicative phenomena, such as vignetting or differential atmospheric absorption.
Normalize: When enabled, the initial median value of the image will be applied after background model correction. If the background model is subtracted, the median will be added; if the background model is divided, the median will be multiplied. Normalization tends to recover the initial color balance of the background in the corrected image. If this option is disabled (the default value), median normalization will not be applied and the corrected image will tend to have a more neutral background.
Discard background model: Dispose of the background model after correcting the image. If this option is left disabled, the generated background model will be available as a newly created image.
Replace target image: When enabled, the correction will be applied to the target image, literally replacing the target image with the corrected image. If disabled, the corrected image will be available as a newly created image, leaving the original target image untouched.
Identifier: If we wish to give the corrected image a unique name, we enter it here. Otherwise PixInsight will create a new name, usually adding _DBE to the name of the target image.
Sample format: Define the format (bit depth) of the corrected image.
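As a rough illustration of the idea behind DBE (not PixInsight's actual implementation), the sketch below interpolates a smooth surface through a set of sample medians and then corrects the image by subtraction or division, optionally restoring the original median as the Normalize option does. The function name and the use of SciPy's thin-plate-spline interpolator are assumptions made only for this example; a real background model would normally be computed on a downsampled grid for speed.

# Hedged sketch of a DBE-like correction: fit a smooth 2-D surface through
# the sample values, then subtract (additive gradients) or divide
# (vignetting), optionally adding/multiplying the original median back.
import numpy as np
from scipy.interpolate import RBFInterpolator

def dbe_like_correction(img, sample_xy, sample_values,
                        division=False, normalize=True, smoothing=1e-6):
    h, w = img.shape
    model_fn = RBFInterpolator(np.asarray(sample_xy, float),
                               np.asarray(sample_values, float),
                               smoothing=smoothing,
                               kernel='thin_plate_spline')
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xx.ravel(), yy.ravel()])   # (x, y) pairs
    background = model_fn(grid).reshape(h, w)

    median_before = np.median(img)
    corrected = img / background if division else img - background
    if normalize:
        if division:
            corrected = corrected * median_before   # multiply the median back
        else:
            corrected = corrected + median_before   # add the median back
    return corrected, background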

DynamicCrop Process > Geometry

The DynamicCrop process facilitates a simultaneous crop, rotation and scaling mechanism in a highly interactive way. Because the Dynamic Crop dialog does not use an abstract interface (it needs an image window to work), we cannot just drag the “new instance” icon over to an image. What we need to do is to open the dialog, define the area to crop with the mouse over the target image, rotate the crop area if desired, and execute (green check mark). The “new instance” icon is there in case we want to instantiate DynamicCrop, that is, create a process icon with this dialog, so we can apply the same crop to another image with the same geometry (size). When the area has been defined, the parameters in the DynamicCrop process window will be updated accordingly, and we can edit them manually if we wish.

When to use DynamicCrop As noted when describing the Crop tool, cropping an image is something that can arise at different parts of almost any image processing workflow, from the moment we remove unwanted or residual edges from an image right after image integration/stacking to the very end of our processing, when we perform a final “framing” of our fully processed image for its final presentation. Being much easier and more intuitive to use, DynamicCrop is often the preferred choice over Crop for most cropping needs.

Parameters Size/Position

Width/Height: Define the width and height of the cropped area. Anchor X/Anchor Y: Define the anchor point in (x,y) coordinates of the cropped area. Anchor point diagram: Click on one of the nine boxes to set the anchor point. Press SHIFT while double-clicking on one of the nine boxes to set the anchor point and move the cropping rectangle to the corresponding area on the image. Rotation

Angle: Define the angle of the cropped area, in degrees. An angle of zero degrees means no rotation. We can also use the circle icon to “draw” the angle, rather than entering it in degrees in the Angle text box.
Clockwise: When enabled, the rotation is performed clockwise at the angle specified above. When disabled, the rotation is assumed to be counter-clockwise.
Center X/Y: The center coordinates for the rotation.
Use fast rotations: Fast rotations are rotations of 180 and 90 degrees (clockwise and counter-clockwise). When this option is enabled and a fast rotation is requested, the rotation is calculated by swapping and copying pixels between memory locations, without floating point operations, which results in no data degradation and is also extremely fast.
Rotation point diagram: Click anywhere to change the rotation angle of the cropping rectangle.


Scale

Scale X/Y: Once cropped, the area can be rescaled if we like. The X/Y values determine the scale for the resulting crop image. A value of one means the original scale is preserved. A value of two for example, will double the scale in that axis. Width/Height: Same as above except that in this case we determine the scale based on the final width and height size of the cropped area. Values are given in several different units: pixels, centimeters and inches. Preserve aspect ratio: When enabled, modifying the scaling value of one dimension (whether width or height) will result in a proportional adjustment to the other dimension. Interpolation

Algorithm: Usually it's best to leave it as “Auto” unless we have a special reason to force PixInsight to use one of the available algorithms. In the Auto mode, Bicubic spline is used for upsampling scaling ratios, and also for slight downsampling ratios, when the Mitchell-Netravali filters cannot be properly sampled (filter kernels smaller than 5x5 elements). Mitchell-Netravali cubic filters are used for the rest of downsampling operations. If we don't select the Auto mode, it may be useful to know that when downscaling an image, the nearest neighbor and bilinear algorithms tend to be the poorest performers, followed by bicubic spline and bicubic B-spline, with the Mitchell-Netravali and Catmull-Rom algorithms often providing very good results. When upscaling an image, bicubic spline usually gives the best results. The Mitchell-Netravali interpolation filter can be used to achieve higher smoothness in the upsampled result, which can be desirable in some applications.


Clamping: Only available for the Auto, Bicubic Spline and Lanczos algorithms. These algorithms sometimes produce ringing artifacts; to compensate for this side effect, the clamping mechanism allows us to avoid the negative interpolated values that cause the ringing. The lower the clamping threshold, the more aggressively the ringing is attacked, at the expense of detail preservation and at the risk of aliasing.
Smoothness: This parameter is only available if the Mitchell-Netravali, Catmull-Rom Spline or Cubic B-Spline algorithm has been selected, and it allows us to increase or decrease the smoothness level.
Fill Color

When the cropped image extends beyond the limits of the source image (that is, we indicated dimensions that cover an area that does not exist in the source image), new pixels are added at the corresponding sides with the color specified in this section (RGB and Alpha values).

DynamicPSF Process > Image

DynamicPSF is a powerful dynamic tool mainly designed for interactive PSF (Point Spread Function) fitting. We start by clicking on a star in the target image, and DynamicPSF immediately calculates a number of useful parameters describing the PSF fitting and other variables, such as the object's centroid coordinates, mean local background, FWHM, etc. We can then continue clicking on other stars to add them to the list (PSF collection).

When to use DynamicPSF DynamicPSF is mostly an analysis tool that only works well with unprocessed linear images, either individual raw images or images resulting from an image integration. We can use DynamicPSF after deconvolution but mainly to measure how effective the deconvolution was. Besides its obvious need when we are after certain calculations where PSF modeling can be useful (such as optics performance, focus accuracy, seeing conditions, etc.), we may also want to use DynamicPSF to produce a synthetic PSF image that, for example, could be used as an external PSF image in the Deconvolution process.


Table columns

Function type: Name of the PSF model function. Ch: Channel index of this fitted PSF. 0=red (or grayscale for monochrome images), 1= green, 2=blue. B: Local background value. A: Amplitude. This measures the peak value of the fitted function. cx: Horizontal (x) coordinate of the centroid, in pixels. cy: Vertical (y) coordinate of the centroid, in pixels. sx: Size of the fitted function on the X axis, in pixels. sy: Size of the fitted function on the Y axis, also in pixels. FWHMx: Full width at half maximum on the X axis, in either pixels or arcseconds, depending on the settings in the Image Scale section. FWHMy: Full width at half maximum on the Y axis, in either pixels or arcseconds, depending on the settings in the Image Scale section.


r: Aspect ratio. A value equal to or lower than one, representing the quotient sy/sx.
theta: Rotation angle of the fitted function's X axis with respect to the image's X axis, in degrees, measured around the centroid. The rotation angle can be shown either signed [–90°,+90°] or unsigned [0°,180°]. When signed angles are used, the counter-clockwise direction is positive and the clockwise direction is negative.
beta: The beta exponent of Moffat PSF model functions.
MAD: Mean Absolute Difference between the fitted PSF model function and the actual pixel values. This is an estimate of fitting quality: the smaller this value, the better the fit.
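For reference, the elliptical Gaussian and Moffat forms usually quoted for PSF fitting, together with the standard FWHM relations, can be sketched as follows; whether DynamicPSF's sx/sy map exactly onto the sigma/alpha parameters below is an assumption made here purely for illustration.

# Hedged sketch of the usual elliptical Gaussian / Moffat PSF models and
# the standard FWHM relations (not DynamicPSF's actual code).
import numpy as np

def gaussian_psf(x, y, B, A, cx, cy, sx, sy):
    """Elliptical Gaussian: B + A*exp(-0.5*((dx/sx)^2 + (dy/sy)^2))."""
    dx, dy = x - cx, y - cy
    return B + A * np.exp(-0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2))

def moffat_psf(x, y, B, A, cx, cy, ax, ay, beta):
    """Elliptical Moffat: B + A / (1 + (dx/ax)^2 + (dy/ay)^2)^beta."""
    dx, dy = x - cx, y - cy
    return B + A / (1.0 + (dx / ax) ** 2 + (dy / ay) ** 2) ** beta

def fwhm_gaussian(sigma):
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma        # ~2.3548 * sigma

def fwhm_moffat(alpha, beta):
    return 2.0 * alpha * np.sqrt(2.0 ** (1.0 / beta) - 1.0)

print(fwhm_gaussian(2.0), fwhm_moffat(2.0, 4.0))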

PSF Model Functions

We can select what PSF model functions will be applied to new PSF fits.
Auto: Let DynamicPSF find the best model function for each selected star. This is the default option.
Gaussian: Enable to include Gaussian PSF fits.
Moffat: Enable to include Moffat PSF model functions with fitted parameters.
Moffat10, Moffat8, Moffat6, Moffat4, Moffat25, Moffat15, Lorentzian: Enable any of these functions to fit Moffat PSF model functions with fixed β parameters of 10, 8, 6, 4, 2.5, 1.5 and 1, respectively.
Circular PSF: When enabled, DynamicPSF fits circular PSF functions. When disabled, it fits elliptical functions. Elliptical functions have two distinct axes and a rotation angle. Elliptical functions are usually preferable, as they provide more information about the true shapes and orientations of the fitted PSFs. Sometimes, however, circular functions may be preferable, such as in cases of very noisy images or strongly undersampled images that rarely provide enough data to fit elliptical functions reliably.
Signed angles: When enabled, rotation angles are displayed in the [–90°,+90°] range. If disabled, they are displayed in the [0°,180°] range. Signed angles are useful to prevent ambiguities introduced by small rotations around zero degrees.
Star Detection

DynamicPSF tries to find a representative star when we click on the active image. Here we define how DynamicPSF searches for and detects stars.

Search radius: Size, in pixels, of the radius defining the area where DynamicPSF will look for a star when we click on the image. A large radius helps detect larger stars, whereas a smaller radius may be useful when selecting a star within a crowded star field.
Background threshold: Define, in sigma units, the value at or below which pixels are considered background; anything above it is treated as signal from the object (star) we're trying to detect. Smaller values make DynamicPSF less sensitive to star detection, while larger values help isolate very small and faint structures. Acceptable values are within the [0.05, 5] range.
Automatic aperture: When enabled, DynamicPSF tries to find the smallest sampling area necessary to avoid inaccurate background evaluation when detecting the star. It is recommended to leave this option enabled (the default), as it usually delivers better accuracy and performance.
Image Scale

Scale mode: Defines how to compute the image scale.
• Standard FITS keywords: In this mode, DynamicPSF tries to use the standard FOCALLEN, XPIXSZ and YPIXSZ FITS header keywords to compute the image scale in arcseconds per pixel (see the sketch after this list). If these keywords are not found or contain invalid values, DynamicPSF does not calculate the image scale and FWHM is then expressed in pixels.
• Literal value: In this mode we can enter the image scale directly, in arcseconds per pixel.
• Custom FITS keyword: In this mode we can specify the name of a custom FITS keyword that we know contains the image scale in arcseconds per pixel.
• Pixels: Ignore the image scale and show FWHM in pixels.

Image scale: When the Literal value mode is selected, we can enter the image scale here, in arcseconds per pixel. Custom keyword: When the Custom FITS keyword mode is selected, enter here the name of the keyword containing the image scale in arcseconds per pixel. If the specified keyword is not found, image scale is ignored and FWHM is shown in pixels.
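Assuming the common conventions for these keywords (FOCALLEN in millimeters, XPIXSZ/YPIXSZ in micrometers, with binning already accounted for), the image scale computation reduces to a one-liner; the function below is only an illustrative sketch.

# Sketch of the usual image-scale computation behind the "Standard FITS
# keywords" mode, under the conventions stated above.
def image_scale_arcsec_per_px(focal_length_mm, pixel_size_um):
    return 206.265 * pixel_size_um / focal_length_mm

# Example: 1000 mm focal length with 5.4 um pixels -> ~1.11 arcsec/pixel.
print(image_scale_arcsec_per_px(1000.0, 5.4))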


ExponentialTransformation Process > IntensityTransformations

The goal of ExponentialTransformation is to increase the contrast in the shadows without increasing noise, while preserving the information in the highlights. Bringing out the faintest information in the image is not hard to achieve with histogram or curves adjustments; however, by doing so we have to face two serious problems. On one hand, when we enhance features in the shadows, we are enhancing data that is represented by values just a bit over the background noise, so noise will usually be increased too. The second problem is that there's a high chance that the brightest objects become saturated. By using a mask, ExponentialTransformation can better enhance the desired information without increasing noise.

When to use ExponentialTransformation ExponentialTransformation works well with nonlinear images, and it's in fact often used on images that are nearly processed, whenever our goal is to bring out some faint data without increasing noise.

Parameters Function: ExponentialTransformation offers two different functions to address the problem stated above:
• SMI: This technique was first described in the book Photoshop for Astrophotographers, by Jerry Lodriguss. The author explains how to apply this operation through the Screen blending method in Adobe Photoshop®. SMI takes its name from the words that describe the workflow in Photoshop: Screen, Mask, Invert.
• PIP: Like SMI, PIP takes its name from the operations involved: Power of Inverted Pixels. It usually generates excellent results without being as aggressive as SMI.


The PIP and SMI functions share many characteristics. Both increase the contrast in the shadows, but modify the corresponding pixel values in different ways. In both cases, faint information can be improved to reach easily perceivable levels. However, the SMI and PIP functions are quite different in their aggressiveness. PIP is a good choice when the background becomes too bright, or if there is almost no contrast in our image. Sometimes the SMI method may lead to unbalanced colors or color casts.
Order: Adjustment of the strength with which the selected function (SMI or PIP) is applied.
Smoothing: Amount of blurring applied to the duplicate image.
Luminance Mask: Activate a luminance mask protecting highlight areas of the image.
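As a rough, heavily hedged sketch of the ideas behind these two functions (not the actual ExponentialTransformation code), the Screen blend used by the SMI workflow and a literal reading of PIP's "power of inverted pixels" can be written as follows; pixel values are assumed normalized to [0,1] and the function names are hypothetical.

import numpy as np

def screen(a, b):
    # Photoshop-style Screen blend, the basis of the SMI workflow:
    # screen(a, b) = 1 - (1 - a) * (1 - b)
    return 1.0 - (1.0 - a) * (1.0 - b)

def pip_like(x, order=1.5):
    # "Power of Inverted Pixels" read literally: invert, raise to a power,
    # invert back. Leaves 0 and 1 fixed while lifting the shadows.
    # A luminance mask would normally protect the highlights, as noted above.
    return 1.0 - (1.0 - x) ** order

x = np.array([0.05, 0.1, 0.5, 0.9])
print(screen(x, x))        # screening an image with itself lifts the shadows
print(pip_like(x, 1.5))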

ExtractAlphaChannels Process > ChannelManagement

The ExtractAlphaChannels process is used to obtain a copy of one or more of the alpha channels in an image, as independent grayscale images. This process can also be used to delete one or more alpha channels from an image.

When to use ExtractAlphaChannels As explained when describing the CreateAlphaChannels process, alpha channels are additional channels of an image that are generally used to define its transparency. ExtractAlphaChannels is therefore useful when we want to obtain, as an individual grayscale image, the alpha channel that defines a particular transparency for a given image. Because ExtractAlphaChannels also allows us to remove alpha channels from an image, we can use it when we don't want an image to have an associated alpha channel. We can also use ExtractAlphaChannels to extract the alpha channel, modify it, and later add it back with the CreateAlphaChannels process, effectively replacing the alpha channel.


Parameters Channels

Here we select whether we want to extract all of the alpha channels in an image, only the active alpha channel, or an arbitrary selection of alpha channels. This last option requires us to enter the alpha channel IDs, separated by commas.
Mode

Extract alpha channels: The default operation, which will extract the selected alpha channel(s) but leave the target image unmodified.
Delete alpha channels: Rather than extracting the selected alpha channels, this option deletes them from the image.

FITSHeader Process > Image

The FITSHeader process will display the header information contained in the target FITS image. It only works with images using the FITS format or images that preserve FITS keywords, like PixInsight's XISF format.

When to use FITSHeader FITSHeader is the tool we use anytime we need to consult or use values typically stored as FITS metadata. Since FITSHeader also allows us to add new keywords, the tool can be very useful for adding keywords to a file that went missing at some point during processing (which should not happen as long as we use either FITS or XISF files within PixInsight), or that were simply not present because they were not supported by the software that created the files, or in similar situations.


Parameters Name / Value / Comment: To add new entries, we enter the information in the Name, Value and Comment parameters, and click Add. The Value and Comment fields are not mandatory, per the FITS standards. Replace / Remove: Besides viewing the existing values or adding new keywords, we can also replace and remove entries, by selecting an entry and clicking on the corresponding button on the lower-right area of the dialog box. HIERARCH Convention: When enabled, the parameters will conform to the HIERARCH convention, which allows keywords longer than 8 characters and containing the full range of printable ASCII values. When disabled, the FITS Standard convention is used, which limits keywords to no more than 8 characters that can only include letters, digits, underscore or minus signs.
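For readers who also handle FITS metadata outside PixInsight, the same kind of keyword inspection and editing can be reproduced with the third-party astropy library; the file name and keywords below are hypothetical examples, not something this guide depends on.

# Reading, adding and replacing FITS keywords with astropy.
from astropy.io import fits

with fits.open("m31_master_light.fits", mode="update") as hdul:
    hdr = hdul[0].header
    print(repr(hdr))                                 # inspect the full header
    hdr["OBSERVER"] = ("RBA", "Who took the data")   # add or replace a keyword
    # HIERARCH convention: keywords longer than 8 characters.
    hdr["HIERARCH CALIBRATION STAGE"] = "flux"
    if "COMMENT" in hdr:                             # remove an entry if present
        del hdr["COMMENT"]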

FastRotation Process > Geometry

Fast rotations are rotations of 180 and 90 degrees (clockwise and counter-clockwise) and horizontal/vertical mirror swaps. They differ from rotations at other angles because the rotation can be calculated by swapping and copying pixels between memory locations without floating point operations or interpolation, which results in no data degradation and at the same time it is extremely fast.

When to use FastRotation Anytime we're after a non-destructive 90/180 degrees rotation or horizontal/vertical flip (mirror).

Parameters The options in this dialog are sufficiently self-explanatory: Rotate 180 degrees, 90 clockwise, 90 counter-clockwise, horizontal and vertical mirrors. 98

FluxCalibration Process > Flux

FluxCalibration is a tool used to produce an estimation of energy calibration, converting an image's ADUs into energy flux. This estimation is calculated by correcting all the effects induced during capture, such as quantum efficiency, filter transmissibility, telescope aperture, etc., associating image pixel values to a standard physical system, a spectral energy flux in this case.

When to use FluxCalibration FluxCalibration only works on grayscale images. The best time to use FluxCalibration is after our target (grayscale) image has been calibrated, that is, it has been bias- and dark-subtracted and flat-field corrected. An image that has been flux calibrated becomes blocked from being flux calibrated again; that is, we can only apply FluxCalibration once to any given image. After an image has been flux-calibrated, it is rescaled to the [0,1] range, storing the rescaling factors in the FITS header via the FLXMIN and FLXRANGE keywords.
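Assuming the rescaling is the usual linear one, so that physical flux = pixel value × FLXRANGE + FLXMIN (an assumption to be confirmed against the official documentation), the original values could be recovered as in this sketch.

# Hedged sketch: undoing a [0,1] rescaling described by FLXMIN/FLXRANGE.
import numpy as np

def recover_flux(rescaled, flxmin, flxrange):
    return rescaled * flxrange + flxmin

rescaled = np.array([0.0, 0.5, 1.0])
print(recover_flux(rescaled, flxmin=1.2e-16, flxrange=3.4e-14))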

Parameters Each parameter can be obtained in three different ways:
• Literal value: the values are manually entered.
• Standard FITS keyword: the values are obtained from the FITS metadata, using standard FITS keywords.
• Custom FITS keyword: if our FITS files store any of this information under non-standard FITS keywords, we can enter such keywords here.


Wavelength (nm): Effective filter wavelength in nm. Transmissivity: Filter transmissibility in the range [0,1]. Filter width (nm): Filter bandwidth, in nm. Aperture (mm): Telescope aperture diameter, in mm. Central obstruction (mm): Telescope central obstruction diameter, in mm. Exposure time (s): Exposure time in seconds. Atmospheric extinction: Atmospheric extinction in the [0,1] range. Sensor gain (e-/ADU): The sensor gain in e-/ADU. This value must be greater than zero. Quantum efficiency: Sensor quantum efficiency in the [0,1] range.

FourierTransform Process > Fourier

The FourierTransform process in PixInsight transforms an image into the frequency (Fourier) domain. Once applied to an image it will provide either the phase and magnitude components or the real and imaginary components. While the math behind it may be somewhat overwhelming for the not mathematically inclined, its practical uses are generally easy to apply. Being in the frequency domain, Fourier transforms are ideal for dealing with regular patterns.

When to use FourierTransform FourierTransform can be used in conjunction with InverseFourierTransform, which performs the inverse operation back to the spatial domain. One typical use of FourierTransform in the processing of astronomical images is the detection, analysis and elimination of residual patterns, periodic noise and the like. It is, obviously, also useful for detecting dominant frequencies in our images.

A typical periodic pattern fix would involve applying FourierTransform to an image, identifying the pattern artifacts in the magnitude image (not the phase image), removing them or toning them down (which can be accomplished in a number of ways, from using CloneStamp to PixelMath to even more creative approaches), and finally reconstructing the image using InverseFourierTransform.
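The same workflow can be sketched conceptually with NumPy FFTs (an illustration only, not the FourierTransform implementation); the peak coordinates are assumed to have been found beforehand by inspecting the magnitude image.

# Conceptual sketch: attenuate the frequency-domain peaks that encode a
# periodic pattern, then transform back to the spatial domain.
import numpy as np

def suppress_periodic_pattern(img, peak_coords, radius=3):
    F = np.fft.fftshift(np.fft.fft2(img))          # centered DFT
    yy, xx = np.indices(img.shape)
    for (py, px) in peak_coords:
        mask = (yy - py) ** 2 + (xx - px) ** 2 <= radius ** 2
        F[mask] = 0.0                              # tone down / remove the peak
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))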

Parameters Centered: If enabled, the origin of the discrete Fourier transform (DFT) is centered in the transform matrix. Otherwise, it is centered at the top left corner. Power Spectrum: When enabled, compute and generate magnitude and phase components. If disabled, compute instead the real and imaginary components.

GradientHDRComposition Process > GradientDomain

As you might suspect, this process is used to combine images with different exposure times and integrate them in a High Dynamic Range fashion, aiming at preserving as much information as possible from every pixel, avoiding under and overexposed data. It indeed serves the same purpose as HDRComposition, except it uses a completely different method, working in the gradient domain (not directly related to the light gradients we often have in our images).

When to use GradientHDRComposition GradientHDRComposition can work with linear and nonlinear data, although it tends to deliver better results with linear images. If it is important to maintain data linearity in the composited image (our input images must first be linear, of course), disabling Keep in log() scale will produce a linear image.

Parameters Target Frames

Add Files: Click here to add the files to be combined. The order is not important. Select All: Mark all files in the list as selected. Invert selection: Mark as selected all not-selected images and vice-versa. Toggle Selected: Enable or disable the currently selected file from the list. Remove Selected: Completely remove the selected image(s) from the list. Clear: Completely remove all images from the list. Full Paths: When Full paths is enabled, display the complete path to each listed file. Parameters

Log10 Bias: When the input images have a constant pedestal, we can adjust the bias here, expressed as a base-10 logarithm. The default value of -7 disables bias correction. A value of -3, for example, means a bias of 0.001, and so on. Be careful when assigning large bias values, as they often produce images with lost faint detail and artifacts that start to take over the image.
Keep in log() scale: When enabled, the composited image is kept in a logarithmic scale. This is the default, as it tends to produce better results. However, if we'd like to generate a linear composited image, we can disable it, at the expense of less reliable results.
Generate masks: If enabled, in addition to the composited image, GradientHDRComposition generates two images defining the different regions, as determined by the tool, used for the HDR composition. One image defines the regions for the x gradient (named HdrCompositionDxMask), and the other the regions for the y gradient (named HdrCompositionDyMask).
NegativeBias: When enabled, a negative bias is applied. Negative biases tend to produce very contrasty images.


GradientHDRCompression Process > GradientDomain

GradientHDRCompression executes an HDR compression in the gradient domain over a single image. This is different from GradientHDRComposition, a process that can technically be run on a single image but requires two or more images of different exposure times to work properly.

When to use GradientHDRCompression GradientHDRCompression effectively tones down very bright signal areas, so that we can better detect fainter details within those areas. The tool can be applied to both linear and nonlinear images, although it often makes more sense to use it on nonlinear images with large contrast differences. The resulting image will always be nonlinear, even if the source image is linear; in that case, unaltered regions will remain linear, but that's it. We can use GradientHDRCompression on both grayscale and color images. When working with color images, GradientHDRCompression will only make adjustments to the luminance/lightness component. It's best to use GradientHDRCompression on images that have been carefully corrected for light gradients (via DBE, for example), as any gradient still present in the image is likely to be enhanced and made more visible. If we use this tool aggressively, we may obtain a resulting image that has more visible faint structures but also appears to have lost high-contrast details. In these cases, the resulting image may need to be later combined with the source image, so the high-contrast details are brought back to the image. Gradients, as understood by GradientHDRCompression and all other GradientXXX processes in PixInsight, are defined by the differences between neighboring pixels, and these differences are where the tool performs its adjustments.


Parameters Max. log10(gradient): Defines the maximum gradient in the image. The smaller the value (closer to -7), the more suppression of bright structures. This parameter cannot be equal to or smaller than the following Min. log10 parameter.
Min. log10(gradient): Defines the minimum gradient in the image. Higher values tend to reveal more faint structures but may start clipping data (see the Rescale parameter below).
Exponent (gradient): Defines how the gradient is transformed. Values below one (the default) tend to produce results similar to reducing the value of Max. log10.
Rescale to [0,1]: If enabled (the default), rescale the results to the [0,1] range. If disabled, rescale to the image's original range.
Preserve Color: When enabled, apply the original RGB ratios to the final image. This helps preserve the original color of the image prior to the compression, although on some occasions the results may appear over-saturated. If disabled, colors may appear pale because of the changes in luminance/lightness after the compression. When this happens, an increase in color saturation after GradientHDRCompression should bring colors back more naturally overall, with the exception of color information in dark areas.
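As a deliberately simplified, one-dimensional illustration of the gradient-domain idea (limit the pixel-to-pixel differences, then reintegrate), consider the sketch below; the real process works in two dimensions and reconstructs the image globally, so this is not its actual algorithm.

# Simplified 1-D illustration: clamp the per-pixel differences, then
# reintegrate by cumulative sum and rescale to [0,1].
import numpy as np

def compress_row(row, max_grad=0.01):
    g = np.diff(row)
    g = np.clip(g, -max_grad, max_grad)            # suppress strong gradients
    out = np.concatenate([[row[0]], row[0] + np.cumsum(g)])
    out -= out.min()
    return out / max(out.max(), 1e-12)

row = np.array([0.01, 0.02, 0.03, 0.9, 0.95, 0.92, 0.05, 0.04])
print(np.round(compress_row(row), 3))              # the bright step is tamed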

GradientMergeMosaic Process > GradientDomain

GradientMergeMosaic is a sophisticated yet easy to use tool to create seamless mosaics of any size.

When to use GradientMergeMosaic Merging nonlinear data with GradientMergeMosaic often produces better results; however, the tool can be used with linear data as well. The right time to stitch a mosaic depends on our workflow, often on whether we're using filters or a color camera, and on the number of panes. For example, a 2-pane mosaic captured with LRGB filters could be built on a per-filter basis (we'd build four mosaics, one per filter, that should align), then blend all four sub-mosaics with a (L)RGB combination. However, a very large mosaic, say 5x4 (20 panes), also with four filters per pane, should be easier to build if we combine all 4 filters for each pane, then stitch the resulting 20 panes once. Other strategies can also be devised. Note that GradientMergeMosaic does not register any of the panes. This has to be done prior to running the tool. GradientMergeMosaic expects files that have already been aligned. Since panes in a mosaic tend to overlap just a bit (10% to 25% is normal, as opposed to nearly 100% when working on a single frame), these aligned images will have large empty areas. For GradientMergeMosaic to work, pixels in these unused areas should have a value of zero (black). For a more detailed workflow and insights about using GradientMergeMosaic and building deep-sky mosaics, please refer to the Mosaics chapter in the Image Processing section of the book.

Parameters Target Frames

Add Files: Click here to select the already registered files that are part of the mosaic. Select All: Mark all images in the list as selected. Invert selection: Mark as selected all not-selected images and vice-versa. Toggle Selected: Enable or disable the currently selected image from the list. Remove Selected: Completely remove the selected image(s) from the list. Clear: Completely remove all images from the list. Full Paths: When Full paths is enabled, the File column will not only display the file name but also the complete path to the file in our storage device.


Parameters

Type of combination: Indicate how GradientMergeMosaic will deal with overlapping areas, either by averaging all overlapping pixels (Average) or by using only the pixels from the images further down the list, regardless of the information in the images above it (Overlay) as long as they're overlapping, of course. This means that the order of the files in the Target Frames area does not matter for the Average method but does make a difference when using the Overlay method. Shrink radius: Define, in pixels, the number of pixels that are removed from the border of each imaged area (here, imaged area means the area in each image that actually contains data, not including the zeroed black pixels that define empty areas). The purpose of this parameter is to correct for sometimes imperfect edges around the imaged area, often a bit darker than the average background. When such edges are present, GradientMergeMosaic cannot integrate these panes seamlessly. If the merged image shows visible seams with the default value of one, we can increase this value until the seams are gone. Feather radius: Define the size of the border area that we want “smoothed” while combining two panes. The default 10 works well for most cases, but if we notice stars right on the edge of a frame causing artifacts after the merge, we can try increasing this value. Black point: Any pixel equal or smaller than this value is considered defining “empty” areas. The default value of zero is appropriate, especially since that's the value StarAlignment uses to define empty areas. Generate mask: Enabling this option will also produce a new image representing the mask used by GradientMergeMosaic. When more than two panes are being merged, the mask will use different gray intensities to define the different merged images.
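A hedged sketch of the shrink/feather idea for two already-registered panes of equal size might look like the following; GradientMergeMosaic's actual gradient-domain merging is considerably more sophisticated, and the function name and defaults are illustrative only.

# Hedged sketch: nonzero pixels define the imaged area, the border is
# eroded (shrink radius), the mask is blurred (feather radius) and the
# panes are blended by the normalized masks.
import numpy as np
from scipy.ndimage import binary_erosion, gaussian_filter

def merge_two_panes(a, b, black_point=0.0, shrink=1, feather=10):
    masks = []
    for pane in (a, b):
        valid = pane > black_point                 # zero marks empty areas
        if shrink > 0:                             # drop imperfect edge pixels
            valid = binary_erosion(valid, iterations=shrink)
        masks.append(gaussian_filter(valid.astype(float), feather))
    wa, wb = masks
    total = wa + wb
    return np.where(total > 0,
                    (wa * a + wb * b) / np.maximum(total, 1e-12), 0.0)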

GREYCstoration Process > NoiseReduction

GREYCstoration is PixInsight's implementation of an open-source image regularization algorithm created by David Tschumperlé. This implementation is focused on the denoising capabilities of the algorithm.


When to use GREYCstoration GREYCstoration is a noise reduction tool, and therefore it comes to mind whenever we're trying to tone down noise in our images or to soften an image or a mask. GREYCstoration works considerably better with nonlinear images than linear ones (it is not designed to be used with linear data). Its unintuitive use, and the fact that on average it doesn't produce better results than other noise reduction tools in PixInsight, make GREYCstoration a rather unpopular noise reduction tool. Indeed, for noise reduction purposes, it is recommended to use other tools and methods in PixInsight, such as TGVDenoise, MultiscaleMedianTransform or even ACDNR.

Parameters Iterations: The number of times the GREYCstoration algorithm should be applied. Smoothing is mostly controlled by Amplitude and Iterations. One iteration with a large Amplitude value is often equivalent to many iterations with a smaller amplitude. However, with a proper choice of the other parameters, more iterations are sometimes better, as this can prevent too much smoothing across high contrast areas (edges).
Amplitude: Regularization strength per iteration. This parameter represents the average amount of smoothing that is performed.
Sharpness: Contour preservation. This parameter tells GREYCstoration about structure preservation. Once the local structures of the image have been detected, GREYCstoration has to decide how much it will smooth image pixels. Basically, it decreases the smoothing when the local structure is contrasted. This parameter dictates how strongly that decrease of smoothing is applied. When it's high, even low-contrasted structures will be preserved. We should not set it too high, or most noise may still remain in the image. On the contrary, when the value of this parameter is low, structures have to be very contrasted to avoid local smoothing. Oftentimes, the default needs to be lowered significantly.

Anisotropy: Smoothing anisotropy. This parameter sets the anisotropy level of the considered smoothing. The anisotropy notion relates to the way the performed smoothing orientation will extend locally in space. A value larger than about 0.2 may produce artifacts. In general, we want to preserve isotropy, especially on deep-sky images.
Noise scale: In short, this parameter is a threshold for the size of the noise to remove. Too small a value will not smooth the noise. Mathematically, this parameter is defined as the standard deviation of a blurring Gaussian kernel applied to the original image before estimating its geometry. In other words, it defines the scale under which details won't be considered as structures but rather as noise.
Regularity: Geometry regularity. This parameter is mathematically defined as the standard deviation of a blurring Gaussian kernel applied to the field of structure tensors, which are matrices that describe the local geometry of image structures. Like the noise scale, it can be seen as a scale, not on the image itself, but on its structures. Basically, this parameter tells GREYCstoration how smooth the geometry of the image structures should be.
Spatial step size: GREYCstoration performs a spatial averaging of pixel values. This parameter defines the spatial integration step.
Angular step size: Angular integration step.
Precision: Computation precision. The default value of 2 should work for most cases.
Interpolation: In general, Interpolation does an excellent job with its default Nearest neighbor value, but we may try Bilinear or 2nd order Runge Kutta, which sometimes may provide slightly more accurate results.
Fast approximation: As a general rule, this parameter should be left at its default state (enabled).
Coupled channels: This option should be enabled for color images, unless we want each color channel to be processed as an independent grayscale image, which may be useful sometimes.


HDRComposition Process > ImageIntegration

HDRComposition is a very easy to use process to combine images with different exposure times and integrate them in a single high dynamic range composite image, aiming at preserving as much information as possible from every pixel, avoiding under and overexposed data.

When to use HDRComposition HDRComposition can work with linear and nonlinear data. HDRComposition can generate a 64-bit image, which may be advisable in some situations, particularly when working with linear data. In any case, a minimum of two images, previously aligned, is required.

Parameters Input Images

Add Files: Click here to add the files to be combined. They should be ordered by exposure time (longest exposure files first) unless Automatic exposure evaluation is enabled (see below). Move Up: Move the selected file(s) up one position in the list. Move Down: Move the selected file(s) down one position in the list. Select All: Mark all files in the list as selected. Invert selection: Mark as selected all not-selected images and vice-versa. Toggle Selected: Enable or disable the currently selected file from the list. Remove Selected: Completely remove the selected image(s) from the list. Clear: Completely remove all images from the list. Full Paths: When Full paths is enabled, the File column will display the complete path to the file, not just the filename.


Format Hints

As usual whenever format hints are available, we can enter them here to change the way files are loaded (input hints). Since HDRComposition does not write files to disk, output hints are not available. HDR Composition

Binarizing threshold: This parameter helps define what areas from a long exposure image will be replaced by the same areas from a shorter exposure image. The default 0.8 does a very good job in most cases. Too high of a value will result in losing detail in bright areas, whereas a value too low will result in losing detail in darker areas. Mask smoothness: Defines how smooth the mask defining bright areas in an image should be. Mask growth: Sometimes it may be useful to expand the mask a bit, especially when bright areas in the image are surrounded by halos or other artifacts. Replace large scales: As we increase the value of this parameter, larger scale structures are ignored when analyzing the shorter exposure image(s). For most HDR compositions, the default value of 0 (don't ignore large scales) should work best. Automatic exposure evaluation: Enable to have HDRComposition try to determine exposure times automatically (not from metadata or exif information but by analyzing the image). If disabled, HDRComposition will assume that the files in the Input Images list are sorted by exposure time, with the longest exposure files first, and those with the shortest exposure last, which is okay as long as we know that's the case. Reject black pixels: When enabled (the default and recommended value), ignore black pixels and don't use them during the HDR process. If disabled, HDRComposition could use black pixels to replace bright or saturated areas, which often is not desirable.
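Conceptually, the binarizing threshold, mask growth and mask smoothness interact roughly as in the sketch below; this is a hedged illustration, not HDRComposition's actual algorithm, and the exposure_ratio scaling merely stands in for its exposure evaluation.

# Hedged, conceptual sketch: binarize the long exposure's bright areas,
# grow and smooth the mask, then blend in the exposure-scaled short frame.
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def hdr_combine(long_exp, short_exp, exposure_ratio,
                threshold=0.8, growth=2, smoothness=5):
    mask = long_exp > threshold                         # binarizing threshold
    if growth > 0:
        mask = binary_dilation(mask, iterations=growth) # mask growth
    mask = gaussian_filter(mask.astype(float), smoothness)  # mask smoothness
    rescaled_short = short_exp * exposure_ratio         # bring to same scale
    return (1.0 - mask) * long_exp + mask * rescaled_short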


Generate a 64-bit HDR image: Generate a 64-bit floating point image – that's on the order of 10¹⁵ total discrete values! If disabled, a 32-bit floating point image will be created, which is sufficient for most cases.
Output composition masks: When enabled, also create an additional image that contains the HDR composition mask (HDR_mask01, 02, etc).
Close previous images: If this option is disabled, every time we use HDRComposition it will create a new image, first named HDR, then HDR1, HDR2, etc. When enabled, HDRComposition always generates the new composited image as HDR, replacing any previous HDR image if it existed.
Fitting Region

Use the four available parameters to define a specific area within the images to process, instead of processing the entire images. Alternatively, we can click on the From Preview button if we've previously defined the area with a Preview.

HDRMultiscaleTransform Process > MultiscaleProcessing

HDRMultiscaleTransform (HDRMT) is a multiscale processing tool designed to control the dynamic range of images. While wavelet transformations are able to separate image structures as a function of their scales, HDRMT further separates and isolates individual scales and their contained structures. In this way, the local contrast of structures defined at a given scale is not perturbed by larger structures defined in subsequent layers. HDRMT should be tried on previews that include the full range of brightness values present in the whole image; otherwise the results obtained on the preview won't be identical to what will be achieved after applying the same instance to the entire image.

When to use HDRMultiscaleTransform HDRMT is particularly useful at rescuing details from areas that are otherwise too bright for those details to surface. It is therefore useful mainly on nonlinear images that contain such bright areas, whose details we would like to make more visible.

Parameters Number of layers: This parameter is the number of wavelet layers to which HDRMT will be applied, using a dyadic sequence. A value of 6 for example (the default) means that HDRMT will be applied to scales of 1, 2, 4, 8, 16 and 32 pixels (the first 6 layers of a dyadic sequence). A value of 3 will only apply HDRMT to scales 1, 2 and 4.
Number of iterations: The HDRMT algorithm can work iteratively to converge to a solution where the image is completely flat above the scales where it has been applied. Here we define the number of times (iterations) we want to execute the defined HDRMT transformation. Just one iteration works very well in most cases, but if we want a stronger HDR compression, we can increase this value.
Inverted: Enable inverted HDRMT iterations. This option can be useful to preserve shadow details in some cases.
Overdrive: This value increases the amount of dynamic range compression, exponentially.
Median transform: Use a median transform instead of using wavelet transforms to calculate the different scales. Median transformations are explained in more detail under MultiscaleMedianTransform and wavelets are also covered under MultiscaleLinearTransform and ATrousWaveletTransform.
Scaling function: Select a wavelet scaling function. Peaked scaling functions such as linear interpolation work better to isolate small-scale structures. Smooth scaling functions such as Cubic B-spline work better to isolate larger scales. For a more detailed explanation of each of these scaling functions, review the documentation about the Scaling Function parameter for the ATrousWaveletTransform process.
To lightness: Apply HDRMT only to the lightness of color images, leaving color information unmodified.

Preserve hue: After applying HDRMT, recover the original hues. Use as needed. Lightness mask: Use a lightness mask to protect dark background regions. Deringing

For detailed information about ringing artifacts and deringing, please review the documentation in MultiscaleLinearTransform about the topic. If we use wavelets (option Median transform disabled) the resulting image is subject to the Gibbs effect we find in high-pass filters that generates the dreadful ringing artifacts. For that purpose, HDRMT offers a deringing protection mechanism, similar to the one we can find in other tools such as MultiscaleLinearTransform and many others. Small-scale: Deringing strength for small-scale ringing artifacts. As always, the goal is to find the lowest value that works effectively in protecting the image from ringing. Large-scale: Deringing strength for bright ringing artifacts. Output deringing maps: Generate an image window for each deringing map image. New image windows will be created for the dark and bright deringing maps, if the corresponding amount parameters are nonzero. This is useful as a testing aid when our output image displays ringing artifacts and we're adjusting the deringing parameters unsuccessfully. Midtones Balance

None: Don't apply a midtones transfer function. Automatic: Apply an automatic midtones transfer function to recover the original median values. Recommended. Manual: Manually specify a midtones balance value, using the Midtones balance option.

HistogramTransformation Process > IntensityTransformations

Histograms are dynamic objects in PixInsight. They are calculated and generated automatically whenever necessary. Any view (opened image window) can be selected on the HistogramTransformation window to inspect and manipulate its histogram functions. When a view is selected this way, its histograms are immediately calculated if they don't already exist. When the selected view is modified in any way that changes its pixel contents, its histograms are automatically recalculated and the HistogramTransformation window is updated accordingly. The process by itself is not complex: just a few normalized numbers to clip pixels at the shadows or highlights and to expand the dynamic range, and a simple transfer curve to adjust the midtones balance. Yet, the interface for histogram manipulation in PixInsight is probably one of the most elaborate in the entire application and, without a doubt, one of the best visual histogram adjustment tools that exist today.
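The clipping and midtones adjustments mentioned above follow a simple mathematical form. The sketch below assumes the widely documented midtones transfer function MTF(x; m) = ((m − 1)·x) / ((2·m − 1)·x − m) together with a linear shadows/highlights rescaling; treat it as an illustration of the parameters described in this section, not as the program's source code.

# Sketch of a histogram transform: linear shadows/highlights rescaling
# followed by the midtones transfer function.
import numpy as np

def mtf(x, m):
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def histogram_transform(x, shadows=0.0, highlights=1.0, midtones=0.5):
    x = np.clip((x - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(x, midtones)

# A midtones balance below 0.5 brightens, above 0.5 darkens; the value of
# the midtones balance itself maps to 0.5.
print(histogram_transform(np.array([0.0, 0.25, 0.5, 1.0]), midtones=0.25))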

When to use HistogramTransformation Histogram adjustments are used extensively across many image processing workflow steps. In fact, it's not a bad idea to evaluate the histogram after applying any other process, to see what it looks like and perhaps make small new adjustments. It can also be used on masks, to change their appearance, and for many other purposes where a linear or midtones histogram adjustment makes sense. For this reason, it is important to learn how to read it and use it.


Parameters Output histogram: Output histogram functions are calculated according to the entire set of parameters, as currently defined in the HistogramTransformation window. In rare occasions we might find very small differences between the predicted values and the resulting histogram values after applying or previewing the histogram transform. Input histograms: When a view is selected in the view selection list, its histogram functions are drawn. Input histograms are actual functions calculated for the currently selected view. Histogram cursor: As we move the mouse over the input or output histogram areas, the cursor informs us where we are. Input/Output histograms zoom controls: Histogram functions can sometimes be difficult to read. For that reason, both the input and output histogram areas are zoomable from 1:1 to a factor of 999:1. Not only that, but both the horizontal and the vertical zoom ratios can be set independently. When a value greater than one is specified for horizontal or vertical zoom, scroll bars appear on the bottom or right edges, respectively, of the input and output histograms. We can use these scroll bars and their associated scroll thumbs and arrows to navigate on the magnified histogram. View selection list: Here we can select an existing view to work with its histograms on the Histograms window. Any image view or preview can be selected. Immediately after a view is selected, the program checks for availability of the associated histogram resources, and if not found, they are automatically generated and the histogram functions are calculated. Then the involved graphical elements are updated. Once we have selected a view, automatic recalculation and re-display of its histogram functions takes place each time any pixel value is changed. Graphics style: We can view the histograms in four different ways: lines, areas, bars and dots. Plot resolution: Histograms are calculated with 16-bit accuracy in PixInsight. If we select a plot resolution smaller than 16 bits, histogram values are once more rescaled to the specified range before drawing the histogram functions. Although the default value is usually fine, we may want to use a specific plot resolution with a particular image to adapt the graphical representation of histogram values to actual image contents. Note that this parameter defines the plot resolution, not the bit-depth of the image being analyzed.


Information panel: This panel shows information associated with the current cursor position over the input or output histograms. For example: x = 0.820669 (51-51); 1225, %0.0127 | y = 0.362694

The above line means that the horizontal cursor position over the histogram graphic corresponds to a normalized pixel value of 0.820669 in the [0,1] range. The selected view contains 1225 pixels at this level, which is equivalent to 0.0127 percent of the total number of pixels. Finally, the vertical cursor position corresponds to a relative intensity of 0.362694 in the normalized [0,1] range.
Channel selectors: We can define independent histogram manipulations for the red, green, blue and combined RGB or grayscale channels, as well as the alpha channel, if present. Transforms defined for the red, green and blue channels are applied to the same channels of RGB color image views. For RGB color image views, the combined channel transform applies equally to all three channels. For grayscale image views, only the combined channel transform (whose corresponding channel selector is labeled RGB/K) applies.
Shadows/highlights clipping: Shadows and highlights clipping parameters (as well as midtones balance) can be edited either by typing numerical values or by moving their corresponding triangular sliders (Histogram controls area). Histogram manipulation parameters are defined in the normalized real dynamic range, from zero to one. The edit controls implement PixInsight's standard parsing procedures to ensure that we always specify strictly valid numerical values within the valid range. An additional constraint guarantees that the value of shadows clipping is always less than or equal to highlights clipping. Note that each edit control is tied to its corresponding triangular slider control. When we modify a value in the clipping edit control box, the triangular slider is updated accordingly, and vice versa.
Midtones balance: Same as above but for the midtones balance. Increasing the midtones balance value (moving the slider to the right) will darken our active image, while decreasing it (moving the slider to the left) will brighten it.
Histogram controls: At the bottom of the input histograms there is a horizontal rectangular area whose background is drawn as a gradient, ranging from pure black to pure red, green, blue or white, depending on the currently selected channel. This gradient represents the full available dynamic range, and is oriented as pixel values vary for histogram functions, transfer curves and readouts.


Three small triangular shapes, known as sliders, are shown in this area and are associated with their respective histogram manipulation parameters, namely, from left to right: shadows clipping, midtones balance and highlights clipping. We can click on any slider and drag it horizontally to change its associated parameter.
MTF curve: This is the midtones transfer curve.
Histogram display: There are several controls here. From left to right:
• Reject saturated pixels: When enabled, saturated pixels in the active image view will not be represented in the histograms. Note that the saturated pixels do remain saturated. This may be helpful to examine histograms with a high number of saturated pixels.
• Show raw RGB histograms: When this option is enabled, changes to parameters for individual RGB channels are not taken into account to draw the histogram functions in the combined RGB/K channel. In this case, the functions plotted are just the histograms of the selected view in its current state. When disabled, individual RGB parameter sets are used to calculate modified histogram functions in the combined RGB/K channel.
• Lock output histogram channel: When this option is enabled, the displayed channel(s) for the output histograms don't change when we change the current input channel by clicking the channel selection buttons. This is useful to see how changes to an individual channel affect the output histogram by comparison with the rest of the channels.
• Toggle MTF curve: Show/hide the midtones transfer curve.
• Toggle background grid: Show/hide the background grid.

Readout mode buttons: Readouts work by clicking on any view (image or preview) of an image window in any of the four available readout modes, from left to right: normal (the value is displayed in the histogram but nothing is changed), black point, midtones and white point. While the mouse button is held down, readout values are calculated for the cursor coordinates and sent to the histogram window. In other words, after clicking one of these icons, we hover over an image and click on an area, and the black point, midtones or white point, depending on the readout button selected, will be set to the value of the pixel we just clicked.
Automatic adjustments: The Auto Clip buttons automatically clip histograms by predefined amounts. If the currently selected view is an RGB color image, automatic clipping occurs for each individual RGB channel. For grayscale images, automatic clipping works for the RGB/K combined channel only.

AutoClip Shadows and AutoClip Highlights will perform the auto-clip for the shadows and highlights values respectively. Auto Zero Shadows and Auto Zero Highlights will reset the shadows and highlights values.
Auto Clip Setup: The predefined clipping amounts can be established by clicking on the Auto Clip Setup button. When we do so, the Auto Clipping Setup dialog is shown. In this dialog we can define whether to clip at each histogram end, and the amount of pixels that will be automatically clipped. Amounts are expressed as percentages of the total number of pixels in the target image. Default values are: shadows and highlights clipping enabled, 0% pixels clipped. These settings will only clip unused segments of dynamic range at both ends of the histograms.
Clipped pixel counts: Clipping counts refer to the number of clipped pixels at both histogram ends whenever we adjust the Shadows or the Highlights. Clipped pixels at the shadows will be set to zero (black), while clipped pixels at the highlights will be set to one (white). Information from clipped pixels is always lost. When working with an RGB color image, the pixel count adds up the clipped pixels of each channel, meaning the percentage values can technically reach up to 300% if all pixels were clipped.
Dynamic range expansion: These controls let us enter values for the low and high dynamic range expansion parameters. These two parameters allow expansion of the unused dynamic range at both ends of the histogram. This can probably be better understood as a two-step procedure: first, the dynamic range is expanded to occupy the entire interval defined by the lower and upper bound parameters, but actual pixel values are not changed. The second step is to rescale both the dynamic range and the pixel values back to the normalized [0,1] range. The result of this process is that all pixel data are constrained to a smaller effective interval, and free unused portions appear at the histogram ends. This is used as a preliminary step for some image processing techniques, with the purpose of protecting actual pixel data from losses due to excessive contrast gains.
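For readers who want to see the arithmetic behind these parameters, the following minimal numpy sketch (not PixInsight's actual code) applies shadows/highlights clipping, the usual midtones transfer function and a dynamic range expansion to a normalized [0,1] image; all parameter names are illustrative.

```python
import numpy as np

def mtf(x, m):
    # Midtones transfer function: m = 0.5 is the identity; m < 0.5 brightens, m > 0.5 darkens.
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

def histogram_transform(img, shadows=0.0, highlights=1.0, midtones=0.5,
                        expansion_low=0.0, expansion_high=1.0):
    # 1. Clip at the shadows/highlights points and rescale the surviving range to [0,1].
    x = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
    # 2. Apply the midtones balance through the midtones transfer function.
    x = mtf(x, midtones)
    # 3. Dynamic range expansion: map [expansion_low, expansion_high] (low <= 0, high >= 1)
    #    back to [0,1], leaving unused room at the histogram ends.
    return (x - expansion_low) / (expansion_high - expansion_low)
```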


ICCProfileTransformation Process > ColorManagement

This process allows converting an image from its current ICC profile to the color space defined by a different ICC profile.

When to use ICCProfileTransformation
There are a number of reasons we might want to convert an image to a different color space. The most common is preparing an image to be published on the Internet by converting it to sRGB, if that wasn't already the image's ICC profile. Other common reasons are preparing a file for printing, or combining it with an image that uses a different ICC profile.

Parameters
Source Profile: The image whose ICC profile we wish to convert. When the image view is selected, its current ICC profile is displayed below it.
Target Profile: The new profile to which we want to convert the source image.
• Convert to the specified profile: We select this option if we want to convert our source image to one of the several available ICC profiles.
• Convert to the default profile: Depending on whether our source image is RGB or grayscale, selecting this option will convert the image to the corresponding default profile.

Rendering Intent: When the gamut of the source color space exceeds that of the destination, saturated colors are liable to become clipped (inaccurately represented). The color management module can deal with this problem in several ways:
• Perceptual (photographic images): The gamut transformation is done according to the selected ICC profile. This method may result in strong color variations.
• Saturation (graphics): Similar to Perceptual, except that where the results of a perceptual intent tend to be more pleasing, those of the Saturation intent tend to be more eye-catching.
• Relative Colorimetric (match white points): The goal in relative colorimetry is to be truthful to the specified color. Depending on the media involved, we may end up with flat images that maintain only a fraction of the available grays, or a collapsed image with loss of detail in dark shadows. Media differences are the only thing we really would like to adjust for, but obviously some gamut mapping has to happen as well. Usually this is done in a way where hue and lightness are maintained at the cost of reduced saturation.
• Absolute Colorimetric (proofing): Absolute colorimetry and relative colorimetry actually use the same table but differ in the adjustment for the white point of the media. Perceptually, the colors may appear incorrect, but instrument measurements of the resulting output would match the source. Colors outside of the proof print system's gamut are mapped to the gamut boundary. Absolute colorimetry is useful to get an exact specified color, or to quantify the accuracy of mapping methods.

Black point compensation: The black point compensation feature works in conjunction with the relative colorimetric intent. With the Perceptual and Saturation intents it should make no difference, although it does affect some profiles. When enabled, the black point compensation mechanism will scale the full image along the gray axis in order to map the darkest tone the origin media can render to the darkest tone the destination media can render.
Floating point transform: ICC transformations often benefit from being done in floating point. If our source image is not in a floating point format, it will be transformed to 32-bit floating point.
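As an aside, the same kind of profile-to-profile conversion can be reproduced outside PixInsight; the sketch below uses Pillow's ImageCms module (an assumption of this example, not something ICCProfileTransformation relies on), with hypothetical file names, to convert an image to sRGB with a relative colorimetric intent and black point compensation.

```python
from PIL import Image, ImageCms

img = Image.open("target.tif")                                # hypothetical input file
src_profile = ImageCms.getOpenProfile("source_profile.icc")   # hypothetical source ICC profile
dst_profile = ImageCms.createProfile("sRGB")                  # built-in sRGB destination profile

converted = ImageCms.profileToProfile(
    img, src_profile, dst_profile,
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,    # "match white points"
    flags=ImageCms.FLAGS["BLACKPOINTCOMPENSATION"],           # scale along the gray axis
)
converted.save("target_srgb.tif")
```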

ImageCalibration Process > ImageCalibration

ImageCalibration is PixInsight's approach to calibrating images. It is an extremely flexible, versatile and powerful tool that may appear intimidating at first, especially to those not well versed in the calibration processes. However, since calibration is such a crucial step for producing good quality astroimages, any time spent learning this tool is well worth it.


When to use ImageCalibration
ImageCalibration is normally used prior to any other image manipulation, with just a few exceptions. ImageCalibration is not only used to calibrate our data files (lights) but also to calibrate our calibration frames: bias, darks and flats. It is important to understand most of the parameters and use them (or not) accordingly. In order for ImageCalibration to work we must have at least one image in the Target Frames list and have checked at least one of the “Master” calibration files (bias, dark or flat) or have selected an Overscan area.

Parameters Target Frames

Add Files: Add image files to the list of images to calibrate.
Select All: Select all input images.
Invert Selection: Invert the current selection of input images. That is, images that were selected will be deselected, and the rest of the images (that were deselected) will be selected.
Toggle Selected: Toggle the enabled/disabled state of the currently selected input images. Disabled input images will be ignored during the calibration process.
Remove Selected: Remove all currently selected input images from the list.
Clear: Clear the list of input images.
Full paths: Show the full path of each image in the list, rather than just the file name.

Format Hints

Format hints are available to change the way the files to be calibrated are loaded (input hints) and how the output calibrated files are written (output hints).

Output Files

When executed, ImageCalibration will generate the calibrated image(s) in the directory specified here. By default, newly created files will have the same filename as their source files and the postfix “_ca”.


Output directory: The folder where the newly created files will be saved. If this field is left blank, new files will be created in the same directory as the input files.
Prefix: Insert a string of text at the beginning (prefix) of each filename. The default is blank: no prefix.
Postfix: Add a string of text at the end of the filename, prior to the file extension. The default is “_ca”, as a reminder that these are calibrated frames.
Sample format: Select the bit depth of the calibrated images.
Output pedestal (DN): The output pedestal is a value between 0 and 65535 that is added to the calibrated images, usually with the purpose of avoiding possible negative values. These negative values rarely appear when calibrating regular astronomical images, so the rule of thumb is to not add any pedestal in most cases, but when calibrating bias or dark frames, negative values may appear, and in those cases setting a pedestal to compensate is recommended.
Evaluate noise: Under most normal calibration situations, noise evaluation is recommended. This information will be stored as a FITS keyword (NOISExxx) in the calibrated file so other processes can take advantage of this precomputed information. ImageIntegration in particular makes good use of this information.


Noise evaluation: When the previous option (Evaluate noise) is enabled, define here the noise evaluation algorithm.
• Multiresolution Support: MRS is the default algorithm and should be the best choice for most cases. It only looks at the first four wavelet layers, where noise is likely to reside.
• Iterative K-Sigma Clipping: Use K-Sigma only on images that have virtually no small-scale noise and that, therefore, cannot be properly evaluated via MRS.

Overwrite existing files: When enabled, if a file with the same name already exists in the output directory, overwrite it. If the same situation arises when this option is disabled, the new file will have an underscore character followed by a numeric index appended to its filename: _1 the first time a duplicate filename is found, _2 should it happen a second time, and so on.
On error: What should ImageCalibration do if it encounters an error while processing the target images? Continue (ignore the error and proceed with the next image), Ask User whether to continue or not, or directly Abort.

Pedestal

Pedestal mode: Unlike the Output pedestal defined earlier, which is added to our images after they are calibrated, this is an input pedestal that is subtracted prior to any calibration. It is recommended to leave it up to ImageCalibration to determine whether a pedestal is already defined inside the images to be calibrated. We achieve this by selecting the default option, Default FITS keyword (PEDESTAL).
• Default FITS keyword (PEDESTAL): In this mode, ImageCalibration tries to find the standard FITS keyword PEDESTAL to determine if a pedestal was previously defined. If the PEDESTAL keyword is not found or contains an invalid value, ImageCalibration will assume there's no pedestal.
• Literal value: In this mode we can enter the pedestal value directly (see Pedestal value below).
• Custom FITS keyword: In this mode we can specify the name of a custom FITS keyword that we know contains pedestal information. We enter the keyword in the Pedestal keyword parameter (see below).

Pedestal value (DN): When selecting Literal value, enter here the desired pedestal value. Valid values are 0 to 65535.


Pedestal keyword: When selecting Custom FITS keyword, enter here the custom FITS keyword.

Overscan

An overscan refers to a feature some cameras offer. When overscan is enabled, images captured by the camera will include an area (the overscan region) that represents the actual bias level of the sensor. Overscan correction is mainly useful when we notice a “bias drift” in our bias frames. If we check the mean or median values of several of our bias frames and such values don't differ significantly, then we don't need to do overscan correction, and a classic bias subtraction will deliver great results without the extra effort. Note that if we're planning on using overscan correction, all our images would have to be captured with the overscan option enabled, not only our lights but also our calibration files.
To do an overscan correction we first define our image area (Image region), the overscan areas (Source region), and the areas to be overscan-corrected (Target region). ImageCalibration allows us to define up to four different overscan areas, although normally we would want to use just one area. In order to define an overscan area, we must enable the corresponding checkbox before entering the coordinates.
Image region: Define the coordinates (a crop, really) of the pixels that define actual image data and nothing else (no overscan areas). The order is: left, top, width and height (in pixels).
Source region(s): These are the overscan areas.
Target region(s): These are the areas that will be overscan-corrected. Normally we would want to enter here the same coordinates we entered in Image region.
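To make the three regions more concrete, here is a minimal numpy sketch of one simple form of overscan correction (subtracting the mean level of the source region from the target region); the region values are hypothetical, and this is not necessarily ImageCalibration's exact algorithm.

```python
import numpy as np

# Hypothetical regions, each given as left, top, width, height (as in ImageCalibration).
image_region  = (0, 0, 4096, 4096)     # pixels containing actual image data
source_region = (4096, 0, 32, 4096)    # overscan columns reported by the camera
target_region = image_region           # area to be overscan-corrected

def crop(frame, region):
    left, top, width, height = region
    return frame[top:top + height, left:left + width]

def overscan_correct(frame):
    # Subtract the mean bias level measured in the overscan area from the target area.
    bias_level = crop(frame, source_region).mean()
    return crop(frame, target_region) - bias_level
```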

Master Bias

If our calibration routine includes a master bias frame, we enable this section and enter here the master bias file.
Calibrate: When using overscan correction, enable it for ImageCalibration to correct the master bias frame.

Master Dark

If our calibration routine includes a master dark frame, we enable this section and enter here the master dark file.


Calibrate: When enabled, ImageCalibration will calibrate the master dark frame, that is, it will bias-subtract the master dark frame, as long as we also defined a master bias frame. If an overscan area was defined, ImageCalibration will also apply an overscan correction.
Optimize: If enabled, ImageCalibration will perform a number of optimization calculations to improve the calibration process. If the exposure times of the master dark frame and the images being calibrated differ, ImageCalibration will rescale the master dark frame. It can also adjust for small temperature changes between the master dark frame and the light frames, and perform other adjustments to minimize dark-subtraction induced noise.
Optimization threshold: Any pixels below this value (measured in sigma units from the median) will not be used to compute any optimizations. Values between one and three are usually appropriate.
Optimization window: Since there's virtually no difference in results between computing noise estimates (during the dark optimization process) on the whole image or just on a smaller section of it, ImageCalibration's default is to only inspect the central 1024x1024 pixels of the image, speeding up computing time significantly. If we'd like to use the whole image to calculate noise estimates, we set this parameter to zero.
CFA pattern detection: Define how CFA patterns from color cameras are determined.
• Detect CFA: Try to find CFA pattern information in the file or try to determine it automatically.
• Force CFA: Assume all images use a CFA pattern. Use this option only if no CFA pattern was detected using the previous option, Detect CFA.
• Ignore CFA: Ignore CFA patterns entirely.

Master Flat

If our calibration routine includes a master flat frame, we enable this section and enter here the master flat file.
Calibrate: When enabled, ImageCalibration will subtract (if present) the master bias and master dark frames from the master flat frame. Overscan corrections will also be applied if an overscan region was defined. If our master flat frame was already accurately calibrated, we can leave this option disabled.
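Putting the master bias, dark and flat sections together, the following much simplified numpy sketch outlines the usual calibration arithmetic; it is only an illustration of the idea (the dark_scale and output_pedestal arguments stand in for the Optimize and Output pedestal features) and not ImageCalibration's actual implementation.

```python
import numpy as np

def calibrate_light(light, master_bias=None, master_dark=None, master_flat=None,
                    dark_scale=1.0, output_pedestal=0.0):
    # All frames are float arrays of the same shape, normalized to [0,1].
    cal = light.astype(np.float64)
    if master_bias is not None:
        cal = cal - master_bias                       # remove the bias level
    if master_dark is not None:
        # With Optimize enabled, ImageCalibration derives a scaling factor for the
        # (bias-subtracted) master dark; here it is just a user-supplied number.
        cal = cal - dark_scale * master_dark
    if master_flat is not None:
        # Divide by the flat normalized to its mean so the overall signal level is preserved.
        cal = cal / (master_flat / master_flat.mean())
    return cal + output_pedestal                      # optional output pedestal
```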


ImageIdentifier Process > Image

We use the ImageIdentifier option whenever we wish to give any given image or view a different identifier (name). Setting an identifier does not change the actual file name, if the image was previously saved to disk.

ImageIntegration Process > ImageIntegration

The ImageIntegration tool performs a combination of up to thousands of FITS files into a single integrated (stacked) image. During the integration process, ImageIntegration can also perform state-of-the-art rejection, normalization, noise evaluation and many other tasks designed to maximize our data under a large number of different situations.

When to use ImageIntegration
ImageIntegration is usually the last step of the calibration/registration/stacking sequence that produces our master light frames, unless we are also generating drizzle data, in which case DrizzleIntegration would be that last step (preceded by ImageIntegration). ImageIntegration is also one of the first processes we would probably use if we follow a typical workflow, as it is fundamental for creating our master calibration frames as well. “Image integration” is the process that elsewhere is usually described as stacking. In reality, ImageIntegration can be used for more than stacking our calibration and light frames, although any other use is generally limited to data or computational analysis.

Parameters Input Images

Add Files: Add image files to the list of input images.

Add L.Norm. Files: Associate input images with local normalization data files. These are files created with the LocalNormalization tool (.xnml).
Clear L.Norm. Files: Remove the association (not the actual files) between the local normalization data files and the input images.
Add Drizzle Files: If we originally generated drizzle data files (files with a .xdrz suffix, generated by StarAlignment), we click on this button to add the corresponding .xdrz files.
Clear Drizzle Files: Remove the association (not the actual files) between the drizzle data files and the target images.
Set Reference: Make the currently selected file on the list the reference image. The reference image is the first image in the list of images to be integrated. Its statistical properties are then used to calculate normalization parameters and relative combination weights for the rest of the images.
Select All: Select all input images.
Invert Selection: Invert the current selection of input images. That is, images that were selected will be deselected, and the rest of the images (that were deselected) will be selected.
Toggle Selected: Toggle the enabled/disabled state of the currently selected input images. Disabled input images will be ignored during the integration process.
Remove Selected: Remove all currently selected input images.
Clear: Clear the list of input images.
Full paths: Show the full path of each image in the list, rather than just the file name.

Format Hints

Format hints are available in ImageIntegration to change the way the input images are loaded (input hints). Since ImageIntegration does not write new files to disk, there are no output hints available.


Image Integration

Combination: Select a pixel combination operation. The Average combination provides the best signal-to-noise ratio in the integrated result. The Median combination provides more robust rejection of outliers, but at the cost of more noise. The Minimum and Maximum options, which only keep the pixels with the minimum or maximum values of each stack, are not recommended unless we have some specific reason and we know what we're doing.
Normalization: Image normalization for combination. When one of these options is selected, ImageIntegration will normalize/scale all input images before combining them.
• No normalization: When selected, the images won't be normalized. This is the adequate choice when integrating bias or dark frames, as this option preserves pedestals that must be preserved. Normally this option should not be selected for the integration of flat and light frames.
• Additive: Mean background values will be matched via additive operations.
• Multiplicative: Mean background values will be matched via division operations. This is the correct option to integrate master flat frames.
• Additive + scaling: Along with additive normalization, the images will be scaled to match dispersion. This is the default option, and it's the option of choice for integrating light frames.
• Multiplicative + scaling: Along with multiplicative normalization, the images will be scaled to match dispersion.

Weights: Each image can be assigned a weight. Here, a weight means that each image is assigned a multiplicative factor, based on some criteria, typically related to the “quality” of the image. This causes ImageIntegration to give more weight to “better” images when integrating them, improving the results.
• Don't care: No weighting applied. Suitable for integrating master bias, dark and flat frames.
• Exposure time: Exposure information will be retrieved from the standard EXPTIME and EXPOSURE FITS keywords (in that order).
• Noise evaluation: Via multiscale noise evaluation techniques, relative SNR values are calculated. This is the most accurate approach for automatic image weighting, and the default option.
• Average signal strength: Derives relative exposures directly from statistical properties of the image. This option will not work if some of the input images have additional illumination variations, such as sky gradients.
• Median: Calculate the weights of the input images from the median sample values.
• Average: Calculate the weights of the input images from the mean sample values.
• FITS keyword: Specify the name of a FITS keyword from which to retrieve image weights. The specified keyword must be present in all input images and must have a numeric value. Some processes, like SubframeSelector, can store pre-calculated weights in FITS keywords, which can then be retrieved via this option.

Weight keyword: When the FITS keyword option is selected as the weighting method, this is where we enter the custom FITS keyword that holds the image weight information.
Scale estimator: Selects an estimator of scale for weighting and scaling.




• Average absolute deviation from the median: This was the original default scale estimator in ImageIntegration until mid-2013. It only looks at pixels with values in the [0.00002,0.99998] range, effectively excluding hot and cold pixels and other extremely bright artifacts.
• Median absolute deviation from the median (MAD): MAD works best with images that have large background areas, and it's often considered a strong scaling method, albeit not as effective as others.
• Biweight midvariance: This method tends to produce better results than MAD due to its superior efficiency.
• Percentage bend midvariance: Similar efficiency to the biweight midvariance method, but particularly resistant to outliers.
• Sn/Qn estimators of Rousseeuw and Croux: All the methods described above measure the variability of pixel values from the median, which makes sense, as the median in deep-sky images is often the actual mean background of the image. However, these estimators assume variations that are symmetric about the central value, which is not always the case. The Sn and Qn scale estimators don't work around a central value, but by calculating differences between pairs of data points. Both methods (Sn and Qn) are as robust at dealing with outliers as MAD, but they're more efficient.
• Iterative k-sigma / biweight midvariance: This is the most effective method from a (Gaussian) efficiency perspective, and the default option. It applies a sigma-clipping routine based on the biweight midvariance.

Ignore noise keywords: If enabled, ImageIntegration will ignore NOISExxx FITS keywords and obtain noise information from cached data instead. Disable it (the default) to take advantage of pre-computed noise estimates from these keywords, if available.
Generate integrated image: Normally we want this option to be enabled (the default) so we obtain an integrated image. We can disable it to save a small amount of computing time when we're only evaluating pixel rejection, since we can do this just by looking at rejection maps and statistics, without an integrated image.
Generate a 64-bit result image: When enabled, ImageIntegration will generate a 64-bit floating point image, as opposed to 32-bit floating point, which is the default.


Generate drizzle data: When enabled, generate a drizzle file (.xdrz) for the integrated image. This data can later be used by the DrizzleIntegration tool to produce the final drizzled image. In order to generate drizzle data with ImageIntegration we must first add drizzle files associated with our input images, meaning we need to start generating drizzle data when aligning our images with StarAlignment.
Subtract pedestals: If enabled, look for the PEDESTAL keyword in the input files and apply it, if found. It's recommended to leave it enabled (the default).
Truncate out-of-range: When enabled, if the integrated image contains pixels outside the [0,1] range, these values are truncated. If disabled, ImageIntegration will rescale the entire image so that no pixels are out of range. Although out-of-range values should not normally appear if the images have been properly calibrated, rescaling (leaving this option disabled) is the best choice for a regular image integration. When integrating flat frames, however, we don't want to rescale the intensity values, so truncation (enabling this option) is the best choice.
Evaluate noise: Evaluate the standard deviation of Gaussian noise for the final integrated image. This option is useful, for example, to compare the results of different normalization and weighting methods.
Close previous images: Select this option to close existing integration and rejection map images before running a new integration process. This is useful when testing the same integration set repeatedly with different parameters.
Automatic buffer size: Let ImageIntegration determine the size of the memory stack and data buffers used by the tool. This is the default and recommended setting.
Buffer size (MiB): When the Automatic buffer size option is disabled, we can define here the size of the working buffers used to read pixel rows. The larger the buffer size, the better the performance of ImageIntegration. The default value works well for most systems.
Stack size (MiB): When the Automatic buffer size option is disabled, we can define here the size of the tool's stack. The larger the size, the better the performance of ImageIntegration. The default value works well for most systems.
Optimal values for the buffer and stack sizes depend on the number of input files and their sizes. The rule of thumb is for the buffer size to be large enough to store a single image row in 32-bit floating point format, so it can be read in a single read operation, and for the stack size to be at least width x height x (12 x number of images being integrated + 4).


The default values work well for many typical integration sessions; however, when integrating a very large number of images and/or very large images, or when we're working on a system short on memory or resources, adjusting these values can significantly improve performance.
Use file cache: If enabled, ImageIntegration creates a dynamic cache of working image parameters, such as pixel statistics, normalization data and more, that greatly improves performance. Disabling this option will force ImageIntegration to recalculate all this data any time it's needed, usually requiring image files to be reloaded from disk and considerably slowing down the integration process.

Pixel Rejection (1)

Rejection Algorithm: The rejection algorithm is another crucial decision during image integration. ImageIntegration offers several state-of-the-art rejection methods.
• Min/Max: Min/max performs a rejection based on a fixed number of pixels from each stack, without any statistical basis. It is not recommended unless we have some specific goal in mind that requires this kind of rejection.
• Percentile Clipping: This is the preferred rejection method for very small sets of images, such as 3 to 6 images. Percentile clipping rejects pixels outside a fixed range of values relative to the median of each pixel stack in a single pass.
• Sigma Clipping: The iterative sigma clipping algorithm is usually the best option to integrate 10 to 15 images or more.
• Winsorized Sigma Clipping: This algorithm is similar to the Sigma clipping algorithm, but uses a special iterative procedure that ensures a robust estimation of parameters through a technique named winsorization. This is the preferred algorithm for large sets of images.
• Averaged Sigma Clipping: The averaged iterative sigma clipping algorithm is another good option for small sets of between three and 10 images. For slightly larger sets of images, the “normal” Sigma clipping algorithm tends to perform better.
• Linear Fit Clipping: This algorithm minimizes the average absolute deviation and maximizes inliers, often beating sigma clipping for large image sets, particularly when the images have wild sky gradients. A minimum of five images is required and a minimum of 15 is recommended.
• CCD noise model: This algorithm must be used with uncalibrated data and accurate sensor parameters that we enter in the Pixel Rejection (3) section. It should normally be used only to integrate calibration images (bias, darks and flats).
• Generalized Extreme Studentized Deviate (ESD) Test: The ESD test is used to detect outliers in a stack that follows an approximately normal distribution. It offers excellent results when integrating 25 or more images.
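As a rough illustration of what a sigma-based rejection does to a single pixel stack (one value per input frame), here is a minimal numpy sketch; it is a simplified outline of the idea, not ImageIntegration's implementation, which also normalizes the frames and uses robust scale estimators.

```python
import numpy as np

def sigma_clip_stack(stack, sigma_low=4.0, sigma_high=3.0, max_iter=10):
    values = np.asarray(stack, dtype=np.float64)
    mask = np.ones(values.size, dtype=bool)            # True = pixel still accepted
    for _ in range(max_iter):
        center = np.median(values[mask])
        sigma = values[mask].std()
        if sigma == 0:
            break
        new_mask = ((values >= center - sigma_low * sigma) &
                    (values <= center + sigma_high * sigma))
        if np.array_equal(new_mask, mask):
            break                                      # converged: nothing new to reject
        mask = new_mask
    return values[mask].mean(), ~mask                  # integrated value, rejected pixels
```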

Normalization: Normalization is essential to perform a correct pixel rejection, as it corrects for images having different statistical distributions.
• Scale + zero offset: This method matches mean background values and dispersion via multiplicative and additive transformations. This is the default and recommended rejection normalization method.
• Equalize fluxes: This method matches the main histogram peaks of all images prior to pixel rejection via a multiplicative operation. This is the preferred method to integrate sky flat fields, when trying to match dispersion does not make sense. We can also use this method to integrate uncalibrated images.
• Local normalization: Use normalization functions from local normalization files (.xnml) that were calculated and created previously with the LocalNormalization process and that are associated with our input files. This is the method of choice when the input images are substantially different from one another, whether due to different optics being used, different conditions during acquisition, etc.

Generate rejection maps: Rejection maps represent the number of rejected pixels by displaying pixels of different intensity. Maps are generated for both low and high rejected pixels.
Clip low pixels: Reject pixels with values below the median of the pixel stack.
Clip high pixels: Reject pixels with values above the median of the pixel stack.
Clip low range: Reject pixels with values equal to or below the Range low parameter, defined in Pixel Rejection (2).
Clip high range: Reject pixels with values equal to or above the Range high parameter, defined in Pixel Rejection (2).


Report range detection: Show the number of range-rejected pixels in the summary that is displayed on the console. If disabled, this information is not displayed in the summaries.
Map range rejection: Include range-rejected pixels in the rejection maps.

Pixel Rejection (2)

Most of these low/high values control limits and ranges for the pixel rejection process.
Min/Max low: Number of low (dark) pixels to be rejected by the Min/max algorithm.
Min/Max high: Number of high (bright) pixels to be rejected by the Min/max algorithm. This option and the one above are only available when the Min/max algorithm has been selected as the rejection algorithm.
Percentile low: Low clipping factor for the Percentile clipping rejection algorithm. The lower the value, the more dark pixels will be rejected.
Percentile high: High clipping factor for the Percentile clipping rejection algorithm. The lower the value, the more bright pixels will be rejected. This option and the one above are only available when the Percentile clipping algorithm has been selected as the rejection algorithm.
Sigma low: Low sigma clipping factor for the Sigma clipping and Averaged sigma clipping rejection algorithms. The higher the value, the fewer dark pixels will be rejected.
Sigma high: High sigma clipping factor for the Sigma clipping and Averaged sigma clipping rejection algorithms. The higher the value, the fewer bright pixels will be rejected. This option and the one above are only available when the Sigma, Winsorized sigma or Averaged sigma clipping algorithms have been selected as the rejection algorithm.
Winsorization cutoff: Only active when the Winsorized sigma clipping algorithm is selected. Any pixels farther away from the median than this value (in sigma units) are set to the median of all images being integrated for those particular pixels. This helps replace strong outliers with adequate values.
Linear fit low: Here we define the tolerance of the Linear Fit Clipping algorithm (in sigma units) for low pixel values.
Linear fit high: Here we define the tolerance of the Linear Fit Clipping algorithm (also in sigma units) for high pixel values.


ESD outliers: When using the Generalized Extreme Studentized Deviate (ESD) Test, use this parameter to determine the maximum number of outlier pixels that can be detected in each stack. The value is a proportional measure, where a value of 0.2 indicates that up to 20% of outlier pixels can be detected for each pixel stack, a value of 0.4 indicates 40%, and so on.
ESD significance: This parameter, also only available when ESD is used, allows us to determine the probability that a false positive is flagged as an outlier. As with ESD outliers, we can interpret the [0,1] range as a percentage, with 0.1 meaning 10%, 0.05 meaning 5%, etc.
Range low: Active when the Clip low range option was enabled in the Pixel Rejection (1) section, this parameter is used to set the low values being rejected. Any pixels with values equal to or lower than this parameter will be rejected.
Range high: Active only when the Clip high range option is also enabled in the Pixel Rejection (1) section, this parameter is used to set the high pixel values being rejected. Any pixels with values equal to or higher than this parameter will be rejected.

Pixel Rejection (3)

These are the required parameters for the CCD Noise Model rejection algorithm.
CCD gain: CCD sensor gain in electrons per data number (e-/ADU).
CCD readout noise: CCD readout noise in electrons.
CCD scale noise: Indicates the CCD scale noise (AKA sensitivity noise). This is a dimensionless factor. Scale noise typically comes from noise introduced during flat fielding. This and the previous two parameters are only used by the CCD noise model rejection algorithm.

Large-Scale Pixel Rejection

Large-scale pixel rejection is particularly useful to reject large and bright unwanted signal such as the traces left by airplanes, satellite trails, or even RBI artifacts. It can also reject large dark structures. When enabled, computation time will increase noticeably.
Reject low large-scale structures: Enable large-scale rejection for pixels with low values. This will reject dark large-scale structures.
Layers (low): Define here what structures are considered “large” for the purpose of rejecting low pixel values. Layers with a scale equal to or lower than this value will not be considered large-scale structures. Therefore we can increase this value to detect and reject larger dark structures. Note that including too many layers will increase computing time and may result in too many pixels being rejected, while including just one layer may not efficiently reject unwanted large-scale structures, depending on our image.
Growth (low): Once the large-scale structures have been identified, we can increase the rejection area a bit, which can produce a more accurate definition of the ideal rejection area. The default value of 2 is a good compromise.
Reject high large-scale structures: Enable large-scale rejection for pixels with high values. This will reject bright large-scale structures, which is usually the reason for enabling large-scale pixel rejection.
Layers (high): Define here what structures are considered “large” for the purpose of rejecting high pixel values. Layers with a scale equal to or lower than this value will not be considered large-scale structures. As with the Layers (low) parameter, including too many layers will increase computing time and may result in too many pixels being rejected, while including just one layer may not efficiently reject unwanted bright large-scale structures, depending on our image.
Growth (high): Same behavior as documented for the Growth (low) parameter, except that this time the rejection area targets high pixel values. Therefore, once the large-scale structures have been identified, we can increase the rejection area a bit, which can produce a more accurate definition of the ideal rejection area. Again, the default value of 2 is a good compromise.

Region of Interest

We can define an ROI to restrict ImageIntegration's rejection and integration tasks to a specific rectangular area. This is mainly used for testing and analyzing different parameters faster than if the entire images were integrated. That said, ImageIntegration still calculates image statistics from the whole images.

IntegerResample Process > Geometry

IntegerResample resizes images by integer factors. The selected image in the view selection list at the top is used only to illustrate how its dimensions would change; the process is not necessarily applied to it.


When to use IntegerResample
IntegerResample is often used to rescale an image so it matches the binning at which other images were captured. In reality it can be used anytime we need to rescale our images up or down by multiplying or dividing their dimensions by an integer value. For more arbitrary rescaling, the Resample process is preferred.

Parameters
Resample Factor: The scaling factor.
Downsample: Reduce the target image size by the resample factor.
Upsample: Increase the target image size by the resample factor.
Downsample mode: When downsampling, we can indicate whether the resampling algorithm will calculate the resampled pixels from the average, median, maximum or minimum values of the group of pixels being downsampled.

Dimensions

Width/Height: If we select an image/view in the view selection list at the top, the Original px values will be populated with the width and height of the image. The rest of the values in this section (Target px, cm and inch) will indicate the final size of the image in pixels, centimeters and inches respectively, based on the Original px values, the resample factor and whether we selected Downsample or Upsample.

Resolution

Horizontal/Vertical: Define the horizontal and vertical resolution of the target image in pixels per inch/cm (see below).
Centimeters/Inches: Select centimeters if the resolution entered in the Horizontal and Vertical parameters is in pixels per centimeter. Select inches if it's in pixels per inch.
Force Resolution: When selected, this option also changes the resolution and resolution unit of the target image.
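For reference, a minimal numpy sketch of an integer downsample is shown below (a block reduction by the chosen factor using the selected downsample mode); it is an illustration of the operation, not PixInsight's implementation.

```python
import numpy as np

def integer_downsample(img, factor, mode="average"):
    # img is a 2-D array; any remainder rows/columns that don't fill a block are dropped.
    h, w = img.shape
    h2, w2 = (h // factor) * factor, (w // factor) * factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    reducers = {"average": np.mean, "median": np.median,
                "maximum": np.max, "minimum": np.min}
    return reducers[mode](blocks, axis=(1, 3))    # one output pixel per factor x factor block
```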


InverseFourierTransform Process > Fourier

The InverseFourierTransform process in PixInsight transforms the components of a Fourier transformation from the frequency domain into the spatial domain. Once applied, it will provide the reconstructed image from the two Fourier transformation components.

When to use InverseFourierTransform
The typical strategy is to apply FourierTransform to an image, perform some operation on the resulting components and then apply InverseFourierTransform to reconstruct the image. There are countless operations that can be done on the components, for many different purposes, from sharpening or smoothing to analyzing the data for classification, or for scientific applications such as spectroscopy and many non-astronomy related disciplines. As stated, InverseFourierTransform is used in conjunction with FourierTransform. As mentioned in the FourierTransform documentation, one typical use in the processing of astronomical images is the detection, analysis and elimination of residual patterns, periodic noise and the like. Review the FourierTransform documentation for more information.

Parameters
First DFT component: Here we enter either the image representing the magnitude component (the phase would then be required as the second component) or the image representing the real component (the imaginary component will then be required as the second component).
Second DFT component: Depending on whether we're entering the complex components (real and imaginary) or the polar components (magnitude and phase), enter here the imaginary (complex) or the phase (polar) component.
On out-of-range result: What to do with out-of-range values after the inverse Fourier transform.




• Don't care: Don't do anything to the out-of-range values. This may render invalid pixel values, so this option is not recommended and it's rarely used.
• Truncate: Out-of-range values are truncated, maintaining the original dynamics of the image. This is the default and most commonly used option.
• Rescale: If there are out-of-range values after the inverse Fourier transform, rescale the entire image to the [0,1] range. While this option does not truncate/clip any data, the dynamics of the image could be different from the original image.
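The round trip described above can be sketched in a few lines of numpy; this is only an illustration of the forward/inverse workflow and of the Truncate option, not PixInsight's implementation.

```python
import numpy as np

def fourier_round_trip(img):
    dft = np.fft.fft2(img)                          # forward DFT of a 2-D image
    magnitude, phase = np.abs(dft), np.angle(dft)   # the two polar components
    # ... some operation on magnitude and/or phase would go here ...
    recombined = magnitude * np.exp(1j * phase)     # rebuild the complex spectrum
    result = np.fft.ifft2(recombined).real          # inverse DFT back to the spatial domain
    return np.clip(result, 0.0, 1.0)                # handle out-of-range values by truncation
```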

Invert Process > IntensityTransformations

This process does not contain a dialog box – that is, it's immediate and it's applied to the last active view. It will invert (create a negative image of) the currently active view.

When to use Invert
Invert is commonly used when defining or using masks, anytime we wish to swap the areas being protected (and not protected) by the mask. Even though masks can be inverted on the fly without actually inverting the data, it's often less confusing to keep all masks interpreted the same way (all with the invert toggle either on or off), in which case the Invert process is easy to apply.

LRGBCombination Process > ColorSpaces

LRGBCombination is used to create a new image from a combination of individual images, each defining a single component: Lightness, R, G and B data. The procedure is performed within colorimetrically defined CIE L*a*b* and CIE L*C*h* color spaces, with perfect isolation between color and lightness data.


When to use LRGBCombination
LRGBCombination is the tool of choice to integrate lightness and color data. This is a common situation in deep-sky photography, where it is customary to capture a long session of data with a luminance filter, then a (usually) shorter session of color data, typically via R, G and B filters. This approach of capturing data via LRGB filters builds on the idea that, when we're looking at an image, the details in the image have more impact than the color information, and therefore, by obtaining high SNR with a luminance filter (details) and lower SNR for the color data, we can produce a “better” image in less integration time than if we just captured RGB data. The suitability of this approach may be debatable but, with its pros and cons, whenever we need to combine luminance data with color data, LRGBCombination is the tool to use. The luminance and color data do not necessarily need to come from capturing data with luminance and RGB filters, but when it comes to the actual combination, LRGBCombination will treat the L data as lightness and the color data as RGB channels. It is important to feed nonlinear data to LRGBCombination, therefore a good time in the workflow to execute LRGBCombination is after the first nonlinear stretch of both the L and the RGB data. We may choose to apply some processes to the lightness or the color data separately prior to the LRGBCombination; that depends on our particular strategy.

Parameters
Channels / Source Images: Enter in each of the text boxes the images corresponding to each of the LRGB channels we wish to combine.


If we have a lightness image that we wish to apply to an RGB image – or vice versa – we do not need to extract each of the RGB channels from the color image before using LRGBCombination. We can simply disable the R, G and B slots, include the lightness file name in the L text box, then apply a new instance to the image containing the RGB data. This operation will replace the original RGB image (the image to which we applied the new instance) with the output of LRGBCombination. If a text box is left with its default value, PixInsight will try to use an image with the same name as the target image plus the suffix _X, where X corresponds to the abbreviation for that particular channel (_R for red, etc.), although it's recommended to specifically indicate the source file(s).

Channel Weights

In addition to combining the LRGB data, LRGBCombination allows us to set specific weights for each component. Normally we wouldn't need to modify these values.
Uniform RGB dynamic ranges: The individual weights are rescaled to define a uniform dynamic range. This maximizes the available dynamic range as well as helps preserve a correct chromatic balance across the entire image. Disable this option to skip this adjustment.

Transfer Functions

Lightness: The lightness transfer function allows for adaptation of the lightness to the available RGB data. A value of 0.5 does not change existing values. This parameter can be adjusted, if necessary, to adapt the lightness to the existing brightness and contrast of the RGB data.
Saturation: The saturation transfer function works with a specific noise reduction algorithm (see below), allowing dramatic color saturation improvements with virtually zero chrominance noise. Try decreasing the saturation balance to increase color saturation in our LRGB combined image, or increasing it to desaturate the result. Yes, we set a lower value to increase saturation, and vice versa.

Chrominance Noise Reduction

A chrominance-specific, multiscale noise reduction algorithm can be applied at the final stages of the LRGB combination procedure, as part of the saturation transfer function (see the preceding point), by enabling this option.
Smoothed wavelet layers: Defines the structure scales to which we wish to apply smooth noise reduction.


Protected wavelet layers: Defines the structure scale that will be protected from the noise reduction algorithm.
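The idea of keeping lightness and chrominance separate can be sketched as follows using scikit-image's CIE L*a*b* conversions; this crude example (assuming nonlinear, normalized inputs) only illustrates the concept and is not LRGBCombination's algorithm, which also applies channel weights, transfer functions and chrominance noise reduction.

```python
import numpy as np
from skimage import color

def simple_lrgb_combination(luminance, rgb):
    # luminance: 2-D array in [0,1]; rgb: HxWx3 array in [0,1], both already stretched.
    lab = color.rgb2lab(rgb)             # move the color data to CIE L*a*b*
    lab[..., 0] = luminance * 100.0      # replace L* (0..100) with the luminance image
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```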

LarsonSekanina Process > Convolution

LarsonSekanina is an implementation of the rotational gradient filter algorithm of Larson and Sekanina (Sekanina Z., Larson S. M., Astronomical Journal, 1984). LarsonSekanina allows us to apply this algorithm either as a true rotational gradient filter, or as a high-pass filter in polar coordinates. This implementation also includes a deringing algorithm, the possibility to modulate the filtering effect by combining the filtered and original images by arbitrary amounts, and a dynamic range extension feature to fix over-saturated areas.

When to use LarsonSekanina
The Larson Sekanina algorithm is commonly used for the morphological study of comets. The filter does not have a typical astroimage processing aesthetic goal, other than of course being able to reveal visually “hidden” variations of brightness.

Parameters Filter Parameters

Radial Increment: Radial increment in pixels used to calculate the gradient. The slider on top defines large-scale structures (above 10 pixels). When that slider reaches its minimum value (10), the slider on the bottom takes over, to define values between zero and 10. This is done so that we can obtain more accuracy at small scales while still using the graphical interface. We can always enter the exact desired values manually.


Angular Increment: Angular increment in degrees used to calculate the Laplacian. The double slider interface works the same way as for the radial increment parameter.
X-Center / Y-Center: Center of polar coordinates. When applied to an image of a comet, here we would enter the coordinates of the comet's nucleus.
Interpolation: Here we define the method used to perform the interpolation. The bicubic interpolations tend to be smoother than the bilinear interpolation.

Filter Application

Amount: Strength of the filter, on a scale of 0.001 to 1.
Threshold: Threshold value in the [0,1] range to protect image features low in contrast.
Deringing: Threshold value to fix dark rings around bright image structures. Increase for more deringing strength.
Use luminance: If enabled, LarsonSekanina will only be applied to the luminance of color images.
High-Pass Mode: Enable to use LarsonSekanina as a high-pass filter. Disable to use it as a rotational gradient filter.

Dynamic Range Extension

The dynamic range extension works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result. Use the following two parameters to define different dynamic range limits. We can control both the low and high range extension values independently.
Low Range: Shadows dynamic range extension.
High Range: Highlights dynamic range extension.
Disable: Only allowed for floating point images. Be aware that if disabled, out-of-range values may arise.
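For orientation, one common formulation of the angular rotational gradient can be sketched with scipy; this simplified example assumes the comet's nucleus sits at the image center (scipy rotates about the center), so it ignores the X-Center/Y-Center, deringing and range extension features described above.

```python
import numpy as np
from scipy import ndimage

def larson_sekanina_angular(img, delta_deg):
    plus = ndimage.rotate(img, +delta_deg, reshape=False, order=3, mode="nearest")
    minus = ndimage.rotate(img, -delta_deg, reshape=False, order=3, mode="nearest")
    # Rotational gradient: differencing against slightly rotated copies reveals faint jets.
    return 2.0 * img - plus - minus
```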


LinearFit Process > ColorCalibration

LinearFit is a process that calculates and applies a series of linear fitting functions to match the signal levels of a target image with the levels obtained from a reference image.

When to use LinearFit
Despite what its name may suggest, LinearFit is not limited to working on linear images, although many common uses of LinearFit do take place while our data is still linear. It is usually a good idea to run LinearFit whenever we have two or more images that we're about to integrate or combine somehow, although its use depends on whether our workflow already accounts for, or requires, adjusting signal levels. For example, if we have three images, each with data from the R, G and B filters respectively, and we're going to combine them into a single color RGB image, LinearFit can adjust the signal levels and mean background of two of the images (say G and B) to match those of the third one (R). Note that in this example, LinearFit is not a replacement for color balance. LinearFit can also be applied in other situations where combination or integration of multiple images is imminent. One such use is building mosaics, which are much easier to assemble seamlessly if the images to be combined in the mosaic have matching levels. Again, our particular workflow dictates whether LinearFit is required or whether we make these adjustments some other way.

Parameters
Reference image: This is the image that LinearFit will use as a reference.
Reject low: Pixels equal to or lower than this value in either the reference or the target images will be ignored. Since the minimum value of this parameter is zero, black pixels are always ignored.


Reject high: Pixels equal to or greater than this value in either the reference or the target images will be ignored. Since this parameter is always equal to or smaller than one, white pixels are always ignored.
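The following minimal numpy sketch shows the idea behind LinearFit: fit a straight line mapping the target's levels to the reference's levels, ignoring pixels outside the rejection bounds, then apply it. It is an illustration using an ordinary least-squares fit, not the tool's actual robust fitting algorithm, and the parameter values are only examples.

```python
import numpy as np

def linear_fit_match(reference, target, reject_low=0.0, reject_high=0.92):
    keep = ((reference > reject_low) & (reference < reject_high) &
            (target > reject_low) & (target < reject_high))
    x, y = target[keep].ravel(), reference[keep].ravel()
    slope, intercept = np.polyfit(x, y, 1)      # straight line y ~ slope*x + intercept
    return intercept + slope * target           # target rescaled to the reference's levels
```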

LocalHistogramEqualization Process > IntensityTransformations

This process offers a method to perform local histogram equalization via a contrast-limited implementation that produces a more even contrast range than classic equalization algorithms. LocalHistogramEqualization helps enhance local contrast and details in the image.

When to use LocalHistogramEqualization
Typically included late in the workflow, or at the very least once the image is no longer linear, LocalHistogramEqualization can be useful on images with low-contrast areas that we want to enhance with more contrast and detail.

Parameters
Kernel Radius: This is the radius, in pixels, defining the area around a pixel used to evaluate the histogram. While lower values tend to increase contrast and details, they can also produce ringing artifacts or enhance noise. Higher values have the opposite effect: weaker contrast but less noise and fewer artifacts. Values between 50 and 250 are recommended for most cases.
Contrast Limit: This is the highest permitted value for the slope of the transfer function. A value of one leaves the image unaltered, with higher values strengthening the effect. For most uses, small values between 1.5 and 3 are recommended.


Amount: Strength of the effect over the target image. A value of one completely replaces the original image with the processed version. A value of 0.7 blends 70% of the processed version with 30% of the original version, and so on. Histogram Resolution: Bit depth of the histogram used to calculate the transfer function. For most cases, the default of 8-bit works well. When using a big kernel radius, we can increase to 10 or 12-bit if the 8-bit results appear to cause artifacts or posterization.
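In other words, the Amount parameter is just a linear blend between the equalized and the original image (my own summary of the behavior described above):

    output = Amount · LHE(image) + (1 − Amount) · image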

LocalNormalization Process > ImageCalibration

LocalNormalization is a very powerful and useful process. However, to get the most out of it, it is very helpful to have a good understanding of how, why and when to use it. Normalization is a process that corrects the illumination of an image or a group of images so that all images become statistically compatible. This makes pixel rejection routines much more effective, and it is the reason normalization has always been part of the ImageIntegration process, and still is. What LocalNormalization adds over the normalization routines already implemented in ImageIntegration is that LocalNormalization can detect and correct brightness variations locally, around small “neighborhoods” in the image, as opposed to making calculations and adjustments globally to the whole image, hence the word “local”.
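Conceptually (a simplified way of writing it, not the exact internal formulation), each target image is corrected with a smooth, locally varying function of the form

    I'(x,y) = s(x,y) · I(x,y) + o(x,y)

where s is a multiplicative (scale) component and o an additive (offset) component, both estimated in small neighborhoods against the reference image. This notation will also be useful later, when we discuss the No scale component option, which essentially forces s to one everywhere.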

When to use LocalNormalization Although some tutorials suggest using LocalNormalization as part of a regular workflow, it must be emphasized that there are many situations where our final results won't be significantly different whether we use LocalNormalization or not, plus many other cases where LocalNormalization will simply not contribute anything to our image. LocalNormalization is most useful when we have a set of images to integrate that are of rather different data quality or, in more precise terms, that are statistically different. Situations that can lead to this scenario are data sets captured from different locations or under different atmospheric or weather conditions (which can happen throughout a single night), or even with different equipment. In those cases, LocalNormalization can be a life saver. LocalNormalization has also been touted as a great tool to deal with gradients early, and its approach of correcting each image individually makes more sense than correcting a single integrated image with “integrated” gradients. However, LocalNormalization is a lot more than a gradient correction tool, and its other implications need to be considered as well. Ultimately, whether to use LocalNormalization mostly as an aid to correct for gradients, or to deal with such gradients in other ways (say, applying DBE to each image in the batch, or just to the integrated image later in the process – both extremely effective ways to correct for gradients) is entirely up to the operator and the workflow we choose to follow. In any case, LocalNormalization is best used on single, calibrated, registered linear images, right before starting image integration.

Parameters Reference image: This is the image that will be used as a reference by LocalNormalization to determine the corrections that need to be applied to the target image(s). Not only should it be our best image from the entire batch, it should also be an image that is a good representation of the true brightness variations across the entire image. For this reason, it's not unusual to run this reference image through some “cleaning” processes prior to using it as a reference, such as a careful background extraction process (DBE) or even noise reduction routines that work well on individual subframes prior to image integration, like MureDenoise. These adjustments would be done on a duplicate of our reference image, not on the actual reference image, of course. That said, when other processes are applied beforehand, we need to understand how they may affect the local normalization process on the remaining images. The best way we can do this is by applying LocalNormalization using a “cleaned” reference image, then using an “uncleaned” reference image, and comparing the results. It's good to remember that during a batch execution (apply global), if we use one of the images in the batch as a reference, we must also include it in the target list, so LocalNormalization also computes the normalization function for that image. To avoid confusion, as mentioned earlier, it's better to generate a separate reference image, whether as a file or just a view, regardless of whether we'll apply some cleaning processes to that view or not.


Not only that, using one image from the batch (corrected or not) as a reference is not a requirement during a batch integration workflow. A previously integrated image can also serve as a good reference, depending on the situation, as long as it also is a good representation of the true variations in the image. Since LocalNormalization is particularly useful in unconventional situations, it is often those situations that lead us to take one approach or another. Scale: This parameter defines what “local” is when doing the local normalization, therefore it greatly influences the quality of the local normalization functions defined for each target image. The number in Scale is the size (in pixels) of the scale that will be used to determine local variations. Luckily, the default of 128 is a good average for most images, but fine-tuning may be required depending on the images being processed. Values that are multiples of 32 are preferred: 32, 64, 96, 128, 160... Values above 512 are rare and in most cases will result in a very global, not local, normalization. An effective local normalization requires a scale that is low enough to pick up differences from smaller regions in the image, but large enough so that it doesn't adjust regions that don't need it. When these unneeded corrections are made on the target image(s), in some cases, depending on the differences and other factors, artifacts may appear, although they are rare. In these cases, increasing the value of Scale often helps, but if we see that the background model is becoming way too soft while smaller scales still produce artifacts, we may consider either defining a very strong pixel rejection (below) or excluding the local normalization step altogether.


Outlier rejection

Outlier rejection: Since the local normalization process can be affected by differences between the reference and target images, it is a really good idea to reject outliers before normalization so the models are more accurate, among other things. The following five parameters help define what will be rejected or not. The default values for all five parameters tend to do a good job most of the time, so we should fine-tune them only when we want to be very precise about what pixels should be rejected or not, or when dealing with strong and significant outliers. Hot pixel removal: When a pixel is found to be an outlier and therefore needs to be removed, LocalNormalization applies a median filter to remove it. This is the size in pixels of the radius of that median filter. A value of zero is allowed, but that would effectively cancel any pixel removal operation even when bad pixels are found. The maximum value is 4. The default value of 2 usually works well in most scenarios. Noise reduction: Prior to scanning an image for outliers, we can instruct LocalNormalization to apply some noise reduction to the image, and this value is the size (radius) in pixels of the Gaussian filter that is used to do the noise reduction. When detecting bad pixels and outliers, applying noise reduction to our images is often counterproductive, as small-scale outliers may then not be detected. That's the reason the default value is 0, which in this case means that no noise reduction will be done prior to detecting bad pixels – this being the recommended value for almost every situation. In really noisy images or in images with clipped or saturated data, however, outlier detection can be trickier, and some noise reduction could actually assist in better detection of outliers. Even in these cases, it's recommended to keep this value low. Very high values often lead to inaccurate rejection and modeling, and poor local normalization. Background limit: Once the approximate local background value has been calculated, this parameter indicates how different from that value a structure needs to be in order to be considered significant. The larger the difference, the more outliers may be detected and the larger the rejected areas could be. The default 0.05 is a good value for most cases. When we know there are some large and strong outliers (plane or satellite trails), we can try increasing it in small 0.1 intervals, or reducing the threshold value (see next two parameters), always evaluating the rejection maps (see option to enable these maps, below). Fine-tune at the end if needed.


Reference threshold: Rejection is computed for both the reference and the target images. This parameter defines a threshold for a structure to be considered an outlier in the reference image only. The smaller the value, the more structures will be considered outliers, and vice-versa: the higher the value, the fewer pixels will be targeted for rejection. Note that this parameter also depends on the value of Background limit, as the limit it sets affects the threshold defined here. Target threshold: This is the same parameter as the Reference threshold, but applied to the target image(s) instead. Again, the higher the value, the fewer pixels will be rejected, while lowering the value too much could reject much more than what is needed, especially with a high Background limit.
Support Files / Normalization

Apply normalization: This option defines the behavior of LocalNormalization when it comes to creating output and support files, depending on whether it's applied globally or to a view (an opened image). In reality, we can run LocalNormalization both globally and on a view regardless of this setting (with one exception that we'll mention in a moment), but the results won't be the same.

Global execution only/Always: We should select Global execution only (or Always) if we want LocalNormalization to create normalized images, and apply globally (click the “Apply Global” icon). As with any global execution in similar processes, we must have some files in the Target Images area for it to work. Global execution only is not the default option because normalized (corrected) images are not usually needed at this time if our only goal here is to later integrate all images with these corrections applied. Why not? Because for image integration all we need are the XNML files, not the normalized images. XNML files are files that LocalNormalization can create (more on this in a moment) and that contain all the local normalization details and functions for each image that can later be used by other processes like ImageIntegration and DrizzleIntegration. The main reason for selecting this option and creating normalized images would be to examine them, run some tests or comparisons, etc. If applied on a view with Global Execution only selected, LocalNormalization will not create any normalized images or XNML files (even if Generate normalization data is enabled), but it will still create background models, rejection maps and graph plots for the reference image and the view being targeted, assuming such options are enabled (see below).


In fact, if we want to create background models, rejection maps or graphs, we must execute LocalNormalization on a view, regardless of the option selected for Apply normalization.

View execution only/Disabled: We select this option if we don't want to create normalized images. In this case, LocalNormalization will not run if Generate normalization data is disabled and we try to run it globally. Also, any time we run LocalNormalization on a view, neither XNML files nor normalized images are created, regardless of the option selected here and regardless of whether the Generate normalization data option (below) is enabled or disabled.

While the different scenarios, combinations and results described above may appear confusing, the main ideas are:
• To generate normalized XNML files, execute globally.
• To generate models, maps and/or graphs, apply New Instance to a view. Good for testing.
• To generate normalized images (not typical), select Global execution only or Always.
• To avoid generating normalized images (usual), select View execution only or Disabled.
No scale component: When enabled, the multiplicative component of the normalization correction is removed (technically, it's set to a value of one). In practical terms this means that we would only enable this parameter for some very specific cases where we want LocalNormalization to correct only additive differences and nothing more, such as when we have a data set of similar statistical values (data captured under similar conditions) that is affected by gradients (an additive component, not a multiplicative one). Generate normalization data: Generate an XNML file for each target image, when the process is applied globally. ImageIntegration and DrizzleIntegration can later use these files as stated earlier. Show background models: When this option is enabled and we apply LocalNormalization on a view, two files will be created: one representing the background model calculated for the reference image and the other, the background model calculated for the target image. The files will be named “background_r” (“r” for reference) and “background_t” (“t” for target). We should enable this option when we're testing different Scale values.


Show rejection maps: When this option is enabled and we apply LocalNormalization on a view, two files will also be created. In this case these are the pixel rejection maps for the reference and target images. The files will be named “LN_rmap_r” and “LN_rmap_t”. We enable this option when we're testing different pixel rejection parameters. Plot functions: When one of the three “3D” options is selected and, once again, assuming we're applying LocalNormalization to a view, two images are created, each with a nice graph representing the scale and offsets of the normalization function.
Target Images

This section is composed of a large box with the list of images to be locally normalized. Add Files: Click here to add files to be locally normalized. Select All: When clicked, all target images become selected. Invert Selection: Deselected files will be selected, and files already selected will be deselected. Toggle Selection: Enables or disables the currently selected images; disabled images remain in the list but are ignored when the process runs. Remove Selected: Removes the selected file(s) from the list. Clear: Removes all the files from the list. Full Paths: When enabled, the full path of the images in the list will be displayed, as opposed to just the file name.
Format Hints

We can use input and output format hints in LocalNormalization. Input hints can change some ways in which the Target Images are loaded, and output hints can change some details about the way output files are written. A list of format hints is available in the “Format Hints” chapter.
Output Files

By default, newly created files will have the same filename as their source files and the postfix “_n”. Output directory: When executed, LocalNormalization will generate the output image(s) in the directory specified here. If this field is left blank, new files will be created in the same directory as the input files.


Prefix: Insert a string of text at the beginning of each filename. Default is blank: no prefix. Postfix: Add a string of text at the end of the filename, prior to the file extension (.xisf, .fits, etc). The default is “_n”, as a reminder that these are normalized frames. Postfixes in PixInsight are best left at their default values, particularly for collaborative projects, practical tutorials, etc. Overwrite existing files: When enabled, if a file already exists in the Output directory with the same name, overwrite it. If this happens with the option disabled, the new filename will end with an underscore character followed by a numeric index: _1 the first time, _2 the second, and so on. On error: What should LocalNormalization do if it runs into an error while processing the target images? Continue (ignore the error and proceed with the next image), Ask User whether to continue or not, or directly Abort.

MaskedStretch Process > IntensityTransformations

MaskedStretch is a tool that helps us stretch our data without blowing up the highlights. It does that by applying small stretches (midtones adjustments in this case) iteratively, that is, every new stretch is applied to the result of the previous small stretch. For every new stretch, a mask is applied protecting bright areas. The first stretch uses a very weak mask, slightly boosting the entire image. Then the next stretch applies a slightly stronger mask, then an even stronger mask and so on. In practical terms, this means that the brighter we're making the image, the more protected the bright areas are, effectively avoiding saturation in these bright areas.
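The following Python sketch illustrates the iterative idea just described. It is not PixInsight's implementation (the helper names, the choice of mask and the way intermediate background targets are computed are simplifications of mine), but it shows how repeated small midtones stretches, each blended through a brightness-based mask, can raise the background toward a target value while protecting bright areas:

    import numpy as np

    def mtf(m, x):
        """Midtones transfer function: maps x = m to 0.5 and keeps 0 and 1 fixed."""
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    def masked_stretch_sketch(img, bg_pixels, target_bg=0.125, iterations=100):
        """Conceptual sketch of an iterative masked stretch on a linear [0,1] image.
        `bg_pixels` selects background (sky) pixels used as the reference."""
        out = img.copy()
        # Intermediate background targets, stepping from the current mean toward
        # the requested target in equal increments.
        steps = np.linspace(out[bg_pixels].mean(), target_bg, iterations + 1)[1:]
        for step in steps:
            m = mtf(step, out[bg_pixels].mean())  # midtones that lifts the background to `step`
            stretched = mtf(m, out)
            mask = out                            # brighter pixels get more protection
            out = mask * out + (1.0 - mask) * stretched
        return out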


When to use MaskedStretch MaskedStretch was originally conceived as a script to stretch a linear image (the script later developed into this module that is now part of the IntensityTransformations category). Therefore, it is most commonly used for this task of taking a linear image into a much brighter and visible nonlinear image. Despite its intended purpose of being applied to delinearize an image, it can also be successfully used in nonlinear data. In those cases, the overall strength of the stretch would need to be tamed accordingly.

Parameters Target background: MaskedStretch uses a background reference image to compute initial and final mean background values (see below to read more about the background reference image). We set this parameter to the final value we would like this background reference to have after the stretch. The default value of 0.125 is a good starting point for most linear images; we then increase or decrease it in 0.1 intervals depending on whether we want a stronger or weaker stretch. If suddenly our image becomes too dark or too bright, we back off and re-try with 0.01 intervals. If we want to aim at a target value that matches a particular intensity, we can use the HistogramTransformation tool to do a temporary stretch on the background reference to our liking, then use Statistics to see what the mean value is. Don't forget to undo the stretch, as it was only temporary. This approach works best when our background reference is a preview from our target image, so we can stretch the entire image, yet read the values for just the preview. Also, keep in mind the mean in Statistics refers to the entire image, not just the background, although if our background reference mostly includes background (sky) pixels, the value shouldn't be too far off. Iterations: This is the number of times MaskedStretch will execute small (masked) stretches, one after another. MaskedStretch will try to match the Target background value in the end, regardless of the number of iterations, but too few may result in artifacts, particularly around bright structures. When such artifacts appear, increasing the number of iterations should help. Other than that, the default 100 and even just 50 or so iterations are usually adequate. When working with nonlinear images, fewer iterations can sometimes be tolerated.


Clipping fraction: This parameter allows us to decide how many dark pixels in the image will be clipped before starting to stretch the data. The value represents the fraction of dark pixels to be clipped. For most uses, the default 0.0005 works well. Increasing this value will keep background areas from being stretched; however, too high a value will result in heavily clipped images. Color mask type: When MaskedStretch is applied to a color image, it can build the mask from the original image, based on two similar but different components from two color spaces: the Intensity component from the HSI color space, and the Value component from the HSV color space. In simple mathematical terms, intensity (I) can be calculated as the average of all three RGB values, whereas value (V) is the maximum of the RGB values. The differences in terms of masking are usually very subtle, although images dominated by one particular color may result in slightly more protective masks. A common strategy is to use the default HSI Intensity and, if colors appear too soft in the stretched image, try HSV Value. Do keep in mind that because MaskedStretch tends to produce low-contrast images, depending on how aggressive the stretch is we may obtain softer colors – this could be corrected later with an increase in color saturation, for example. When applied to grayscale images, MaskedStretch ignores this parameter. Background reference: As stated earlier, one of the first things MaskedStretch does is calculate the initial mean background of a reference image – a value that is then used to see how strong each stretch needs to be in order to reach the Target background value. Here we indicate the image containing this background reference. As with any background reference, this image – which can be a preview (recommended), or a Region of Interest defined below – should be a good representation of the image's background, contain mostly background pixels, no stars or nebulae, etc. If no background reference is indicated, MaskedStretch will use the target image also as the background reference. Lower limit: When calculating the mean background value from the background reference image, ignore any pixels with a value equal to or less than this value. The default value of zero ignores black pixels, and it's usually adequate. Upper limit: Same as Lower limit but now defining the highest possible pixel value that MaskedStretch will consider to be background data. In other words, any pixel value higher than this parameter is ignored when computing the mean background value.


Region of Interest

We can use this section to define a specific area within the background reference to be used as the background reference, as opposed to using the entire image defined in Background reference. For most purposes, this can be done faster directly by creating a preview in the background reference image and selecting the preview as the reference, but in some cases defining a ROI can be helpful – such as when using MaskedStretch as part of a script or a workflow where defining the ROI numerically makes sense.

MergeCFA Process > Preprocessing

MergeCFA is used in conjunction with SplitCFA for a particular debayering technique with color images. SplitCFA extracts the red, green and blue pixel values from a CFA image (there are two green pixels for every red and blue, so four values total), and creates four different images, one for each pixel value. Naturally, each resulting image has half the width and height in pixels. MergeCFA can then be used to put the four images back together after having done some processing with the images individually.

When to use MergeCFA MergeCFA is only used to reconstruct an image from the four CFA components that were previously split with SplitCFA (or with the proper PixelMath expressions). While it's generally recommended to use the different debayering options available in most calibration and integration tools in PixInsight, SplitCFA/MergeCFA were originally developed as an alternative for situations when regular calibration failed to produce good results. In these cases, SplitCFA is used to perform the extraction (the split) at the very beginning of the workflow, so each component can be calibrated individually. After that, one can recombine them back with MergeCFA and continue a typical CFA workflow, or instead, continue as if they were classic R, G and B data files, and at some point recombine them with ChannelCombination.

Parameters CFA Source Images: Each of the four CFA components to be merged. The order should be the same in which the individual images were extracted (split). At the moment, MergeCFA only accepts views (opened images), not files.
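For reference, the split/merge round trip is simple to express. The sketch below (Python/NumPy, assuming an RGGB Bayer pattern; the component order and labels are illustrative, not necessarily the exact naming SplitCFA uses) shows how the four half-size components relate to the mosaic:

    import numpy as np

    def split_cfa(cfa):
        """Split an RGGB Bayer mosaic into its four half-size CFA components."""
        return (cfa[0::2, 0::2],   # R
                cfa[0::2, 1::2],   # G (on the red rows)
                cfa[1::2, 0::2],   # G (on the blue rows)
                cfa[1::2, 1::2])   # B

    def merge_cfa(c0, c1, c2, c3):
        """Reassemble the four components into the original mosaic."""
        h, w = c0.shape
        cfa = np.empty((2 * h, 2 * w), dtype=c0.dtype)
        cfa[0::2, 0::2] = c0
        cfa[0::2, 1::2] = c1
        cfa[1::2, 0::2] = c2
        cfa[1::2, 1::2] = c3
        return cfa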

MorphologicalTransformation Process > Morphology

The MorphologicalTransformation (MT) tool is an implementation of several morphological transforms that are popular in astroimage processing, such as erosion (minimum filter), dilation (maximum filter) and more, with the support of a structuring element (similar to a tiny mask).

When to use MorphologicalTransformation As stated, morphological transformations are popular among astroimage processing enthusiasts, due to their ability to modify the apparent size of structures in our images – a task that comes in extremely handy in a large number of situations. The most popular tasks done with MorphologicalTransformation are often related to manipulating star size, whether on a mask or an actual astroimage. Dilation filters help us “increase” the size of bright structures, so it's common to use MT for increasing the size of stars in star masks (whenever the original stars in the mask weren't big enough for the task at hand). Likewise, erosion transformations are often used to reduce the presence of stars. A combination of erosion and dilation helps us generate results with less apparent artifacts.


MorphologicalTransformation can work on both linear and nonlinear images. When applied to masks, MT can be used at nearly any stage in our workflow. However, when applying MT to an actual image, most workflows tend to include morphological transformations when the images are nonlinear, sometimes after several other nonlinear processes have already been applied, such as noise reduction. There are exceptions, such as when correcting for star size differences between color channels that could otherwise lead to color rings – a correction that, if required, may be better applied early.

Parameters Morphological Filter

Operator: This is the function that will be used for the transformation. MT offers seven different functions: Erosion: Erosion is one of the two fundamental morphological operations. Erosion is defined as the set of all points z such that the mask, translated by z, is contained in the image. In other words, erosion outputs a zero if any of the input pixels under the “1” pixels in the mask (structuring element) are zero. In terms of image processing, erosion reduces an object’s geometric area. When all elements in the mask are equal, erosion acts as a simple minimum filter.



Dilation: Dilation is defined as the set of all points where the intersection of the structuring element and the image is non-empty. For each source pixel, if any of the pixels in the mask are “1” and line up with a source pixel which is also “1,” the output pixel is “1.” In practical terms, dilation increases an object's geometric area. When all elements in the mask are equal, dilation acts as a common maximum filter.



Opening: The opening function is the dilation of the erosion of the image. It tends to smooth outward bumps, break narrow sections and eliminate thin protrusions. The opening filter tends to darken bright details.



Closing: The closing operation is the erosion of the dilation of the image. It tends to eliminate small holes and remove inward bumps. The closing filter tends to brighten dark details.



Morphological median: A morphological median computes the median of the corresponding pixels in the target image according to the structuring element.






Morphological selection: Morphological selection acts as a blend between the erosion and dilation methods. When this option is selected, the Selection parameter becomes available. See the Selection parameter below.



Midpoint: The midpoint function will execute the transformation by averaging the minimum and maximum.
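Before moving on to the remaining parameters, here is a conceptual sketch of these operators in Python, using SciPy's grayscale morphology. It only illustrates the definitions above, it is not PixInsight's code, and reading the Selection parameter (described below) as a rank selection between minimum and maximum is my interpretation of its description:

    import numpy as np
    from scipy import ndimage

    def circular_element(size=5):
        """Circular structuring element of odd size, similar to the default shape."""
        r = size // 2
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return (x * x + y * y) <= r * r

    def morphology_sketch(img, element, selection=0.5, amount=1.0):
        eroded  = ndimage.grey_erosion(img, footprint=element)      # minimum filter
        dilated = ndimage.grey_dilation(img, footprint=element)     # maximum filter
        opening = ndimage.grey_dilation(eroded, footprint=element)  # dilation of the erosion
        closing = ndimage.grey_erosion(dilated, footprint=element)  # erosion of the dilation
        median  = ndimage.median_filter(img, footprint=element)
        midpoint = 0.5 * (eroded + dilated)
        # Morphological selection: pick a rank between minimum (0) and maximum (1);
        # 0.5 corresponds to the median.
        n = int(element.sum())
        selected = ndimage.rank_filter(img, int(round(selection * (n - 1))),
                                       footprint=element)
        # Amount blends the processed result with the original image.
        return {"erosion": eroded, "dilation": dilated, "opening": opening,
                "closing": closing, "median": median, "midpoint": midpoint,
                "selection": amount * selected + (1.0 - amount) * img}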

Interlacing: Defines an interlacing amount. For most applications, the default value works well. Iterations: Here, we define the number of times (iterations) we want to execute the defined morphological transformation. As is often the case, more iterations of lesser strength can yield better results than a single, stronger iteration. Still, just one or two iterations can produce satisfactory details in many cases, and it's common to use the default value of one. Amount: Strength of the morphological transformation. Selection: This parameter is only applicable when the Morphological selection operator has been selected. When this parameter is smaller than 0.5, the erosion effect is more noticeable; when it is over 0.5, the dilation effect is more noticeable. In fact, when Selection is equal to zero, we'd be applying a pure erosion transformation, while when this value is one, we have a pure dilation transformation. With a value of 0.5 we have the same as a morphological median transformation. Strictly speaking, the Selection parameter only works as expected when the structuring element has a radial symmetry, but that is okay, as this is often the case when working with astronomical images.
Structuring Element

The structuring element acts as a mask to the morphological transformation. For most tasks applied to astronomical images, we simply set the circular shape and adjust the Size parameter. PixInsight's implementation of MT, however, also allows us to define our own structuring elements, select from a number of predefined structuring elements, and even create and manage our personal library of structuring element definitions, a resource largely unnecessary for most astronomical image processing needs but available should the need arise. Size: Size in pixels of the structuring element. Way: We can combine several different structuring elements (this is called a multi-way morphological transformation). Depending on the active structures, this parameter may allow us to select just one way, or more. Once a way is selected, we can then define the morphological features of the structuring element.


Manage: We can load and save different definitions of structuring elements. The Manage option presents us with a dialog box to manage our library of structuring elements. Paint modes: Below the structuring element editing grid, there are three icons that we can use to define our own structuring element. From left to right:

Set element: Used to draw the structuring element in the editing grid. It will set any given pixel to “1.” Ctrl-click will draw the inverted (zero in this case).



Clear element: Used to clear a value in the structuring element editing grid. Just like with the previous tool, doing Ctrl-click will draw the inverse.



Show all ways: When a structuring element is defined by more than one way, the grid will usually only show the active way (see the Way parameter). Clicking this icon will toggle displaying all ways or just the active way.

Predefined structures: Under the Way parameter, MT offers a number of predefined structuring elements, as well as some buttons to perform basic operations on the structuring element, such as invert, rotate, set/reset all, etc.
Thresholds

The function of the threshold parameters in the MorphologicalTransformation process is to keep the transformation from modifying the original values more than desired. In practical terms it acts as an adaptive mask, achieving a more gradual effect than if hard thresholds were used. For most applications, leaving these parameters untouched yields appropriate results. Low: Set a low threshold for the morphological transformation. By increasing the low threshold, we prevent relatively bright pixels from becoming too dark. High: Set a high threshold for the morphological transformation. By increasing the high threshold, we prevent relatively dark pixels from becoming too bright.


MultiscaleLinearTransform Process > MultiscaleProcessing

MultiscaleLinearTransform allows us to perform a hierarchical decomposition of an image into groups of different structural scales and operate on those scales separately. The concept of scale here is similar to the idea of breaking up an image into different sub-images, each of them containing objects of a given size. For example, we could end up having all tiny stars in one image, larger stars in another, very big ones in a third image, another image only with galaxies of certain (pixel) size, and so on, in a way that, when recombining all these images, we would – in theory at least – end up with the original image. In reality, breaking up an image into different scales works somewhat differently than that, although the analogy is close enough. Rather than breaking the image into small or large individual objects, what we're doing is breaking it into small and large “details.” The small scales are where the details of the image reside. That's also where we can find most of the noise. Then, large scales contain information about the large details in the image. The larger the scale, the more it will look like just shapes, as opposed to details. See the image below to visualize the difference between scales.

Breaking an image into different structure scales

MultiscaleLinearTransform (sometimes also called MLT in PixInsight's jargon) was added to PixInsight in 2014 as a replacement for ATrousWaveletTransform, which is still available in PixInsight, although mostly for compatibility with old scripts. MultiscaleLinearTransform not only introduced a new multiscale transform based on multiple Gaussian filters, but it still offers the original wavelet transform from ATrousWaveletTransform, now renamed to Starlet instead of “à trous.”
Layers and Scales

MultiscaleLinearTransform (and all of PixInsight's other multiscale tools) groups different scales in layers. Layers are numbered sequentially (1, 2, 3, 4, 5… up to a maximum of 16) and each layer is assigned certain scale sizes. By default, scale sizes in layers grow in a sequence of powers of two, so layers 1, 2, 3, 4 and 5 will contain scale structures of 1, 2, 4, 8 and 16 pixels respectively, and so on. This is the dyadic sequence that we will see in a moment, and it is the only valid sequence when using the Starlet method. The sequence can, however, also be defined as a linear association between layers and scales, so for example, layers 1, 2, 3, 4, 5... could contain scales of 1, 2, 3, 4 and 5 pixels in size respectively, or with a different incremental value, say 1, 6, 11, 16 if we set an increment of 5. Just breaking an image into different scales/layers does not do much to improve the image in itself. However, MultiscaleLinearTransform allows us to apply different processes to each of the different scales separately. More specifically, MultiscaleLinearTransform offers detail enhancement and noise reduction capabilities that can be applied and adjusted individually for each layer.


This is where the multiscale approach becomes really useful. Once we have defined what layers/scales we're going to work with, we can tell MultiscaleLinearTransform to apply some sharpening or noise reduction (or both) to individual scales. The advantages of this approach are countless, and some are discussed next. We can attack noise only at the smallest scale structures, which is where the noise resides (in the small “details”) while leaving large scale “details” untouched. Or we could sharpen mid-size scale structures, where sharpening does not increase noise significantly. Or we could apply one or the other – or both – at multiple scales, but each with a strength suited for that particular scale.
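A compact way to visualize the decomposition is the sketch below (Python, SciPy). It mimics the spirit of the Gaussian-based multiscale linear transform with a dyadic scale sequence; the exact filters and scales PixInsight uses differ, so treat it purely as an illustration of how detail layers plus the residual reconstruct the image:

    from scipy import ndimage

    def mlt_decompose_sketch(img, n_layers=4):
        """Split `img` into detail layers of growing scale plus a residual."""
        layers = []
        previous = img
        for j in range(n_layers):
            sigma = 2.0 ** j                      # dyadic-like scales: 1, 2, 4, 8...
            smooth = ndimage.gaussian_filter(img, sigma)
            layers.append(previous - smooth)      # detail layer j+1
            previous = smooth
        residual = previous
        return layers, residual

    # The decomposition is exact: sum(layers) + residual == img, so each layer
    # can be processed on its own (e.g. scaled by a per-layer weight for
    # sharpening, or noise-reduced) and everything added back afterwards.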

When to use MultiscaleLinearTransform Despite its name, MultiscaleLinearTransform can be applied to both linear and nonlinear images. Since the two embedded processes in the tool allow us to adjust sharpening details and noise reduction, we can use MultiscaleLinearTransform anytime we want to either enhance details, reduce noise or both. Whether to apply these processes when the image is still linear and/or later in the process depends on our workflow choices, although it is worth mentioning that this is one process that can work really well to mitigate noise when applied to linear images, unlike other noise reduction tools. In addition to its most immediate uses of enhancing detail and reducing noise, the ability of MultiscaleLinearTransform to break the image into different structural scales opens the door to performing any manipulation or processing at any scale, not just the embedded sharpening (bias) and noise reduction. This can be accomplished by creating new images, each with one or more of the different scales we choose to work with (for example, one image with scales one to three, and a second image with the rest of the scales), manipulating the images containing the scale structures we're interested in, then adding them all back together with PixelMath. By working on the small scales we can address noise and detail issues, while the larger scales can be worked out to rescue faint signal dominating the entire scene, or the overall brightness of the larger objects in the image. MLT can also be used as a first step in the generation of masks, meaning the tool can be handy whenever we're aiming for a mask that could indeed be started with MLT. By extracting one or two specific layers, we can construct masks targeting small-scale structures (say, stars), or large structures to define the extent of brightness of the main object(s) in the image. Once the new image (our future mask) is created, we can manipulate it further with other processes to meet our goals for this mask. For example, if we're using MultiscaleLinearTransform to create a star mask, once the image with only the smaller scales has been extracted, we may want to adjust its brightness, smooth it a bit, etc.

In short, MultiscaleLinearTransform can be used for many reasons at many stages during the processing workflow of an astroimage, whether we're trying to enhance details, tame noise, create supporting masks or even experiment by applying other processes at different scales. Once we know what we can do with MultiscaleLinearTransform and run into a situation where we could benefit from it, the first decision we often make is determining whether to use MLT or MultiscaleMedianTransform (which is the next tool in this guide). While both tools can be used successfully for similar tasks, one notable difference when applied to astronomical images is that when enhancing details, MLT suffers from ringing (Gibbs effect) due to its high-pass filter, but MultiscaleMedianTransform does not generate any ringing at all. Do note that MLT does offer an effective deringing mechanism, so ringing can be controlled with MLT. On the other hand, when applying noise reduction, MultiscaleMedianTransform may generate unwanted spotty effects in some cases that don't happen when using MultiscaleLinearTransform.

Parameters Algorithm: MultiscaleLinearTransform offers two different transforms, each with its own pros and cons.

Starlet transform: This is the à trous wavelet transform originally implemented in the ATrousWaveletTransform tool. A-trous literally means “with holes” or “with gaps,” which refers to inserting zeros between coefficients in the filter. In practical terms, the Starlet transform is recommended when we aim at stronger details in small-scale structures or more diffuse details for large-scale structures.



Multiscale linear transform: This is also a wavelet transform, but instead of “holes” it is based on multiple Gaussian filters. This transform is recommended when we look for better definition at larger scales or slightly lesser detail in small-scale objects.

The differences between each method are easy to see by applying each transform to the same image and comparing. For example, if we extract layer 6 (large scale structures) from an image using each transform, we will identify more details in the image to which we applied the multiscale linear transform, and less detail in the image processed with the starlet transform. On the other hand, extracting only a small scale (say layer 2), we may notice more definition in the starlet image, and more artifacts in the multiscale linear transform. In addition to their ability to break down an image into scales, both transforms are suitable for sharpening and noise reduction, and both can be safely applied to either linear or nonlinear images.


Layers

Dyadic: Detail layers are generated for a growing scaling sequence of powers of two. That is, the layers are generated for scales of 1, 2, 4, 8... pixels. For example, the fourth layer contains structures with detail scales between 5 and 8 pixels. This sequencing style is recommended for the Starlet algorithm or if noise thresholding is being used. Linear: When selected as Linear, the Scaling Sequence parameter is the constant difference in pixels between detail scales of two successive detail layers. Linear sequencing can be defined from one to sixteen pixel increments. For example, when Linear 1 is selected, detail layers are generated for the scaling sequence 1, 2, 3, 4... which can be useful when targeting small-scale structures. Similarly, Linear 5 would generate the sequence 1, 6, 11, 16... We may want to use this sequence to better isolate the smallest scales, although the Starlet transform using a dyadic sequence instead may perform better. Layers: This is the total number of generated detail layers. This number does not include the final residual layer, which is always generated and named “R”. We can work with up to sixteen (16) layers, which allows us to handle structures at really, really huge dimensional scales. Scaling function: Only active when the Starlet transform is selected, we use this option to select a wavelet scaling function. Peaked scaling functions such as linear interpolation work better to isolate small-scale structures. Smooth scaling functions such as B3 spline work better to isolate larger scales. Selecting the most appropriate scaling function is important because by appropriately tuning the shape and levels of the scaling function, we gain full control over how finely the different dimensional scales are separated. In general, a smooth, slowly varying scaling function works well to isolate large scales, but it may not provide enough resolution to decompose images at smaller scales. Conversely, a sharp, peak-wise scaling function may be very good at isolating small scale image features such as high-frequency noise, faint stars or tiny planetary and lunar details, but quite likely it will be useless for working at larger scales, such as the global shape of a galaxy or large Milky Way structures, for example. In PixInsight, starlet wavelet scaling functions are defined as odd-sized square kernels. Filter elements are real numbers. Most usual scaling functions are defined as 3×3 or 5×5 kernels. A kernel in this context is a square grid where discrete filter values are specified as single numeric elements.




Linear Interpolation (3): This 3x3 linear function is a good compromise for isolation of both relatively large and relatively small scales, and it is also the default scaling function. It does a better job on the first 4 layers or so.



B3 Spline (5): This 5x5 function works very well to isolate large-scale image structures. For example, if we want to enhance structures like galaxy arms or large nebular features, this function would be a good choice. However, if we want to work at smaller scales, e.g. for noise reduction purposes, or for detail enhancement of planetary, lunar or stellar images, this function is a poor choice.



Small-Scale 3~32 (3): These are 3x3 peak-wise, sharp functions that work quite well for reduction of high-frequency noise and enhancement of image structures at very small characteristic scales. Good for lunar and planetary work, for strict noise reduction tasks, and for sharpening stellar objects a bit. For deep-sky images, use this function with caution. The main difference between the 9 different Small Scale (3) functions is the strength of the central value (peak) of the 3x3 kernel: 3, 4, 5, 6, 8, 12, 16, 24 or 32.



Gaussian (5~11): These are peaked functions that work better at isolating small-scale structures, so they can be used to control a smoothing effect, among other things.

List of Layers: The window below the pull-down option to define the scaling function will show the generated layers. Individual layers can be enabled or disabled. To enable/disable a layer, double-click anywhere on the layer's row. When a layer is enabled, this is indicated by a green check mark. Disabled layers are denoted by red 'x' marks. The last layer, R, is the residual layer, that is, the layer containing all structures of scales larger than the largest of the generated layers. In addition to the layer and scale, an abbreviation of the parameters specific to each layer – if defined – is also displayed.
Detail Layer A/B

Bias: This is a real number ranging from –1 to +15. The bias parameter value defines a linear, multiplicative factor for a specific layer. Negative biases decrease the relative weight of the layer in the final processed image. Positive bias values give more relevance to the structures contained in the layer. Bias is mainly used to increase sharpness and details, usually in scales 2 to 4, as a high bias in smaller scales could accentuate noise as well. Being a multiplicative operation, we don't need to increase the bias amount by much to obtain noticeable results. Values above one or below -1 are very rarely used.


Noise Reduction

When enabled, a special smoothing process is applied to the layer's contents after biasing. Here we specify the noise reduction parameters that will be applied to each detail layer. Threshold: The higher the threshold value, the more pixels will be treated as noise for the detail scale of the wavelet layer in question. Therefore, we can increase this value to remove more structures. It is common to apply a stronger threshold to the first layer, then smaller thresholds as scales become larger. As a general practice, try to find the smallest possible value that does the job. High values could soften the details at that particular scale too much. Amount: The noise reduction amount parameter controls how much smoothing is used. A value of zero means that no noise reduction mechanism will take effect, even if Noise Reduction has been enabled. Iterations: This parameter governs how many smoothing iterations are applied. Extensive trial work is always advisable, but recursive filtering with two, three or four iterations and a relatively low Amount value is generally preferable to executing the noise reduction in one single, strong iteration.
Linear Mask

As an aid to its noise reduction capabilities, MultiscaleLinearTransform offers the possibility of using a mask when applying noise reduction, in order to attack the areas with low SNR more than those with higher SNR, something that should be done one way or another virtually every time noise reduction is applied to our data. Note that this mask does not control the strength of bias adjustments, only noise reduction. If we wanted to mask out some areas in the image from the entire MultiscaleLinearTransform process, we could simply apply such a mask to the image directly. Here, MultiscaleLinearTransform lets us define a linear mask, which has the advantage over a nonlinear mask in that it ensures the strength of the noise reduction is a function of the (inverse of the) SNR. In practical terms, what MultiscaleLinearTransform does is create a duplicate of the image to which only linear adjustments (multiplications) are made. We must enable the checkbox next to where it says “Linear Mask” to activate the mask. Preview mask: When masking for noise reduction, it is always a good idea to evaluate the mask we will be using. When this option is enabled, all other parameters in MultiscaleLinearTransform are grayed out (disabled). Then, we make sure that the Real-Time Preview window is open, and adjust the mask parameters (described next) until we see a mask that feels right. Dark pixels block noise reduction while bright pixels allow noise reduction to happen. When we're happy, we deselect this option to regain control over the rest of the MultiscaleLinearTransform parameters. Amplification: This is the number that the copy of the original image will be multiplied by in order to create the mask. The higher the value, the more protected the image will be from noise reduction. The ideal value depends heavily on the source image. On average, for most linear astroimages, values between 50 and 150 often work. Nonlinear images would require a much smaller value, often just between 0 and 5. Smoothness: Smoothing a mask for noise reduction can help reduce noise more effectively, as luminance and lightness-based masks can inherit the noise from the source image. Here, we define whether we want more or less of the smoothness effect (a convolution) by increasing or decreasing this value. The value is the standard deviation of the convolution filter being applied for the smoothness effect. Values between one and four are often adequate. Inverted mask: This option is enabled by default because noise reduction masks based on the image being processed for noise are always inverted, so that bright areas become protected (black in the mask) and dark areas receive noise reduction (white in the mask). We should not need to disable this option unless we know what we're doing.
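As a rough sketch of how such a linear mask can be built (my own reading of the parameters described above, in Python/SciPy, not the actual implementation):

    import numpy as np
    from scipy import ndimage

    def linear_mask_sketch(img, amplification=100.0, smoothness=2.0, inverted=True):
        """Build a noise-reduction mask from a linear [0,1] image."""
        mask = np.clip(img * amplification, 0.0, 1.0)         # linear (multiplicative) boost
        if smoothness > 0:
            mask = ndimage.gaussian_filter(mask, smoothness)  # standard deviation in pixels
        if inverted:
            mask = 1.0 - mask   # bright areas protected, dark areas receive noise reduction
        return mask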

K-Sigma Noise Thresholding

When activated, K-sigma noise thresholding is applied to the first four detail layers. This technique will work just as intended if we select the dyadic sequence. The higher the threshold value, the more pixels will be treated as noise at the scale of the smaller wavelet layers. This method is not as efficient at reducing noise as the per-layer Noise Reduction described earlier. However, it may be helpful in some situations when building masks or other support images. Threshold: Defines the noise threshold. This is the “k” in the k-sigma method. Anything below this value will have the noise reduction defined by the rest of the parameters applied to it. Amount: Strength of the threshold. Soft thresholding: When enabled, MultiscaleLinearTransform will apply a soft thresholding of wavelet coefficients instead of the default, harder thresholding, effectively creating a smoother final image. Use multiresolution support: Enable this option to compute the noise standard deviation of the target image for a slightly more accurate noise estimation.
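In broad strokes (my own summary, with k the Threshold parameter and sigma_j the noise estimate for layer j), the thresholding applied to a wavelet coefficient w is:

    hard:  w' = w                                 if |w| > k · sigma_j, otherwise w' = 0
    soft:  w' = sign(w) · max(|w| − k · sigma_j, 0)

with the Amount parameter blending between the original and the thresholded coefficients.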


Deringing

When we use MultiscaleLinearTransform for detail enhancement, what we are applying is essentially a high-pass filtering process. High-pass filters suffer from the Gibbs effect, which is due to the fact that a finite number of frequencies have been used to represent a signal discontinuity in the frequency domain. On images, the Gibbs effect appears as dark artifacts generated around bright image features, and bright artifacts around dark features. This is the well-known ringing problem. Ringing is an extremely annoying —and hard to solve— issue in astroimage processing. We probably have experienced this problem as black rings appearing around bright stars after some sharpening or deconvolution. However, ringing doesn't occur only around stars. In fact, we'll get ringing to some degree wherever a significant edge appears on our image and we enhance it, including borders of nebulae, galaxy arms, and planetary details, for example. In all of these cases, ringing is actually generating erroneous artifacts as a result of some limitations inherent to the numerical processing resources employed. Whether some ringing effects are admissible or not for a particular image is a matter of taste and common sense. MultiscaleLinearTransform includes a procedure to fix the ringing problem on a per-layer basis. It can be used for enhancement of any kind of image, including deep-sky and planetary. Dark: Deringing regularization strength for dark ringing artifacts. Increase to apply a stronger correction to dark ringing artifacts. The best strategy is to find the lowest value that effectively corrects the ringing, without overdoing it. Bright: This parameter works exactly like Dark but for bright ringing artifacts. Since each image is different, the right amount varies from image to image. It is recommended to start with a low value – such as 0.1 – and increase it as needed, before over-correction becomes obvious. Output deringing maps: Generate an image window for each deringing map image, as long as the corresponding amount parameters (Dark and Bright above) are nonzero. New image windows will be created for the dark and bright deringing maps, named dr_map_dark and dr_map_bright.
Large-Scale Transfer Function

MultiscaleLinearTransform lets us define a specific transfer function for the residual (R) layer. This can aid us in adjusting the overall illumination of large scale structures. This option is most useful when the residual layer is large enough so that small and medium-scale structures are not affected by it. If we only have, for example, three layers defined, the residual layer would include 8 pixel structures (or worse, 3 pixel structures if using a Linear 1 layer sequence), which may be too small to benefit from this enhancement.

Hyperbolic: A hyperbolic curve is similar to a multiplication by a positive factor slightly less than one, which usually will improve color saturation by darkening the luminance. The break point for the hyperbolic curve can be defined in the slider to the right. This is similar to a popular process dubbed Digital Development (DDP), although much more efficient and only targeting the residual layer. When enabled, we can adjust the curve break point of the hyperbolic function – the box and slider to the right of the pull-down menu. The higher the value, the darker the results, whereas smaller values will render a brighter image.



Base-10 logarithm: The base-10 logarithm function will result in a much stronger darkening of the luminance than the natural logarithm or hyperbolic functions.



Natural logarithm: The natural logarithm function will generally produce a strong darkening of the luminance.

Dynamic Range Extension

Several operations executed during a transformation – such as a bias adjustment – may result in some areas reaching the upper or lower limits of the available dynamic range. The dynamic range extension works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result. We can control both the low and high range extension values independently. Low range: If we increase the low range extension parameter, the final image will be brighter, but it will have fewer black-saturated pixels. High range: If we increase the high range extension parameter, the final image will be globally darker, but fewer white-saturated pixels will occur. Any of these parameters can be set to zero (the default setting) to disable extension at the corresponding end of the dynamic range. Target: Select whether MultiscaleLinearTransform should be applied to only the lightness, the luminance, the chrominance, or all RGB components. Layer Preview: Prior to deciding which layers should receive which adjustment and how, it may be useful to analyze the structures being identified for each layer. This option allows us to visualize the transform coefficients, which can be useful to evaluate to which layers we should apply detail enhancement and/or noise reduction, for example. Adjustments made to bias or noise reduction parameters do affect this visualization.

No layer preview: This is the default option. The transformation is applied to the image.



All Changes: All coefficients from the transform are represented in the image, with the pixel values proportionally representing their coefficients. This often produces a predominantly neutral (gray) image, similar in appearance to those obtained when applying a high-pass filter.



Increasing Pixels: Only coefficients larger than zero are represented in the image. Pixel values proportionally represent their coefficients.



Decreasing Pixels: Only coefficients smaller than zero are represented in the image. As before, pixel values proportionally represent their coefficients.

MultiscaleMedianTransform Process > MultiscaleProcessing

MultiscaleMedianTransform (often abbreviated as MMT) is a very similar process to the MultiscaleLinearTransform tool we just described, not only in the user interface and parameters, but also in the tasks it can perform, that is, breaking an image into different scales and being able to apply detail enhancements and noise reduction on a layer (scale) basis. Their main difference is in the way each tool operates internally, where, rather than using high-pass filtering like the wavelet transform to separate the different scales, MMT uses a median transformation based on nonlinear morphological median filters. These are two completely different approaches that indeed yield very different results, some of which are discussed in the documentation for the MultiscaleLinearTransform tool, right before MMT. If you have not read the introduction and “When to use...” sections about the MultiscaleLinearTransform process, please do so now, as they are also pertinent to understanding how MultiscaleMedianTransform works.


When to use MultiscaleMedianTransform

Everything we said about when to use the MultiscaleLinearTransform tool is applicable to using MultiscaleMedianTransform, including some basic guidelines about whether to use one tool or the other, so again, please review “When to use...” for the MultiscaleLinearTransform tool. MultiscaleMedianTransform can also be applied to both linear and nonlinear images.

Parameters

Algorithm: MultiscaleMedianTransform offers two different transforms, each with its own pros and cons.

Multiscale median transform: This is the single median transform based on nonlinear morphological median filters described earlier. This transform works well for most purposes, and in comparison with the other method available (Median-wavelet transform, below) this option is more suitable when our goal is to isolate high-contrast structures.



Median-wavelet transform: This transform is a combination of the median transform described above and a wavelet-based transform, where wavelets are used to define the non-significant and smooth structures in the image, while the median filter is used to define strong, high-contrast structures – each method addressing what it does best. In practice, this means that noise reduction operations can potentially yield better results.

Layers

Dyadic: Detail layers are generated for a growing scaling sequence of powers of two, that is, for scales of 1, 2, 4, 8... pixels. For example, the fourth layer contains structures with characteristic scales between 5 and 8 pixels.

Linear: When Linear is selected, the scaling sequence parameter is the constant difference in pixels between the characteristic scales of two successive detail layers. Linear sequencing can be defined in increments from one to sixteen pixels. For example, when Linear 1 is selected, detail layers are generated for the scaling sequence 1, 2, 3, ..., which can be useful when targeting small-scale structures. Similarly, Linear 5 would generate the sequence 1, 6, 11, ... We may want to use this sequence to better isolate the smallest scales, although the starlet transform may take advantage of a dyadic sequence instead.


Layers: This is the total number of generated detail layers. This number does not include the final residual layer, which is always generated and named “R”. We can work with up to sixteen (16) layers, which allows us to handle structures at very large dimensional scales.

List of Layers: The window below the pull-down option to define the scaling function shows the generated layers. Individual layers can be enabled or disabled. To enable/disable a layer, double-click anywhere on the layer's row. When a layer is enabled, this is indicated by a green check mark. Disabled layers are denoted by red 'x' marks. The last layer, R, is the residual layer, that is, the layer containing all structures of scales larger than the largest of the generated layers. In addition to the Layer and Scale, an abbreviation of the parameters specific to each layer – if defined – is also displayed.

Detail Layer A/B

Bias: This is a real number ranging from –1 to +15. The bias parameter value defines a linear, multiplicative factor for a specific layer. Negative biases decrease the relative weight of the layer in the final processed image. Positive bias values give more relevance to the structures contained in the layer. Bias is mainly used to increase sharpness and details, usually in scales 2 to 4, as a high bias in smaller scales would mostly accentuate noise. Being a multiplicative operation, we don't need to increase the bias amount by a lot to obtain noticeable results.


Noise Reduction

For each detail layer, specific noise reduction parameters can be adjusted. The noise reduction is then applied to the layer's contents after biasing.

Threshold: The higher the threshold value, the more pixels will be treated as noise and therefore attenuated. Therefore, increase this value to remove more structures. It is common to apply a stronger threshold to the first layer, then smaller thresholds as scales become larger. As a general practice, try to find the smallest possible value that does the job. Values that are too high will soften the details at that particular scale too much.

Amount: Noise reduction only happens when this parameter is nonzero (and Noise Reduction has been enabled). This parameter controls how much smoothing is applied.

Adaptive: When this parameter has a nonzero value, MultiscaleMedianTransform applies a local adaptive noise reduction (LANR) before the thresholding operation. LANR is an effective method to attack variable noise, as it uses a local adaptive filter – a filter that has a mechanism to adjust itself by adaptively updating its parameters throughout the process. In practice, after having found a good Threshold value, this is an excellent way to attack leftover small, high-contrast, loose noise structures. If we see such leftover noise, we can slightly increase the value of this parameter from its default of zero, and keep increasing it until we see satisfactory results. Use smaller increments for the smaller scales and larger increments for large-scale layers.

Linear Mask

As an aid to its noise reduction capabilities, MultiscaleMedianTransform offers the possibility of using a mask when applying noise reduction, in order to act more strongly on areas with low SNR than on those with higher SNR, just like MultiscaleLinearTransform does. Note that this mask does not control the strength of bias adjustments, only noise reduction. If we wanted to mask out some areas of the image from the entire MultiscaleMedianTransform process, we could simply apply such a mask to the image directly. Here, MultiscaleMedianTransform lets us define a linear mask, which has the advantage over a nonlinear mask that it makes sure the strength of the noise reduction is a function of the (inverse of the) SNR. In practical terms, what MultiscaleMedianTransform does is create a duplicate of the image to which only linear adjustments (a multiplication) are made. We must enable the checkbox next to where it says “Linear Mask” to activate the mask.
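To get an intuition for what such a linear mask looks like, we can build a rough manual equivalent with PixelMath on a duplicate of the linear image – this is only an illustration of the idea (amplify, clip, invert), not what MultiscaleMedianTransform does internally, and it omits the smoothing step:

~min($T * 100, 1)

Here the factor 100 plays the role of the Amplification parameter described below; higher values leave more of the image protected from noise reduction once the mask is inverted.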


Preview mask: When masking for noise reduction, it is always a good idea to evaluate the mask we will be using. When this option is enabled, all other parameters in MultiscaleMedianTransform are grayed out (disabled). Then, we make sure that the Real-Time Preview window is open, and adjust the parameters below until we see a mask that feels right. Dark pixels block noise reduction while bright pixels allow noise reduction to happen. When we're happy, we deselect this option to regain control over the rest of the MultiscaleMedianTransform parameters.

Amplification: This is the number that the copy of the original image is multiplied by in order to create the mask. The higher the value, the more protected the image is from being noise reduced. The ideal value depends heavily on the source image. On average, for most linear astroimages, values between 50 and 150 often work.

Smoothness: Smoothing a mask for noise reduction can help reduce noise more effectively, as luminance and lightness-based masks can inherit the noise from the source image. Here, we define whether we want more or less of the smoothing effect (a convolution) by increasing or decreasing this value. The value is the standard deviation of the convolution filter being applied for the smoothing effect. Values between one and four are often adequate.

Inverted mask: This option is enabled by default because noise reduction masks based on the image being processed are always inverted, so that bright areas become protected (black pixels in the mask) and dark areas receive noise reduction (white pixels in the mask). We should not need to disable this option unless we know what we're doing.

Dynamic Range Extension

Several operations executed during a transformation – such as a bias adjustment – may result in some areas reaching the upper or lower limits of the available dynamic range. The dynamic range extension works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result. We can control both the low and high range extension values independently.

Low range: If we increase the low range extension parameter, the final image will be brighter, but it will have fewer black-saturated pixels.

High range: If we increase the high range extension parameter, the final image will be globally darker, but fewer white-saturated pixels will occur.

Either parameter can be set to zero (the default setting) to disable extension at the corresponding end of the dynamic range.
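Conceptually – assuming a simple linear rescaling, which is how we can reason about this parameter – extending the range and rescaling back to [0,1] maps each pixel value v to:

v' = (v + low) / (1 + low + high)

For example, with a low extension of 0.05 and a high extension of 0.10, a black pixel (0) ends up at about 0.043 and a white pixel (1) at about 0.913, so clipped shadows are lifted and clipped highlights are pulled below saturation.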


Target: Select whether MultiscaleMedianTransform should be applied to only the lightness, the luminance, the chrominance or all RGB components.

Layer Preview: Prior to deciding which layers should receive which adjustment and how, it may be useful to analyze the structures being identified for each layer. This option allows us to visualize the transform coefficients, which can be useful to evaluate to which layers we should apply detail enhancement and/or noise reduction, for example. Adjustments made to bias or noise reduction parameters do affect this visualization.

No layer preview: This is the default option. The actual transformation is applied to the image.



All Changes: All coefficients from the transform are represented in the image, with the pixel values proportionally representing their coefficients. This often produces a predominantly gray (neutral) image, similar in appearance to those obtained when applying a high-pass filter.



Increasing Pixels: Only coefficients larger than zero are represented in the image. Pixel values proportionally represent their coefficients.



Decreasing Pixels: Only coefficients smaller than zero are represented in the image. As before, pixel values proportionally represent their coefficients.

NewImage

Process > Image

NewImage creates a new image from scratch in the active workspace.

When to use NewImage

Most of the time, new images in PixInsight are either directly loaded from a file, created as a duplicate of an already opened image, or generated as a result of applying a tool or a process, so creating a blank new image is not something often needed during a typical astroimage processing workflow. When we're creating a new image that needs to have the same size in pixels as an existing image, it may feel faster to just create a duplicate and later overwrite the pixel information with whatever we need in that image. Still, if the new image we need has a specific bit depth and number of channels that does not match any opened image, it's faster to run NewImage than to create a duplicate and then readjust the bit depth and/or number of channels.

Parameters

Image Parameters

Identifier: Enter here the identifier for the new image. If left with the default option, PixInsight will use the following naming convention: ImageXX, where XX is an incremental number: the identifier of the first image created will be Image01, the second one Image02, etc.

Sample format: Indicate the format (bit depth) for the new image.

Color Space: Select whether the new image will be a color (RGB) or a grayscale image.

Width/Height: The width and height of the new image, in pixels. The size number that appears between the Height and the Set As Active Image button is the calculated size in megabytes of the image to be created, based on the information entered here.

Channels: The number of channels in the new image. While we can set an arbitrarily large number of channels, one to four channels is the norm for most practical purposes.

Set As Active Image: When clicked, the parameters in the NewImage window will be populated with the corresponding data from the active image, if any.

Initial Values

R, G, B, A: The new image can be completely black (RGB values of zero) or created with an evenly distributed color. When the number of channels is four or more, the Alpha channel slider also becomes adjustable.
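As a quick check of the size readout mentioned above, the calculation is simply width × height × channels × bytes per sample. For example (values chosen only for illustration), a 4096 × 4096 RGB image in 32-bit floating point format takes 4096 × 4096 × 3 × 4 bytes, or about 192 MB counting 1 MB as 1024 × 1024 bytes.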


NoiseGenerator

Process > NoiseGeneration

NoiseGenerator is the standard noise generation tool in PixInsight. It is based on some of the most advanced uniform noise generator algorithms available today.

When to use NoiseGenerator

NoiseGenerator should clearly be used any time we want to add some noise to our images. But when would we want or need to add noise to an image? Adding noise to an image is, most of the time, counterproductive, especially considering that astroimage processing is mostly a battle between signal and noise. Some workflows suggest adding noise to an image after a strong convolution or noise reduction. Strong convolutions can make an image appear “soft,” and a small amount of noise is added so the image regains some granularity – which visually can be perceived as more contrast, more sharpness or, at the very least, a less washed-out look. Since PixInsight offers state-of-the-art noise reduction tools that can be adapted to almost any kind of noise, this technique is not recommended, and it will indeed produce much poorer results in the end. In practice, adding noise should mostly be done for testing or image analysis – for example, adding synthetic noise for the purpose of testing a noise reduction tool. There are always exceptions. For example, if we are about to integrate or combine two images with different noise levels while building a mosaic, we may want to have matching noise levels, which can be attempted in either direction: adding noise (with NoiseGenerator) to the cleaner image, or reducing noise in the noisier image. Sometimes we may also want to add noise to a mask, although it is rare to benefit from that during most image processing workflows and techniques. Some workflows advocate for creating a synthetic noise floor that can be integrated with an image, as a way to correct for uneven or blotchy backgrounds. Adding the noise is then accomplished via the NoiseGenerator tool. However, this is another approach that not only may suppress real faint signal, but also adds visible noise where there was no visible noise before. Ultimately, if we wish to add noise to our images for purely aesthetic reasons, NoiseGenerator is a perfect and easy tool to use.

Parameters

Amplitude: Define the strength with which the noise effect will be applied.

Distribution

Uniform: Uniform noise rarely occurs in nature, but when digitizing a signal, errors take place that are uniformly distributed. The noise variance is independent of the image intensity.

Normal: In nature, nearly everything is normally distributed. Normally distributed noise is Gaussian noise. Like uniform noise, the noise variance is independent of the image intensity.

Poisson: A source of Poisson noise is photon counting. When taking a picture in the real world, photons arrive at a certain rate. However, photons are not correlated and thus the time between photons is not always the same. The number of photons we actually collect is Poisson distributed.

Impulsional (Salt & Pepper): The distribution of impulse noise decays very slowly. This is a so-called fat-tail distribution and causes the salt-and-pepper noise effect.

Probability: This parameter is only active when the Impulsional distribution is selected. Noise is characterized by a probability distribution function (PDF). The higher this value, the higher the probability of noise being generated.
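A quick numerical illustration of why Poisson noise depends on the signal, unlike the uniform and normal options described above: the standard deviation of a Poisson process equals the square root of its mean. A pixel that collects 10,000 photons therefore fluctuates by about ±100 photons (1%), while a pixel that collects only 100 photons fluctuates by about ±10 (10%) – faint areas are proportionally much noisier.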

PhotometricColorCalibration

Process > Photometry

Color balance or color calibration is an essential step in astronomical image processing and PixInsight offers several sophisticated yet easy-to-use tools that assist us with this task. PhotometricColorCalibration (sometimes abbreviated as PCC in the context of color calibration and PixInsight) is not only the most ambitious of all of them at the moment of this writing, it's also very easy to use – and yes, the default values for the many parameters are often just fine.


The short version of what PhotometricColorCalibration does is to calibrate the color in our target image by plate-solving the image (that is, identifying known stars and other objects in the target image), then finding the photometry data of some of these known objects, calculating what color adjustments are needed to bring objects with certain photometry to display neutral colors (our white reference), and applying those adjustments. In other words, as opposed to doing a color calibration based exclusively on the colors that we see in the image, PCC's goal is to apply an absolute white balance based on known spectral information. In fact, PCC does not need the white reference to be in the image, as it can figure out the right color indexes by association. Behind the scenes, PhotometricColorCalibration does multiple tasks and calculations. In essence, it's a sequence of two PixInsight scripts (ImageSolver.js and AperturePhotometry.js) aided by some supporting tools and additional functionality. As we will see, although PhotometricColorCalibration can be used successfully with barely any modification of the default values, the tool does give us a lot of options, not only regarding how to perform the plate-solving or photometry calculations, but even more importantly, regarding what we are using as a white reference.

When to use PhotometricColorCalibration

PhotometricColorCalibration can only be used on color images, as expected. A good color calibration is performed when the data has previously been accurately calibrated – particularly flat-field corrected – and the image is still linear with a uniform illumination (no gradients). Preferably, the mean background value should be neutral, something that can be done with BackgroundNeutralization. However, a neutral background is not always a requirement. Because PhotometricColorCalibration retrieves star catalog information from remote database servers, a working Internet connection in the system running PixInsight is required.

Parameters

Process Parameters

Working mode: Select the type of target image, based on the filters used for each RGB channel.


Broadband: Select this option when color-calibrating an image captured with RGB broadband filters. This instructs PhotometricColorCalibration to set a goal where all three channels in the white reference have equal values.



Narrowband: Select this option when color-calibrating an image captured with narrowband filters. This requires specifying the wavelength and bandwidth of each of the filters assigned to each RGB channel.

White reference: Here we define the type of object we're using as our white reference, this being the most critical decision to be made. Which of the many options we select can have a noticeable impact on the results. The default of Average spiral galaxy is the recommended choice by the developers of the tool, based on the concept that such galaxies are a good representation of an unbiased white reference. Another common white reference used in astroimages is G2V stars. In reality, any white reference is a good choice as long as we have a reason for selecting it. When our goal is mostly aesthetic, both white references (average spiral galaxy and G2V stars) are generally acceptable choices.

RGB Filter Wavelength: When selecting the Narrowband working mode, these are the boxes where we enter the wavelength values of the filters used for each RGB channel.

RGB Filter Bandwidth: Also only when selecting the Narrowband working mode, these are the boxes where we enter the bandwidth values of the filters used for each RGB channel.

Database server: PCC needs to access online astronomical catalogs to find stars and other objects. We should select the server nearest to us, and only change to a different one if the server we selected is down. PCC assumes that we have a working Internet connection.


Apply color calibration: When this option is disabled and we apply PhotometricColorCalibration to an image, PhotometricColorCalibration will go through the entire process but will ultimately not apply the computed color calibration. Unless we are evaluating some of the information given in the Process Console during PhotometricColorCalibration's execution, this option should always be enabled.

Image Parameters

Right ascension: Here we enter the right ascension coordinates that loosely correspond to the center of our image. If the target image already has this information in its headers, PhotometricColorCalibration will ignore the values entered here. If we don't know the coordinates, we can first try to see if PhotometricColorCalibration can find them in the image (see Acquire from Image below) or do a search based on object names (see Search Coordinates below). The default format to indicate the R.A. coordinates is: HH MM SS.sss

That is, 2 digits for hours, a space, 2 digits for minutes (optional) and 2 digits for the seconds with fractional precision (also optional). A scalar format (HH.xxxxxxx) is also available (see Show complex angles below).

Declination: Same as the Right ascension parameter but for the declination coordinate. The format in this case is: +DD MM SS.ss

The first character is + for Northern declinations and – for Southern, with the rest being the coordinate numbers in degrees, minutes and seconds.

Show complex angles: When enabled, the above coordinates are shown as complex angular values (hh mm ss.sss, for example). When disabled, a single number with decimals (hh.xxxxxx) is used.

R.A. in time units: The default value (enabled) shows the R.A. coordinate in time units: hours, minutes and seconds. When disabled, the R.A. is displayed in angular units: degrees, arcminutes and arcseconds.

Observation date: The date the data was captured is necessary to properly do the plate-solving routine. If not fetched from the file headers, we will need to enter it manually.

Focal length: Approximate focal length in millimeters of the telescope or lens used to capture the target image. It doesn't need to be precise, but the closer to the effective focal length, the better.
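As a worked example of the complex versus scalar coordinate formats described above (the numbers are just an illustration): a right ascension of 05 35 17.3 in complex form corresponds to 5 + 35/60 + 17.3/3600 ≈ 5.58814 in scalar hours.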


Pixel size: Approximate pixel size (in microns) of the sensor used to capture the target image. If we binned during our capture, we need to indicate the size of the binned pixel; for example, when doing bin2, twice the linear size of a single bin1 pixel.

Search Coordinates: When our target image does not contain some of the above values in its header metadata and we don't know the actual coordinates, but we know the name of one of the objects in our image that isn't far from the center, we can do an online search by clicking here. This brings up a simple window with an Object: search box where we can enter the name of the object. Using the NGC numbers usually works. Less technical names such as “Fireworks galaxy” may not work, but it's always worth trying. When the object is found, the coordinates are displayed in the output box, and we can move those values to their corresponding boxes in the PhotometricColorCalibration dialog box by clicking “Get”.

Acquire from Image: Click here to populate as many Image Parameters as possible from the active image's header data. If the image has no pertinent header data, an error message box pops up letting us know.
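The focal length and pixel size together determine the approximate image scale of the data, which is what the plate solver needs as a starting point, so a quick sanity check of the values we enter can be done by hand (example numbers only):

image scale ("/pixel) ≈ 206.265 × pixel size (µm) / focal length (mm)

For instance, 3.76 µm pixels behind 530 mm of focal length give roughly 206.265 × 3.76 / 530 ≈ 1.46 arcseconds per pixel.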

Plate Solving Parameters

Automatic catalog: PhotometricColorCalibration uses VizieR astrometric catalogs to perform the plate-solve. When this option is enabled (recommended), PhotometricColorCalibration retrieves the information from a server it selects automatically based on the field of view of the image. If disabled, we must indicate in the next parameter which catalog will be used.

Astrometry catalog: If Automatic catalog is disabled, define here the astrometry catalog to use:

UCAC3: 100,766,420 objects.



PPMXL catalog: 910,469,430 objects (+/- 410 million with photometric data).



Tycho-2 catalog: 2,539,913 brightest stars.



Bright Star catalog: 9,110 objects (9,095 are stars).



Gaia DR2: 1,692,919,135 sources.

Note that the star, source and object counts may have changed since this was written. Also, more stars in a catalog does not necessarily mean we'll obtain better results.

Automatic limit magnitude: Let PhotometricColorCalibration decide which stars to use – based on their magnitude (brightness) – for calculating the plate-solve. This is the default and recommended option.


Limit magnitude: When Automatic limit magnitude is disabled, we enter here the highest star magnitude to be included in the plate-solve calculations. The higher the value, the more stars will be included. If the magnitude is too high, PhotometricColorCalibration may try to match stars in the catalog that aren't present in the image, which may lead to a failed plate-solve. If it is too low, PhotometricColorCalibration may not use enough stars for a successful plate-solve.

Distortion correction: If the target image has noticeable distortion (for example, it was captured with a short focal length refractor telescope), enabling this option will have PhotometricColorCalibration use an algorithm that can account for distortion correction. Otherwise, this option should be disabled.

Force plate solving: Plate-solving information may be included in an image's metadata if it has been plate-solved before. Enabling this option will force PhotometricColorCalibration to perform a new plate-solve even if this information already exists in the file, and replace the existing data with the new one.

Ignore existing metadata: Coordinates, instrument information, acquisition date and time and other information are often included in the target image's metadata, and PhotometricColorCalibration trusts that information by default. When Force plate solving is enabled, we can also enable this option so PhotometricColorCalibration ignores all that information and runs based exclusively on the values we enter in the Image Parameters section.

Advanced Plate Solving Parameters

Projection system: Plate-solving internally uses a projection model that gives PhotometricColorCalibration an idea about how the target image has been projected. Different optical systems project the image on the focal plane differently. The default projection, Gnomonic, is a good choice for most optical systems. Extremely wide-field images or images captured with fish-eye lenses benefit from other projections such as the Stereographic or Orthographic projections.

Log(sensitivity): This parameter defines how sensitive the star detection process is. Note that the higher the value, the less sensitive the star detection process is, and therefore fewer stars will be detected. Decrease it to detect more stars. As with most parameters in most sections of the PhotometricColorCalibration tool, the default is often spot on.

Noise reduction: When the target image is very noisy, PhotometricColorCalibration may mistake noise for stars and fail to plate-solve. Here, we can instruct PhotometricColorCalibration to remove a number of small-scale wavelet layers internally from the target image, which will effectively remove small-scale noise. The default value of 0 means no noise reduction is applied – a good starting point in most cases. A value of one will remove layer 1 (very small-scale structures), a value of two will remove layers one and two, and so on. The maximum value is five wavelet layers, which should rarely be needed.

Alignment device: This is the star matching algorithm used during image registration. Two options are offered:

Triangle similarity: This method defines triangles from detected stars and finds a match by looking for triangle similarities. It's fast and works in most cases, including mirrored images and affine transformations.



Polygons: This method uses polygons instead of triangles for finding star matches. It's more suitable for images with local distortions or scale differences but does not work with mirrored images.

Spline smoothing: When dealing with distorted images and this parameter is greater than zero, rather than using interpolating surface splines, PhotometricColorCalibration will use approximating surface splines. In practical terms, the higher the value, the closer the approximating surface will be to the reference plane of the target image. This helps the star detection algorithm better ignore non-stellar objects and structures such as asteroids, small galaxies, etc. that could otherwise be mistaken for stars.

Photometry Parameters

Photometry catalog: This is the catalog used to retrieve photometry information for the stars. As of version 1.8.8-3, the only catalog available is APASS (AAVSO Photometric All Sky Survey) DR9, with photometric data for around 62 million stars.

Automatic limit magnitude: Similar in behavior to the Automatic limit magnitude setting in the Plate Solving Parameters, this setting lets PhotometricColorCalibration decide which stars to use – based on their magnitude (brightness) – from the photometric catalog. Enabled is the default and recommended option.

Limit magnitude: When Automatic limit magnitude is disabled, we enter here the highest star magnitude to be included from the photometric catalog. The higher the value, the more stars will be included. If the magnitude is too high, PhotometricColorCalibration may try to match stars in the catalog that aren't present in the image, which may lead to a failed plate-solve. If it is too low, PhotometricColorCalibration may not use enough stars for a successful plate-solve.

Automatic aperture: PhotometricColorCalibration uses a circular area (photometric aperture) to read star fluxes in the target image. When this option is enabled, PhotometricColorCalibration automatically determines the most suitable aperture, based on the target image scale and the information in the photometry catalog. This is the default and recommended value. When disabled, the Aperture parameter becomes editable and a fixed aperture needs to be defined there.

Aperture: When Automatic aperture is disabled, here we must indicate the photometric aperture described earlier. The value is the diameter of the circular area. Note that this value is not related to star detection or matching. It simply delimits the area PhotometricColorCalibration will use to read star fluxes in the target image.

Saturation threshold: Any detected star that has at least one pixel with a value higher than this amount – in the usual [0,1] range – will be labeled as saturated (see colors and labels below under Show detected stars). This does not stop PhotometricColorCalibration from measuring the star.

PSF photometry: When enabled, PhotometricColorCalibration will use PSF photometry instead of aperture photometry. Aperture photometry is a classic method of measuring light within a specific area around the object, whereas PSF photometry determines the point spread function and mathematically fits it to each star in the target image. In practical terms, both methods are acceptable and the default (aperture photometry) is good in most cases. PSF photometry can however produce better results in very crowded star fields – since it models the radial shape of each star in the image, it can handle stars with close neighbors, unlike aperture photometry.

Show detected stars: When this option is enabled, PhotometricColorCalibration will create three images – one per channel – during execution, where the detected stars in each channel have a circle around them, with the diameter of the circle matching the aperture used. During the process, PhotometricColorCalibration may apply some flags to each star as it detects them. The color of the circle tells us a few things about each star:

Red: Stars that have another star(s) less than one pixel away (MULTIPLE flag).



Yellow: Stars that have another star within the photometric aperture, or stars whose position in the image is too different from where the catalog expects it to be (OVERLAPPED or BADPOS flags).



Cyan: Stars with a very low signal-to-noise ratio (LOWSNR flag).






Pink: Stars that have at least one pixel over the saturation threshold indicated above. (SATURATED flag).



Green: The star has no flags assigned to it.

The colors are assigned in the order they were listed – when a flag is found following that order, PhotometricColorCalibration assigns that color and stops checking for more flags. This means that when checking a star flagged with, say, MULTIPLE and LOWSNR, the MULTIPLE flag will be found first, a red circle will be drawn, and no more flags will be checked.

Show background models: If enabled, when PhotometricColorCalibration is executed, it will also create an image with the background model used for the photometric calculations in the target image. We can enable this option if we're curious or for testing or analysis purposes.

Generate graphs: When enabled, PhotometricColorCalibration will create some interactive graphs showing the linear fits between the color indices of measured stars from the catalog and from the image. A good fit is represented by having the majority of dots near the fitted line. These graphs are interactive, meaning we can mouse over the graph to get individual values for each star represented.

Background Neutralization

Lower limit: Pixels with values less than or equal to this value will be ignored when calculating the background mean values. Since the minimum allowed value is zero, black pixels are always rejected.

Upper limit: Pixels with values greater than or equal to this value will be ignored when calculating the background mean values. This parameter allows us to reject pixels with very high values. Since the maximum allowed value for this parameter is one, white pixels are always rejected.

Region of Interest: Define the rectangular area in the image to be sampled by PhotometricColorCalibration for the purpose of background evaluation. Although defining previews is quicker, this parameter comes in handy when we want to reuse the process in the future – say, when creating an instance of it.


PixelMath

Process > PixelMath

PixInsight's PixelMath is an extremely versatile and powerful tool to perform pixel-level arithmetic and logical operations between images. This reference manual does not aim to document PixelMath syntax or every available function or operator. Instead, it focuses on explaining the user interface and available options. For more detailed and complete information on those topics, review the chapter about PixelMath in the Image Processing section of the book.

When to use PixelMath

PixelMath is one of those processes that may be needed at any time during the workflow of astronomical images, for any number of reasons. We can use PixelMath any time we need to apply a process to a single image or between two or more images that only involves arithmetic and/or logical operations, including the many functions available in PixelMath. These operations can be executed as a PixelMath expression or sequence of expressions. Some of the most common uses of PixelMath in astronomical image processing are:

Analysis: Analytically driven reasons can lead us to having to perform calculations with one or more images that can be resolved via PixelMath. From simple tasks such as subtracting one image from another for analyzing differences, to complex (or not so complex) calculations for measuring data, performance, quality, etc.



Image blending: This is a very popular technique done with PixelMath, where we write simple expressions to combine two or more images, giving each a different weight. For example, we can use PixelMath literally any time after applying a process to our main image with the purpose of blending the before and after instances of the image, modulating how much of the effect is applied – say, blending 75% from the after image with 25% from the before image, with a simple expression such as:

(0.25 * before_image) + (0.75 * after_image)



Advanced blending: Blending operations are not limited to simple transparency cases like the one we just described. In fact, unlike other applications that offer a limited set of “blending modes,” PixelMath allows us to combine two or more images in any way we like, literally, not being limited to a preset of popular blending modes such as max/min, screen, color dodge, linear burn, etc.



Mask manipulation: Whether combining the maximum (or minimum) values of two different masks, subtracting one mask from another, or creating a new mask based on logical operations over existing masks (to name a few popular uses), PixelMath is extremely versatile when it comes to manipulating masks in many different ways.



Combining multiple filter images into a color image: For example, tricolor narrowband combinations have been performed in PixInsight with PixelMath for many years. Although PixInsight offers other tools specifically designed for this task, such as the script NBRGBCombination, PixelMath allows us to perform this combination arithmetically, based on any criteria we decide to use – see the example right after this list. From simply assigning each filter to a channel, to applying weighting to the filter contribution, blending two or more filters in a single channel or virtually any criteria we choose, including combining any number of filters to produce an RGB image.
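As a minimal sketch of the last two items – the image identifiers (Ha, OIII, star_mask, range_mask) and the weights are hypothetical, chosen only for illustration – we could disable Use a single RGB/K expression and enter one expression per channel for a bicolor narrowband combination:

R: Ha
G: 0.4*Ha + 0.6*OIII
B: OIII

Likewise, a simple mask manipulation could keep the brightest of two masks at every pixel with max(star_mask, range_mask).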

This is not, at all, an exhaustive list. There are many, many more uses of PixelMath, some also fairly common, while others happen only in very specific cases demanding very specific solutions. Still, chances are we'll run into one of the reasons listed above before we run into other, less common PixelMath applications.

Parameters

Expressions

RGB/K: For grayscale images, enter here the expressions for the gray channel. For RGB images, enter here the expressions for the red channel (when the Use a single RGB/K expression option is disabled) or for all RGB channels (when the Use a single RGB/K expression option is enabled).


G: If the Use a single RGB/K expression option is disabled, enter here the expressions that will affect the green channel.

B: Same as above but for the blue channel.

A: Enter here the expressions that will affect the alpha channel.

Symbols: We can use symbols in our expressions. Here is where we define the symbols, their values, etc.

Use a single RGB/K expression: If the operations we want to execute should affect all RGB channels of a color image, we enable this option and enter the expressions in the RGB/K text box. If we want to perform different operations on different channels, we deselect this option.

Expression Editor: Each of the parameters defined above can also be edited by using the Expression Editor, explained below, which we can access by clicking on this button.
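For instance – reusing the hypothetical before_image and after_image identifiers from the blending example above – we could declare a symbol in the Symbols field:

k = 0.25

and then write the blend as k*before_image + (1 - k)*after_image, so the blend ratio can be changed in a single place instead of editing every literal number in the expression.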

Destination

Generate output: When enabled, PixelMath will actually execute the expressions and apply them to the target image – whether the target image is an existing image or a new one. This is the default value and PixelMath's normal behavior. This option can be disabled when we are only interested in checking the final values of global variables after PixelMath's execution, but not in actually modifying the image. Global variables are a kind of variable introduced in PixInsight 1.8.0 that preserve their value as PixelMath runs the defined expressions on every pixel, and whose values are reported on the console when PixelMath ends. One classic example would be an expression that counts the number of pixels in an image meeting certain criteria – no modification to the image is needed, we're just interested in the final results. In such cases, we can disable this option.

Single threaded: This option relates to PixInsight's ability to run multiple tasks in parallel. PixelMath always takes advantage of this multi-threaded parallel execution (unless Enable parallel processing is disabled in PixInsight's Global Preferences), which makes sense for nearly every expression entered in PixelMath except for a very few exceptions where parallel processing just won't work. Identifying such exceptions is not difficult, however. Whether Single threaded should be enabled may be easier to notice visually in the results than in the expression itself. When an expression misbehaves due to parallel execution, the target image should look as if the expression had been applied partially for as many times as the number of CPU cores/threads seen by PixInsight. When this happens, enabling this option will fix the problem.

Use 64-bit working space: If this option is enabled, PixelMath will create and use 64-bit floating point working images to accumulate and store any intermediate results.

Rescale result: When this option is enabled, resulting pixels are rescaled to the specified range of values (the next two bound parameters). When the Rescale check box is disabled, resulting pixels are truncated to the normalized [0,1] range and the two edit controls for range boundaries are disabled. In simple terms, rescaling avoids out-of-range pixels.

Lower/Upper bound: The rescaling range is defined by two boundaries, such as the normalized [0,1] range, that cannot be exceeded. Usually we want to keep these boundaries within the default [0,1] range.

Replace target image: When we apply PixelMath to an image, PixelMath will output its results to the same image.

Create new image: The results from PixelMath will be applied to a newly created image. The next options are only available if Create new image is selected.

Image Id: The image identifier (name) of the new image.

Image width/height: Width and height of the new image. Left at the default value, the new image will have the same height and width as the target image (the image to which we applied PixelMath).

Color Space: Define the color space of the new image. Valid options are Same as target, RGB Color and Grayscale. Optionally, we can select the Alpha channel check box if we want the new image to also have an alpha channel.

Sample format: We can select Same as target so the new image will have the same format (bit depth) as the target image. Otherwise, specify a different format.


PixelMath Expression Editor

The purpose of the PixelMath Expression Editor is to make it easy to write PixelMath expressions, by putting at our disposal all image IDs, symbols, functions, operators and even punctuators, as well as a parsing function to evaluate our PixelMath expressions. The PixelMath Expression Editor is a modal dialog, that is, we must OK or Cancel this dialog box before we can do anything else with the instance of PixInsight from where this dialog was invoked. The PixelMath Expression Editor is divided into four different areas:

Expression Editor: Here is where we write our expression for the active channel tab. We can either type the expression – which we can also do from the PixelMath dialog – or use the aid of the items listed in the Images, Symbols and Syntax areas.

Reference Lists: These lists allow us to access any available image identifier, symbols, functions and even operators with the click of a button. The lists are divided into three sections:

Images: The Images area contains a list of all available image identifiers. When clicking on a single identifier, some of the image's basic information is displayed in the Information window.



Symbols: Likewise, the Symbols area contains a list of all defined symbols. When clicking on a symbol, its value and variable type are displayed in the Information window.



Syntax: This area offers all available functions, operators, punctuators and symbol definition functions available in PixelMath. When clicking on any item in these lists, the Information window will display help and syntax information about the selected item.




When we double-click on one of the items in any of the aforementioned lists, that item is automatically “written” into the Expression Editor at the cursor's current position. This is useful if we don't remember the exact spelling of the images, symbols or functions, or if we just don't feel like typing.

Evaluation Window: We can evaluate the expression in the editing area by clicking on the Parse button; PixelMath will evaluate the expression without applying it to the image, displaying the results here. This is very helpful to verify whether the syntax is correct, whether the expression is doing what we're expecting, etc.

RGBWorkingSpace

Process > ColorSpaces

RGBWorkingSpace is not related to ICC profiling or color space management. Color management is used to achieve consistent color when images are displayed on different devices. RGBWorkingSpace exists purely for image processing purposes within PixInsight, more specifically the weight each of the R, G and B channels has when performing certain operations, in particular (but not only) the extraction of luminance from a color image or vice versa. When PixInsight needs to calculate separate luminance and chrominance values, it uses an RGB Working Space (RGBWS). Each image view can use its own local RGBWS. For images that don't have their own RGBWS (the default), a global RGBWS is used.

When to use RGBWorkingSpace

RGBWorkingSpace should be used any time prior to extracting the luminance from a color image, as well as any time we're about to run a process that either does an RGB to CIE L*a*b* conversion internally, or requires the RGB working space of the target image to be linear (usually implying a CIE XYZ conversion). In the first case (RGB to CIE L*a*b* conversion), we use RGBWorkingSpace to make sure we assign appropriate weights (luminance coefficients) to each of the RGB channels. Which luminance coefficients are appropriate depends on the target image and our goals.


Generally, the default values (based on the sRGB color space) are okay, which means the use of RGBWorkingSpace is not mandatory if we're okay with a certain degree of inaccuracy during these operations that use the RGBWS. In those cases, the default weights tend to mimic our visual response to colors, with the highest weight value for green (our eyes are more sensitive to green) and a very low value for blue. However, if we want to be specifically rigorous, we can assign other coefficients to the three channels. Some classic workflows suggest assigning equal values to all channels (all channels set to “1”) and going with that for the rest of the processing session, this being a more unbiased and neutral representation than the default sRGB values, and it works well with most OSC CCD and DSLR cameras. Some other times, we may adjust the luminance coefficients based on our interpretation of which colors (channels) should have a stronger dominance. For example, if we want to target red structures more predominantly, prior to extracting the luminance we may want to adjust the luminance coefficients so that green isn't nearly as relevant, while increasing the weight for the red channel. Or simply reduce considerably the value for green, while setting strong red and blue weights, to better mimic color “importance” in deep-sky images where green isn't as dominant. No criterion is wrong as long as there is a good reason for it.

The other situation we mentioned is when we're about to perform a process requiring a linear RGB working space, implying an RGB to CIE XYZ conversion. These cases may be harder to identify if we don't know how the processes work internally, but if we know that our process will be doing that conversion internally, in addition to any RGB weights, we would set a Gamma value of one so the RGBWS is linear as well. Some such cases are when applying Deconvolution to the luminance of a color image, when we're about to use the RangeSelection tool on a color image, or even SCNR with the Lightness option enabled.
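In essence, the luminance coefficients are just the weights of a weighted sum of the three channels. As a rough illustration – the weights below are hypothetical, not a recommendation – a custom-weighted pseudo-luminance could also be built directly with PixelMath, applied to the color image with Create new image enabled and the color space set to Grayscale:

0.5*$T[0] + 0.2*$T[1] + 0.3*$T[2]

Adjusting the RGBWS coefficients and then extracting the luminance achieves a similar weighting while keeping the rest of the workflow (and any internal CIE conversions) consistent.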


Parameters

Image selection: When an image is selected here, its luminance coefficients and chromaticity coordinate values are shown in their corresponding text boxes. We use this option to look up or bring those values into the RGBWorkingSpace dialog box.

Luminance coefficients (D50): These are the relative weights of red, green and blue used to calculate the luminance of a pixel. D50 refers to CIE illuminant D50, a well-known standard illuminant mostly constructed to represent natural daylight (the “D” in D50 comes from “daylight”) with a color temperature of 5003K.

Chromaticity coordinates (D50): The x and y coordinates of the “pure” red, green and blue primaries in the RGB chromaticity diagram (a 2D mapping of the 3D RGB color space). These primaries are the colorants of the RGB color space.

Gamma: The gamma value is used to linearize RGB components when doing a linear color conversion, like CIE L*a*b* to separate luminance and chrominance. Setting this parameter to 1 guarantees the linearity of the transformation when working with linear images. The default 2.20 matches the average gamma value of the sRGB color space. Other gamma values are usually not needed, particularly very high or low values.

Use sRGB Gamma Function: When enabled, the Gamma slider is disabled, and gamma is set to 2.20, the average gamma value of the sRGB color space.

Load default RGBWS: Clicking on this button populates RGBWorkingSpace's parameters with the values for the default RGBWS, which is sRGB.

Load global RGBWS: Clicking on this button populates RGBWorkingSpace's parameters with the values for the global RGBWS. The global RGBWS can be set by executing RGBWorkingSpace globally, that is, by clicking on the small “Apply Global” circle icon. After that, any time we need to retrieve those values, we can click on this button – until we Apply Global again with other values. If no global RGBWS values have been set by the operator, the default sRGB values are used.

Normalize Luminance Coefficients: All luminance coefficients are normalized – the sum of all three coefficients equals 1 – before being applied. If we modify the luminance coefficient values, we can click on this button and the normalized values will be adjusted accordingly.
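For example (hypothetical weights): entering luminance coefficients of 1.0, 0.6 and 0.8 gives a sum of 2.4, so after normalization they are applied as approximately 0.417, 0.250 and 0.333 – it is the relative proportions that matter, not the absolute numbers.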


Apply Global RGBWS: Disabled by default; when we enable this option, the rest of the parameters become unavailable and any execution of RGBWorkingSpace (whether as a new instance or a global execution) will set the global RGBWS values.

RangeSelection

Process > MaskGeneration

RangeSelection creates a grayscale image based on a target image, with controllable settings and pixel value limits. We first define a range of valid pixel values and RangeSelection then creates a new image where pixels in the target image within this range are rendered white, and pixels outside the range are rendered black. In addition, RangeSelection offers fuzziness and smoothness controls to further customize the results, as well as a few other selectors. RangeSelection does not modify the target image. It creates a new image named range_mask (or range_mask1, range_mask2, etc. if those names are already assigned to other opened images).

When to use RangeSelection

RangeSelection is almost exclusively used to create luminance/lightness-based masks. Rather than extracting the luminance or lightness component and adjusting it via HistogramTransformation, CurvesTransformation, Convolution and/or other tools, activating the Real-Time Preview window with RangeSelection and adjusting the parameters can yield very effective results very quickly. The default limit values are definitely meant to be adjusted, hence the recommended use of the Real-Time Preview window.

Parameters

Lower limit: Pixels with values smaller than this value will be set to zero (black). This value cannot be greater than the upper limit, defined below.

Upper limit: Pixels with values greater than this value will be set to black (zero).


Link range limits: When enabled, it locks the interval between both limits, while still allowing us to move them higher or lower. In other words, if we increase the value of one of the limits, the value of the other limit increases simultaneously by the same amount, and the same happens if we lower either value.

Fuzziness: Increasing this parameter gradually softens the transitions at the edges of the black and white regions by assigning intermediate intensities to the pixels in these areas. At the default value of zero (and also zero Smoothness), the image is purely a binary image of only black and white pixels.

Smoothness: Increase this parameter to gradually apply a smoothing effect to the image. The value of this parameter is the standard deviation of the Gaussian filter used in the convolution to soften the mask.

Screening: When this option is disabled, RangeSelection generates a binary mask (which can then be adjusted via the Fuzziness and Smoothness parameters). When enabled, RangeSelection creates a special type of image where the pixels within the range limits (the white pixels) are replaced with the actual pixels of the target image instead of white.

Lightness: When this option is enabled, RangeSelection uses the lightness component (CIE L*) of the target image to create the output image. When disabled, RangeSelection uses the RGB/K components of the target image instead.

Invert: When enabled, RangeSelection inverts the output image. We would do this when we want to swap which pixels will be protected or not in the mask we're generating.
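The core of the binary selection – before fuzziness, smoothness, screening or inversion are applied – can be thought of as the following PixelMath expression, a minimal sketch only, using hypothetical symbols low = 0.1, high = 0.9 defined in the Symbols field:

iif($T >= low && $T <= high, 1, 0)

RangeSelection simply does this for us interactively, and then layers the softening and screening options on top.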

ReadoutOptions

Process > Global

ReadoutOptions offers a number of parameters to define how PixInsight reads, calculates and presents numerical pixel information from images. This information is generated dynamically when the user moves the mouse or another pointing device over an image. In addition to the data displayed on the Readout toolbar, usually located at the bottom of PixInsight's main window, readouts can be shown in popup windows (readout previews) when the user clicks on the image and holds the mouse button pressed for about half a second. Readout previews are a magnified rendition of the area surrounding the cursor location. Also, when we press the mouse over an image, the readout values can be sent to processes that request them, as happens with HistogramTransformation and other tools.

When to use ReadoutOptions

Readouts are useful for many different reasons, and how each user would like to customize how this information is read and presented can be driven by a large number of factors, even personal preferences. In addition to temporary needs, such as the processing of data that may be better understood for some purpose using a specific range or precision, if a user has never customized the readout options of their PixInsight installation (or it has been a while), it is recommended to at least understand the different options available. For the new readout options to take effect, we must execute the process globally (click on Apply Global).

Parameters

Data / Color space: When PixInsight reads a pixel (or more, depending on the probe size, defined below), it can display its value in many different data modes/color spaces:

RGB/K: RGB components or grayscale.



RGB + L: RGB components + CIE L* (lightness).






RGB + Y: RGB components + CIE Y (luminance).



CIE XYZ: CIE XYZ unit vectors.



CIE L*a*b*: CIE L*a*b* normalized components.



CIE L*c*h*: CIE L*c*h* normalized components.



HSV: HSV components.



HSI: HSI components.

Calculation mode: How readouts are calculated when the probe size has more than one pixel.

Average: Use the average/mean of all pixels in the probe.



Median: Use the median of all pixels in the probe.



Minimum: Uses the pixel with the minimum value in the probe.



Maximum: Uses the pixel with the maximum value in the probe.

Probe size in pixels: Sets the readout probe size in pixels. The default Single Pixel gives us readout accuracy at the pixel level, which is often desirable, although in some cases a larger probe may be useful.

Include alpha channel: When enabled, alpha channel values are included in pixel readouts.

Include mask channel: When enabled, mask channel values are included in pixel readouts if a mask is being applied to an image.

Show readout preview: When enabled, a readout preview is shown next to the cursor when we click the mouse and hold it for about half a second. If disabled, readout previews are not shown.

Preview center hairlines: When enabled, a cross-hair center line is drawn on readout previews.

Preview size in pixels: Define the size in pixels of readout previews. The specified size must be an odd integer in the range [15,127].

Preview zoom factor: Set the zoom factor for generating readout previews. 1:1 to 16:1 factors are available.


Normalized real range – resolution: Enable the checkbox to define floating point real pixel readouts in the [0,1] range. Then, select the desired decimal precision (resolution) from the pull-down menu.

Binary integer range – bit count: Enable the checkbox to the right of the pull-down menu to define integer pixel readouts. Then, select the desired range from the pull-down menu. In this menu, the number on the left indicates how many bits are used to define one integer value (bit count or bit depth), while the number in parentheses indicates the number of possible integer values.

Arbitrary integer range – max. value: Enable the checkbox to define integer pixel readouts when we need a range not offered in the previous option (Binary integer range). In this case, readout values will be rescaled to integers ranging from 0 to the number we enter here.

Equatorial coordinates: When enabled, equatorial coordinates will be displayed in the readout preview when the image has a valid astrometric solution.

Ecliptic coordinates: When enabled, ecliptic coordinates will be displayed in the readout preview when the image has a valid astrometric solution.

Galactic coordinates: When enabled, galactic coordinates will be displayed in the readout preview, also when the image has a valid astrometric solution.

Coordinate items: Select the number of items (precision) used in sexagesimal representations of celestial spherical coordinates, that is, degrees or hours, minutes (optional) and seconds (also optional).

Coordinate precision: Set the number of decimal digits – from 0 to 8 – to be included in the sexagesimal representations of celestial spherical coordinates.

Broadcast readouts: When enabled, the readout engine will send readout information to active tools and processes that can use readout data. When disabled, those processes will not receive any readout information. It is recommended to leave this option enabled. Some of the processes that can receive readout information are HistogramTransformation, ColorSaturation, CurvesTransformation, ScreenTransferFunction and others.

Load Current Readout Options: If we open ReadoutOptions and make some changes but haven't executed Apply Global yet, clicking this button will populate all parameters with the readout options that are still active.
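As a quick illustration of how the integer ranges relate to the normalized readouts: a pixel that reads 0.25 in the normalized [0,1] range corresponds to about 64 in an 8-bit range (0.25 × 255) and about 16384 in a 16-bit range (0.25 × 65535).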


Resample Process > Geometry

The Resample process implements a versatile resampling procedure in PixInsight to modify the size of an image. Given a source image, Resample generates a target image of the specified dimensions with the help of a pixel interpolation algorithm. We can specify the image dimensions in pixels and other units, as well as the resolution, in the same resampling operation. Just like in the IntegerResample process, the selected image (the first parameter at the top) is used only to show the current dimensions and, as we make adjustments, how the dimensions would change in different units; the process is not necessarily applied to that image. Again, while it's useful to select the image we're planning to resample so its dimensions are placed in the right boxes, we can define all the parameters manually, without any image selected, and apply. We must always keep in mind that the way Resample does the resizing is based on the options selected in the Process Mode section (explained below).

When to use Resample

Resizing an image always degrades the data to some extent. Even when reducing it by an integer factor (dividing it by 2, for example), several pixels have to be combined into one. When it comes to more arbitrary resizing, whether downsampling or upsampling (reducing or enlarging) an image by anything other than integer factors like x2, x3, etc., interpolation not only modifies the values of pixels but can also generate artifacts in some cases.


For that reason, Resample should be used mainly when resizing an image is either absolutely necessary or serves some particular practical purpose. Most of the time, Resample is used to prepare a final image for presentation, to meet guidelines for submitting the image to some event or repository, or to force an image to match the dimensions of another image.

Parameters

Dimensions

The Dimensions panel is where we can inspect and modify image dimensions in pixels, centimeters and inches. We can also specify relative sizes as percentages of the original image dimensions.
Width/Height: If we select an image/view in the view selection list at the top, the Original px values will be populated with the width and height of the image. We can, however, introduce our own values if we like. The rest of the values in this section (Target px, %, cm and inch) are where we define the dimensions of the target image, and we can do that in pixels, percentage, centimeters or inches, whichever is convenient for us.
Preserve Aspect Ratio: When enabled, if we enter an amount in any of the three (px, cm or inch) Target width boxes, the height values will be automatically updated to maintain a final image that has the same aspect ratio as the source. Likewise, if we enter a value in any of the target height boxes, the width boxes will be automatically updated to maintain a final image of the same aspect ratio as the source image.
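As a hypothetical example of how Preserve Aspect Ratio behaves: with a 6000×4000 pixel source image, entering a target width of 3000 px would automatically set the target height to

$$3000 \times \frac{4000}{6000} = 2000 \ \text{px},$$

keeping the 3:2 aspect ratio of the source.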


Algorithm: Usually it's best to leave this as “Auto” unless we have a special reason to force PixInsight to use one of the available algorithms. In the Auto mode, Bicubic spline is used for upsampling ratios, and also for slight downsampling ratios, when the Mitchell-Netravali filters cannot be properly sampled (filter kernels smaller than 5x5 elements). Mitchell-Netravali cubic filters are used for the rest of the downsampling operations. If we don't select the Auto mode, it may be useful to know that when downscaling an image, the nearest neighbor and bilinear algorithms tend to be the poorest performers, followed by Bicubic spline and Bicubic B-spline, with the Mitchell-Netravali and Catmull-Rom algorithms often providing very good results. When upscaling an image, Bicubic spline usually gives the best results. The Mitchell-Netravali interpolation filter can be used to achieve higher smoothness in the upsampled result, which can be desirable in some applications.
Clamping threshold: Only available for the Bicubic Spline, Lanczos and Auto algorithms. These algorithms sometimes produce ringing artifacts, and to compensate for this side effect, this clamping mechanism allows us to avoid the negative interpolated values that cause the ringing. The lower the clamping threshold, the more aggressively the ringing is attacked, at the expense of detail preservation and aliasing.
Smoothness: This parameter is only available if the Mitchell-Netravali, Catmull-Rom Spline, Cubic B-Spline or Auto algorithm has been selected, and it allows us to increase or decrease the smoothness level of the interpolation.

Resolution

Horizontal/Vertical: Define the horizontal and vertical resolution of the target image, in pixels per inch or pixels per centimeter (see below).
Centimeters/Inches: Select Centimeters if the resolution entered in the Horizontal and Vertical parameters is expressed in pixels per centimeter; select Inches if it's in pixels per inch.
Force Resolution: When selected, this option also changes the resolution and resolution unit of the target image. To define a Resample instance that sets resolution parameters only, without actually resampling pixels, select this option and the Relative resize mode, with both percentage dimensions at 100%.

Process Mode

Resample Mode: The resampling operation is quite simple by itself, but the precise interpretation of the resampling parameters is not obvious. We have the following possibilities to define how a resampling operation is applied:
• Relative Resize: The default option. Only the percentage dimensions are taken into account; the rest of the parameters are ignored. The resampling process will apply the exact rescaling defined by the width and height values as percentages.
• Absolute Dimensions in Pixels / Centimeters / Inches: The resampled image will be forced to have the width and height in pixels, centimeters or inches, depending on the specific option selected, as given by the corresponding parameters. The Preserve aspect ratio setting will be ignored.
• Force area in pixels, keep aspect ratio: The resampled image will have enough pixels to fill the specified area. The Preserve aspect ratio setting in the Dimensions section will be ignored, since the resampled image will keep the exact aspect ratio defined by the width and height parameters.

Absolute mode: When one of the absolute modes is selected as the Resample Mode, here we define whether both the width and height dimensions are forced to the defined values, or just one of the dimensions (height or width) is forced to the defined value, while the other one is resized to a value that preserves the original aspect ratio of the image.

Rescale Process > IntensityTransformations

The Rescale process does not rescale the size of an image (Resample does that) but the values of all pixels in an image, so as to use the entire available dynamic range. It does so by subtracting the minimum pixel value in the image from each pixel and dividing the result by the difference between the maximum and minimum pixel values in the image. In other words, if the minimum pixel value in an image is 0.3 (remember, PixInsight defines the dynamic range of an image from zero to one) and the maximum value is 0.7, the Rescale process will recalculate all pixels in the image so that 0.3 becomes 0 and 0.7 becomes 1. Of course, all values in between get recalculated accordingly.
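Expressed as a formula, each pixel value x is remapped to

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}},$$

so, in the example above (x_min = 0.3, x_max = 0.7), a pixel of 0.5 becomes (0.5 − 0.3)/(0.7 − 0.3) = 0.5.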

When to use Rescale

Rescale may sometimes be needed when importing images created by other applications that may not follow certain file format specifications as strictly, or in the same way, as PixInsight does, particularly when it comes to distributing the image's entire dynamic range across the defined bit depth. These differences are often easy to detect because such linear images appear as if they had a high pedestal, or an overall light gray appearance. In those cases, Rescale may be able to make the image usable in PixInsight while maintaining linearity.


The opposite situation may also arise, where an image's pixel values need to be rescaled in order for the image to be properly decoded by some applications that may have problems reading the image otherwise (usually because of a limitation in those applications). In some rare cases, we may want to redistribute the pixel values across the available dynamic range differently for specific analytical or testing purposes. This could be done on actual image data, calibration files, masks, etc., depending on what we're after. Other than that, Rescale is seldom used unless a particular need arises that requires us to rescale the values of an image to a certain range. For more elaborate custom rescaling, the PixelMath process offers more than enough flexibility.

Parameters

RGB/K: Apply the rescaling to all RGB channels as a whole.
RGB/K, individual channels: Apply the rescaling to each RGB channel individually.
CIE L* (lightness): Apply the rescaling function only to the lightness component from CIE L*a*b*, once the image has been converted to that color space.
CIE Y (luminance): Apply the rescaling function only to the luminance component from CIE XYZ, once the image has been converted to that color space.

RestorationFilter Process > Deconvolution

The RestorationFilter process allows us to select between the Wiener and Constrained Least Squares algorithms to perform one-step, frequency-domain image restoration filtering. The Wiener and Constrained least squares algorithms are well described in the literature. They are ideal for the restoration of lunar and planetary images, as well as for general-purpose image restoration.


When to use RestorationFilter

Both the Wiener and the Constrained least squares filters, which are at the heart of the RestorationFilter tool, have been around for a long time, and both are solid options for recovering details from an image. As previously stated, in astrophotography they're more often used for deconvolution of high-resolution lunar and planetary images than for deep-sky images. In any case, PixInsight offers several tools that are more effective at image restoration and detail enhancement, such as Deconvolution, MultiscaleLinearTransform and others.

Parameters

PSF

RestorationFilter provides three ways to define the type of PSF for the deconvolution algorithms: parametric, motion blur, and external.

Parametric PSF
This is the most commonly used method, as it attempts to deconvolve the most common convolution distortions found in astronomical images, such as those caused by atmospheric turbulence. It defines a convolution, usually a Gaussian function, via parameters.
StdDev: Standard deviation in pixels of the low-pass filter. Increasing the value of this parameter produces a larger filter, making the convolution filter act at larger scales.
Shape: Define the filter function distribution or, in other words, the peakedness or flatness of the PSF profile. A value of 2 produces a classic Gaussian convolution. Values smaller than 2 produce a sharper distribution, while values larger than 2 produce a flatter distribution.
Aspect ratio: Modify the aspect ratio of the function vertically. When this value is different from one, the Rotation parameter becomes adjustable.
Rotation: Rotation angle of the distorted PSF in degrees. It is only active when the value of the Aspect ratio is smaller than one.

Motion Blur PSF
Motion Blur PSF can be useful in cases where we have tracking errors parallel to the x or y axis of the chip, or similar situations that generate unidirectional motion blur distortions.
Length: Value of the PSF motion length, in pixels.
Angle: Rotation angle of the PSF motion length, in degrees.

External PSF
We use this option when we want to define the PSF based on an existing image. In theory, the image of a star is the best option, but in practice the results may not be good. Experiment by modifying the image of the star with morphological filters, curves or the histogram. Also, it is important that the star is very well centered on the image to be used as PSF, or the deconvolved image will be shifted.
View Identifier: The view (image) selected to define the external PSF.
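For reference, with the default Shape value of 2 the parametric PSF above is simply the familiar circular Gaussian (other Shape values change the exponent, making the profile sharper or flatter; the exact internal parameterization is not documented here):

$$\mathrm{PSF}(r) \propto \exp\!\left(-\frac{r^{2}}{2\sigma^{2}}\right), \qquad \sigma = \text{StdDev (pixels)}.$$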

Noise Estimation

K: Decrease this value to increase the filtering strength. Next to this value there are two other parameters that we can use to apply fine and coarse adjustments to the noise estimation, respectively.
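To see the role K plays, a common textbook form of the Wiener restoration filter (shown here only as background; the exact implementation in RestorationFilter may differ) is

$$\hat{F}(u,v) = \frac{H^{*}(u,v)}{\lvert H(u,v)\rvert^{2} + K}\, G(u,v),$$

where G is the Fourier transform of the degraded image, H that of the PSF, and K a constant related to the noise-to-signal power ratio. As K approaches zero the expression approaches a pure inverse filter, which is why decreasing K increases the filtering strength (and the noise amplification).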

Filter Parameters

Algorithm: Select which algorithm we want to use for the restoration filtering:
• Wiener Filtering: Wiener filtering (minimizing the mean square error) is commonly used to restore linearly degraded images. However, to obtain optimal results, the power spectra of the ideal image and of the noise should be known, which is unlikely.
• Constrained least squares filtering: Constrained least squares filtering can easily outperform Wiener filtering, especially in the presence of medium and high amounts of noise. Images may be noisier than when the Wiener filter is used, but the restoration tends to be sharper. In low-noise cases, both the Wiener and the Constrained least squares filters tend to generate similar results.


Amount: Filtering amount. A value of one will apply the filter at its maximum strength. Smaller values will decrease the strength of the filtering.
Target: Define the component(s) to which the restoration filter is applied:
• Lightness (CIE L*): Apply the filter to the lightness component (from the CIE L*a*b* color space) of the target image.
• Luminance (CIE Y): Apply the filter to the luminance component (from the CIE XYZ color space) of the target image. Enable this option to deconvolve the luminance of a linear RGB color image (no separate luminance), such as those created by DSLR and OSC CCD cameras. In all cases, a linear RGBWS must be used, that is, the RGBWS's gamma value must be equal to one.
• RGB/K components: Apply the filter to each of the RGB components of the target image individually. Selecting this option may help preserve the original colors when the process is applied to a color image.

Deringing

For detailed information about ringing artifacts and deringing, please review the documentation on the topic in MultiscaleLinearTransform.
Dark: Deringing regularization strength for dark ringing artifacts. Increase it to apply a stronger correction to dark ringing artifacts. The best strategy is to find the lowest value that effectively corrects the ringing, without overdoing it.
Bright: Deringing regularization strength for bright ringing artifacts. It works exactly like Dark, but for bright ringing artifacts. Since each image is different, the right amount varies from image to image. It is recommended to start with a low value – such as 0.1 – and increase it as needed, before over-correction becomes obvious.
Output deringing maps: Generate an image window for each deringing map image. New image windows will be created for the dark and bright deringing maps if the corresponding amount parameters are nonzero.

Dynamic Range Extension

The dynamic range extension works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result. Use the following two parameters to define different dynamic range limits. We can control both the low and high range extension values independently.
Low Range: Shadows dynamic range extension.
High Range: Highlights dynamic range extension.
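A minimal sketch of what this implies, assuming the extension simply remaps the extended interval [−Low, 1+High] back to [0,1] (a plausible reading of the description above, not a documented formula):

$$v' = \frac{v + L}{1 + L + H},$$

where L and H are the Low Range and High Range values, respectively.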

Rotation Process > Geometry

The Rotation process is used to rotate an opened image (a view) by any angle.

When to use Rotation

Rotation can be used any time we need to rotate an image by an arbitrary angle. It is, however, advisable not to rotate images more times than really needed, as every rotation degrades the data due to the interpolation that takes place (except for fast rotations, which don't need interpolation; see below).

Parameters

Rotation
Angle: Define the angle of the rotation, in degrees. An angle of zero degrees means no rotation. We can also use the circle icon to “draw” the angle, rather than entering it in degrees in the Angle box.
Clockwise: When enabled, the rotation is performed clockwise at the angle specified above. When disabled, the rotation is assumed to be counter-clockwise.
Use fast rotations: Fast rotations are rotations of 180 and 90 degrees, clockwise and counter-clockwise. When this option is enabled, if a fast rotation is requested, the rotation is performed by swapping and copying pixels between memory locations without floating point operations, which results in no data degradation and at the same time is extremely fast. This option should always be enabled.
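For reference, the underlying coordinate mapping for an arbitrary angle is the standard planar rotation about the rotation center (x_c, y_c); the sign convention shown here is only illustrative and depends on the Clockwise setting and on the fact that image y coordinates grow downwards:

$$x' = x_c + (x - x_c)\cos\theta - (y - y_c)\sin\theta, \qquad y' = y_c + (x - x_c)\sin\theta + (y - y_c)\cos\theta.$$

Because the rotated coordinates rarely fall exactly on pixel centers, the output pixels must be interpolated, which is where the quality loss mentioned above comes from.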

Interpolation

Algorithm: Usually it's best to leave this as “Auto” unless we have a special reason to force PixInsight to use one of the available algorithms. For a detailed explanation of the different interpolation algorithms, please refer to the chapter on interpolation in the Image Processing section of the book.
Clamping threshold: Only available for the Bicubic Spline, Lanczos and Auto algorithms. These algorithms sometimes produce ringing artifacts, and to compensate for this side effect, this clamping mechanism allows us to avoid the negative interpolated values that cause the ringing. The lower the clamping threshold, the more aggressively the ringing is attacked, at the expense of detail preservation and aliasing.
Smoothness: This parameter is only available if the Mitchell-Netravali, Catmull-Rom Spline, Cubic B-Spline or Auto algorithm has been selected, and it allows us to increase or decrease the smoothness level of the interpolation.

Fill Color

When the output image is created, areas that are beyond the limits of the source image will be filled with the color we define here (RGB and Alpha values).

SampleFormatConversion Process > Image

SampleFormatConversion is used to convert the sample format (bit depth) of an image to the format specified. The 32-bit floating point format is the preferred format for processing most astroimages.

When to use SampleFormatConversion

SampleFormatConversion is the tool we use any time we need to change the bit depth of an image. Most of the time, reducing the bit depth of an image is not advised unless there's a good reason for it. For example, our original image may be in 32-bit floating point format but we may want to edit it with another program that doesn't handle that format, or has limited support for it (like Adobe Photoshop®). In that case we can either convert the image to a format supported by the other application (say, 16-bit integer) with SampleFormatConversion prior to saving it, or do the conversion in the “Save as” file format options dialog – most file formats (XISF, TIFF, FITS, …) offer this option. Increasing the bit depth of the image to the huge 64-bit floating point format may be advisable when dealing with images of very high dynamic range, although again, for most purposes, 32-bit floating point is more than sufficient – nowadays, at least! We can't speak for what the future will be like some years from now.

Parameters are self-explanatory.
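As a brief illustration of what such a conversion involves (the usual convention implied by PixInsight's normalized [0,1] range; shown here for orientation only), converting a 32-bit floating point value v to a 16-bit integer amounts to

$$v_{16} = \mathrm{round}\bigl(v \times 65535\bigr),$$

so values that differ by less than about 1/65535 collapse onto the same integer level, which is exactly the precision we give up when reducing bit depth.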

ScreenTransferFunction Process > IntensityTransformations

The ScreenTransferFunction tool (aka STF) defines a nonlinear transformation that is applied to the screen rendition of an image, without modifying its actual pixel data in any way. In other words, it allows us to visually stretch an image without making any changes to the image itself. Hence, ScreenTransferFunction allows us to work with linear images just as if they were nonlinear, in an easy and completely transparent way.
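For reference, the screen stretch applied by STF combines shadows/highlights clipping with a midtones adjustment; the midtones transfer function used throughout PixInsight's intensity tools has the form (m being the midtones balance)

$$\mathrm{MTF}(x; m) = \frac{(m - 1)\,x}{(2m - 1)\,x - m},$$

which maps 0 to 0, 1 to 1, and x = m to 0.5. This is shown only as background; STF computes suitable parameters for us when we use Auto Stretch (see below).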

When to use ScreenTransferFunction

STF can be used with any image, linear or nonlinear. However, it is mostly used (and most useful) with linear astronomical images, as it allows us to work with the image during the linear stage while still seeing the effects of processes typically executed while the data is still linear, like color calibration, background extraction, deconvolution and many more. In fact, it is such a common tool while processing astroimages that the automatic application of STF is one of the keyboard shortcuts PixInsight users learn first: Ctrl/Cmd-A. When dealing with linear images with a very high dynamic range, or images with very weak data, if we notice some posterization in the STF'ed image we can try to enable 24-bit Lookup Tables, which provide much higher resolution when computing the midtones transfer function that presents the stretched image to us, without sacrificing much computing time. The option to enable 24-bit LUTs is only available from the STF toolbar or via view contextual menus, and it's not discussed here.

Parameters

Link RGB Channels: Enabling this option causes any change in any of the RGB channels to affect all three RGB channels equally. When disabled, we can modify each channel separately.
Edit STF mode: Select this mode to modify (drag) the values of the different channels. If we hold the Shift key while dragging, image update events will be blocked.
Zoom 1:1: Reset the zoom to 1:1.
Zoom In mode: Once selected, clicking on the editing area will zoom in on the editing area. Useful to perform more detailed adjustments.
Auto Stretch: Click this icon to perform an automatic screen transfer stretch. Ctrl-click to edit the default AutoStretch parameters. Shift-click to apply a “boosted” stretch.
Zoom Out mode: Once selected, clicking on the editing area will zoom out of the editing area.
Edit STF Parameters: Click to manually enter the values of the different channels.
Scroll Mode: Select this mode to pan around the editing area. Only useful if we have previously zoomed in.
Reset Channel Buttons: Bring the given channel back to its original state.
Black point readout: Readouts work by clicking on any view (image or preview) of an image window in any of the readout modes (black, midtones, white point). In this mode, while the mouse button is held down, readout values are calculated for the cursor coordinates and sent to the ScreenTransferFunction window. In other words, after clicking on this icon, hover over an image and click on an area - the black point of all channels will be set to the value of the pixel we just clicked. Readouts only work in ScreenTransferFunction if the Broadcast readouts option in ReadoutOptions is enabled.
Midtones readout: Same as the black point readout, but to set the midtones point.
White point readout: The same, except to set the white point.
Enable/Disable STF: This allows us to quickly tell the STF engine whether to apply the STF to the target image or not. If disabled, the moment we make any changes to the STF it will automatically re-enable itself – unless the track view option (below) is also disabled. To reset the STF for a given image, we must reset it, not just disable it; the reset button is at the lower-right corner of the STF dialog.
Track view: When enabled, any changes made in the STF window will be visible in the target image. When disabled, the target image will not display any adjustments we make. Note that if we have already made changes to the STF, disabling track view will NOT reset the STF used by the image.

SCNR Process > NoiseReduction

Subtractive Chromatic Noise Reduction, or SCNR, was originally developed to deal with green pixels caused by noise. With the exception of some planetary nebulae or comets, there are no green astronomical objects or green stars in the sky. Therefore, any dominantly green pixels in a color-balanced astroimage are usually caused by noise. SCNR is incredibly effective at correcting the color of these green noisy pixels. While SCNR is categorized as a noise reduction process, the tool does not make a distinction between what is noise and what isn't; it simply adjusts the RGB values of each and every pixel that has green as the dominant value. This means that SCNR will also remove green color casts from our images, regardless of whether the green cast is caused by noise or something else.

When to use SCNR

SCNR can be used on both linear and nonlinear color images, and there are as many tutorials suggesting to use it before delinearizing the image as there are suggesting to do it after. One common approach is to apply it right after color calibration. It is advisable to use SCNR prior to noise reduction, although not required. In reality, SCNR can be applied any time during the workflow once the image has been color balanced and corrected for gradients. Because SCNR will remove green-dominant pixels regardless of whether they're caused by noise, SCNR should not be applied to images containing objects or structures that really are green, like some planetary nebulae or comets. It should also not be applied when doing narrowband color mapping (or any color mapping that isn't purely RGB), unless we intentionally want to remove dominant green signal (usually for purely aesthetic purposes) or we're targeting only specific areas or objects in the image by using a mask – say, removing a green cast from stars but nothing else (using the appropriate star mask).

Parameters

Color to remove: Although SCNR was developed mainly to deal with green noise, and it is unlikely that we will ever need to remove red or blue pixels, we can do so by selecting a color from this option.
Protection method: SCNR uses a protective method, mainly because the idea is to remove green noise, not to degrade our data.
• Maximum Mask: Apply a strong non-neutral technique that has a tendency to create a magenta cast over the target image unless very low values of Amount are used. This option is rarely used. It works better with very low Amount values (see below).
• Additive Mask: Similar to the Maximum Mask but with a gentler effect. As with the maximum mask, the additive mask is rarely used, and it also works better with very low Amount values.
• Average Neutral: This is the default method and by far the one that consistently produces better results. It basically averages the R and B values and sets the G value to either that average or the original value of G, whichever is smaller (see the worked expression after this list). That way, pixels with a dominant G component have that value reduced, while all other pixels remain basically unchanged.
• Maximum Neutral: More efficient than the masked methods but more protective than Average Neutral, it sets the G component to either the highest (maximum) value of R and B or the original G value, whichever is smaller. This tends to remove green casts from darker areas of the image, while accentuating hues and saturation in brighter areas.
• Minimum Neutral: Similar to the maximum neutral but aiming for the minimum values instead. That is, it sets the G component to either the minimum value of R and B, or the original G value, whichever is smaller. This causes a different effect, removing green casts from brighter areas of the image.
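As a worked expression of the Average Neutral rule described above (for an Amount of one; lower amounts blend the correction with the original green value):

$$G' = \min\!\left(G,\ \frac{R + B}{2}\right).$$

For example, a noisy pixel with (R, G, B) = (0.20, 0.45, 0.30) becomes (0.20, 0.25, 0.30), while a pixel whose green component is not dominant is left essentially untouched.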

Amount: Define the strength of the SCNR function over the target image. Some methods are more sensitive to the amount value than others. While Average Neutral often produces great results with an amount value of one, the masked and max/min neutral methods often benefit from a lower value.
Preserve luminance: As its name indicates, when enabled, SCNR will preserve the luminance data, modifying only chrominance information. This is the recommended option in most cases. Note that to preserve luminance, SCNR converts the image from RGB to CIE L*a*b* and back when this option is enabled, so the RGBWorkingSpace settings for the image matter.

SimplexNoise Process > NoiseGeneration

SimplexNoise is PixInsight's implementation of the simplex noise algorithm created by Ken Perlin, an enhanced version of the classical Perlin noise algorithm, which is more of a texture generation algorithm than a noise generation algorithm. Noise generated by the SimplexNoise algorithm is not random noise but simulated noise.

When to use SimplexNoise

Since SimplexNoise uses a texture generation algorithm, it is often used to create patterns for testing and analyzing other tools and processes, rather than being applied directly to astronomical images. If it is used to add noise to an image, all considerations stated when discussing this topic for the NoiseGenerator tool apply.

Parameters

Amount: Define the strength of the noise to be applied.
Scale: Scale of the noise pattern. The x1, x10 and x100 options to the right define the increment amount when we use the spin box up/down buttons (same as for the next two parameters). The default value of 100 is too large for most noise-addition needs. Very low values (1~4) tend to create better noise-like patterns.
X/Y Offsets: Indicate the offset into the noise pattern. SimplexNoise defines a large pre-calculated pattern. These parameters define the position (offset) in this large noise pattern field, so that the pattern applied to our image will be different depending on the offset values.
Operator: Once the noise pattern has been generated, we can choose different ways to add it to the image. The Copy operator takes the noise pattern and places it over the image with complete disregard for whatever data was present there before. All other options should be self-explanatory, as they are all straight operators, except for the screen operator, which applies a 1 - (1-Source) × (1-Noise) operation.
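As a quick worked example of the screen operator formula given above, with hypothetical values Source = 0.2 and Noise = 0.1:

$$1 - (1 - 0.2)(1 - 0.1) = 1 - 0.8 \times 0.9 = 0.28,$$

so the screen operator always brightens the original value (or leaves it unchanged), never darkens it.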

SplitCFA Process > Preprocessing

SplitCFA is used individually or in conjunction with MergeCFA for a particular debayering technique with color images. SplitCFA extracts the red, green and blue pixel values from a CFA image (there are two green pixels for every red and blue pixel, so four values in total) and creates four different images, one for each CFA position. Naturally, each resulting image has half the width and half the height in pixels; that is, each has one quarter of the area of the target frames.


MergeCFA can then be used to put the four images back together after having done some processing with the images individually.
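A minimal sketch of the split itself, outside PixInsight, assuming an RGGB Bayer pattern and a NumPy array; this is purely illustrative, not PixInsight code, and which position holds which color depends on the camera's CFA pattern:

    import numpy as np

    def split_cfa(cfa: np.ndarray):
        # Each 2x2 CFA cell contributes one pixel to each of the four planes,
        # so every plane has half the width and half the height of the input.
        cfa0 = cfa[0::2, 0::2]  # e.g. R  (top-left position of each cell)
        cfa1 = cfa[0::2, 1::2]  # e.g. G1 (top-right)
        cfa2 = cfa[1::2, 0::2]  # e.g. G2 (bottom-left)
        cfa3 = cfa[1::2, 1::2]  # e.g. B  (bottom-right)
        return cfa0, cfa1, cfa2, cfa3

The names cfa0 through cfa3 mirror the CFA0–CFA3 sub-folders that SplitCFA can create (see the Output parameters below); MergeCFA performs the inverse operation, interleaving the four planes back into a full-size mosaic.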

When to use SplitCFA

While it's generally recommended to use the different debayering options available in most calibration and integration tools in PixInsight, SplitCFA/MergeCFA were originally developed as an alternative for situations where regular calibration failed to produce good results. In these cases, SplitCFA is used to perform the extraction (the split) at the very beginning of the workflow, so each component can be calibrated individually. After that, one can recombine them with MergeCFA and continue a typical CFA workflow, or instead continue as if they were classic R, G and B data and at some point recombine them with ChannelCombination. SplitCFA can also be applied to calibration frames in situations where having each component as a separate image of its own could be useful. When executed globally (Apply Global), SplitCFA will attempt to split the files added to the Target Frames list and save the output files to disk. If applied as a New Instance on an opened image, SplitCFA will create the four images on the desktop as new images (nothing will be saved to disk).

Parameters

Target Frames

Add Files: Click here to add the files to be CFA-split. The order is not important.
Select All: Mark all files in the list as selected.
Invert Selection: Mark as selected all not-selected files, and vice-versa.
Toggle Selected: Enable or disable the currently selected file in the list.
Remove Selected: Completely remove the selected image(s) from the list.


Clear: Completely remove all images from the list.
Full Paths: When Full paths is enabled, the File column will display the complete path to the file, not just the filename.

Output

Tree: When enabled, SplitCFA will duplicate the directory structure of the target files at the specified output directory, creating the output files in folders equivalent to those of the target files.
Output directory: The folder where the newly created files will be saved.
CFA sub-folder: When enabled, SplitCFA will create four sub-folders, named CFA0, CFA1, CFA2 and CFA3, where each of the output files will be created.
Overwrite: When enabled, if a file with the same name already exists in the output directory, overwrite it. If the same situation arises when this option is disabled, the new filename will have an underscore character followed by a numeric index appended: _1 the first time a given filename is found, _2 should it happen a second time, etc.
Prefix: Insert a string of text at the beginning (prefix) of each filename. The default is blank: no prefix.
Postfix: Add a string of text at the end of each filename, prior to the file extension (.xisf, .fits, etc.). The default is “_CFA”.

StarAlignment Process > ImageRegistration

The StarAlignment process is used to align/register two or more images and is, in fact, the go-to tool in PixInsight for image registration. The tool has been specifically designed for the registration of deep-sky images. It is based on feature-based registration algorithms that use stars as alignment references. Despite its complexity and myriad of parameters, it's a very easy-to-use tool where most default parameter values can often be left untouched.


When to use StarAlignment

As a tool to register a stack of images prior to image integration, StarAlignment can fit at different stages within the workflow depending on our complete data set and strategy. That said, the most efficient strategies aim at using image registration the least possible number of times, due to the fact that interpolation can easily degrade our data. For example, in a typical multi-filter workflow, the best strategy is to register our entire data set to the same reference frame, regardless of the number of filters used. In some cases, for example when combining binned and unbinned data (say, RGB filters at bin2 with luminance at bin1), it may still be beneficial to register all frames to the same reference image (say, one of the luminance images at bin1), but we may choose to register the RGB data independently, giving consideration to the fact that the RGB data is already degraded in comparison to the L due to the reduced resolution. Building mosaics presents even more diverse ways to register all our subframes, especially with multi-filter data sets. For example, following the same (L)RGB example, but with several mosaic subframes to deal with, we may choose to register and stitch all our R subframes together into a single mosaiced image, then do the same for our G and B data sets, and only then register those three single mosaiced images together to build the color RGB master image (note that each time we mention the word “register” it implies running StarAlignment again). We could, however, build the mosaic in a completely different way, by registering and combining a single (L)RGB image for each of the subframes individually and later stitching them. This implies more uses of StarAlignment and more registrations taking place but, particularly with very large mosaics, sometimes the strategy that uses the least number of StarAlignment applications is not the most efficient. Regardless, for the purpose of registering images that are part of a stack, StarAlignment should be applied after image calibration, on linear images. If registering color images, these should be debayered prior to registering them. StarAlignment will also work on nonlinear images, so it can also be used to register already stretched or even fully processed images. This, however, should be reserved for situations when linear data is not available, such as aligning different images of the same object or FOV for comparisons or overlays.

Parameters

Reference image: All target images will be aligned to this image. If there's an object that moved differently than the stars during capture and we wish to include it, we ideally want to select the frame where it is perfectly centered. We can choose whether to select an image currently opened inside PixInsight, by leaving the default View in the little drop-down menu on the right, or a file saved to our hard drive, by selecting File instead. In any case, the reference image should be, at the very least, one with good FWHM and low eccentricity, something we can find out with the SubframeSelector tool.
Distortion model: If we would like to use a distortion model, here we load the distortion model file. A distortion model defines geometrical distortions (field curvature, lateral chromatic aberration, etc.) such as those caused by lenses with focal lengths of 200 mm or smaller, which aren't widely used in deep-sky photography. A distortion model file is a text file that contains the information about these distortions, which StarAlignment can use to correct them. Most of the time, we can safely ignore this option, although if we have an accurate distortion model of our optical system and StarAlignment is having problems registering our images, this may be an option worth exploring.

Distortion model files can be created with the script ManualImageSolver. The process to create them takes a few steps but it's fairly straightforward:
1. We need our (distorted) image and an image with no distortions. The script CatalogStarGenerator could be used to create a synthetic field of stars with no distortion.
2. DynamicAlignment is used to set matching point pairs between the two images. The more points, the better.
3. We save a new instance of DynamicAlignment on the workspace.
4. We run ManualImageSolver, select our target image and the icon instance we just saved.
5. We enable “Distortion correction” and “Generate distortion model” and OK the script.
If ManualImageSolver complains that the image does not have valid astrometric data, try running ImageSolver (a different script) first.
Undistorted reference: If we're using a distortion model (above) but our reference image is not distorted (we're assuming the rest of the images are), we enable this option and StarAlignment will not use the distortion model on the reference image.
Registration model: Registration models define how the coordinate mapping between the reference image and the target images is done. StarAlignment offers two different transformation models.
• Projective transformation: The projective transformation generates the best results for images that are geometrically close to the reference. That makes this method the best candidate for registering images with considerable overlap, as when typically registering a stack of images.
• Thin Plate Splines: StarAlignment uses an optimized version of this registration method that not only smooths deformations extremely well but also places more registration points in areas where they're needed most, improving performance and optimizing computation time. This is the preferred option for images that suffer from differential distortion, a common problem in mosaics and in images with large rotation angle differences or images that are considerably misaligned.

Distortion correction: When enabled, StarAlignment will correct for optical distortion misalignments, such as those found in mosaics, as well as other nonlinear distortions that are less common in astronomical images. Disable it for typical image registration, but enable it for mosaics and difficult registration cases. Because the distortion corrections don't work with the projective transformation, this option is only available when Thin Plate Splines is selected. Also, the next two parameters are related to the distortion correction being set here, so they are only available when this option is enabled.
Local distortion: Only available when Distortion correction is selected above, this option should be enabled (the default value) in most cases. What local distortion does is execute the distortion corrections by, in addition to building a distortion model based on matched stars, creating an additional distortion model based on all detected stars, not just the stars for which a match has been found. This yields much more accurate results.
Generate distortion maps: If enabled when Distortion correction is selected, generate an image representing the distortion map, highlighting the differences between the distortion model and a linear model.
Working mode: This parameter defines what registration task StarAlignment will execute.
• Register/Match images: This is the default and most commonly used option, where all images in the Target Images list are aligned to the reference image by superposition. This is also the only mode that can save the registered images to disk.



• Register/Union – Mosaic: This mode is used to compose a mosaic using two different images. A certain number of overlapping stars will be needed. The registered image will include both images, covering the entire area being imaged. Because StarAlignment can only register two images at a time when building a mosaic, the use of this option has been restricted to work only by dragging a new instance over currently opened images. For this reason, the Target Images area becomes unavailable when this option is selected.



• Register/Union – Separate: This mode works exactly like Register/Union – Mosaic, generating output images that cover the entire area, except that in this case two output images are generated: one for the reference image and one for the target image.



• Structure detection, Structure Map, Detected Stars, Putative Star Matches and Matched Stars: These options will not result in any alignment operation, and Target Images are, in fact, not needed (the Target Images area becomes unavailable). The tool will just generate one or more images showing data useful to evaluate our image quality. For example, Matched Stars will output an image with the stars that were matched in all our images; Structure detection shows a map of the areas where StarAlignment will look for stars, and Structure Map works similarly but creates a map of actual structures. These modes are very useful for analysis of registration quality, spotting issues, etc.
Generate masks: When enabled, StarAlignment generates an additional file for each processed image, in which white pixels represent those pixels that are present both in the reference and the processed images. The other pixels will be black. These images are useful as masks to apply selective corrections, especially when doing mosaics.
Generate drizzle data: If we're planning on creating a final drizzled image in PixInsight from the data set in the Target Images list, before using ImageIntegration or DrizzleIntegration, this is where we must start: with fully calibrated files, by aligning them with this option enabled. Enabling this option will create the drizzle data files (extension .xdrz) and include the drizzle data in them, one per registered image.
Frame adaptation: When enabled, StarAlignment applies a linear fitting function to adjust for brightness differences between the reference image and each of the target images. The strength of the fitting function – which consists of an additive pedestal and a multiplicative scaling – is determined from the values of the pixels inside the overlapping areas only. Enabling this option is recommended when aligning different frames for a mosaic, although some workflows may not need it, for example if we previously used LinearFit on the different frames of the mosaic.

Target Images

This section is composed of a large window where the list of images to be aligned is placed.
Add Files / Add Views: Click on these buttons to add files or already opened images (views) to the list.
Select All: When clicked, it will select all target images.
Invert Selection: Deselected files will be selected, and vice-versa: files already selected will be deselected.
Toggle Selection: Selects a deselected file, or deselects a selected file.
Remove Selected: Removes the selected files from the list.
Clear: Removes all files from the list.
Full Paths: When enabled, the full path of the images in the list will be displayed.

Format Hints

StarAlignment allows for input format hints to modify how files are loaded. When also generating output files (working modes Register/Match Images or Transformation Matrices), output format hints are also available.

Output Images

This section helps us decide what to do with our aligned images. It is only available for the two working modes that generate output files: Register/Match Images and Transformation Matrices.
Output Directory: The directory where the registered image files (and drizzle files, if Generate drizzle data is enabled) will be saved. If we leave this setting blank, the registered images will be saved in the same directory they are in now.
Output Prefix, Postfix / Mask / D.Map: These are all used to “tag” the newly created registered files, generated masks and distortion maps, respectively. We can modify these parameters as needed, although it's recommended to use the defaults.
Sample format: Select the format (bit depth) to be used in the newly generated aligned images. Normally, using the same bit depth as the corresponding targets is recommended. Other values can be used for specific purposes.
Overwrite existing files: This is an added security check that, when disabled, prevents us from overwriting our originals.
On error: Specify what StarAlignment will do in case it runs into problems or errors: continue, stop, or ask us what to do.

Star Detection

These values assist StarAlignment in finding stars in the image. Many of these parameters can also be found in other PixInsight processes that depend on star detection, such as SubframeSelector.
Detection scales: Number of wavelet layers used for detecting stars of a certain size. Increasing this value helps large stars (and perhaps also some non-stellar objects) be detected. Decreasing this value helps detect smaller stars, usually yielding a larger number of detected stars. The default of 5 works well in most cases. Lower the value to 4 or 3 if more stars need to be detected.
Noise scales: Number of wavelet layers used to define noise levels. Structure scales equal to or lower than this value will be considered noise, not stars. This can also be useful to define the minimum size of detected stars – the higher the value, the more (small) stars are excluded.

Hot pixel removal: This is the radius in pixels of a circular median filter that is applied internally to the hot pixels of the images prior to the star detection step. The default value of one works well. Set it to zero to disable hot pixel removal prior to the star detection phase.
Noise reduction: This value defines the radius in pixels of a Gaussian convolution filter that is applied to each image internally during the star detection phase. The default value is zero because this option should not be used under normal circumstances, as it can degrade registration accuracy. That said, in cases where the images are extremely noisy and previous registration attempts seem to be confused by the noise, increasing this value may help.
Log(sensitivity): This value, by definition the logarithm of the star detection sensitivity, measures the sensitivity of StarAlignment in detecting stars against their local background (the background around each star). Increase it to limit star detection to bright stars, decrease it to detect fainter stars or stars over bright backgrounds. Adjustment of this parameter should not be needed for most astronomical image processing needs.
Peak response: This parameter defines how “pointy” a star needs to be in order to be detected. Increasing this value favors detecting stars with a flatter profile, while decreasing this value will require stars to have more prominent peaks in order to be detected. While adjustments of this parameter are not usually required, they can come in very handy in cases of images with saturated (flat) stars, for example.
Maximum distortion: Maximum star distortion. Star distortion is measured against a square, which is assigned a distortion value of one. The distortion of a perfectly circular star is about 0.78 (π/4, actually). Lower this value to allow for more distortion. Increase it to reject more distorted stars, which can be useful to exclude elongated stars from the registration model, for example.
Upper limit: This parameter defines the highest star peak value allowed for a star to be detected. Stars with a peak value higher than this parameter will not be included. The default value of one (the highest value, white, in the [0,1] range) means no star will be rejected because of this setting. Decrease it to exclude stars that exceed a certain brightness.
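The 0.78 figure quoted for a circular star follows directly from comparing the area of a circle to that of its bounding square:

$$\frac{\pi r^{2}}{(2r)^{2}} = \frac{\pi}{4} \approx 0.785.$$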


Compute PSF fits: StarAlignment can calculate the PSF of each detected star and use the star's PSF centroid as the registration point, instead of calculating the registration point based on the actual pixels defining the star (the so-called weighted barycenter point). This option defines when, if at all, StarAlignment should take advantage of this theoretically more reliable registration method. We can select between three options: Never or Always do the registration based on PSF fits, or do it only when Distortion correction is also enabled, which is the default value. If, when applied, StarAlignment does not find enough stars, try setting this parameter to Never, particularly if our source images are out of focus or the stars are saturated. PSF fitting comes in particularly handy with heavily distorted images.
Inverted image: This option should only be enabled if we're working with negative images (dark stars over a bright background).

Star Matching

RANSAC tolerance: When StarAlignment is looking for a matching star in a target image, this parameter defines how far the matching star can be from where StarAlignment expects it to be in order to be considered a match. The higher this value, the more tolerant the star matching process will be. While the default value of 2 works well for most alignment needs, cases with severe distortion, or mosaics, can benefit from higher values. RANSAC is the name of the algorithm on which the star matching routines of StarAlignment are based.
RANSAC iterations: RANSAC is an iterative method, and this parameter defines the maximum number of iterations that will take place. Because the optimal number of iterations can be determined adaptively by StarAlignment, it's often better not to change this parameter. Still, in very difficult cases, such as mosaics with small overlapping areas, a higher number of iterations may help find a registration model.
Maximum stars: Maximum number of stars allowed. In the default mode (or zero), if the initial attempt to register the image fails, StarAlignment will go through a series of matching attempts using a predefined sequence of star counts. In most situations, setting a fixed number of reference stars is not necessary, and the described mode is recommended. Only in cases of minimal overlap, as happens with some mosaics, may it be necessary to set a large (or small) value here.
Descriptor type: Define which geometrical descriptor StarAlignment will use:
• Triangle similarity: This method defines triangles from detected stars and finds a match by looking for triangle similarities. It's fast and often works in most cases, including mirrored images and affine transformations like translation, rotation or uniform scaling.
• Polygon descriptors: These methods use polygons instead of triangles for finding star matches. They are more flexible, robust, and particularly suitable for images with local distortions, scale differences, or mosaics. They don't work with mirrored images, however.




Descriptors per star: Minimum number of descriptors per star. The higher the value, the greater the chances of finding putative star pair matches. Values between 20 and 100 work well in most cases.
Compute intersections: When looking for star matches, we can instruct StarAlignment to start by automatically calculating the intersection between the reference and the target image, and then limit the star matching process to that intersection area only. This is particularly useful when building mosaics, where overlapping areas are typically small, but it's not really necessary when registering images that overlap by 75% or more. The default option, Mosaic modes only, limits finding the overlapping areas to when a mosaic working mode is selected (either Register/Union – Mosaic or Register/Union – Separate).
Restrict to previews: When enabled, StarAlignment behaves similarly to the Compute intersections parameter, except that in this case StarAlignment will limit the search for star matches to the areas defined by any previews we have defined in the target image. Obviously, for this to work, we must have defined previews in the areas we know overlap, and we can only apply this to a view (an opened image), since that's the only way we can define a preview in it.
Use brightness relations: Brightness relations between alignment stars refer to StarAlignment's ability to consider brightness relationships (star A is brighter than star B but darker than star C) when looking for a match. Using this information, the accuracy of the star matching process can be improved. However, certain situations, like trying to register images captured with different narrowband filters, may cause problems.
Use scale differences: Enable this option to take into account differences in scale (size) between the descriptors used to match the stars. A scale tolerance must then also be defined (see Scale tolerance below). This helps the star matching algorithm find more valid star pair matches out of a large number of false putative matches. Images with small overlap (mosaics) or with a large number of outliers benefit from accounting for scale differences, as long as the value for Scale tolerance is not too restrictive.
Scale tolerance: This parameter is only available when Use scale differences is enabled. It defines the maximum allowed difference in triangle scale for star matching purposes. If the difference in scale between two descriptors is larger than this value, the star will be rejected (no match).

Interpolation

Pixel interpolation: For most typical registration cases, leaving this option as “Auto” should work well. However, it's not unusual to use specific algorithms depending on our particular needs.


In the Auto mode, Bicubic spline is used for upsampling ratios, and also for slight downsampling ratios when the Mitchell-Netravali filters cannot be properly sampled. Mitchell-Netravali cubic filters are used for the rest of the downsampling operations. If we don't select the Auto mode, when downscaling an image the nearest neighbor and bilinear algorithms tend to be the poorest performers, followed by Bicubic spline and Bicubic B-spline, with the Mitchell-Netravali and Catmull-Rom algorithms often providing very good results. When upscaling an image, Bicubic Spline usually gives the best results. The Mitchell-Netravali interpolation filter can be used to achieve higher smoothness in the upsampled result, which can be desirable in some applications.
Clamping threshold: Only available for the Bicubic Spline, Lanczos and Auto algorithms. These algorithms sometimes produce ringing artifacts, and to compensate for this side effect, this clamping mechanism allows us to avoid the negative interpolated values that cause the ringing. The lower the clamping threshold, the more aggressively the ringing is attacked, at the expense of detail preservation and aliasing.

StarGenerator Process > Render

StarGenerator is a tool that, given a center point (indicated in RA and Dec coordinates and epoch information) and an image size and resolution, looks up a star catalog to create a 32-bit integer image (conveniently named “stars”) showing a realistic rendition of the stars in that region. The synthetic stars in the image will also mimic star brightness, size and FWHM from the information contained in the star catalog. Do note that very bright stars in our astronomical images appear predominantly large due to diffraction, scattered light and some optical imperfections, but physically they are still nothing but points of light. StarGenerator does not try to emulate these imperfections, meaning that if we point StarGenerator to, say, the Orion's Belt area, we should not expect to see the three belt stars rendered predominantly “larger” than the rest. The star catalog is not included in PixInsight and we need to download it from the Internet. Currently, the recommended catalog is the PPM-Extended catalog, with detailed astrometric and photometric information about more than 18 million stars. This catalog can currently be downloaded from PixInsight's website, here: https://pixinsight.com/download/#StarGenerator_Databases

When to use StarGenerator

StarGenerator was not designed to add synthetic stars to our images (!!). The tool was developed with the construction of mosaics in mind, where StarGenerator can be a great aid in creating the reference frame to which all our mosaic subframes are then registered. It can also be very useful when we have images that suffer from strong optical distortion and we would like to produce an image correcting those distortions. For a more advanced and flexible tool to create synthetic star fields, see the script CatalogStarGenerator.

Parameters

Star database: Here we indicate the star catalog file which, as stated before, needs to be downloaded manually prior to using StarGenerator.

Right ascension/Declination/Epoch: These parameters define the center of the projection in RA and Dec coordinates.

S: Enable to define southern Dec coordinates. If disabled, StarGenerator assumes the Dec coordinates are for the Northern Hemisphere.

Projection: Here we define the projection system used by StarGenerator. Two projections are currently available:

• Gnomonic projection: This is the default choice, as it tends to produce a flat representation of spherical coordinates that is closer than other projections to how most optical systems project the images. It tends to produce much better results than the Conformal projection and it's the recommended choice for most applications.

• Conformal projection: Conformal projections preserve the angular distance between two objects in the image. This is useful for projecting very wide field images (10+ degrees) with minimal distortion.

Focal length: We define the output image size and resolution by entering information about the optical system we're trying to duplicate. Here we enter the approximate focal length of the telescope or lens, in millimeters.

Pixel size: Here we enter the pixel size (in microns) of the camera's sensor we're trying to emulate. If we don't know this value, an online search for the specs of the sensor in the camera should help.

Sensor width/height: Width and height in pixels of the camera's sensor.

Limit magnitude: Any stars with magnitudes higher than this value will not be included in the synthetic image. Increase to include more stars. Very low magnitude values may render images with very few stars or no stars at all.

Output mode: StarGenerator can either generate the synthetic image (Render Image) or create a CSV text file (Generate CSV File) that starts by defining the width and height of the image, followed by a list of numbers grouped in trios: the x and y image coordinates of each star and its magnitude (see the reading sketch after this parameter list). When Generate CSV File is selected, in the box below we must include the path and filename for the output file.

Star FWHM (*): When the output mode is Render Image, here we control the apparent FWHM to be applied to the synthetic stars. The lower the value, the sharper the stars will be.

Nonlinear: StarGenerator creates a linear image, but when this option is enabled, it performs a nonlinear stretch, which better resembles what a star field looks like, visually, in an astronomical image. The strength of the stretch is defined by the next parameter, Nonlinear target minimum.

Nonlinear target minimum: This parameter determines how strong the nonlinear stretch will be. The value we set here tells StarGenerator to perform a nonlinear stretch until the minimum sample value in the image matches this value. The valid range is 0.25 to 1. The higher the value, the stronger the stretch.
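As a small illustration of the Generate CSV File output described above, the following Python sketch reads such a file back into a list of (x, y, magnitude) tuples. The exact delimiters and record layout are an assumption on my part – only the content (width and height first, then one trio per star) is stated by the tool – so adjust the parsing to the actual file if needed:

    import csv

    def read_star_csv(path):
        # Assumed layout: image width and height first, then the stars as
        # x, y, magnitude trios. Delimiters/record grouping are assumptions.
        with open(path, newline="") as f:
            values = [float(v) for row in csv.reader(f) for v in row if v.strip()]
        width, height = int(values[0]), int(values[1])
        stars = [tuple(values[i:i + 3]) for i in range(2, len(values), 3)]
        return width, height, stars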


StarMask Process > MaskGeneration

The StarMask process is an excellent tool in PixInsight to build star masks of many types. StarMask operates by extracting the L component from the target image (or a duplicate of the image if it's in the grayscale color space) and applying a multiscale algorithm that detects and extracts all image structures within a given range of scales (read sizes). The algorithm is based on the Starlet transform and multiscale morphological transforms.

When to use StarMask

As of PixInsight version 1.8.8-3, StarMask is pending a considerable upgrade; however, it's still one of the most versatile star mask generation tools out there. StarMask works effectively on linear and nonlinear images. Star masks are a fundamental resource in astroimage processing and they can be used for a myriad of reasons. It would take several pages to document each of them, and it would still be an incomplete list. That said, we can narrow down the usage of a star mask to two main reasons:

• To protect the stars from processes that could otherwise affect them negatively, such as applying a deconvolution.



• To protect everything but the stars, as would be the case if we want to adjust color saturation on the stars only, or to reduce their presence or size, for example.

StarMask can be used as an end-to-end tool, where the output image (the mask) comes out exactly as we need the mask to be, but it can also be a starting point for a more elaborate mask. For example, if we struggle with star growth in the mask created by StarMask, we could adjust star growth later, using the MorphologicalTransformation tool on the mask created by StarMask. Likewise for many other adjustments that could be made to the mask. Masks are documented in great detail in the Image Processing sections of the book.


Parameters

Noise threshold: This value is used to differentiate between noise and significant structures. Basically, all detected structures below this threshold will be considered noise, and the rest will survive as significant structures. Obviously, higher thresholds will include fewer structures in the mask, and vice versa. Therefore, increase this value to prevent inclusion of noise, and decrease it to include more structures.

Working mode: We can select one out of four available operation modes:

• Star Mask: Generate the actual star mask. This is the default and most typical working mode.

• Structure Detection: Generate a special mask with all detected structures, also known as a structure map. A structure map is useful to know exactly which image structures are being detected. It can also be used for actual image processing purposes, especially as the starting point of other mask generation tasks.

• Star Mask Overlay: In this mode, StarMask generates an 8-bit RGB test image where the red channel contains the generated star mask superposed on the target image, while the green and blue channels have no mask contribution. The base image used to build this overlaid image is the target image after applying the initial histogram transform (see the Shadows Clipping, Midtones Balance and Highlights Clipping parameters below).

• Structure Detection Overlay: This mode is essentially the same as Star Mask Overlay, but instead of the star mask, the structure map is overlaid on the target image.




Scale: This parameter is the number of (dyadic) wavelet layers used to extract image structures. The larger the value of Scale, the bigger the structures that will be included in the generated mask. Always try to set this parameter to the lowest value capable of extracting all required image structures; values between 4 and 6 wavelet layers (scales of up to 16 to 64 pixels) cover virtually all deep-sky images.
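Since the layers are dyadic, layer n roughly corresponds to structures of 2^n pixels, which is where the figures above come from; a quick check in Python:

    # Dyadic wavelet layers: layer n captures structures of roughly 2**n pixels,
    # so 4 layers reach ~16-pixel structures and 6 layers reach ~64-pixel ones.
    print({n: 2 ** n for n in range(1, 7)})   # {1: 2, 2: 4, ..., 6: 64}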

Structure Growth

Large-scale: Growth factor for large-scale structures (stars). This defines an additional growing procedure applied to all structures considered large (structure scales higher than the value in Compensation but not larger than the value for Scale).

Small-scale: Small-scale growth factor. This defines a similar growing procedure, but now applied to the set of small-scale structures (structures with a scale equal to or lower than Compensation).

Compensation: Small-scale growth compensation. This is the number of small-scale wavelet layers (from zero up to the Scale parameter minus one) for small-scale growth compensation.

Mask Generation

Smoothness: This parameter determines the smoothness of all structures in the final mask. If generated with insufficient smoothness, the mask will probably cause edge artifacts due to abrupt transitions between protected and unprotected regions. On the other hand, excessive smoothness may degrade masking performance. In the case of a deringing support, finding a correct value for this parameter is very important. If in doubt, it is preferable to exaggerate smoothness, because the effects of leaving too small a value are usually much worse. The default of 16 is rather large for most typical star masks. Although appropriate values depend on the image and the task at hand, many successful star masks use Smoothness values between 3 and 8.

Aggregate: This parameter defines how individual image structures contribute to the mask construction process. Enable this parameter to generate a mask where structures are gathered by summing their representations on all wavelet layers.

Binarize: This parameter defines how the initial set of detected structures is truncated to differentiate the noise from significant structures. If enabled, the initial set of detected image structures is binarized: all structures below the Threshold parameter value are considered noise and hence removed (set to black), and the rest of the structures are set to pure white. Therefore we should enable this parameter to generate a mask where all structures are initially white. In this case, only the Smoothness parameter will determine the final brightness of all structures (smaller structures will be dimmer when smoothed). If disabled, the initial set of detected image structures is truncated: all structures below the Threshold parameter value are considered noise and hence removed (set to black), and the rest of the structures are rescaled to occupy the whole range from pure black to pure white. Structures that are supported by more wavelet layers will be brighter.

Contours: Enable this option to build a mask based on structure contours. This option involves implicit binarization of all structures before contour detection.

Invert: Invert the mask after it has been generated.

Mask Preprocessing

Shadows/Midtones/Highlights: These parameters correspond to a histogram transform that is applied to the target image prior to structure detection and mask generation. In fact, this histogram transform is an important preparatory step in the StarMask algorithm. These parameters have default values of 0.0, 0.5 and 1.0, respectively, which define an identity transformation (no change). However, usually we'll need to apply lower values of the midtones balance parameter, especially when working with linear images, mainly for two reasons:

• To improve overall structure detection. In linear images, the structure detection algorithm may need us to improve local contrast of small structures in order to separate them from the noise.



• To block structure detection over bright parts of the image, where we don't want the mask to include structures that are not stars but actually small-scale nebular features, for example.

Increasing the Shadows parameter may also help to improve detection slightly; however, if we set it to a very high value, clipping will occur in the shadows, which will prevent inclusion of dim structures. This is an effective way to leave out of the mask dimmer and smaller stars that would otherwise be detected. Generally, the highlights parameter is left with its default 1.0 value.

Truncation: Highlights truncation point. This value, in the range [0,1], is a highlights clipping point applied to the final mask (before multiplying it by the Limit parameter, see below). It can be used to force the cores of bright structures to be pure white. Decrease this value to improve protection in the cores of mask structures.


Limit: This value, in the range [0,1], multiplies the whole mask after it has been completed, so it is useful to impose an upper limit for all mask pixels. Many deringing supports generated by structure binarization work better with low limit values, between 0.1 and 0.5. If mask inversion has been selected, this multiplication will take place before the inversion.

Statistics Process > Image

Invoke this process to obtain statistical data from any given view (opened image). By default, Statistics shows eight common statistical values in astronomical images: a count of sampled pixels (pixels with a value other than zero and one), mean, median, average absolute deviation, MAD, and the minimum and maximum values. By clicking on the wrench icon, we can choose to display many other values: standard deviation, modulus, norm, sum and mean of squares, variance and others. The Minimum/Maximum position option displays pairs of (x,y) coordinates.

When to use Statistics

Statistics can be used at any time, on any kind of image, whenever we would like to know any of the statistical variables offered by the tool. This need can arise on many occasions and for so many different reasons that attempting a list is pointless. It is good to know beforehand the information that can be obtained from Statistics, as well as the different options to display that information, explained below.

Parameters

Image selection: Here we select the view that will be analyzed.

Range: Select the range to which Statistics should rescale the information obtained from the image. The default is the Normalized Real [0,1] range, also the standard range used in PixInsight.

Other, integer-based ranges are offered as well (note that the results will still be displayed in decimal notation, for better accuracy). We will need to make the proper selection depending on how we'll be using the data later.

Scientific notation: Display the statistical data in scientific (exponential) notation, rather than the standard decimal notation.

Normalized: When enabled, all scale variables are calculated with the standard deviation of a normal distribution.

Unclipped: By default, Statistics does not compute pixels with a value of zero or one (pure black or pure white, in the normalized [0,1] range). When this option is enabled, Statistics does include those pixels when calculating all statistical variables.

Text View: Open a small text window with the output of Statistics in text format, suitable for copy & paste operations, say, to be used in an email or document.

Track View: Enable this option to have Statistics dynamically update its information based on the active view (opened image). This allows us to get updated information in the Statistics window as we select one image or another.
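To make the Unclipped option concrete, here is a small numpy sketch – my own illustration, not PixInsight code – that computes a few of the same statistics on a synthetic [0,1] image, first excluding pure black/white pixels (the default behavior) and then including them (Unclipped). The average deviation and MAD are computed about the median here, which may differ slightly from PixInsight's exact definitions:

    import numpy as np

    img = np.random.default_rng(1).random((256, 256))
    img[:8, :8] = 0.0      # simulate clipped shadows
    img[-8:, -8:] = 1.0    # simulate saturated highlights

    def basic_stats(a):
        med = np.median(a)
        return {"count": a.size, "mean": a.mean(), "median": med,
                "avgDev": np.mean(np.abs(a - med)),
                "MAD": np.median(np.abs(a - med)),
                "min": a.min(), "max": a.max()}

    clipped = img[(img > 0.0) & (img < 1.0)]    # default: skip pure black/white
    print(basic_stats(clipped))                 # comparable to the default output
    print(basic_stats(img.ravel()))             # comparable to Unclipped enabled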

SubframeSelector Process > Preprocessing

This entry refers to the module SubframeSelector under PROCESS > Image Inspection, not the script of the same name under SCRIPT > Batch Processing. Both serve similar purposes, but the newer module is more advanced and easier to use. SubframeSelector is a tool designed to aid us in determining the quality of a group of subframes containing linear astronomical images. While complex in appearance (it really is a complex tool), it's not difficult to use, at least for relatively simple analysis. SubframeSelector needs to be used with linear images, otherwise the obtained “measurements” would be meaningless. SubframeSelector can only be executed globally (Apply Global).


When to use SubframeSelector

SubframeSelector can be used at different stages during the workflow, or not at all. Overall, SubframeSelector is mostly used for three reasons:

1. To analyze a data set, looking for outliers or “bad” frames.

2. To analyze a data set, looking for the best subframe(s).

3. To calculate and assign weights to a set of images.

Identifying bad frames is something commonly done with raw or calibrated images that will later be registered and integrated. In cases where our images were captured under very similar conditions, SubframeSelector may not be needed at all. Rejecting a couple of frames simply because, for example, their FWHM was off by 0.5 or their SNR was slightly lower than the rest, might not improve our final results. More often than not, mild differences in a small percentage of frames don't degrade the final results. For that reason, most people use SubframeSelector for this purpose only when they know or suspect that there may be some frames that could really degrade the integrated image.

Using SubframeSelector when looking for the best frames is normally done prior to image registration, with the goal of selecting a good reference frame for the registration process. Again, with data sets of similar quality, selecting one frame over another may not make a significant contribution.

Weighting images is another common usage of SubframeSelector. While ImageIntegration does a great weighting evaluation, typically based on noise estimations, SubframeSelector gives us more control, if we so want (or need) it, to the point that we can define our own expressions –

using data collected by SubframeSelector, such as FWHM or Eccentricity – to determine an image's “weight.” SubframeSelector also supports adding a custom FITS header keyword to each image with its calculated weight, and ImageIntegration has an option to retrieve that custom keyword to determine the image's weight. In most cases, the image weighting in ImageIntegration does a great job, and SubframeSelector is mostly used for image weighting when we prefer to use a customized expression that may better target specific goals – for example, we may want the weight of each image to depend more on FWHM and eccentricity than on SNR if we care more about image details than noise.

One thing to consider when having ImageIntegration use weights calculated with SubframeSelector is that the weights obtained with SubframeSelector came from calibrated (or raw) images, but prior to using the images in ImageIntegration, they must first be registered (with StarAlignment). Registration involves data interpolation, which means the value of the statistical variables we used earlier to determine the weights would be different after registration, sometimes possibly leading to different weights. In practical terms this means that the weights calculated with SubframeSelector are accurate for the original set of (unregistered) images but only approximately accurate for the registered images we feed to ImageIntegration.

SubframeSelector can also be used purely for informational purposes, as well as for many other specific tasks. Due to the many variables it calculates and reports, the many different options to display information, graphs and exporting capabilities, SubframeSelector can be an excellent analytical tool offering plenty of details about a particular imaging session or data set.

Parameters

SubframeSelector Window

The main SubframeSelector window is where we add the files we want to analyze and where we set all the parameters used for the process to work. It's divided into three main sections: the area to load the subframes, information about the telescope and camera used, and the parameters that manage star detection. In addition, familiar sections like ROI, Format Hints and Output Files are also included.

Routine: Situated at the very top of the main SubframeSelector dialog, here we define the main task to be done by SubframeSelector. Three options are available:




• Measure Subframes: This is the default option. It analyzes all the subframes, does all the calculations, weighting, presentation, etc. so we can inspect and analyze the data, but does not create any new files anywhere.



• Output Subframes: Select this option to instruct SubframeSelector to copy (or overwrite) the approved subframes to the directory specified in the Output directory section (below). During a typical subframe selection workflow, this is often done after we have reviewed and approved/rejected subframes, run weighting or approval expressions, and done all the analysis with the images, and we would like to have a new set of just the approved images, containing weight information (if we set the Keyword parameter, see that option a bit later).



• Star Detection Preview: When this option is selected and SubframeSelector is executed, a star map image is created after the star detection process of the first frame, as well as a copy of the original image used to do the star detection, and nothing else. This option is used to analyze how effective the star detection routine is, before proceeding with the actual analysis.

Expressions/Measurements window icons: On the top-right of the SubframeSelector dialog there are two small icons that we can use if we previously closed either the Expressions or Measurements windows (both documented in a moment) but would like to open them again.

Subframes

Add Files: SubframeSelector only works with files. Click here to add the files to be analyzed.

Invert: Mark as selected all not-selected files and vice-versa.

Toggle: Enable or disable the currently selected file from the list.

Remove: Completely remove the selected image(s) from the list.

Clear: Completely remove all images from the list. Useful to start over.

Full paths: When enabled, the File column will not only display the file name but also the complete path in our storage device.

File Cache: When enabled, SubframeSelector will not analyze images whose measurements have already been made and reside in a cache used by the tool. This saves considerable computing time and it is recommended to leave it enabled. Disable it to force SubframeSelector to recompute the values for all subframes.


System Parameters

These are parameters that apply to all subframes, related to the equipment used (camera and telescope) to capture the images.

Subframe scale: Here we enter the pixel scale of our optical system, in arcseconds/pixel. This value is a combination of the telescope's focal length and the camera's pixel size. If we don't know these values, we should refer to the manufacturer specifications of the telescope and imaging sensor.

Camera gain: This is the camera gain in electrons per data number. If unknown, we should refer to our camera specifications, or use “1”, although some properties may not be accurately calculated if this value is wrong.

Camera resolution: This value indicates the resolution (“bit depth”) of our camera's sensor. At the time of this writing, most CMOS sensors (DSLR cameras and some astro cameras) use 14-bit, with CCD sensors normally using either 16-bit or 14-bit.

Site local midnight: This is a value between 0 and 23 that indicates the time in UTC (Coordinated Universal Time) when it's midnight at the location where the images were captured. If unknown, use the default value of 24. This parameter does not affect calculations; it's only used for display purposes.

Scale unit: We can specify which pixel scale unit SubframeSelector will use when displaying the values for FWHM and FWHM Mean. We can choose between arcseconds and pixels. Which unit we choose depends on specific needs. Both units can be used for the purposes stated in the “When to use” section. If the Measurements window already has files with calculated values, when we change this option, the corresponding values are updated to reflect the new units.

Data unit: We can also specify the units in which SubframeSelector will display the camera's pixel data. Values that are affected by this are the Median, Median mean deviation and Noise. When the Camera gain value is one, both options (Electrons and Data Numbers) are equivalent. As with Scale unit, which units we select is a matter of how we would like to see the data represented.
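The pixel scale requested by Subframe scale can be derived from the focal length and pixel size with the usual plate-scale formula; a short Python sketch with illustrative numbers (the 3.8 µm / 530 mm figures are just an example):

    def pixel_scale(pixel_size_um, focal_length_mm, binning=1):
        # arcseconds per pixel = 206.265 * pixel size (um) / focal length (mm)
        return 206.265 * pixel_size_um * binning / focal_length_mm

    print(round(pixel_scale(3.8, 530.0), 2))   # ~1.48 arcsec/pixel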

Star Detector Parameters

These values assist SubframeSelector in how it will find and analyze stars in the image. The recommended goal is for this process to detect between several hundred and several thousand stars, although this value will vary from image to image. In short, the ideal number is one that samples many stars thoroughly across the field of view, but not so many that the process becomes extremely slow per image. Many of these parameters can also be found in other PixInsight processes that depend on star detection, such as StarAlignment.

Structure layers: Number of small-scale wavelet layers used for structure detection. The higher the number of layers, the larger the stars being detected will be.

Noise layers: Number of wavelet layers used for noise reduction. Use this parameter to prevent detection of bright noise structures, hot pixels and/or cosmic rays. Additionally, we can also use this parameter to have SubframeSelector ignore the smallest detected stars. The higher the value, the more stars will be ignored.

Hot pixel filter: When a pixel is found to be an outlier and therefore needs to be removed, SubframeSelector applies a median filter to remove it. This is the size in pixels of the radius of that median filter. A value of zero is allowed, which effectively cancels any pixel removal operation. The maximum value is 20. Values between one and three usually work well.

Apply hot pixel filter to detection image: This option determines whether hot pixel removal should also be applied to the image used for star detection, or just to the image used to build the structure map. Enabling this option will provide very good hot pixel rejection at the expense of possibly fewer stars being detected. If left disabled (the default), SubframeSelector will detect more stars, now at the expense of possibly confusing a hot pixel with a star. As a starting point, the default option works well. If we think that hot pixels are being interpreted as stars, we would run SubframeSelector with Routine set to Star Detection Preview and verify.

Noise reduction filter: Prior to scanning each image for hot pixels, we can instruct SubframeSelector to apply some noise reduction to the image, and this value is the radius in pixels of the Gaussian filter used to do the noise reduction. A value of zero (default) means no noise reduction will be done, which is the recommended setting. In very noisy images or in images with clipped or saturated data, however, detecting true hot pixels can be trickier, and some noise reduction could actually assist in better detection. Even in these cases it's recommended to keep this value low. Very high values often lead to inaccurate rejection, modeling and poor local normalization. When the value of this option is greater than zero, it is assumed that hot pixel filtering will be applied to the detection image, even if that option (above) is not enabled.

Sensitivity: This value – by definition, the logarithm of the star detection sensitivity – measures the sensitivity when detecting stars from their local background (the background around each star). Increase it to limit star detection to bright stars, decrease it to detect fainter stars or stars over bright backgrounds. Adjustment of this parameter should not be needed for most astronomical images.


Peak response: This parameter defines star peak response, that is, how “pointy” a star needs to be in order to be detected. Increasing this value favors detecting stars with a flatter profile, while decreasing this value will require stars to have more prominent peaks in order to be detected. While adjustments of this parameter are not usually required, they may be very handy in cases of images with saturated (flat) stars, for example.

Max. distortion: Maximum star distortion. Star distortion is measured against a square, which is assigned a distortion value of one. The distortion of a perfectly circular star is about 0.78 (π/4 to be exact). Lowering this value will allow for more distortion. Increasing the value will reject more distorted stars, which can be useful to exclude elongated stars from the registration model, for example.

Upper limit: This parameter defines the highest star peak value allowed for a star to be detected. Stars with a peak value higher than this parameter will not be included. The default value of one (the highest value, white, in the [0,1] range) means no star will be rejected because of this setting. Decrease it to exclude stars that exceed a certain brightness.

Background expansion: Adjust this parameter to enlarge (or reduce) the size of the rectangular area around a star used to evaluate the background around that star. Increasing this value may produce more accurate background level estimates in images with a lot of background and isolated stars. Reduce this value to one or two for images with star-crowded fields if the default value doesn't produce an adequate number of detected stars. For most purposes, the default works well.

XY stretch: This parameter allows us to define, in sigma units, how large the area should be when detecting the center (barycenter) of a star. Increase it to obtain better accuracy for multiple stars or very star-crowded fields. If these complex situations don't appear in our image, use the default value to obtain better accuracy overall.

PSF fit: SubframeSelector can calculate the PSF of each detected star. Here, we indicate the particular PSF function used to fit the stars in the images. The Gaussian function (default) can safely be used in most cases. Data obtained with one particular PSF function (say, Gaussian) is not compatible with data obtained with a different function.

Circular PSF: When enabled, SubframeSelector fits circular PSF functions. When disabled, it fits elliptical functions. Elliptical functions have two distinct axes and a rotation angle. Elliptical functions are usually preferable, as they provide more information about the true shapes and orientations of the fitted PSFs, this being the default option. Sometimes, however, circular functions may be preferable, such as in cases of very noisy images or strongly undersampled images that rarely provide enough data to fit elliptical functions reliably.

Pedestal: If our images have a pedestal (a fixed value added to all pixels in an image), we can have the star detection process subtract that amount from the image. The unit in which we express this pedestal is data numbers, based on the Camera resolution (see above).

Region of Interest

Define a limited area within the input images on which to execute the process, as opposed to acting on the entire image data. This ROI only applies to the star detection and fitting phases. Measurements that depend on the entire image, such as the Median, are calculated on the whole image and ignore these ROI settings.

Format Hints

We can use format hints to change some characteristics about how SubframeSelector loads files (input hints) and writes them (output hints).

Output Images

This short section helps us decide what to do when SubframeSelector creates copies of approved subframes, assuming Output Subframes is the selected Routine.

Output Directory: The directory where all the output files will be saved. If left blank, each output file will be saved in the same directory as its corresponding source subframe.

Output Prefix, Postfix: These are used to tag the output files. We can modify these values as needed, although it's recommended to use the defaults, particularly in collaborative projects.

Keyword: This is the keyword that will be stored in the output files containing the weight information. The default is SSWEIGHT, but it can be set to any word that follows FITS keyword syntax and is not in use. If our plan is to use these values during image integration, we will need to set the same keyword in the ImageIntegration tool. If this field is left blank, no weight information is saved to the output, approved subframes.

Overwrite existing files: This is an added security check that, when disabled, prevents us from overwriting our originals.

On error: Specifies what SubframeSelector will do in case it runs into problems or errors: continue, stop or ask what to do.


Measurements Window

This is the window where all the results are displayed once SubframeSelector has been executed (unless the Routine parameter is set to Star Detection Preview). The window is divided into two areas: a table on top with the subframes and all the information collected about them (Measurements Table), and two graphs on the bottom half (Measurements Graph).

Parameters

Measurements Table

Sort by: Here, we select the field (column) used to sort the data in the table. To the right of this pull-down menu, we have a second pull-down menu where we define whether to sort in ascending or descending order.

Toggle Approve: Swap the approved/rejected status of the selected image(s).

Toggle Lock: Swap the locked/unlocked status of the selected image(s).

Invert: Swap the image selection. Images that were selected will be deselected and vice-versa.

Remove: Remove the selected image(s) from the table.

Clear: Remove all images from the table.

Save CSV: SubframeSelector can save the values in the table to a text file in CSV (comma-separated) format. This file can then be processed by custom scripts or applications, shared with peers, or loaded into spreadsheet software such as MS Excel or OpenOffice's Calc.

Table: The table itself has 17 columns and as many rows as processed input images, in this order:

Note: to define the column names, we deliberately use the property names (“variables” that can be used in the Expressions window), so as to document these property names at the same time. See other property names after the list.

• Index: Each subframe is assigned an index number, starting at 1.

• Approved: Activate (green check-mark) or deactivate (red X) subframe approval. We can click on the icon to swap the value of this flag.

• Locked: Lock or unlock a particular subframe(s). Also, we can toggle this status by clicking on the icon.

• Filename: This is where the filename of each subframe goes.

• Weight: The weight of each subframe, as determined by the weighting expression (see below). If no expression is set, this value is always zero.

• FWHM: The weighted Full Width at Half Maximum value, using the units set in the Scale unit parameter in the main SubframeSelector window.

• Eccentricity: The weighted star profile eccentricity for each subframe. This gives us a good idea about how distorted the profile of the stars is. The smaller the value, the less distorted the stars are.

• SNRWeight: Signal to noise ratio weight estimate for each subframe. This is a good relative estimation of how noisy each subframe is. Higher values mean less noise.

• Median: The median of each subframe, using the units set in Data unit, in the main SubframeSelector window.

• MedianMeanDev: The mean absolute deviation from the median of each subframe, also using the units set in Data unit, in the main SubframeSelector window.

• Noise: An estimate of the noise standard deviation, again represented in the units set in Data unit.

• NoiseRatio: The ratio between the number of noise pixels and the total number of pixels.

• Stars: Total number of stars detected in each subframe.

• StarResidual: This column shows the normalized mean absolute deviation between the fitted PSF model and the actual star data.

• FWHMMeanDev: Mean absolute deviation from the median FWHM, using the units set in the Scale unit parameter in the main SubframeSelector window.

• EccentricityMeanDev: Mean absolute deviation from the median eccentricity for each frame.

• StarResidualMeanDev: Mean absolute deviation from the median residual of the fitting process.

While not represented in the table, other properties whose values can be obtained (and used in the Expressions window) are the Sigma, Median, Min and Max values for all the above image properties. For example, for Sigma values, we have WeightSigma, FWHMSigma, EccentricitySigma, SNRWeightSigma, MedianSigma, MeanDeviationSigma, NoiseSigma, StarResidualSigma, FWHMMeanDevSigma, EccentricityMeanDevSigma and StarResidualMeanDevSigma. The same notation is used for the Median/Max/Min variables, i.e. WeightMax, etc. Sigma and Median/Max/Min cannot be combined, though – that is, WeightSigmaMax, for example, will not be recognized. For users of the original SubframeSelector script (not this module), note that the properties StarSupport and MeanDeviation are not supported.

Measurements Graph

The Graph window displays two different graphs. The graph on the left displays how any of the available plot ordinates changes throughout the entire subframe sequence. The abscissa or X axis represents each of the subframes (Index). The ordinate or Y axis is determined by the value in the Ordinate pull-down menu (see below). The graph on the right is a double graph that displays count ranges and probability estimates for the range of values in any of the available plot ordinates.

Both graphs are interactive. The left graph allows us to click on the subframe points to toggle their approved status and lock them (shift-click to unlock), while displaying the weight, median, FWHM and the sigma value of the FWHM as we move the mouse around the plot points. The graph on the right displays the probability for a particular range of values to happen (the vertical axis on the right representing this value in the [0,1] range), or the count for certain ranges of values – depending on whether we move the mouse over a probability point or elsewhere within the blue bar representing counts. Both graphs allow us to click and drag to zoom in, or double-click to reset the zoom factor.

Ordinate: This is where we select the variable to be used as the plot ordinate for the first graph.

Save PDF: Click here to save the current graphs to a PDF file.

Expressions Window

The Expressions window is a mechanism to automatically determine which images should be flagged as approved, as well as to determine the weight of each subframe, based on our own personal criteria. We do this by writing mathematical expressions that can use the values from all the variables in the table.

Parameters

Approval: If the expression we write here is true for any given subframe, that subframe will be flagged as approved when we click the “play” button on the right. If the expression is invalid, a red cross appears on the left of the expression window and all images are flagged as approved. Some examples (for syntax and illustration purposes); note that these expressions only return true or false:

EccentricitySigma < 3

(FWHMSigma < 2) && (SNRWeightSigma > -1)
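The Weighting expression (the source of the Weight column mentioned earlier) is written the same way, but it evaluates to a number rather than true or false. Purely as an illustration of the syntax – this particular formula is my own example, not a recommendation from this guide – a weight that rewards low FWHM, low Eccentricity and high SNRWeight could use the Min/Max properties documented above. It is entered as a single expression; the line breaks below are only for readability:

10*(1 - (FWHM - FWHMMin)/(FWHMMax - FWHMMin))
+ 10*(1 - (Eccentricity - EccentricityMin)/(EccentricityMax - EccentricityMin))
+ 20*((SNRWeight - SNRWeightMin)/(SNRWeightMax - SNRWeightMin))
+ 10

Each normalized term falls in the [0,1] range, so the resulting weights land roughly between 10 and 50; the multipliers simply decide how much each property contributes.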

Superbias Process > Preprocessing

The Superbias module reads a master bias image that was previously created from a reduced number of single bias frames, and attempts to create a new master bias image (the superbias) that is free of noise, resembling the integration of hundreds of single bias frames.

When to use Superbias

Since Superbias needs a master bias frame to work, we would use it right after having created such a master bias, and continue the calibration process using the output image as the new master (super) bias.


If our master bias was created from a large number of single bias frames (over 100), Superbias probably won't add a quantifiable benefit to our calibration efforts, and we can safely skip the process of creating a superbias master frame. Because Superbias attempts to reproduce the column structures of the bias pattern, it should not be used with master bias images from sensors that have bad partial columns. There have also been reports about Superbias not working well with certain CMOS sensors or sensors suffering from ampglow.

It is important to understand that, in many cases, the difference between using a master bias from an “okay” number of subframes (20 or more) and using a superbias will be negligible. While there will definitely be less noise added to the light frames, the amplitude is so low that any differences would be marginal. That said, it's a process so fast and easy to apply that, unless we experience noticeable problems, it's still worth applying.

Parameters

Orientation: Here, we select the orientation of the noise structures at the pixel level. The default, Columns, works for most sensors. Some CMOS sensors may require Columns and rows. Just Rows is unusual.

Multiscale layers: Superbias uses a multiscale approach to isolate oriented and large-scale structures. This is the number of layers that will be analyzed. The default value of 7 works well for master bias frames that were created from a stack of approximately 20 single bias frames. We can try to decrease this value to 6 or 5 for larger stacks.

Median transform: The multiscale analysis and decomposition done by Superbias can be done either via a median transform (in which case, we enable this option) or via a Starlet transform (wavelets). The default (Median transform enabled) is recommended.

Exclude large-scale structures: When enabled, the larger-scale layers of the input master bias are removed internally before computing the column (or row) mean values. This generally produces a more accurate representation of the column (or row) bias levels in the superbias frame. It is recommended to leave it enabled.
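Superbias' actual multiscale (median/starlet) decomposition is considerably more sophisticated, but the column-oriented idea behind the Orientation and Exclude large-scale structures parameters can be conveyed with a toy numpy sketch that simply replaces each column of a master bias with its mean value – a deliberate oversimplification, shown only to illustrate the concept:

    import numpy as np

    def toy_column_superbias(master_bias):
        # Toy illustration only: collapsing each column to its mean keeps the
        # column-oriented bias structure while averaging out random noise.
        col_means = master_bias.mean(axis=0)                  # one value per column
        return np.tile(col_means, (master_bias.shape[0], 1))

    bias = np.random.default_rng(0).normal(0.05, 0.002, (100, 200))
    print(toy_column_superbias(bias).std() < bias.std())      # True: much smoother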


TGVDenoise Process > NoiseReduction

TGVDenoise is a novel noise reduction tool based on Total Generalized Variation (TGV), a fairly new mathematical framework with multiple image optimization applications in addition to noise reduction. While the edge preservation and noise removal features of TGVDenoise are similar to other existing noise reduction algorithms, one of the most notable characteristics that differentiates TGVDenoise (or any other TGV-based process) is that it can also be applied in situations where the image is not assumed to be piecewise constant, avoiding the staircase effect that other noise reduction methods generate and that results in more blocky, less faithful results.

TGVDenoise relies currently on two parameters – Strength and Edge protection – that can be more challenging to fine-tune than those in many other PixInsight processes, for which the default values often work well. Ultimately, as often happens, it may take several attempts before we warm up to the process of adjusting these values.

When to use TGVDenoise

TGVDenoise is an excellent choice whenever we would like to reduce the amount of visible noise in our images. Most workflows place TGVDenoise during the nonlinear stage; however, TGVDenoise can be used with both linear and nonlinear images. It is important to know that, when applied to linear images, TGVDenoise preserves image linearity but not photometric accuracy. TGVDenoise can also be used at both stages – linear and nonlinear – during the same processing session, with a gentle application when the image is still linear, and a more aggressive application later, when a nonlinear stretch has already been applied to the image.


However we apply it, TGVDenoise is particularly effective with high-frequency noise, while low-frequency noise is usually better addressed with other tools like MultiscaleLinearTransform.

Parameters

RGB/K mode: Apply TGVDenoise to the RGB channels of a color image, or to the grayscale component of a monochrome image. When selected, the Chrominance tab becomes unavailable.

CIE L*a*b* mode: Apply TGVDenoise separately to Lightness and Chrominance, by separating the L* (lightness) and a*b* (color) components in the CIE L*a*b* color space.

Apply: When enabled, TGVDenoise will be applied to the active component (active tab).

Strength: Strength of the diffusion process that smooths the image. Higher values will result in stronger smoothing of the image. A balance between this value and Edge protection (next) is key to successful noise reduction while preserving significant structures. While the default value may be a good starting point for some nonlinear images, it is often too high when applied to linear images.

Edge protection: Often dubbed “the most critical parameter of the TGVDenoise tool,” here we define a threshold of protection over edge features in the image. The lower the value, the higher the protection over small details, and vice-versa: the higher the value, the more noise reduction is applied over these edge features.

Smoothness: This parameter defines how smooth the noise reduction is going to be. As stated earlier, TGVDenoise does not assume the image to be piecewise constant, but rather “piecewise smooth” (the image surface slowly varies), and with this parameter we can adjust this “smoothness.” For that reason, if we lower this value from the default 2, the staircase effect that TGVDenoise so well avoids may start to appear. It is recommended to use the default value of 2 in most cases.

Iterations: TGVDenoise is an iterative process where, in each iteration, it slightly diffuses the image based on the latest state of the image, until it reaches a convergence point where TGVDenoise determines that no more appreciable diffusion is possible. The default 100 is a good starting point while trying to fine-tune the rest of the parameters. However, for a final application of TGVDenoise, a minimum of 300 to 500 iterations is recommended. This parameter depends on the setting for Automatic convergence (next).

Automatic convergence: When enabled, TGVDenoise will stop iterating when it detects that the difference between the last two iterations is smaller than the Convergence value (next), or when the number of iterations reaches the value in the Iterations parameter, whichever happens first. When disabled, TGVDenoise will execute the number of iterations set in the Iterations parameter, regardless. Note that when Automatic convergence is enabled, TGVDenoise will not work on an image preview. We would need to apply it to the entire image in order to evaluate the results.

Convergence: When Automatic convergence is enabled, if the norm of the difference between two iterations is smaller than this value, TGVDenoise will stop iterating. The default value works well as a good convergence point. Increasing this value could cause TGVDenoise to stop too soon, before an acceptable convergence has actually been reached, while decreasing this value may result in unnecessary iterations.
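Schematically, the interaction between Iterations, Automatic convergence and Convergence works like the following Python sketch; the step function stands in for one diffusion iteration and none of this is PixInsight's actual algorithm or default values:

    import numpy as np

    def iterate(step, image, iterations=100, automatic_convergence=True,
                convergence=1e-3):
        current = np.asarray(image, dtype=float)
        for _ in range(iterations):                  # hard upper limit
            previous, current = current, step(current)
            delta = np.linalg.norm(current - previous) / current.size
            if automatic_convergence and delta < convergence:
                break                                # converged early
        return current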

Local Support

TGVDenoise offers a local protection mechanism via a support image (similar to a mask), and due to the nature of the algorithm, such support is often necessary. In the case of linear images, it is actually required, mainly because linear images mostly suffer from Poisson noise whereas TGVDenoise assumes constant Gaussian noise, so a local support image helps TGVDenoise discriminate between high and low SNR areas in a more Poisson-like fashion.

Preview: If enabled, when we apply the process to an image, rather than applying the noise reduction, TGVDenoise will show us a preview of the local support image that would be used during the iterative denoising process. This gives us an idea of what the support image looks like after adjusting the four parameters under Support Image (next).

Support image: Here, we can select an image to be used as the local support image. Only grayscale images of the same pixel dimensions as the target image will work. If we enable Local Support but don't specify a support image, TGVDenoise will use the intensity component (I in the HSI color model, analogous to lightness) of the target image as the local support image.

Noise reduction: We can apply some noise reduction to the support image on-the-fly by adjusting this parameter. This is the number of wavelet layers to be removed from the support image. A value of zero means no noise reduction is applied to the support image. For most images, that's the preferred option, although very noisy images may benefit from a softer support image.

Midtones/Shadows/Highlights: We can also adjust the histogram of the support image with these three parameters. They define a simple histogram transform that is applied to the support image. Increasing the midtones value tends to remove protection and lowering it causes the opposite effect, while increasing the shadows will remove protection very fast. Lowering the highlights will add protection, but very low values usually protect way too much.
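Assuming these three parameters apply the standard shadows/midtones/highlights transform used elsewhere in PixInsight (shadows/highlights clipping followed by the midtones transfer function MTF(m, x) = (m − 1)·x / ((2m − 1)·x − m)), the mapping can be sketched in numpy like this, for intuition only:

    import numpy as np

    def histogram_transform(x, shadows=0.0, midtones=0.5, highlights=1.0):
        # Clip to [shadows, highlights], rescale to [0,1], then apply the
        # midtones transfer function MTF(m, x) = (m - 1)x / ((2m - 1)x - m).
        x = np.clip((np.asarray(x, float) - shadows) / (highlights - shadows), 0, 1)
        m = midtones
        return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

    # Raising the midtones value darkens the support (removing protection, as
    # described above); lowering it brightens the support and protects more.
    print(histogram_transform([0.1, 0.3, 0.7], midtones=0.25))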

UnsharpMask Process > Convolution

Despite what its name might suggest, the unsharp mask method is used to sharpen our images, not to “unsharpen” them. The disorienting name comes from the way this sharpening process works: a blurred (unsharp) version of the original image is created and then subtracted from the original as an edge detection mechanism, creating the unsharp mask, which is effectively a high-pass filter. With that in place, local contrast is enhanced, creating the effect of a sharper image.

UnsharpMask is PixInsight's implementation of this classic sharpening algorithm, plus a few perks, such as an accurate threshold parameter to protect dark and low SNR regions from unsharp masking effects, an efficient deringing mechanism – so we can apply an unsharp mask without generating dark halos or rings around bright image features, for example – and dynamic range extension parameters.
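In generic terms, the classic algorithm looks like the following numpy/scipy sketch – an illustration of the idea only, not PixInsight's implementation, which adds the threshold, deringing and dynamic range extension features described below:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, std_dev=2.0, amount=0.8):
        # Subtract a Gaussian-blurred copy to isolate fine detail (a high-pass
        # residual), then add a fraction of that detail back to the original.
        blurred = gaussian_filter(image, sigma=std_dev)
        detail = image - blurred
        return np.clip(image + amount * detail, 0.0, 1.0)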

When to use UnsharpMask

UnsharpMask is not an image reconstruction process but a detail-enhancement mechanism. It works best to correct for Gaussian blur in nonlinear images, usually late in the processing workflow and without abusing it (less is often better). It is recommended to use UnsharpMask with a mask protecting low signal areas. Since UnsharpMask offers a deringing option, having the mask protect structures that might suffer from ringing is optional. That said, there are other tools in PixInsight that allow us to enhance details and even reconstruct image features much more efficiently, therefore the use of UnsharpMask is definitely optional.


Parameters

USM Filter

StdDev: Standard deviation in pixels of the Gaussian filter. Higher values will sharpen larger structures, while smaller values will execute the sharpening at lower dimensional scales. Note the two sliders to adjust this parameter, with the top slider used to set values between 10 and 250 and the bottom slider for fine adjustments between 0.10 and 10. Values between 0.5 and 2 tend to work best, with higher values usually resulting in severe artifacts.

Amount: This is the filter strength: 1 to fully apply the filter, 0.10 (the minimum allowed value) for minimal effect.

Target: The image components to which the filter will be applied:

• Lightness (CIE L*): Apply the unsharp mask filter only to the lightness of the target image. This option may produce less prominent halos (ringing) in color images than targeting the RGB components, but it may also lead to color artifacts.



• Luminance (CIE Y): Use the CIE Y component (luminance). We select this option to apply the filter to the luminance of a linear RGB color image. Since it is rare to use UnsharpMask on linear astronomical images, selecting this option is not common.



• RGB/K components: Apply the filter to each of the RGB components of the target image individually. We should use this option with color images when the other options generate visible color artifacts in the image.

Deringing

For information about ringing artifacts and deringing, please review the documentation on the topic in MultiscaleLinearTransform.

Dark: Deringing regularization strength for dark ringing artifacts. Increase to apply a stronger correction to dark ringing artifacts. The best strategy is to find the lowest value that effectively corrects the ringing, without overdoing it.

Bright: This parameter works exactly like Dark but for bright ringing artifacts. Since each image is different, the right amount varies from image to image. It is recommended to start with a low value – such as 0.1 – and increase it as needed, before over-correction becomes obvious.

Output deringing maps: Generate dark and bright deringing map images, as long as their deringing strength value is not zero.

Dynamic Range Extension

The dynamic range extension works by increasing the range of values that are kept and rescaled to the [0,1] standard range in the processed result. Use the following two parameters to define different dynamic range limits. We can control both the low and high range extension values independently.

Low Range: Shadows dynamic range extension.

High Range: Highlights dynamic range extension.
