Fits to maps of different resolution

Increasing the resolution of temperature maps

Here we discuss another example of modified blackbody fits: the fitting of surface brightness images (dust emission) that are observed at different wavelengths and with different spatial resolutions (different beam sizes). The goal is again to derive images of parameters such as the colour temperature of the dust emission. When one analyses spectra built from such image data, the normal approach is to convolve all observations to the lowest common resolution. This ensures that the data at each wavelength correspond to emission from the same area on the sky. However, this can be wasteful if the input images have very different spatial resolutions: one is practically throwing away all the information that the sharper maps contained on smaller angular scales.
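The convolution to a common resolution could be sketched as below. This is only an illustration, assuming Gaussian beams whose FWHM values are known in pixel units; the function and variable names are made up for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_TO_SIGMA = 1.0 / (8.0 * np.log(2.0)) ** 0.5   # sigma = FWHM / 2.3548

def to_common_resolution(maps, fwhm_pix, fwhm_target_pix):
    """Convolve each map to a common (usually the coarsest) resolution.

    maps            : list of 2D surface brightness arrays, one per wavelength
    fwhm_pix        : beam FWHM of each map, in pixels (Gaussian beams assumed)
    fwhm_target_pix : target FWHM in pixels, typically max(fwhm_pix)
    """
    out = []
    for m, f in zip(maps, fwhm_pix):
        # FWHM of the Gaussian that takes the beam from f to the target value
        sigma = FWHM_TO_SIGMA * (fwhm_target_pix**2 - f**2) ** 0.5
        out.append(gaussian_filter(m, sigma) if sigma > 0.0 else m.copy())
    return out
```

After this step all maps share the same effective beam and a pixel-by-pixel comparison of the intensities becomes meaningful.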

One alternative would be to treat this as a fitting problem where the goal is to determine high-resolution maps of the model parameters, which include the total intensity and the colour temperature of the emission. The spectral index is here assumed to be constant, but this is not a necessary assumption. These maps define a model that is compared to the observations and optimised to find the best correspondence. The comparison includes a convolution from the higher resolution of the model parameter maps down to the lower resolution of the observed surface brightness maps. The model resolution could be similar to the highest resolution among the individual observed maps. An even higher resolution might be possible, but in that case the problem easily becomes unstable: deconvolution to higher resolution requires data with high signal-to-noise ratios and good knowledge of the shapes of the beams used in the observations.
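As a minimal sketch of such a model, one can parameterise the emission with per-pixel maps of the intensity at a reference frequency and of the colour temperature, keeping the spectral index constant. The names, the Gaussian beams, and the unnormalised Planck function are assumptions of this illustration, not the actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

H, K = 6.62607e-34, 1.38065e-23   # Planck and Boltzmann constants (SI)

def planck(nu, T):
    """Planck function B_nu(T), up to a constant factor."""
    return nu**3 / (np.exp(H * nu / (K * T)) - 1.0)

def model_maps(I_ref, T, beta, freqs, nu0, beam_sigmas):
    """Predict observed maps from high-resolution parameter maps.

    I_ref       : 2D map of intensity at the reference frequency nu0
    T           : 2D map of colour temperature [K]
    beta        : spectral index (a single constant here)
    freqs       : observed frequencies [Hz]
    beam_sigmas : Gaussian beam width in pixels, one per frequency
    """
    out = []
    for nu, s in zip(freqs, beam_sigmas):
        # modified blackbody spectrum, scaled from the reference frequency
        S = I_ref * (nu / nu0)**beta * planck(nu, T) / planck(nu0, T)
        # convolve from the model resolution down to the observed resolution
        out.append(gaussian_filter(S, s))
    return out
```

The free parameters are thus the pixels of I_ref and T, and the convolution is the step that couples them to the observed, lower-resolution maps.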

This is in principle a straightforward optimisation problem, but in practice it is difficult, especially when one is dealing with large surface brightness maps. There are a few times more free parameters than there are pixels in the model maps, one set per parameter map. What is worse, all the parameter values depend on each other via the convolution operations. Whenever a pixel value changes in the model, the comparison to the observations requires a very expensive recalculation of the convolved model predictions, one image per observed map. One could use a standard library routine for the optimisation (the optimised function thus also including the convolutions), but in practice this may be too slow or may not work at all. After all, the number of free parameters can easily run into the millions.
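The cost structure can be made concrete with a chi-square function of the kind one would hand to a generic optimiser. This is only a sketch, assuming Gaussian beams and a uniform noise level sigma_n; every evaluation repeats the full convolutions, which is exactly what makes the generic approach so expensive.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

H, K = 6.62607e-34, 1.38065e-23   # Planck and Boltzmann constants (SI)

def planck(nu, T):
    """Planck function B_nu(T), up to a constant factor."""
    return nu**3 / (np.exp(H * nu / (K * T)) - 1.0)

def chi2(p, obs, sigma_n, freqs, nu0, beta, beam_sigmas, shape):
    """Chi2 over all maps for a flattened parameter vector [I_ref, T].

    Every call reconvolves the model for every observed map, so a generic
    optimiser pays the full convolution cost at each function evaluation.
    """
    n = shape[0] * shape[1]
    I_ref, T = p[:n].reshape(shape), p[n:].reshape(shape)
    c2 = 0.0
    for S_obs, nu, s in zip(obs, freqs, beam_sigmas):
        S = I_ref * (nu / nu0)**beta * planck(nu, T) / planck(nu0, T)
        c2 += np.sum(((gaussian_filter(S, s) - S_obs) / sigma_n)**2)
    return c2

# A 1k x 1k model with two parameter maps already means two million
# coupled parameters, e.g. for scipy.optimize.minimize:
#   res = minimize(chi2, p0, args=(obs, ...), method='L-BFGS-B')
```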

Beam sizes are typically much smaller than the full size of the images. Therefore, in practice, the update of a single parameter value only affects the surface brightness predictions within a rather limited neighbourhood. If there were a very fast way to test updates, including the calculation of the changes in the convolved model predictions at the different wavelengths, one could use a simple stochastic gradient descent algorithm and still find the solution within a reasonable time.
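The local nature of such an update can be illustrated for a single wavelength. If one model pixel changes its prediction by dS, the convolved map changes by dS times the beam kernel, so the change in chi2 can be evaluated over the kernel footprint alone instead of the whole image. The sketch below assumes a precomputed residual map (convolved model minus observation), a uniform noise level, and an odd-sized kernel image; the names are illustrative.

```python
import numpy as np

def local_delta_chi2(resid, sigma_n, kern, j, i, dS):
    """Change in chi2 when the prediction of model pixel (j, i) changes by dS.

    resid   : current residual map (convolved model minus observation)
    sigma_n : uniform noise level of the observed map
    kern    : beam kernel image with odd dimensions, centred on its middle pixel

    Only the beam footprint around (j, i) is touched, which is what makes
    single-pixel updates cheap compared to a full reconvolution.
    """
    r = kern.shape[0] // 2
    ny, nx = resid.shape
    # clip the kernel footprint to the edges of the map
    j0, j1 = max(0, j - r), min(ny, j + r + 1)
    i0, i1 = max(0, i - r), min(nx, i + r + 1)
    K = kern[j0 - j + r : j1 - j + r, i0 - i + r : i1 - i + r]
    R = resid[j0:j1, i0:i1]
    # chi2 difference between the perturbed and the current residuals
    return np.sum((R + dS * K)**2 - R**2) / sigma_n**2
```

An update that lowers the total chi2 over all wavelengths can then be accepted by patching the residual maps in place over the same footprints, so the full convolved images never need to be recomputed during the iterations.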

To be continued…