# Earthshine blog

## "Earthshine blog"

A blog about a system to determine terrestrial albedo by earthshine observations. Feasible thanks to sheer determination.

## Fit variability from a single image

Post-Obs scattered-light removal. Posted by Peter Thejll, Apr 03, 2014, 08:18 AM
We explore how well albedo can be determined from just a single image, varying the fit approaches.

A single 100-image stack of the Moon was selected, aligned (or not; see below), bias-reduced, averaged, and then subjected to our model fit. The fit selects nine 21-row-wide 'strips' cutting across the DS (dark side) edge of the disc, so that each strip offers 50 columns of sky and then 100 columns of disc to fit on. The 21 rows are averaged in the observation as well as in the model.
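The strip-extraction step above can be sketched as follows. This is an illustrative reconstruction, not the pipeline's actual code: the function name, the row placement of the nine strips, and the `row_step` spacing are all assumptions; only the 21-row height and the 50-sky/100-disc column split come from the text.

```python
import numpy as np

def extract_strips(image, edge_col, n_strips=9, strip_rows=21,
                   sky_cols=50, disc_cols=100, row_start=0, row_step=40):
    """Cut n_strips horizontal strips crossing the DS edge of the disc.

    Each strip is strip_rows tall and is averaged down to a single
    row profile covering sky_cols columns of sky followed by
    disc_cols columns of disc, ready for the model fit.
    """
    profiles = []
    for i in range(n_strips):
        r0 = row_start + i * row_step
        strip = image[r0:r0 + strip_rows,
                      edge_col - sky_cols:edge_col + disc_cols]
        profiles.append(strip.mean(axis=0))   # average the 21 rows
    return np.array(profiles)                 # shape (n_strips, 150)
```

Each returned row is one sky-plus-disc profile; fitting the model to each profile independently is what yields one albedo estimate per strip.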

Applying this method we get 9 estimates of the terrestrial albedo. We selected three methods for centering the stack images:

1) Align the images in the stack once allowing for sub-pixel shifts, then average
2) Align iteratively, refining the alignment wrt an updated reference image, then average
3) Do not align at all, just average
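Methods 1 and 3 can be sketched with a single coadding routine; method 2 amounts to calling it repeatedly, re-estimating the shifts against the updated mean. This is a minimal sketch using `scipy.ndimage.shift` for the sub-pixel interpolation, not the pipeline's actual implementation; the function name and arguments are assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def coadd(images, shifts=None, order=3):
    """Average a stack of frames, optionally aligning each one first.

    shifts: list of (dy, dx) sub-pixel offsets, one per frame
            (method 1), or None to skip alignment (method 3).
    order:  spline interpolation order used for the sub-pixel shift.
    """
    if shifts is None:
        return np.mean(images, axis=0)          # method 3: just average
    aligned = [nd_shift(im, s, order=order, mode='nearest')
               for im, s in zip(images, shifts)]
    return np.mean(aligned, axis=0)             # method 1: align, then average
```

Method 2 would wrap this in a loop: coadd, re-measure each frame's offset relative to the new mean, and repeat until the shifts stabilise.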

In this figure we show the albedo determined from the three sets. The determinations are shown in sequence, separated by vertical dot-dashed lines: methods 1, 3 and 2 (in that order). The overall mean and standard deviation are also shown.

We see systematic deviations: all method 1 values lie below the mean, all method 3 values lie above it, and method 2 values fall a bit above and a bit below.

The mean of these values is 0.3047 with a standard deviation of the mean of 0.0014 (0.45% of the mean), which is not too bad! However, we could clearly do better if the method dependency were not present. How do we remove it?
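For reference, the statistic quoted above is the standard error of the mean, i.e. the sample standard deviation divided by the square root of the number of estimates. The nine albedo values themselves are not reproduced in the post, so the numbers below are invented purely to illustrate the computation:

```python
import numpy as np

# Placeholder strip albedos -- illustrative values only, not the
# actual nine estimates from the fit.
albedos = np.array([0.303, 0.306, 0.304, 0.305, 0.306,
                    0.305, 0.304, 0.303, 0.306])

mean = albedos.mean()
sem = albedos.std(ddof=1) / np.sqrt(albedos.size)  # std. dev. of the mean
print(f"mean = {mean:.4f}, sdm = {sem:.4f} ({100 * sem / mean:.2f}% of mean)")
```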

Methods that shift images by non-integer-pixel amounts risk being non-conservative, i.e. the total and area fluxes are not conserved. We have estimated elsewhere in this blog how bad the problem can be [see http://iloapp.thejll.com/blog/earthshine?Home&post=369 and http://iloapp.thejll.com/blog/earthshine?Home&post=299]. We found the effect to be slight: means change by much less than 0.1% during a sub-pixel shift of the image.
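A quick synthetic check in the spirit of those tests (synthetic frame, not real data, and not the test code from the linked posts):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

# How much does the mean flux of a frame change under a sub-pixel
# shift with cubic-spline interpolation?
rng = np.random.default_rng(1)
frame = rng.random((200, 200)) + 5.0        # bright frame, all-positive flux
shifted = nd_shift(frame, (0.3, 0.7), order=3, mode='nearest')

inner = np.s_[10:-10, 10:-10]               # ignore the edges
change = abs(shifted[inner].mean() / frame[inner].mean() - 1)
print(f"relative change in mean flux: {change:.2e}")
```

As the text says, the interior mean survives a sub-pixel shift to well within a tenth of a percent on smooth data; the damage is concentrated at the edges, which is why they are excluded here.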

Not aligning the images at all risks comparing 'smeared' observations with 'knife-sharp' model images. The smearing occurs due to image wander during the exposure and can amount to a pixel or two.

Revision of our alignment methods seems in order, given the above. Perhaps we should shift by whole pixels only? That is at least flux-conservative away from the edges.
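The whole-pixel option is easy to check: an integer shift involves no interpolation at all, so every pixel value simply moves. On a periodic array (`np.roll`) the total flux is conserved exactly; a real stacker would pad or crop instead of wrapping, which conserves flux everywhere except at the frame edges, as noted above.

```python
import numpy as np

rng = np.random.default_rng(2)
frame = rng.random((64, 64))

# Whole-pixel shift: pure relabelling of pixel positions, no interpolation.
rolled = np.roll(frame, shift=(3, -5), axis=(0, 1))

print(np.isclose(rolled.sum(), frame.sum()))   # total flux conserved
```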

Note that method 3 (the middle set) has the least internal scatter: for this set the standard deviation of the mean is just 0.2% of the mean. That is pretty strange, actually, since this is the 'do not shift images before coadding' method. Hmm. Then again, perhaps it is not strange, and we are simply seeing that the fuzziest image yields the most stable fit: since the strips select different parts of the light-and-dark lunar surface to fit on, contrast causes the fit to wander if the image is sharp. But on the third hand, our method now allows for 'contrast scaling'. Hmmm, mark 2.

Shifting and co-adding involves two issues. First, the interpolation required for a given non-integer pixel shift may not be flux-conservative (our tests seem to show this is only a small problem). Second, the shift itself may be poorly estimated. Our shifting method is based on correlation; it does not look specifically at questions such as 'are the edges still sharp?', as Sally Langford's Laplacian edge-detection method does. Using edge information alone is not clearly the best method either, since it rests on very few image pixels and thus suffers from noise. Perhaps a hybrid method can be envisaged: 'correlation plus maintaining sharp edges'?
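The two ingredients of such a hybrid might look like the sketch below: a brute-force correlation search over integer shifts, plus a Laplacian sharpness score that could re-rank the best correlation candidates. Everything here is illustrative; the function names, the search radius, and the scoring are assumptions, not our pipeline's (or Langford's) actual implementation.

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(image):
    """Laplacian-based sharpness score: high when edges are crisp
    (in the spirit of Langford's edge-detection criterion)."""
    return np.mean(laplace(image.astype(float)) ** 2)

def best_integer_shift(ref, img, search=3):
    """Brute-force integer-pixel shift of img maximising its
    correlation with ref. A hybrid scheme could re-rank the top
    candidates by the sharpness of the resulting coadd."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = np.roll(img, (dy, dx), axis=(0, 1))
            score = np.sum(ref * cand)          # plain correlation
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

Restricting the search to whole pixels keeps the estimate flux-conservative, which ties this back to the revision proposed above.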