It would be a Good Thing if we could explain observed images in terms of a 'noise model' - that is, a model that explains why the signal and the noise bear a certain relationship.
We consider here a stack of 100 raw images taken in rapid succession. The difference between adjacent images should contain only noise, since any objects present in both images should subtract away. The difference between two independent series drawn from the same Poisson distribution has a variance that is twice the variance of either series, so we apply a factor of 1/2 to the difference image to obtain the variance image.
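As a minimal sketch of this construction - using NumPy, a synthetic scene, and an arbitrary 64x64 image size, none of which come from the actual reduction - the variance image from two adjacent frames could be built like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic 'adjacent' frames drawn from the same Poisson scene
scene = rng.uniform(50, 500, size=(64, 64))   # hypothetical mean counts
frame_a = rng.poisson(scene).astype(float)
frame_b = rng.poisson(scene).astype(float)

# Var(A - B) = Var(A) + Var(B) = 2 * Var(one frame), hence the factor 1/2:
variance_image = 0.5 * (frame_a - frame_b) ** 2
mean_image = 0.5 * (frame_a + frame_b)

# For Poisson data, variance equals mean, so globally the ratio is near 1
print(variance_image.mean() / mean_image.mean())
```

For purely Poissonian synthetic data the printed ratio comes out close to 1, which is the expectation the rest of this test rests on.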
We subdivide such a variance image into smaller and smaller squares, and in each square we calculate the variance. We generate an image of the ratio between the variance image and the mean original image, also subdivided. In the case where everything is Poissonian this image should be a surface with value 1. In reality there will be noise in this surface - and there will be structures wherever imperfect object subtraction took place. Here is the result:
In the three rows we show results from different subdivisions - into 8x8, 4x4 and 2x2 squares. In the leftmost column is a histogram of the values of the image, and to the right is a plot of the profile across the middle of the image.
We see, to the right, that variance over mean (VOM) is not 1 everywhere. We are evidently picking up a good deal of variance near lunar disc edges [the image corresponding to the above is a quarter Moon with the BS to the right and the DS to the left, centered in the image field]. We see that the sky manages to have VOM near 1 and that parts of the DS do this, but that most of the BS appears to have VOM > 1. Even ignoring the peaks that are due to edges we see values of VOM near 2 or more [also seen in the rest of the 100-image sequence].
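The subdivision step can be sketched as follows. This reads 'subdivide' as: average the variance image and the mean image over each square, then take the ratio - one plausible reading of the procedure, not necessarily the exact one used; the function name and interface are made up for illustration.

```python
import numpy as np

def block_vom(variance_image, mean_image, n):
    """Ratio of block-averaged variance to block-averaged mean,
    for an image subdivided into n x n squares."""
    h, w = variance_image.shape
    bh, bw = h // n, w // n
    vom = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sl = (slice(i * bh, (i + 1) * bh),
                  slice(j * bw, (j + 1) * bw))
            vom[i, j] = variance_image[sl].mean() / mean_image[sl].mean()
    return vom
```

On a purely Poissonian difference image every entry of the returned n x n surface should scatter around 1, with larger scatter for finer subdivisions (fewer pixels per square).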
Since we used raw images we had to subtract a bias level: we subtracted 390 from all raw image pixels. We then applied the gain factor of 3.8 e/ADU to all bias-subtracted image values. We then calculated the variance and subtracted the estimated RON (estimated elsewhere at near sigma_RON = 2.18 counts, or variance_RON = 4.75 counts²; 2.18 counts × 3.8 e/ADU ≈ 8.3 electrons, consistent with the ANDOR camera manual's "8.3 electrons").
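Putting the calibration steps together, a sketch of the whole chain might look like the following. The constants are taken from the text; the order of operations (RON subtracted from the halved difference-image variance, in electron units) is an assumption about the reduction, not a statement of exactly how it was done.

```python
import numpy as np

BIAS = 390.0          # bias level subtracted from raw pixels (ADU), from the text
GAIN = 3.8            # gain factor (e-/ADU), from the text
SIGMA_RON_ADU = 2.18  # read-out noise (ADU); 2.18 * 3.8 ~ 8.3 e-, per the manual

def vom_image(raw_a, raw_b):
    """Variance-over-mean image from two adjacent raw frames.
    Assumed order: bias, gain, then RON subtraction in electron units."""
    a = (np.asarray(raw_a, float) - BIAS) * GAIN   # convert to electrons
    b = (np.asarray(raw_b, float) - BIAS) * GAIN
    var_img = 0.5 * (a - b) ** 2                   # Var(A-B) = 2 Var(A), so halve
    var_img -= (SIGMA_RON_ADU * GAIN) ** 2         # remove RON variance (e-^2)
    mean_img = 0.5 * (a + b)
    return var_img / mean_img
```

With synthetic frames built from Poisson photon counts plus Gaussian read-out noise at these values, the resulting VOM image averages to about 1, so the chain is at least internally consistent.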
The subtracted bias is a little small - the observed mean bias value is nearer 397 - but if we use that value we get strange effects in the images: only a relatively low value of the bias mean gives a 'flat profile' for the sky in a slice across the image. This is one poorly understood observation.
We also do not understand why the VOM is nicely near 1 on the DS while it fails to be clearly near 1 on the BS - both areas of the Moon have spatial structure which is bound to contribute to the variance of the difference image during slight misalignments.
That is the second poorly understood thing.
Progress towards a 'noise model' is therefore underway, but there is some distance to go still.
How would non-linearity in the CCD influence the expectations we have from Poisson statistics?
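One way to get a feel for this question: if the recorded signal is a non-linear function of the true counts, say m = x - alpha*x² with some small hypothetical coefficient alpha (a toy model, not a measured property of this CCD), then by error propagation Var(m) ~ (1 - 2*alpha*x)² * Var(x), so VOM drifts away from 1 as the count level rises. A quick simulation illustrates the size of the effect:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1e-6   # hypothetical quadratic non-linearity coefficient (illustrative)

voms = {}
for mu in (1_000, 10_000, 50_000):
    x = rng.poisson(mu, size=200_000).astype(float)
    m = x - alpha * x ** 2        # toy non-linear detector response
    voms[mu] = m.var() / m.mean()
    print(mu, round(voms[mu], 3))  # VOM falls below 1 as counts grow
```

Note the direction of the effect: a compressive non-linearity pushes VOM below 1 at high counts, so it cannot by itself explain the VOM > 1 seen on the BS - but it does show that non-linearity would make the Poisson expectation level-dependent rather than uniformly 1.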