Real World ProblemsPosted by Peter Thejll Apr 21, 2014 09:58AM
If we take a stack of 100 images, calculate the total counts in each frame, tabulate these, and then compare the mean of the list of totals to its variance, we find that the variance is MUCH larger than the mean. This should not be the case if the totals were Poisson-distributed.
We find that the variance is 100s to 1000s of times larger than the mean of the list.
This could possibly be due to shutter variations - the shutter is ... less than fantastic ... after all.
Let us inspect the problem and see whether it was smaller at the beginning of operations at MLO. Perhaps wear made it worse as time went on? We did collect 50,000 images or so.
Perhaps atmospheric transmission changes on short time scales are this big? Each of the 100 images in a stack took about a second to capture. Does sky transparency change by a lot over that timescale, on a 1x1 degree field?
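The variance-to-mean test can be sketched in a few lines (a numpy sketch, not our IDL pipeline; the synthetic stack, frame size and 1% jitter level are illustrative assumptions, not our actual data):

```python
import numpy as np

def variance_to_mean_ratio(stack):
    """Total counts per frame for a stack of images; for Poisson-
    distributed totals the variance/mean ratio should be near 1."""
    totals = stack.sum(axis=(1, 2))   # total counts in each frame
    return totals.var(ddof=1) / totals.mean()

# Illustration with synthetic data: a purely Poisson stack gives a
# ratio near 1, while a 1% multiplicative frame-to-frame jitter
# (shutter or transparency variations) on bright frames inflates the
# ratio into the hundreds -- the behaviour seen in the real stacks.
rng = np.random.default_rng(1)
poisson_stack = rng.poisson(1000.0, size=(100, 64, 64)).astype(float)
jitter = rng.normal(1.0, 0.01, size=(100, 1, 1))
jittery_stack = poisson_stack * jitter
print(variance_to_mean_ratio(poisson_stack))   # near 1
print(variance_to_mean_ratio(jittery_stack))   # hundreds
```

Even a sub-percent multiplicative error per frame dominates the Poisson variance completely at these count levels, which is consistent with the shutter being a plausible culprit.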
Real World ProblemsPosted by Peter Thejll Feb 13, 2014 07:16PM
An artist, Jeremy Sharma, in Singapore, has asked us what Pantone colour code the earthshine colour corresponds to. He heard of us through the Guardian story.
I can see two ways to answer him: one is via a conversion from colour temperature to Pantone code (or, equivalently, hexadecimal RGB code); the other is to take a realistic image of Earth and average the R, G and B channels.
I cannot yet find a conversion from colour temperature to Pantone code or RGB intensities, but the temperature to use is given by B-V=0.44, which is near 6400 K, I think.
For these images of Earth: http://news.bbc.co.uk/2/hi/8547114.stm
I can take the average of the R, G and B channels of both images - omitting the black sky around the Earth.
the RGB averages are:
89.5127 92.4462 115.971
or, truncated to integers,
89 92 115
The hexadecimal equivalent of this triplet is #595C73
(use IDL 'Z' format to work that out ...)
On this page: http://goffgrafix.com/pantone-rgb-100.php
(and links thereon) it is possible to find Pantone colour codes and the equivalent hexadecimal codes. The above value does not appear on these pages - but a tool at http://goffgrafix.com/colortester.php
allows you to type in the above hexadecimal number and see the corresponding colour as shown on a computer terminal.
The above colour can be 'brightened' or 'darkened' by multiplying each of the R, G and B values by some number (the same for all three) and then converting to hex code. In doing this, avoid saturating any of the three channels (none may exceed 255).
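For reference, the conversion and the brighten/darken step might look like this in Python (a sketch; the function names are mine, and the hex formatting simply reproduces what IDL's 'Z' format gives):

```python
def rgb_to_hex(r, g, b):
    """Hex colour code for an integer RGB triplet (equivalent to
    formatting each channel with IDL's 'Z' format)."""
    return '#{:02X}{:02X}{:02X}'.format(r, g, b)

def scale_rgb(rgb, factor):
    """Brighten (factor > 1) or darken (factor < 1) a colour by
    scaling all three channels equally; refuse to saturate."""
    scaled = tuple(round(c * factor) for c in rgb)
    if any(c > 255 for c in scaled):
        raise ValueError('factor would saturate a channel (>255)')
    return scaled

print(rgb_to_hex(89, 92, 115))                     # #595C73
print(rgb_to_hex(*scale_rgb((89, 92, 115), 2.0)))  # a lighter rendition
```

Scaling all three channels by the same factor preserves the tone and changes only the brightness, which is exactly the 'lighter rendition' described above.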
I did this so that a lighter rendition of the same colour would appear, and the result is
which you can also type into that page and see (they allow showing of two such colours next to each other).
Real earthshine is thousands of times fainter than moonshine, so I doubt you could 'see' the colour if you made the numbers realistic (in fact, R, G and B would all be 0 in such an image, since the channel values are integers and anything below 1 becomes 0). The two above colours have the same tones but differ in brightness.
I expect that a dye maker or a paint shop with a smart machine can mix your colour according to the hexadecimal code, instead of the Pantone colour code.
Real World ProblemsPosted by Daddy-o Jul 05, 2013 03:44PM
We have noticed that some images have a JD time that differs from the JD implied by the time information in the FITS file header. By checking (almost) all files that have a JD in the filename against the time in the header, the following list of suspect images is revealed:
This list gives the JD from the filename - the value is 1 hour larger than the JD implied by the DATE field in the FITS header. None of these appear on the list of good images as determined by Chris and his 'tunnelling' code, which inspects magnitudes vs lunar phase.
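A check of this kind might be sketched as follows (assumptions: the header DATE string has the form 'YYYY-MM-DDThh:mm:ss' in UTC, and the filename JD has already been parsed out; both are hypothetical details, not a description of our actual file layout):

```python
from datetime import datetime, timezone

def jd_from_date(date_str):
    """JD for a header DATE string like '2012-07-03T09:15:00' (UTC)."""
    t = datetime.strptime(date_str, '%Y-%m-%dT%H:%M:%S')
    t = t.replace(tzinfo=timezone.utc)
    # Unix epoch 1970-01-01T00:00 UTC corresponds to JD 2440587.5
    return t.timestamp() / 86400.0 + 2440587.5

def is_suspect(jd_filename, date_str, offset_hours=1.0, tol_seconds=60.0):
    """True if the filename JD sits ~offset_hours above the header time."""
    delta_hours = (jd_filename - jd_from_date(date_str)) * 24.0
    return abs(delta_hours - offset_hours) * 3600.0 < tol_seconds
```

Looping this over the archive and collecting the filenames where `is_suspect` is true would reproduce the list above; the 1-hour offset strongly suggests a time-zone or DST mix-up at file-naming time.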
Real World ProblemsPosted by Peter Thejll Jun 12, 2013 07:30PM
With the whole telescope system back at DMI we are working to bring it back to life. After that we will try to fix the problems that beset us.
So far (latest activity on top) we have this:
July 8 2013: All PCs are reachable from outside over the internet, and so is the iboot; the NAS can be seen from various machines.
July 4 2013: ibootbar is now reachable from woof by using, in a browser on woof, 'ibootbar'. On egregious you need 'ibootbar.usr.local'. Eigil says woof is reachable from home using 'woof' instead of going via nordlyset in the ssh menu. egregious should also be reachable from ssh in the same way.
July 2 2013: Eigil Pedersen has made it possible to reach the woof linux machine by giving it an IP number. It can in turn see the iBoot via its 192 IP number. Working on the NAS.
June 24 2013: Located the Aurora cloud sensor, and downloaded software. Will try to set up on Wine/Linux machine.
June 20 2013: Working to install static IP-numbers. Connected the DEC-axis motor and its limit-switch so that we do not have to rely on a dubious short. Devices still do not react when accessed from LabView.
18 June 2013: We figured out that the limit switches must be connected - then the mount comes alive. LabView still freezes when you click on anything.
June 14 2013: Wired up part of the mount head with the Dome Breakout Box. Still need to understand why LabView is talking about "no licenses" for "Report Generation Toolkit" and the "Internet Toolkit". Where were the licenses before?
June 12 2013: Booted up the Watchdog, woof and PXI computers. LabView looks strange - some VIs are not working, and then none will run?
June 11 2013: Reset all the 115V selectors to 230 V and applied power - nothing blew up!
May/June 2013: Uncrated the control rack and the telescope tube. Inventoried the other boxes.
Real World ProblemsPosted by Daddy-o May 23, 2013 09:30AM
Largest box:
Telescope tube plus its attached cables.
Rack of electronics.
Very long and thick bundle of cables.
Rod for Hohlraum source + foot
Smaller wood box:
LCD monitor + VGA cable
Laser printer Samsung ML-2525
Small white box: Spare lamps + relay
Hohlraum sphere in its own box
Box of fan filters for rack
2 x Shutter boards + Edmund pack of flat soft things + a small mW laser + a filter labelled 'IR cut'
European power supply for Axis camera
2 Axis cameras with robofocus lenses, and additional lenses and mounting brackets.
Rain and light sensor - DMI?
Väisala device in 'Chinese tower' affair.
DMI IR rain sensor
Väisala humidity sensor HMT100
Cables + spare PXI parts + bracket for adjusting SKE
Uniblitz shutter # VS25ST1-100
Vincent Associates 710P Shutter Interconnect Cable
3 Thor labs FW drivers / electronics, model FW102B. This implies that we have
three Thor labs FWs - the colour FW, the ND FW, and one more - somewhere!
Box of mixed cables - VGA + DVI + 2xUSB repeaters
2 sealed metal envelopes with "D.C.D. Panel Mount SA 4 COMP"
Tools for shutter tuning
Spare Ferrorperm Knife edges on glass.
Mixed VGA, DVI, USB cables along with 2 USB extender cables
US 115V power cables - to PC-type plugs.
Blue box:
Polar alignment scope
Point source lamp + scope
Magma + mounting HW
Dome breakout box
base w. electronics
Power for Axis camera (perhaps it moved to the wooden box?)
A part of the PXI - probably the spare or original PC
Spare HD for PXI
Where are:
Mount handbox controller?
Aurora cloud sensor?
Various interface cards for camera to PXI?
Additional
Real World ProblemsPosted by Daddy-o May 23, 2013 08:34AM
Here is a collection of links useful in our attempts to power up the rack of control system, at DMI:
iBootBar : http://dataprobe.com/support/index.html
Real World ProblemsPosted by Peter Thejll May 05, 2013 02:31PM
Our efforts to measure albedo precisely and accurately are limited by natural variability. Albedo is inherently quite a variable property of the Earth when viewed day-to-day [see the presentation linked below for details on the GERB data variability!]. Just how much does global-mean albedo vary - in the short run and over many years?
We have CERES albedo data for the period 2000 to 2012, given as monthly means on a 360x180 degree grid. We take these and calculate global-mean monthly values. Some results are below.
Data downloaded from: http://ceres.larc.nasa.gov/cmip5_data.php
These are the monthly-mean data prepared by NASA for the CMIP5 effort. I calculate the global weighted means of the given monthly-mean values of upward and incoming shortwave light at TOA.
The linfit slope is:
Slope: 5.4920926e-08 +/- 5.7758642e-07 in albedo units/day.
This is an insignificant slope - per decade it would amount to 0.09%
of the mean albedo. Using, instead of the slope itself, the +/- error on the slope, we get:
+/- 0.97 % per decade.
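The per-decade percentages quoted above follow from the fitted slope and the mean albedo by simple arithmetic; as a check (using 3652.5 days per decade):

```python
# Convert the fitted slope (albedo units/day) into percent of the
# mean albedo per decade; numbers are those quoted in the post.
mean_albedo = 0.218214
slope = 5.4920926e-08          # albedo units/day
slope_err = 5.7758642e-07      # 1-sigma error on the slope

days_per_decade = 3652.5
pct_per_decade = slope * days_per_decade / mean_albedo * 100.0
pct_err_per_decade = slope_err * days_per_decade / mean_albedo * 100.0
print(round(pct_per_decade, 2), round(pct_err_per_decade, 2))  # 0.09 0.97
```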
The mean we get is:
Mean albedo: 0.218214
this is small. Publications using these data say 0.29, so there is some
problem with accounting for the regions' weights. I omit all areas where the
incoming flux at TOA is less than 12 W/m² - this helps avoid Infs and
NaNs. I weight each area by the cosine of the latitude of the cell middle.
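The weighting scheme just described can be sketched as follows (a numpy sketch, not the IDL code actually used; the arrays are assumed to be latitude x longitude grids of monthly-mean fluxes):

```python
import numpy as np

def global_mean_albedo(up, down, lats, flux_floor=12.0):
    """Cosine-of-latitude weighted global mean of up/down, skipping
    cells where the incoming TOA flux is below flux_floor (this is
    what avoids the Infs and NaNs near the polar night)."""
    good = down >= flux_floor
    albedo = np.zeros_like(up)
    albedo[good] = up[good] / down[good]
    # zero weight for masked cells, cos(latitude) weight otherwise
    weights = np.cos(np.radians(lats))[:, None] * good
    return (albedo * weights).sum() / weights.sum()
```

With `up` and `down` as (180, 360) arrays and `lats` the cell-centre latitudes, this returns one global-mean albedo per monthly field.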
Month Albedo S.D. S.D. as % of mean
1 0.228720 +/- 0.000820013 or, +/- 0.358522 %.
2 0.222061 +/- 0.000866032 or, +/- 0.389997 %.
3 0.206976 +/- 0.000856246 or, +/- 0.413694 %.
4 0.215791 +/- 0.000886397 or, +/- 0.410766 %.
5 0.219209 +/- 0.00129653 or, +/- 0.591460 %.
6 0.219878 +/- 0.00122390 or, +/- 0.556628 %.
7 0.215800 +/- 0.000900811 or, +/- 0.417429 %.
8 0.212036 +/- 0.000893605 or, +/- 0.421440 %.
9 0.203454 +/- 0.000899703 or, +/- 0.442214 %.
10 0.215973 +/- 0.00123064 or, +/- 0.569814 %.
11 0.227462 +/- 0.000582274 or, +/- 0.255988 %.
12 0.231050 +/- 0.00111927 or, +/- 0.484429 %.
What do we learn?
We learn that albedo is remarkably constant when observed by satellite. There is no discernible slope in the data, but if we use the 1-sigma uncertainty on the slope as an upper limit we find that, per decade, the albedo has changed by less than 1%.
The monthly means follow an understandable annual cycle (maxima in NH and SH winters, with minima in March and September). The spread around the monthly means amounts to 0.25 to 0.6% of the monthly mean value.
Climatologically it is an open question whether albedo ought to change with climate drift. During the observing period, global mean surface temperatures changed by about +0.1 degree C [see http://www.ncdc.noaa.gov/sotc/service/global/global-land-ocean-mntp-anom/201101-201112.png ]. This is during the 'hesitation period' that is much discussed at present. During other decades mean T has risen much more - but we have no albedo data from those periods.
Using the above 1-sigma upper limit on the slope of 1% per decade, we see that if the slope is due to changes in T then the relationship is 1% per 0.1 degree, or 10% per degree. This is based on an upper limit; the true value is closer to 0% per degree.
Note that the previous argument is unrelated to the EBM-based relationship, which holds for equilibrium climate only - there, and only there, the expected relationship is -1% per degree.
So, what does that give us? If we were to observe a larger slope we could use the data in the "satellites are getting it wrong" mode - as Pallé et al. did for a while. If we measure no slope we can hope to set more stringent limits on the slope than the above satellite values do - can we determine the slope of global-mean albedo to better than the 1% per decade above? In this
presentation I found upper limits of 0.2% per decade, based on a null hypothesis of 'no albedo change' and realistic observing limitations. The numbers used were based on Frida Bender's CERES data, but they do not differ enormously from the present, longer, series.
So can we reach 0.2% error per decade, observationally?
This requires a discussion of the single-frame errors we get, as well as the period-mean data we can expect. More later!
Real World ProblemsPosted by Daddy-o Apr 24, 2013 10:34AM
We update yet again the list of 'best images' that Chris has generated by inspection of compliance of absolute magnitudes against lunar phase.
We can remove a few more images by hand inspection. We found about 10 that have 'cable in view', as well as various near-horizon problems. The list is here and now contains 525 images:
We note that Johanne is working her way through many images and finding 'bad focus' cases. Since we believe these are coincident with 'not the right filter acquired' cases, we shall eventually be further updating the list of best images.
Real World ProblemsPosted by Daddy-o Oct 23, 2012 08:38AM
What is the effect on observational coverage if we have different numbers of observatories and observe in different ways?
Since this depends on what we mean by 'observational coverage' we define OC as 'largest fraction of the time with continuous observations'. Note that this is different from, say, 'largest fraction of days where at least one good observation was made', OK?
For 1, 2, 3, ... observatories chosen from the list of known observatories (in the IDL code observatory.pro), and evaluating at 15-minute intervals, we get the following (non-optimal, but pretty good) results when January and July are combined:
Upper panel: The red curve shows OC for the Moon above 2 airmasses and the Sun lower than 5 degrees under the horizon. Blue is the same but for 3 airmasses. The dip at 5 in the red curve is an artifact of the search method we use - an exhaustive search among the 44 available observatories would be too expensive, so we search for the best of 100 random picks of 1, 2, 3, 4, ... from the list of 44. Lower panel: The same, but evaluated for more stations, best of 200, and with a 10% random - uncorrelated - occurrence rate of 'bad nights' (clouds, for instance).
We see that, compared to this
we have less OC - that's because that search was for 'Moon above horizon, Sun below' instead of the more realistic constraints used here.
We see that extending observations from AM 2 to AM 3 is equivalent to adding two observatories, for midrange values.
We see that adding many more observatories is in the end a losing proposition - the seventh observatory on the blue line adds nothing compared to the sixth.
a) More exhaustive searches can be made, but take time. This would probably smooth the curves above and also uncover slightly better solutions.
b) We have restricted the site choices to the positions of known observatories. Since most observatories are in the NH, where the Moon is not as high in the sky during the summer months, there is a handicap.
c) The method is slow - because the altitudes of the Sun and Moon are evaluated with very precise routines. Simpler and faster expressions for altitude could be used - but one for each observatory would be needed.
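The best-of-N random search described above can be sketched like this (a Python sketch, not the code actually used; `avail` is a hypothetical boolean sites-by-timesteps availability table precomputed from the Sun/Moon altitude constraints):

```python
import numpy as np

def coverage(avail, subset):
    """OC for a set of sites: fraction of time steps at which at
    least one site in `subset` can observe."""
    return avail[list(subset)].any(axis=0).mean()

def best_of_random(avail, k, n_tries=100, seed=0):
    """Pick the best of n_tries random k-subsets of sites, as in the
    post (exhaustive search over 44-choose-k being too expensive)."""
    rng = np.random.default_rng(seed)
    n_sites = avail.shape[0]
    best = max((rng.choice(n_sites, size=k, replace=False)
                for _ in range(n_tries)),
               key=lambda s: coverage(avail, s))
    return sorted(int(i) for i in best), float(coverage(avail, best))
```

Because only a random sample of subsets is scored, the resulting curve of OC versus number of observatories is not guaranteed monotone - which is exactly the kind of artifact seen at 5 stations in the red curve.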
Real World ProblemsPosted by Peter Thejll Oct 19, 2012 10:58AM
The Moon is not observable all the time from a single observatory, all year round. To get complete - or almost complete - coverage we ask: where should the earthshine telescopes in the network be placed? The plot below shows, with coloured symbols, when the Moon is observable (defined as Moon up and Sun down) for 5 observatories around the world. MLO is Mauna Loa, MSO is Mount Stromlo in Australia, LCO is Las Campanas in Chile, SAAO is the South African Astronomical Observatory. The underlying curve shows the earthshine intensity for one month. The Earth was modelled as a cloud-free sphere with bright continents and dark oceans - hence the zig-zag nature of the curve. Addition of randomly placed clouds would tend to dampen the amplitude of the zig-zags by about a factor of two.
When the underlying curve is not covered by coloured symbols it means that the Moon is not observable from any observatory. We note that this happens particularly when the earthshine is intense - that is near Full Earth, which is New Moon. This is clearly because the Moon is almost never in the sky alone when it is New (it has either risen shortly before the Sun, or will set soon after the Sun). Observability is best near Full Moon (New Earth) near day 15 - but then observing earthshine accurately is very difficult.
On average a single observatory experiences the Moon above the horizon and the Sun below, 25% of the time. Given the choice of these 5 observatories the total observability depends on time of year - given in the panels.
Clouds, and the need for the Moon to be higher in the sky, cut the observability.
The above simulation is for two months of the year - the declination of the Moon changes with the seasons so the relative contribution by each observatory changes with time.
Real World ProblemsPosted by Chris Flynn Sep 24, 2012 12:09PM
We weren't sure exactly how JPL models the Moon's brightness in HORIZONS, i.e. http://ssd.jpl.nasa.gov/horizons.cgi
The following plot shows that JPL uses the actual observatory-object distance in giving the Moon's apparent magnitude, but does not use any albedo map of the lunar surface -- so the magnitude is symmetrical with phase on either side of new moon (say).
Lower panel : apparent magnitude of the Moon over a 12 month period (September 2010-2012) computed with Horizons, and shown in a narrow range of the illuminated fraction (40 to 50 percent). The scatter in the apparent magnitude around the trend is ~ 0.1 mag.
Most of this scatter is due to the changing distance of the moon around its orbit (between ~0.019 and 0.023 light minutes). The middle panel shows the magnitude if we correct the photometry to a standard distance (0.0215 light minutes) -- this reduces the scatter to 0.03 mag. (The distance provided by Horizons includes the position of the observatory on the Earth -- this can make a difference of up to ~12,000 km in the Moon-Observatory distance).
The upper panel shows the residuals in a least squares fit to the middle panel, as a function of Julian day over the 1 year sample period. This clearly shows that most of the 0.03 mag scatter in the middle panel is due to the changing Sun-Earth distance during the course of the year. Accounting for this reduces the scatter to <0.01 mag. We interpret this to mean that no surface features of the Moon are being included in the Horizons' apparent magnitude estimate, since we expect considerably more scatter than that (but we are working on this!).
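The distance correction used for the middle panel is the standard inverse-square one; a minimal sketch (distances in light minutes, the unit quoted by Horizons; the standard distance 0.0215 is from the text):

```python
import math

def to_standard_distance(m_obs, d, d_std=0.0215):
    """Correct an apparent magnitude to a standard Moon-observer
    distance: inverse-square dimming is 5*log10(d/d_std) magnitudes."""
    return m_obs - 5.0 * math.log10(d / d_std)
```

Doubling the distance dims an object by 5*log10(2), about 1.5 mag, so the ~0.019-0.023 light-minute range of the lunar orbit is easily enough to explain the ~0.1 mag scatter the correction removes.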
Real World ProblemsPosted by Daddy-o Aug 13, 2012 03:06PM
The image below was taken at the IIWI computer with the CCD camera attached to the board, sitting in a PCI slot on the IIWI. The software used is SOLIS.
This image looks like things we have seen before. The broad stripes are reversed, but that may be something in the display software settings. It is a very primitive setup - all I have managed to do is take a picture at some long exposure time. The CCD is still attached to the telescope, I think, so the shutters are closed and what we see is a flat+bias frame. As it was dark in the dome when the image was taken, the 'signal' is dark current - not light. The bands are structures in the flat field. The spots are noise. The large spots are possibly CR hits?
I think the image proves that the camera is able to take pictures. The noise may be due to a damaged cooler or - more probably - to the cooler not being switched on (I don't know how to do that yet).
It is at least an image from the camera - so the board and camera must be OKish. Why then are there no images when the camera is attached to the PXI? One answer could be that the PXI was damaged during the MLO power surge. I am leaning towards that theory now.
Added later: Here is an image with the cooler ON.
The minimum value is 402 - a bit high, but at least it now looks like a bias frame. I would say that there is nothing wrong with the camera or its board. So the problem must be in the PXI!
Real World ProblemsPosted by Daddy-o Jul 10, 2012 10:50AM
July 10 2012: A power problem at MLO has been reported. At the moment the power is back, and all our machines - except the PXI - can be reached.
If the PXI is damaged this could be the end of current efforts to use the telescope.
Hopefully access will be back soon and we can gather some more data, before the decisions we have to make in September.
Real World ProblemsPosted by Daddy-o Jul 09, 2012 03:23PM
Following on from the Data Summary post, we have reviewed all the good data and built an understanding of what the flux-ratios are between B and the other filters. We then use that ratio to test all observations to see whether the FW was behaving as expected or showed signs of malfunction. The malfunctions often mean that all fluxes are the same, consistent with one single filter being set instead of the sequence of filters the script asks for.
Focusing then only on those nights where the FW was well behaved, we extracted all total fluxes (total bias-corrected image counts divided by nominal exposure time), limited the data to airmasses less than 4, removed the lunar eclipse, and then plotted all fluxes against lunar phase:
(Sorry about the strange cropping!) In different colours we see the extinction-corrected fluxes as a function of lunar phase. We only show data between 30 and 90 degrees (90 = half Moon) since this is all we can acquire in co-add mode. The blue, green, red and orange symbols correspond to B, V, VE2, and VE1+IRCUT (very similar filters), respectively. The curves are 4th-order polynomials fitted with a robust code that omits outliers.
We see some scatter around the lines - some of it (e.g. near 50-55 degrees) is probably clouds. Some of it, in IRCUT/VE1 near 90 degrees, may be 'no filter was inserted at all', or a shutter exposure time longer than requested.
From the fits we can generate flux-tables for use in identifying the remaining data: There are many more images available but they seem to be with unknown filters because the filter acquired and requested were simply not the same. These images may still be of use in DS/BS and albedo analysis - we just need to figure out what filter was used!
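A robust fit of the kind mentioned (fit, clip outliers, refit) can be sketched as follows - a generic sigma-clipping polynomial fit in numpy, not the actual code used:

```python
import numpy as np

def robust_polyfit(x, y, degree=4, n_iter=5, clip=3.0):
    """Fit a polynomial, drop points more than `clip` sigma from the
    fit, and refit; returns coefficients and the kept-point mask."""
    keep = np.ones(len(x), dtype=bool)
    for _ in range(n_iter):
        coeffs = np.polyfit(x[keep], y[keep], degree)
        resid = y - np.polyval(coeffs, x)
        # sigma from currently-kept points, then re-select
        keep = np.abs(resid) < clip * resid[keep].std()
    return coeffs, keep
```

The rejected-point mask doubles as a first guess at which exposures suffered from clouds or a wrong filter, which feeds directly into the flux-tables mentioned above.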
Apart from causing many lost images (i.e. we get a bias frame instead), the shutter seems mostly to work as expected when it opens at all. The Filter Wheel, however, selects random filters (as far as we presently know), or no filter at all, when it does not work.
The present work enables a data-selection filter for use in post-observational processing.
Note that the present data are not relevant for DS/BS studies as scattered light has not yet been removed. First we have to identify the images that can be further analysed!
Real World ProblemsPosted by Peter Thejll Jul 07, 2012 07:24PM
This report summarises the data we have until July 2012. We have about 24 good nights of data ...
Real World ProblemsPosted by Peter Thejll Jun 30, 2012 08:37AM
Just realised that the images with bad focus are probably images where the right filter was not set. This should be checked by comparing flux, phase and focus (3 Fs ..). It could perhaps be used as a method to select only those images taken with a known filter.
Real World ProblemsPosted by Peter Thejll Jun 15, 2012 11:22AM
If we take all our data and plot the flux for each image (total counts divided by nominal exposure time) against the lunar phase (phase 0 is Full Moon) at the time of observation, we get plots like this:
This is for the B filter, and the top panel shows the raw flux. We see the outline of the expected 'phase law' with some scatter. In the second panel we see the data corrected for extinction (the data are taken at different airmasses and must be extinction-corrected). The scatter is not really reduced, so we conclude that the scatter is mainly due to something else. In the third panel we have more or less taken out the phase law (determining the phase law is a matter of research - BBSO does it with empirical methods, and we are going to use the latest Hapke et al. phase laws, but at the moment we merely correct for the 'geometric illumination fraction').
The factors that cause the scatter that remains could be
a) clouds - i.e. flux is reduced, by the passage of a thin cloud, compared to data at the same phase but without clouds.
b) The shutter failed so that we do not have access to the actual exposure time - merely the one we asked for.
c) The color filter that was used was simply not the one we asked for.
d) The Moon was not centred in the frame and part of the flux is missing because the Moon is outside the edges of the picture. Lunar eclipses also enter as a problem but happen, of course, only at full moon - we have examples of this and know how to eliminate these frames (and full moon data will not be used for DS/BS work anyway).
I think that data suffering from problem a could be eliminated by some sort of image analysis that looks for an uneven sky - but it will be a tough job to separate this effect from c.
Problem b will not affect the DS/BS ratio. Remember that the above is mainly the BS we are looking at! When the shutter fails it often causes 'dragged images', and these can quite easily be eliminated.
Problem c is worse - we know from the problems we encountered, in trying to find the focus for each filter, that the FW does not always select the filter we ask for. Due to the lack of 'polling' in the system design we are victims of 'timeout': proceedings may halt while the system waits for a filter to be selected, but after a preset time the system simply moves on and does the next command, such as an exposure. How can we detect and eliminate the exposures that were taken through the wrong filter? (a future design should have a method to query the system about the identity of the filter in the beam: some sort of coding system?).
In the plot above there are clearly two sequences of flux values for phases between +90 and +150 with a flux-ratio of about 3.5. Is it possible to understand if this is due to selection of a different filter (or no filter at all)?
From experience we know that the flux-ratio between the B filter and no filter at all is likely to be near 10, while the ratio between B and V is more like 2 or 3. So it may not be easy to assign a 'real filter' to each observation on the basis of flux (and phase information) alone.
Since DS/BS is unaffected by shutter problems but is affected by 'wrong filter' issues we should be able to design a data-examination method. More to follow on this.
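A crude 'which filter was it really?' classifier based on such flux-ratios might look like this (the ratio table is illustrative, taken loosely from the numbers above; the function and parameter names are hypothetical):

```python
def guess_filter(flux, flux_B_expected, ratios=None):
    """Guess the filter actually in the beam by comparing the measured
    flux to the B flux expected at the same lunar phase."""
    if ratios is None:
        # Illustrative flux ratios relative to B (from the rough
        # numbers in the text: no filter ~10x B, V ~2-3x B).
        ratios = {'B': 1.0, 'V': 2.5, 'no filter': 10.0}
    r = flux / flux_B_expected
    # Pick the filter whose expected ratio is relatively closest.
    return min(ratios, key=lambda f: abs(ratios[f] - r) / ratios[f])

print(guess_filter(26.0, 10.0))  # V
print(guess_filter(95.0, 10.0))  # no filter
```

As noted above, the method degrades when the expected ratios are close together (B vs V) or when clouds suppress the flux, so it can only flag candidates, not give certainties.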
Separately, a delicate analysis method for detecting drifting clouds in the image is needed in order to eliminate problem a. Suggestions are welcome.
Problem d can be handled with image analysis, and this will be implemented as part of a data-validation filter.
Real World ProblemsPosted by Peter Thejll May 24, 2012 01:21PM
The FW problem is not resolved - but the usual work-around worked also this time: reboot everything. Power cycle. Then the 55536 error code went away, and the script to set focus and take images and save data worked as intended. This time I believe the results because the collected fluxes at the same exposure time through different filters depended on the filter in a sensible way:
Filter median flux
The focus is still hard to determine from data like this, though:
It seems to me that the radius measurement may be useful, and the SD method. Not all filters can have their focus determined from the above - wider limits must be set, and I will do that to get a clearer 'peak' answer.
The above is the result of a single run through all filters so the occurrence of 'double tracks' in some places is really mystifying.
It remains unclear why the FW did not move, but it is probably related to that 55536 signal. The other error code was a reference to the dome not finished with a home seek. Not sure if that influences the ability of the FW to move??
None of this explains the previous problem we have had with VE2 alone - in a sequence of good B, V, VE1 and IRCUT images, VE2 was suddenly out of focus, and then back in. Hmm.
Real World ProblemsPosted by Chris Flynn Mar 29, 2012 09:59AM
The image shows a cable which the Moon can move behind as it is setting at 285 degrees azimuth and 25 degrees altitude.
Arrows show the dim line which is the cable against the sky. The middle arrow shows a glint off the cable from the Moon.
The cable can whip around quite a bit in the wind. It clearly shows up in the CCD images when the Moon is behind it.
Real World ProblemsPosted by Peter Thejll Dec 14, 2011 11:32AM
We can go far by analysing various synthetic images - but real-world images are different and come with problems we have to solve and adapt to. I suggest we use this blog category to add things we have found. Now and then I will collect submitted items onto a Master List Of Real World Problems (MLORWP!).
This is the MLORWP I can think of right now:
1) Real images are not centred in the image frame - take this into account when using synthetic images for analysis of some sort - the synthetic images are all right in the middle of the frame.
2) Real images have slight variations in scale and rotation due to slippage etc. in the hardware. While we can always measure what the problem is, we need to make sure that e.g. synthetic images are generated for the same conditions (e.g. image rotation - the CCD is more or less free to rotate!).
3) Use of FFT methods to convolve images carries some consequences - one that Chris put his finger on is the centering of objects by the very act of folding: we need to make sure that when we model real images the resulting synthetic image is offset by the right amount. Henriette's Python project with Kalle Åström at Lund U could come in handy here.
4) The noise in real images is probably higher (never lower) than that given by the Poisson distribution.