I have definitely been putting off writing this comparison because it required me to do a bit more research. That being said, doing it really helped me see the direction my conclusion is going to take. I want to emphasize how differently various instruments observe clouds and that there is no objective measurement of sky cover. A big reason for this is the uncertainty that human observers have, including me! This is nothing like picking rice off a black background, the classic MATLAB example for thresholding.
Wu et al. compare six different cloud-observing instruments for cloud fraction, three ground based, including the TSI, and three satellite borne. They emphasize how different the fields of view and spatial scales of these instruments are. For example, pencil-beam instruments like cloud radar and lidar have a field of view of less than 0.5 degrees, while the TSI has a field of view of around 160 degrees. Given the instruments' different fields of view, spatial extents, and sensitivities, Wu et al. did not expect to find identical cloud fractions from each instrument. However, they highlight the importance of knowing the magnitude of the differences so that there is no confusion in applications like checking models. There is a fair amount of understanding about the limits of these instruments. The millimeter wavelength cloud radar (MMCR) usually misses high clouds and cannot distinguish clouds during drizzle or heavy precipitation. Because of distortion around the edges, the TSI may overestimate cloud fraction, so in this study they restrict the TSI to 100 degrees instead of 160 degrees. The cloud fraction retrievals from satellites are not without fault either, with problems of scale and processing; their spatial and temporal scales are much larger than those of most ground-based measurements. Wu et al. found that three of the instruments showed an increasing cloudiness trend from 1998 to 2009 and three did not, and the split did not fall along ground-based versus satellite lines: the TSI is in the group that showed no significant trend, as is one of the satellite data sets. Wu et al. conclude that "any activity that professes to produce a value for cloud fraction should also give some valuation of what limit is used to delineate between cloud and no cloud" (Wu 19). This limit could be a color-based threshold, like in my methodology, or one based on altitude, field of view, or aerosol content.
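Since my own methodology uses exactly this kind of color-based limit, a minimal sketch helps show why Wu et al. insist that the limit be stated. The red-to-blue ratio test, the cutoff values, and the toy image below are my own assumptions for illustration, not the TSI's actual algorithm; the point is only that the reported cloud fraction moves as the line between cloud and no cloud moves.

```python
import numpy as np

def cloud_fraction(rgb, ratio_threshold=0.6):
    """Estimate cloud fraction from a sky image with a red-to-blue ratio threshold.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Pixels whose red/blue ratio exceeds the threshold count as cloud, since cloud
    droplets scatter red and blue light more equally than the clear blue sky does.
    """
    red, blue = rgb[..., 0], rgb[..., 2]
    ratio = red / np.clip(blue, 1e-6, None)  # avoid division by zero
    cloud_mask = ratio > ratio_threshold
    return cloud_mask.mean()

# A toy sky: mostly "blue" pixels plus a cloud patch whose edge thins out.
sky = np.zeros((100, 100, 3))
sky[..., 0] = 0.2                        # clear sky: little red
sky[..., 2] = 0.8                        # clear sky: strong blue
patch_red = np.linspace(0.4, 0.75, 40)   # wispy edge grading into solid cloud
sky[30:60, 30:70, 0] = patch_red
sky[30:60, 30:70, 2] = 0.75

# The reported fraction shifts with the chosen limit, which is Wu et al.'s point.
for t in (0.5, 0.6, 0.7, 0.8):
    print(f"threshold {t:.1f}: cloud fraction {cloud_fraction(sky, t):.2f}")
```

On this toy image the reported fraction falls from 0.12 to roughly 0.05 as the cutoff tightens, which is exactly why a stated limit has to travel with any cloud fraction value.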
While the data records of these instruments may not be completely comparable, they each have their strengths and weaknesses, and which instrument to use depends on the application and which features need to be emphasized. There is no objective truth when it comes to cloud cover, and maybe even less so when it comes to cloud type. The complications of getting cloud cover from various instruments are also highlighted in Boers's 2010 comparison paper. These complications are especially relevant during instrument changes, like the switch from human observers to ceilometers at weather stations. He says that "for the interpretation of long climate records such discontinuities are very hard to handle and subject to debates that are difficult to steer toward acceptable conclusions" (Boers 15). This also points to the social consequences of incompatible data records: they lead to heated debates. As climate models are already heavily criticized, it is problematic that such a central feature within them has so much uncertainty in its data record. And unfortunately, these comparison studies "demonstrate that a universal definition of cloudiness, which depends on some subjective form of threshold detection such as lidar-based optical depth may be out of reach by the more commonly deployed instruments" (Boers 15).
Doing this research, I often ask myself about cloud cover and cloud type, both when looking up at the sky and when looking at our data. I am regularly at a loss. So many clouds do not fit the neat definitions and perfect examples that I was taught to recognize. Even just testing myself on where a cloud's edge lies is difficult. These observational difficulties concern me and other scientists in this field, especially as we learn more about the particular effects of different cloud phenomena. A recent study by Koren et al. found that the wispy parts of clouds, the ones for which it is hard to define a distinct edge or mass, play a huge role in determining Earth's radiative budget (Koren et al. 2007). With the uncertainty I have, how do I program a computer to return a cloud cover or cloud type?
Not many studies have compared different instruments' observations of cloud type. Partly this is because of the difficulty of aggregating results across different methods and cloud-type breakdowns. The studies that do compare instruments on cloud type generally use human observations as the "truth," as I do in my methodology above. The classic cloud classes are primarily based on visual cues, whereas many instruments rely on other techniques. One study found the overall agreement between cloud radar and a human observer to be about 64% (Wang and Sassen 1674), but the authors had difficulty finding current human records of cloud type and were restricted temporally and spatially. Cloud type, even more so than cloud cover, is dependent on the eyes of a human observer.
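To make that agreement figure concrete: it is just the fraction of co-located observations where the two sources assign the same class, and a confusion matrix then shows which types get mixed up. The categories, labels, and counts in this sketch are invented for illustration and are not Wang and Sassen's data.

```python
import numpy as np

# Hypothetical paired cloud-type labels for the same scenes, one list from a
# human observer and one from an automated retrieval (made-up values).
types = ["cirrus", "cumulus", "stratus", "stratocumulus"]
human = ["cirrus", "cumulus", "cumulus", "stratus", "stratocumulus", "cirrus"]
algorithm = ["cirrus", "cumulus", "stratus", "stratus", "cumulus", "cirrus"]

# Overall agreement: the share of scenes where both sources give the same label.
matches = sum(h == a for h, a in zip(human, algorithm))
print(f"overall agreement: {matches / len(human):.0%}")

# A confusion matrix shows where the two disagree, e.g. cumulus/stratus mix-ups.
index = {t: i for i, t in enumerate(types)}
confusion = np.zeros((len(types), len(types)), dtype=int)
for h, a in zip(human, algorithm):
    confusion[index[h], index[a]] += 1
print(confusion)
```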