Monday, 10 April 2017

Upcoming meetings for homogenisation scientists

There are several new meetings coming up that may be interesting for people working on homogenisation. If you know of more, please write a comment. Please note that the abstract submission deadline for EMS is already in 11 days.

Urban climate summer school
21-26 August 2017 | Bucharest, Romania. Registration deadline: 15 May 2017
Climate monitoring; data rescue, management, quality and homogenization
4–8 September 2017 | Dublin, Ireland. Abstract deadline: 21 April 2017.
11th EUMETNET Data Management Workshop
18–20 October 2017 | Zagreb, Croatia. Abstracts deadline: 31 May 2017
C3S Data Rescue Service Capacity Building and 10th ACRE Workshops
4-8 December 2017 | Auckland, New Zealand.
Workshop - Data Management for Climate Services
April 2018 | Lima, Peru.




Climate monitoring; data rescue, management, quality and homogenization

EMS Annual Meeting: European Conference for Applied Meteorology and Climatology 2017 | 4–8 September 2017 | Dublin, Ireland
The abstract submission deadline is 21 April 2017.

OSA3.1. Climate monitoring; data rescue, management, quality and homogenization
Convener: Manola Brunet-India
Co-Conveners: Ingeborg Auer, Dan Hollis, Victor Venema

Robust and reliable climatic studies, particularly assessments dealing with climate variability and change, depend greatly on the availability of and access to high-quality, high-resolution and long-term instrumental climate data. At present, restricted availability of and access to long-term, high-quality climate records and datasets is still limiting our ability to understand, detect, predict and respond to climate variability and change at spatial scales below the global. In addition, the provision of reliable and timely climate services relies strongly on the availability of and access to high-quality, high-resolution climate data, which requires further research and innovative applications in the areas of data rescue techniques and procedures, data management systems, climate monitoring, climate time-series quality control and homogenisation.

In this session, we welcome contributions (oral and poster) in the following major topics:
  • Climate monitoring, including early warning systems and improvements in the quality of the observational meteorological networks
  • More efficient transfer of rescued data into digital form: improving the current state of the art in image enhancement, image segmentation and post-correction techniques; innovating on adaptive Optical Character Recognition and Speech Recognition technologies and their application to data transfer; defining best practices for the operational context of digitisation; improving techniques for inventorying, organising, identifying and validating rescued data; exploring crowd-sourcing approaches or engaging citizen-scientist volunteers; and conserving, imaging, inventorying and archiving historical documents containing weather records
  • Climate data and metadata processing, including climate data flow management systems: from improved database models to better data extraction, the development of relational metadata databases, data exchange platforms and network interoperability
  • Innovative, improved and extended climate data quality control (QC), including both near-real-time and time-series QC: from gross-error and tolerance checks to temporal and spatial coherence tests, statistical derivation and machine learning of QC rules, and extending tailored QC application to monthly, daily and sub-daily data and to all essential climate variables
  • Improvements to the current state of the art in climate data homogeneity and homogenisation methods, including method intercomparison and evaluation, along with other topics such as techniques and algorithms for detecting and correcting inhomogeneities in climate time series, the use of parallel measurements to study inhomogeneities, and extending approaches to detect and adjust monthly and, especially, daily and sub-daily time series and to homogenise all essential climate variables
  • Fostering evaluation of the uncertainty budget in reconstructed time series, including the influence of the various data-processing steps, with analytical work and numerical estimates using realistic benchmarking datasets


Related are the sessions: Metrology for meteorology and climate and Climate change detection, assessment of trends, variability and extremes.





Urban climate summer school


University of Bucharest, Bucharest, Romania
August 21-26, 2017
Registration deadline: 15 May 2017

Organizers: Research Institute of the University of Bucharest (ICUB), Urban Climate Research Center at Arizona State University (ASU), Urban Water Innovation Network (ASU-CSU), Society for Urban Ecology (SURE), Interdisciplinary Center of Advanced Research on Territorial Dynamics (CICADIT)

Rationale and goals: Urban areas impart significant local- to regional-scale environmental perturbations. Urban-induced effects, together with the impacts of long-lived greenhouse gas emissions, may trigger additional physical and socioeconomic consequences that affect the livelihoods of urban dwellers. Urban areas are home to more than 50% of the world population, and three of four Europeans live in a city; systematic monitoring and assessment of urban climates, mitigation of and adaptation to adverse effects, and strategic prioritisation of potential solutions can improve the preparedness of populations and local authorities. Such challenges call for enduring scientific advances, improved training and increased awareness of topical issues.

This summer school aims to provide structured information and skill-building capabilities related to climate change challenges in urban areas, with a primary focus on creating an active pool of young scientists who can tackle the major sustainability challenges facing future generations. The critical areas to be covered are:
(1) modern monitoring of urban environments
(2) modelling tools used in urban meteorology and climatology
(3) adaptation and mitigation strategies and their prioritization
(4) exploring critical linkages among environmental factors and emerging and chronic health threats and health disparities.
Those attending can expect to gain an understanding of the state of the art and to be able to use the most appropriate tools to address specific problems in their respective fields of interest.
The summer school is intended for doctoral and post-doctoral students who already have a basic knowledge of and interest in urban climate issues.

More information ...





11th EUMETNET Data Management Workshop

Zagreb, Croatia, 18 – 20 October 2017
More information will appear later on the homepage: http://meteo.hr/DMW_2017

Main Topics

  • Data rescue: investigation, cataloguing, digitization, imaging
  • Climate observations: standards and best practices, definition of climatological day, mean values
  • Metadata: WMO Information System (WIS), INSPIRE, climate networks rating guides
  • Quality control: automatic/manual QC of climate time-series, on-line data, real-time observations
  • Homogenisation of climate time-series from sub-daily to monthly scale, homogenisation methods, assessment of inhomogeneity
  • Archiving: retention periods, depository, climate service centres and data collections for scientific and public use, databases, data access, user interface, data distribution

Call for Abstracts

Presentations will be oral or poster. Abstracts should be written in English and be short, clear and concise. Figures, tables, mathematical symbols and equations should not be included. Abstracts should be sent to dmw2017@zamg.ac.at before 31 May 2017. Authors will be informed by the scientific committee about the acceptance of their contributions in early September.

Conference Venue and Programme

The workshop will be held in the building of Croatian State Archives: Marulićev trg 21, Zagreb, Croatia.

Wednesday, October 18th 2017

08:30-09:30 registration
09:30-16:00 sessions
17:00 - guided tour, ice breaker

Thursday, October 19th 2017
09:00-17:00 sessions
19:00 workshop dinner

Friday, October 20th 2017
09:00-15:30 sessions

Further Information

The conference registration fee is 80 €. Details on registration procedures and on the workshop in general will be made available later on the website: meteo.hr/DMW_2017
Contact: dmw@cirus.dhz.hr

Scientific Organization

Ingeborg Auer (ZAMG)
Peer Hechler (WMO)
Dan Hollis (UKMO)
Yolanda Luna (AEMET)
Dubravka Rasol (DHMZ)
Ole Einar Tveito (MET Norway)





C3S Data Rescue Service Capacity Building and 10th ACRE Workshops


The C3S Data Rescue Service Capacity Building and 10th ACRE Workshops will be held at NIWA in Auckland, New Zealand during the week of 4-8 December this year. There is no homepage for this meeting yet, but more information will follow later on www.met-acre.net, which also gives information on the previous annual ACRE workshops.





Workshop - Data Management for Climate Services

Taller – Gestión de Datos para los Servicios Climáticos

Location: Lima, Peru
Time: April 2018 (date to be defined)
Organized by: CLIMANDES - Climate services to support decision making in the Andes
Supported by: Swiss Agency for Development and Cooperation (SDC) and the World Meteorological Organization (WMO)
Region: Ibero-American Countries
Duration: 3 days (9:00 a.m. - 5 p.m.)
Number of participants: 80 - 100

Introduction

The implementation of the WMO-led Global Framework for Climate Services (GFCS) strengthens the capabilities of National Meteorological and Hydrological Services (NMHSs) through its five pillars (Observations and Monitoring; Capacity Development; User Interface Platform; Research, Modeling and Prediction; Climate Services Information System). In this context, SENAMHI and MeteoSwiss are developing the first workshop on "Data Management for Climate Services" focusing mainly on the first three of the mentioned pillars. The workshop will be carried out in Peru by members of the CLIMANDES project with the support of SDC and WMO.

The workshop "Data Management for Climate Services" is addressed towards both the technical and the academic community involved in the implementation of national climate services. The workshop focuses on sharing knowledge and experiences from the provision of high-quality climate services targeted at WMO's priority areas and their citizens. The methodologies will cover topics such as quality control, homogenization, gridded data, climate products, use of open source software, and will include practical examples of climate services implemented in the Ibero-American region. The workshop will contribute to the continuous improvement of technical and academic capacities by creating a regional and global network of professionals active in the generation of climate products and services.

Objectives

  • Strengthen data management systems for the provision of climate services.
  • Share advances in the implementation of climate services in the Ibero-American region.
  • Exchange best practices in climate methodologies and products with other NMHSs.
  • Improve the regional and global collaborations of the NMHSs of the Ibero-American region.
  • Show the use of open-source software.

Outcome

The following outcomes of the workshop are envisaged:
  • A final report providing a synthesis of the main results and recommendations resulting from the event.
  • The workshop builds the first platform to exchange technical and scientific knowhow in Ibero-America (WMO RA-III and IV), and among participants from all other regions.
  • Hence, the workshop contributes to the creation of a regional and global network in which knowhow, methodologies, and data are continuously shared.

Content

The workshop will consist of four sessions of presentations, posters and open discussions on:

● Session 1:
  • Data rescue methods: methods for data rescue and cataloguing; data rescue projects
  • Metadata: methods of metadata rescue for the past and the present; systems for metadata storage; applications and use of metadata
  • Quality control methods: methods for quality control of different meteorological observations of different specifications; processes to establish operational quality control

● Session 2:
  • Homogenization: methods for the homogenization of monthly climate data; projects and results from homogenization projects; investigations on parallel climate observations; use of metadata for homogenization

● Session 3:
  • Gridded data: verification of gridded data based on observations; products based on gridded data; methods to produce gridded data; adjustments of gridded data in complex topographies such as the Andes

● Session 4:
  • Products and climate information: methods and tools of climate data analysis; presentation of climate products and information; products on extreme events
  • Climate services in Ibero-America: projects on climate services in Ibero-America
  • Interface with climate information users: approaches to building the interface with climate information users; experiences from exchanges with users; user requirements on climate services

Furthermore, hands-on sessions on capacity building, e-learning, the use of open-source software, and on ancestral knowledge in Ibero-America will take place during the workshop. The workshop is complemented by an additional training day on climate data homogenization and a field visit at the end of the workshop.

Organization

The Meteorological and Hydrological Service of Peru SENAMHI will organize the workshop on “Data Management for Climate Services” in close collaboration with the Federal Office of Meteorology and Climatology MeteoSwiss. The workshop is part of the project CLIMANDES 2 (Climate services to support decision making in the Andes) which is supported by the Swiss Agency for Development and Cooperation SDC and by the World Meteorological Organization (WMO).

For more information, and to be notified when the date is fixed, please contact: Climandes.

Sunday, 19 March 2017

Did the lack of an election threshold save The Netherlands?



The Netherlands. Also known as flat Switzerland and as the inventors of the stock market crash. A country you think of so little that we were surprised by the international attention for the Dutch election last week. Although The Netherlands is the 17th-largest economy in the world, we are used to being ignored,* typically not making any trouble.

But this time the three-part question was whether, after Brexit and Trump, The Netherlands, France and Germany would also destroy their societies in response to radical fundamentalist grandpas campaigning against radical fundamentalist Muslims. The answer for the Dutch part is: no.

To be honest, this was clear before the election. The Netherlands has a representative democracy. The government is elected by the parliament. The seats in parliament depend closely on the percentage of votes a party gets. This is a very stable system, and even when Trump was inaugurated the anti-Muslim party PVV polled at 20%, nowhere near enough to govern. The PVV survey results plotted below are in seats; 20% corresponds to 30 seats. Every line is one polling organization.

Due to the Syrian refugee crisis the PVV jumped up in September 2015. They went down during the primaries as the Dutch people got to know Trump and the refugees turned out to be humans in need of our help. After his election, Trump's favorability went up; Americans gave him the benefit of the doubt. The same happened to the PVV; if America elects Trump, he cannot be that bad, right? Right? While Trump was trampling America as president and filling his cabinet with shady, corrupt characters, the PVV dropped from 20% to 13% (20 seats).



There is no guarantee the drop of the PVV was due to Trump, but the temporal pattern fits and the leader of the PVV, Geert Wilders, is a declared fan of Trump. People campaigning against the PVV made sure to tie Wilders to Trump. For example in this AVAAZ advertisement below. I hope AVAAZ will also make such videos for France and Germany.



I would certainly not have minded the election being a few months later to give Trump the possibility to demonstrate his governing skills more clearly. That would also help France and Germany. In addition, Germans know their history very well and know that German fascism ended with the Holocaust; it did not start with it. It started with hatred and discrimination. The most dangerous case is France with its winner-takes-all presidential system.

Fascism: I sometimes fear... (by Michael Rosen)

I sometimes fear that
people think that fascism arrives in fancy dress
worn by grotesques and monsters
as played out in endless re-runs of the Nazis.

Fascism arrives as your friend.
It will restore your honour,
make you feel proud,
protect your house,
give you a job,
clean up the neighbourhood,
remind you of how great you once were,
clear out the venal and the corrupt,
remove anything you feel is unlike you...

It doesn't walk in saying,
"Our programme means militias, mass imprisonments, transportations, war and persecution."




I expect that it also hurt the PVV that Wilders did not show up for most of the debates. Without the solution-free animosity of Wilders it was possible to have an adult debate about solutions to the problems in The Netherlands. Refreshing and interesting. In the last days, when he did show up, the level immediately dropped, making clear what the main Dutch political problem is: Wilders.

As the graph below shows, the Dutch parliament will have 13 parties. This has triggered a debate on whether we need an election threshold.



A poll taken around the election shows that a majority of 68% would be in favor of an election threshold of at least 2 seats (1.3%) and that 28% even favor a threshold of 5 seats (3.3%). As the map below shows, such a threshold would fortunately still be on the low side internationally.


[Map: national electoral thresholds, in categories from below 1% up to 7% and higher; in some countries each chamber has a different threshold.]

I think a threshold, even a low one, is a bad idea. The short-term gains are small, the short-term problems are big, and we risk a long-term decline of the Dutch political culture, which is already at a low due to Wilders. The arguments are not specific to The Netherlands. I hope these thresholds go down everywhere they exist.

The main argument in favor is that small parties make it harder to form a coalition government. This is true: small parties need visible influence to make governing worthwhile and to survive the next election, which means they get an over-proportional piece of the pie. This makes the other coalition partners worse off, which makes negotiations harder.

However, next to the small parties, which are hard to include in a government, we also have the PVV, which is hard to include because of their ideology and lack of workable ideas. The small parties in this election (PvdD, 50+, SGP, DENK, FvD) have 17 seats combined, while PVV has 20 seats. Getting rid of the small parties would thus reduce the problem by less than half. Not having large toxic parties in parliament would be at least as important.

Even without the small parties we would now need four parties to build a government. The election threshold would need to be very high to reduce that to three parties. So the benefits are small.

If the threshold were that high, an immediate problem would be that people voting for small parties are not represented in parliament and get less attention in the media. This is unfair.

This can have severe consequences. In Turkey the election threshold is 10%, and in 2002 seven sitting parties fell below this threshold; a whopping 46% of all votes were left without representation in the parliament. That is a big price to pay for making it somewhat easier to build a government.

An election threshold also stimulates strategic voting, where people do not vote for the party they agree with, but for a party that will get into parliament or government. In the last Dutch election a quarter of the voters voted strategically. The right-wing VVD and the social democratic PvdA were competing for the number one spot. In the end they formed a coalition government, which was thus not supported by the population, was highly unpopular and lost heavily in this election. That is not a dynamic you want to reinforce.

Strategic voting can also mean that a new party that does have sufficient support to pass the threshold does not get votes, because many do not trust that it will make it and keep voting for an existing party they like less.

Last week's Dutch election had a turnout of 80%. Having more parties means that people can find a better match for their ideas. A faithful ideologue may need just two parties: his own and that of the enemy. If you think only of the left-right axis, you may be tempted to think you need only two or maybe four parties to cover all ideas. Whatever "left" and "right" mean. The axis feels real, but has those funny names because it is so hard to define.

Political scientists often add a second axis: conservative to progressive. The graph below puts the Dutch parties on both axes, left to right on the horizontal axis and progressive at the top and conservative at the bottom. The parties that care most about the environment and poor people (GroenLinks, SP, Christen Unie, D66) are still all over the map. The vertical axis also shows how materialistic the parties are, with parties that care about the distribution of money and power in the middle and parties that find immaterial values important at the top and the bottom. In other words: we need multiple parties to span the range of political thought and to have parties that fit people well enough to get them out to vote.

Having a choice also means that it pays to pay attention to what happens in politics. American pundits like to complain that Americans are badly informed about politics and the world, but why would the voter pay attention? The US set up an electoral system where the voter has nearly no choice. The US has two parties that are way-out-there for most people.

Because of the district system a vote almost never matters, especially after [[Gerrymandering]]. There are just a few swing districts and swing states where a vote matters. That is really bad for democracy. Changing the system is more helpful than blaming the voters.



Let me translate the party names for the foreigners. GroenLinks is a left-wing green party. D66 is an individual-freedom-loving (liberal) party with a focus on democratic renewal. PvdA is traditionally a social democratic party, but has lost its moorings. SP is a social democratic party like the PvdA was two decades ago. GroenLinks and SP typically vote with each other, but GroenLinks attracts the educated people and SP the working class. (It is sad that the two do not mix.)

VVD used to be a pro-business individual-liberty party, but has become more conservative and brown. CDA is a center-right Christian democratic party. Christen Unie is an actually Christian party that tries to follow the teachings of Christ and cares about the environment and the (global) poor. SGP is a quite fundamentalist Christian party that likes the Old Testament more. PVV is the anti-Muslim authoritarian party. For the Americans: most of the policies of Bernie Sanders are Christian democratic (although Christian democrats would use different words to justify them).

That politics is much more than one axis can also be seen in a transition matrix. The one below shows how voters (and non-voters) in 2003 voted in 2006. A reading example: of the people who voted CDA in 2003, 71% voted CDA again in 2006 and 3% voted PvdA. There are many transitions that do not follow the left-right axis or the conservative-progressive axis. People are complicated and have a range of interests.

Vote in 2003 (rows) versus vote in 2006 (columns), as a percentage of the 2003 voters:

2003 \ 2006    | CDA | PvdA | VVD | SP | GroenLinks | D66 | Christen Unie | PVV | Other | Non voters
CDA            |  71 |    3 |   6 |  6 |          0 |   0 |             4 |   2 |     1 |          6
PvdA           |   3 |   59 |   2 | 20 |          3 |   1 |             1 |   1 |     1 |          9
VVD            |  23 |    3 |  55 |  3 |          0 |   1 |             1 |   5 |     2 |          7
SP             |   4 |   11 |   0 | 70 |          6 |   0 |             2 |   4 |     2 |          2
GroenLinks     |   3 |    7 |   1 | 25 |         46 |   1 |             4 |   0 |     2 |          9
D66            |   8 |   17 |  17 | 15 |         12 |  23 |             2 |   0 |     5 |          0
Christen Unie  |   2 |    2 |   0 |  2 |          0 |   0 |            91 |   2 |     0 |          0
LPF            |   7 |    4 |  18 | 14 |          0 |   1 |             0 |  36 |     5 |         15
Other          |  10 |    2 |   2 | 10 |          2 |   0 |             7 |   2 |    57 |          7
Non voters     |   6 |    6 |   3 |  9 |          0 |   0 |             0 |   5 |     1 |         70

The main problem lies in the long term. An election threshold limits competition between parties. A threshold makes it harder to split up a party or to start a new one. That is nice for the people in power, but not good for the democracy within the party or for the voters. Parties become more vehicles of power and less places to discuss problems and ideas.

With a high threshold the party establishment can kick people or small groups out without having to fear many consequences. A wing of a party can take over power and neutralize others with near impunity. When a party does not function well, becomes corrupt, starts to hold strange positions or sticks to outdated ideas, voters cannot easily switch to an alternative. In the map with thresholds above you can see that high thresholds are typical of unpleasant, not-too-democratic countries.

You see it in the USA, where the corporate Democrats thought they could completely ignore the progressives because, lacking a real alternative and facing grave danger to the Republic, they would be forced to vote for them anyway. Politics in Germany (with its 5% threshold) is much more about power than in The Netherlands, where politicians make compromises and try to get many people on board. There is no way to prove this, but I think the election threshold plays an important role here.

That is why countries with low thresholds have parties with new ideas, such as environmentalism or the hatred of Muslims, or old-fashioned niche ideas like general racism. In the latter cases you may like that these ideas are not represented in parliament, but the danger is that it suddenly blows up and Trump becomes president. Then it is much better to have Wilders in parliament making a fool of himself, making public that many of his politicians have lurid and criminal pasts, and demonstrating that he cannot convert his hatred into working policies and legislation. It also gives the decent parties the possibility to respond in time to the real problems the voters of such parties have, which they project on minorities.

The lack of competition also promotes corruption; it makes corruption less dangerous. In the extreme American case of two parties, a lobbyist only has to convince party D that he can also bribe party R, and both parties can then vote for a bill that transfers power to corporations on a Friday evening without any possibility for voters to intervene. In the extreme case the corruption becomes legalized and the politicians mostly respond to the wishes of the donor class and ignore everyday citizens. The disillusionment with democracy this creates makes it possible for anti-democratic politicians like Trump or Wilders to grow beyond their small racist niche.

So my clear advice is: Netherlands, do not introduce an election threshold. America, get rid of your district system or at least introduce more competition with a [[ranked voting system]].



Related reading

In Dutch: Which effects would an election threshold have had on the 2012 election? Welke effecten zou een kiesdrempel hebben?

To my surprise The Netherlands already has a small effective election threshold: you need enough votes for at least one seat, and there is no rounding up. See Wikipedia in Dutch on election thresholds: Kiesdrempel

In Dutch: How good were the polls? Hoe dicht zaten de peilingen bij de uitslag?


* Angela Merkel, for example, has visited The Netherlands only 6 times in her 12 years in office.

Sunday, 5 March 2017

Global warming in the original Celsius scale

A short post with a question of counterfactual history.

The temperature scale developed by Anders Celsius (1701–1744) himself had 0 °C at the boiling point of water and 100 °C at the freezing point.

It had the advantage that negative numbers would not occur in normal use. Daniel Gabriel Fahrenheit (1686–1736) achieved this for his temperature scale by choosing as zero the lowest temperature in his village, or that of a brine mixture. Negative numbers may well have been controversial at the time; only in the 17th century was the idea of negative numbers accepted by Western mathematicians.

The forward scale we are used to was independently developed by several of Celsius' contemporaries. What would have happened if we had kept to the original Celsius scale?
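Concretely, a temperature T on the modern scale corresponds to 100 - T on Celsius' original reversed scale, so a warming series simply flips over. A minimal sketch of the conversion (the temperature values are invented for illustration):

```python
# Minimal sketch: converting modern (forward) Celsius values to Celsius'
# original reversed scale, on which boiling water is 0 and freezing is 100.
# The example temperatures below are invented for illustration only.

def to_original_celsius(t_forward):
    """Return the original-Celsius value of a temperature in modern degrees Celsius."""
    return 100.0 - t_forward

# A made-up series of global mean temperatures on the modern scale.
modern = [13.8, 13.9, 14.1, 14.4, 14.8]
original = [to_original_celsius(t) for t in modern]
print(original)  # [86.2, 86.1, 85.9, 85.6, 85.2]: the warming now curves downward
```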

In forward degrees Celsius global warming produces an upward curve. In our current culture that is associated with progress and growth.



In the original Celsius scale the same plots would look more depressing like this.



If the temperature graphs had looked like the graphs of Arctic sea ice would that have changed the course of history? Would we have taken the problem seriously in the 1990s?




Tuesday, 21 February 2017

Politics is not rational



Hillary Clinton lost the presidential election because people are not rational. Except for racists and millionaires, it would have been in everyone's best interest to vote for Clinton. But we are not rational; we do not always look after our best interests. Real humans are not Homo economicus.

That includes me. As a scientist it is my job to keep a cool head. I hope you will excuse me for thinking I do my job reasonably well. I like to see myself as rational, but naturally I am not; especially learning about the ultimatum game shocked my self-perception.

It is a very simple and pure economic game. Reducing a problem to its essence like this has the elegance my inner physicist loves. In the ultimatum game, two players must divide a sum of money. The first player proposes a certain division. The second player can accept this division or reject it. If the offer is rejected, neither player receives any money. In its purest form, the game is played only once and anonymously, with players who do not know each other.

Time for a short thinking pause: What would you do? How much would you offer as player one? Below which percentage would you reject the offer?


Initially, I wondered why economists would play this game. Surely player one would offer 50/50 and player two would accept. But that was my irrational side and my missing economic education speaking. A good economist would expect that player two would accept any non-zero offer (it is better to get something than nothing) and that player one will therefore make the smallest possible offer. Reality is in between. Many people offer 50%, but many do not. Offers below 50% are, however, also regularly rejected. Player two is apparently willing to hurt himself to punish unfair behavior. This game, and many variations and similar games, lead to the conclusion: humans are not purely selfish, but have a sense of fairness.
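To make the contrast concrete, here is a toy simulation of the game. The offer distribution and the fairness-based rejection rule are invented for illustration; they are not taken from the experimental literature.

```python
import random

def play_round(pot=10.0, rng=random):
    """One anonymous ultimatum game round with a made-up fairness rule."""
    offer = rng.uniform(0.1, 0.5) * pot          # proposer offers 10-50% of the pot
    threshold = rng.uniform(0.2, 0.4) * pot      # responder rejects offers below 20-40%
    if offer >= threshold:
        return pot - offer, offer                # offer accepted: both get money
    return 0.0, 0.0                              # offer rejected: both get nothing

random.seed(1)
rounds = [play_round() for _ in range(100000)]
print("mean proposer payoff: ", sum(r[0] for r in rounds) / len(rounds))
print("mean responder payoff:", sum(r[1] for r in rounds) / len(rounds))
```

Under these made-up rules a near-zero offer would almost always be rejected and earn the proposer nothing; the threat of rejection is what pushes offers towards an even split.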

As a student of variability, for me the key aspect of the ultimatum game is its non-linearity. You either get something or nothing. In case of nonlinear processes, such as radiation flowing through clouds, variability is important. A smooth cloud field reflects more solar radiation than a bumpy cloud field with the same amount of water. The variability of the cloud water is important because the flow of radiation through clouds is a non-linear process.
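A toy numerical illustration of that point, with a square root standing in for any concave, saturating reflectance-like relation (it is not a cloud model):

```python
import math

# For a concave function f, the average of f over a variable field is smaller
# than f of the average (Jensen's inequality). Here sqrt is only a stand-in
# for a saturating, reflectance-like relation, not real cloud physics.
def f(x):
    return math.sqrt(x)

smooth_field = [4.0, 4.0, 4.0, 4.0]   # no variability
bumpy_field = [1.0, 7.0, 2.0, 6.0]    # same mean (4.0), more variability

mean_f_smooth = sum(f(x) for x in smooth_field) / len(smooth_field)
mean_f_bumpy = sum(f(x) for x in bumpy_field) / len(bumpy_field)
print(mean_f_smooth, mean_f_bumpy)    # 2.0 versus about 1.88: the smooth field "reflects" more
```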

By sometimes rejecting low offers, player two gets better offers from player one. This is especially clear when the game is played multiple times with the same players: in the beginning quite large offers are rejected to entice larger offers later in the game. How humans evolved a sense of fairness that also lets them benefit from this in one-off games is not yet understood. Fairness is surprising because a cartoon version of evolutionary theory would predict that altruism is only possible among kin. But the empirical evidence clearly shows that fairness belongs to being human. (Just like competition.)


Knowledge will come only if economics can be reoriented to the study of man as he is and the economic system as it actually exists.
Ronald Coase


Fairness is but one emotion that is not rational, not "productive". It offers some protection against unfairness, such as wages going lower and lower. Offering and accepting jobs are yes-no decisions under uncertainty for both parties. If there is one term that is often used in labor conflicts, it is "unfair wages" or "unfair labor conditions". All the while economists wonder why unemployment is higher than the frictional unemployment of rational actors and blame anything but their faulty assumptions.

Anger is also not productive, but fear of anger forces the haves to make better offers to the have-nots. Rampages are not productive, mass shootings are not productive, suicide attacks are not productive. I would venture that, independent of the proclaimed rationalizations, they signal a lack of justice and fairness.



The American election was also seen as unfair by many. The two parties had both selected historically unpopular candidates. Had the historically unpopular Trump not run, Clinton would have been the least popular candidate since polling started on this question. The main reason to vote was to keep the other candidate out.

With both candidates and parties so unpopular, and with the historically low approval ratings of Congress and Washington, the enormous partisan tribalism in America is surprising. The main pride of both tribes seems to be that they are at least not members of the other tribe. The lizard people have managed to pit the population against each other, while they loot the country and drag the world down. Do help me out in the comments on how "they" did this.



Many felt the election was a trap. In such a case one can expect irrational behavior. Or as Michael Moore elegantly said: Trump is the human Molotov cocktail they could throw through the window of the establishment. I am afraid the voters will find it was the window of their own house.

One mistake the Democratic establishment made in their support for Clinton was to expect rational behavior. They learned about economics and its political counterpart, [[public choice theory]]. Both theories assume rational behavior. The Democratic establishment assumed that the working class had no option other than to vote for them, because the Republicans would make their lives even worse.




Nic Smith, a self-described "white trash hillbilly from the holler" from coal country, on Trump voters: They are desperate to believe in something.

In a rational world the establishment would be right and player two would take the non-zero Clinton offer; in the real world people are fed up with being treated unfairly and with seeing inequality and corruption grow together for decades. In the real world, having to choose the lesser evil, election after election, over and over again, makes it ever more likely that the voters will sulk. That the Democratic establishment had just put up their middle finger to half of their party during the primaries likely also did not help put people in a more rational mood.



Last year's presidential election was an extreme example, but a two-party system invariably means that many people do not feel represented and are dissatisfied. [[A transferable vote]] would do a lot to fix this and give voters the possibility to vote for their candidate of choice without losing their vote.

A two-party system is also much more prone to corruption. A large part of the politicians will be in safe districts and do not have to fear the wrath of their voters. Where the voters do have some choice, the corporations only have to convince politician D that they will also bribe politician R, and both can then act with impunity.

A corrupt two-party system is not much better than a one-party system. In a representative democracy with more than two parties there would be real competition and the voters could vote for another politician.



What can we do to break this ultimatum game? The rhetoric and tribalism in America are unique. Humans are social animals and our group is important to us, but the US tribalism is beyond normal. For example, 34% of Trump voters being willing to say Trump's inauguration was the biggest ever is not normal.

Tribalism and emotions are not good for clear thinking and need to be fought. The only thing we can change is how we act ourselves; we should try to avoid unnecessarily antagonizing people. When you have to say something bad about the corrupt Republican politicians in Washington, make clear that you mean them and do not use the term Republicans, which also covers every single member of the group, most of whom also reject corruption.

I am only talking about whom you address. Please stand your ground; there is no need to keep moving your position in the direction of corrupt, unreasonable politics. That only signals you do not believe in your ideas. If there is one thing frustrating about US politics, it is weak corporate Democrats continually moving in the direction of ever more corrupt Republican politicians in the name of appeasement, and in reality because they have the same donors.

Given the lack of a real choice, one can also not blame the voters for every character flaw of their candidate and for all policies. For fashion icon Ken Bone the election was a choice between his personal benefit as a coal worker and the greater good. Many Trump voters voted for Obama before. Some people say they voted Trump expecting him not to be able to execute his racist plans because they are unconstitutional. That may be a rationalization, and for me Trump's overt racism would be a deal breaker, but not all of his voters are automatically bigots, even if many clearly are.


Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
Martin Luther King, Jr


Most people simply voted for the party they always voted for. There are people who have their health insurance via the Affordable Care Act who voted Republican and are likely to lose coverage. They thought the Republicans would not do something as barbaric as repealing the ACA without a replacement. Thousands of people will die every year when that happens, but the repeal means that billionaires will have to pay less for healthcare, and they own the Republican politicians, so I am less optimistic that they will not do it.

Do not go around calling every Trump voter a personalized Donald Trump; make them an offer they cannot refuse. Especially the Democratic establishment should stop blaming everyone but themselves for people not voting for their inevitable candidate. Rather than scolding their voters, they should make the left an offer they cannot refuse.

That offer would be a non-corrupt candidate. That would be an offer Democratic and Republican voters alike would find hard to refuse. It is, unfortunately, the one compromise the Democratic establishment is least willing to make. The people in power are in power because they are good at selling out to corporations.

This video gives a good overview of the corruption in America and how it impacts normal people via politics and the media. Since corruption became worse, workers no longer share in the increases in productivity and politicians respond to the wishes of the donor class and not the working class. Readers from the USA may think political corruption is normal because it grew slowly and imperceptibly, but in its enormity it is not normal. It was much better before the 1970s and it is much better in other advanced nations.



Fortunately, several initiatives have sprung up after the Trump election debacle and after Sanders showed that it is possible to campaign for the presidency without taking donor money. As an offshoot of the Sanders campaign, Our Revolution will run a large number of candidates under one political and organizational platform. Similar, but very clear in their wish to primary and get rid of corporate Democrats, are the Justice Democrats.

The non-partisan group Brand New Congress also wants to help get (Tea Party) Republicans who do not accept money into Congress. I would love to see more of this on the Republican side. In Europe conservative parties are conservative, but not corrupt and not batshit crazy. They are people you can have an adult conversation with and negotiate with. They may prioritize the environment less, but they do not childishly claim climate change does not exist. Getting non-corrupt Republicans into office may even be worth the time of US liberals.

The group 314 Action (inspired by π) works to get more Science, Technology, Engineering and Math (STEM) people into politics. If you love money and power, science is the weirdest career choice you can make. Thus I would expect the scientists who run for office to be mostly clean. The climate "debate" shows that nearly all climatologists are untouched by corporate corruption, while there are strong incentives for coal and oil companies to bribe them.

Let's work to end corporate rule, get the corporations out of politics and send them back to take care of the economy.


Following The Ninth: In The Footsteps of Beethoven's Final Symphony.



Related reading

The big lesson of Trump's first 2 weeks: resistance works

The magazine Correspondent: This is how we can fight Donald Trump’s attack on democracy. Focuses on how to change the media, which has become more pressing in the Age of Trump

Chris Hedges: We Are All Deplorables. "My relatives in Maine are deplorables. I cannot write on their behalf. I can write in their defense. ... I see the Christian right as a serious threat to an open society. But I do not hate those who desperately cling to this emotional life raft"

Thomas Frank in The Guardian: How the Democrats could win again, if they wanted

CNN Money: U.S. inequality keeps getting uglier

David Roberts of Vox: Everything mattered: lessons from 2016's bizarre presidential election - WTF just happened?

Political Polarization in the American Public - How Increasing Ideological Uniformity and Partisan Antipathy Affect Politics, Compromise and Everyday Life

North Carolina is no longer classified as a democracy by Andrew Reynolds, Professor of Political Science at the University of North Carolina at Chapel Hill.

A law professor's warning: we are closer to oligopoly than at any point in 100 years. Economically. The political power of the corporations is also increasing

The first days inside Trump’s White House: Fury, tumult and a reboot. "Trump has been resentful, even furious, at what he views as the media’s failure to reflect the magnitude of his achievements, and he feels demoralized that the public’s perception of his presidency so far does not necessarily align with his own sense of accomplishment."

An important piece for poll nerds by Nate Silver: Why Polls Differ On Trump’s Popularity?

Variable Variability: The ultimatum game, a key experiment showing intrinsic fairness and altruism among strangers


* Photo at the top, Be Human, is by ModernDope and has a creative commons CC BY-SA 2.0 license.

Sunday, 5 February 2017

David Rose's alternative reality in the Daily Mail

Peek-a-boo! Joanna Krupa shows off her stunning figure in see-through mesh dress over black underwear
Bottoms up! Perrie Edwards sizzles in plunging leotard as Little Mix flaunt their enviable figures in skimpy one-pieces
Bum's the word! Lottie Moss flaunts her pert derriere in a skimpy thong as she strips off for steamy selfie

Sorry about those titles. They provide the fitting context right next to a similarly racy Daily Mail on Sunday piece by David Rose: "Exposed: How world leaders were duped into investing billions over manipulated global warming data". Another article on that "pause" thingy that mitigation skeptics do their best to pretend not to understand. For people in the fortunate circumstance of not knowing what the Daily Mail is, this video provides some context about this tabloid "newspaper".

[UPDATE: David Rose's source says in an interview with E&E News on Tuesday: "The issue here is not an issue of tampering with data". So I guess you can skip this post, except if you get pleasure out of seeing the English language being maltreated. But do watch the Daily Mail video below.

See also this article on the void left by the Daily Mail after fact checking. I am sure all integrity™-waving climate "skeptics" will condemn David Rose and never listen to him again.]



You can see this "pause" in the graph below of the global mean temperature. Can you find it? Well you have to think those last two years away and then start the period exactly in that large temperature peak you see in 1998. It is not actually a thing, it is a consequence of cherry picking a period to get a politically convenient answer (for David Rose's pay masters).



In 2013 Boyin Huang of NOAA and his colleagues created an improved sea surface temperature dataset called ERSST.v4. No one cared about this new analysis. Normal good science. One of the "scandals" Rose uncovered was that NOAA is drafting an article on ERSST.v5.

But this post is unfortunately about nearly nothing: about the minimal changes in the top panel of the graph below. I feel the important panel is the lower one. It shows that in the raw data the globe seems to warm more. This is because before WWII many measurements were made with buckets, and the water in the bucket would cool a little due to evaporation before the thermometer was read. Scientists naturally make corrections for such problems (homogenization), and that helps make a more accurate assessment of how much the world actually warmed.

But Rose is obsessed with the top panel. I made the graph extra large, so that you can see the differences. The thick black line shows the new assessment (ERSST.v4) and the thin red line the previously estimated global temperature signal (ERSST.v3). Differences are mostly less than 0.05°C, both warmer and cooler. The "problem" is the minute change at the right end of the curves.

The mitigation skeptical movement was not happy when a paper in Science in 2015, Karl and colleagues (2015), pointed out that due to this update the "pause" is gone, even if you use the bad statistics the mitigation skeptics like. As I have said for many years now about political activists claiming this "pause" is highly important: if your political case depends on such minute changes, your political case is fragile.



In the meantime, a recent article in Science Advances by Zeke Hausfather and colleagues (2017) shows evidence that the updated dataset (ERSSTv4) is indeed better than the previous version (ERSSTv3b). They do so by comparing the ERSST dataset, which comes from a large number of data sources, with data that comes from only one source (buoys, satellites (CCl) or ARGO). These single-source datasets are shorter, but have no trend uncertainties due to the combination of sources. The plot below shows that the ERSSTv4 update improves the fit with the other datasets.



The trend change over the cherry-picked "pause" period was mostly due to the changes in the sea surface temperature of ERSST. Rose makes a lot of noise about the land data, where the update was inconsequential. As indicated in Karl and colleagues (2015), this was a beta-version dataset. The raw data were published, namely the data of the International Surface Temperature Initiative (ISTI), and the homogenization method was published. The homogenization method works well; I checked myself.

The dataset itself is not published yet. Just applying a known method to a known dataset is not a scientific paper. Too boring.

So for the paper NOAA put a lot of work into estimating the uncertainty due to the homogenization method. When developing a homogenization method you have to make many choices. For example, inhomogeneities are found by comparing one candidate station with multiple nearby reference stations. There are settings for how many stations to use and for how nearby the reference stations need to be. NOAA studied which of these settings are most important with a nifty new statistical method; these settings were then varied to study how much influence they have. I look forward to reading the final paper. I guess Rose will not read it and will stick to his role as suggestive interpreter of interpreters.
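As a rough illustration of the comparison step described above, here is a toy sketch of relative homogenization with a difference series and a crude break statistic. It is not NOAA's pairwise homogenization algorithm, and the station series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 60

# Synthetic annual series: a shared regional climate signal plus station noise.
regional_climate = np.cumsum(rng.normal(0.0, 0.1, n_years))
references = [regional_climate + rng.normal(0.0, 0.2, n_years) for _ in range(5)]
candidate = regional_climate + rng.normal(0.0, 0.2, n_years)
candidate[30:] += 0.8   # artificial inhomogeneity, e.g. a relocation in year 30

# The difference with the mean of nearby references removes the shared climate
# signal, so the break stands out above the remaining noise.
diff = candidate - np.mean(references, axis=0)

# Crude break statistic: the largest shift in the mean of the difference series.
shifts = [abs(diff[:k].mean() - diff[k:].mean()) for k in range(5, n_years - 5)]
print("most likely break after year index:", 5 + int(np.argmax(shifts)))  # near 30
```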

The update of NOAA's land data will probably remove a precious conspiracy of the mitigation skeptical movement. While, as shown above, the adjustments reduce our estimate for the warming of the entire world, the adjustments make the estimate for the warming over land larger. Mitigation skeptics like to show the adjustments for land data only to suggest that evil scientists are making global warming bigger.

This is no longer the case. A recommendable overview paper by Philip Jones, The Reliability of Global and Hemispheric Surface Temperature Records, analyzed the new NOAA dataset. The results for land are shown below. The new ISTI raw-data dataset shows more warming than the previous NOAA raw-data dataset. As a consequence, the homogenization now hardly changes the global mean any more, while arriving at about the same answer after homogenization; compare NOAA uncorrected (yellow line) with NOAA homogenized (red).



The main reason for the smaller warming in the old NOAA raw data was that this smaller dataset contained a higher percentage of airport stations; airports report their data very reliably in near real time. Many of these airport stations were previously located in cities, and cities are warmer than airports due to the urban heat island effect. Such relocations thus typically cause cooling jumps that are not related to global warming and are removed by homogenization.

So we have quite some irony here. Still, Rose sees a scandal in these minute updates and dubs it Climategate 2; I thought we were already at 3 or 4. In his typical racy style he calls data "wrong", "rogue" and "biased". Knowing that data is never perfect is why scientists do their best to assess the quality of the data, remove problems and make sure that the quality is good enough to make a certain statement. In return, people like David Rose simultaneously pontificate about uncertainty monsters, assume data is perfect, and then get the vapors when updates are needed.

Rose gets some suggestive quotes from an apparently disgruntled retired NOAA employee. The quotes themselves are likely inconsequential procedural complaints; the corresponding insinuations seem to come from Rose.

I thought journalism had a rule that claims by a source need to be confirmed by at least a second source. I am missing any confirmation.

While Rose presents the employee as an expert on the topic, I have never heard of him. Peter Thorne, who worked at NOAA, confirms that the employee did not work with surface station data himself. He has a decent publication record, mainly on satellite climate datasets of clouds, humidity and radiation. Ironically, I keep using that word, he also has papers about the homogenization of his datasets, while homogenization is treated by the mitigation skeptical movement as the work of the devil. I am sure they are willing to forgive him his past transgressions this time.

It sounds as if he made a set of procedures for his climate satellite data, which he really liked, wanted other groups in NOAA to use them as well, and was frustrated when others did not give enough priority to updating their existing procedures to match his.

For David Rose this is naturally mostly about politics, and in his fantasies the Paris climate treaty would not have existed without the Karl and colleagues (2015) paper. I know that "pause" thingy is important for the Anglo-American mitigation skeptical movement, but let me assure Rose that the rest of the world considers all the evidence and does not make politics based on single papers.

[UPDATE: Some days you gotta love journalism: a journalist asked several of the diplomats who worked for years on the Paris climate treaty, and they gave the answer you would expect: Contested NOAA paper had no influence on Paris climate deal. The answers still give an interesting insight into the sausage making and into what is actually politically important.]

David Rose thus ends:
Has there been an unexpected pause in global warming? If so, is the world less sensitive to carbon dioxide than climate computer models suggest?
No, there never was an "unexpected pause." Even if there were, such a minute change is not important for the climate sensitivity. Most methods to estimate it do not use the historical warming, and those that do consider the full warming of about 1°C since the 19th century, not just short periods with unreliable, noisy short-term trends.

David Rose:
And does this mean that truly dangerous global warming is less imminent, and that politicians’ repeated calls for immediate ‘urgent action’ to curb emissions are exaggerated?
No, but thanks for asking.

Post Scriptum. Sorry that I cannot discuss all the errors in David Rose's article, if only because in most cases he does not present clear evidence and because this post would become unbearably long. The articles of Peter Thorne and Zeke Hausfather are mostly complementary, on the history and regulations at NOAA and on the validation of NOAA's results, respectively.

Related information

2 weeks later. The New York Times nails it, interviewing several former colleagues of NOAA retiree Bates: How an Interoffice Spat Erupted Into a Climate-Change Furor. "He’s retaliating. It’s like grade school ... At that meeting, Dr. Bates shouted that Ms. McGuirk was not trustworthy and belonged in jail, according to an internal log ..." Lock her up, lock her up, ...

Wednesday. The NOAA retiree now says: "The Science paper would have been fine had it simply had a disclaimer at the bottom saying that it was citing research, not operational, data for its land-surface temperatures." To me it was always clear it was research data, otherwise they would have cited a data paper and named the dataset. How a culture clash at NOAA led to a flap over a high-profile warming pause study

Tuesday. A balanced article from the New York Times: Was Data Manipulated in a Widely Cited 2015 Climate Study? Steve Bloom: "How "Climategate" should have been covered." Even better would be if the mass media did not have to cover office politics about archival standards fabricated into a fake scandal.

Also on Tuesday, an interview of E&E News: 'Whistleblower' says protocol was breached but no data fraud: The disgruntled NOAA retiree: "The issue here is not an issue of tampering with data".

Associated Press: Major global warming study again questioned, again defended. "The study has been reproduced independently of Karl et al — that's the ultimate platinum test of whether a study is to be believed or not," McNutt said. "And this study has passed." Marcia McNutt, who was editor of Science at the time the paper was published and is now president of the National Academy of Sciences.

Daily Mail’s Misleading Claims on Climate Change. If I were David Rose I would give back my journalism diploma after this, but I guess he will not.

Monday. I hope I am not starting to bore people by saying that Ars Technica has the best science reporting on the world wide web. This time again. Plus inside scoop suggesting all of this is mainly petty office politics. Sad.

Sunday. Factcheck: Mail on Sunday’s ‘astonishing evidence’ about global temperature rise. Zeke Hausfather wrote a very complementary response, pointing out many problems of the Daily Mail piece that I had to skip. Zeke works at the Berkeley Earth Surface Temperature project, which produces one of the main global temperature datasets.

Sunday. Peter Thorne, climatology professor in Ireland, former NOAA employee and leader of the International Surface Temperature Initiative: On the Mail on Sunday article on Karl et al., 2015.

Phil Plait (Bad Astronomy) — "Together these show that Rose is, as usual, grossly exaggerating the death of global warming" — on the science and the politics of the Daily Mail piece: Sorry, climate change deniers, but the global warming 'pause' still never happened

You can download the future NOAA land dataset (GHCNv4-beta) and the land dataset used by Karl and colleagues (2015), h/t Zeke Hausfather.

The most accessible article on the topic rightly emphasizes the industrial production of doubt for political reasons: Mail on Sunday launches the first salvo in the latest war against climate scientists.

A well-readable older article on the study that showed that ERSST.v4 was an improvement: NOAA challenged the global warming ‘pause.’ Now new research says the agency was right.

One should not even have to answer the question, but: No, U.S. climate scientists didn't trick the world into adopting the Paris deal. A good complete overview at medium level.

Even fact checker Snopes sadly wasted its precious time: Did NOAA Scientists Manipulate Climate Change Data?
A tabloid used testimony from a single scientist to paint an excruciatingly technical matter as a worldwide conspiracy.

Carbon Brief Guest post by Peter Thorne on the upcoming ERSSTv5 dataset, currently under peer review: Why NOAA updates its sea surface temperature record.

Monday, 30 January 2017

With some programming skills you can compute global mean temperatures yourself

This is a guest post by citizen scientist Ron Roeland (not his real name, but I like alliteration for some reason). Being an actually sceptical person, he decided to compute the global mean land temperature from station observations himself. He was able to reproduce the results of the main scientific groups that compute this signal and, new to me, while studying the data he noticed how important the relocation of temperature stations to airports is for the NOAA GHCNv3 dataset. (The headers in the post are mine.)

This post does not pretend to present a rigorous analysis of the global temperature record; instead, it intends to show how easy it is for someone with basic programming/math skills to debunk claims that NASA and NOAA have manipulated temperature data to produce their global-average temperature results, i.e. claims like these:

From C3 Headlines: By utilizing questionable adjustments based on even more questionable assumptions, NOAA managed to produce an entirely fabricated increase in the global warming trend from 1998 to 2012.

From a blogger on the Hill: There’s going to have to be a massive effort to pick apart failing climate models and questionably-adjusted data.

From Climate Depot: Over the past decade, NASA and NOAA have continuously altered the temperature record to cool the past and warm the present. Their claims are straight out of Orwell's 1984, and have nothing to do with science.

The routine

Some time ago, after reading all kinds of claims (like the ones above) about how NASA and NOAA had improperly adjusted temperature data to produce their global-average temperature results, I decided to take a crack at the data myself.

I coded up a straightforward baselining/gridding/averaging routine that is quite simple and “dumbed down” in comparison to the NASA and NOAA algorithms. Below is a complete description of the algorithm I coded up.
  1. Using GHCN v3 monthly-average data, compute 1951-1980 monthly baseline temperatures for all GHCN stations. If a station has 15 or more valid temperatures in any given month for the 1951-1980 baseline period, retain that monthly baseline value; otherwise drop that station/month from the computations. Stations with no valid monthly baseline periods are completely excluded from the computations.
  2. For all stations and months where valid baseline temperature estimates were computed per (1) above, subtract the respective baseline temperatures from all of the station monthly temperatures to produce monthly temperature anomalies for the years 1880-2015.
  3. Set up a global gridding scheme to perform area-weighting. To keep things really simple, and to minimize the number of empty grid-cells, I selected large grid-cell sizes (20 degrees x 20 degrees at the Equator). I also opted to adjust the grid-cell latitude extents as one goes north/south of the equator in order to keep the grid-cell areas as nearly constant as possible; this keeps the cells from shrinking (per the latitude cosines) and minimizes the number of empty grid cells.
  4. In each grid-cell, compute the average (over all stations in the grid-cell) of the monthly temperature anomalies to produce a single time-series of average temperature anomalies for each month (years 1880 through 2015).
  5. Compute global average monthly temperature anomalies by averaging together all the grid-cell monthly average anomalies, weighted by the grid-cell areas (again, for years 1880 through 2015).
  6. Compute global-average annual anomalies for years 1880 through 2015 by averaging together the global monthly anomalies for each year.
The algorithm does not involve any station data adjustments (obviously!) or temperature interpolation operations. It’s a pretty basic number-crunching procedure that uses straightforward math plus a wee bit of trigonometry (for computing latitude/longitude grid-cell areas).

For me, the most complicated part of the algorithm implementation was managing the variable data record lengths and data gaps (monthly and annual) in the station data -- basically, the “data housekeeping” stuff. Fortunately, modern development libraries such as the C++ Standard Template Library make this less of a chore than it used to be.
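To make the description above concrete, here is a minimal sketch of such a baselining/gridding/averaging routine in Python (the original appears to have been written in C++, given the STL mention above; this is an illustration, not the author's code). It assumes the fixed-width layout of the GHCN-M v3 monthly .dat file as I understand it from the v3 README (11-character station id, 4-digit year, 4-character element, then twelve 8-character value-plus-flags fields in hundredths of a degree C, with -9999 for missing) and a {station_id: (lat, lon)} dictionary read from the companion .inv file; check those column positions against the README of the file you download. For simplicity it also replaces the equal-area latitude bands of step 3 with a fixed 20 x 20 degree grid weighted by the cosine of the cell-centre latitude.

```python
import math
from collections import defaultdict

MISSING = -9999

def read_ghcn_dat(path):
    """Return {station_id: {year: [12 monthly temperatures in deg C or None]}}."""
    data = defaultdict(dict)
    with open(path) as f:
        for line in f:
            if line[15:19] != "TAVG":            # assumed element field (cols 16-19)
                continue
            sid, year = line[0:11], int(line[11:15])
            months = []
            for m in range(12):
                raw = int(line[19 + 8 * m: 24 + 8 * m])   # 5-char value + 3 flag chars
                months.append(None if raw == MISSING else raw / 100.0)
            data[sid][year] = months
    return data

def monthly_baselines(data, start=1951, end=1980, min_years=15):
    """Step 1: per-station monthly baseline means; None where < min_years of data."""
    base = {}
    for sid, years in data.items():
        per_month = []
        for m in range(12):
            vals = [years[y][m] for y in range(start, end + 1)
                    if y in years and years[y][m] is not None]
            per_month.append(sum(vals) / len(vals) if len(vals) >= min_years else None)
        if any(b is not None for b in per_month):
            base[sid] = per_month
    return base

def global_annual_anomalies(data, base, meta, y0=1880, y1=2015, cell=20):
    """Steps 2-6: anomalies, gridding, area-weighted global monthly and annual means."""
    grid = defaultdict(lambda: defaultdict(list))  # (ilat, ilon) -> (year, month) -> anomalies
    nlat, nlon = 180 // cell, 360 // cell
    for sid, per_month in base.items():
        if sid not in meta:
            continue
        lat, lon = meta[sid]
        key = (min(int((lat + 90) // cell), nlat - 1),
               min(int((lon + 180) // cell), nlon - 1))
        for year, months in data[sid].items():
            if y0 <= year <= y1:
                for m in range(12):
                    if months[m] is not None and per_month[m] is not None:
                        grid[key][(year, m)].append(months[m] - per_month[m])
    annual = {}
    for year in range(y0, y1 + 1):
        monthly = []
        for m in range(12):
            num = den = 0.0
            for (ilat, _), cell_data in grid.items():
                vals = cell_data.get((year, m))
                if vals:
                    w = math.cos(math.radians(-90 + (ilat + 0.5) * cell))  # crude area weight
                    num += w * sum(vals) / len(vals)
                    den += w
            if den > 0:
                monthly.append(num / den)
        if monthly:
            annual[year] = sum(monthly) / len(monthly)
    return annual

# Usage (file name is a placeholder; meta comes from the companion .inv file):
# data = read_ghcn_dat("ghcnm.tavg.v3.qcu.dat")
# result = global_annual_anomalies(data, monthly_baselines(data), meta)
```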

Why this routine?

People unfamiliar with global temperature computational methods sometimes ask: “Why not simply average the temperature station data to compute global-average estimates? Why bother with the baselining and gridding described above?”

We could get away with straight averaging of the temperature data if it were not for the two problems described below.

Problem 1: Temperature stations have varying record lengths. The majority of stations do not have continuous data records that go all the way back to 1880 (the beginning of the NASA/GISS global temperature calculations). Even stations with data going back to 1880 have gaps in their records -- there are missing months or even years.

Problem 2: Temperature stations are not evenly distributed over the Earth’s surface. Some regions, like the continental USA and western Europe, have very dense networks of stations. Other regions, like the African continent, have very sparse station networks.

As a result of problem 1, we have a mix of temperature stations that changes from year to year. If we were simply to average the absolute temperature data from all those stations, the final global-average results would be significantly skewed from year to year due to the changing mix of stations from one year to the next.

Fortunately, the solution for this complication is quite straightforward: the baselining and anomaly-averaging procedure described above. For those who are already familiar with this procedure, please bear with me while I illustrate how it works with a simple scenario constructed from simulated data.

Let’s consider a very simple scenario where the full 1880-2016 temperature history for a particular region is contained in data reported by two temperature stations, one of which is located on a hilltop and the other located on a nearby valley floor. The hilltop and valley floor locations have identical long-term temperature trends, but the hilltop location is consistently about 1 degree C cooler than the valley floor location. The hilltop temperature station has a temperature record starting in 1880 and ending in 1990. The valley floor station has a temperature record beginning in 1930 and ending in 2016.

Figure 1 below shows the simulated temperature time-series for these two hypothetical stations. Both time-series were constructed by superimposing random noise on the same linear trend, with the valley-floor station time-series having a constant offset temperature 1 degree C higher than that of the hilltop station time-series. The simulated time-series for the hilltop station (red) begins in 1880 and continues to 1990. The simulated valley-floor station data (blue) begins in 1930 and runs to 2016. As can be seen during their period of overlap (1930-1990), the simulated valley-floor temperature data runs about 1 degree warmer than the simulated hilltop temperature data.


Figure 1: Simulated Hilltop Station Data (red) and Valley Floor Station Data (blue)

If we were to attempt to construct a complete 1880-2016 temperature history for this region by computing a straight average of the hilltop and valley floor data, we would obtain the results seen in Figure 2 below.


Figure 2: Straight Average of Valley Floor Station Data and Hilltop Station Data

The effects of the changing mix of stations (hilltop vs. valley floor) on the average temperature results can clearly be seen in Figure 2. A large temperature jump is seen at 1930, where the warmer valley floor data begins, and a second temperature jump is seen at 1990 where the cooler hilltop data ends. These temperature jumps obviously do not represent actual temperature increases for that particular region; instead, they are artifacts introduced by the changes in the mix of stations in 1930 and 1990.

An accurate reconstruction of the regional temperature history computed from these two temperature time-series obviously should show the warming trend seen in the hilltop and valley floor data over the entire 1880-2016 time period. That is clearly not the case here. Much of the apparent warming seen in Figure 2 is a consequence of the changing mix of stations.

Now, let’s modify the processing a bit by subtracting the (standard NASA/GISS) 1951-1980 hilltop baseline average temperature from the hilltop temperature data and the 1951-1980 valley floor baseline average temperature from the valley floor temperature data. This procedure produces the temperature anomalies for the hilltop and valley floor stations. Then, for each year, we compute the average of the station anomalies over the 1880-2016 time period.

This is the baselining and anomaly-averaging procedure that is used by NASA/GISS, NOAA, and other organizations to produce their global-average temperature results.

When this baselining and anomaly-averaging procedure is applied to the simulated temperature station data, it produces the results that can be viewed in Figure 3 below.


Figure 3: Average of Valley Floor Station Anomalies and Hilltop Station Anomalies

In Figure 3, the temperature jumps associated with the beginning of the valley floor data record and the end of the hilltop data record have been removed, clearly revealing the underlying temperature trend shared by the two temperature time-series.

Also note that although neither of my simulated temperature stations has a full 1880-2016 temperature record, we were still able to compute a complete reconstruction for the 1880-2016 time period because there was enough overlap between the station records to allow us to “align” them via baselining.
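The two-station experiment itself is easy to repeat. Below is a minimal sketch in Python that generates two such synthetic stations and prints the straight average next to the anomaly average for a few years; the trend, offset and noise levels are arbitrary illustrative choices, not the values behind the figures above. The artificial jump around 1930 shows up only in the straight average.

```python
import random

random.seed(1)
years = range(1880, 2017)
# shared regional signal: a linear trend plus some year-to-year weather noise
regional = {y: 0.008 * (y - 1880) + random.gauss(0, 0.15) for y in years}

hilltop = {y: regional[y] + random.gauss(0, 0.1) for y in years if y <= 1990}
valley = {y: regional[y] + 1.0 + random.gauss(0, 0.1) for y in years if y >= 1930}

def straight_average(stations, y):
    vals = [s[y] for s in stations if y in s]
    return sum(vals) / len(vals)

def anomaly_average(stations, y, base=(1951, 1980)):
    anoms = []
    for s in stations:
        ref = [s[yy] for yy in range(base[0], base[1] + 1) if yy in s]
        if y in s and ref:
            anoms.append(s[y] - sum(ref) / len(ref))
    return sum(anoms) / len(anoms)

for y in (1925, 1935, 1985, 1995):
    print(y, round(straight_average([hilltop, valley], y), 2),
          round(anomaly_average([hilltop, valley], y), 2))
```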

The second problem, the non-uniform distribution of temperature stations, can clearly be seen in Figure 4 below. That figure shows all GHCNv3 temperature stations that have data records beginning in 1900 or earlier and continuing to the present time.


Figure 4: Long-Record GHCN Station Distribution

As one can see, the stations are highly concentrated in the continental USA and western Europe; Africa and South America, in contrast, have very sparse coverage. A straight unweighted average of the data from all the stations shown in the above image would result in temperature changes in the continental USA and western Europe “swamping out” temperature changes in South America and Africa in the final global average calculations.

That is the problem that gridding solves. The averaging procedure using grid-cells is performed in two steps. First, the temperature time-series for all stations in each grid-cell are averaged together to produce a single time-series per grid-cell. Then all the grid-cell time-series are averaged together to construct the final global-average temperature results (note: in the final average, the grid-cell time-series are weighted according to the size of each grid-cell). This eliminates the problem where areas on the Earth with very dense networks of stations are over-weighted in the global average relative to areas where the station coverage is more sparse.

Now, some have argued that the sparse coverage of certain regions of the Earth invalidates the global-average temperature computations. But it turns out that the NASA/GISS warming trend can be confirmed even with a very sparse sampling of the Earth’s surface temperatures. (In fact, the NASA/GISS warming trend can be replicated very closely with data from as few as 30 temperature stations scattered around the world.)

Real-world results

Now that we are done with the preliminaries, let’s look at some real-world results. Let’s start off by taking a look at how my simple “dumbed-down” gridding/averaging algorithm compares with the NASA/GISS algorithm when it is used to process the same GHCNv3 adjusted data that NASA/GISS uses. To see how my algorithm compares with the NASA/GISS algorithm, take a look at Figure 5 below, where the output of my algorithm is plotted directly against the NASA/GISS “Global Mean Estimates based on Land Data only” results.

(Note: All references to NASA/GISS global temperature results in this post refer specifically to the NASA/GISS “Global Mean Estimates based on Land Data only” results. Those results can be viewed on the NASA/GISS web-site; scroll down to view the “Global Mean Estimates based on Land Data only” graph).


Figure 5: Adjusted Data, All Stations: My Simple Gridding/Averaging (blue) vs. NASA/GISS (red)

In spite of its rudimentary nature, my algorithm produces results that match the NASA/GISS results quite closely. According to the R-squared statistic I calculated (seen in the upper-left corner of Figure 5), I got 98% of the NASA/GISS answer with only a tiny fraction of the effort!
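As an aside, the goodness-of-fit number is easy to verify yourself. The sketch below computes the squared Pearson correlation over the overlapping years of two {year: anomaly} series, which equals the R-squared of a simple linear fit of one series on the other; the post does not say exactly which variant was used, so treat this as one reasonable choice.

```python
def r_squared(a, b):
    """Squared Pearson correlation of two {year: anomaly} dicts over their common years."""
    common = sorted(set(a) & set(b))
    x = [a[y] for y in common]
    y = [b[y] for y in common]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy * sxy / (sxx * syy)

# e.g. r_squared(my_annual_anomalies, giss_land_only_annual_anomalies)
```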

But what happens when we use unadjusted GHCNv3 data? Well, let’s go ahead and compare the output of my algorithm with the NASA/GISS algorithm when my algorithm is used to process the unadjusted GHCNv3 data. Figure 6 below shows a plot of my unadjusted global temperature results vs. the NASA/GISS results (remember that NASA/GISS uses adjusted GHCNv3 data).


Figure 6: Unadjusted Data, All Stations: My Simple Gridding /Averaging (green) vs. NASA/GISS (red)

My “all stations” unadjusted data results show a warming trend that lines up very closely with the NASA/GISS warming trend from 1960 to 2016, with my results as well as the NASA/GISS results showing record high temperatures for 2016. However, my results do show a visible warm-bias relative to the NASA/GISS results prior to 1950 or so. This is the basis of the accusations that NOAA and NASA “cooled the past (and warmed the present)” to exaggerate the global warming trend.

Now, why do my unadjusted data results show that pre-1950 “warm bias” relative to the NASA/GISS results? Well, this excerpt from NOAA’s GHCN FAQ provides some clues:
Why are there more cold (negative) step changes than warm (positive) step changes in the historical land surface air temperature records represented in the GHCN v3 dataset?

The reason for the larger number of cold step changes is not completely clear, but they may be due in part to systematic changes in station locations from city centers to cooler airport locations that occurred in many parts of the world from the 1930s through the 1960s.
Because the GHCNv3 metadata contains an airport designator field for every temperature station, it was quite easy for me to modify my program to exclude all the “airport” stations from the computations. So let’s exclude all of the “airport” station data and see what we get. Figure 7 below shows my unadjusted data results vs. the NASA/GISS results when all “airport” stations are excluded from my computations.


Figure 7: Unadjusted Data, Airports Excluded (green) vs. NASA/GISS (red)

There is a very visible reduction in the bias between my unadjusted results and the NASA results (especially prior to 1950 or so) when airport stations are excluded from my unadjusted data processing. This is quite consistent with the notion that many of the stations currently located at airports were moved to their current locations from city centers at some point during their history.
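For reference, the airport filter amounts to a one-character test per record of the station inventory file. Here is a minimal sketch; the column positions (station id in columns 1-11, latitude and longitude next, and a one-character airport flag, 'A' for airport, near the end of the record) are my reading of the GHCN-M v3 README and should be checked against the README shipped with the file you download.

```python
def read_inventory(path, exclude_airports=False):
    """Return {station_id: (lat, lon)}, optionally dropping airport stations."""
    meta = {}
    with open(path) as f:
        for line in f:
            sid = line[0:11]
            lat, lon = float(line[12:20]), float(line[21:30])  # assumed positions
            is_airport = line[87:88] == "A"                    # assumed AIRSTN column
            if exclude_airports and is_airport:
                continue
            meta[sid] = (lat, lon)
    return meta

# meta = read_inventory("ghcnm.tavg.v3.qcu.inv", exclude_airports=True)
```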

Now just for fun, let’s look at what happens when we do the reverse and exclude non-airport stations (i.e. process only the airport stations). Figure 8 shows what we get when we process unadjusted data exclusively from “airport” stations.


Figure 8: Unadjusted Data, Airports Only (green) vs. NASA/GISS (red)

Well, look at that! The pre-1950 bias between my unadjusted data results and the NASA/GISS results really jumps out. And take note of another interesting thing about the plot -- in spite of the fact that I processed only “airport” stations, the green “airports only” temperature curve goes all the way back to 1880, decades prior to the existence of airplanes (or airports)! It is only reasonable to conclude that those “airport” stations must have been moved at some point in their history.

Now, for a bit more fun, let’s drill down a little further into the data and process only airport stations that also have temperature data records going back to 1903 (the year that the Wright Brothers first successfully flew an airplane) or earlier.

When I drilled down into the data, I found over 400 “airport” temperature stations with data going back to 1903 or earlier. And when I computed global-average temperature estimates from just those stations, this is what I got (Figure 9):


Figure 9: Unadjusted Data, Airport Stations with pre-1903 Data (green) vs. NASA/GISS (red)

OK, that looks pretty much like the previous temperature plot, except that my results are “noisier” due to the fact that I processed data from fewer temperature stations.

And for even more fun, let’s look at the results we get when we process data exclusively from non-airport stations with data going back to 1903 or earlier:


Figure 10: Unadjusted Data, Non-Airport Stations with pre-1903 Data (green) vs. NASA/GISS (red)

When only non-airport stations are processed, the pre-1950 “eyeball estimate” bias between my unadjusted data temperature curve and the NASA/GISS temperature curve is sharply reduced.

The results seen in the above plots are entirely consistent with the notion that the movement of large numbers of temperature stations from city centers to cooler outlying airport locations during the middle of the 20th Century is responsible for much of the bias seen between the unadjusted and adjusted GHCNv3 global-average temperature results.

It is quite reasonable to conclude, based on the results presented here, that one major reason for the bias seen between the GHCNv3 unadjusted and adjusted data results is the presence of corrections for those station moves in the adjusted data (corrections that are obviously absent from the unadjusted data). Those corrections remove the contaminating effects of station moves and permit more accurate estimates of global surface temperature increases over time.

Take-home lessons (in no particular order):

  1. Even a very simple global temperature algorithm can reproduce the NASA/GISS results very closely. This really is a case where you can get 98% of the answer (per my R-squared statistic) with less than 1% of the effort.
  2. NOAA’s GHCNv3 monthly data repository contains everything an independent “citizen scientist” needs (data and documentation) to conduct his/her own investigation of the global land station temperature data.
  3. A direct comparison of unadjusted data results (all GHCN stations) vs. the NASA/GISS adjusted data temperature curves reveals only modest differences between the two temperature curves, especially for the past 6 decades. Furthermore, my unadjusted and the NASA/GISS adjusted results show nearly identical (and record) temperatures for 2016. If NASA and NOAA were adjusting data to exaggerate the amount of planetary warming, they sure went to an awful lot of trouble and effort to produce only a small overall increase in warming in the land station data.
  4. Eliminating all “airport” stations from the processing significantly reduced the bias between my unadjusted data results and the NASA/GISS results. It is therefore reasonable to conclude that a large share of the modest bias between my GHCN v3 unadjusted results and the NASA/GISS adjusted data results is the result of corrections for station moves from urban centers to outlying airports (corrections present in the adjusted data, but not in the unadjusted data).
  5. Simply excluding “airport” stations likely eliminates many stations that were always located at airports (and never moved) and also fails to eliminate stations that were moved out from city centers to non-airport locations. So it is not a comprehensive evaluation of the impacts of station moves. However, it is a very easy “first step” analysis exercise to perform; even this incomplete “first step” analysis produces results that are strongly consistent with the hypothesis that corrections for station moves are likely the dominant reason for the pre-1950 bias seen between the adjusted and unadjusted GHCN global temperature results. Remember that many urban stations were also moved from city centers to non-airport locations during the mid-20th century. Unfortunately, those station moves are not recorded in the simple summary metadata files supplied with the GHCNv3 monthly data. An analysis of NOAA’s more detailed metadata would be required to identify those stations and perform a more complete analysis of the impacts of station moves. However, that is outside of the scope of this simple project.
  6. For someone who has the requisite math and programming skills, confirming the results presented here should not be very hard at all. Skeptics should try it some time. Provided that those skeptics are willing and able to accept results that contradict their original views about temperature data adjustments, they could have a lot of fun taking on a project like this.

Related reading

Also the Clear Climate Code project was able to reproduce the results of NASA-GISS. Berkeley Earth made a high-level independent analysis and confirmed previous results. Also (non-climate) scientist Nick Stokes (Moyhu) computed his own temperature signal, TempLS, which also fits well.

In 2010 Zeke Hausfather analyzed the differences in GHCNv2 between airport and other stations and found only minimal differences: Airports and the land temperature record.

At about the same time David Jones at Clear Climate Code also looked at airport stations, simply splitting the dataset into two groups, and did find differences: Airport Warming. Thus making sure both groups are regionally comparable is probably important.

The global warming conspiracy would be huge. Not only the 7 global datasets, but also national datasets from so many groups show clear warming.

Just the facts, homogenization adjustments reduce global warming.

Why raw temperatures show too little global warming.

Irrigation and paint as reasons for a cooling bias.

Temperature trend biases due to urbanization and siting quality changes.

Temperature bias from the village heat island

Cooling moves of urban stations. From cities to airports or simply to outside a city or village.

The transition to automatic weather stations. We’d better study it now. It may be a cooling bias.

Changes in screen design leading to temperature trend biases.

Early global warming

Cranberry picking short-term temperature trends

How climatology treats sceptics