Saturday, September 24, 2016

Twelve coin problem

Update: New constructive algorithm appended.

Things are a bit quiet in climate blogging - so with a weekend coming I'll honor my ancient promise of diverting to some recreational math. I saw a few weeks ago a mention at Lucia's of the old twelve coin problem. Twelve coins, one of which is fake and of different weight to the others, and to be found with three weighings on a balance (Update: the usual spec, as here, is that you also have to say whether it is heavier or lighter). This first came to prominence in WWII, when it was said to have become a distraction to the war effort in places like Bletchley Park, leading to suspicions that it had been planted by the Germans. A suggested counter was for the RAF to drop it on Germany.

I first encountered it when I started University; it was posed in circumstances where I was expected to be able to solve it, and I was embarrassed. A year later, I went to a lecture on information theory (newish in those days). I was struck by the proposition, helpful in later years, that the information in a result was a function of prior uncertainty. So that is the clue - each weighing should be arranged so each of 3 outcomes was as near equally probable as could be managed, maximising prior uncertainty. Then I could solve it easily, and also versions with more coins.

The basic constraint is that in N weighings there are 3^N possible outcomes, while with n coins there are 2n situations to resolve, since only one coin is false, and it could be heavy or light. So for 12 coins, there are 24 possibilities and 27 outcomes of 3 weighings, so it is tight but possible. An alternative to the equiprobable-outcome criterion is the requirement that each weighing be arranged so that every outcome is resolvable with the remaining weighings.
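As a sanity check on that counting, here is a trivial Javascript fragment (mine, not part of the gadget below) that tests the necessary condition 2n ≤ 3^N:

```javascript
// Necessary condition only: n coins give 2n possibilities (which coin, and heavy or light),
// and N weighings give 3^N distinguishable outcomes.
function feasibleByCounting(nCoins, nWeighings) {
  return 2 * nCoins <= Math.pow(3, nWeighings);
}

console.log(feasibleByCounting(12, 3));  // true:  24 <= 27
console.log(feasibleByCounting(13, 3));  // true:  26 <= 27 (but see the proof below - an extra known good coin is needed)
console.log(feasibleByCounting(14, 3));  // false: 28 > 27
```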

When I saw it mentioned again, I started thinking of a constructive algorithm that would also prove feasibility for all cases where there were enough results to theoretically resolve. I had an idea of describing the proof, but then it seemed my other recent hobby, Javascript programming, might help. So I've made an interactive version (below) with the information needed to solve it.

You can choose a number of coins up to 121, and then start (or restart at any time). There are boxes for left, right and off-scale. The buttons representing coins have colored bands at the top and sometimes the bottom. The top band shows the coin's HL status. Initially each coin might be the bad one, heavy or light (HL). But once there has been a weighing that tilted, some possibilities fail. On the side that dropped, the coins could no longer be light, hence H. On the other side, they are L. And any not on the balance must be good (G). So the total number of possibilities remaining is the HL score = 2×(number of HL coins) + (number of H) + (number of L).

There is another, more subtle score - DU. This applies to the coins on the balance before weighing, and relates to whether you will gain information if the left pan goes down or up. HL coins will always become either H or L, so they are not scored. But if the left pan goes down, an L on the left or an H on the right would then be known to be good, so such coins are marked D; if the left pan went up, that would tell nothing new about D coins. And similarly, an H on the left or an L on the right is counted as U.

You can move coins to the right (cyclic) by mouse click, or left by click with shift key pressed. If they are H or L, you will see the lower DU bar change as they move. When a weighing has been set up, click the weigh button.

The color scheme for top bars is HL black; H orange; L blue and G yellow. For the bottom, only H and L coins on the balance have color, and it's red for D, green for U. Here is an image of the gadget immediately following a first weighing of 12 coins. Note the balance tilt. The four coins on the left are now H because they are on the heavy side, and U because they are H and on the left. On the right, they are L similarly, and also U. The off-balance coins are all G, yellow.



You'll see a table with the HL counts for on and off the balance, and the D and U counts. Three are in large fonts, and those are the ones needed for solution. The strategy is that for every weighing, the D and U should be as near equal as possible, and the off HL score should be about half the on-balance score. More critically, each should be ≤ 3^M, where M is the number of weighings to follow after the one being set up. So in the image, for the next weighing, these numbers should all be ≤3. So take 3 off, then swap the others until the D and U scores are each ≤3. You'll see that as you rearrange, the system pads with known good coins (if available) to retain balance, and removes surplus. You can try to minimise usage of dummies; you should be able to reduce the need to at most two. If known good coins aren't available (first weighing), don't worry; the system will imagine them present.
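For those who prefer code to prose, here is a small sketch of the bookkeeping (my own notation; the gadget's internals may differ). It computes the on and off HL scores and the D and U counts for a proposed weighing - the numbers the strategy says should each be ≤ 3^M. Following the update noted further down, an HL coin on the balance counts 1 to both D and U.

```javascript
// Score a proposed weighing. A coin is {state, pan}: state 'HL', 'H', 'L' or 'G',
// pan 'left', 'right' or 'off'. The HL score counts remaining possibilities
// (2 per HL coin, 1 per H or L, 0 per G). D counts coins cleared if the left pan
// goes down (L on left, H on right); U counts those cleared if it goes up
// (H on left, L on right). An HL coin on the balance counts 1 to both.
function scoreWeighing(coins) {
  let offHL = 0, onHL = 0, D = 0, U = 0;
  for (const c of coins) {
    const w = c.state === 'HL' ? 2 : c.state === 'G' ? 0 : 1;
    if (c.pan === 'off') { offHL += w; continue; }
    onHL += w;
    if (c.state === 'L') { if (c.pan === 'left') D++; else U++; }
    if (c.state === 'H') { if (c.pan === 'left') U++; else D++; }
    if (c.state === 'HL') { D++; U++; }
  }
  return { offHL, onHL, D, U };
}

// The standard first weighing for 12 coins: 4 on each pan, 4 off.
const coins = [];
for (let i = 0; i < 12; i++)
  coins.push({ state: 'HL', pan: i < 4 ? 'left' : i < 8 ? 'right' : 'off' });
console.log(scoreWeighing(coins));   // { offHL: 8, onHL: 16, D: 8, U: 8 } - all <= 3^2 = 9
```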

So here is the gadget. Just press start, then move coins for weighing with click and shift-click, bearing in mind the above strategy. As long as the three big numbers are ≤ 3^M, you can solve in the minimum number of weighings. It is solved when there is just one coin with H or L color showing.



I have to say, it is now a bit mechanical. You don't need to know the coin numbers, or even what the colors mean, as long as you keep your eye on the scores. But the Javascript isn't solving it - it's just adding up what you could count yourself, but displayed in a helpful way.

Update: I have changed the DU score to count 1 each for HL (black) coins. This is more consistent when there are such coins. It means that all 3 large-font numbers should now be made as nearly as possible equal and ≤3^M.

Update - proof

I originally planned this post as a constructive proof - show a method that is sure to work. I think the scoring goes a long way there. But a method can be spelt out. Suppose we have a set of coins with HL score g≤3^(N+1), to resolve in N+1 weighings. It's enough to show that a weighing can be made to reduce this to an assured score h≤3^N. And the point of the big font numbers in the gadget is that they are the respective HL scores after each possible outcome.

Let g = 2p + q, where 3^N ≥ p ≥ q ≥ 0. Normally p = 3^N is OK, but if that leaves q<0, you can reduce p until q is non-negative. Then take a set of coins with HL score q (including all known good ones) to be kept off the balance. If all coins have HL score 2 (as at the start), that's still possible, because g, and hence q, must be even.

Then put the remaining coins one by one on the balance. Depending on which side, the D and U scores will increment by 1 or zero. Put all the H coins on first, alternating sides so the gap between D and U is never more than 1. Then the L coins, also placed so as to increase whichever of D, U is less. That will also alternate, so there will be a maximum imbalance of coin numbers of 2. Then finally, if there are HL coins, put them on whichever side improves the coin-number balance; each will increase both D and U by 1.

This ensures D and U never differ by more than 1. And since D+U is the HL score of the coins on the balance, when loaded D=U=p≤3^N. Since D, U and q are the respective HL scores after each possible balance result (left pan down, left pan up, balance), and each is ≤3^N, that completes the proof.

Numbers of coins on each pan may differ by up to two. After the first weighing, this can be balanced by adding known good coins. On the first weighing, if g<3^(N+1)−1, you can always choose p even, so the coins can split equally. In the worst case, where g=3^(N+1)−1 (eg 13 coins in 3 weighings), an external known good coin is needed.
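Putting the last few paragraphs into code, here is a rough Javascript sketch of the construction (my own data layout, not the gadget's code): choose p and q, set aside coins with HL score q, then load the pans so that D and U stay level.

```javascript
// A coin is {state}, state one of 'HL', 'H', 'L', 'G'; its HL score is 2, 1, 1 or 0.
const hlScore = c => c.state === 'HL' ? 2 : c.state === 'G' ? 0 : 1;

// Build one weighing for a position with HL score g, with N weighings still to follow.
function buildWeighing(coins, N) {
  const cap = Math.pow(3, N);
  const g = coins.reduce((s, c) => s + hlScore(c), 0);
  const p = Math.min(cap, Math.floor(g / 2));  // largest p with p <= 3^N and q >= 0
  let q = g - 2 * p;                           // q <= 3^N follows when g <= 3^(N+1)

  // Off the balance: all known good coins, plus suspects making up HL score q
  // (HL coins while 2 still fits; a score-1 coin is always available for a final 1).
  const off = coins.filter(c => c.state === 'G');
  const hls = coins.filter(c => c.state === 'HL');
  const singles = coins.filter(c => c.state === 'H' || c.state === 'L');
  while (q > 0) {
    if (q >= 2 && hls.length) { off.push(hls.pop()); q -= 2; }
    else { off.push(singles.pop()); q -= 1; }
  }

  // Load the pans. An H on the left raises U, on the right raises D; an L is the
  // reverse; an HL coin raises both. Always top up whichever count is behind.
  const left = [], right = [];
  let D = 0, U = 0;
  for (const c of singles) {
    if (c.state !== 'H') continue;             // H coins first, alternating in effect
    if (U <= D) { left.push(c); U++; } else { right.push(c); D++; }
  }
  for (const c of singles) {
    if (c.state !== 'L') continue;             // then the L coins
    if (D <= U) { left.push(c); D++; } else { right.push(c); U++; }
  }
  for (const c of hls) {
    (left.length <= right.length ? left : right).push(c);
    D++; U++;
  }
  // Any difference in pan counts (at most 2) is then made up with known good coins.
  return { off, left, right, D, U };
}
```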

Update: New constructive algorithm.

I thought of a simple constructive algorithm, with notation. Whenever a coin is on a tipped balance, you have half-knowledge about it. If it is on the down side, you know it can't be light, so I call it an H coin. On the other side, L. A known good coin I'll call G; one not half-known, U.

I'll call an odd trio one that is HHL or HLL. An odd trio is the most that can be resolved in one weighing. Set one of the repeated letter aside and weigh the remaining H and L against two good coins (GG). If the HL pan tips up, the L is the culprit; if down, the H. If balanced, the coin set aside is the bad one, and you already know whether it is H or L.

Any half-known duo can be resolved by just balancing one of its coins against a G. So at the end of the second weighing, we must have nothing harder than an odd trio.
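In code, these two end-game steps might look like this (a sketch; 'tip' is the result seen by the pan holding the suspect coins):

```javascript
// Odd trio (HHL or HLL): one of the repeated letter is set aside, the remaining H and L
// are weighed against two good coins. asideState is 'H' or 'L'; tip is 'down', 'up' or 'level'.
function resolveOddTrio(asideState, tip) {
  if (tip === 'down') return 'the H coin on the balance is heavy';
  if (tip === 'up') return 'the L coin on the balance is light';
  return 'the coin set aside is bad: ' + (asideState === 'H' ? 'heavy' : 'light');
}

// Half-known duo (one H, one L): weigh the H against a good coin.
// It cannot rise, so only 'down' or 'level' can occur.
function resolveDuo(tip) {
  return tip === 'down' ? 'the H coin is heavy' : 'the L coin is light';
}
```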

So start with 8 on the scale, 4 off. This makes the 3 outcomes equally likely. If =, the 8 are G. Put 3 of the 4 on the scales, with one G to make up the numbers. If = again, there is just one coin left, and it can be weighed against a G. Otherwise, we have an odd trio.

If the first weighing tipped, remove an HHL, leave an HLL in place, and interchange the remaining H and L. Add a G to even up the coin numbers for the next weighing. If =, the odd trio HHL taken off has to be resolved. If it tips the same way, the HLL left in place has it. If it tips the other way, the interchanged HL pair needs to be resolved.

In this table, each cell contains 3 groups: the first is the coins set aside, the other two are those placed on the left and right pans (known good coins not needed for the weighing are omitted). The table stops at the third weighing, because every case remaining there is an odd trio or easier, which that weighing resolves.

Weighing 1:  [UUUU | UUUU | UUUU]
Outcome 1:   tipped → [GGGG | HHHH | LLLL];   balanced → [UUUU | GGGG | GGGG]
Weighing 2:  [HHL | HGL | LLH];   [U | UU | UG]
Outcome 2:   balanced → [HHL | GGG | GGG], same way → [GGG | HGG | LLG], other way → [GGG | GGL | GGH];   balanced → [U | GG | GG], tipped → [G | HH | LG]
Weighing 3:  [H | HL | GG], [L | HL | GG], [L | H | G];   [G | U | G], [H | HL | GG]

Tuesday, September 20, 2016

CRUTEM (HADCRUT) versions are documented and accessible

I have encountered at WUWT ongoing complaints about HADCRUT 4 updates. It is currently in a thread here, but goes back to an earlier post here. The complaints typically say that the new versions always raise current anomalies, and suggest that they are poorly documented. In fact, the changes are extensively noted; see directory here.

In the earlier thread Tim Osborn commented here, to say mainly that the changes were due to changes (mainly additions) to station data, and listed the particular additions in HADCRUT 4.3. He also later made the important point that there is a good reason why the trends rise with new data. HADCRUT is a land/ocean set, but the empty cells are mainly on land, and the new data allows some of them to be filled. HADCRUT is an average by grid (area-weighted), in which cells without data are simply not included. That has the effect of assigning to them the global average, which is dominated by sea temperature. If new stations supply genuine land values to empty cells, that will increase the trend, because land is warming more rapidly. HADCRUT had artificially low trends because of this missing-value policy, which was remedied by Cowtan and Way (2013) - discussed here in a series of posts, with links here.

But another feature of HADCRUT transparency is insufficiently appreciated. For Ver 4, at least, they give a complete listing of station data for each version, with each station file documented. Here is a typical version file; it is for 4.4, but just change that URL to 4.2 or whatever you want. Each version page links to a zip file of the station data for that version (except for Poland), which has a URL like http://www.metoffice.gov.uk/hadobs/crutem4/data/previous_versions/4.4.0.0/station_files/CRUTEM.4.4.0.0.station_files.zip. I'm spelling out the URL because if you click on it, it will immediately download about 18 MB. But again, you can edit it for other versions.
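If you are scripting the downloads, the version string is the only part of that URL that changes, so a one-liner builds the link - assuming the same pattern holds for the version you want, which is all the advice above relies on:

```javascript
// Build the station-file zip URL for a CRUTEM version, e.g. "4.2.0.0" or "4.4.0.0".
// (Pattern copied from the 4.4.0.0 link above; assumed, not guaranteed, for other versions.)
function crutemStationZipUrl(version) {
  return 'http://www.metoffice.gov.uk/hadobs/crutem4/data/previous_versions/' + version +
         '/station_files/CRUTEM.' + version + '.station_files.zip';
}
console.log(crutemStationZipUrl('4.4.0.0'));
```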

I couldn't find inventory files, though, except for 4.5. But it's easy enough to make them from the file headers. So I've done that, and placed the zipfile here (612 KB). It has a csv file for each of 4.2-4.5, and the columns have 3-letter abbreviations meaning:
  • num - a unique HAD station number
  • nam - name of station
  • cou - country name
  • lat - latitude in deg
  • lon - longitude in deg
  • alt - altitude in m
  • sta - start year of data
  • end - end year
  • sou - source id code
UEA has an explanations file here, which is the best source I have found for the source id codes, but unfortunately it dates from 2012, and new codes have been added pretty much with each new dataset that has come in. I'd be glad to hear of something more recent. It isn't really a problem, because the later numbers are in order of addition, so are easy to work out. Note that the files are in number order, but countries are not necessarily consecutive blocks.
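For what it's worth, here is a Node.js-style sketch of how such an inventory can be built from the station-file headers. The header key names (Number=, Name=, and so on) and the 'Obs:' data marker are placeholders of mine, not the exact CRUTEM layout, so check them against the actual files:

```javascript
// Sketch: read a directory of unzipped station files and write an inventory csv
// with the nine columns listed above. Header keys below are assumptions - adjust as needed.
const fs = require('fs');
const path = require('path');

const KEYS = {
  'Number': 'num', 'Name': 'nam', 'Country': 'cou', 'Lat': 'lat', 'Long': 'lon',
  'Height': 'alt', 'Start year': 'sta', 'End year': 'end', 'Source': 'sou'
};

function headerToRow(text) {
  const row = {};
  for (const line of text.split('\n')) {
    if (/^\s*Obs:/.test(line)) break;                  // assumed start of the data block
    const m = line.match(/^\s*([^=]+)=\s*(.*)$/);      // lines of the form "Key= value"
    if (m && KEYS[m[1].trim()]) row[KEYS[m[1].trim()]] = m[2].trim();
  }
  return row;
}

function buildInventory(dir, outCsv) {
  const cols = ['num', 'nam', 'cou', 'lat', 'lon', 'alt', 'sta', 'end', 'sou'];
  const lines = [cols.join(',')];
  for (const f of fs.readdirSync(dir)) {
    const row = headerToRow(fs.readFileSync(path.join(dir, f), 'utf8'));
    lines.push(cols.map(c => row[c] || '').join(','));   // no csv quoting - quick and dirty
  }
  fs.writeFileSync(outCsv, lines.join('\n'));
}

// buildInventory('station_files', 'inventory_4.4.csv');
```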

So I thought I would just post this information, so that people who really want to know what HADCRUT is up to can look it up. I may in future produce a Google map.



Friday, September 16, 2016

Arctic ice freezing, Antarctic melting

As many have now noted, Arctic sea ice stopped melting rather abruptly after 6 September, and has lately been freezing quite rapidly. The pattern is quite similar to last year, but a few days earlier. In both JAXA and NSIDC the minimum was lower than 2007, but higher than 2012, so in second place (but in NSIDC, only just).

Meanwhile, more surprisingly, Antarctic ice has been melting strongly for six days, and now stands below all other years of this century, at least for this time of year. From the radial plot here, I'll show the NSIDC plots for recent days. The plot spans most of September.

[NSIDC radial plots: Arctic | Key | Antarctic]

The red curve is 2002. Orientation same as NH.


Thursday, September 15, 2016

Thermodynamics of climate feedback

I have been describing and responding to blog arguments about climate feedback and circuit analogies here and here. The arguments have continued, and they do provoke ideas. I'm going to write some down in this post.

The usual circuit analogy has surface temperature as voltage, and TOA flux as current. I showed in the first post that the feedback, including Planck, could be regarded as conductances. It's interesting to probe what this might mean. The units are watts/m²/K, which are actually the units of entropy/s/m². Does entropy make sense?

I wrote about entropy and atmospheric fluxes here and here. Sunlight (Q=240 W/m² global average after albedo) arrives, does things, loses the capacity to do work, and eventually leaves as thermal IR. It has accumulated entropy, or if you prefer, lost free energy. You might think that with a heat sink at 3K (space) the heat could go on doing work. But in fact you need a minimum temperature to radiate that flux to space, which for Earth is about 255K (note 1). That constitutes a resistance. To get Q=240 W/m² to flow to space, you need 255K (voltage).

In our system, that resistance, inverted, is the Planck conductance, or feedback. It represents the entropy flux to space. It's really the maximum or optimal entropy flux for 240 W/m². In fact, emission to space comes from a rather large component at about 225 W/m², from GHGs, and some from the surface at average 288K (atmospheric window). We know uniform blackbody emission exports most entropy for a given flux, because any variation means that more entropy could be generated by transporting heat from the hotter parts to cooler.

This lies behind a supposed failing proclaimed in a WUWT post of Lord Monckton. The Planck feedback calculated for the Earth at 255K, the temperature for uniform BB emission of 240 W/m², is 3.75 W/m²/K, and Lord M thinks they erred by not using it. But its inadequacy has been long known, and I wrote in the previous post how Soden and Held (among others) did a thorough study with GCMs to get a value of about 3.2 W/m²/K. The difference is usually attributed to absorption in the atmosphere, but thermo gives an alternative viewpoint, which I find more useful. It is the entropy export reduced by the non-uniformity (sub-optimality) of apparent emission temperature.
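The two numbers quoted here - the 255K uniform emission temperature and the 3.75 W/m²/K blackbody value - are just Stefan-Boltzmann arithmetic, which a few lines of Javascript confirm:

```javascript
// Stefan-Boltzmann check of the numbers in the text.
const sigma = 5.67e-8;                        // W/m^2/K^4
const Q = 240;                                // W/m^2 absorbed solar, after albedo
const Te = Math.pow(Q / sigma, 0.25);         // temperature of a uniform blackbody emitting Q
const planck = 4 * sigma * Math.pow(Te, 3);   // dQ/dT at Te, i.e. 4Q/Te
console.log(Te.toFixed(1));                   // ~255.1 K
console.log(planck.toFixed(2));               // ~3.76 W/m^2/K, the "3.75" blackbody value
```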

Tuesday, September 13, 2016

GISS up 0.13°C in August

GISS is up from 0.85°C in July to 0.98°C in August. This compares with a larger rise of 0.21° in TempLS (but GISS rose in July when TempLS dropped slightly, so over two months, about the same), and is much more than the small rise in the NCEP/NCAR index. It is also the warmest August in the record (next was 0.79°C in 2011).

I'll show the map comparisons below the fold. The updated comparison plots with 1998 are here.

Sunday, September 11, 2016

Unicode gadget

I'm planning a new page which has gadgets that I use for blog writing etc. The first entry is likely to be a Unicode writer; here is a draft you might like to try. Unicode is the massive collection of special characters which you can probably access using your system character generator - on Windows it is a Windows Accessory called Character Map.

Unicode chars come in groupings; a good listing is here. Each char has a number, and you can render it in HTML using the scheme &#2022; for character number 2022 (in decimal). But they are widely rendered and usable as characters, in browsers and editors. The first 128 chars are just ASCII. A particular aspect of usability is that you can use them in blog comments where most fancy html is not allowed.
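If you prefer to generate the numeric form rather than cut and paste, a couple of lines of Javascript will do it (just an illustration, not the gadget below):

```javascript
// Character <-> HTML numeric reference, using the decimal code point.
const toEntity = ch => '&#' + ch.codePointAt(0) + ';';
const fromCode = n => String.fromCodePoint(n);

console.log(toEntity('∫'));    // "&#8747;"
console.log(fromCode(8747));   // "∫"
```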

I find them very useful for maths, chemical formulæ etc. You can use them where latex is unavailable, and there is much less overhead than latex. Here are some examples:
Ca⁺⁺+CO₃⁻⁻⇌CaCO₃₍ₛ₎
∫₀¹uⁱ⁻¹(1-u)ⁿ⁻¹du = β(i,n) = Γ(i)Γ(n)/Γ(i+n)

You can cut and paste from a char map as in Windows, or write the long form HTML as above. But that is tedious, as they come in lists of thousands, many for all the various language scripts. So I've collected a manageable table which I think has most that I'll ever need, and added an editable phrase generator, below the jump.

Thursday, September 8, 2016

Big rise (0.21°C) in surface temperature in August

Surprising, but TempLS mesh is so far an outlier here. The TempLS mesh global anomaly rose from 0.65°C in July to 0.86°C in August (base 1961-90). This is a change after a period of slow decline, and is almost back to the level of last January. TempLS grid showed a much smaller rise of 0.043°C. These results are consistent with the NCEP/NCAR index (up 0.02°C). The satellite measures varied; UAH6 LT was up 0.05°C, but RSS down just 0.01°C. The reason for the discrepancy seems to be the big variation in Antarctica, which is variably seen and weighted.

The spherical harmonics map is here:



The regional temperature variations are similar to those in the NCEP/NCAR report, though the warmth in Russia is more pronounced. In the breakdown plot, you can see the big contribution from the change in Antarctica. SST also rose significantly.



On this basis, I would expect a rise of at least 0.1°C in GISS, with maybe smaller rises in NOAA and HADCRUT (less affected by Antarctica). In other news, JAXA sea ice extent had a late melt rush and dropped below 2007, to be second only to 2012. It won't catch 2012, and the main marker remaining is whether it will drop below 4 million sq km. It is 4.023 now. NSIDC is also in second place.



Tuesday, September 6, 2016

More on climate feedback

Recently I posted on Climate feedbacks and circuits. It was in the context of a series of articles by Lord Monckton at WUWT. The articles have continued, with more promised. Willis has joined in, though they are not always on the same page. And I've been commenting there.

My last post was in response to much talk of positive feedbacks and instability. I showed an active circuit (borrowed from Bernie Hutchins) which was stable and would emulate the usual equation of feedback in the context of climate sensitivity:
ΔTeq = ΔF/(1/λ₀ − Σcᵢ)
My main observation is that this is just like Ohm's Law, with the feedback coefficients c as conductances in parallel. The need to resort to active feedback comes from the fact that some c's, corresponding to positive feedback, may be negative. But the basic simplifying idea of adding conductances still applies.
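To make the parallel-conductance reading concrete, here is a toy Javascript evaluation of that equation. The numbers are placeholders for illustration only, not values from any of the papers discussed:

```javascript
// dTeq = dF / (1/lambda0 - sum(c_i)). In the circuit picture the denominator is a net
// conductance (W/m^2/K): the Planck term 1/lambda0 and the -c_i terms simply add.
function equilibriumDeltaT(dF, lambda0, feedbacks) {
  const conductance = 1 / lambda0 - feedbacks.reduce((s, c) => s + c, 0);
  return dF / conductance;
}

// Placeholder numbers: forcing 3.7 W/m^2, Planck conductance 3.2 W/m^2/K,
// a single feedback term of 1.2 W/m^2/K reducing the net conductance.
console.log(equilibriumDeltaT(3.7, 1 / 3.2, [1.2]).toFixed(2));   // "1.85" K
```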

In a much-cited review paper, Roe 2009, Roe argues that feedbacks should be seen as Taylor series in disguise. I think he could have said just the chain rule, since he uses only first order. But I want to develop this, because it is the basis for the method used in another much-cited (including in AR4) paper, by Soden and Held, 2006. Lord M's latest post is based on the claim that S&H have made an error, but he really has no idea of what they did. So I'll try to say something about that here.

Saturday, September 3, 2016

NCEP/NCAR global up slightly

For the second month running, the Moyhu NCEP/NCAR index rose slightly, from 0.414°C to 0.428°C. Temperatures were warm mid-month, but dipped at start and finish. As usual, the reanalysis shows local variation in Antarctica, where data is patchy, but overall it seems warmth predominates. Otherwise few major features - East Europe was warm. There seems to be a "pause" in cooling, which increases the likelihood of record heat in 2016.

In other news, there has been a late spurt in JAXA sea ice melt, which seems likely to take the 2016 minimum past 2007, leaving it second only to 2012. NSIDC says similar.

And OT, but the Blogger platform has been a bit shaky lately. I think the only thing affecting users at the moment is that on Chrome the blogroll is updated but not always  kept in time order. Firefox seems OK. I'm reluctant to intervene, since I don't have a good javascript diagnostic tool on Chrome.