Gravity's Ghost and Big Dog: Scientific Discovery and Social Analysis in the Twenty-First Century

Harry Collins

Print publication date: 2013

Print ISBN-13: 9780226052298

Published to Chicago Scholarship Online: May 2014

DOI: 10.7208/chicago/9780226052328.001.0001


Appendix 4 A Sociologist Tries to Do Some Physics


The technical physics/philosophical arguments that supported my attempt to intervene in the little dogs debate went as follows.1 Forgetting about gravitational waves having "fingerprints," no one can know by means other than statistical inference whether a zero-delay coincidence is a real signal or a lucky chance. Therefore, one should redescribe the search for gravitational waves as a search for "zero-delay coincidences." The scientists, though they describe themselves as searching for gravitational waves, have no choice but to search for coincidences because, at best, that is what they see. That they are searching for coincidences is not affected by the fact that under the right circumstances, they will later describe zero-delay coincidences as "observations of" or "evidence for" gravitational waves.

Each zero-delay coincidence has, as I saw it, a certain "scientific value" made up of two components. The first component has to do with the quality of the coincidence as gauged by whether the devices are in a good, quiet state when the coincidence is found, whether the environmental monitors have picked up anything that would cause the setting of a warning flag, whether the waveform is plausible, and so on. The second component relates to the offset coincidences. Scientific value is reduced every time an offset coincidence is found in the time slides. Find a real-time coincidence along with few or no offset coincidences and confidence is much higher than if lots of offset, and therefore noise-generated, coincidences of similar size are found in the background generated by time slides. Under this philosophy one never asks if this zero-delay coincidence is a gravitational wave, because one accepts that one cannot know; one simply asks what the "scientific value" of "this" zero-delay coincidence is.2

Imagine that there are lots of zero-delay coincidences in the data streams and hardly any noise so that, in the absence of data-quality flags or other reasons for vetoing the coincidences, they would have high scientific value. Such a situation can be represented schematically as in the top half of figure 26, which shows “ribbons” from two detectors, A and B, each showing excursions. Here there are five zero-delay coincidences and a couple of noise excursions that are not opposite each other.

To make it easier to visualize and talk about, the bottom half of the figure shows a second version of A and B with the zero-delay coincidences marked as exclamation marks. It must be understood that, barring "fingerprints," which did not feature at this stage of the argument, there is no knowable difference in substance between an exclamation mark and a solid bar. In other words, the only thing an exclamation mark indicates is what might be called "coincidenceness." I'll use the exclamation-mark version in the rest of the argument.

Figure 26. Lots of real coincidences and two bits of noise

Imagine, now, that one carries out the maximum number of time slides with the ribbons arranged as a continuous loop. If the zero-delay coincidences are left in, the number of offset coincidences is 36 − 5 = 31. This is because there are six components in each data stream (five coincident excursions plus one noise excursion), and they can combine in 6 × 6 = 36 different ways to form coincidences, but five of those combinations occur at zero delay and must be subtracted once the exercise is completed.

If, on the other hand, the zero-delay components—the exclamation marks—are removed at the outset, there will be only a single offset coincidence found in the time slides. So in the first case the zero-delay coincidences would be devalued by the existence of thirty-one offset coincidences and in the second case they would be devalued by the presence of only one offset coincidence. It is obvious that the right answer is the second one; there are lots of zero-delay coincidences and their value is hardly diminished at all by the fact that there is one offset coincidence in the data streams. This has nothing to do with gravitational waves—it is simply about the logic of hunting for coincidences with "scientific value" and working out that scientific value by doing time slides and counting offset coincidences.

The next step is an "argument from induction." Remove one of the zero-delay coincidences from figure 26. The numbers change a bit but the argument does not. Then remove another and the argument still remains unchanged. Continue this way until there is only one zero-delay coincidence left and the argument still remains unchanged. Add some more offset noise excursions and the argument still remains unchanged. At no point is there a sudden change in what is going on—nowhere does quantity change to quality. If this argument is correct, the right way to do the analysis must be to remove the components of the zero-delay coincidences before doing the time slides whether there are lots of zero-delay coincidences or only one. The argument, if it is right, works because it looks only at the logic of coincidence-hunting and does not slip into the language of real signals versus noise; it works because exclamation marks and solid bars are treated as identical.
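To make the counting and the inductive step concrete, here is a minimal sketch in Python of the loop-and-slide bookkeeping. The loop length of twelve and the particular excursion positions are my own illustrative choices; nothing here belongs to the real detectors or to the collaboration's pipeline.

```python
# A toy model of the ribbon diagram: excursion positions on a loop of length 12.
# Positions 0-4 carry the five zero-delay coincidences; position 6 in ribbon A and
# position 9 in ribbon B are the two noise excursions that are not opposite each other.
LOOP = 12

def offset_coincidences(a, b, loop=LOOP):
    """Count coincidences found over all nonzero circular time slides."""
    count = 0
    for shift in range(1, loop):                        # every offset except zero delay
        shifted_b = {(pos + shift) % loop for pos in b}
        count += len(a & shifted_b)                     # pairs that line up at this offset
    return count

A = {0, 1, 2, 3, 4, 6}    # five coincident excursions plus one noise excursion
B = {0, 1, 2, 3, 4, 9}

print(offset_coincidences(A, B))        # zero-delay coincidences left in: 36 - 5 = 31
print(offset_coincidences({6}, {9}))    # exclamation marks removed first: just 1

# The inductive step: peel off the zero-delay coincidences one at a time. The count
# obtained with them left in changes, but the count obtained by removing them first
# stays at 1 all the way down to the single-coincidence case.
for k in range(5, 0, -1):
    A_k = set(range(k)) | {6}
    B_k = set(range(k)) | {9}
    print(k, offset_coincidences(A_k, B_k), offset_coincidences({6}, {9}))
```

The first two printed numbers reproduce the 31 and the 1 discussed above; the loop shows that removing the exclamation marks first gives the same answer however many of them there happen to be.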

In the six months before the envelope was opened I could get only one physicist respondent to take these ideas seriously.3 That physicist agreed in principle with the point that we should not be asking whether the signal was real or not but merely working with the coincidences that we had in front of us. But, like Dogwood (see note 1), the respondent had a very different method, based on Bayes’s theorem, for proceeding from there.

After the envelope was opened and the debate about Big Dog was over, one respondent who found time to look back at my argument questioned the "inductive" part of the procedure—where I remove coincidences one by one in order to work out what to do in the one-coincidence case by inference from the many-coincidence case of figure 26. This respondent argued that though there was no sudden change as exclamation marks were removed, there was still a big difference between the beginning and the end point of the inductive procedure because of the large effect of a single extra offset coincidence in the actual, one-coincidence, case, as compared to the hypothetical, many-coincidence case of figure 26. This may well be a good argument, but the point is that it was an argument, and to have an argument one must start with something that is worth arguing against rather than simply dismissing. So, right or wrong, it was encouraging.

A lengthier discussion of my argument about little dogs was to take place on 12 July 2011. It coincided with a big gravitational-wave conference held at my home university, Cardiff.4 I was invited to give a public lecture on my work at the meeting. My talk was attended by many of the gravitational-wave scientists and I was able to persuade two of the most senior persons in the gravitational-wave community to come to my office on the following day to discuss the little dog business. For a couple of hours we argued the matter back and forth using PowerPoint slides. My conclusion was, once more, that my approach may not have been the best possible but was not unreasonable. The basis of this claim was the good-natured tone of our discussion, the fact that the meeting continued into the evening as a social occasion, and an e-mail I wrote the following day, which was not met with a rebuttal.5

Dear [R] and [S],

Thank you very much for yesterday’s talk—it was very interesting and useful to me, not to mention very enjoyable. Sorry, S, that you could not make dinner.

S, I attach the newly published paper I mentioned about interactional expertise.

I thought I would try to sum up what I thought came out of the discussion to see if you agree this is fair.

First, in an earlier e-mail to me, R, you say, among other things:

I believe that the problem here is that you have misunderstood (or poorly described) what we are trying to do. We are trying to discriminate between accidental coincidences of noise events which do not have a common origin and coincidences which do have a common origin, e.g., a gravitational wave. Your argument, that we are looking for coincidences, doesn't make this distinction clear, and will confuse readers. Turn one of the coincidences [in figure 26] into a coincidence between two noise glitches i.e., one pair of !'s into a pair of |'s and then see whether your reasoning still holds.

I hope that yesterday's discussion will have convinced you that any mistakes on my part do not emerge out of any lack of understanding of this crude nature. Certainly, nothing said yesterday indicated to me that I did not understand at this level.6

On the matter of time slides etc. I thought I learned two things that I had not grasped or fully grasped before:

  1) I had not fully grasped your "eliminate one coincidence at a time" strategy and this was because, as you pointed out, there was no strength dimension in my ribbon diagram. I now see that if the coincidences in my diagram had been ordered in terms of their strength then a strategy that started with the strongest one and based an estimate of its sigma value on time slides containing only noncoincident glitches that were as strong or stronger would be a reasonable one; it would do no worse in terms of risking a false negative than the Big Dog case where there was only one potential signal. I still can't work out whether this strategy would work if the components of the "strongest signal" were significantly asymmetric so that you have to include lots of weak glitches—including the components of other coincidences—but I can see that your approach is correct in principle (unless nature is capricious enough to give you a lot of coincidences of nearly equal strength). If you have any further thoughts on the asymmetry problem please let me know. Anyway, I'll try to explain this in the book.

  2) Completely new to me was the suggestion that in a case of multiple coincidences (e.g., like the 5 on my ribbon diagram) the background one would look for in the time slides would be accidental sets of five coincidences rather than single coincidences. This seems to me to make a lot of sense—though, again, I would guess there is a lot of devil in the detail. Once more I will try to explain this in the book.

  3) My overall conclusion from 1 and 2 above is that (a) my argument in the book does not lead to the best approach in this matter and that, unsurprisingly, you are well ahead of me in thinking out this stuff but (b) the arguments I put forward were not completely stupid and trying to think it through independently has been a worthwhile exercise in a number of ways, not least because it resulted in a clearer understanding and my gaining more knowledge of the thinking in the field …7

That is roughly how I think yesterday's discussion will affect the book so, if you still have time and patience for this, let me know if you think that is reasonable or where you think it is not.

Cheers

Harry

To explain, under what I refer to as point 1, above, what R pointed out to me was that all the coincidences in my figure were of equal magnitude. He suggested that in the normal way they would vary in magnitude. If this were the case one could proceed by taking the biggest one first and running time slides with the equivalent of the little dogs left in. This way, the biggest one would not produce a spurious signal when combined with the other components of the coincident signals because they would not be large enough. If the biggest coincidence was found to be a signal it could then be eliminated and the next biggest could be analyzed in the same way, and so on. I had not thought of that way of proceeding and had to admit that it made good sense. On the other hand, my response, which I offered at the time of the discussion, was to ask what would happen if the two components of the largest coincidence (as measured by signal-to-noise ratio) were of different sizes. In that case, when the large one was "run against" the opposite ribbon, spurious signals with equal overall signal-to-noise ratio would be generated by coincidences with components equal in size to the smaller component of the original. This would make the whole procedure less reasonable. And it was the case that Big Dog itself was made up of components of very different sizes—one being between 1.5 and 3 times as big as the other.8 Under such circumstances it does not seem to me that choosing the biggest signal to eliminate first works as well as it appears to at first sight. Readers should be getting the sense of the extent to which I was trying to go beyond interactional expertise into the realms of contributory expertise.
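To illustrate the "biggest one first" strategy and the asymmetry worry just described, here is a small Python sketch with invented numbers. The quadrature-sum measure of combined strength and all of the glitch sizes are my own illustrative assumptions, not the collaboration's actual procedure.

```python
import math

def combined(snr_a, snr_b):
    """Overall strength of a coincidence, taken here (for illustration) in quadrature."""
    return math.sqrt(snr_a**2 + snr_b**2)

# R's suggestion: rank the zero-delay coincidences by strength and, when estimating the
# background for the loudest one, count only offset coincidences at least as strong.
zero_delay = [(8.0, 8.0), (5.0, 5.0), (4.5, 4.0), (3.5, 4.0), (3.0, 3.0)]
loudest = max(zero_delay, key=lambda pair: combined(*pair))
print(loudest, round(combined(*loudest), 2))   # (8.0, 8.0) with combined strength 11.31

# With a symmetric loudest coincidence, sliding its components against the other ribbon
# pairs them only with much weaker glitches, so few offset coincidences come close to
# its combined strength and the biggest-first strategy behaves as described.

# With an asymmetric loudest coincidence, say (12, 4), the strong component can pair
# with quite modest glitches (including the components of other zero-delay coincidences)
# and still give an offset coincidence of comparable combined strength, which is the
# worry raised in the text.
print(round(combined(12.0, 4.0), 2))   # the asymmetric candidate itself: 12.65
print(round(combined(12.0, 4.5), 2))   # its strong component paired with a 4.5 glitch: 12.82
```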

Under point 2, R suggested that the right way to analyze data of the kind I had invented was not to look at it one coincidence at a time, but to consider the likelihood of finding spurious sets of five coincidences at a time. This too was a new idea to me and seemed a good way to do the analysis; at first sight it seems to resolve the little dog problem for large numbers of coincidences and, using the same kind of inductive inference that I used, resolves the case of a single coincidence in favor of leaving the putative signal in. This was the first I had heard of these notions, though S intimated that some of them had been discussed in the streams of e-mails that went backward and forward and promised to forward the relevant mailings to me; unfortunately they did not come.
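As a rough sketch of what looking for accidental sets of five might amount to, the following toy simulation counts, slide by slide, how often five or more accidental coincidences turn up together. The Poisson model and the assumed rate of accidental coincidences per slide are my own toy assumptions, not figures from the analysis.

```python
import math
import random

random.seed(0)
N_SLIDES = 1000
RATE = 1.5     # assumed mean number of accidental coincidences per slide (toy value)

def poisson(mu):
    """Simple Poisson sampler (Knuth's method), so the sketch needs only the stdlib."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= limit:
            return k - 1

counts = [poisson(RATE) for _ in range(N_SLIDES)]
any_coincidence = sum(c >= 1 for c in counts) / N_SLIDES   # slides with at least one accidental
set_of_five = sum(c >= 5 for c in counts) / N_SLIDES       # slides with an accidental set of five
print(any_coincidence, set_of_five)   # sets of five are far rarer than single accidentals
```

The point, as I understand the suggestion, is that the appropriate background for a five-coincidence foreground is the second, much smaller, number rather than the first.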

So, we can see that I am simply not as good as R at thinking out solutions to these problems, but my approach was sensible enough to elicit new (or at least new to me, who had been listening hard for six months) solutions. The point of this detailed exposition is to establish something claimed in chapter 17—that my approach may not have been the best but it was not crazy.

Notes:

(1) . Some more minor arguments of my own are found in the main text. For instance, I work out that sigma is unaffected by increasing the run length unless the little dogs are removed, and this, I argue, suggests, in a philosophical sort of way, that they ought to be removed, leaving it possible, under certain assumptions, to improve the limit on the FAP.

I had an additional argument that, for economy's sake, I relegate to this footnote: In the case of something like the Big Dog, however the time-slide analysis is handled, it is going to be concluded that there is very little chance that the signal was anything other than a gravitational wave. Let us say that at the very worst it is concluded that there is something like one chance in a hundred that it is not a gravitational wave. Yet if it is a gravitational wave then it ought to be removed from the data stream before the time-slide analysis is done. In terms of the tradition of physics, one in a hundred is not good enough to license a discovery announcement but it indicates the extent to which the components of the coincidence ought to be discounted when the time-slide analysis is done—the effect of any little dogs ought to be divided by 100. In effect, this means throwing them out. This is not the most conservative assumption but it isn't bad. It might be that this last argument is not so different from this remark, made by Dogwood in an e-mail of 26 October 2010, but I am not really sure.

A really correct treatment would use Bayesian statistics to evaluate the odds ratio of any given coincidence being signal vs. background, but we don’t have the mechanics to do that. Bayesian statistics would take into account the estimated prior probability of the Big Dog being a signal.
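A minimal numerical sketch of the divide-by-100 idea above, using toy counts of my own rather than anything from the actual Big Dog analysis:

```python
# Weight any background coincidence that involves the candidate's own components by the
# worst-case chance that the candidate is mere noise; ordinary noise-noise coincidences
# keep their full weight. All numbers here are invented for illustration.
p_not_signal = 0.01        # at very worst, one chance in a hundred that it is not a wave
little_dog_count = 31      # offset coincidences built from the candidate's components (toy value)
plain_noise_count = 1      # offset coincidences built purely from noise (toy value)

discounted_background = plain_noise_count + p_not_signal * little_dog_count
print(discounted_background)   # ~1.31: the little dogs are, in effect, thrown out
```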

(2) . I originally wrote up these ideas in pseudoscientific notation: if we call the number of offset coincidences "O," then SV = V / f(O); this translates as: SV, the scientific value of a zero-delay coincidence, is equal to its value as affected by environmental monitors, waveform, etc., divided by some "function" (f) of the number of offset coincidences. This was greeted with such visceral scorn by at least some of the physicists that this footnote is the last place we will see such a quasi-formula.

(3) . It would not be unreasonable to say that in trying to do physics I was acting, or at least opening myself up to being defined, as a “crank.” We can define a crank as someone who believes they have a special insight into the physical world even though they are not properly connected into the specialist oral and institutional culture. It is this culture—the paradigm, or form-of-life of the field—that sets out the boundaries of what counts as a proper problem and the boundaries of what can count as a proper solution to existing problems. Cranks go for problems and solutions that lie outside these boundaries.

(4) . 9th Edoardo Amaldi Conference on Gravitational Waves, Cardiff University, 10–15 July 2011.

(5) . These two scientists are frequently represented in the book, identified as anonymous trees, but here I’ll use letters instead of tree names.

(6) . Readers who follow my argument that there is nothing to be seen but coincidence will realize that it makes no sense to replace an exclamation mark with a solid bar.

(7) . A missing sentence here refers to the promise of one of the scientists to send me the e-mails, which didn’t arrive, where similar issues had been discussed before. There were another couple of paragraphs on other topics, which will be brought back under another heading.

(8) . In response to my query, Peter Saulson explained: “Signal to noise ratio is the usual way to compare signals in two interferometers. In the CBC search, we introduced a variant (NewSNR) to take account of whether the signal was a good match to our templates. The Combined NewSNR is what was reported in the paper. For the Big Dog, H1 had a NewSNR = 10.29, and L1 had a NewSNR = 6.29. I don’t know whether it makes more sense to talk about a ratio of these numbers, or a ratio of their squares, but those are the numbers to compare” (personal communication, 31 January 2012).