I have a couple of questions about W’s RA paper of the “I’m pretty sure I’m missing something” sort. So if anyone has ideas about what I’m missing, I’d greatly appreciate the help.

1. The key to W’s response to O&S seems to be the nature of step 3.

[T]he third step of robustness analysis involves interpreting the mathematical structures as descriptions of empirical phenomena. In the predation case, theorists have to decide how two coupled differential equations will explicitly map on to the properties of real or imagined predator-prey systems. (738)

But I’m not sure what this involves. The most natural way for me to read it is to say that the investigators are now deciding what the link is between the model descriptions and the models (and on to targets). But surely this has been done already when the initial model descriptions were generated.
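For concreteness, the “two coupled differential equations” in the predation case are presumably the Lotka-Volterra equations. A minimal sketch (parameter values made up for illustration, not from the paper) of what the step-3 interpretation has to supply might look like this, where the comments carry the mapping from mathematical structure to predator-prey properties:

```python
def lotka_volterra_step(x, y, a, b, c, d, dt):
    """One Euler step of the Lotka-Volterra system.

    The step-3 interpretation maps each symbol onto the target system:
      x -> prey abundance        a -> prey intrinsic growth rate
      y -> predator abundance    b -> predation rate
      c -> predator death rate   d -> predator births per prey consumed
    """
    dx = (a * x - b * x * y) * dt
    dy = (-c * y + d * x * y) * dt
    return x + dx, y + dy

def simulate(x0, y0, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.001, steps=20000):
    # Crude fixed-step integration; fine for a qualitative sketch.
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        x, y = lotka_volterra_step(x, y, a, b, c, d, dt)
        traj.append((x, y))
    return traj

traj = simulate(10.0, 5.0)
```

The point of the sketch: the equations themselves are just mathematics; nothing in them says that `x` is prey rather than, say, a chemical concentration. The docstring’s mapping is the interpretive work W assigns to step 3.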

When W explains how this answers O&S he says,

Standard issues in confirmation theory concern whether a particular kind of model, such as the logistic growth model, is confirmed by the available data. However there is a prior confirmation-theoretic question that is often asked only implicitly: If the population is growing logistically, can the mathematics of the logistic growth model adequately represent this growth. (740)

The way I naturally read this, such a confirmation is trivial. What is it for a population to grow logistically except that its growth can be described by a logistic function?
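To make the worry concrete, here is what “growing logistically” cashes out to mathematically (standard logistic model; the parameter values are illustrative):

```python
import math

def logistic(t, N0, r, K):
    """Closed-form solution of dN/dt = r*N*(1 - N/K).

    N0 -> initial population size
    r  -> intrinsic growth rate
    K  -> carrying capacity
    """
    return K / (1 + ((K - N0) / N0) * math.exp(-r * t))

# Growth rises and then saturates at the carrying capacity K:
N = [logistic(t, N0=10, r=0.5, K=1000) for t in range(0, 40)]
```

On the trivial reading, “the population grows logistically” just means its trajectory matches some curve of this family, so of course the logistic model can represent it; the question is what non-trivial content is left over.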

W goes on to say that confidence in a positive answer to the question above is bought by “demonstrating that the relevant mathematics could be deployed to make correct predictions” (740). What this means beyond “theories are constructed using the maths and predictions are compared to real-world events,” I don’t know. All of the above only seems to indicate that the models that go into the hopper for robustness analysis were developed in normal scientific (empirical) ways (and therefore RA is not non-empirical confirmation). But this is exactly what I take Levins’ response to be at the beginning of the paper (on 733), though W has tidied up what Levins’ response applies to, i.e. his more complex formulation of robust theorems.

2. My other confusion concerns the “two key questions” on page 739. W takes answering these as the key to ensuring that the antecedent of the conditional holds, and that all the *ceteri* are *paribi*. They are:

1. How frequently is the common structure instantiated in the relevant kind of system?

2. How equal do things have to be in order for the core structure to give rise to the robust property?

Answering the second is essentially the fourth step of analysis, so I’m not quite sure what it’s doing here. But that is a minor complaint.

W’s discussion of the first question seems to indicate that the greater the variety of models put into the hopper, the more likely it is that the robust property will be true of target systems, since it is more likely that the common causal structure will actually be present. But then he also says that “This would allow us to infer that when we observe the robust property in a real system, then it is likely that the core structure is present and that it is giving rise to the property” (739). Wouldn’t the conditional representation of robust theorems have to be a bi-conditional for this to be the case? He says that the question is best addressed empirically, so the “relevant kind of system” is indeed talking about target systems. So in applying this to the Volterra example, would it go like this: various pred-prey systems were examined, various models were developed that fit those systems, the models were all thrown into the hopper, and the Volterra property came out as the robust property with the given antecedent as the core structure? And if we studied a lot of systems in generating many models, then we can be more confident that we’ve really pared the core down to the parts that matter for the robust property. Is that what he’s saying?

In sum: I am confused. Thoughts?

I think that your analysis of the first question is right Josh.

To put it another way, my understanding is that RA merely examines models which themselves already have a low-level confirmation, implicit in their development and use as models which represent, to some degree, a given target system. The common structure (i.e. robust theorem/principle) that RA picks out from a series of models with a common conclusion is then not imbued with some new non-empirical confirmation, but is confirmed by virtue of its reliance on those models, which again are low-level confirmed. Models with low-level confirmation are, I think, the same as what you call models “developed in normal scientific (empirical) ways.” This may in fact be the same response as Levins gives to Orzack and Sober on pg 733, though to be honest I’m not sure exactly what he’s saying there. Weisberg does say that the problem with Levins is that he does not offer an adequate description of RA, and it may be that Weisberg believes Levins’ response is unclear precisely because Levins does not offer a specific enough mechanism by which RA is empirically confirmed, which Weisberg then provides.

I’m still thinking about the second question.

Josh said:

“[T]he third step of robustness analysis involves interpreting the mathematical structures as descriptions of empirical phenomena. In the predation case, theorists have to decide how two coupled differential equations will explicitly map on to the properties of real or imagined predator-prey systems. (738)

“But I’m not sure what this involves. The most natural way for me to read it is to say that the investigators are now deciding what the link is between the model descriptions and the models (and on to targets). But surely this has been done already when the initial model descriptions were generated.”

First off, I’m kinda hazy on what exactly a “model description” is. Here’s what I think is going on with that: According to my undergraduate phil sci course, a model is a language-independent semantic structure. So I think a model description is any syntactical representation of the model. Could be a computer program, a set of mathematical equations, a picture, or even a physical model. (And a model description could refer to multiple models, just as a model could have multiple descriptions.)

To answer your core question: A model doesn’t necessarily need to resemble anything in the real world – it could be vacuous. Until you actually try to apply it to the real world, a model could just be a fun game to play on a computer. In order to actually use a model, you have to figure out which bits of the real world are analogous to which pieces of the model. For example, in order to use a Darwinian model, you have to figure out which bits of the real world are heritable traits.

If a model description is what I think it is, then a model description doesn’t have to do this step of linking the model to the target system. A model description is just a particular set of syntactical tokens that have the semantic structure of the model (when interpreted using the language the description was written in).

Josh said:

“Standard issues in confirmation theory concern whether a particular kind of model, such as the logistic growth model, is confirmed by the available data. However there is a prior confirmation-theoretic question that is often asked only implicitly: If the population is growing logistically, can the mathematics of the logistic growth model adequately represent this growth. (740)

“The way I naturally read this, such a confirmation is trivial. What is it for a population to grow logistically except that its growth can be described by a logistic function?”

Remember that all mathematically correct models are tautologies. A correct model should look something vaguely like “if x, then y,” such that “if x, then y” is analytically true. The hard part of applying the model to the real world is figuring out whether x is true in the real world. If you don’t get y in the real world, and the ceteris paribus conditions hold, then that would tend to be a good sign that x isn’t the case in the target system.

X in this case is “the population is growing logistically.” Y is the result of the corresponding logistic function. “If x, then y” *is* trivial, but is x the case? That last part is decidedly non-trivial. Robustness analysis seems to be about figuring out how much x can vary while still yielding something qualitatively identical to y.
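One way to picture “how much x can vary while y persists”: feed structurally different growth models into the hopper and check that a qualitative property survives. A toy sketch (these three saturating-growth models are standard; choosing them as the hopper, and the parameter values, are my illustration, not Weisberg’s example):

```python
import math

# Three structurally different growth models. The candidate robust
# property y: "growth saturates at a carrying capacity K", despite
# each model embodying a different antecedent x.

def logistic(t, N0=10, r=0.5, K=1000):
    return K / (1 + ((K - N0) / N0) * math.exp(-r * t))

def gompertz(t, N0=10, r=0.5, K=1000):
    return K * math.exp(math.log(N0 / K) * math.exp(-r * t))

def richards(t, N0=10, r=0.5, K=1000, nu=2.0):
    # Richards (generalised logistic) growth curve.
    Q = (K / N0) ** nu - 1
    return K / (1 + Q * math.exp(-r * nu * t)) ** (1 / nu)

# Each model ends up near K, so the property is robust across the set:
finals = [m(60) for m in (logistic, gompertz, richards)]
```

The robustness-analysis move would then be to look for the common structure (here, density-dependent self-limitation) shared by every model that yields the property.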

Josh said:

“But then he also says that “This would allow us to infer that when we observe the robust property in a real system, then it is likely that the core structure is present and that it is giving rise to the property” (739). Wouldn’t the conditional representation of robust theorems have to be a bi-conditional for this to be the case? He says that the question is best addressed empirically, so the “relevant kind of system” is indeed talking about target systems.”

If you ignore the “likely,” then it looks like W is indeed affirming the consequent pretty hard. However, I’m pretty sure he means that this would be a confirming observation for the hypothesis that the core structure is present in the target system. (Think Bayes’ theorem.)
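On the Bayesian reading, observing the robust property raises the probability of the core-structure hypothesis without deductively entailing it, so no bi-conditional is needed. A toy calculation (all probabilities made up for illustration):

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' theorem: P(H | E) from P(H), P(E | H), P(E | not-H)."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / evidence

# H: the core structure is present in the target system.
# E: the robust property is observed in that system.
# The robust theorem makes E very likely given H; E is rarer without H.
p = posterior(prior=0.3, p_obs_given_h=0.95, p_obs_given_not_h=0.2)
# p is about 0.67: observing E confirms H (0.3 -> ~0.67) without proving it.
```

And the fewer the alternative ways of producing the property (a low P(E | not-H), which is what surveying a wide variety of models is supposed to establish), the stronger the confirmation.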

As to the first question, it is the third step that is key in answering the objection that RA is ‘non-empirical’.

I see the process as this: the scientist applies conjectures or stresses or whatever of a similar type to multiple models. These models all point to a similar answer, which is itself some form of model, possibly mathematical. That all these models pointed to the same lead (my word) is the product of robustness analysis; but, like any other theory or model, low-level confirmation is then carried out to see if it makes any sense. So I see robustness analysis as a lead-producing method that does not remove the need for viable empirical confirmation.

On question two:

I see the step-four process as testing the lead that is the potentially robust property. The two questions provided later (p. 739) are answered in the process of ‘discovering’ the robust property by doing whatever to existing models. Answering them helps give us confidence that the robust property will indeed be a viable lead to test in step four, and to verify as plausible with some reference to the real world.

Maybe my understanding is wrong, but I see robustness analysis as a lead-producing method, and the leads still need some confirmation. It is this confirmation that keeps the entire process empirical.

It looks like robustness analysis is not really a method of confirmation as such, more a way of tweaking a “theory” or meta-model.