Author Archive

Articles

Scientific Anarchism

In Discussion on April 30, 2010 by dcmzb8

Dan here. This post is way too long, but it’s as short as I can make it.

Thesis: We have *no* reason to think that *any* synthetic proposition is necessarily true, for any strong sense of necessary.

Beatty’s Evolutionary Contingency Thesis claims that there are no laws in biology, because it is possible to find exceptions to any distinctly biological generalization. When we find an exception to a law-like generalization, we can save it from falsification by modifying the antecedent of the generalization’s conditional so that it doesn’t apply in the case of the exception. Do this enough times, and you either have a conditional that belongs to a lower-level science, or an analytically true tautology. I don’t have a major problem with calling such a tautology a law, since I think this is merely a linguistic dispute. I would prefer to call such a tautology a model, not a law, but this is partly because I like the cool name for my position.

However, I think we might eventually be able to extend the ECT to the lower-level sciences. Take Hempel’s example of a law-like generalization: you can’t have a sphere of U-238 1 km in diameter (because the thing would explode!). Now consider the fine-tuning thesis. Our current physical theories contain about 16 free parameters – constants whose values we have no theoretical basis for setting, so they have to be obtained empirically. According to the fine-tuning thesis, stellar formation is possible only for a small range of the possible values of these free parameters. One possible explanation is that some process is creating a large (possibly infinite) number of universes, with the values of the free parameters assigned randomly. We just happen to be in one of the very few universes that can support observers; otherwise, we wouldn’t be around to observe anything. So, if we could explore other universes, we would quickly find exceptions to Hempel’s U-238 generalization.

So, it might be the case that, for *any* synthetic proposition in our current body of theory, we will eventually discover that this proposition can only be explained by a set of analytically true conditionals (a model) + the very contingent empirical proposition that the model applies in this case.

As for nomic necessity: on this hypothesis, all “laws” are contingent, and nomic “possibility” (which is defined in terms of holding the scientific laws constant) means holding constant, across possible worlds, the contingency base that makes a given set of laws true. The trouble with biology is that we have nomic “impossibilities” that are actual. At some point, this may be true for chemistry and physics as well. So we will have to bite one bullet or the other: accept that we can have actual nomic impossibilities, or let nomic necessity recede to the point that it resembles logical necessity.


Walsh’s Sure Thing Paper

In Uncategorized on April 16, 2010 by dcmzb8

A bit of a blast from the past here, but…

I agree with Walsh that population genetics doesn’t explain why individual organisms have the traits they do. For that, you’d have to follow the “developmental” history, including giving an account of why the organism’s parents had the genes they did, where the initial mutation came from, etc. However, Walsh is claiming that evolution isn’t providing a causal explanation at all, even at the population level. I disagree, and I think the arguments he has against the two-factor model fail. I would argue that evolutionary theory does provide a causal explanation at the population level, just not the individual level.

His most important argument should work against both the two-factor and the single-factor models of natural selection and genetic drift: the one involving Simpson’s Paradox and the Sure Thing Principle. I think it does work if what we’re trying to causally explain concerns individual organisms. However, the argument fails if the explanandum concerns *populations*.
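For readers who haven’t seen a Simpson case recently, here is a toy numerical illustration (my own invented numbers, not Walsh’s): a trait is associated with higher survival inside *each* subpopulation, yet with lower survival in the pooled population, because trait-bearers happen to be concentrated in the harsher subpopulation.

```python
# Invented illustration of Simpson's Paradox. Tuples give
# (survived_with_trait, total_with_trait, survived_without, total_without).
subpops = {
    "mild habitat": (9, 10, 80, 100),
    "harsh habitat": (30, 100, 2, 10),
}

# Within each subpopulation, the trait is associated with *higher* survival.
for name, (st, tt, sn, tn) in subpops.items():
    assert st / tt > sn / tn, name

# Pooling the subpopulations reverses the association.
pooled_trait = sum(v[0] for v in subpops.values()) / sum(v[1] for v in subpops.values())
pooled_plain = sum(v[2] for v in subpops.values()) / sum(v[3] for v in subpops.values())

print(round(pooled_trait, 3), round(pooled_plain, 3))  # 0.355 0.745 — reversed
```

Whether this kind of reversal undermines a causal reading of selection is exactly what’s at issue; the numbers only show the statistical pattern the argument trades on.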

I think the key premise of his argument is this: “Subpopulation size does have an influence on C, but this relation is constitutive, not causal.” (pg 20) This is true if we are talking about populations of individuals. But if we are comparing different sets of populations – populations of populations – then population size does have causal force. I personally think of population size, in the context of genetic drift, as something like a physical object’s mass: a larger population is more resistant to the spread of “new” genes, just as a larger mass is more resistant to a change in momentum from a given force. By divvying up the overall population into subpopulations, we’re comparing two different populations of populations – a population of large populations (with one member, the overall population) and a population of small populations (the subpopulations). The causal model involved is fairly straightforward, and it doesn’t resemble a Simpson case at all.

His other two arguments are more minor. Genetic drift may originally have been intended as an error term, but in quasi-indeterministic causal models, error terms are simply causal factors that the model doesn’t account for; if an error term behaves predictably, it should eventually be brought into the model. As for the modularity argument, I think Walsh may be misrepresenting modularity (does it mean you can intervene on one cause without interfering with the other, or that you can intervene on one cause while holding the other *fixed*?), but I can’t be sure without reading Woodward more closely.


Sober Ch 1 reading

In Uncategorized on January 20, 2010 by dcmzb8

A word of warning: The pages are slightly out of order – pages 4 and 5 come after page 17.

– Dan