Author Archive


Simplicity Discussion

In Class matters on May 4, 2010 by Jenny

While Forster’s “The New Science of Simplicity” is a nice introduction to using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to adjudicate the trade-off between parsimony and goodness of fit in model selection, it won’t be the primary focus of tomorrow’s discussion. For tomorrow, I want us to look really closely at Sober’s main argument in “Let’s Razor Ockham’s Razor.” There’s a lot of good stuff in there.
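For reference, here are the standard textbook forms of the two criteria, where k is the number of adjustable parameters, n is the number of data points, and L̂ is the maximized likelihood (lower scores are better; Forster’s own presentation may differ in detail):

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln(n) - 2\ln\hat{L}
```

In each case the first term penalizes complexity (parsimony) and the second rewards goodness of fit; because BIC’s penalty grows with sample size, it leans harder toward simpler models for all but very small data sets.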


Odd Man Out

In Uncategorized on April 20, 2010 by Jenny

This may be a pretty naive question, so bear with me.

Yasha writes that the three-person Odd Man Out game is a better model of coalitions than the Prisoner’s Dilemma or the Stag Hunt, (partly) because it can capture the coalition-forming behavior of chimps. But I wonder whether Odd Man Out can even do that.

In chimpanzee societies, the alpha male has the best reproductive success, so all the guy chimps want to be him. To get an edge on the competition, some male chimps form agreements with other males to team up and beat out the rest. When one coalition is the only one left standing, its two members duke it out. The victor becomes the alpha male. The other guy is beta.

But because Beta wants to be alpha, he forms a coalition with another male chimp, and together they overthrow Alpha. Then the other chimp becomes New Beta and Beta becomes New Alpha. Unfortunately, New Beta then wants to be the alpha (so he can get girls), so he overthrows New Alpha with the help of another friend, and the process continues indefinitely (or until one chimp is beaten to death; gruesome).

In the chimp scenario I’ve outlined, there are more than three players (Alpha, New Alpha, New Beta, and soon-to-be Newest Beta). But Odd Man Out can only model three-player games (as far as I know). So why would we think that Odd Man Out can explain such behavior? Is it because only three players are being modeled at any given time? But we could easily model four. We could model soon-to-be Newest Beta trying to convince New Beta to help New Alpha overthrow Alpha so that New Beta can be the next alpha. Why would we stop at three players? Is the reason for stopping at three any better than Skyrms’s and Binmore’s reasons for stopping at two?
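To make the worry concrete, here is a minimal sketch of the three-player dynamic, assuming the simplest payoff structure I can think of (any two players split a prize of 1 and the odd man out gets 0; this is my stipulation, not necessarily the exact model Yasha discusses):

```python
# Three-player Odd Man Out, toy version: any two players may form a
# coalition and split a prize of 1; the excluded player gets 0, so he
# always has an incentive to outbid one of the incumbents. The payoffs
# and the "outbid" rule are illustrative assumptions.
PRIZE = 1.0
PLAYERS = {"Alpha", "Beta", "Gamma"}

def outbid(coalition, odd_man, shares):
    """The odd man out lures the junior partner with a slightly better share."""
    junior = min(coalition, key=lambda p: shares[p])
    offer = min(shares[junior] + 0.1, PRIZE)   # sweeten the deal a bit
    return (odd_man, junior), {odd_man: PRIZE - offer, junior: offer}

coalition, shares = ("Alpha", "Beta"), {"Alpha": 0.6, "Beta": 0.4}
for round_number in range(6):
    odd_man = (PLAYERS - set(coalition)).pop()
    print(f"round {round_number}: coalition {coalition}, odd man out {odd_man}")
    coalition, shares = outbid(coalition, odd_man, shares)
```

Run it and no coalition ever survives a round: whoever is excluded can always make one incumbent a better offer, which mirrors the endless overthrow cycle above. But notice that nothing in the sketch stops us from adding a fourth player, which is exactly my question.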

Thoughts?


Lewontin’s argument for why genes are not buckets

In Uncategorized on April 12, 2010 by Jenny

Lewontin argues that it is mistaken to characterize genes as determining one’s capacity.  But I’m confused as to why this is so; maybe someone can help me out.

He writes, “In a trivial sense every genotype must indeed have a maximum possible metabolic rate, growth rate, activity, or mental acuity in some environment, but, as we have just seen from the actual experimental data on reaction norms, the environment in which that maximum is realized is different for each genotype.  Moreover, the ordering of genotypes from ‘restricted’ to ‘enriched’ will change from genotype to genotype.  Obviously there will be some environments that will be lethal or severely debilitating for any conceivable genotype, but these are irrelevant to the problem” (28).
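To make the quoted point concrete, here is a toy illustration of crossing reaction norms; the genotypes, environments, and numbers are all invented:

```python
# Toy reaction norms for two hypothetical genotypes; all values invented.
# phenotype[genotype][environment] could be, e.g., a growth rate.
phenotype = {
    "G1": {"poor": 4, "medium": 9, "rich": 6},   # G1 peaks in "medium"
    "G2": {"poor": 7, "medium": 5, "rich": 10},  # G2 peaks in "rich"
}
for genotype, norm in phenotype.items():
    best_env = max(norm, key=norm.get)
    print(f"{genotype} reaches its maximum ({norm[best_env]}) in the {best_env} environment")
# The ranking of the genotypes also flips across environments:
# G2 beats G1 in "poor" and "rich", but G1 beats G2 in "medium".
```

So each genotype does have a maximum, but the maxima occur in different environments, and which genotype does better flips as the environment varies.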

I don’t understand why the fact that the environment in which the maximum is realized differs for each genotype supports the idea that genes are not buckets. Nor do I see why the changing ordering of genotypes supports that conclusion. Can anyone who understands this argument explain it to me?


Change-relating invariance

In Uncategorized on April 8, 2010 by Jenny

Here’s a question that Yasha and I were talking about yesterday that, as far as I know, neither of us has resolved.

Walsh writes that citing a change-relating invariance “discharges the metaphysical function of a good explanation.” But why should we think that citing a change-relating invariance tells us anything metaphysical, especially when the subject matter is statistical concepts, whose referents may not actually exist?

Ideas?

Jenny


A basic confusion

In Uncategorized on March 23, 2010 by Jenny

I have a very basic question about Lewis’s seminal “Counterfactuals” that I hope can be cleared up rather easily.

Lewis says that A □→ C is nonvacuously true if and only if C is true at all the closest A-worlds. But how do we determine which are the closest A-worlds? More broadly, how do we determine whether one world is closer to the actual world than another? I know that sameness of laws is important, check! But what else?
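For reference, here is how the truth condition is usually put in symbols (a standard reconstruction, not a quotation from Lewis), where ≤_w is the similarity ordering of worlds centered on the actual world w:

```latex
A \mathrel{\Box\!\!\rightarrow} C \text{ is true at } w \iff
\text{no } A\text{-world exists, or some } (A \land C)\text{-world is closer to } w
\text{ (by } \leq_w \text{) than any } (A \land \neg C)\text{-world}
```

With the limit assumption (that closest A-worlds exist), this reduces to the formulation above: C holds at every closest A-world. Either way, all of the work is done by the ordering ≤_w, which is just what my question is about.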

And need we be very precise in determining which worlds are closer than others? Or, for the purposes of counterfactual analysis, do we just have to get things roughly right?

Thanks for your help,

Jenny


Returning to Robustness

In Discussion on March 7, 2010 by Jenny

The article from Orzack and Sober that we read this week returns to a lot of questions that were never answered in our first discussion of robustness. That is: why should we think that robustness analysis works?

Orzack and Sober deliver (at least) two hearty blows to Levins’s account of robustness:

1. “If we know that each of the models is false (each is a ‘lie’), then it is unclear why the fact that R [i.e. the robust theorem] is implied by all of them is evidence that R is true” (538).

2. For a robustness analysis to work, the models need to be independent. Otherwise, the ‘robustness’ we find may simply reflect a commonality in the models’ frameworks rather than a truth about whatever the frameworks describe. But the models cannot be logically independent, nor does it make sense to talk about their statistical independence (539-40). So robustness analyses do not work. (The toy sketch below makes this worry concrete.)
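The three growth models and the candidate “robust theorem” R in the sketch are invented for illustration; they are not Levins’s actual examples:

```python
import math

# Three toy population-growth maps, each idealized ("false") in its own way.
def logistic(n, r=1.5, K=100):
    return n + r * n * (1 - n / K)

def ricker(n, r=1.5, K=100):
    return n * math.exp(r * (1 - n / K))

def beverton_holt(n, r=1.5, K=100):
    return (1 + r) * n / (1 + r * n / K)

# Candidate robust theorem R: density dependence keeps trajectories bounded.
def implies_R(model, n0=10, steps=200, bound=1000):
    n = n0
    for _ in range(steps):
        n = model(n)
        if n > bound:
            return False
    return True

print("R holds in every model:", all(implies_R(m) for m in (logistic, ricker, beverton_holt)))
```

The sketch also makes the second blow vivid: all three models share the same density-dependent framework, so their agreement on R may tell us more about that shared framework than about any real population.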

Then the question is: why should we still think robustness analysis has any confirmational or explanatory power, given these criticisms?

Thoughts?


Using Weisberg’s Framework to Evaluate Optimality Models

In Discussion on February 28, 2010 by Jenny

In “Optimality Modeling and Explanatory Generality,” Potochnik argues that even if we had perfect information about the genetics of a species, we would still have reason to model natural selection with an optimality approach rather than with models that incorporate that genetic information. Her reason for claiming that the optimality approach is useful regardless of our access to genetic information is that it tells us about the “fitness-conferring interactions between organisms and their environment” in a way that more complex models cannot. And in certain contexts, explanations that capture such fitness-conferring interactions make for better scientific explanations than ones that do not.
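To see what the optimality approach abstracts away, here is a toy optimality model; the trait (daily foraging time), the fitness function, and the numbers are all invented:

```python
import math

# A toy optimality model: a phenotype is evaluated directly by the fitness
# it confers in an environment; no genetic detail appears anywhere.
def fitness(foraging_hours, predation_risk=0.15):
    energy = math.log(1 + foraging_hours)        # diminishing energetic returns
    mortality = predation_risk * foraging_hours  # linear cost of exposure
    return energy - mortality

# The model's prediction: selection settles on the trait value that
# maximizes fitness over the feasible range (0 to 24 hours).
optimum = max((h / 10 for h in range(241)), key=fitness)
print(f"predicted foraging time: about {optimum:.1f} hours/day")
```

Genetics never enters: the model’s whole content is the fitness-conferring relation between the trait and the environment, which is just the feature Potochnik highlights.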

While I won’t go through Potochnik’s argument for why this is so, I think it is interesting to note that her argument seems to be easily framable in terms of Weisberg’s categories of idealization.  Here’s how it would go in Weisbergianese:

Lewontin (and others) think that the optimality approach to modeling natural selection is a Galilean idealization: it introduces simplifying distortions to gain computational traction, distortions that will later be de-idealized away as we learn more about the target phenomenon. For Lewontin, as we gain more genetic information about the target phenomenon, we will be able to correct many of the distortions we make in optimality models, until we eventually dispense with such models altogether.

Potochnik counters that the optimality approach shouldn’t be understood as a Galilean idealization but as a minimalist idealization, in which the goal is to include in our model only the primary causal factors that give rise to the explanandum. Thus, even as our understanding of genetics progresses, the model won’t change or be obviated.

If this recasting of Potochnik’s argument is correct, then we would expect the representational ideals of the two forms of idealization to differ. And indeed, we see that this is so: Lewontin’s preferred form of explanation (the Galilean idealization) aims for the representational ideal of completeness, while Potochnik’s preferred form (the minimalist one) aims at describing the primary causal factors, given certain fidelity rules (which say how precise our explanation has to be).

If Weisberg’s idealizations apply, we would also expect the two forms of explanation to be non-competing, in that they account for different phenomena. Again, this is so. As Potochnik concludes, the two models of explanation have different explananda: the optimality approach explains “long-term phenotypic evolution by natural selection with a particular interest in the fitness effects of organism-environment interactions,” while models that take into account lots of genetic information do not.

So what I want to know is whether anything is lost in translating Potochnik’s framework into Weisberg’s idealizations. Can everything she explained about the optimality approach be explained in terms of Weisberg’s framework? I suspect it cannot, but I don’t really have a good reason why this might be so. My hope is that in demonstrating where Potochnik’s framework comes apart from Weisberg’s, we can gain some traction on which framework better captures the debates between optimality modelers and their opponents.