Author Archive

Evolution and development: explanation and causation

In Uncategorized on April 17, 2010 by Lynn Chiu

I have a question about the explanatory aspect of projects that integrate evolutionary and developmental biology.

Mayr claimed that explanations of traits can be divided into the ultimate and the proximate. The ultimate explanation deals with the “why” question, which is answered by evolutionary biology, while the proximate explanation deals with the “how” question, which is answered by developmental biology.

For example, why is there a certain trait X? The ultimate answer would appeal to its survival value in the past, also taking into consideration historical contingencies such as the effects of random events, drift, etc. The proximate answer would show how the trait came to be from an embryo or other initial conditions; it would specify the causal stages that lead to the final form of the trait.

The distinction seems nicely contained until evo-devo or devo-evo theses come onto the scene.

Traditionally conceived, natural selection deals with end-products. Regardless of how a product is produced, reproductive variation in the products results in evolution by natural selection. On the other hand, development deals with the building process of these end-products. Regardless of whether the end-product ultimately survives, the building up of the end-product from its initial stage is development.

However, some synthetic theories argue that natural selection acts not only on the end-product but also on the developmental process, “while” it is developing. Therefore, not only is the end-product selected; the different intermediate stages might also be selected for their own properties. This is a case in which development is not black-boxed from evolution. Conversely, the developmental process of a trait itself becomes part of the selective forces on itself and on other traits; this is how the evolutionary process is not black-boxed from development.
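To make the contrast concrete, here is a toy sketch (purely my own illustration, not a model from any of the readings; every function name and number in it is made up). It compares a regime where fitness is assigned only to the adult end-product with one where intermediate developmental stages also contribute to viability.

```python
# Toy sketch: selection on end-products vs. selection on the developmental process.
# Everything here (names, numbers, the "development" function) is invented for illustration.
import random

random.seed(0)

def develop(genotype):
    """Run 'development': each gene contributes to one stage; return all stage values."""
    stages, value = [], 0.0
    for gene in genotype:
        value += gene
        stages.append(value)
    return stages                      # stages[-1] is the adult end-product

def fitness_end_only(stages):
    """Classical picture: selection 'sees' only the adult phenotype."""
    return max(0.0, stages[-1])

def fitness_whole_trajectory(stages):
    """Evo-devo picture: intermediate stages are also selected for their own properties."""
    viability = 1.0
    for s in stages[:-1]:
        viability *= min(1.0, max(0.0, s))     # each stage must be viable to reach adulthood
    return viability * max(0.0, stages[-1])

def evolve(fitness_fn, generations=50, pop_size=100):
    pop = [[random.uniform(0.0, 1.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness_fn(develop(ind)) + 1e-9 for ind in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        pop = [[g + random.gauss(0.0, 0.05) for g in parent] for parent in parents]
    return pop

# The two regimes can end up favouring different genotypes even when the adult
# phenotypes look alike, which is the sense in which development is no longer
# black-boxed from evolution (and vice versa).
end_selected = evolve(fitness_end_only)
trajectory_selected = evolve(fitness_whole_trajectory)
```

In the first regime only the final sum of gene effects matters; in the second, how that sum is reached matters too, so the developmental trajectory itself becomes a target of selection.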

If this is the case, the ultimate/proximate explanatory distinction breaks down. Every moment is simultaneously part of an evolutionary and a developmental process, and the explanation of a trait is then just a causal one that points out how traits change. One can only arbitrarily cut the causal chain into long-term and short-term durations to make the evolution/development distinction meaningful.

Thoughts?

Pre-class discussion: Metaphors

In Discussion on April 11, 2010 by Lynn Chiu

Hi all,

As a warm-up for our discussion on Tuesday, I’m opening a thread for pre-class discussions. This week we’re going to focus on two widespread metaphors describing the relation between the internal (genetic or “beneath the skin”) and external (or environmental) factors that determine organismic ontogeny and phylogeny.

These metaphors are, as described in Lewontin’s book “The Triple Helix”,

(1) the genetic blueprint metaphor (Chapter 1),

which concerns the development of organismic form

and

(2) the key-lock metaphor (Chapter 2),

which concerns the evolution of organismic form.

On Tuesday, we will attempt to identify the key components of these metaphors and Lewontin’s arguments against them. Then we will discuss in detail the niche construction hypothesis (as proposed by Odling-Smee et al. in their book “Niche Construction: The Neglected Process in Evolution”) and how it rejects the second metaphor.

Feel free to post your thoughts on these metaphors and we will include them in our discussion on Tuesday.

-Lynn

Organism Vs. Environment Debate Reading Priority

In Uncategorized on April 7, 2010 by Lynn Chiu

Dear all,

For next week’s discussion on Organism vs. Environment, here’s the suggested priority:

1.  Lewontin’s Triple Helix Chapter 2

2. Odling-Smee et al.’s Niche Construction Chapter 1

3. Lewontin’s Triple Helix Chapter 1

Extra reading:

4. Odling-Smee et al.’s Niche Construction Chapter 10 (a summary of their arguments)

There’s also a glossary of terms from Odling-Smee et al. in the extra reading folder.

-Lynn

Recommend: Lewontin’s “The Triple Helix”

In Uncategorized on March 29, 2010 by Lynn Chiu

Hi all,

Coming up the week after next is my section on the Organism/Environment debate. We will focus on the three themes introduced in the first three chapters of Richard Lewontin’s important little book “The Triple Helix” (amazon link) and connect them with arguments for and against niche construction.

We are going to cover three out of the four chapters of the book, which is 70 pages long. Lewontin’s book is highly accessible to those without any background in biology or philosophy. Andre and I highly recommend this book; if you order it now, it’ll be in your hands by the time we discuss it!

End of advertisement, back to work! Have a great spring break!

Best,

Lynn

Macro/micro vs. actual/possible

In Uncategorized on March 3, 2010 by Lynn Chiu

I had a slight difficulty wrapping my head around the distinction and comparison Andre made between macro/micro phenomena and actual/possible strings of events.

In Putnam’s example of the peg and the hole, the reductionism debate centered on whether the micro-phenomenon, that is, the causal interactions at the molecular level (or any level lower than that of the peg and hole), can sufficiently explain the macro-phenomenon of the peg going through the hole, substituting for the explanatory power of the macro-level explanation.

In this context, if the explanandum is this particular peg going through this particular hole at this particular time (i.e., these particular molecules interacting with those other particular molecules), then both macro- and micro-level explanations are explaining actual events.

Whether these explanations can serve to explain possible, and not merely actual, events relies on whether they are aimed at explaining a type of phenomenon or a token phenomenon. If the latter, then the explanation for this particular event (call it event P) cannot carry over to other unique events. If the former, then both macro- and micro-level explanations can carry over to other macro- or micro-level events. The macro-level explanation could explain how other, different token pegs and holes interact; the micro-level one, how different types of molecular configurations would interact.
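To illustrate what I mean by an explanation carrying over, here is a toy sketch (my own, not Putnam’s; the “particles” and numbers are made up). The macro-level, type-style explanation of why a peg fits appeals only to geometry and so applies to indefinitely many micro-level token configurations, while each micro-level explanation is tied to one particular configuration.

```python
# Toy sketch: a type-level (macro) explanation vs. token-level (micro) explanations
# of "the peg goes through the hole". Everything here is invented for illustration.
import random

def macro_explains_fit(peg_width, hole_width):
    """Macro explanation: geometry alone settles whether the peg passes."""
    return peg_width < hole_width

def one_token_microstate(peg_width, n_particles=1000, seed=None):
    """One particular micro-configuration: the 'molecules' making up this peg."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, peg_width), rng.uniform(0.0, peg_width))
            for _ in range(n_particles)]

def micro_explains_fit(particles, hole_width):
    """Micro explanation: check every particle of this token peg against the hole."""
    return all(x < hole_width and y < hole_width for x, y in particles)

hole_width = 1.0
for seed in range(3):                             # three distinct token micro-states
    peg = one_token_microstate(0.9, seed=seed)
    assert micro_explains_fit(peg, hole_width) == macro_explains_fit(0.9, hole_width)
# The single macro explanation covers all of these tokens (and ones never generated);
# each micro explanation only covers the token it was run on.
```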

Thus, since the macro/micro distinction does not imply the possible/actual distinction (i.e., it is not that the micro explanation lays out the actual causal links while the macro explanation serves to explain possible high-level phenomena with different low-level instantiations), I do not understand why the two are lumped together.

Part II of the kinds of theories that are reducible to each other

In Discussion on February 25, 2010 by Lynn Chiu

Wenwen’s comment on my other post about the kinds of theories that can be reduced to each other prompted me to post a part II, primarily because of its relation to this week’s topic: model idealization.

In that post, I puzzled over the kinds of theories that can be reduced to each other. My understanding of Nagelian formal reduction, with a higher-level theory, a lower-level theory, and bridging laws (connecting the terms and laws specified in the two theories), does not include any specification of what kinds of models can be plugged into the higher and lower levels. Anything goes, it seems, as long as there can be bridging laws. But what kinds of theories allow bridging laws? What kinds of bridging laws?
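For concreteness, here is the textbook Nagelian schema as I understand it (the predicate letters M and P are placeholders of my own, not notation from Nagel or from this week’s readings):

```latex
% Nagelian reduction, schematically.
% T_H: higher-level theory, T_L: lower-level theory, B: set of bridge laws.
\begin{align*}
  \text{Bridge law:} \quad & \forall x\,\bigl(M(x) \leftrightarrow P(x)\bigr)
      && \text{($M$ a higher-level term, $P$ a lower-level term)}\\
  \text{Reduction:}  \quad & T_L \cup B \vdash T_H
      && \text{(the laws of $T_H$ are derivable from $T_L$ plus $B$)}
\end{align*}
```

Nothing in the schema itself seems to constrain what kinds of models can stand in for the higher-level and lower-level theories, which is exactly my worry.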

Maybe this is a question Nagel already answered, so please inform and enlighten me if you know.

My discussion below relies on the assumption that theories are idealizations of the target phenomenon. This is arguable, so please attack this point if you don’t feel comfortable with it.

Following Weisberg’s distinction between different representational ideals, the higher- and lower-level theories may be models aiming for different goals: completeness, primary causation (1-causal), prediction (Maxout), simplicity (qualitative matches between the model and the target), or generality (range of application). Depending on what the models are for, what the bridging laws capture would be dramatically different. I will discuss the first four, because generality concerns the actual/possible range of future applications, which is not my focus here.

Completeness: if both are completeness models, then both introduce distortions that will eventually be removed. Therefore, the bridging laws are basically linking the same entities and the same laws, though each formal system accounts for them differently because of its particular distortions.

1-causal: if both are 1-causal models and both are accurate, then, if reduction is possible, the higher-level causal links can be explained by the real causal links at the lower level.

This is very different from how higher- and lower-level theories would look if their purpose were to achieve prediction or qualitative similarity with the target phenomenon. Such models are not concerned with getting the actual causal links “right”. It is possible that the models match the surface phenomenon and predict very well, yet are not structurally isomorphic to the causal structure of the phenomenon at all.

Therefore, there are two ways reduction could go:

(1) the bridging laws demonstrate how the two models parallel each other in prediction,

(2) since the “surface phenomenon” is different at different levels, the bridging laws might be showing how the phenomena at the different levels match each other via their respective descriptive models.
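To make the worry about predictively matched but structurally different models concrete, here is a toy sketch (my own illustration; the functions and coefficients are made up, not anything from Weisberg or Waters):

```python
# Toy sketch: two models that agree closely in their predictions over the observed
# range, while only one purports to describe the underlying causal structure.
import math

def mechanistic_model(t, rate=0.5, initial=100.0):
    """Pretends to track an underlying causal process (exponential decay)."""
    return initial * math.exp(-rate * t)

def curve_fit_model(t):
    """Purely phenomenological fit, tuned to match the other model for 0 <= t <= 2."""
    return 100.0 - 47.1 * t + 7.75 * t ** 2      # made-up quadratic approximation

for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    a, b = mechanistic_model(t), curve_fit_model(t)
    print(f"t={t:.1f}  mechanistic={a:6.2f}  curve-fit={b:6.2f}  diff={abs(a - b):4.2f}")
# A "reduction" that only tracks matching predictions would treat these two models as
# interchangeable, even though their internal structures are not isomorphic at all.
```

If the bridging laws only certify this kind of predictive parallel, the “reduction” they license is a much weaker thing than a mapping between real causal structures.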

So…depending on what models one is talking about, the “reductions” mean completely different things!!!!

If you buy my argument, then, looking back at the relation between Classical Mendelian Genetics (CMG) and the achievements of molecular genetics (MG), what kind of reduction do Waters and others have in mind?

I have the feeling that the general consensus is that they are 1-causal models. However, is this what CMG and MG really aim to do? Suppose they are both aiming to determine the core causal connections. Next, we have to figure out whether we’re discussing this question because (1) CMG and MG are accurate in predicting and describing the surface phenomenon, or because (2) CMG and MG posit laws and entities that are real.

If the former, then even if they actually are 1-causal models, we are not reducing them for what they actually are but for what they achieve as prediction and/or simplicity models. If this is so, we are applying the wrong sort of reduction to these models.

If the latter, well, that has to be PROVEN. If one of them is wrong (e.g., there are no genes), then even if they are accurate in their descriptions and predictions, reduction cannot be done at all!!! This is what I think was lacking in the papers we read last week: they did not show that the assumptions and posited entities/laws of CMG are still correct, but seemed to rely only on CMG’s accurate descriptions of the phenomenon.

Whew!

The kinds of theories that are reducible to each other

In Discussion on February 16, 2010 by Lynn Chiu

After a very pleasant evening chatting with Leo and Josh, I’ve finally clarified what’s been bothering me about the readings this week, particularly Waters’. Upon reading the blog afterwards, I realized that Todd might be touching on something along similar lines. However, since this is a separate issue in itself, I decided to post an independent article.

The issue is what makes one theory “higher-level” than another, “lower-level” one, such that we can debate whether they are reducible to each other or not. Some theories reside in domains that are obviously in a nested hierarchical relation to each other, such as social psychology theories versus cognitive neuroscience theories. These obviously invite the question of whether they can be reduced to each other. Others are in the same domain, such as Newtonian physics versus special relativity, and thus raise not the question of reducibility but of which is right and which is wrong.

However, within the same domain there still seem to be theories that deal with topics at relatively “macro” and “micro” scales. The topic of our focus today is Classical Mendelian Genetics (CMG) versus molecular biology (MB) (or sometimes molecular genetics). Both belong to the domain of biology, but the former deals with patterns at a more macro scale than the latter. The question then becomes: is there a genuine difference in level between CMG and MB such that there is an issue of reducibility?

I think there are three possibilities.

The first is that CMG and MB are genuinely different levels of explanation, the targets of their investigation being phenomena happening at different but simultaneously occurring levels. In this case, the two would be reducible in Nagel’s formal sense of theoretical reduction.

The second is that CMG and MB are two theories that aim to describe the same level of phenomena but are constructed on different assumptions (sets of axioms) and laws. In this case, it is not proper to say that the two are reducible to each other, but rather that one is right and the other is wrong.

The third is that CMG and MB are theories that are both useful, in that they correctly grasp patterns in the world. The two are then informally reducible in the sense that the patterns usefully described by one can also be usefully described by the other.

I think we need to be clear about what we think about the two theories such that we find it meaningful to discuss whether they are reducible to each other or not, and in what sense reducible.

My oversimplifying opinion is that Todd seems to be aiming for (2), while Josh is for (3). Waters, on the other hand, seems to set out to do (1), but I only see him providing evidence for (3).

As for myself, I’m thinking more that (2) is actually the case. But an explanation of my own views would make this post wayyyy too long. I’ll put it in the comments later!

Looking forward to your comments!