Author Archive

Articles

What happens when comprehensibility exceeds cognitive power?

In Uncategorized on March 8, 2010 by toddallinmorman

What has struck me recently in contemplating the staggering complexity of ecological and environmental phenomena is the distinct possibility that we may never be able to comprehend the intricacies of the biosphere in the fullness of their details. But this has caused me to contemplate the staggering, still-evolving potential we have to exponentially increase the cognitive power of the universe.

Levins has argued that even if computers could deal with the hundreds of partial differential equations, with thousands of millions of variables, that are involved, the results would be incomprehensible to our limited faculties. A computer that you or I could buy today is likely more powerful than anything considered a ‘good’ computer when he wrote, so how has this increased computing power changed things? Human cognitive potential has remained largely untapped, and I see two possible solutions, each with the same potential problem.

One is to develop creative, intuitive, and imaginative artificial intelligences whose computational speeds are limited only by the resistance of the matter involved. These creations could then suss it out and explain it to us, in terms that we could hope to understand. The primary difficulty here is that whatever theory they developed might take an unaugmented human hundreds of years just to read.

The other possibility is to artificially augment human memory and computing capacity, and hope these new people could explain it to us.

What does it mean for science if it is all theoretically comprehensible, but it is practically impossible to express the theory of everything in a single human lifetime in a form anyone could hope to understand? Or are we expecting (hoping?) that the patterns in the staggeringly complex data are governed by principles necessarily comprehensible to our beautiful and amazing, but still limited, minds?


Problems with adaptationism.

In Uncategorized on March 2, 2010 by toddallinmorman

In another thread, as an aside, I called adaptationism ‘silly.’ I did not go into the details because the thrust of that thread was models and explanation and my opinion of adaptationism was not pertinent, but I included the editorializing nonetheless. Here is a more substantive presentation.

S&O define the adaptationist hypothesis as: natural selection is a sufficient explanation for most nonmolecular traits, and these traits are locally optimal. I think B&R do a fine job of demonstrating the problems. The proposition assumes optimality. The term ‘most’ is unclear and ill defined. The notion of ‘sufficient’ is unclear. Optimality is defined by S&O as simply having a higher fitness, but elsewhere it is implied that optimality creates something of a steady state of dominance: “As a result other phenotypes are eliminated from the population (or nearly so) or prevented from invading.” (Internal citations omitted, S&O p. 362.) Sufficiency is alleged to be confirmed by a ‘censored’ optimality model (note that optimality is now being assumed): “Natural selection here provides a sufficient explanation because taking other factors into account could not significantly enhance the predictive accuracy of the optimality.” (S&O 363.)

I already ranted a bit about ‘most’ in the original thread, but to add to that: if ‘most’ means barely more than half, this hypothesis seems fairly uninteresting, since in nearly half of all instances it will not be true. In addition, confirmation is alleged to come from a model, whose accuracy we cannot independently verify, that does not differ ‘significantly’ from the ‘optimality.’ But optimality was previously defined as simply being the most fit trait, so how can anything significantly differ from it (the question is either/or)? S&O must mean that the model’s prediction of fitness must not be ‘significantly’ enhanced by including the relevant additional factors. While ‘optimality’ in the first definition is trivially verifiable, as it does not come in degrees, optimality models appear to provide projections of the actual fitness. Thus S&O must be arguing that the model’s predictions do not differ significantly from another model’s predictions. But what if both models are wildly inaccurate?

B&R modify S&O to take care of some of these difficulties and make the adaptationist hypothesis a two-part hypothesis: natural selection is the sole process involved in the evolution of T (from some point in time at which all of the relevant variants exist in the relevant lineage), AND T is locally optimal. (B&R 192.) This is a vast improvement over S&O, but by couching the hypothesis in terms of evolution (here meaning change in fitness, and thus change in the relative preponderance of a trait, represented by a real number) we are faced again with the precise value of fitness alluded to by S&O. B&R demonstrate that this hypothesis is trivially false in all cases, as other factors are involved in the exact fitness of a trait and the precise fitness will differ. (See pages 196-97.) This is, of course, assuming optimality, which we have no reason to assume. Please also note that adaptationism ignores mutation as the source of traits in the first place and thus only deals with a very localized stretch of time.

It seems that adaptationism is trying to claim that natural selection (assuming optimality, which is not a fair assumption) is the cause of the fitness of a trait. Without characterizing it as the ‘primary’ cause, this is simply not true, as the fitness with only natural selection at work would vary. If one is simply saying natural selection is the primary cause of the current level of fitness, this seems trivially true, as it is hard to imagine a model built without natural selection that would come anywhere close to providing an accurate prediction of fitness, except in the rarest and most extreme of cases. I have to run to class.



Back to multiple realizability argument difficulties

In Uncategorized on February 24, 2010 by toddallinmorman

At the end of the last class I had some serious difficulty following what was being objected to and why. I have finally reviewed my notes, and my questions follow below. First, I’d like to say that my earlier question was about why people care so much about the reducibility of CMG to MB, not ‘why do we care if reducibility is possible at all?’ or ‘why do we care if multiple realization is a legitimate foundation for reduction?’ Those last two questions I find interesting. Thus this is not a revisiting of my prior question on this topic.

(To reply to Lynn’s related post: I think CMG is trivially non-reducible after reading Ernst, because it does not actually explain our more detailed understanding of phenomena, while MB does, so it is a paradigm shift. Back to class: I also do not feel the peg-in-the-hole example is a case of multiple realization, as the general laws governing matter, space, and electron repulsion are not specific to the particular atoms that make up the peg, so general atomic theory suffices.)

So, the questions addressed at the end were:

If P explains Q, is it also true that having Ai explains having Q?

and

Do lower level laws of the form ‘Ai implies Bi’ explain higher-level laws of the form ‘if P then Q’ (assuming that being Q implies having some Bi as the cause of its being Q)?

Given that, under multiple realizability, for P to be P it must have some (unknown) Ai, and for Q to be Q it must have some Bi, it seems to me that this lower-level law, of relative complexity, must offer an explanation for ‘why Q?’

This is because A implies the proper corresponding B (I am done with subscripts; they slow things down too much);

A implies P;

B implies Q;

Therefore the presence of A implies P, a corresponding B, and then Q. Thus Q is explained by the laws and connectors.

Similarly, P implies some A;

Some A implies a corresponding B;

Some B implies Q. Thus the higher level law, P implies Q, is explained in more detail (if we can determine the specific A linked to P, or alternatively the B linked to Q).
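To put those two chains a bit more compactly (this is only a restatement of the argument above, in LaTeX shorthand, reading ‘implies’ loosely as ‘realizes or brings about’ rather than as strict material implication):

A_i \Rightarrow B_i, \quad A_i \Rightarrow P, \quad B_i \Rightarrow Q \;\therefore\; A_i \Rightarrow (P \wedge B_i \wedge Q)

P \Rightarrow (A_1 \vee \dots \vee A_n), \quad A_i \Rightarrow B_i, \quad B_i \Rightarrow Q \;\therefore\; P \Rightarrow Q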

I’ve tried to review the other works we’ve read to find the basis of an argument that the complex multiple-realizability rule does not explain the phenomena or the higher-level rule (actually, it seems to me to explain them much more fully, if one can learn the details, though not necessarily with more utility). Was someone actually claiming that the answer to either of the questions posed in class was ‘no’?

If so, please explain. If it is just a matter of ‘better’ explanation, I frankly don’t care, because that is simply applying some arbitrary definition of ‘better.’


Difficulties with Weisberg, “Qualitative Theory and Chemical Explanation”

In Uncategorized on February 20, 2010 by toddallinmorman

“Qualitative Theory and Chemical Explanation” by Weisberg is a bit frustrating to read. His mathematical notation is mistaken, and this causes a serious problem in trying to follow what he is trying to say. He also makes a serious mathematical error regarding infinite sets, subsets, and equality.

On notation: on page 1075, in ‘instantiations’ (5) and (6), he means to write that the constant k can vary across a range of values from 0.9 to 1.1, or from 0.99 to 1.01, in the respective instantiations. He says as much in the following sentences, but the problem is that his ‘instantiations’ do not say that. If one remembers the order of operations, placing a constant like 0.1 or 0.01 directly next to a parenthesis indicates a multiplication to be carried out before the product is added to or subtracted from the newly introduced constant 1.0 or 1.01. In addition, while in certain scientific notation the plus/minus symbol can mean ‘within this given margin,’ in mathematics it more commonly means both adding and subtracting the precise amount that follows the symbol, once the product is taken. It took me half an hour of trying to find a technical definition of ‘precision’ in statistics that might explain his huge leaps in logic before I realized it was just strange notation outside the well-established order of operations.
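To illustrate the parsing problem with a made-up expression (I am not reproducing Weisberg’s actual instantiations here), a string of the form

k = 1.0 \pm 0.1\,(\text{some expression})

is, by the order of operations, read as k = 1.0 \pm (0.1 \times \text{some expression}), whereas the reading he apparently intends is just a margin: k \in [0.9,\, 1.1], i.e. k = 1.0 \pm 0.1.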

When writing of p-generality, Weisberg makes a serious error regarding set size. While it is true that the range of numbers from 0.99 to 1.01 is a subset of the range from 0.9 to 1.1, it does not follow that the subset is smaller than the full set. The part Weisberg misses is that both are infinite sets. Set theory defines two sets as having equal size if and only if there is a function that produces a one-to-one correspondence between their members. If we take a member of the set from 0.99 to 1.01 to be x and a member of the set from 0.9 to 1.1 to be y, the function that provides a one-to-one matching is f(x) = 10(x – 0.9), or y = 10(x – 0.9). Take any element of the subset and place it into f(x) and you will get an element of the other set. It is a simple matter of algebra to derive the inverse of this function if one starts with a y and wants to find x, the corresponding element of the subset. The inverse function is x = 0.9 + y/10, or x = 0.9 + 0.1y.
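In symbols, and checking the map at its endpoints (this is just the function above written out in LaTeX):

f : [0.99,\, 1.01] \to [0.9,\, 1.1], \qquad f(x) = 10(x - 0.9)

f(0.99) = 10(0.09) = 0.9, \qquad f(1.01) = 10(0.11) = 1.1, \qquad f^{-1}(y) = 0.9 + y/10

Since f is a bijection, the two intervals have exactly the same cardinality.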

Thus, as the sizes of the sets of logically possible values of k have not changed, there has been no change in p-generality, while, by definition, there is an increase in precision (a subset is defined as more precise than the set it comes from; p. 1075, the indented passage about two-thirds of the way down). As p-generality remains constant (that is, infinite), precision increases. To be charitable, his logic holds only if the sets involved are finite or of different orders of infinity. In his example, without more, the orders of infinity involved are the same.


Why is all this ink being spilled over the alleged (im)possibility of reducing Mendelian Genetics to Molecular Biology?

In Uncategorized on February 15, 2010 by toddallinmorman

As an intellectual exercise, this process of reduction has its interest, but the tone of the literature suggests this is a topic people actually care about. At first, when contemplating this process, it appeared likely that classical Mendelian Genetics (CMG) was a theory left behind in a scientific paradigm shift, of the kind Kuhn outlined. In such a case we would be leaving behind an outdated concept of genes as the operative what-not that explains the dispersion of traits in organisms. Some of Kitcher’s arguments against reductionism pushed me in that direction. I was pretty much ready to let the concept of the gene go entirely, as so much outdated nineteenth-century thinking.

Then Sober and Waters changed my mind by pointing out that Molecular Biology (MB) has redefined genes and claims that many genes are involved, in a complex way, in (potentially) accounting for CMG genes. The difficulty then arises that, as presented, CMG is a little unclear as to what exactly it might be saying. Does it present simple laws, or complex explanations that are very organism- and trait-specific? Depending on how we answer this question, we will get different answers.

For example, if CMG has simple laws that don’t really account for the details of actually observed phenomena, this would be more of a paradigm shift, and we should just move on: the theory does not accurately explain phenomena, and if MB does, it would be disturbing if CMG could be reduced to MB. If, on the other hand, CMG is a great many specific rules, bordering on annotated observations, then it might be a sloppy theory (by comparison, but still innovative and insightful in its observations), yet potentially completely explainable by MB. Much of Sober’s argument hinges on both MB and CMG being ‘correct’ (not being versed in the subtleties of philosophy-of-science nuance, I’d prefer to say ‘viable in the presence of the known phenomena’).

So, it seems to me there are two potential reasons for caring so much about the argument over reduction, and both involve contrasting metaphysical assumptions about the nature of existence. One reason to care about the reducibility of CMG is that demonstrating its reducibility would present it as a continuing viable explanation and negate the position that a paradigm shift had occurred, since the theories would be commensurable. This position assumes that there is one ultimate scientific truth of the universe that is (theoretically) discoverable.

The alternate reason to care assumes that CMG, as well as MB, is still viable in the face of the known phenomena, and that proving the impossibility of reduction will demonstrate (or at least help preserve the viability of the position of) the disconnected nature of the various scientific fields. This is naturally a desirable place to be, as the implications of full scientific explanation bring up frightening possibilities of an intelligence socially and personally engineering all human outcomes or activity (or, alternatively, of everything being determined without any purpose). If psychology, psychiatry, and the social sciences are separate from physics, then some form of human agency may be retained (though I would love a coherent definition of ‘free will’ other than ‘the self-causing thing’).

While the nature of existence will not be determined by the ‘winner’ in an argument over the reducibility of CMG to MB, I guess I can see why someone might care beyond the pure intellectual puzzle of it. Are there less grand (or more grand than the completely banal) reasons for caring so much about this intellectual exercise?


Rotation for Comments on Reading Summaries

In Uncategorized on January 21, 2010 by toddallinmorman

Please find below my comments to the professor on this issue, which he suggested I post on the blog for comment. The important part is the proposed rotation method in the final paragraph.

Todd

Your suggested method of randomly selected private comments would likely leave several people each week without anyone’s comments.

There are currently 14 students listed in the class. There is a 12/13 chance that any particular student will not be chosen when one person randomly selects a student to review. Since this happens 13 times (as a person would not review herself), there is a 106993205379072/302875106592253, or about 35%, chance that any one student will not have their summary commented on. Thus it is statistically expected that an average of about 4.9 students per week will not have comments on their summaries.
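The same calculation in symbols, as a quick check of the figures above:

P(\text{no one comments on a given student}) = \left(\tfrac{12}{13}\right)^{13} = \tfrac{12^{13}}{13^{13}} = \tfrac{106993205379072}{302875106592253} \approx 0.353

E(\text{students left without comments in a week}) = 14 \times \left(\tfrac{12}{13}\right)^{13} \approx 4.9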
A simple way of ensuring that each person receives comments would be to have each person comment on the person whose name is as many places below their own on the list as the number of weeks into the course, and, if one runs past the bottom of the list, to continue counting from the top. Thus each week every person has an assigned commenter, and each person comments on another person each week. One would, of course, skip oneself in week 14. The algorithm is also simple and approachable, and everyone would know whom to expect a comment from. A sketch of the scheme in code follows the note below.
[This was written before I learned that some people on the list may not be posting summaries. The simple tweak for that would be to exclude them from the master ordering.]
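Here is a minimal sketch of the proposed rotation in Python. The roster names, the function name, and the week numbering starting at 1 are placeholders of my own for illustration, not anything from the actual class list.

# Minimal sketch of the proposed rotation. Assumes a fixed roster order and
# weeks numbered starting at 1; anyone not posting summaries would simply be
# left out of the roster before calling this.

def commenter_assignments(roster, week):
    """Return a {commenter: commentee} mapping for the given week.

    Each person comments on the person `week` places below them on the
    roster, wrapping around at the bottom; a person skips the week in
    which they would land on themselves (week 14 for a 14-person list).
    """
    n = len(roster)
    assignments = {}
    for i, commenter in enumerate(roster):
        j = (i + week) % n
        if j == i:  # would be commenting on oneself; skip, as proposed above
            continue
        assignments[commenter] = roster[j]
    return assignments

# Example with a hypothetical 14-person roster:
roster = ["student_%d" % k for k in range(1, 15)]
print(commenter_assignments(roster, week=1))

With this scheme every student both gives and receives exactly one comment each week (except in the self-collision week), which is the point of replacing random selection.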