
Are Models Analogies?

In Uncategorized on February 22, 2010 by bcnjake

After working on the analogy/metaphor distinction this week and reading the Weisberg pieces, I’m curious about your input on the following:

Take an analogy to be a set of qualitative isomorphisms between two or more systems. As the number of isomorphisms grows, so does the strength of the analogy; a sufficiently strong analogy will end up having both predictive and explanatory power. For example, if I say that planets are baseballs orbiting the sun, this analogy doesn’t have any great predictive or explanatory power. On the other hand, if I say that the heart is a pump, I have a stronger analogy, since there are a number of isomorphisms between hearts and pumps. As a result, I can reasonably predict both how and why blood moves throughout the body.
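
One rough way to make this working definition precise (a sketch in my own notation, assuming each system can be treated as a relational structure): let

\[ S_1 = \langle D_1, R_1, \ldots, R_n \rangle, \qquad S_2 = \langle D_2, R_1', \ldots, R_n' \rangle. \]

A qualitative isomorphism with respect to the \(i\)-th relation is then a bijection \(f : D_1 \to D_2\) such that

\[ R_i(a_1, \ldots, a_k) \iff R_i'(f(a_1), \ldots, f(a_k)) \]

for all \(a_1, \ldots, a_k\) in \(D_1\). On this reading, the strength of an analogy tracks how many of the relations \(R_i\) some such map preserves: hearts and pumps share many (intake, output, valves, pressure differentials), while planets and baseballs share almost none.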

Now consider models. As I understand models in the reading, they are supposed to have predictive power, explanatory power, or both regarding the systems they model. If this is the case, can we say that models are a class of analogy? I suppose it depends on the origin of the predictive/explanatory power. If that power is the result of an isomorphism, it seems that the model would be an analogy, and (as the Catalans say) if not, not.

But how can a model have explanatory power without some form of isomorphism between the model and the system it seeks to describe? That would be a rather odd position to hold. Predictive models, on the other hand, seem able to predict outcomes accurately without needing qualitative isomorphisms. Then again, it seems intuitively plausible that without an isomorphism, the predictions will eventually divorce themselves from reality. If this happens, do we really want to call the system a model? Take the case of sportswriter Leonard Koppett, who claimed to be able to predict whether the Dow Jones would finish the year up or down. His model was 100% accurate for (I believe) three decades, but it lost steam, because his predictor was whether an NFC or AFC team won the Super Bowl. Clearly no isomorphism exists between Super Bowl winners and stock performance. Do we want to count this as a model, no matter how poor, or do we want to say that it’s something else?
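
(A quick back-of-the-envelope check, taking the figures above at face value: if up/down years were fair coin flips, a perfect thirty-year run would have probability

\[ \left(\tfrac{1}{2}\right)^{30} \approx 9 \times 10^{-10}, \]

which is why such a streak looks far too good to be chance, and why a predictor with no isomorphism behind it can seem compelling right up until it fails.)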

So here’s where I’m at: analogies between systems share isomorphisms. Models are predictive or explanatory. Explanatory models intuitively seem to involve isomorphisms. Predictive models don’t require isomorphisms, but those that lack them seem fatally flawed. Do we (1) say that predictive models this flawed are somehow not models, and claim that all “proper” models are analogies; (2) say that not all models are analogies, because even the flawed predictive models are still models; or (3) argue that, for some reason, models are something separate from analogies?


10 Responses to “Are Models Analogies?”

  1. Interestingly, you are making an analogy between models and analogies, based on their common property of being able to predict and explain. However, they are very different species. The way you put it, all theories, including scientific theories, are analogies, because they are meant to explain and predict; but there is no second “thing” to which these theories are being compared.

    When we draw an analogy between the heart and the pump, there are two things that we compare. We hope that whatever is true of one object is also true of the other.

    But when we theorize, regardless of whether we are trying to cut nature at its joints and retrieve the core causal factors or trying to simplify the system for pragmatic handling, we are not comparing the system we’re theorizing about with…anything. The isomorphism between the model/theory and the phenomenon is not the same kind of isomorphism as that between a heart and a pump. The pump is not a derived or simplified version of the heart, nor a set of its core components.

    However, I think you made an interesting point about the value of predictive but non-isomorphic models/theories.
    So I am shifting the discussion to the debate between isomorphic models and non-isomorphic but predictive ones. Is the latter kind flawed if it does not account for underlying structure? (I hope this is what you were asking.)

    If our interest is in understanding what’s “real” out there, that is, the core causal factors or the detailed causal links that underlie our target phenomenon, I do not see why predictability without isomorphism would be useful for us at all. If one already KNOWS that a theory is non-isomorphic, i.e., is not cutting nature at its joints, then the theory does not provide us with any true facts about the underlying nature of the phenomenon.

    The AI industry relies on a reverse-engineering methodology to construct artificial intelligence resembling the human mind. Instead of modeling exactly how the human brain is constructed, it designs algorithms that match our outputs. This might be achieved by modeling human heuristics and cognitive functions/limitations. However, even if the project successfully predicts human mentality and behavior, how would that help us understand our minds?

    It might be argued that the mind is multiply realizable. But all the possible realizations share the same structure/properties that make them realizations of the same thing. Thus if an AI system really is a realization of the human mind, it should share structures/properties that ARE isomorphic to how our system works. The level of isomorphism might vary: it might require modeling at the level of neurons (connectionism), at the level of Turing-machine functions (machine functionalism), or at the level of behavior (behaviorism). Regardless, these would all be attempts to cut nature at its joints, not aims at mere predictability.

  2. The problem I see here is one that appears to come up over and over in the few philosophy of science discussions I have been party to: an underlying assumption of knowledge and truth at some level in the scientific theory. This assumption appears to me to be implicit in Jake’s model question.

    In a sense, a model cannot be an analogy because there is nothing to analogize to. What we do know is that we have complex phenomena, and science hopes to explain them. Part of this process is different forms of modeling. Our current best understanding may be completely modeled in mathematical formulation, but this is only a model of our current best explanation. Idealized models will ignore known complex phenomena in an attempt to understand components of the known phenomena, or to better explain them in manageable ways. The most interesting variant is the black-box predictive model, which may bear no relation to anything we ultimately theorize (and more or less relation to an unknown ultimate reality), but which can be taken together with other models to open avenues of inquiry through robustness analysis.

    Scientific models are tools of understanding and investigation that help at different points in the endeavor to explain the complex phenomena of existence. The actual truth of the phenomena is unknown, so there can be no knowledge as to the validity or strength of any analogy. Given the nature of scientific progress, it is quite likely that all current scientific theories will be replaced in the centuries to come by new paradigms incompatible with current understandings.

    Ptolemy’s model of the solar system has predictive value for some limited phenomena, but none when set against our current paradigm’s theories, which explain our far more extensive knowledge of larger phenomena. Models often analogize to nothing, and, when directly representing scientific theories, will likely be representations of things unsupportable by future knowledge of phenomena. Thus even ‘explanatory’ models are at best analogies to the current state of our understanding, and nothing more.

  3. I have been building models all my adult life. Having said that, I do not share a common vocabulary with Michael Weisberg. As an example, I would have failed my PhD prelims if I had defined robustness as he does. But let me wade in with a response to Jake’s question. Models are not analogies. They are abstractions or representations of some target. Analogies can be construed as representations of some target as well, and I like “multiple isomorphisms” as a concrete measure of the soundness of an analogic representation. But, in my vocabulary, the model represents the target through idealization and abstraction, not through analogy. I might construct a model of a target based upon some analogy, as I think Lynn suggests with the AI system.

    Neoclassical economics relies on abstractions and idealizations for the construction of models of consumer decisions. No analogies are tolerated.

  4. Just a short thought provoked by REW:

    When did scientists ever use analogies as part of their scientific method?

    After four years of lab work in bio and psychology, the only time I’ve heard serious talk about analogies is when philosophers *accuse* scientists of unconsciously incorporating assumptions from analogies. For example, in cases like DNA-as-blueprint and brain-as-machine, instead of merely using loose metaphors to ease public understanding, some scientists can easily get carried away and actually treat their theories as modeling DNA *as* an information-containing blueprint of the organism’s final form, and brains *as* Turing machines with the same nature and computational power. In these cases, it is not so much that the scientists were actively using analogies to solve problems as that they were too sloppy in their reasoning.

  5. Lynn, see the essay by Dedre Gentner and Michael Jeziorski called “The Shift from Metaphor to Analogy in Western Science”. In it, they discuss Boyle, Rutherford, and Carnot and their analogic approaches in science. I wouldn’t call it sloppy. As I recall, their story-telling was pretty good.

    I also remember sitting through a compelling seminar by Douglas Hofstadter entitled “Analogy as the Core of Cognition”. He argues that virtually all of our thought processes are based on analogy (and sometimes metaphor!). Granted, Hofstadter can be peculiar. But he is not alone in looking at the role of analogy in scientific thought.

  6. Lynn, you made a couple of points I want to touch on. First, you suggest that analogy has no place in the scientific method. That seems wrong. On the standard account of the scientific method (characterization, hypothesis, prediction, experiment, evaluation, and confirmation), analogy could be an important part of the first step. Obviously, analogy isn’t necessary, but when figuring out how to think about a problem, it can be useful.

    Perhaps your instances of accusing scientists of analogizing are cases in which coincidental similarities appear to be qualitative isomorphisms. If we say the brain is a computer, this obviously doesn’t mean that every aspect of the brain maps isomorphically onto a computer, or vice versa. While there are some clear disanalogies (e.g., the brain has no place for a CD-ROM), less clear cases may trip up scientists, leading them to interpret coincidence as isomorphism without experimentation, and so leading to accusations of “analogizing.” This, I think, suggests not only that analogies are useful to scientists, but also that differentiating qualitative isomorphism from coincidental similarity is important scientific work, and not “merely” a tool for public understanding.

    The other thing you mentioned was that “brain-as-Turing-machine” and “DNA-as-blueprint” are loose metaphors. This gets to the heart of what I’ve been thinking about. Scientists (and philosophers) switch back and forth between the words metaphor and analogy quite loosely. This is problematic, because analogies and metaphors are different things, and using them interchangeably leads to confusion.

    My working definition of analogy is at the top, so I won’t repeat it. Very roughly, though, while an analogy says true things about hearts and pumps or brains and computers, a metaphor fails to say anything true. If I say that André is a lion, any similarity is purely coincidental; I haven’t said anything true about André or about lions. Metaphors are devoid of qualitative isomorphisms. As a result, while analogy can be useful to science, metaphor has no place in it. Certainly, metaphors can lead to understanding in individual cases, but they cannot aid in scientific characterization, experimentation, and so on.

  7. Jake, I’m not sure that metaphors fail to say anything true. If that were the case, what would be their purpose? If one draws a metaphor of André as a lion, presumably it is because he shares some trait strongly associated with lions (e.g., he is maned, vicious, or likes to take naps under trees while his wife hunts 🙂 ). The more I have thought about this, the harder I find it not to say that metaphors are merely analogies that hold only one or two things to be analogous between the objects being compared.

  8. First of all, thanks to REW for pointing me to these interesting articles; they are directly relevant to my question, and I will be kicking away my midnight/afternoon naps to read them carefully.

    In response to Jake:
    Again, more interesting points worth pondering.

    First of all, André made the important distinction between discovery and justification, and I am beginning to see its importance in the debate we’re having right now. When “figuring out how to think about a problem, analogy can be useful”: is it useful in the sense of aiding discovery, or useful in the sense of adopting the theory explaining one phenomenon to explain another? If the latter, then once we start using the models of one phenomenon to model the other, it is no longer analogy but pure modeling. The latter use thus collapses into the former, and analogy again serves only as an aid at the beginning.

    The accusations from philosophers thus target the second step of the process: after transferring the model of one phenomenon to an analogous phenomenon, are there any unexamined assumptions that are transferred along with it? Do these assumptions fit the new phenomenon? In the case of economics vs. evolutionary theory, when the two are considered analogous, the model explaining evolution is transferred to economics. Whether the model fits, needs to be adjusted, or is plain wrong depends on whether the assumptions of one are applicable to the other.

    Therefore, for now, I still hold that yes, analogies are useful: useful for discovery. But modeling itself is no longer a practice of making analogies.

    Secondly, seconding Josh, I also find it hard to draw the distinction between metaphors and analogies the way Jake does. If André and lions do not share ANY features at all, how could the metaphor work? If they DO share features, Jake seems to be saying that some but not all shared features count as analogous: if I’m making an analogy, there must be underlying isomorphisms between the two systems, but if I’m making a metaphor, then what I am pointing out are not real isomorphic features, merely superficial qualities. But are there such metaphors? What metaphor would intentionally grasp two merely incidental properties and try to make a substantial point with them?

    In some cases, I think what Jake calls metaphor and analogy are really different “levels” of analogy. The analogy between the brain and the computer can be very fine-grained, such that every neuron has to resemble a function in a Turing machine. But it could also be less fine-grained: the brain as a whole is a function that produces certain outputs given certain inputs. In the latter case, the inner workings could be very different from a computer’s (no CD-ROMs, etc.), but the pattern at the more abstract level is still isomorphic to the computer. In such cases, some metaphors would be analogies at a higher level of abstraction.

    I also agree with Josh that some other metaphors might just hold a couple of features to be analogous without providing a whole picture. In multiple-model idealization, one can use several models that capture different features of the system (even if they contradict one another); these kinds of metaphors might be analogies that are not full-blown enough to explain the entirety of the system.

  9. Very interesting topic. I have some thoughts:
    (1) A model can have many desiderata: explanatory power, predictive power, completeness, simplicity…

    A model for weather forecasting may not capture all the causes of a snowfall, but it can be useful in terms of prediction. If this is the case, then a model doesn’t have to be isomorphic with the target phenomenon. So I would say that a model doesn’t need to be analogous (in Jake’s sense) to the target phenomenon.

    (2) Does an analogy require isomorphism?
    Isomorphism is surely sufficient for analogy, but it may not be necessary. An argument by analogy, say Thomson’s violinist case, doesn’t seem to point out any isomorphism. Maybe we want to restrict “analogy” to analogies between systems.

    But even with this restriction, I don’t see why isomorphism is required. Matthewson and Weisberg talk about different model-world relationships (180): isomorphism, homomorphism, metric relationships. It seems to me that as long as two systems have some important features in common, we are able to draw an analogy. Suppose biological evolution and economic evolution are analogous: are they isomorphic? If we think they are, then we need to disregard some of the factors that cause the phenomenon. Whether two systems are isomorphic also depends on how we characterize isomorphism: should all the causal factors be considered? At what level should the systems be isomorphic (i.e., how precise)?
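
    To see what the weaker relations buy us, here is a sketch in the notation of the original post’s definition (my own gloss, not Matthewson and Weisberg’s formalism): a homomorphism only has to preserve structure in one direction and need not be a bijection,

    \[ h : D_1 \to D_2, \qquad R_i(a_1, \ldots, a_k) \implies R_i'(h(a_1), \ldots, h(a_k)), \]

    so a model can collapse or omit parts of the target, i.e., disregard some of the causal factors, and still stand in a lawful model-world relation to it.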

    (3) Metaphors. I agree with Jake that metaphors are literally false. But I also agree with Josh and Lynn that metaphors say something true: they are true insofar as the two things share some features. Intuitively, metaphors are weaker than analogies (in Jake’s sense), e.g., “economic evolution is biological evolution.” But if we lower the standard for analogy, the situation may differ.

    I am quite unsure about the metaphor/analogy distinction.

  10. I’ve always taken the metaphor/analogy distinction to be a rather artificial one. A metaphor is merely an analogy that does not explicitly state that it is an analogy. The metaphor “André is a lion” is obviously false, yet the goal of a metaphor isn’t to be true. A good metaphor of the form “X is Y” aims to point out that “X is like Y” is true in certain respects, and “X is like Y” is clearly an analogy.

    Now as far as the explanatory power of analogies goes, I have some doubts. If I were to say that system A is like system B, I’d be making an analogy. But the analogy itself does not have explanatory power until we consider the explanatory power of one of the systems: we must claim that system A has explanatory power before we can claim that system B does. Models, however, function in a different way. As Lynn said, models do not compare a system to some other system. A model aims to put a given system into a more generic and abstract form rather than compare it to some other actual system. A model is not a system; a model merely describes some functions of a system.
