
What happens when comprehensibility exceeds cognitive power?

In Uncategorized on March 8, 2010 by toddallinmorman

What has struck me recently in contemplating the staggering complexity of ecological and environmental phenomena is the distinct possibility that we may never be able to comprehend the intricacies of the biosphere in the fullness of the details. But this has caused me to contemplate the evolving staggering potential we have to exponentially increase the cognitive power of the universe.

Levins has argued that even if computers could deal with the hundreds of partial differential equations with thousands of millions of variables involved, the results would be incomprehensible to our limited faculties. Computers that you or I could buy today are likely more powerful than anything considered a ‘good’ computer when he wrote, so how has this increased computing power changed things? Human potential has remained largely untapped, and I see two possible solutions, each with the same underlying problem.

One is to develop creative, intuitive, and imaginative artificial intelligences that will be limited in their computational speeds only by the resistance of the matter involved. These creations could then suss it out and explain it to us, in terms that we could hope to understand. The primary difficulty here is that whatever theory they developed might require hundreds of years just for an unaugmented human to read.

The other possibility is to artificially augment human memory and computing capacity, and hope these new people could explain it to us.

What does it mean for science if it all is theoretically comprehensible, but it is practically impossible to express the theory of everything within a single human lifetime in a form anyone could hope to understand? Or are we expecting (hoping?) that the patterns in the staggeringly complex data are governed by principles necessarily comprehensible to our beautiful and amazing, but still limited, minds?


25 Responses to “What happens when comprehensibility exceeds cognitive power?”

  1. A couple of points:

    1. Why is it important to understand? You probably don’t understand the complexity of most devices in the modern world today.
    2. In a sense, you are also describing the “limits of science” – much like the Heisenberg uncertainty principle. Take a neural net (a form of artificial intelligence). What if we need the same number of artificial neurons to simulate a human brain, such that there is a one-to-one correspondence? What if we need the same number of computational units to simulate the complexity of the universe? Aren’t we simply substituting one form of complexity for another? In such a case there is an infinite regress that places a limit on understanding.
    3. Models and theories are tools – why would society want to invest scarce resources in the pursuit of “understanding” without a practical payoff?

    • 1. It is important for humanity to understand for its own sake. The pursuit of knowledge and understanding are values in and of themselves. Additionally, understanding the complexity of ecosystems and evolution will allow for them to become consciously managed and shaped to fulfill human ends.
      2. I am more interested in what it means on a more contemplative or metaphysical level for there to be theoretically achievable answers to scientific questions (not like Heisenberg uncertainty) that are nonetheless cognitively unobtainable. If there is no a priori reason knowledge cannot be achieved, what does it mean for the existence of that potential knowledge that it is practically impossible to attain? Or is it really practically impossible?
      3. I don’t care about models in this post, but about understanding the actual complexity of ecosystems and evolution. Resources are not at this time scarce, only allocated in highly inefficient and destructive ways. The practical ‘payoffs’ of understanding would include human management of ecosystems and evolution for improved human quality of life and an increase in resources dedicated to human happiness.

      • Sorry I’m coming back to this a bit late – I’m not receiving the email notifications of follow-up comments.

        Just as a bit of background, I have built computational models – both agent-based and system dynamics – as well as analytical solutions to problems. I would say that I have a pretty good understanding of the limitations of models and theories.

        “The pursuit of knowledge and understanding are values in and of themselves.” “Resources are not at this time scarce.”

        I disagree. There is always an opportunity cost – even if it is just how you want to spend the rest of your human life span.

        I am reminded of a chapter in “The Mind’s I” called “A Conversation with Einstein’s Brain” where the author assumes that the neuronal states of Einstein’s brain could theoretically be captured at death and transcribed into a huge book. If we knew all the neuronal pathways we could presumably trace auditory inputs to speech outputs, thus literally having a conversation with the deceased Einstein – even if it took centuries to transcribe the state changes in the book.

        So, to place this in Todd’s context, what if we built a giant computer to undertake these calculations for us? Is this feasible? And, if it is feasible, then why not build a planet-sized brain that is superior to that of Einstein’s brain and have a really cool conversation with that?

        This, in turn, is the scenario created by Douglas Adams in the Hitchhiker’s Guide to the Galaxy, where the Earth is secretly a super-computer intended to generate the Ultimate Question to the Ultimate Answer of “42” (originally divined by Deep Thought) with the process estimated to take some 10 million years. Of course, the mice end up creating a faux question so they could get paid without waiting 10 million years.

        Several conclusions can be drawn from Adams’ farce:
        1) there is no guarantee that a large (or infinite) investment will generate a positive payoff;
        2) the answer may be trivial (which is a reiteration of the first point) or incomprehensible (requiring yet further investment); and
        3) the purpose of an investment may be subverted by self-interested parties.

        Thus, promises that the “holy grail” will be found if we just invest a little (or a lot) more or try a little (or a lot) harder should be viewed with suspicion, with an eye on the payoff and whose interests are being served. It is in this context that I refer to the “limits of science”: more and more investment yields seemingly smaller and smaller “results”. What is the stopping rule? At what point do we stop saying that fundamental science “might” lead to massive breakthroughs in the future?

  2. Todd, I sympathize with a need to understand the complexity of ecosystems and the evolutionary processes that characterize the long-term dynamics within (and between) them. However, Heraclitus was correct in a physical sense as well as a figurative (metaphysical?) sense with “one can never step into the same river twice”. The biosphere is unboundedly complex at any moment and ever-changing. To know such a system is impossible. So we bound our study of the system physically, often arbitrarily, so as to reduce the dimensionality of the study. We take static, discrete measurements to bound the problem in the time dimension. Then we use various heuristics to limit the dimensionality of what remains: ceteris paribus, idealizations, theory, models. We may invoke analogies to reduce the information processing necessary to assess the still complex relationships that remain. Meanwhile, everything has moved on.

    As you note, when we move from system study to system management — preserving scarce natural resources vs. exploitation — we overlay onto the system human frailty. We cannot hold even simple, well-defined chunks of the biosphere as common property. We develop ever more arcane political solutions to property rights for human interaction with the biosphere. It is a huge step to believe that greater knowledge of ecosystems will lead to more enlightened existence within it. Alas.

  3. Steve said:

    “Why is it important to understand? You probably don’t understand the complexity of most devices in the modern world today.”

    I may be naive about this, but I would hope that *somebody* out there understands a given man-made device even if I don’t. Although I have heard worries about the impossibility of tracking down backdoors in computer chips, given that the US military often buys off-the-shelf parts that are made in foreign countries. That might be one reason why we should want to understand the complexity of the real world – little things hidden in the noise might eventually bite us.

    “Models and theories are tools – why would society want to invest scarce resources in the pursuit of “understanding” without a practical payoff?”

    For the same reason it invests scarce resources in philosophy and pure mathematics? Although understanding might be linked to a practical payoff. Depends on what you mean by “understanding.” I’m not sure what “understanding” means exactly, but one definition could be the knowledge of how to manipulate a system in order to get certain desired outcomes. Surely there is some room for profit there?

    Todd said:

    “If there is no a priori reason knowledge cannot be achieved, what does it mean for the existence of that potential knowledge that it is practically impossible? Or is it really practically impossible?”

    Well, obviously it means that it is inaccessible to *us humans*. Could mean that if a brighter race of cognizers shows up, we humans could be in trouble. Even if they are our descendants or actual uploaded humans.

  4. A few things:

    It certainly is the case that the human mind cannot grasp the totality of the causes and effects behind some natural phenomenon at any given time. But could the human mind take each part piece by piece and understand it? We need to distinguish between two senses of “incomprehensible”. First, there is the sense in which something is incomprehensible because the human mind’s CAPACITY is limited. I cannot grasp a chiliagon (a 1000-sided figure) for exactly this reason. The problem isn’t that I don’t understand any of the terms; the problem is that I can’t imagine it. Hell, it’s hard to even imagine a 50-sided figure. But this kind of incomprehensibility is due to processing limitations. We should think of this as in-practice incomprehensible. If we could make our minds more efficient and powerful, then we could comprehend it, but right now, we can’t.

    Second, there is the sense in which something is incomprehensible because the concept cannot be grasped. The standard example of this is the thing-in-itself, or reality-as-it-really-is. We cannot grasp what it would be like because all we have access to is our perceptions. Such a concept is in-principle incomprehensible. I don’t think that the totality of the causes and effects of a given natural phenomenon is this kind of incomprehensible. I think it’s the first kind. I also think that Levins probably can’t support the claim that the best models for nature are the second type of incomprehensible.

  5. I agree with Pete that the incomprehensibility we are discussing here is the kind that is a posteriori, not a priori. I also agree with Dan that the problem might only be relative to human minds (more intelligent beings may be able to understand the ecosystems). The issue is just whether human beings are able to improve their understanding.

    If what Randy said is true (and it seems true), then it’s *extremely* difficult to understand ecosystems. Just as practicing anatomy may keep us from understanding how an organism works, because the organism is dead before it is examined, using models to understand ecosystems may prevent us from understanding their dynamic processes, because the ecosystems are changing all the time. But it doesn’t seem impossible to understand ecosystems, since we can still observe the processes while they are going on, live.

    As for Todd’s suggestion about artificial intelligence, the problem is whether human beings are able to create AI. Since we can create computers that are far more advanced than us in carrying out some functions, AI is not impossible. But whether we can create AI that is smart enough to *think* rather than merely carrying out functions, I don’t know.

  6. Randy’s comment has stirred new thought in my mind. The use of augmented or artificial intelligences could potentially serve as tools to generate comprehensible scientific theories from incomprehensibly detailed phenomena.

    One could potentially develop detailed tools of noninvasive data collection, based upon our more detailed understanding of physics, that could be entirely managed by automated sensors. This data would then be collated and analyzed by artificial intelligences that would postulate (potentially) new ecological scientific hypotheses based upon this analysis. There is every potential that these artificially generated hypotheses would be accessible to unaugmented human capacities.

    So, there is the alternative potential that inaccessible data might be actually governed by accessible properties.

    Side notes on other comments: I am only dealing with understanding of phenomena, not things in themselves. Science does not deal in such questions, so ultimate knowledge of the things behind phenomena is irrelevant.

    As to more enlightened management of the biosphere – there we already have a multitude of untapped human potential, and when we get about to providing social organization for humanity rather than anti-social hierarchies of force, there will be much better management.

    I am, here, more interested in the potential of humanity to actually create a conscious evolution and a conscious ecology that is self-directed and self-organizing. Of course we would have to have a conscious society first to implement that. But unlike the material and intellectual preconditions for fully self-directed ecological and evolutionary consciousness, the material and intellectual conditions for consciously self-organized human social institutions already exist.

    The comprehension of the environment means we could potentially shape and direct the environment and evolution in conscious ways.

  7. The seminar and Jay’s handout cleared some things up for me. I’m afraid I still haven’t read Levins’ paper yet, but here’s what I think is going on. Somebody please yell if you think I got it wrong. I think Levins is mainly talking about the wider topic of what makes a model manageable by humans rather than the more narrow topic of human cognitive limitations (this narrow topic does play a part in the wider one, as I’ll get to in a bit). A model can fail to be manageable in at least three ways. It can be unmeasurable, unsolvable and uninterpretable.

    Measurability is a matter of sheer physical logistics rather than cognitive limitations. A model is “unmeasurable” if field scientists can’t gather enough data to yield predictions. An unmeasurable model might be more complete, but not terribly useful. Might make a good paperweight, or maybe a boat anchor.

    If a model is unsolvable, that’s a problem but a more minor one. Weisberg says something about how a more general form of analysis can make up for a model being not analytically solvable, but this will not always work. I’m afraid I don’t really have the math background to follow the discussion on this issue, but it appears to be a matter of the formal properties of the model, not human cognitive limitations. Advances in mathematical techniques might help, but it doesn’t seem to be a matter of sheer brain power (but possibly computing power).
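
    One standard example of the gap between solvability and analyzability (my illustration; it is not drawn from Weisberg or Levins): the Lotka-Volterra predator-prey equations

    $$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y$$

    have no closed-form solution in elementary functions, yet they admit the conserved quantity

    $$V(x, y) = \delta x - \gamma \ln x + \beta y - \alpha \ln y,$$

    which stays constant along every trajectory (differentiating V and substituting the equations gives zero). That alone proves the orbits are closed curves (perpetual predator-prey cycles) without ever writing down x(t) or y(t). This is the kind of “more general form of analysis” that can stand in for an analytic solution.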

    Interpretability seems to be the only issue that rests on human cognitive limitations. Unfortunately, I’m kinda hazy on what it means for a model to be interpretable. One factor might be being able to understand the causal relationships in the model. A model is generally a simplified representation of reality. Let’s say the model is a deterministic simulation of a system over time. You make a small change in the initial conditions, which results in a large change in the outcome. You try to track down how exactly the change in initial conditions causes the change in the outcome, but you can’t do it, and neither can anybody else. If the model is intended to help us understand the causal relationships of the target system, and we can’t understand the causal relationships of the *model*, that’s a Bad Thing, right?
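
    A minimal sketch of the untraceability Dan describes, using the logistic map as a stand-in for a deterministic simulation (the map, parameter values, and code are my illustration, not anything from Levins or Weisberg):

    ```python
    # Deterministic model, tiny change in initial conditions, big change in outcome.
    # The logistic map x -> r*x*(1-x) stands in for "a simulation of a system over time."

    def simulate(x0, r=3.9, steps=50):
        """Iterate the logistic map from initial state x0, returning the full trajectory."""
        x = x0
        trajectory = [x]
        for _ in range(steps):
            x = r * x * (1 - x)
            trajectory.append(x)
        return trajectory

    base = simulate(0.200000)       # original initial condition
    perturbed = simulate(0.200001)  # perturbed in the sixth decimal place

    for step in (10, 20, 30, 40, 50):
        print(f"step {step:2d}: base={base[step]:.4f}  perturbed={perturbed[step]:.4f}")
    ```

    Within a few dozen steps the two runs bear no resemblance to each other, and nothing in the update rule tells you *which* intermediate interactions drove the split. That, scaled up to hundreds of linked equations, is the interpretability problem.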

    It’d be nice to know just how these three issues tie in with generality, precision, and realism.

    • Dan, my take on interpretability is based on my own experience with large computer models of complicated systems (I’ll not use “complex” here, as it connotes something different) and how that resonates with my reading of Levins. What one ends up with is a large number of equations with the same number of unknowns – variables – that are linked parametrically. Think of eight herbivores, three omnivores, and six types of plants linked in a food web with a couple dozen abiotic variables, all tied together in growth curves, consumption rates, reproduction rates, etc. In the extreme case, there would be one parameter for each of the variables in all of the equations, though practically we find some variables omitted in each equation. But, in the end, we have a solution in which any parameter is effectively linked to all other parameters (at least HUNDREDS). So, reproduction rates for herbivore one depend on the photosynthesis rates for all six plant types, the consumption rates for all herbivore × plant species combinations, the predation rates for all omnivores (as well as their plant consumption), the reproductive rates of all other organisms, life expectancies, the carrying capacity of the ecological niches, ….. BLEAH!

      So, how do you interpret the model to ask the question, “What happens if plant species one gets a fungus that reduces its growth by 20%?” It is uninterpretable due to the complex interactions. We can’t find the logical path through the parameters to trace out the effects.
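
      A toy version of Randy’s fungus experiment may make the point vivid. Nothing here reflects his actual model: two plants and one herbivore with invented rate constants, just to show a single perturbation touching every variable at once.

      ```python
      # Toy food web: two plants (logistic growth, grazed) and one herbivore.
      # All species and rate constants are invented for illustration.
      from scipy.integrate import solve_ivp

      def food_web(t, y, plant1_growth):
          p1, p2, h = y  # densities of plant 1, plant 2, herbivore
          dp1 = plant1_growth * p1 * (1 - p1) - 0.4 * p1 * h
          dp2 = 0.8 * p2 * (1 - p2) - 0.3 * p2 * h
          dh = 0.2 * p1 * h + 0.15 * p2 * h - 0.1 * h
          return [dp1, dp2, dh]

      y0 = [0.5, 0.5, 0.2]  # initial densities

      baseline = solve_ivp(food_web, (0, 100), y0, args=(1.0,))
      fungus = solve_ivp(food_web, (0, 100), y0, args=(0.8,))  # 20% growth cut for plant 1

      print("final state, baseline:", baseline.y[:, -1].round(3))
      print("final state, fungus:  ", fungus.y[:, -1].round(3))
      ```

      Even with three variables, the cut to plant 1 moves plant 2 and the herbivore as well. With seventeen species and a couple dozen abiotic variables, there is no hand-traceable path from the fungus to any particular downstream change.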

  8. Second big post in a row, but this one is more piece-meal…

    Pete said:

    “It certainly is the case that the human mind cannot grasp the totality of the causes and effects behind some natural phenomenon at any given time. But could the human mind take each part piece by piece and understand it?”

    This seems to go along with what I muttered about man-made technological artifacts. I might not understand the gizmo, but hopefully *somebody* understands it. Division of scientific labor? I could see how this might not work in some domains. Let’s say that we can model evolution in a manageable way if we ignore ecology, and vice versa. However, we find out that evolution and ecology operate on commensurate timescales, so when we model one process we can’t ignore the other and still get accurate predictions. However, if we try to model evolution and ecology together, we get a model that is measurable and analyzable, but it’s so complex that it’s almost as hard to figure out as the real thing.

    Wenwen said:

    “As for Todd’s suggestion about artificial intelligence, the problem is whether human beings are able to create AI. Since we can create computers that are far more advanced than us in carrying out some functions, AI is not impossible. But whether we can create AI that is smart enough to *think* rather than merely carrying out functions, I don’t know.”

    If all else fails, we will probably be able to brute force the problem eventually. Make a computer model of the human brain down to the neuronal level, and let it rip. Hopefully there is a more elegant way to do human-level AI, but I’m not sure either. I’ve generally heard that computers should be powerful enough to do human-level AI around 2030. The question might be if we can keep up on the software side. Also, can we make an AI that thinks *better* than humans? How far would just overclocking a simulated human brain take us?

    Todd said:

    “The comprehension of the environment means we could potentially shape and direct the environment and evolution in conscious ways.”

    In some ways, we already do this. Land management, selective breeding, genetic engineering. What would happen if we nailed down climate modeling to the point where we could tailor the greenhouse effect so that it benefits a single nation, perhaps at the cost of others? Nice climate for one nation, everybody else gets the shaft?

    • Dan,

      Nation-states are a relatively recent historical phenomenon and will be transcended when human social organization becomes conscious; thus your concern is a valid one only if our technological accomplishments continue to outstrip our social and moral advancement.

      As to the general proposition of consciously directing ecology and evolution – first, humanity is not yet conscious as a species-being but is still grappling with its potential, so consciousness cannot yet be applied to these things collectively. Nor do we yet have the understanding to consciously shape ecology and evolution; we can only grope about with trial and error, and much unintended consequence.

      Real understanding of our ecology will be necessary for humanity to become the consciousness of ecological development; that is a level of self-awareness the planet has not yet achieved.

  9. Steve:

    I don’t think ‘opportunity cost’ necessarily leads to ‘scarce resources.’ This is likely merely a semantic difference, but finite resources is not the same, to me, as scarce resources.

    The reality is that the accumulation of surplus labor in the different forms of capital has eliminated scarcity of those resources necessary for maintaining bare human existence. The continual accumulation of this capital will further increase the availability of resources for expanding human ends and potential, until we reach the limits of what is available in our corner of the universe. Thus there is every expectation that phenomenally powerful computers that require even fewer resources will be available in the future.

    That said, I agree that, taken to absurd lengths, dedicating all human resources to something as silly as a planet-level computer might not be the best choice for humanity to make. I shall leave that decision to the future, as I have every faith that posterity will have the capacity to decide whether or not such an endeavor is the best use of the available materials.

    As for an artificial intelligence that might be able to suss out how to provide us the knowledge with which to consciously create our ecosystem as we like as a species – I think that might be within our reach and, provided the knowledge is used humbly with a reverence for life and existence, would provide substantial practical benefit to human existence.

    Additionally, we have more than enough right now to feed, house, clothe, educate, and provide health care to all and still have plenty of resources left for at least some exploration of knowledge for its own sake. Thus there is no ‘scarcity’ despite the reality that everything requires material inputs in a world of practically finite material availability.

    • Todd, I would have hoped that your answer would have focused on the three rules that emerge from Douglas Adams’ farce.
      1) Uncertainty about return on investment
      2) Inability to interpret outcomes
      3) Self-interest

      World income per capita is about $10,000 per person. In the United States it is approaching $50,000 per person. You assume a subsistence income is <$10,000 per year and that any mis-distribution should be "rectified" such that the average person in the US would take an 80% cut in purchasing power. That, my friend, is socialism and socialism is a failed economic model. Can you guess why?
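
      (The 80% figure is just the arithmetic of those two numbers: equalizing the US average down to the world average means

      $$\frac{\$50{,}000 - \$10{,}000}{\$50{,}000} = 80\%$$

      of purchasing power foregone.)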

      • Income is an irrelevant abstraction; the question is the increase in per capita productivity since the dark ages, when nearly everyone was involved in food production.

        You are also mistaking state capitalism (Stalinism), a largely, but not totally, discarded system, for socialism. I think you may also be mistaking ‘discarded’ for ‘failed.’

        But what are your criteria for a failed economic system? Any human economic system should take as a metric for success the delivery of the basic necessities of life to the population; that should be a critical measure of success.

        Setting aside the state capitalists of North Korea, the remaining Stalinist world has a much better record of delivering basic necessities than the non-Stalinist capitalist world at present. Eight hundred million people live in hunger in a world that produces more than enough for everyone to eat, with enough surplus to feed another half billion people, and 80 million a year starve to death because capitalists cannot make enough money off of them.

        Compared to feudalism, where people starved only in times of crisis, a small fraction of the populace is now involved in food production given the current development of the means of production, yet millions starve. This is an indication of a failed economic system.

        To be clear, I reject Stalinism as inhuman as much as I reject other failed capitalist economic and political systems.

        But this, to me, seems very far afield from philosophy of science.

        As to the Douglas Adams problem, as I said, the future can make its own decisions. We have finite resources, and we should democratically determine how to use them; but as we have no democratic economic system at this time, and we have other concerns at this moment, I’m happy to leave world-sized computers of questionable utility to those living in a socialist future to debate.

        But, if we never achieve socialism, the capitalists will likely build the damned thing while billions starve, despite your important points regarding utility. That is because the capitalist class controls investment decisions in a capitalist economy and the system is not designed to provide for the basics of human life.

      • Todd, I began my studies of agriculture and agricultural economics in 1970. I have been learning ever since. To say income is meaningless and delivery of basic necessities is what matters is sophomoric at best. To suggest “the Stalinist world” does a superior job of delivering basic necessities is utter nonsense. Feudal societies were indeed dominated by agricultural workers. They starved ALL THE TIME. Life expectancy was 25 years.

        I’d like to suggest that the rhetoric of political systems and institutions is better placed elsewhere. We have some serious issues with respect to the rhetoric of the philosophy of science: causation, explanation, theory, models, etc. The group uses these pages to struggle with developing shared understanding. But I’ll let André be the arbiter of this.

      • Randy, I completely agree. I apologize for letting politics slip into this. I also want to apologize for being completely unclear.

        Income as a concept seems irrelevant to me when faced with the per capita and per acre productivity issue (not the distribution issue). It is much easier to produce the necessities of life, thus we have more time to allocate to the production of other materials. I thought it a simple truth that we are much, much more productive than we were in feudal times.

        The delivery of basic needs is a completely separate issue.

        It was separately my understanding that in a feudal economy, when crops were abundant (however rarely that might have been), and much more was produced than needed to feed all, starvation was a rarity.

        I am also much more interested in the phil sci content. In my defense, I was not the first to call a political or economic model ‘failed’ but I apologize for going deeper rather than letting it go.

    • “I don’t think ‘opportunity cost’ necessarily leads to ‘scarce resources.’ This is likely merely a semantic difference, but finite resources is not the same, to me, as scarce resources.”

      Todd, the causation runs the other way. Because all resources are scarce, there are opportunity costs to their use. Your time is scarce. Using it for blogging incurs an opportunity cost: what you have foregone by not doing something else. Using corn to make ethanol to fill SUVs means that there will be hunger in grain-importing countries. The opportunity cost of fuel is food foregone. There can be a time dimension: using oil to drive in 2010 means there is an opportunity cost; there will be less oil for future generations.

      Some resources are finite, i.e. scarce relative to renewable resources. That means their opportunity cost is very high. Generally, we find that markets sort this out. Resources with high opportunity costs become expensive and other things substitute in. Close substitutes will, over time, have prices that (nearly) equilibrate. This suggests that their opportunity costs will approach a common level as well.

      • This is entirely a semantic issue from my perspective. I clearly do not know the technical definition of ‘opportunity cost’. I suspect you are using a technical definition for ‘scarce’ here as well. My time is finite, not scarce by my reckoning by common usage.

        Merriam-Webster on scarce:
        1 : deficient in quantity or number compared with the demand : not plentiful or abundant

        By this definition neither food nor my time is scarce, but the available supply is finite.

  10. I just want to add a few thoughts to the debate between Todd and Steve. They argue about whether understanding and knowledge have values apart from the opportunity cost.

    We can distinguish what kind of values we are talking about. If we talk about intrinsic value (value in and of itself, for its own sake, not dependent upon anything else that’s valuable), then I totally agree with Todd that understanding is valuable. If we talk about instrumental value (value dependent upon something that’s intrinsically valuable; non-intrinsic value), then we may want to ask what the goal of understanding is and how useful understanding is. It seems to me that Steve is talking about pragmatic value(?), value that is concerned with payoffs and costs. If we make clear what kind of value we are talking about, then maybe Todd and Steve do not disagree with each other.

    • Interesting point, Wenwen. I don’t think there is much disagreement here. I personally feel that nothing, save human life, has intrinsic value, and that all value is measured and created by humans.

      That said, I am both arguing that knowledge, with no potential utilitarian application, can be of value just for the satisfaction, happiness, or appreciation it brings people, AND that, with specific regard to ecology and evolution management, the knowledge gained will likely have direct utilitarian applications and values that would potentially outstrip the investment of resources in short order.

      Now, the ultimate answer, or the ultimate question to go along with that… I don’t think that is achievable, but if it has no potential utilitarian applications it would be of value just for the satisfaction of curiosity. But like all knowledge, none of this is worth suffering, need, or want imposed upon those who have not voluntarily chosen such suffering for the sake of the knowledge.

  11. Todd and Randy,

    Arguably, politics is influenced by the very images of science we are discussing. Could I suggest that Todd (or others in the class) read Hayek on the Use of Knowledge in Society?

    http://www.econlib.org/library/Essays/hykKnw1.html

    If one assumes that the allocation of goods and services can be “calculated” then it follows that a supreme mind (or Science with a capital “S”) should be able to solve the problem for all of humanity. (A tradition that seemingly flows from Plato’s concept of the philosopher-king). Todd (and many others) clearly believe that the “secret” is there to be unlocked, if only we could understand it when the supercomputers give us the ultimate answer. This is a very reasonable position given the developments in science over the past few centuries.

    Hayek is on the other end of this spectrum (perhaps in the same school as Heraclitus and his river). I think Hayek is close to seeing the economic system as an evolutionary process. One should allow the economic agents to adapt to local changes in supply and demand – knowledge can never be complete and local incentives matter.

    I believe these two positions have major ramifications for political economy but also lie at the heart of what is possible (and not possible) in science.

    • Steve,

      Hayek’s essay would be a useful read, but I think I’ll hold off on this for 1 month. The seminar schedule has us drilling down on some specific philosophical issues (explanation, causation, laws) in the coming weeks. Hayek would be a distraction. But I’ll ask Peter Klein to lead a discussion of Hayek when we gain altitude again. Thank you for the suggestion.
