To the nihilists

The role of creativity in the search for ‘searching necessity’

There are people I know who, having lost religion in their adolescence, throw themselves into the pursuit of science and technology with all the fervor of “irrational rationality.” Later, as they begin to mature, it dawns on them that science may not be the panacea they once believed it to be. Cracks begin to appear. Their thoughts begin to turn to those eternal questions for which science offers no answer:  “Why am I here?”, “What is my purpose?”.  More alarmingly, they begin to see themselves from a scientific perspective, and when the whole of the universe is your reference point, a single human life is but an insignificant blip in celestial time.

Perhaps they then attempt to figure out what went wrong, and reach back to the fundamentals of science.  They begin to peel back layers upon layers of logical systems in search of the lowest common denominators.  They begin to ask themselves, “Why Greenwich?”.  In other words, why, for example, should our system of coordinated time be built around Greenwich as opposed to a more relevant city?  It occurs to them that all systems are built around a similarly arbitrary empirical basis.

After failing to find meaning in the empirical, they might then take refuge in pure reason, in logic and math.  But soon they find themselves caught in a web of equivalencies, with everything defined in terms of everything else. Reason is unmasked as an empty tautology and the disillusionment is complete.

Nihilism sets in. A mocking smile is turned on those deluded individuals who still dare to ask the big questions. Although they have turned their scepticism back against science itself, they have not torn themselves from the framework of a scientific worldview.  They fail to see that nihilism can only exist on the basis of division and differentiation.  They are above all scientists, and it does not occur to them that there might be something wrong with ripping the fabric of the world into two pieces – into empirical and rational components – and then surmising that, because each piece taken alone is intrinsically meaningless, the world itself and everything in it must also be intrinsically meaningless.

At this point, if nihilism does not lead to despair, some turn to simple, almost hedonistic pleasures and distractions: YouTube videos, lolcats, or whatever happens to be the latest internet or entertainment trend.  Others devote themselves with renewed energy to their day-jobs and family, with the conviction that the only thing that matters is the biological drive to pass along one’s genes and provide for the future.  Others combine the two approaches.

. . . the intellect remains eternally confined within the realm of the conditioned, and goes on eternally asking questions without ever lighting upon any ultimate answer . . . Since, then, he cannot appease his inquiring intellect by evoking any ultimate and inward cause, he manages at least to silence it with the notion of no-cause, and remains within the blind compulsion of matter since he is not yet capable of grasping the sublime necessity of reason.  Because the life of sense knows no purpose other than its own advantage, and feels driven by no cause other than blind chance, he makes the former into the arbiter of his actions and the latter into the sovereign ruler of the world.

-Friedrich Schiller, from Letters on the Aesthetic Education of Man

One might argue that the countless diversions supplied by modern entertainment are symptomatic of nihilism: their purpose is escapism, the goal of the participants, sweet oblivion.  It seems to me, however, that these diversions are the result of a one-sided nihilism, one in which the sensuous is at least implicitly recognized as an important element in life. This is easier to see with respect to the physical drive to provide for oneself and one’s progeny. This drive is not a form of nihilism, but rather a testimony to the power of the materialism inherent in the scientific framework such nihilists never truly left.  The nihilism of those who embrace this physical drive is incomplete, yet they still feel that something is missing from their lives, and so they continue to consider themselves nihilists. Why?  If one accepts the premise that man is a synthesis of the mental and the physical, then I maintain that the former, the mental, has been denied in some way.  Is there, perhaps, a parallel mental drive corresponding to the physical drive to pass along one’s genes and provide for future generations?  In what follows, I argue that there is, that this parallel drive is the creative drive, and that the creative drive is the antidote to the kind of nihilism that often plagues the most technically and scientifically minded individuals.

First, some justification is no doubt in order for any statement which implies that scientifically-minded individuals are neglecting their minds.  A brief exposition should serve to clarify my meaning.  I will begin by explaining my understanding of “mind” in this particular context.

To think is to confine oneself to a single thought that one day stands still like a star in the sky.

– Martin Heidegger

It is not uncommon for those who philosophize to preface or introduce their work in terms of a search for a unifying thought or principle. Some, as for example, Heidegger in the above quote, or Schopenhauer in the preface to The World as Will and Representation, take this to an extreme and write thousands of words in explanation of what they claim to be a single thought.  But what does it mean “to confine oneself to a single thought” in this manner?

In his book What is Life?, the physicist Erwin Schrödinger set forth the idea that a defining characteristic of life was a  reduction of entropy, or “negative entropy.”  Whereas inanimate matter falls into more and more disarray, eventually reaching thermodynamic equilibrium, life strives to maintain order against this constant threat of chaos.* Thermodynamic equilibrium is to a living system nothing other than death.  This tendency towards order caused Schrödinger to speculate as to whether another law of thermodynamics would eventually be discovered to accommodate the phenomenon.
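For readers who want the notation behind this idea, Schrödinger condenses it to Boltzmann’s statistical definition of entropy. The sketch below is my paraphrase of his shorthand, not a quotation from the book:

```latex
% Boltzmann's statistical entropy, in the shorthand Schrödinger uses:
S = k \log D
% where $k$ is Boltzmann's constant and $D$ is a quantitative measure
% of the disorder of the system.

% "Negative entropy" is then simply entropy with its sign reversed:
-S = k \log \frac{1}{D}
% so an organism maintains its order by, in effect, increasing $1/D$
% at the expense of its surroundings.
```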

Entropy and negative entropy (perhaps better thought of as free energy) strike me as a modern formulation of the twin forces that mankind has marvelled over at least since the Presocratics in ancient Greece began referring to them as “strife and love.” In this framework, life can be viewed as a brazen charge against the second law of thermodynamics.  Note that this is a broader definition than that given by those who would reduce life to the set of physical processes characteristic of all particular life-forms. Rather, under this alternative view, life is a striving, a search for order and unity, and in man, life has evolved an ever more powerful tool in furtherance of its goal.  This tool is man’s mind.  It is the mind which strives to take in a stream of random, disorganized information and spit it back out in ordered, expressible terms.  The mind longs for unity in knowledge.

Science hinders that individual quest for unity in two primary ways.  First, there is the problem of specialization. Almost by definition, the degree of specialization required to practice in any field today precludes an individual’s search for a more unified body of knowledge.

We have inherited from our forefathers the keen longing for unified, all-embracing knowledge. The very name given to the highest institutions of learning reminds us that, from antiquity and throughout many centuries, the universal aspect has been the only one to be given full credit. But the spread, both in width and depth, of the multifarious branches of knowledge during the last hundred odd years has confronted us with a queer dilemma. We feel clearly that we are only now beginning to acquire reliable material for welding together the sum total of all that is known into a whole; but, on the other hand, it has become next to impossible for a single mind fully to command more than a small specialized portion of it. I can see no other escape from this dilemma (lest our true aim be lost for ever) than that some of us should venture to embark on a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them – and at the risk of making fools of ourselves.

-Erwin Schrödinger, from What is Life?

The second problem with modern science is that it has become a slave to the material. That is, as it exists now, the ne plus ultra of science – its greatest aim – is the improvement of material circumstances.  With very few exceptions, scientific funding is contingent on a showing of concrete, practical benefits, usually expected within a relatively short time-frame. Neither the patent system nor the grant system is designed to evaluate new directions in research leading to unknown practical benefits or benefits expected to accrue on a long-term basis. The paradoxical result is that this gap in our incentive structure sometimes leads to slower advancement of the very material goals the system exists to support. Is it possible that the promotion of science for science’s sake alone – for the sake of greater understanding, greater unity in thought – could lead to even greater, though perhaps unexpected, material benefit?

These deficiencies in modern science are what lead me to claim that scientists are, in some ways, neglecting their minds. They despair over the thought that science will never be able to answer the big questions, never be able to give them a purpose for living, but they don’t realize that these questions can only be asked from a reference point outside of life; that is to say, the questions are invalid.  Life itself is that very struggle towards order and unity, and one neglects that drive at the expense of one’s own sense of happiness and fulfillment.

What, then, is my suggestion to an unhappy nihilist? (Because, of course, the particular breed of nihilist I have been describing looks above all for concrete, practical suggestions.)  The solution is simple:  create (or, as some might prefer, invent). Creation is nothing more than life – the drive to order and unification – in action.  If you feel uninspired, then broaden your knowledge. Seek to learn from fields of knowledge outside your own, and you will find that the creative drive kicks in of its own accord as your mind inevitably begins to incorporate the newly discovered knowledge within the framework of what you already know.  The key is to let the mind’s search for unity dictate the material form of the creation, and not the other way around.

A more common problem than lack of inspiration, however, is, I fear, lack of training in expressive mediums. For those who are easily overwhelmed by the possibilities hidden in the sheer amount of information available, it helps to remember that any creative breakthrough must first be expressed conceptually, usually in pictorial or written form.  If your particular field does not allow opportunities for practice and training with creative expression in a suitable medium, then it might be worth your effort to learn more about the different expressive mediums themselves.  This is, to me, a task for the arts and humanities, which have a role both in introducing a person to the historical current of endeavor towards a more all-embracing knowledge, and in training a person in the use of various expressive mediums so that they might be capable of contributing their own work.

There are many ways of going about this quest. Ian Fairweather, for example, was an Australian painter who described the act of painting as something which gave him “the same kind of satisfaction that religion, I imagine, gives to some people.”  It was his “searching necessity,” and I suppose it is the spirit in which I write.  It doesn’t seem to matter which medium you choose.  Just get started with something.  Creativity breeds more creativity.  As soon as you succeed in conceptualizing and expressing one theme, you will find that the theme also serves as a lead, sending you on your way to read more, to learn more, to do more, about another aspect needing clarification.  Start small, and soon your particular works will begin to form the boundaries out of which a larger picture can arise.  Once you are able to see that picture with greater clarity, your previous works can also serve as the raw material from which you may sculpt your meta-picture.  Repeat the process and learn just how far your ideas can propel you.

But the fact that today I still stand by these ideas, that in the intervening time they themselves have constantly become more strongly associated with one another, in fact, have grown into each other and intertwined, that reinforces in me the joyful confidence that they may not have originally developed in me as single, random, or sporadic ideas, but out of a common root, out of some fundamental will for knowledge ruling from deep within, always speaking with greater clarity, always demanding greater clarity.  For that’s the only thing appropriate to a philosopher.  We have no right to be scattered in any way: we are not permitted to make isolated mistakes or to run into isolated truths.  By contrast, our ideals, our values, our affirmations and denials, our if’s and whether’s, grown out of us from the same necessity which makes a tree bear its fruit – totally related and interlinked amongst each other, witnesses of one will, one health, one soil, one sun.

-Nietzsche, from the Genealogy of Morals

I would change one thing about the quote from Nietzsche above, and that is, instead of saying that philosophers have no right to be scattered, I would say that a human being has no right to be scattered.  To exist scattered in the way Nietzsche uses the term is to lose part of one’s humanity.  Everyone needs to discover their own “searching necessity,” and once you set foot on that path, it wouldn’t surprise me if you never watched another YouTube video again.  Well, unless of course, you choose video as your means of creative expression, but that leads me to a whole other topic . . .

* It is also interesting to note that, if entropy is time’s arrow, then life strives in the opposite direction, towards the past. This is intuitively expressed by people every day when they speak of their battle against time. But it is also interesting to consider this notion in relation to the mind specifically.  We are forever seeking explanations, attempting to find the origin of what exists. Thus, from the time of Plato, philosophers have been interested in the role of recollection and memory in the obtainment of knowledge.

Filed under Essays

On mind and matter

A sketch to serve as a placeholder for my thoughts until I am able to carry out a more thorough examination . . .

Everyone, it seems, is a materialist today.  To even raise the question of mind and matter somehow seems quaint, of mere historical interest.  Few people would doubt that mental experiences arise from the underlying physical phenomena of the brain. In fact, for most people, “mind” is synonymous with the brain.  But is this issue really as banal and irrelevant today as it might seem?

Sure, dualism of mind and matter, at least as it was originally formulated by Descartes and the occasionalists, is no longer a serious option. Yet there are some serious difficulties with the modern materialist/physicalist perspective.  These problems seem to me to fall into three main areas:  qualia, radical skepticism, and contradiction.

As the problem of qualia has become one of the better-known arguments against materialism, I will begin with it.  “Qualia” is, in short, a term for the characteristic of “what it’s like” to experience something. Qualia are generally considered to have four properties, identified by the philosopher Daniel Dennett: they are ineffable (incapable of being communicated), intrinsic, private, and directly apprehensible to consciousness.  Probably the most often cited argument for the existence of qualia is Mary the color scientist. The basic idea is that, even if a scientist who had never seen color knew, theoretically, everything science could teach us about colors, she would still not know what it was like to experience, for example, redness.  Thus, there are some truths that cannot be accounted for by the physical or material.

The second problem is that materialism leads to the kind of radical skepticism typified by the brain-in-a-vat problem.  If all of our mental experiences could be reduced to physical phenomena, then there is no way to tell whether we are, in fact, experiencing reality as it appears, or whether we are instead a disembodied brain connected to some supercomputer simulating our reality. The brain-in-a-vat scenario violates our common sense, which in and of itself is a reason for rejecting it.  More interestingly, it illustrates how arguments against materialism tend to point towards arguments for the opposite extreme of idealism, the position that the ultimate nature of reality is based on mind or ideas.  It is a short jump from the brain-in-a-vat argument to the argument that there is also no way to be sure of the reality of the brain itself.  That is, if the external world cannot be known apart from our mental conception of it, how can we know that there is anything out there but our mental conceptions? In this way, mind and matter appear to form an antinomy, and there doesn’t seem to be any logical way of determining which of the two positions is the better one.   That leads me to the next, related, problem of the apparently contradictory aspects of mind and matter.

Mind and matter, in a lot of ways, appear to be mutually exclusive.  For example, common sense holds that our mental states are free in a way that physical states are not.  We assume that, even when we have no control over an event, we can at least choose the way we react to it. We take comfort in the fact, for example, that we can choose to make lemonade out of lemons, so to speak.  More generally, common sense allows for free will. Contrast this with our conception of physical events, which we conceive to be products of causal necessity, linked in a never-ending causal chain. In addition, there is the sense in which some, although not all, mental phenomena have a timeless characteristic about them.  The intangibility of ideas, for example, leaves them immune to the deleterious effects of time.  They are eternal, whereas physical phenomena are in a state of constant change and transformation.

These problems lead me to reject the simple materialist position as an ultimate explanation of reality.  It seems to me that the physical and the mental are distinct and irreducible properties. Yet I am unable to fall back to dualism as an adequate explanation either, primarily because a dualist position leads to a subsequent problem as to how an interaction between mind and matter is possible.  Rather, I am inclined to adopt a dual-aspect monism.  I tend to believe that the mental and the physical are like two sides of the same coin.  Or a better analogy might be the wave-particle duality of physics: sometimes the phenomenon of light is better understood as a wave, while at other times, it behaves more like a particle. Thus, unlike many materialists, I find a Kierkegaardian definition of human nature very appealing. Man is “a synthesis of the infinite and finite, of the temporal and the eternal, of freedom and necessity.”  However, I understand this definition through the lens, not of dualism, but of dual-aspect monism.

Filed under Essays

Kierkegaard and Heidegger: Logicians?

Thoughts after reading Soren Kierkegaard’s The Sickness unto Death

The Sickness unto Death is the short (approx. 130 page) book in which the philosopher Soren Kierkegaard sets forth his famous definition of a self:  a synthesis of the psychical and the physical that relates itself to itself.  The merits of this classic philosophical work have, of course, already been proven by time.  I will not therefore attempt to catalogue its numerous strengths or presume to discuss its weaknesses.  Instead, by writing this review, I intend to point out a few of the things that I found most thought-provoking about the book in light of the philosophical development that has taken place after the book was published.

A human being is, according to Kierkegaard, “a synthesis of the infinite and finite, of the temporal and the eternal, of freedom and necessity.”  A human being is not a self, however, until this psychical-physical synthesis relates itself to itself.  As is readily apparent, Kierkegaard does not shy away from paradox and self-reference.  In fact, it often seems as though he regards self-reference as having a significant role to play in the solution to various different forms of inevitable contradiction.  Contrast this with the view of most logicians, who regard self-reference as a source of contradiction and paradox.   In fact, self-reference has in the past been regarded as such a problem child that one of the most typical ways of dealing with it is, in effect, to disinherit it and kick it out of logical systems entirely (e.g., Bertrand Russell’s or Alonzo Church’s theory of types).
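As a toy illustration of why logicians came to treat unrestricted self-reference as such a problem child, consider Russell’s paradoxical predicate, “does not apply to itself.” Asked of itself, it can be neither true nor false; a naive transcription into a program (the code is mine, purely for illustration) simply chases the definition forever:

```python
# Russell-style predicate: true of exactly those predicates that
# do not apply to themselves. Asking whether it applies to itself
# unwinds into "not (not (not ...))" with no ground floor.

def russell(predicate):
    """True exactly when `predicate` does not apply to itself."""
    return not predicate(predicate)

try:
    russell(russell)  # does `russell` apply to itself?
    verdict = "settled"
except RecursionError:
    # Python gives up: the question never bottoms out.
    verdict = "unsettled: infinite regress"

print(verdict)  # prints "unsettled: infinite regress"
```

A theory of types blocks the question from ever being asked: a predicate of a given order may only take arguments of lower order, so `predicate(predicate)` is ill-formed by construction.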

Kierkegaard goes on to define “God” as “that everything is possible.”  He also states the definition in reverse, and notes “that everything is possible means the being of God.”  Further, he defines imagination as the “medium for the process of infinitizing.”  That is, it is not a particular type of “capacity,” but instead the “capacity instar omnium” (the capacity for all capacities).  Definitions like these leave one with the impression that The Sickness unto Death is the book that results when you attempt to apply logic to the human soul.  Such a description, however, contains not a little bit of irony, as Kierkegaard himself notoriously railed against attempts to “rationalize” Christianity.  Perhaps Kierkegaard would find enough humor in the irony to forgive my painting of him as a logician.

Here, as well as in Fear and Trembling and other books, Kierkegaard maintains his sharp distinction between the religious and the ethical (for him the stage governed by reason) by locating the religious in the realm of paradox and contradiction.  But today, logicians are more fascinated than ever by these very phenomena, some even taking the view that true contradictions exist.  I wonder, if Kierkegaard were around today, would he consider himself a dialetheist?

Kierkegaard appears to have a clear grasp of different logical levels and what he refers to as the “chasmic qualitative abyss” between them.  He often even refers to God and man in precisely this sense, as “two qualities separated by an infinite qualitative difference.”   He also seems to possess a natural understanding of how collisions between these logical levels lead to mind-bending paradoxes and contradictions.  This understanding is, to me, the most significant aspect of Kierkegaard’s work, and the key to understanding it.

This peculiar internal logic strikes me even more forcibly in the German philosopher Martin Heidegger, who was largely influenced by Kierkegaard.  Heidegger is a very obscure writer, and often difficult to read, but the difficulty is not the same as it is with, for example, the French post-modernists, who seem to write nonsense in some kind of an attempt to prove that everything is nonsense.  Rather, the difficulty with Heidegger is in the structure of his writing:  he also employs a lot of self-reference, and often lacks a vocabulary to reach the underlying level he attempts to explore.  Thus, he often stops to clarify that what he is saying does not “beg the question” or amount to a circular argument.  Usually, when at first glance he seems to be arguing in a circle, you come to realize that he is not, because he has switched to a different level, and is trying to explain a higher level in terms of one underlying it.  Strangely enough, I haven’t found any commentators discussing the structural aspect of Heidegger’s work, or Kierkegaard’s, for that matter.

In the early 20th century, the logician Bertrand Russell published a book explaining some of the more eccentric pieces of Leibniz’s philosophy through the prism of the then-recent developments in classical logic.  According to Russell, what might first appear as fantastical about Leibniz’s views was actually the result of Leibniz’s strict application of the principles of logic as he understood them at the time of his writing.   Unfortunately, Leibniz lacked the mathematical tools to formalize and clarify his expositions.

I am of the opinion that a similar exercise with respect to Kierkegaard and Heidegger would be a worthwhile pursuit.  That is, interpreting Kierkegaard and Heidegger through the lens of modern paraconsistent logic would, I suspect, prove to be highly valuable.  After all, it often seems that the greatest advancements in thought are first arrived at intuitively; only much later do we have the tools available for formalization.

To the objection that a logical analysis of Kierkegaard’s work must necessarily take it out of the realm of the religious and away from God, contrary to Kierkegaard’s intent, I reply with Kierkegaard’s own words:  “to be able to be forward toward God, one has to go far away from him.”

Filed under Book reviews

The Logic of Local Attractions

On a weekend trip to Riga, the capital of Latvia, my husband posed an amusing question.  ‘Why,’ he wondered, ‘did we bother taking the trip to Riga, when all there is to see are local attractions?’  ‘After all,’ he  continued, ‘we can see local attractions anywhere.’

His argument proved more difficult to refute than common sense says it should be, especially on a warm, lazy vacation day.  It seems to involve a play on words similar to the kind Lewis Carroll employed to great effect in Through the Looking Glass. For example, when Alice tells the White King that she sees “nobody” along the road, the King replies that he wishes his eyesight was as keen as Alice’s, for he can hardly see real people, much less “nobody.”  The king seems oblivious to the fact that “nobody” does not refer to any particular person, but instead, refers to the fact that there is no particular object in the relevant set of objects (here, people) which corresponds to a predicate (being located along the road).

My husband’s argument appears to exploit the same language feature.  The phrase “local attractions” is not itself a particular attraction or attractions, but instead the phrase refers to a set of objects which correspond to a predicate. Presumably, in this case, the predicate would be something like “located in the immediate vicinity of the main subject.”

Thus it appears that my husband is guilty of something like a confusion of logical types.  His argument uses a term designating a set as if it were also a member of that set.
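The confusion can be made concrete with a toy sketch (the place names and the whole setup are invented for illustration): “local attractions” behaves like a predicate relative to a reference point, a rule that picks out a different set of attractions depending on where you stand, rather than being an attraction itself.

```python
# "Local attractions" is not an attraction; it is a rule that, given a
# reference point, picks out a set of attractions. All data is invented.

ATTRACTIONS = {
    "House of the Blackheads": "Riga",
    "Riga Central Market": "Riga",
    "Vasa Museum": "Stockholm",
    "Skansen": "Stockholm",
}

def local_attractions(reference_city):
    """Return the set of attractions 'local' relative to the reference city."""
    return {name for name, city in ATTRACTIONS.items() if city == reference_city}

# The phrase denotes a different set depending on where you stand, which is
# why "we can see local attractions anywhere" equivocates between the rule
# (available everywhere) and the sets it picks out (different everywhere):
assert local_attractions("Riga") != local_attractions("Stockholm")

# And the phrase itself is never a member of the sets it picks out:
assert "local attractions" not in local_attractions("Riga")
```

So what travels with you is the rule, not its values; the husband’s quip trades on reading the rule as if it were one of its own values.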

Let’s hope this refutation is sufficient to convince my husband to travel again!


This picture my husband took of a sign in Riga may have very well inspired his topsy-turvy logic. The picture (possibly an illustration of Europa's abduction??) seems to depict an alternate world in which fish fly and birds swim.


Filed under Essays

Romanian Nature

Last week we went to visit my husband’s parents in Beius, Romania, a relatively small town just across the Hungarian border, about a four hour drive from Budapest. Now, there are people I know who hate the outdoors – who, for example, refuse to go camping or anywhere without a full-size bed, private bathroom, and make-up, hair and personal care supplies. I was never one of those people, and must admit, I even felt some degree of pride in not being that bad. I always loved my version of “roughing it,” which usually consisted of camping, canoeing, and/or hiking trips. Sleeping bags, tents? No problem. Outhouses? I can deal with the smell. Western amenities? Don’t need them.

Or do I? See, every time we travel to Romania, I morph into that cliché of a spoiled western girl I had tricked myself into believing I wasn’t. First, my allergies kick in. In the United States and Sweden, where I live now, I hardly notice them anymore, but in Romania, no matter the season, my nose turns into a faucet and I can’t go anywhere without a package of tissues with me. When we visit my husband’s grandparents, who live in an even more remote, rural area than his parents, I am reduced to sitting in the house with a Kleenex held to my nose. Next, my husband or someone in his family will inevitably ask for the English word for any number of old, obscure gardening, needle/yarnwork, cooking or other tools, and my complete ignorance as to the work involved is betrayed by my answer: “I have no idea. I’ve never seen something like that before.” And let’s not forget the subtle glances exchanged when I try to politely decline or am simply unable to finish my serving of chicken or pig brain, liver, or heart, or react strangely when I’m told that the egg his grandmother is offering me is a delicacy because it was never laid, but rather an undeveloped egg found inside the hen when it was slaughtered. Once, on a trip to a neighboring city about an hour’s drive away, I suggested we try eating out at a restaurant. That suggestion was quickly declined, as they explained that restaurant food was over-priced and much, much worse than anything prepared at home by my mother-in-law. Then, there are the shocked gasps when I try to throw out, for example, milk that has been sitting out all day that my toddler son never drank. (My father-in-law stopped me, and then consumed the curdled milk the next day, explaining that it was no different than yogurt). And I swear my mother-in-law is silently reproaching me every time I get out of the shower for having consumed way too much hot water.  These are just a few of the stark cultural differences I was struck by before.

This visit I kept noticing how goal-directed my behavior was, and how this aspect seemed to have a surprising effect on my relationship to nature.  During our visit, I always wanted to know ‘what the plan was’:  where were we going, how long were we going to be there, etc.  I found myself getting quite impatient when I didn’t know the answers to these things, which was almost always.  I usually consider myself a flexible person, so I was surprised how much it bothered me to leave the house with only a very vague idea of what I would be doing or how long I would be gone.   For an example of a typical outing with my in-laws, consider what happened on the way home from our last visit to my husband’s grandparents.  First, we stopped by a neighbor’s so my son could play with a particular dog which had run up to the car to greet us.   Then we realized we forgot my son’s jacket, so we went back to get that, and once again stopped to see the dog. My father-in-law then spotted a tree along our route with some flowers that could be used to make a medicinal tea, so we stopped to pick some of the flowers.  Next we noticed an inchworm crawling slowly along the back of one of the seats, so my father-in-law again pulled over in order to release the inchworm back into the wild.  My husband saw a male pheasant, so my father-in-law stopped and we got out to take pictures.  He wanted to try to capture a picture of the bird in flight, so we did some pheasant chasing in an unsuccessful effort to scare it into flying.   A little later, we spotted a female pheasant, and again stopped.  Lastly we stopped to take some pictures of a sad-looking Romanian flag with one of the tricolors completely sheared off.  No one had bothered to replace it.  After about the third stop in the chain, I began to get impatient and wanted to get on home.  At the same time, I really admired the enthusiasm and almost insatiable sense of wonderment and curiosity so characteristic of my in-laws.

This experience had me wondering whether it might be possible to lose the ability to stop and smell the roses. William James drew a famous distinction between directed attention and involuntary attention, or fascination. Directed attention allows us to concentrate on task- and goal-oriented behavior, while involuntary attention results when we allow ourselves to be distracted by a bird chirping, twigs snapping, or the scent of jasmine in the air. Much has been said recently about the negative effects of directed attention fatigue, and the restorative effects of natural surroundings. I wondered, however, if it was possible to have the opposite problem, and whether a learned effectiveness at blocking out distractions over my lifetime had caused me to lose my naive ability to interact with nature. That is, an interaction in which one follows nature wherever it might lead, as opposed to imposing upon it one’s own will and goals.

Just when I was beginning to admire the Romanian attitude towards nature, my husband inadvertently alerted me to a darker side. As we were admiring the lone kitten his grandmother’s cat had, strangely, just given birth to, he remarked that it was fortunate that the cat only had one kitten. ‘Otherwise,’ he said, ‘they would have probably had to kill the others with a pitchfork.’ Needless to say, I was absolutely horrified by this statement. It was incomprehensible to me why they wouldn’t have the cat spayed in the first place, give the unwanted kittens away, or as a last resort, at least use a more humane method of killing. My husband’s response was simply, ‘What can I say, nature is cruel.’

Then, as we were leaving, on our drive back to the airport in Budapest, we nearly ran into a horse standing in the dark right in the middle of the road. My father-in-law honked his horn, but the horse barely moved. He appeared to have a lame leg, and his ribs were visible. My husband commented that it was sad that, now that new EU regulations forbid horses and carts on the main roads, many farmers, no longer having any use for their horses, simply abandoned them to starvation.

Nature’s cruelty or man’s?  Or is there a difference?


Filed under Essays

On neuroenhancement

The New Yorker recently featured an article on so-called “neuro-enhancement” or “cosmetic neurology.” As the name suggests, neuro-enhancement refers to off-label use of stimulants usually prescribed to children and young adults for attention-deficit hyperactivity disorder. The article cited some alarming statistics. For example, at some universities, the number of undergraduates taking prescription stimulants for non-medical use reached as high as twenty-five or even thirty-five percent. A study completed in 2005 reported average use at around 4.1 percent of students. The statistics suggest that pill-popping among college students in order to stay awake and focused is becoming about as “normal” and socially acceptable as drinking a cup of coffee. Why is that so disturbing?

There doesn’t seem to be much of a debate as to whether measures should be taken to stop this practice. Unlike in the sixties, when hallucinogens and other “mind-expanding” drugs were all the rage across college campuses, there is no “war” on this type of drug. Perhaps this is partly because the stimulants have a legitimate medical purpose, and are prescribed to treat a recognized psychiatric disorder. Most of the drugs that became popular in the sixties have no corresponding medical use, although, at least in the case of marijuana, some people might argue otherwise. But I think a more likely explanation for the relative complacency surrounding the stimulants has to do with today’s efficiency-driven culture. Unlike drug usage in the sixties, the drugs today are not being taken in the context of a counter-cultural movement. To the contrary, they are being used for the purpose of meeting or surpassing the very demands that we as a society, via educational institutions, employers, or other organizations, place on our youth.

In law school, we had a word for hyper-competitive, brown-nosing students. We called them gunners, and they were not liked. You know the type: the kind of student who would resort to misfiling the library books that everyone needed to complete an assignment, and other deceptive and off-putting behavior, just to gain a slight edge. In the end, though, it was never the gunners who earned the highest grades and got the best jobs. In our first year, almost inevitably, the top grade in any given class went to an unassuming student that no one else had really paid much attention to before. Later of course, we did notice them, and knew exactly who was going to be at the top of our class. They were always the students who floated through seemingly effortlessly, supported by a natural, unquantifiable knack for the nuances of legal analysis. That’s not to say they didn’t work hard. Sometimes those with the most natural talent were also the least motivated; after an initial advantage, those students fell behind and ended up in about the same place in the class rankings as the gunners and the others who, while maybe not as diabolically competitive as the gunners, worked extremely hard but simply lacked that innate ability that catapulted their peers to the top.

Why do I relate these observations about the old pecking order at my law school? Well, it struck me that the gunners are going to be among the early adopters of this new class of neuro-enhancing stimulants. Might this make them a little more effective in their quest for classroom dominance? If the statistics cited above are correct, and such a high percentage of students are taking these drugs, then it seems likely that this might indeed be the case. This is disconcerting to me because of the type of ability that it could potentially devalue and displace.

What does it mean to have a “natural” ability or knack for something? Although this manner of description comes quite “naturally” to us in common speech, it’s a concept that’s very difficult to pinpoint exactly. I venture to offer a few characteristics that people usually have if they are described as having a knack for something. This is by no means an exhaustive list, and reasonable people may well disagree with the importance that I assign to these factors, but it seems to me that the advantages enjoyed by a person with a knack fall into two broad categories. First, there is a quickness factor. A person with a knack usually picks up the target skill very quickly. A corollary to this might be that such a person does not stand to benefit as much from additional exercises and practice sets designed to teach lower-level aspects the person with the knack already understands. Second, the perspective of a person with a knack seems to differ from that of their peers. A person with a knack tends to have more insight into the “big picture,” and perhaps largely for that reason, it is usually the person with the knack who comes up with innovative new ideas.

Both of these characteristics – quickness and the tendency to have a better grasp of the whole – are intangible features that don’t lend themselves very well to quantification or scientific study. They are also features that don’t respond well to training and practice as means of improvement. Whether you “get it” or you don’t seems to be governed, not by conscious control, but largely by subconscious processes. In many respects, what we call natural ability overlaps with a concept cognitive neuroscientists call “automaticity,” or perhaps it is better to say that people with a knack seem able to automatize difficult or complicated tasks more quickly than others. Neuroscientists categorize a skill as automatic if it is inevitably and incorrigibly executed given the relevant stimulus, and, once executed, does not consume cognitive capacity. That is, other tasks can be performed in parallel. Paradigmatic examples include reading or playing a piano sonata. In short, people with a knack are able to quickly automatize certain necessary skills, which then frees up cognitive resources for exploring a given subject more deeply, or simply from more of a meta-perspective.

So what happens if the gunners begin setting the pace? They lead us right into a Heideggerian nightmare: instead of using technology to take over the drudgery and the minutiae and leave us free to actualize those creative impulses that, perhaps even more than reason, set us apart from animals, we are using technology to enslave our minds. Sparks of inspiration don’t come when all cylinders are firing, when, for example, we are frantically putting together that quarterly report for the boss by the end of the day. They are far more likely to come when the mind is free to wander: while we are taking a walk, sitting by the side of a pool, or relaxing in the evening before we head off to bed. Neuro-enhancers may lead to an amazing ability to churn out more reports, but they also have the effect of crowding out the possibility of inspiration and originality, leaving behind the slick outer shell of an over-polished, two-dimensional world. Ironically, it appears that the things we don’t have conscious control over might be the very things that set us apart as individuals, for it is our subconscious, automatic, instinctual behavior that frees up cognitive capacity and saves us from becoming automatons, mere cogs in a machine. Perhaps society would benefit more if the best students, instead of following the lead of the gunners, took the afternoon off to smoke a joint or experiment with LSD.


Filed under News Commentaries

Real Anatomy

John Warner and James Edmonson recently published a collection of photographs from early twentieth-century gross anatomy classes in American medical schools. The pictures date from a time right after Kodak introduced cameras to the masses, when it was considered fashionable for medical students to pose with the remains of dissected cadavers. The publication of these pictures, however, arrives amid much discussion as to whether this rite of passage of the first year of medical school – dissecting a human corpse – should be phased out in favor of new technology which allows for the possibility of virtual dissection.

Virtual dissection? I simply cannot understand how a virtual process can come even remotely close to offering students an experience similar in kind to that of a real dissection. I am not a doctor, so I suspect I must be missing something here. After all, my only first-hand experience with dissection came in high school biology. I did visit a gross anatomy class once, though. But that was years ago, when I myself was in my first year at law school. I was a bit tipsy, and it was sometime around 2 or 3 a.m. And no, I wasn’t supposed to be there. I had been out that night with a couple of friends who were in their first year at medical school, and they decided it would be fun to show me the cadavers. Despite my alcohol-influenced mental state, the experience left its impression. The cadavers were preserved in what I guess was formaldehyde, in shiny metal, coffin-like tanks. A lever was pulled, and the corpses popped out of the boxes in a ghoulish version of a jack-in-the-box, with the preserving liquid streaming off their bodies back into the tank. I remember telling my friends that the scene didn’t disturb me in the slightest, and that I found a nearby skeleton on display far creepier. That was a lie.

One of my friends took me aside and told me, with a deeply serious, grim look on his face, never to consider donating my body to science after my death, or to allow anyone I love to do so. He said I wouldn’t believe the level of disrespect shown the cadavers, a fate he could not wish or imagine for anyone he had known or cared about. I think now that he must certainly have turned into a fine doctor. I suppose the disrespect he spoke of was a mark of detachment, the beginning of a defensive mechanism that doctors often use in order to cope with the kind of emotional turmoil faced when constantly reminded of one’s own mortality. The knowledge that the corpse they were dissecting was once a living, breathing human being – for most of them, this was their first real confrontation with the emotions stirred up by that simple fact. I wonder, will a computer program similarly be able to force them into that stare-down with the grim reaper, that dark, robed figure lurking about the hospitals that these young doctors will really be fighting against for the rest of their careers?

Another thing I remember from that night was a discussion, or rather, portions of a discussion, my friends had about a specific body part that was the focus at that time in their anatomy class. Unfortunately, I can no longer remember what this body part was or even the general area in which it was located. All I remember was that it was apparently very, very tiny, and didn’t seem to be located in any of the cadavers where the textbooks said it should be. They were trading stories about the surprising locations where they ended up finding the part, or the bizarre ways it had been bloated, shrunken, or otherwise distorted nearly beyond recognition. How can a computer program capture these infinite variations? Wouldn’t the creation of virtual dissection programs necessarily involve standardization to a far narrower range than is seen in real life?

Proponents of virtual dissection maintain that the programs represent a vast improvement over the smelly, messy inconveniences of flesh. But isn’t that precisely what we are supposed to be training doctors to cope with – the smelly, messy inconvenience of the human body? There is something else that strikes me as I look through the old pictures: no one is wearing gloves. The pictures were taken before latex gloves came into common use. So I wonder, am I playing the part of a nostalgist and a Luddite? Were people back then voicing similar concerns about the use of gloves? Were they worried that something essential would be lost when they no longer felt the raw sensation of touching entrails and secretions with their bare hands? At what point does a change spurred by technological development bring about a qualitative, essential difference in experience, and when does it matter? Or does it ever matter?


Filed under News Commentaries