
Journal 09-18-03

Watched the Hollywood version of Solaris, to my regret.  I started the Tarkovsky version sometime last year but wasn’t able to finish it.  That one seems really interesting and much deeper.  But I suppose this version wasn’t a total loss.  In fact, I think it brought up some interesting and relevant issues.  The basic idea is quite similar to Kurzweil’s proposal that it will be possible to have computers that will be just like us, exceeding our intelligence and perhaps even our biological functions.  Kurzweil believes that this will be our creation.  But in the movie, it is believed to be created by an artificial intelligence called Solaris.  Now, perhaps Kurzweil would say that Solaris is what we humans will in fact achieve, in that Solaris is not purely mystical but technological, a product of natural law, though that law hasn’t been discovered on earth or explained [in this movie].  The doctor in the film is able to figure out the biology of it and combat it.

So, in more detail about how this possibility presents itself: it seems like Solaris reads human minds (including emotion) and then creates “visitors” based on that.  At first it seems to be based solely on memory; therefore, in the main character’s case, since his visitor was dead, the existence of this visitor is limited to what the person remembers.  This also means the immortality of the visitor comes to the same point.  As the visitor said: “I’m suicidal because that’s how you remember me.”  But [the visitor] doesn’t ever die and instead seems to go through a reverse process toward resurrection.  The main character isn’t convinced that we are predetermined to repeat our past.  He believes he can pick up where he mistakenly left off (the fight before her suicide) and start anew, make changes.  The problem is that, even if it were possible, the future is created based only on HIS version of everything.  At the end of the film, the thing that bothered him the most was the thought that he had remembered it wrong.  Remembered her according to his version of her.

That is the key!  At another point in the film, the debate over the existence of God comes up.  He characteristically believes that there can be no God, that our lives are just mathematics and logic.  She disagrees, naturally, mentioning that proof of God is awareness of mortality.  It doesn’t get debated any further.  But while she (in remade-post-death-Solaris form) is able to remember, to have accurate memories and emotions, she is also aware that she has not experienced any of it.  This is because it is HIS memory and not hers, which ended when she died (stopped being uniquely created and capable of creating memories).  She is not real because she can’t die in this form.  It is not just logic and memories and mathematics.  There is more.

The ending didn’t quite sell me.  It is the same scene from the beginning, before he goes to Solaris.  The events are repeated with minor changes: 1) her picture is on the fridge (a point of contention: on Solaris he thought it odd that there were no pictures in his house since she died); 2) his cut doesn’t bleed, though he puts it under the faucet as if it were [as he remembered it].  He eventually sees that she is actually there.  She convinces him that this time it is real, all our sins are forgiven.  It is very vague and lacking depth.  It’s the difference between the ultimate state of human reality being 1) a perfect world as we imagine it, with pictures on the wall and getting to be with your wife and all your sins forgiven (though you never worried or cared about it in the first place), and 2) sins forgiven as ultimate freedom and individuality.  That was the key statement; actually, it was the only statement she made, so the rest of it just didn’t provide the substance to back it up.  But to define the vague point more clearly: she was real and he was real, and they had nothing to do with it.  It wasn’t just the sum of their parts, but merely the idea that they are real.

**I’m going to have to watch this again tonight and fill this in – I think.**


Journal 09-17-03

A classmate wrote on the discussion list:

The biggest flaw with Kurzweil is that he uses the term spiritual.  I think he did this more for shock value than anything else.  He doesn’t equate spiritual with any sort of what I would consider spiritual, but rather uses it as a synonym for conscience.  That is certainly not the connotation that one associates with spiritual.

I had this thought too, though more vaguely.  One of Kurzweil’s critics’ arguments focused, sort of, on this aspect.  Basically, this critic was saying that to use the word spiritual this way is to give it a lesser meaning, a watered-down version of what spirituality means.  But I tend to agree with my classmate’s more concrete statement that the word spiritual should be thrown out altogether.  It is simply the incorrect term to use, and perhaps he DID use it for the controversial effect.

A funny conversation on what makes humans human.  My friend was wondering what it is that causes us to be bored. Do computers become bored?  Is boredom a chemical state of the brain?  I thought this was an interestingly depressing case for humanity.

Although, another case I’ve heard before is the idea of memory.  I think this is one Kurzweil would be more enthusiastic about making a case for (that it is in memory where computers exceed humanity).  But how do you determine why my version, or memory, of something varies from another’s, though it is the same event being remembered?

**reminds me of a THIS AMERICAN LIFE episode clip**

What frustrates me (among the MANY things that do) is that [my attempt to defend humanity] seems to all amount to distinctions that really can’t be defined, at least not by science.  I believe it is quite clear by faith.  However, a scientific faith is sort of implied, and one of Kurzweil’s critics points out this “promissory evidence” as a weakness.  It is similar to saying that because Kurzweil’s theory makes sense (to him), and he may have come to it from a theory that is actually credible, this does not make the result credible, or the promissory result credible.

Journal 09-15-03

Having Mountain of Silence open while reading Kurzweil has been of great comfort.  It clears my head and reminds me that there is beauty in humanity.  One question that a person in this book asks Fr. Maximos is, “Who is more useful to society: a doctor or a monk?”  Fr. Maximos begins by saying that the question itself is flawed.  At the least, he says, “It is characteristic of a modern way of thinking…an activist orientation to the world.”  That orientation is that people are worthy based on their usefulness instead of who they are, instead of their humanity (if I were referring back to Kurzweil’s arguments).  Fr. Maximos even goes on to say that if we don’t view people first and foremost on who they are, “…we run the risk of turning people into machines that produce useful things” (my emphasis).  He also comments on how this type of attitude toward ourselves, and I would add toward others as well, often leads to psychological problems.

So, I think this is true and even think it follows into Kurzweil’s entire basis.  It is a modern way of thinking, which doesn’t [necessarily] mean it is right.  Another author, Ken Wilber, claims we can know reality in three ways: the eye of the senses (empirical science), the eye of reason (philosophy, logic, math), and the eye of contemplation (systematic and disciplined spiritual practice to open up the intuitive and spiritual faculties of the self).  **unknown source, perhaps Ken Wilber?**  He goes on to attribute the Western trend to an imbalanced reality, toward sense and reason and away from contemplation.  I think Kurzweil would begin to argue that the eye of contemplation will be within the capabilities of these spiritual machines, given the words: systematic and disciplined spiritual practice.  This gives spiritual virtually no meaning at all.  Thus, in the remaining part, which defines a bit more of what spirituality is (open up, intuitive, self), Kurzweil would lose ground.

His modern thought has allowed for viewing humanity in mechanical terms to the extreme.  And I would say that even his critics are victim to the same, as much as they argue the various points of his theory to refute it.  Denton states that “there is no longer any doubt that many biological phenomena are indeed mechanical and that organisms are analogous to machines at least to some degree”.  The last part (analogous) I can handle, but why isn’t it the other way around?  Machines didn’t create biology, so how is it that biology can be described so firmly and coldly as “indeed mechanical”?  Isn’t it rather that we invented machines to do the work we want to do better, and thus that machines are more biological?  Did I just say that?  Oh, context, context!  What I mean is the context out of which ideas are brought.  We create machines out of a model of biology, out of a model of ourselves.  Technology, combined with a flawed/modern thought process (thinking of ourselves in terms of usefulness alone), got us to the point of seeing these machines as human-like and allowing that the inverse is true: that we, as humans, are machine-like.  These are both true, but they do not imply a merging of the two, unless you think like Kurzweil.  And you can’t make me!

Journal 09-14-03

More thoughts on issues Kurzweil stirs in me, mostly defining humanity apart from a solely mechanistic rationale.  Searle rebuts with distinctions between computing symbols and conscious understanding.  Dembski follows by showing how computers lack a frame of reference, context (an inability to get the joke).  Though, how many of us out there are just like that (ha ha).

Anyway, what Searle said got me thinking about the technical services side of the library (that being my current work).  Don’t we just sort out [make symbols or code of] the info?  How much is conscious understanding, or how many decisions require that “gut” feeling?  A subject cataloger might argue that it is quite a bit: not a technical service, but an art.  Taken too far in my train of thought, I wondered how many of us techies could be (ARE BEING) replaced by technology.  In fact, we embrace it to a large extent: anything that helps us do our job faster.  What we’ve found is that this sometimes causes a predicament.  If you don’t use the fast technology, your work becomes irrelevant (too slow, unnecessary work to get the job done).  On the other hand, embrace it too fully and one might end up wondering what your warm body is even doing there.  Maybe that’s drastic.  But I have found myself twiddling my thumbs every now and then, when a pile of work I thought would take two hours I managed to get through in one.  This is also partly my keeping up with the pace.  My skills [get] faster and computers are [getting] faster.


Is this where libraries in general will find themselves if they embrace the electronic format too fully, if they abandon the traditional library too fully?  I guess with relief I return to the fact that coincides with my last statement.  Since we (humans) will create the machines, we (librarians) will integrate them into the library.  Then, call me naive, but I think any further argument Kurzweil makes (machines self-replicating and such) is too “out there” to worry about right now, if ever.

Journal 09-12-03

Reflections on quotes from Are We Spiritual Machines?

“Most of the complexity of a human neuron is devoted to maintaining its life support functions, not its information processing capabilities.  Ultimately, we will need to port our mental processes to a more suitable computational substrate.” (p. 29)

In this paragraph, Kurzweil becomes quite confusing.  I can’t readily tell who the “we” is that he’s talking about.  I would argue that humans do not need to port our mental processes to a more suitable computational substrate.  But my guess is, he is now talking about the superhuman.  Or is he?  This is what frightens me about his ideas, and also how connected they are to his evolutionary philosophy, alluding to the idea that this is where we are headed as a species.  He goes on to talk about its place within the evolutionary construct (refuting the relationship to destiny, of course) when he says:

“We will reverse engineer the human brain, not because it is our destiny, but because there is valuable information to be found there that will provide insights in building more intelligent (more valuable) machines.  We would have to repeal capitalism and every vestige of economic competition to stop this progression.” (p. 53)

I agree with this [last sentence] entirely, but then again I think it probably not the best thing that capitalism has led, or will lead, to this state.  I think it is one of the dangers of a purely capitalistic system that is born out of and interwoven with a scientific materialist system.  The problem is that there are other systems.  Though, as Denton points out, the traditional vitalistic alternative has virtually no support.  Not that I even know fully what THAT is!  I’m not a philosopher, so this all becomes difficult for me to comment on.  Add to that Kurzweil’s own invented theories.  [For example], he admits, “we assume that other humans are conscious”, and his emphasis in this statement is on the assumption.  But never does he admit that his own theories are assumptions, that the science and philosophy themselves are assumptions.

Journal 09-05-03

I just got Are We Spiritual Machines? and am worried that it is going to be a difficult read.  I suppose, as an optimist, I should say it will be a challenge.  Already I’ve come to these thoughts.

Does it matter if it is true?  And I actually believe, for what it’s worth, that it probably is possible, on the terms Kurzweil describes, to have computers which exceed human intelligence.  Here is the challenge: believing in the terms themselves.  As Searle explains, the theories by which Kurzweil concludes these arguments are theories of his own invention.  And this is the pattern of the entire scientific materialist, naturalist argument.  Accepting these terms goes contrary not only to what I believe, but also to what seems common sense.  Therefore, philosophizing about the possibilities of this thought seems like a ridiculous and daunting task.  I also admit my fear that I don’t have enough to refute this thought either.  I just have what I believe, what I have faith in.  I was able to see that Searle, and even Denton at times, put into words the arguments I feel in my gut.  One being, from Searle, as I referred to earlier, that the theory itself is based on theories that Kurzweil invented.

So, ultimately, I’m back to my question of whether it really matters.  Well, I suppose ultimately this is what I am supposed to figure out through reading it.  Why does it in fact need to meet a wider audience?  What issues does it bring to the field of information management?  This is what all my friends are asking me: “So why are you reading that?”  “What does that have to do with libraries?”

To answer that at this stage seems premature, but my guesses are the same social issues that Fahrenheit 451 brings up.  Kurzweil does bring up the issue of regulation.  How do you regulate this kind of technology, which is (in his opinion) only the natural next step in the evolution of man?  It is noted that the development of this is not happening within the government, but with individual people and in the commercial market, making regulation of it even trickier, and the reality and implications of it more frightening.

Kurzweil, R., Richards, J. W., & Gilder, G. F. (2002).  Are we spiritual machines?: Ray Kurzweil vs. the critics of strong AI.  Seattle, WA: Discovery Institute Press.
