Spiritual Machines

Reflecting on Douglas Hofstadter's question, "Will spiritual robots replace humanity by 2100?", Kevin Kelly writes:
When thinking in the long term, especially about technology, I find it very helpful to think in terms of human generations. As a rough estimate I reckon 25 years per generation. Civilization began about 10,000 years ago (the oldest city, Jericho, was born in 8000 BC) which makes the civilization now present in Jericho and the rest of the world about 400 generations old. That's 400 reproductive cycles of mother to daughter. Four hundred generations of civilized humans is not very long. We could almost memorize the names of all 400 cycles if we had nothing much else to do. After 400 generations we are different people than when we began. We had the idea of automatons and robots only maybe 8 generations ago, and made the first electronic computers 2 generations ago. The entire World Wide Web is less than 2,000 days old! The year 2100 is only four generations away, keeping the same human lifespan. If we morph into robots in 2100, civilized humans will have lasted only 400 generations. That would be the shortest lifespan of a species in the history of life.

[The] central question, the central issue, of this coming century is not “what is an AI?” but “what is a human?” What are humans good for? I forecast that variants of the question “What is a human?” will be a recurring headline in USA Today-like newspapers in this coming century. Movies, novels, conferences and websites will all grapple with this central question of “Who are we? What is humanity?” Fed by a prosperous long boom, where anything is possible, but nothing is certain, we’ll have more questions about our identity than answers. Who are we? What does it mean to be a male, or female, or a father, an American, or a human being? The next century can be described as a massive, global-scale, 100-year identity crisis. By 2100, people will be amazed that we humans, back here now, thought we knew what humans were.

I agree with Kevin's thesis here, but an equally important question [if only as a corollary] is, "What is a machine?" Barring some catastrophe, the boundaries between organism and machine, self and other, will gradually blur to the point where it will sometimes be difficult to tell the difference. Ubiquitous computing technologies -- the next-next-gen of so-called Web 2.0 applications -- will enable humans to colocate segments of their memories and even identities, moving beyond remote storage systems to "remote agency" systems. Where then will the boundary line be drawn? Is a software agent that intelligently acts on my behalf -- based on an acquired understanding of my needs and desires -- a mere robotic employee, or is it an extension of my self, a partner in the forging of my identity, a semantic feedback matrix that is uniquely my own? Like a book or other utterance, such an agent would be a partial representation of my inner being, but unlike traditional texts, it would be an active representation, capable of performing tasks or making additional utterances in a mode consonant with my projected identity. Furthermore, and most importantly, its active nature would enable a kind of collaboration between "it" and "me" in the evolution of my identity. Authors typically claim books as extensions of themselves; would the same hold true for a software or robotic agent that putatively contained and contributed to some essential aspect of selfhood?

(My response is also posted in the comments on Kevin's blog.)