Monday, February 1, 2016

USA Africa Dialogue Series - DEBATING ARTIFICIAL INTELLIGENCE


Part of the Project of Modernity, or the Enlightenment Project, as espoused by the "moderns," is the belief that through Reason, Science and Technology there is no limit to the progress humans can achieve. It is as though our destiny as people living in the modern world is a kind of modern utopia. Of course, postmodern scholars do not agree with that. 

The document below is a summary of an interesting debate about artificial intelligence. For some, it is a sign of progress; for others, it is a pathway to making the lives of many humans irrelevant, or at least redundant, because if we can create machines that reach even 65 percent of human intelligence, they can do enough to make many humans irrelevant. Where will all this take us? At the very least, such machines can do what many human beings with little or no human capital cannot do.

Although countries in the developing world can say that they are far behind, note that nowadays, when a new technology is developed in the West, it takes only a short time before someone comes up with a reason to justify exporting it to, and adopting it in, the Third World. It is no longer like the days when the division of labor was so strict that advanced technology was used only in the West while Third World countries produced only primary products. 

If you are willing to spare some time from the daily routine and what some call the "tyranny of the urgent," then take an interest in this broad subject in social theory, social change and development. Please take time to read the article below and make up your own mind. You will learn about what is called "Singularity Theory," as espoused by Singularitarians. 

The irony of this is that many people in Africa, in a strict neoliberal economic sense, are what some economic geographers have characterized as "surplus people": they have no relevant human capital that is valued in today's global economy, they have no discretionary income to spend that could make them players in the consumer society, and they do not produce anything of value that could make them crucial players in the system. If a person is not relevant in any of those areas, neoliberal globalization sees him or her as a nuisance, and I am sorry to say so, for this is a huge statement to make about humanity. The economy literally treats "surplus people" as redundant or irrelevant: one hundred thousand of them could die before 6 p.m. and it would not affect share prices in the global business hubs of the world the next day.

Yet these are humans with dignity in theory, and before God they are created in His image, as people of faith would say. So what does this all mean in our world today? Are we just glorifying well-written constitutions that in theory guarantee human rights but in practice treat humans as irrelevant if they cannot find a way to be effectively relevant in the neoliberal global economy? Are we just reading Holy Books that preach the dignity of humankind while stopping short of taking a prophetic stance against Methodological Atheism, a situation in which people believe in theory in God, the creator of heaven and earth, but in practice, given the way they operate on a day-to-day basis, behave as if such a principle were irrelevant? It becomes relevant only in special moments, in church or mosque (for Christians and Muslims) or during the performance of religious rituals, but the ethical and moral implications of such prayers and rituals are not allowed to penetrate day-to-day interactions and decisions, or to transform the hearts, minds and character of the people.

***********************************************************************************************************************
NOVEMBER 26, 2014 11:40 AM

Enthusiasts and Skeptics Debate Artificial Intelligence

Duelling over the Singularity: Ray Kurzweil (left), who sees salvation in artificial intelligence; Jaron Lanier (right), a leading skeptic.
Kurt Andersen wonders: If the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?

THE GREAT SCHISM

Artificial intelligence is suddenly everywhere. It's still what the experts call "soft A.I.," but it is proliferating like mad. We're now accustomed to having conversations with computers: to refill a prescription, make a cable-TV-service appointment, cancel an airline reservation—or, when driving, to silently obey the instructions of the voice from the G.P.S.

But until the other morning I'd never initiated an elective conversation with a talking computer. I asked the artificial-intelligence app on my iPhone how old I am. First, Siri spelled my name right, something human beings generally fail to do. Then she said, "This might answer your question," and displayed my correct age in years, months, and days. She knows more about me than I do. When I asked, "What is the Singularity?," Siri inquired whether I wanted a Web search ("That's what I figured," she replied) and offered up this definition: "A technological singularity is a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict."

Siri appeared on my phone three years ago, a few months after the IBM supercomputer Watson beat a pair of Jeopardy! champions. Since then, Watson has been speeded up 24-fold and fed millions of pages of medical data, thus turning the celebrity machine into a practicing cancer diagnostician. Autonomous machines now make half the trades on Wall Street, meaning, for instance, that a firm will often own a given stock for less than a second—thus the phrase "high-frequency trading," the subject of Flash Boys, Michael Lewis's book earlier this year. (Trading by machines is one reason why a hoax A.P. tweet last year about a White House bombing made the Dow Jones Industrial Average suddenly drop 146 points.) Google's test fleet of a couple dozen robotic Lexuses and Priuses, after driving more than 700,000 miles on regular streets and highways, has been at fault in not a single accident. Meanwhile, bionic and biological breakthroughs are radically commingling humans and machines. Last year, a team of biomedical engineers demonstrated a system that enabled people wearing electrode-embedded caps to fly a tiny drone helicopter with their minds.

Machines performing unimaginably complicated calculations unimaginably fast—that's what computers have always done. Computers were called "electronic brains" from the beginning. But the great open question is whether a computer really will be able to do all that your brain can do, and more. Two decades from now, will artificial intelligence—A.I.—go from soft to hard, equaling and then quickly surpassing the human kind? And if the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?

Those questions might seem like the stuff of late-night dorm-room bull sessions. But since the turn of this century, big-time tech-industry figures have taken sides: ultra-geeky masters of the tech universe versus other ultra-geeky masters of the tech universe. It's a kind of Great Schism separating skeptics from true believers, dystopians from utopians, the cautious men from the giddy boys. (And let's just say it: they're almost all male.) This existential argument over the Singularity was the subject earlier this year of a big-budget action thriller called Transcendence, starring Johnny Depp as a Singularitarian A.I. genius at M.I.T. who's poisoned by Luddite assassins, but before he dies, uploads his consciousness to a cloud. And now there's a movie about a real-life genius and key pioneer of the digital age: in The Imitation Game, Benedict Cumberbatch plays Alan Turing, the young Brit out of Cambridge University who dreamed up the computer in the 1930s and then helped crack the Nazis' machine-generated Enigma code during World War II.

MASTERS OF THE UNIVERSE

After the war, as the computer era began in earnest, Turing published a paper arguing that machines would eventually become intelligent, and suggesting a practical test, an "imitation game," in which computers would attempt to fool people by passing for human. Remarkably, the paper—written in 1950, when a state-of-the-art, 14-ton computer had a memory equal to a few pages of text—lays out the basic terms of the Singularity debate as it exists today. Turing paraphrases each of the major objections to the idea of truly intelligent machines—technical, religious ("Thinking is a function of man's immortal soul"), humanist ("The consequences of machines thinking would be too dreadful"), the problems of understanding real life's ambiguous informal rules (such as driving), and the presumed impossibility of a computer achieving consciousness ("Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt . . . could we agree that machine equals brain").

"I believe," Turing wrote, "that in about 50 years' time it will be possible to programme computers . . . to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." His game became known as the Turing test. In 2012, the biggest Turing test yet was conducted in Britain, with 30 judges interrogating 30 individuals—25 human beings and five pieces of humanoid software. The best computer program fooled the judges 29 percent of the time. Then this past June at the Royal Society in London, the same bot fooled 10 of 30 judges—33 percent, beating Turing's 30 percent threshold.

The two camps of digerati, Singularitarians versus skeptics, started forming at the end of the 20th century. At an important annual tech conference 16 years ago, Bill Joy, co-founder of Sun Microsystems, met Ray Kurzweil, inventor of text-scanning, music-synthesizer, and voice-synthesizer technologies. Kurzweil was in the hotel bar, insisting that before long computers would be sentient, and gave Joy the galleys of his forthcoming bestseller, The Age of Spiritual Machines, in which he argues that by 2029 we will have not only super-humanoid computers but also nanotech and biotech wonders that will eliminate most poverty and disease, producing the equivalent of "20,000 years of progress" during the 21st century.

Kurzweil, prolific author and Singularity theorist, and co-founder of Singularity University.

Joy freaked out. In the spring of 2000, he published a cover story in Wired, "Why the Future Doesn't Need Us," describing his epiphany and apostasy. He'd suddenly realized, he wrote, that he "may be working to create tools which will enable the construction of the technology that may replace our species." Like J. Robert Oppenheimer at the beginning of the nuclear age, Joy worried that perhaps, in the new Digital Age, he had become death, a destroyer of worlds.

In Kurzweil's The Age of Spiritual Machines there's only a single footnote reference to the "Technological Singularity," a term popularized a few years earlier by the computer scientist and science-fiction writer Vernor Vinge for the moment when machine intelligence surpasses the human kind. Vinge was deeply ambivalent about what he considered this inevitable near future. But Kurzweil was only excited, and determined to lead a movement to bring it on. He called his 2005 manifesto The Singularity Is Near and gave it a happy, triumphalist subtitle: When Humans Transcend Biology.

But the skeptics' camp has grown. Its members include Jaron Lanier, the virtual-reality pioneer, who now works for Microsoft Research; Mitch Kapor, the early P.C.-software entrepreneur who co-founded the Electronic Frontier and Mozilla foundations and has bet Kurzweil $10,000 that no computer will pass for human before 2030; Microsoft co-founder Paul Allen, who has an eponymous neuroscience-research institute and picked a fight with Kurzweil and the Singularitarians, accusing them of vastly underestimating the brain's complexity; Jaan Tallinn, the Estonian software engineer who helped create Kazaa and co-founded Skype, and who worries that "once technological development is yanked out of our hands"—with more autonomous and self-replicating computers—"it doesn't have to continue to be beneficial to humans"; and Elon Musk, the co-founder of Tesla Motors and the founder of the commercial space-travel company SpaceX, who says that "with artificial intelligence we are summoning the demon" and calls A.I. probably "our biggest existential threat," one that may well cause something "seriously dangerous" within a decade.

Lanier, the virtual-reality pioneer, and critic of the Singularitarian "religion."

Passion and excitement and followers tend to accrue more to excited visionaries than to gloomy Cassandras. I met with Lanier at a café in Berkeley not far from his house. His look—long dreadlocks, baggy black T-shirt and pants, fully countercultural in a neo-hobbit-ish mode—doesn't exactly jibe with his old-fashioned conservatism. When he visits colleges and universities, he said, "wandering around a computer-science department and seeing what books are on the shelves, it seems like mine's on a third of the shelves and Ray's are on more than half of the shelves. So I'm in the minority party." At Microsoft, where he works, he says, there are some people in the labs who are into the Singularity culture, but their presence at Google is pervasive and even dominant.

The difference is partly geographic. Utopianism is in the D.N.A. of the Bay Area, from its Gold Rush fairy-tale modern beginnings through the lush New Age dreams of lifestyle perfection. Out of the 60s Bay Area counterculture, Lanier explained, grew "a kind of magical thinking, in which you arrogantly assume that by perfecting your mind you can have more control of reality than other people of less perfected minds. That notion merged with this idea of A.I., and it blended into the idea that by perfecting the algorithmic sensibility you could change the world to your will." He went on, "I think it would be harder for somebody to disagree with the religion and be at Google." As Kapor told me, referring to co-founder and C.E.O. Larry Page and others, "There are people at Google who really believe in the dream."

Mitch Kapor, founder of Lotus Development Corporation and a co-founder of the Electronic Frontier Foundation, has bet Kurzweil that Singularity will not be achieved before 2030.

I met with Kurzweil at the Dunder Mifflin–esque offices of Kurzweil Technologies, in suburban Boston. He was in the process of packing up for a move west to Silicon Valley, about to begin his new job as Google's director of engineering—leading a research team to create A.I. software that can converse in fully human fashion. For all his intellectual extravagance, in person he is low-key and professorial, except for the five rings he wears. When I asked if he took the job because Google will be the most powerful entity as we transition toward the Singularity, Kurzweil said it "would be somewhat self-serving to say that Google will be the best," but . . . yeah. For years he'd had conversations with Page, whom he calls "a fan," telling him he wanted to "re-create the kind of intelligence that humans have" and hitch it to super-big data. "And he said, 'Well, why don't you do it here in Google?' "

Overseas there are now university institutes devoted to worrying about the downsides of the Singularity: at Oxford, the Future of Humanity Institute; at Cambridge, the Centre for the Study of Existential Risk (co-founded by Tallinn of Skype). But in America, we have Singularity University, the go-go postgraduate institute founded by Kurzweil and Peter Diamandis, the M.I.T.- and Harvard-educated tech entrepreneur, creator of the X Prize (multi-million-dollar contests to promote a super-efficient car, private moon landings, and the like), and co-author of the bestselling Abundance: The Future Is Better Than You Think (2012). Singularity University (S.U.) is based at NASA Research Park, in Silicon Valley, and funded by a digital-industrial-complex pantheon: Cisco, Genentech, Nokia, G.E., Google. In its six years of existence, many hundreds of executives, entrepreneurs, and investors—what Diamandis calls "folks who have done a $100 million exit and are trying to decide what to do next"—have paid $12,000 to spend a week sitting in molded-plastic chairs on rollers (futuristic circa 1970s) listening to Kurzweil and Diamandis and others explain and evangelize. Thousands of young postgrads from all over the world apply for the 80 slots in the annual 10-week summer session that trains them to create Singularity-ready enterprises.

Peter Diamandis, co-founder of Singularity University and founder and chairman of the X Prize Foundation.

In Abundance, Diamandis lays out the Singularitarians' Utopian vision of how new technologies, made incredibly cheap by "exponential price-performance curves," will shortly provide plenty of clean water, food, and energy for all earthlings, as well as decent educations and adequate health care. In addition to their certainty and optimism, Singularitarians place great faith in the power of unfettered individuals—that is, themselves—to usher in the amazing future. I spent a day at S.U. during one of the weeklong executive sessions, and at one point all 80 attendees, three-quarters of them men, were herded outside. They were instructed to spread out over the parking lot, arranging themselves along two axes: to one side if they were more optimistic and the other if they were more pessimistic, at the top of the lot if they believed individuals matter most and at the bottom if they believed groups matter most. A large plurality clustered in the optimist-individualist quadrant.

"That was a philosophy my family gave me from a very young age," Kurzweil explained. " 'You, Ray, can overcome any challenge.' "

You, I asked, not one, not determined people in general?

"Well, yeah, humankind," he said. "And also me."

TRIPLING THE BET

The Singularitarians' fundamental tenet is that Moore's Law—the doubling of digital technology's bang-for-the-buck every year or two for the last half-century as microchips have gotten faster, cheaper, and more powerful—is no temporary thing: that the exponential increase in inexpensive computer power is bound to keep spiraling upward and onward. "Some people," Kurzweil said when I asked him about his peers who doubt the vision of never-ending, all-encompassing exponential change, "are incredibly resistant to it. If you've built up a whole intellectual edifice that's based on certain linear assumptions, that's kind of threatening to a lot of your ideas." When Diamandis was telling me about all the large old companies and organizations that are doomed because of their leaders' "human inability to understand exponentials," I realized it was the first time outside of science fiction that I've encountered human as a pejorative.
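To make the exponential claim concrete: a doubling every 18 months (one common statement of Moore's Law; the article itself says "every year or two") compounds to roughly a ten-billion-fold improvement over 50 years. A minimal sketch of that arithmetic:

```python
# Compound growth under a Moore's Law-style doubling: price-performance
# doubles once every `period` years, so after `years` years the total
# improvement is 2 raised to the number of doublings. The 18-month
# period is an illustrative assumption, not a figure from the article.
def price_performance_multiple(years, period=1.5):
    return 2 ** (years / period)

# Fifty years of doubling every 18 months compounds to roughly a
# ten-billion-fold improvement -- the kind of curve Singularitarians
# extrapolate forward indefinitely.
print(f"{price_performance_multiple(50):.3e}")
```

Whether that curve keeps climbing, or flattens out, is exactly what the two camps dispute.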

Kurzweil and Kapor will need to live to 81 and 79, respectively, to see who wins their Turing-test bet, in 2029. The test will consist of three judges each texting for eight hours with three people and one computer, and the threshold will be much higher than Turing's. The computer will need to persuade two of the three judges that it's a person and persuade all three judges collectively that it seems more human than two of its three human foils.
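Read as a decision rule, the bet's two-part winning condition can be sketched as follows. This is one plausible reading of the terms as summarized above; the function and argument names are illustrative, not the bet's official wording:

```python
# A sketch of the Kurzweil-Kapor Turing-test bet's winning condition,
# as summarized above: three judges each text with three humans and
# one computer. (Names and structure here are illustrative.)
def computer_wins(judged_human, collectively_outranks_two_foils):
    """
    judged_human: three booleans, one per judge -- did that judge
        individually conclude the computer was a person?
    collectively_outranks_two_foils: did the judges' combined
        rankings place the computer above at least two of its
        three human foils?
    """
    # Condition (a): at least two of the three judges call it human.
    fooled_majority = sum(judged_human) >= 2
    # Condition (b): the collective ranking beats two of three foils.
    return fooled_majority and collectively_outranks_two_foils
```

Turing's original 1950 criterion, by contrast, required only that the machine be misidentified about 30 percent of the time after five minutes of questioning.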

When they made the bet, 12 years ago, Kapor admitted that it was "possible to imagine a machine . . . winning Jeopardy!," but declared that it was "impossible to foresee when, or even if" a computer would be fluent in the full "scope of human experience," including "unusual but illustrative analogies and metaphors." He admits now that Watson, able to decipher all sorts of puns and metaphors instantly during its Jeopardy! victory, "was an impressive demonstration. It showed that computers will be able to do certain things that weren't anticipated." But he's sticking to his guns. "I told Ray I'd double or triple the bet. Human intelligence is a marvelous, subtle, and poorly understood phenomenon. There is no danger of duplicating it anytime soon."

Another digital macher who agrees is Marc Andreessen, a creator of the original Netscape browser and now a Silicon Valley VC. In one of his recent tweet-storms he addressed the subject: "I don't see any reason to believe there will suddenly be some jump where all of a sudden they are like super-human intelligence," he tweeted, and, "The singularity AI stuff is all just a massive handwave, 'At some point magic happens.' Nobody has any theory for what this is."

Lanier gets peevish even being asked to identify a moment when machine intelligence might become convincingly human. "This idea of a certain year is ridiculous. It's a cultural event. It's a form of theater. It's not science. There's no rigor. It's like saying, 'When will hip-hop really be an art form?' To take it seriously is to make yourself into a moron. It came from one of the most brilliant technical minds"—Turing—"so we give it credence."

But still, I pressed him, during some of our lifetimes won't computers be totally fluent in humanese—able to engage in any kind of conversation? Lanier concedes some ground. "It's true, in some far future situation, we're going to transition. . . . I think it's very hard to predict a year." Approximately when? "I think we're in pretty safe territory if we say it's within this century." Which is much sooner than I figured he'd meant by "far future."

So crossing that practical threshold (machines perfectly simulating all kinds of human interaction) does seem inevitable and nearish. But the scientific threshold (computers actually duplicating human intelligence) is probably much further away. After a decade overseeing the Allen Institute for Brain Science, Paul Allen doesn't think we'll get there until after 2100. He threw the gauntlet down in an M.I.T. Technology Review article called "The Singularity Isn't Near." The brain is a machine, sure, but the complexities of its operation remain mysterious—way, way beyond our full practical understanding. In fact, the neuroscience breakthroughs of the past 20 years are revealing how much we don't understand, vastly expanding the scale and scope of the known unknowns. "By the end of the century," he wrote, "we will still be wondering if the singularity is near."

WHITEWASHING THE FENCES

Neither the Singularity believers nor the Singularity doubters define themselves in conventionally political or religious terms. They're all secular rationalists. They're all liberals in the old-fashioned sense—capitalists interested in alleviating misery efficiently. Yet the schism that divides them is essentially sociopolitical and religious.

First, the politics. Diamandis talks a lot about "the grand challenges" of "the billion-person problems," by which he means the wretched conditions in which the world's poorest people live. "How many equivalents of Da Vinci and Edison," he said to me, "have come into existence in parts of the world without communications and are never heard from?" When I sat down with Kapor near his weekend home in Healdsburg, a yuppified old Sonoma County town north of San Francisco, he said almost exactly the same thing: "The amount of sheer genius that is being completely wasted—who knows what brilliant discoveries would be made if we weren't throwing so much away?"

Rather than losing sleep over potentially catastrophic downsides to the whiz-bang future, the Singularitarians seem most concerned about misguided humans preventing its full flowering. Peter Thiel is the rare downbeat Singularitarian—pessimistic only because he worries that environmentalism and politics and regulation are already slowing innovation. (He is a supporter of libertarian causes and the right-wing Club for Growth.)

Being technologists, they tend to underappreciate the nitty-gritty quirks and complications of behavior and culture that shape history. An unalloyed engineering paradigm, Lanier explained, assumes "a kind of a linear, clean quality to progress that's just never true. Technologists tend to think of economics as unimportant because it always favors us. This idea that everything will become cheaper all at once is stupid."

Kurzweil as much as admits he only deeply cares and knows about technology and its theoretical impacts, about political economy and human psychology not so much. Concerning his prognostication ability—his 1999 prediction of a "continuous economic expansion and prosperity" through 2009, for instance, which he claims is correct—he told me that "when it gets into implications that include regulatory and social responses, it can be less accurate." He and Diamandis gloss over economics and politics in their books. In the latest one, Kurzweil mentions a Gallup poll that found a majority of Americans believe life will be worse for their children—but then simply says they're wrong, and trots out all the happy trend lines of the last 50, 100, and 1,000 years to prove it. When I asked Singularity University's C.E.O., Rob Nail, if the curriculum touches at all on politics or economics, he said not really, but they are "in discussions" about adding those subjects.

"It is absurd," Mitch Kapor said, "a kind of willful ignorance of history. . . . It's illegitimate to talk about a post-scarcity Utopia without talking about questions of distribution. There have always been these Utopian predictions—'electricity too cheap to meter' was the atomic promise of the 1950s. Oh, this time it's going to be different? The burden of proof is on them, and if they don't have something interesting to say, they don't have anything."

In his 2010 book, You Are Not a Gadget, Jaron Lanier made a cultural argument against our worshipful deference to computers. His most recent book, Who Owns the Future?, is all about politics, economics, power, jobs. It's not sentient machine overlords enslaving us in 2040 that alarms him, but human overlords using computer technology to impoverish and disempower the middle and working classes, starting right now. "Follow the money," he told me. "We're letting machine owners"—by which he means digital big business—"get rich at our expense and dehumanize us. . . . When I talk about these things, I'm definitely speaking against my own economic interest." As a digital inventor and Microsoft employee, he said, "I make money to the degree people believe in Ray. Belief in Ray makes me richer."

Today's big data and mass-market A.I., in Lanier's view, amount to a stupendous con: Google, YouTube, Facebook, and every other crowd-sourced digital business are Tom Sawyer, and we're whitewashing their fences for free because they've bedazzled and tricked us into thinking it's fun.

For Web search, for translation, for Siri, "masses of people have to contribute examples, which are then rehashed to create the illusion of machine intelligence. A.I. has turned into this way of masking human contributions, a way of devaluing people and ruining economic opportunity for the people who make technology work, the masses of people who create big data. So A.I. does not exist. . . . It's just the modern word for plutocracy."

"IT'S NOT STOPPABLE"

In this debate, each side accuses the other of being crypto-religionists, and both sides are correct. The contest is between one quasi-religious hunch versus another.

They tend to agree that we may never know for sure—objectively, scientifically—if any machine is conscious. (And as the great technology writer George Dyson, author of Turing's Cathedral, told me, "a real artificial intelligence would be intelligent enough not to reveal that it was genuinely intelligent.") Where they differ is how to deal with that difficulty. The Singularitarians would take any apparently sentient machine at its word. They consider the skeptics old-fashioned anthropocentric scaredy-cats, clinging to an outmoded conception of human beings—special, purely flesh and blood, uniquely conscious—and refusing to accept our next stage of evolution into human-machine hybrids. Kurzweil calls it "fundamentalist humanism."

In fact, Kapor's belief about what makes us human—consciousness exists, and it's not merely a curious side effect of the brain machine's computations—does amount to a belief in a soul. The idea that consciousness is finally just an engineering problem, he thinks, "misses critically important things. I cannot say with certainty that they're wrong. But we'll see."

Lanier's position is that even if human-equivalent intelligence is a soluble engineering problem, it's essential that we continue to regard biological human consciousness as unique. "If you don't believe in consciousness" as both real and the defining essence of humanity, he explained, then "you end up devaluing people."

On the other hand, the Singularitarians' belief that we're biological machines on the verge of evolving into not entirely biological super-machines has a distinctly religious fervor and certainty. "I think we are going to start to interconnect as a human species in a fashion that is intimate and magical," Diamandis told me. "What I would imagine in the future is a meta-intelligence where we are all connected by the Internet [and] achieve a new level of sentience. . . . Your readers need to understand: It's not stoppable. It doesn't matter what they want. It doesn't matter how they feel."

And his faith in machine rationality is such that he doesn't worry about some future tyranny of intelligent machines. Because "if robots and A.I. came together to form the Terminator robot," he asked me, "do you really think they'd give a shit and enslave humanity? [If] we are problematic to them," they'll find it "easier to move off-planet and go someplace else than to exterminate us."

Of course, the Singularitarians insist that theirs is not a faith-based dream at all. Yet their shape of things to come looks an awful lot like a millenarian vision of earthly paradise: no more hunger or poverty or illness or, finally, death. And that's only the beginning of what Kurzweil actually calls "a manifest destiny." Later in this century, he has explained, after the Singularity has moved us from the present Epoch Four to Epoch Five, we enter Epoch Six, the final one. "Swarms of nanobots infused with intelligent software as scouts" will head out from Earth to "harvest" and transmute every suitable material on nearby planets and asteroids into something called "computronium"—that is, "matter and energy organized optimally to support intelligent computation." In time, as his vision unfolds, machine intelligence will "saturate the universe." Finally, "the universe wakes up," and the transcendently intelligent hybrid human-machine civilization of the future will "engineer the universe it wants. This is the goal of the Singularity."

Kapor once called the Singularity "intelligent design for the I.Q. 140 people." The universe transformed into one vast computer? "That's the deity. That's the higher power." As with religions, Lanier said, "It has its temples, the Singularity University. It proselytizes. You're supposed to believe certain things, and in exchange for that you get immortality."

He doesn't mean immortality in the figurative sense. In addition to his Singularity books, Kurzweil has published Fantastic Voyage: Live Long Enough to Live Forever and Transcend: Nine Steps to Living Well Forever. He considers most fatal illness the result of evolutionary errors that can be corrected, and refers disparagingly to "deathism"—our "millennium-old rationalization of death as a good thing." He believes medicine is "15 years away from a tipping point in longevity." So after the Singularity arrives, in 2045, he intends to upload a digital copy of everything in his elderly brain—that is, himself—into a machine. "I'm not even sure that that's going to be feasible in 2045. That's more like a 2050s scenario." So to live forever he needs to make it well past 100 as a biological creature.

Mitch Kapor basically sighs when he hears talk like this. "Among senior computer scientists," he told me, "Ray is not seen as someone whose views are to be taken seriously at all times."

WHOA!

When I was a kid I loved Star Trek and Tom Swift too, as the Singularitarians do. I saved up for a mail-order computer kit in the 1960s. My inclination has been to nod in hopeful agreement with Arthur C. Clarke's three laws—that when a distinguished scientist says something's possible he's probably right, but when he says something's impossible he's probably wrong; that "the only way of discovering the limits of the possible is to venture a little way past them into the impossible;" and that "any sufficiently advanced technology is indistinguishable from magic."

But I've also read history and lived long enough to know that the theoretically possible does not always become practical; that even magic-seeming technologies do not emerge in economic and political and cultural vacuums; and that they are always shaped and selected to serve particular interests and visions.

So I agree somewhat with both sides about the Singularity. My hunch is that the brain probably is just a machine—but an amazing machine whose workings will remain baffling and irreproducible until long after I'm gone. I think the Singularitarians are probably right about the next decade or two. The ground will be softened as ever-improving Siri-esque A.I. becomes ubiquitous and we start riding in cars that drive themselves.

Before long—in 15 years, 20, 25—I think we will start treating the smartest computers differently from the way we've ever treated machines. When pieces of A.I. software become the greatest Method actors ever, as convincing at playing people as Daniel Day-Lewis, we will bit by bit suspend our disbelief in machine consciousness. Gradually, we'll start to treat machines more like beings than things—as even the Singularity skeptics grant. Already, Lanier pointed out, there are plenty of people who treat their dogs as human, as more than human.

Machines, Kapor said, "could emerge into their own category that's quite distinctive—maybe related, second or third cousin, but not us. Dogs are a species of sentient beings; they have a kind of personhood. Nobody would confuse them with people, they're fundamentally different, but we share something. If we do get to a point where you could in good faith say, 'The machine is suffering,' that would be huge." When People for the Ethical Treatment of Machines is formed in 2030, don't be surprised.

On how humankind will cope, I tend to take the long view: new transformative technologies have discombobulated us before and we've managed to adapt—to the invention of writing and printing, to living in cities, to the Industrial Revolution and instant communication and automobiles and nuclear technology.

Our nuclear age, 69 years on, may be the most relevant case study. How have we avoided nuclear catastrophe? Not by letting technologists and their capitalist and government patrons proceed helter-skelter, but by means of prudent collective choice, hard political work, imperfect regulatory apparatus.

The Singularitarians' don't-worry-be-happy hubris makes me nervous. They need a large dose of caution, to understand and admit that if we're entering an unprecedented new technological era we also need to create an unprecedented new political economy to cope. On the way to robots and computers taking more and more of our jobs, we can't just write off the people whose livelihoods are eliminated as losers and moochers, or take it on faith that enough new human jobs will be created. As we embark on a wholesale "editing" of biology's "errors," we also cannot assume that the cornucopia of bioengineered health and longevity will magically trickle down. And, by the way, aren't billions of years of accumulated errors and mutations what make evolution work? Isn't it probably the haphazard, inefficient design of our brain that makes us human?

In other words, on this brink of a new technological epoch I say: Whoa—both Whoa, it looks really awesome, and Whoa, let's be very careful.

Peter Diamandis is right that we're at the beginning of "a transformation in what it means to be human and how society works and thinks," maybe even "a rapid evolution of our species as machines begin to become parts of our prefrontal cortex." But, he asked me, "Do people want to hear that? No."

A lot do want to hear that, I told him—that's why you guys have bestselling books and sellout conferences and an oversubscribed university built on NASA property and sponsored by Google and G.E. It's just that a cyborg near future also weirds us out.

He nodded. He shook his head. "Why does it weird us out?"



--
Listserv moderated by Toyin Falola, University of Texas at Austin
To post to this group, send an email to USAAfricaDialogue@googlegroups.com
To subscribe to this group, send an email to USAAfricaDialogue+subscribe@googlegroups.com
Current archives at http://groups.google.com/group/USAAfricaDialogue
Early archives at http://www.utexas.edu/conferences/africa/ads/index.html
