TECHNOLOGY

The Myth Of AI

Jaron Lanier
[11.14.14]

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even before that. There's always been a question about whether a program is something alive or not, since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us. ...That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."

In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what was perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. ... That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing to the fortunes of whoever runs the computers. You're saying, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," and, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The new religious idea of AI is a lot like the economic effect of the old idea, religion.


[39:47]

JARON LANIER is a Computer Scientist; Musician; Author of Who Owns the Future? Jaron Lanier's Edge Bio Page

THE REALITY CLUB: George Church, Peter Diamandis, Lee Smolin, Rodney Brooks, Nathan Myhrvold, George Dyson, Pamela McCorduck, Sendhil Mullainathan, Steven Pinker, Neil Gershenfeld, D.A. Wallach, Michael Shermer, Stuart Kauffman, Kevin Kelly, Lawrence Krauss, Robert Provine, Stuart Russell, Kai Krause

INTRODUCTION

by John Brockman

This past weekend, during a trip to San Francisco, Jaron Lanier stopped by to talk to me for an Edge feature. He had something on his mind: news reports about comments by Elon Musk and Stephen Hawking, two of the most highly respected and distinguished members of the science and technology community, on the dangers of AI. ("Elon Musk, Stephen Hawking and fearing the machine" by Alan Wastler, CNBC 6.21.14). He then talked, uninterrupted, for an hour.

As Lanier was about to depart, John Markoff, the Pulitzer Prize-winning technology correspondent for THE NEW YORK TIMES, arrived. Informed of the topic of the previous hour's conversation, he said, "I have a piece in the paper next week. Read it." A few days later, his article, "Fearing Bombs That Can Pick Whom to Kill" (11.12.14), appeared on the front page. It's one of a continuing series of articles by Markoff pointing to the darker side of the digital revolution.

This is hardly new territory. Cambridge cosmologist Martin Rees, Astronomer Royal and former President of the Royal Society, addressed similar topics in his 2004 book, Our Final Hour: A Scientist's Warning, as did computer scientist Bill Joy, co-founder of Sun Microsystems, in his highly influential 2000 article in Wired, "Why The Future Doesn't Need Us: Our most powerful 21st-century technologies — robotics, genetic engineering, and nanotech — are threatening to make humans an endangered species."

But these topics are back on the table again, and informing the conversation in part is Superintelligence: Paths, Dangers, Strategies, the recently published book by Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute. In his book, Bostrom asks questions such as "What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?"

I am encouraging, and hope to publish, a Reality Club conversation with comments (up to 500 words) on, but not limited to, Lanier's piece. This is a very broad topic that involves many different scientific fields, and I am sure the Edgies will have lots of interesting things to say.

—JB

Related on Edge:

Jaron Lanier: "Digital Maoism: The Hazards of the New Online Collectivism" (2006) "One Half A Manifesto" (2000) 
Kevin Kelly: "The Technium" (2014) 
George Dyson: "Turing's Cathedral" (2004) 


THE MYTH OF AI

A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to take up a question it hadn't been asked to decide, and declare that corporations are people. That's a cover for making it easier for big money to have an influence in politics. But there's another angle to it, which I don't think has been considered as much: the tech companies, which are becoming the most profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than that. They might be people because the Supreme Court said so, but they're essentially algorithms.

If you look at a company like Google or Amazon and many others, they do a little bit of device manufacture, but the only reason they do so is to create a channel between people and algorithms. And the algorithms run on these big cloud computing facilities.

The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court meeting this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly, they've become intertwined.

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even before that. There's always been a question about whether a program is something alive or not, since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us.

The Technium

Kevin Kelly
[2.3.14]

https://vimeo.com/84396480

KEVIN KELLY is Senior Maverick at Wired magazine. He helped launch Wired in 1993, and served as its Executive Editor until January 1999. He is currently editor and publisher of the popular Cool Tools, True Films, and Street Use websites. His most recent books are Cool Tools and What Technology Wants. Kevin Kelly's Edge Bio Page


Introduction

by John Brockman

A few weeks ago David Carr profiled Kevin Kelly on page 1 of the New York Times Business section. He wrote that Kelly's pronouncements were "often both grandiose and correct." That’s a pretty good summary of Kevin Kelly's style and his prescience.

For the thirty years I've known him, Kelly has been making bold declarations about the world we are crafting with new technologies. He first began to attract notice when he helped found Wired as its first executive editor. "The culture of technology," he notes, "was the prime beat of Wired. When we started the magazine 20 years ago, we had no intentions to write about hardware—bits and bauds. We wrote about the consequences of new inventions and the meaning of new stuff in our lives. At first, few believed us, and many dismissed my claim that technology would become the central driver of our culture. Now everyone sees this centrality, but some are worried this means the end of civilization."

The biggest change in our lives is the rate of change, and while for many Facebook and Twitter are a fact of life today, it's interesting to note that today (February 4th) marks only the 10th anniversary of the founding of Facebook by Mark Zuckerberg. During that same month, Forbes Magazine published its 2004 Billionaires List; it came out during the Edge Dinner in Monterey, California. Larry Page, present at the dinner, made the list for the first time. When he showed me the Forbes headline, it was on his Blackberry pager. And it wasn't until 2006, just 8 years ago, that Twitter was founded by Ev Williams and his colleagues. If you read your news electronically at that time, most likely it was on a pager. "Sharing" was something you did at a Chinese restaurant.

Kelly recently published a successful over-sized book based on his blog, Cool Tools. He is one of the few actually making a living from a blog, while also reinstating print as a great publishing medium (Carr’s point). He doesn’t just pontificate; he innovates. He was one of the founders, for example, of the “quantified self” movement.

Kelly is well aware that his complete embrace of what he calls "The Technium" is a lightning rod for criticism. But he points out that "we are still at the beginning of the beginning. We have just started to make a technological society. The technological changes in the next 20 years will dwarf those of the last 20 years. It will almost be like nothing at all has happened yet."

In the meantime Kelly is doing what he's been up to for decades, acting as a sensing and ruddering mechanism for the rest of us, finding his way through this new landscape.

—JB


TECHNOLOGY AND INNOVATION AS A NATIONAL DEVELOPMENT STRATEGY

A Conversation with
David Moinina Sengeh
[7.12.13]

I think that, beyond me, beyond our individual silos, achieving prosperity and development in a place like Sierra Leone does not involve giving free devices to victims, which leads to low self-efficacy and dependence on external actors; we need to make new minds. That involves giving young people the platform to innovate, to learn from making, and to solve very tangible problems within their communities.

DAVID MOININA SENGEH is a doctoral student at the MIT Media Lab, and a researcher in the Lab’s Biomechatronics group. David Moinina Sengeh's Edge Bio Page


[42:25 minutes]

http://vimeo.com/81882605
 


TECHNOLOGY AND INNOVATION AS A NATIONAL DEVELOPMENT STRATEGY

These days I'm mostly asking myself two main questions. One of them is focused on augmenting the human body—redefining disability and disease: What does it mean to be human? What does it mean to have an extension of your body as a machine? How comfortable can you be? Beyond that, what happens when you connect those different machines—those different bionic elements—such that they can communicate with each other, such that they can communicate with your body and have a closed-loop input/output between the machine and your body?

The second question is about how you use technology and innovation as a national development strategy, such that it's not just an experimental thing here and there (let's put laptops in here, let's set up a space over there, and let's take maybe the Silicon Valley model and drop it off in Sierra Leone). How do you answer the question, “If I give you 50 billion dollars, how could you use it, through technology and innovation or whatever else, to change a country towards prosperity? Towards good governance? Towards independence? How does that happen?” And so I start with the question of connecting the human body to machines comfortably.
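To make the "closed loop" in the first question more concrete: in control terms it means the device reads a signal from the body, acts, and the body's response feeds back into the next reading. The sketch below is a minimal, purely illustrative Python loop under that assumption; the sensor, the actuator, and the numbers are hypothetical stand-ins, not Sengeh's actual system or any real prosthetics API.

# Purely illustrative sketch of a closed-loop sense -> decide -> actuate cycle.
# Every function and number here is a hypothetical stand-in.

def read_body_signal() -> float:
    # Pretend sensor: e.g., a muscle-activation level between 0 and 1.
    return 0.4  # placeholder reading

def actuate(command: float) -> None:
    # Pretend actuator: e.g., set a motor torque or joint stiffness.
    print(f"actuator command: {command:.2f}")

def control_step(target: float, gain: float = 0.8) -> float:
    # One pass around the loop: sense, compare to the target, act.
    signal = read_body_signal()   # input from the body
    error = target - signal       # distance from the desired state
    command = gain * error        # simple proportional controller
    actuate(command)              # output back to the body
    return error

# In a real device this loop runs continuously; three iterations suffice here.
for _ in range(3):
    control_step(target=0.6)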


COLLECTIVE INTELLIGENCE

Thomas W. Malone
[11.21.12]

As all the people and computers on our planet get more and more closely connected, it's becoming increasingly useful to think of all the people and computers on the planet as a kind of global brain.

THOMAS W. MALONE is the Patrick J. McGovern Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. He was also the founding director of the MIT Center for Coordination Science and one of the two founding co-directors of the MIT Initiative on "Inventing the Organizations of the 21st Century".

Thomas W. Malone's Edge Bio Page 


[31:45 minutes]


COLLECTIVE INTELLIGENCE

Pretty much everything I'm doing now falls under the broad umbrella that I'd call collective intelligence. What does collective intelligence mean? It's important to realize that intelligence is not just something that happens inside individual brains. It also arises with groups of individuals. In fact, I'd define collective intelligence as groups of individuals acting collectively in ways that seem intelligent. By that definition, of course, collective intelligence has been around for a very long time. Families, companies, countries, and armies: those are all examples of groups of people working together in ways that at least sometimes seem intelligent.

"THE CLOTHESLINE PARADOX"

Tim O'Reilly
[10.4.12]

If we're going to get science policy right, it's really important for us to study the economic benefit of open access and not accept the arguments of incumbents. Existing media companies claim that they need ever stronger and longer copyright protection and new, draconian laws to protect them, and meanwhile, new free ecosystems, like the Web, have actually led to enormous wealth creation and enormous new opportunities for social value. And yes, they did in fact lead in some cases to the destruction of incumbents, but that's the kind of creative destruction that we should celebrate in the economy. We have to accept that, particularly in the area of science, there's an incredible opportunity for open access to enable new business models.
 

TIM O'REILLY is the founder and CEO of O'Reilly Media, Inc., a leading computer book publisher. O'Reilly Media also hosts conferences on technology topics, including the O'Reilly Open Source Convention, the Strata series of conferences on big data, and Tools of Change for Publishing. O'Reilly Media's Maker Media unit publishes Make Magazine and operates Maker Faire, the world's largest gathering of DIY hardware enthusiasts and entrepreneurs. O'Reilly AlphaTech Ventures is a leading early stage venture capital firm.

Tim O'Reilly's Edge Bio Page


[14:21 minutes]

http://vimeo.com/80821991


"THE CLOTHESLINE PARADOX"

[TIM O'REILLY:] I've been thinking a lot lately about a piece I read in Stewart Brand's CoEvolution Quarterly back in 1975. It's called the "Clothesline Paradox." The author, Steve Baer, was talking about alternative energy. The thesis is simple: You put your clothes in the dryer, and the energy you use gets measured and counted. You hang your clothes on the clothesline, and it "disappears" from the economy. It struck me that there are a lot of things that we're dealing with on the Internet that are subject to the Clothesline Paradox. Value is created, but it's not measured and counted. It's captured somewhere else in the economy.

"THE CLOTHESLINE PARADOX"

Topic: 

  • TECHNOLOGY
http://vimeo.com/80821991

"If we're going to get science policy right, it's really important for us to study the economic benefit of open access and not accept the arguments of incumbents. Existing media companies claim that they need ever stronger and longer copyright protection and new, draconian laws to protect them, and meanwhile, new free ecosystems, like the Web, have actually led to enormous wealth creation and enormous new opportunities for social value.

THINKING IN NETWORK TERMS

Albert-László Barabási
[9.24.12]

One question that fascinated me in the last two years is, can we ever use data to control systems? Could we go as far as not only describing and quantifying and mathematically formulating and perhaps predicting the behavior of a system, but also using this knowledge to control a complex system, to control a social system, to control an economic system?
 

ALBERT-LÁSZLÓ BARABÁSI is a Distinguished University Professor at Northeastern University, where he directs the Center for Complex Network Research, and holds appointments in the Departments of Physics, Computer Science and Biology, as well as in the Department of Medicine, Harvard Medical School and Brigham and Women's Hospital, and is a member of the Center for Cancer Systems Biology at the Dana-Farber Cancer Institute.

Albert-László Barabási Edge Bio Page



[54:58 minutes]


THINKING IN NETWORK TERMS

[ALBERT-LÁSZLÓ BARABÁSI:] We always lived in a connected world, except we were not so much aware of it. We were aware of it down the line, that we're not independent of our environment, that we're not independent of the people around us. We are not independent of the many economic and other forces. But for decades we never perceived connectedness as being quantifiable, as being something that we can describe, that we can measure, that we have ways of quantifying. That has changed drastically in the last decade, at many, many different levels.
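What "quantifying connectedness" looks like in practice can be made concrete with a small sketch. The Python code below is illustrative only: Barabási does not name any particular tool, and the networkx library and the toy random network are my own assumptions. It computes a few standard measures that turn a network's connectedness into numbers: average degree, clustering, number of components, and shortest path length.

# Minimal sketch of measuring connectedness on a toy network (illustrative only).
import networkx as nx

# Build a small random (Erdős–Rényi) network: 1,000 nodes, each pair
# connected with probability 0.01.
G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=42)

# Basic measures that turn "connectedness" into numbers.
avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()
clustering = nx.average_clustering(G)           # how cliquish neighborhoods are
components = nx.number_connected_components(G)  # is the network in one piece?

# Average shortest-path length is only defined on a connected graph,
# so compute it on the largest connected component.
largest_cc = G.subgraph(max(nx.connected_components(G), key=len))
avg_path = nx.average_shortest_path_length(largest_cc)

print(f"average degree:       {avg_degree:.2f}")
print(f"average clustering:   {clustering:.3f}")
print(f"connected components: {components}")
print(f"avg shortest path:    {avg_path:.2f}")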
