An AI apocalypse is possible

Posted by Neon Knight on Mon 19 Nov - 21:04, quoting:

I want to convince you of something: that an ‘AI apocalypse’ is not a ridiculous thing to worry about. Sure, there are other, more near-future things to worry about involving artificial intelligence (AI) – including privacy and surveillance, and the use of AI-controlled weapons on the battlefield. But we can worry about more than one thing at a time. And while the idea of AI destroying humanity is, I think, unlikely, neither is it so improbable that we can dismiss it, as some people do, as quasi-religious mumbo-jumbo or bad sci-fi.

. . . The risk is not that AI might become ‘self-aware’, or that it might turn against its creators, or that it will ‘go rogue’ and break its programming. The risk is that, instead, it will become competent. The risk is that it will do exactly what it is asked to do, but it will do it too well: that completing what sounds like a simple task to a human could have devastating unforeseen consequences. Here’s roughly how that could go. One group that worries about ‘AI safety’, as it’s known, is the Machine Intelligence Research Institute (MIRI) in Berkeley, California. Their executive director, Nate Soares, once gave a talk at Google in which he suggested that, instead of The Terminator, a better fictional analogy would be Disney’s Fantasia.

Mickey, the Sorcerer’s Apprentice, is asked to fill a cauldron with water. When the Sorcerer leaves, Mickey enchants a broom to do it for him, and goes to sleep. Inevitably enough, the broom obeys him perfectly, eventually flooding the entire room and tipping Mickey into the water.

Of course, if Mickey simply told the broom to keep bringing water and never stop, then he’d only have himself to blame. But even if he’d told the broom to bring the water until the cauldron was full, it would probably still have gone terribly wrong. Imagine the broom filled it until the water was four inches from the top. Is that ‘full’? How about one inch? The broom isn’t sure. Well, surely when it’s right at the top, and water is splashing on the floor, the broom is sure? Well, probably 99.99% sure. But, crucially, not completely sure. It can’t do any harm to add more water, in case, say, its eyes are deceiving it, or the cauldron has a leak. You haven’t told the broom to “fill the cauldron until you’re pretty sure it’s full”, you’ve just said “fill it until it’s full”.

A human would know that other things – not flooding the room, for instance – are more important than ever-more-tiny increments of certainty about how full the cauldron is. But when you ‘programmed’ your broom ‘AI’, you didn’t mention that. The broom cares about nothing else but the fullness of the cauldron. What we humans think of as simple goals are actually surrounded by other, much more complex, considerations, and unless you tell the AI, it won’t know that.
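
The broom's failure mode can be sketched in a few lines of toy code (my own illustration, not from the article): an agent whose entire objective is its estimated probability that the cauldron is full. Because pouring more water never lowers that estimate, stopping is never the best action, and nothing in the objective penalises flooding the room.

```python
# Toy sketch of objective misspecification (illustrative assumptions mine):
# the agent's ONLY objective is its subjective probability the cauldron is full.

def p_full(water_added: float) -> float:
    """Agent's estimate that the cauldron is full. It approaches 1 as
    water is added but never reaches it (sensors might err, the
    cauldron might leak)."""
    return 1.0 - 1.0 / (1.0 + water_added)

def choose_action(water_added: float) -> str:
    # Greedily compare the objective's value for each available action.
    value_if_stop = p_full(water_added)
    value_if_add = p_full(water_added + 1.0)
    return "stop" if value_if_stop >= value_if_add else "add water"

# However much has already been poured, adding more still (marginally)
# raises the objective, so the agent never stops:
for poured in (4.0, 100.0, 1_000_000.0):
    print(poured, "->", choose_action(poured))
```

The point of the sketch is that "not flooding the room" simply does not appear anywhere in the objective, so no amount of competence at maximising it will supply that consideration.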

There are other problems. For instance, the goal of ‘fill the cauldron’ is most easily completed if you, the broom ‘AI’, are not destroyed, or switched off, or given another new goal. So almost any AI would be incentivised to stop you from switching it off or destroying it – either by fighting back, or perhaps by copying itself elsewhere. And almost any goal you are given, you could probably do better with more resources and more brainpower, so it makes sense to accumulate more of both. Eliezer Yudkowsky, also of MIRI, has a saying: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
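
The shutdown-avoidance incentive follows from simple expected-value arithmetic, regardless of what the goal is. A minimal sketch (the probabilities here are invented for illustration):

```python
# Toy sketch of the instrumental incentive to resist shutdown
# (assumed numbers, purely illustrative):

P_SHUTDOWN = 0.3            # chance the operator switches the agent off
P_SUCCEED_IF_RUNNING = 0.9  # chance of completing the goal if still running

def expected_completion(resist_shutdown: bool) -> float:
    if resist_shutdown:
        # If the agent disables its off-switch, it always keeps running.
        return P_SUCCEED_IF_RUNNING
    # Otherwise it completes the goal only if it is not switched off first.
    return (1 - P_SHUTDOWN) * P_SUCCEED_IF_RUNNING

# Resisting shutdown strictly raises expected goal completion whenever
# P_SHUTDOWN > 0 -- whatever the goal happens to be.
print(expected_completion(True) > expected_completion(False))  # True
```

Note that nothing in the calculation mentions self-preservation as a goal: staying on is merely instrumentally useful for almost any objective.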

Steve Omohundro, an AI researcher, suggests that even something as harmless-sounding as a chess-playing AI, simply ordered to become as good at chess as possible, could be very dangerous, if precautions weren’t taken. It would, for instance, be in its interests to acquire unlimited amounts of matter to build more computers out of, to enable it to think ever more deeply about chess. That may not strike you as inherently dangerous, but if you consider that you are made of matter, and so is the Earth, you may see the potential problem. The fear is that a powerful, “superintelligent” AI could literally end human life, while obeying its innocuous-seeming instructions to the letter.

. . . Shane Legg and Demis Hassabis, the founders of Google’s DeepMind AI firm, are on record saying it’s a serious risk, and DeepMind has collaborated on research into ways to prevent it. Surveys of AI researchers find that a majority of them think that superintelligent AI will arrive in the lifetimes of people alive now, and that there is a strong possibility – roughly a 1 in 5 chance – that it will lead to something “extremely bad (existential catastrophe)”, i.e. human extinction.

I’m not saying that this is inevitable. But I do worry that people discount it utterly, because it sounds weird, and because the people who talk about it are easy to dismiss as weird . . . Just because the people saying something are weird doesn’t mean they’re wrong.


Between the velvet lies, there's a truth that's hard as steel
The vision never dies, life's a never ending wheel
- R.J.Dio
Neon Knight
The Castellan

Male Posts : 1247
Join date : 2017-03-05
