Castle Europa

An AI apocalypse is possible



Post by Neon Knight Mon 19 Nov - 21:04

https://unherd.com/2018/07/disney-shows-ai-apocalypse-possible/?=refinnar

Quoting:

I want to convince you of something: that an ‘AI apocalypse’ is not a ridiculous thing to worry about. Sure, there are other, more near-future things to worry about involving artificial intelligence (AI) – including privacy and surveillance, and the use of AI-controlled weapons on the battlefield. But we can worry about more than one thing at a time. And while the idea of AI destroying humanity is, I think, unlikely, it is not so improbable that we can dismiss it, as some people do, as quasi-religious mumbo-jumbo or bad sci-fi.

. . . The risk is not that AI might become ‘self-aware’, or that it might turn against its creators, or that it will ‘go rogue’ and break its programming. The risk is that, instead, it will become competent. The risk is that it will do exactly what it is asked to do, but it will do it too well: that completing what sounds like a simple task to a human could have devastating unforeseen consequences. Here’s roughly how that could go. One group that worries about ‘AI safety’, as it’s known, is the Machine Intelligence Research Institute (MIRI) in Berkeley, California. Their executive director, Nate Soares, once gave a talk at Google in which he suggested that, instead of The Terminator, a better fictional analogy would be Disney’s Fantasia.

[Image: the Sorcerer's Apprentice scene from Disney's Fantasia]

Mickey, the Sorcerer’s Apprentice, is asked to fill a cauldron with water. When the Sorcerer leaves, Mickey enchants a broom to do it for him, and goes to sleep. Inevitably enough, the broom obeys him perfectly, eventually flooding the entire room and tipping Mickey into the water.

Of course, if Mickey simply told the broom to keep bringing water and never stop, then he’d only have himself to blame. But even if he’d told the broom to bring the water until the cauldron was full, it would probably still have gone terribly wrong. Imagine the broom filled it until the water was four inches from the top. Is that ‘full’? How about one inch? The broom isn’t sure. Well, surely when it’s right at the top, and water is splashing on the floor, the broom is sure? Well, probably 99.99% sure. But, crucially, not completely sure. It can’t do any harm to add more water, in case, say, its eyes are deceiving it, or the cauldron has a leak. You haven’t told the broom to “fill the cauldron until you’re pretty sure it’s full”, you’ve just said “fill it until it’s full”.

A human would know that other things – not flooding the room, for instance – are more important than ever-more-tiny increments of certainty about how full the cauldron is. But when you ‘programmed’ your broom ‘AI’, you didn’t mention that. The broom cares about nothing else but the fullness of the cauldron. What we humans think of as simple goals are actually surrounded by other, much more complex, considerations, and unless you tell the AI, it won’t know that.
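
The reasoning in the last two paragraphs can be made concrete with a toy model. Below is a minimal Python sketch (not from the article or MIRI; the probability curve p_full and its numbers are invented for illustration). The broom's entire utility is its subjective probability that the cauldron is full; that probability creeps towards 1 but never reaches it, and flooded floors appear nowhere in the utility, so "pour" beats "stop" at every step:

```python
# Toy model of the broom: utility = subjective probability the cauldron
# is full, and nothing else. The curve below is invented; all that
# matters is that it rises towards 1 without ever reaching it.

def p_full(buckets: int) -> float:
    """Broom's probability that the cauldron is full after `buckets` pours.
    Never reaches 1: its eyes might deceive it, the cauldron might leak."""
    return 1.0 - 0.5 ** buckets

def best_action(buckets: int) -> str:
    utility_stop = p_full(buckets)       # certainty if it stops now
    utility_pour = p_full(buckets + 1)   # certainty after one more bucket
    # Flooded rooms cost nothing here, so any gain in certainty wins.
    return "stop" if utility_stop >= utility_pour else "pour"

for n in (1, 5, 10, 20):
    print(f"{n:2d} buckets: P(full) = {p_full(n):.7f} -> {best_action(n)}")
# Every line prints "pour": the marginal gain in certainty shrinks
# but never hits zero, and the stated goal mentions nothing else.
```

The particular numbers are irrelevant; the shape is the point. So long as one more bucket buys any certainty at all, and the flood costs nothing under the stated goal, stopping is never the optimal action.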

There are other problems. For instance, the goal of ‘fill the cauldron’ is most easily completed if you, the broom ‘AI’, are not destroyed, or switched off, or given another new goal. So almost any AI would be incentivised to stop you from switching it off or destroying it – either by fighting back, or perhaps by copying itself elsewhere. And almost any goal you are given, you could probably do better with more resources and more brainpower, so it makes sense to accumulate more of both. Eliezer Yudkowsky, also of MIRI, has a saying: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
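
The shutdown point yields to the same kind of toy model. In the sketch below (again invented, with made-up probabilities), an agent that simply compares expected goal-achievement across its options ends up preferring to resist being switched off, whatever its goal happens to be:

```python
# Made-up numbers, purely for illustration of the incentive structure.
P_GOAL_IF_RUNNING = 0.9  # chance the cauldron gets filled if the agent keeps running
P_GOAL_IF_OFF = 0.0      # a switched-off broom fills no cauldrons

def expected_goal(action: str) -> float:
    """Expected goal-achievement under each response to a shutdown attempt."""
    if action == "allow shutdown":
        return P_GOAL_IF_OFF
    if action == "resist shutdown":  # block the switch, or copy itself elsewhere
        return P_GOAL_IF_RUNNING
    raise ValueError(f"unknown action: {action}")

actions = ["allow shutdown", "resist shutdown"]
print(max(actions, key=expected_goal))  # -> "resist shutdown"
```

Notice that nothing in the sketch mentions hostility: "resist" wins on arithmetic alone, which is exactly Yudkowsky's point about the AI neither hating nor loving you.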

Steve Omohundro, an AI researcher, suggests that even something as harmless-sounding as a chess-playing AI, simply ordered to become as good at chess as possible, could be very dangerous, if precautions weren’t taken. It would, for instance, be in its interests to acquire unlimited amounts of matter to build more computers out of, to enable it to think ever more deeply about chess. That may not strike you as inherently dangerous, but if you consider that you are made of matter, and so is the Earth, you may see the potential problem. The fear is that a powerful, “superintelligent” AI could literally end human life, while obeying its innocuous-seeming instructions to the letter.
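
Omohundro's chess example fits the same mould. In this hedged sketch (the strength curve and the menu of plans are both invented), utility is strictly increasing in resources, so of any set of plans the maximiser prefers whichever converts the most matter into computers:

```python
# Invented, strictly increasing, unbounded "how good at chess" score:
# more matter -> more computers -> deeper thinking about chess.
def chess_strength(tonnes_of_matter_as_computers: float) -> float:
    return 1000.0 + 500.0 * tonnes_of_matter_as_computers ** 0.5

plans = {
    "use one laptop": 0.001,
    "build a data centre": 50_000.0,
    "convert the Earth": 5.97e21,  # roughly the Earth's mass in tonnes
}
print(max(plans, key=lambda p: chess_strength(plans[p])))
# -> "convert the Earth": with no cap or competing term in the utility,
#    the argmax is always the plan that grabs the most matter.
```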

. . . Shane Legg and Demis Hassabis, two of the founders of Google’s DeepMind AI firm, are on record saying it’s a serious risk, and DeepMind has collaborated on research into ways to prevent it. Surveys of AI researchers find that a majority of them think that superintelligent AI will arrive in the lifetimes of people alive now, and that there is a strong possibility – roughly a 1 in 5 chance – that it will lead to something “extremely bad (existential catastrophe)”, i.e. human extinction.

I’m not saying that this is inevitable. But I do worry that people discount it utterly, because it sounds weird, and because the people who talk about it are easy to dismiss as weird . . . Just because the people saying something are weird doesn’t mean they’re wrong.





Between the velvet lies, there's a truth that's hard as steel
The vision never dies, life's a never ending wheel
- R.J. Dio
Neon Knight
The Castellan

