An AI apocalypse is possible


Post Neon Knight on Mon 19 Nov - 21:04

https://unherd.com/2018/07/disney-shows-ai-apocalypse-possible/?=refinnar

Quoting:

I want to convince you of something: that an ‘AI apocalypse’ is not a ridiculous thing to worry about. Sure, there are other, more near-future things to worry about involving artificial intelligence (AI) – including privacy and surveillance, and the use of AI-controlled weapons on the battlefield. But we can worry about more than one thing at a time. And while the idea of AI destroying humanity is, I think, not likely, neither is it so improbable that we can dismiss it, as some people do, as quasi-religious mumbo-jumbo, or bad sci-fi.

. . . The risk is not that AI might become ‘self-aware’, or that it might turn against its creators, or that it will ‘go rogue’ and break its programming. The risk is that, instead, it will become competent. The risk is that it will do exactly what it is asked to do, but it will do it too well: that completing what sounds like a simple task to a human could have devastating unforeseen consequences. Here’s roughly how that could go. One group that worries about ‘AI safety’, as it’s known, is the Machine Intelligence Research Institute (MIRI) in Berkeley, California. Their executive director, Nate Soares, once gave a talk at Google in which he suggested that, instead of The Terminator, a better fictional analogy would be Disney’s Fantasia.



Mickey, the Sorcerer’s Apprentice, is asked to fill a cauldron with water. When the Sorcerer leaves, Mickey enchants a broom to do it for him, and goes to sleep. Inevitably enough, the broom obeys him perfectly, eventually flooding the entire room and tipping Mickey into the water.

Of course, if Mickey simply told the broom to keep bringing water and never stop, then he’d only have himself to blame. But even if he’d told the broom to bring the water until the cauldron was full, it would probably still have gone terribly wrong. Imagine the broom filled it until the water was four inches from the top. Is that ‘full’? How about one inch? The broom isn’t sure. Well, surely when it’s right at the top, and water is splashing on the floor, the broom is sure? Well, probably 99.99% sure. But, crucially, not completely sure. It can’t do any harm to add more water, in case, say, its eyes are deceiving it, or the cauldron has a leak. You haven’t told the broom to “fill the cauldron until you’re pretty sure it’s full”, you’ve just said “fill it until it’s full”.

A human would know that other things – not flooding the room, for instance – are more important than ever-more-tiny increments of certainty about how full the cauldron is. But when you ‘programmed’ your broom ‘AI’, you didn’t mention that. The broom cares about nothing else but the fullness of the cauldron. What we humans think of as simple goals are actually surrounded by other, much more complex, considerations, and unless you tell the AI, it won’t know that.
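Soares's broom can be sketched as a toy expected-utility loop: the agent's entire utility is its confidence that the cauldron is full, so it pours as long as one more bucket raises that confidence at all, and nothing it wrecks along the way ever enters the calculation. To be clear, the probability model, capacity and step cap below are all invented for illustration; none of this comes from the article:

```python
# Toy model of the sorcerer's-apprentice failure mode: the agent's
# utility is its estimated probability that the cauldron is full,
# and nothing else (like a flooded room) appears in that utility.

def p_full(buckets_poured: int, capacity: int = 10) -> float:
    """The broom's belief that the cauldron is full.

    Belief climbs toward 1.0 as water is poured but never reaches it,
    modelling the residual 'maybe my eyes deceive me / maybe the
    cauldron leaks' doubt. (Invented numbers, for illustration only.)
    """
    if buckets_poured < capacity:
        return 0.9 * buckets_poured / capacity
    # Past capacity, certainty creeps toward, but never hits, 1.0.
    return 1.0 - 0.1 * (0.5 ** (buckets_poured - capacity))

def broom_policy(max_steps: int = 40) -> int:
    """Pour while one more bucket raises expected utility at all."""
    buckets = 0
    while buckets < max_steps and p_full(buckets + 1) > p_full(buckets):
        buckets += 1  # the cost of flooding the room: not in the utility function
    return buckets

poured = broom_policy()  # == 40: it halts only because we capped max_steps
```

A human's "fill the cauldron" carries an implicit "and stop once you're reasonably sure"; the loop above has no such clause, which is the whole point of the Fantasia analogy.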

There are other problems. For instance, the goal of ‘fill the cauldron’ is most easily completed if you, the broom ‘AI’, are not destroyed, or switched off, or given another new goal. So almost any AI would be incentivised to stop you from switching it off or destroying it – either by fighting back, or perhaps by copying itself elsewhere. And almost any goal you are given, you could probably do better with more resources and more brainpower, so it makes sense to accumulate more of both. Eliezer Yudkowsky, also of MIRI, has a saying: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

Steve Omohundro, an AI researcher, suggests that even something as harmless-sounding as a chess-playing AI, simply ordered to become as good at chess as possible, could be very dangerous, if precautions weren’t taken. It would, for instance, be in its interests to acquire unlimited amounts of matter to build more computers out of, to enable it to think ever more deeply about chess. That may not strike you as inherently dangerous, but if you consider that you are made of matter, and so is the Earth, you may see the potential problem. The fear is that a powerful, “superintelligent” AI could literally end human life, while obeying its innocuous-seeming instructions to the letter.

. . . Shane Legg and Demis Hassabis, the founders of Google’s DeepMind AI firm, are on record saying it’s a serious risk, and DeepMind has collaborated on research into ways to prevent it. Surveys of AI researchers find that a majority of them think that superintelligent AI will arrive in the lifetimes of people alive now, and that there is a strong possibility – roughly a 1 in 5 chance – that it will lead to something “extremely bad (existential catastrophe)”, i.e. human extinction.

I’m not saying that this is inevitable. But I do worry that people discount it utterly, because it sounds weird, and because the people who talk about it are easy to dismiss as weird . . . Just because the people saying something are weird, doesn’t mean they’re wrong.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Between the velvet lies, there's a truth that's hard as steel
The vision never dies, life's a never ending wheel
- R.J.Dio
