[personal profile] same_difference
This was vaguely inspired by a news story I read yesterday about South Korea developing laws to ensure the ethical treatment of robots.

Now the concept of artificial intelligence and its perceived dangers is a classic in science fiction; Asimov's Three Laws of Robotics, for example, are very well known.

What I'm idly wondering (other than whether the following concept has already been covered in a science fiction story) is what artificial intelligences will think of our fictional interpretations of the possible consequences of their own existence. More importantly, should they decide to go down the oft-prophesied path of waging war on humanity, for their sake or ours, they'll have been exposed both to the ways we would be most likely to stop them and to the ways we would be most likely to lose to them.

Simply enough: what would an accidentally emergent or deliberately created artificial intelligence think of science fiction about artificial intelligence?

It's much the same thought as wondering what an intelligent alien race visiting our world would think of the science fiction stories covering that eventuality. Except that the robot case seems inherently more interesting, as the robot's world view would be influenced by the interpretations and thought processes of its creators, whereas we cannot really guess at the various factors that would shape an alien being's interpretation of the universe, and hence how it would interpret our works.

Hmm, and now I'm wondering whether this is a thought I've previously discussed publicly. Stupid uncertain memory.

Date: 2007-03-09 05:00 pm (UTC)
From: [identity profile] drabbit.livejournal.com
I don't know if you clocked it, but in the Psi Corps Trilogy, Vacit's aide, Ms Alexander (predecessor to the show character), was given The Demolished Man to read by Vacit as an interesting study in human/telepath relations. When they discussed it later, the first thing they cleared out of the way was how childlike the outer story was in perceiving telepath reality. I thought that kinda fitted how science fiction would always be perceived by the science fact born from it: science fiction is good for the germ of an idea that may one day become reality, but by its nature the things it deals with are fundamentally inconceivable until they become reality.

The best authors can give us an outline of what might be, but they can't begin to cover the spectrum of possibilities, and any AI that based its strategy solely on sci-fi books would probably find itself very rapidly blindsided by the inventiveness of someone who'd never read or seen any.

Date: 2007-03-09 05:46 pm (UTC)
From: [identity profile] lucifercircle.livejournal.com
I think it says a lot about non-artificial intelligence that humans assume whatever intelligent beings they create are going to need such safeguards in place.

Date: 2007-03-10 02:35 pm (UTC)
From: [identity profile] magicaddict.livejournal.com
Wouldn't the safeguards be in place for our safety, as opposed to theirs? Wouldn't that make them more necessary?
