#artificialintelligence #consciousness
Originally shared by Singularity 2045
"Although the Turing test has been held as a benchmark for evaluating machine intelligence, there are many concerns regarding the type of intelligence likely to be identified by the Turing Test. One of the most frequently raised concerns argues that some artificial intelligences may simply convey the intelligence of their creators, and there is no way for the Turing Test to distinguish between a machine that is intelligent in this manner and one that is self-aware, conscious, and capable of reflexively thinking about itself."
#artificialintelligence #intelligence #machineconsciousness
http://ieet.org/index.php/IEET/more/chandler20130901
Reminds me of the Chinese Room thought experiment (http://en.wikipedia.org/wiki/Chinese_room)
That also raises a question: if a computer could successfully carry on a conversation and still not have a "conscious mind", how can we be sure the human brain itself isn't just "emulating intelligence" as well?
My answer to that dilemma is: if something acts like it has a rational mind, we should treat it as if it does, regardless of whether it's faking it or not.