AI Theory...

27 Replies - 3050 Views - Last Post: 18 September 2012 - 10:08 AM

#16 Zero Cool

Re: AI Theory...

Posted 13 September 2012 - 08:40 PM

I am extremely exhausted physically and mentally and this thread is blowing my mind :stuart:

#17 jon.kiparsky

Re: AI Theory...

Posted 14 September 2012 - 01:56 PM

Aphex19, on 01 September 2012 - 07:45 AM, said:

Have you ever heard of the Turing Test or the Chinese Room experiment? Alan Turing proposed that a computer should be considered intelligent if it can fool human beings into thinking that they are speaking to a human. This test is notoriously difficult and (from what I can tell) most AIs fail miserably under proper testing. I mean, honestly, would Cleverbot (for example) fool most people under scrutiny? I doubt it. Perhaps. However, John Searle argued that even if a computer could pass the Turing Test, it still has no real understanding of what it's doing (it's just following instructions), so it shouldn't be considered intelligent. He demonstrated this with the Chinese Room experiment.



If we want to get into the phil of mind stuff, Searle's Chinese Room is more of an assertion than an experiment. He simply declares the Turing Test to be invalid, because the machine isn't made of the right "stuff" to think. So in short, he's resorting to something like a "soul" - some sort of thinky stuff that only people have. This is based on the "man born of woman" test: if you were made by a human daddy and a human mommy, you're human, and you're intelligent. Otherwise, no.

The Turing test is a more interesting one. The test there is not on the computer, it's on the person doing the testing. The premise of the test boils down to this: if I can't tell whether the person I'm conversing with is human or not, I should act on the assumption that they are human. (The switch from "intelligent" to "human" is intentional.) This relates to the notion of "intentionality", which you'll find quite a lot in Daniel Dennett's writing. The idea there is that if it is useful to assume that an entity has beliefs, thoughts, desires, fears, and so forth, then we should so assume, to the degree that it is useful. This is as opposed to the "mechanical" stance, in which you assume that an entity is simply responding in rote mechanical fashion without such internal states.
For example, if you write a good chess program, you know every line of code that went into it. If something goes wrong with the program, you can debug it: this requires taking the mechanical stance. However, if you're going to play a game of chess against it, it's likely that you'll assign certain beliefs and desires to that program, and play as if it has those states. "The computer knows the rules of the game and wants to win" is a much more effective approach to that world than considering the code or the algorithm. The better your program, the more true this is. For example, if you wrote a program that surprised you by trying to cheat (say, by moving a knight to a more convenient position when it thinks you're not looking), you might think you've written a very good program indeed, though perhaps not a very good chess opponent!
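(To make the two stances concrete, here's a toy, purely illustrative move chooser; nothing here comes from an actual engine. "The computer wants to win" is shorthand for "it picks the legal move its evaluation function scores highest": the intentional-stance description on top, a mechanical argmax underneath.)

def choose_move(position, legal_moves, evaluate):
    # Score every legal move with the evaluation function and keep the best.
    # There is no "desire" anywhere in here, only a mechanical maximization.
    best_move, best_score = None, float("-inf")
    for move in legal_moves:
        score = evaluate(position, move)
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# Hypothetical usage with a dummy evaluation function:
print(choose_move("start", ["e4", "d4", "Nf3"],
                  lambda pos, mv: {"e4": 0.3, "d4": 0.2, "Nf3": 0.1}[mv]))  # -> e4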

Consider two other cases. We might like to say "my car doesn't want to start this morning" but we wouldn't dream of respecting its wishes or trying to outsmart it. We'd just call a mechanic.
On the other hand, we sometimes talk about people as though they're simply machines, but even the most callous of us (i.e., me) can see that it's simply more effective to treat those people as if they have beliefs, desires, etc., and to manipulate those beliefs and desires, than to try to mechanically alter their behavior.


Searle's argument, as far as I can tell, ignores all of this in favor of a flat declaration that the program can't possibly know, because of the sort of thing that it is. Any argument from lived experience is simply declared to be null and void. The argument has never worked for me, because of this.

There's a bigger flaw in the argument as well: Searle doesn't seem to believe that an intelligence can be composed of things which are not themselves intelligent. This is in flat conflict with everything we know or can expect to learn about the brain, and about the mind that is a byproduct of the brain's functioning. Neurons are not intelligent; they have no fashion sense, no loyalty to their fellow neurons, no hunger to make something of themselves. Human beings are intelligent: they have, to varying degrees, fashion sense, loyalty to their fellows, and ambition, among many other traits. These characteristics, which all together compose a mind, are the product of a brain, and ultimately of neurons. Searle cannot believe this and also maintain his argument.

Ergo, Searle's argument fails badly.

#18 ishkabible

Re: AI Theory...

Posted 14 September 2012 - 03:54 PM

Quote

I kind of agree with John Searle. I think that a computer should only be considered intelligent if it can show understanding and reasoning ability, although I'm extremely cynical of any such thing ever happening with the classic computer system. Maybe interfacing organic matter with computers is the way to go? Cyborgs? Our brains work quite differently from computers, which leads to different strengths. Humans are great at abstract thinking and reasoning, whereas computers are great at just executing specific tasks very quickly.


I would argue that humans have "weak Natural Intelligence". We too only follow steps laid out by our brain's pattern of neurons; unless, of course, there is something MUCH MUCH more interesting going on, something that I could only describe as "spiritual" and that would truly give us free will. We just happen to be running a system so adept at solving problems that we can hardly understand it.

Free will sounds like a hoax to me. It implies that there is something that doesn't follow any definable set of rules; at best, "free will" is caused by the unpredictability of quantum objects. I do, however, like to think I have free will, so I'm not sure what I want to believe. Part of me says I have no choice in what I think or do, and part of me says that life means nothing without that choice.

Even if I don't have free will, I can certainly still enjoy things, and I enjoy enjoying things :P That, to me, is what life is about: enjoying the ride.



#19 jon.kiparsky

Re: AI Theory...

Posted 14 September 2012 - 07:43 PM

ishkabible, on 14 September 2012 - 05:54 PM, said:

Free will sounds like a hoax to me. It implies that there is something that doesn't follow any definable set of rules; at best, "free will" is caused by the unpredictability of quantum objects. I do, however, like to think I have free will, so I'm not sure what I want to believe. Part of me says I have no choice in what I think or do, and part of me says that life means nothing without that choice.

Even if I don't have free will, I can certainly still enjoy things, and I enjoy enjoying things :P That, to me, is what life is about: enjoying the ride.

I take the materialist position, obviously, so I don't have any use for non-material entities in my explanations. No souls, no spirit, no "minds" that are not simply epiphenomenal byproducts of brain activity, no nothin'. I don't insist that others take this position in their own views, but it's the only one that makes sense to me.
I also don't make much of the notion of quantum indeterminacy as a source of free will. Randomness is not freedom; it's just an undetermined constraint. A particle in a state of Brownian motion is not free.
So where does that leave me? Well, if I'm to be strictly materialist about it, and if I take seriously the notion that my "consciousness" and my "mind" are simply products of normal brain activity (or perhaps abnormal, but in a normal sort of way), then I have to consider the possibility that at bottom I have no real free will.

That is, that the "I" that I perceive to be making these decisions and feeling these pressures to sleep in, to go to work in the morning, to drink a cup of coffee, to think about philosophy of mind, is simply a product of decisions bubbling up from parts of my brain (which are not themselves conscious), and the "I" part simply reflects and rationalizes the decisions that are made for me.

This is reasonable; there is, in fact, evidence supporting it in the literature. It's not absurd, and it turns out that it doesn't bother me at all; indeed, it shouldn't. How can this be?

EDIT: I find that I've gone overboard with the wall of text here, and this seems like a good place to break off. Open the spoiler at your own risk. If you don't care about my position on free will and consciousness, you will miss nothing. If you do care, you'll have already opened the spoiler by now.

Spoiler



#20 ishkabible

Re: AI Theory...

Posted 15 September 2012 - 01:03 PM

Ya, you articulated some of what I've thought about very well, particularly the issue of pretending to have free will: from one side I can't see how I don't have free will, and yet free will makes no sense. I don't think I've ever gotten past that point other than recognizing the paradox. To me, even without free will, life is still enjoyable. Saying life has no meaning without free will is like saying a book is no good because you don't get to control the characters.

I kinda want to take a philosophy class on this at some point; we'll see.



#21 jon.kiparsky

Re: AI Theory...

Posted 15 September 2012 - 01:07 PM

ishkabible, on 15 September 2012 - 03:03 PM, said:

To me, even without free will, life is still enjoyable. Saying life has no meaning without free will is like saying a book is no good because you don't get to control the characters.


Stealing that.

Quote

I kinda want to take a philosophy class at some point; we will see.


Have a look at Dennett some time. Consciousness Explained can be read by any intelligent layperson; it assumes only curiosity, and no background in philosophy or cognitive science is required.

#22 NecroWinter

Re: AI Theory...

Posted 15 September 2012 - 05:09 PM

This is the best thread ever, and the fact that philosophy is so useful in AI just makes me love it that much more.

As for the free will debate, I am a hard determinist. We are just machines, but instead of running on 1s and 0s we run on emotional reward systems. Logic itself is purely deterministic, and the same goes for reason. The only real debate about whether we have free will concerns emotions, but those can be explained by chemical releases in the brain, which in turn are determined by your genetics and, to a large part, your environment (which you don't have control over).



#23 ishkabible

Re: AI Theory...

Posted 15 September 2012 - 08:47 PM

I was looking at the natural science and humanities/social science electives at my uni for my degree, and I figured out that A) if I take all of my required natural science electives in physics, I can get a minor in physics; B) if I take all of my required humanities/social science electives in philosophy, I can get a minor in philosophy; and C) within the constraints of the electives I have to take, these interest me more than the other options, regardless of whether I get the minors or not.

So I'm almost definitely going to be taking classes involving this.



#24 superkb10

Re: AI Theory...

Posted 16 September 2012 - 08:05 PM

I figure the way to go with creating an AI is to let it "evolve" like organic life did. We have to research the first form of life: the most basic organism, with the least developed form of thinking. Create something that works like that basic organism, then create a set of rules, or an environment, for the AI to work in, speed it up a few million times, wait a couple of years, and we get something like organic intelligence.

#25 ishkabible

Re: AI Theory...

Posted 16 September 2012 - 08:27 PM

Is that all?

#26 Choscura

Re: AI Theory...

Posted 17 September 2012 - 09:11 PM

It almost sounds like exactly what everybody's been doing the whole time, doesn't it?

#27 ishkabible

Re: AI Theory...

Posted 17 September 2012 - 11:03 PM

It's sorta almost like a "genetic" algorithm.
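For anyone who hasn't seen one, here's a minimal sketch of what a genetic algorithm looks like in code. The bit-counting fitness function, population size, and mutation rate are just toy values for illustration, not a real "environment":

import random

def fitness(genome):
    # Toy objective: count the 1-bits. A real simulated "environment" would go here.
    return sum(genome)

def mutate(genome, rate=0.01):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=100, genome_len=50, generations=200):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents, then breed mutated children from them.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(fitness(evolve()))  # approaches 50, i.e. an all-ones genome

The "speed it up a few million times" part is the hard bit: real evolution had a staggeringly rich environment and fitness landscape, which a toy fitness function like this doesn't come close to capturing.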

#28 h4nnib4l

Re: AI Theory...

Posted 18 September 2012 - 10:08 AM

When you masturbate in public, you generally go to jail. When you intellectually masturbate in a forum, you get reputation points. I like it here.
