Reasoning v. other AI stuff

Acme Debugging L.Fine at lycos.co.uk
Sat May 3 23:45:11 EST 2003


georgedance at hotmail.com (George Dance) wrote in message news:<6312c50b.0305030655.60783711 at posting.google.com>...
> L.Fine at lycos.co.uk (Acme Debugging) wrote in message news:<35fae540.0304301838.331baf94 at posting.google.com>...

> Well, classical logic can give you no more than conclusions that are
> as true (or as certain) as your premises; if your premises have a
> probability of less than 1, so will your conclusion. But it can give
> you conclusions that are *as true* as your premises, and that's a
> pretty ... [Usenet or your poster chopped this here].

I didn't connect your post with the references at first, but
now I remember you from sci.logic. You have the knack of
bridging the academic with the real world (or maybe you're
just willing to suffer the embarrassment? ;-) Anyway, I would
like you for a logic instructor.

> 1.  Either my wife is at home, or she's not at home.    (LEM)
> 2.  I can't see my wife, and she doesn't answer when I call her.
> (direct experience)
> 3.  If 2 is true, then it is false my wife is at home.
> 4.  If my wife is not at home, she's gone out.   (by definition)
> ----------------------------------
> 5.  My wife has gone out.                        2,3,4 HS

<snip the assignment of probabilities to the above>

> So you multiply (1x1x.9x1x1) and conclude that there is a probability
> of .9 that your wife has gone out.
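
For concreteness, here is that multiplication written out in a
few lines of Python (the numbers are yours; nothing new is
assumed):

    # Probabilities assigned to premises 1-4 and the rule HS.
    chain = [1.0, 1.0, 0.9, 1.0, 1.0]

    p_conclusion = 1.0
    for p in chain:
        p_conclusion *= p

    print(p_conclusion)   # 0.9 -- the probability she has gone out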

I would like to add one more probability calc to round out the
picture, then make two comments. Let's say there is a .8 chance
my wife's boyfriend will come to town, and if he does there is
a .7 chance she will sneak off into the night to see him, so
that argument alone gives the conclusion a probability of
.8 x .7 = .56. Now, assuming these two arguments are completely
independent but coincidentally share the same conclusion, the
probability of the conclusion rises above the probability in
either argument alone: it fails only if both arguments fail, so
it comes to 1 - (.1 x .44) = .956.
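
Here is the same calc as a quick Python sketch (assuming, as
above, complete independence between the two arguments; the
fails-only-if-both-fail combination is one standard way to pool
independent arguments for the same conclusion):

    # Argument 1: the five-step chain above.
    p1 = 1.0 * 1.0 * 0.9 * 1.0 * 1.0    # 0.9

    # Argument 2: boyfriend comes to town AND she sneaks off.
    p2 = 0.8 * 0.7                      # 0.56

    # The conclusion fails only if both arguments fail.
    p_combined = 1 - (1 - p1) * (1 - p2)

    print(round(p_combined, 3))         # 0.956

So the two arguments together make the conclusion more probable
than either one alone, as claimed.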

My first comment is that there seems to be no need of a logic
system, classical or otherwise, to put this on a computer. All
one needs is necessary and sufficient conditions, or simple
objective logic, usually illustrated with Venn diagrams. Throw
in some real-world examples of sets of things, and I think you
have all the logic needed to derive these simple probability
calculations as well. Those calculations seem more or less
sufficient for how brains reason intuitively, or at least for
what I do "in my head" to successfully navigate life. I see no
reason to use a more complex system when this suffices for my
narrow problem definition of a first-step reasoner. Though, as
I stated, you might run out of applications pretty fast and
need to add various complex systems to answer some types of
questions.
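
To show the kind of set logic I mean, here is a toy Python
sketch; the ten "possible worlds" are invented purely for
illustration:

    # Ten equally likely possible worlds, invented for this example.
    worlds = set(range(10))

    # In every world I can't see her and she doesn't answer (premise 2).
    no_sign = set(worlds)

    # In 9 of the 10 she really is not at home; in world 9 she is
    # unconscious in a closet (your counterexample to premise 3).
    not_home = set(range(9))

    # 'Gone out' is defined as 'not at home' (premise 4).
    gone_out = set(not_home)

    # Sufficient and necessary conditions are just subset tests:
    print(not_home <= gone_out)   # True: 'not at home' suffices for 'gone out'
    print(not_home <= no_sign)    # True: 'no sign of her' is necessary for it

    # And the probability falls out of simple counting:
    print(len(gone_out & no_sign) / len(no_sign))   # 0.9

Nothing here needs a formal logic system; set membership and
counting do all the work.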

Second, while both the logic and the probability work in an
academic sense, the two don't work on a computer in the
real-world situations that would satisfy my test of "useful
and surprising" for a first-step reasoner. Nobody should waste
any time putting them on a computer unless and until some
other issues are resolved.

There is a rat's nest of logic behind the probabilities, which
I think is best expressed for AI applications as needing to
know something about the question before you ask it. What you
know about the question determines where and how to ask it,
and how meaningful the answer is. To keep it grounded in the
real world, and because it seems sufficient, I think this
logic is best analyzed through real-world argumentation. The
logic or meaning behind each real-world example seems to be
particular to that example, and generalizing it in a way that
would work on a computer and still give you something "useful
and surprising" seems to be an impasse.

Larry

P.S. How do you know so much about my wife?

>
> Learning classical logic allows you to discover valid wffs and valid
> inferences.  As a valid wff is always true, it (and all its
> substitution instances) has a probability of 1 of being true.
>
> For instance, let's say that you want to know whether your wife is
> home, or if she went out.  So your mind works:
>
> 1.  Either my wife is at home, or she's not at home.    (LEM)
> 2.  I can't see my wife, and she doesn't answer when I call her.
> (direct experience)
> 3.  If 2 is true, then it is false my wife is at home.
> 4.  If my wife is not at home, she's gone out.   (by definition)
> ----------------------------------
> 5.  My wife has gone out.                        2,3,4 HS
>
> Step 1 is valid, so it has a probability of 1.  2 is not valid, but
> it's something you know directly and basically; so you'd likely assign
> it a probability of 1, as well.
>
> 3 is not valid, as it could be false in some models (your wife could
> be unconscious in a closet or something), but it has a pretty high
> probability; so you might assign it a probability of .9.
>
> 4 is valid, on the assumption that having 'gone out' is the same thing
> as 'not being at home'.  So it gets another 1.
>
> And the rule HS, by which you concluded 5, is also valid, so it gets
> another 1.
>
> So you know the probabilities of 1, 2, 3, 4, and HS all being true
> separately; and if you multiply them, you get the probability of all
> being true at once.
> So you multiply (1x1x.9x1x1) and conclude that there is a probability
> of .9 that your wife has gone out.
>
> I'm sure you're thinking, "Well, duh!".  This is a pretty obvious
> inference.
> But that's why I used it as an example.  More complex chains of
> reasoning, using different premises of different degrees of
> probability, work in exactly the same way.


