Reasoning v. other AI stuff
georgedance at hotmail.com
Sat May 3 09:55:58 EST 2003
L.Fine at lycos.co.uk (Acme Debugging) wrote in message news:<35fae540.0304301838.331baf94 at posting.google.com>...
> Thanks for the references. Actually, I'm not looking for any logic
> system, just saying that logic used in most real-world reasoning
> needs to incorporate probability. I think different logic(s) will be
> needed for different types of questions. I would favor just a few
> simple probability calculations in a first-step reasoner, though it
> might run out of applications pretty fast.
> Agree it's nothing new.
Well, classical logic can give you no more than conclusions that are
as true (or as certain) as your premises; if your premises have a
probability of less than 1, so will your conclusion. But it can give
you conclusions that are *as true* as your premises, and that's a start.
Learning classical logic allows you to discover valid wffs and valid
inferences. As a valid wff is always true, it (and all its
substitution instances) have a probability of 1 of being true.
For instance, let's say that you want to know whether your wife is
home, or if she went out. So your mind works:
1. Either my wife is at home, or she's not at home. (LEM)
2. I can't see my wife, and she doesn't answer when I call her.
3. If 2 is true, then it is false my wife is at home.
4. If my wife is not at home, she's gone out. (by definition)
5. My wife has gone out. 2,3,4 HS
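The chain above is valid as a piece of propositional logic, which can be checked mechanically. Here's a minimal sketch (my own encoding, with hypothetical atoms H = "she's at home", S = "I can see her / she answers", G = "she's gone out") that enumerates every truth assignment and confirms the premises never come out true while the conclusion comes out false:

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b'."""
    return (not a) or b

valid = True
for H, S, G in product([True, False], repeat=3):
    p1 = H or not H                # 1. LEM: at home or not at home
    p2 = not S                     # 2. can't see her, no answer
    p3 = implies(not S, not H)     # 3. if 2, then she's not at home
    p4 = implies(not H, G)         # 4. not at home => gone out
    if p1 and p2 and p3 and p4 and not G:
        valid = False              # premises true, conclusion false
print(valid)  # True: no countermodel exists, so the inference is valid
```

Validity here is only about the inference's form; the probabilities attach to the premises, as discussed next.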
Step 1 is valid, so it has a probability of 1. 2 is not valid, but
it's something you know directly and basically; so you'd likely assign
it a probability of 1, as well.
3 is not valid, as it could be false in some models (your wife could
be unconscious in a closet or something), but it has a pretty high
probability; so you might assign it a probability of .9.
4 is valid, on the assumption that having 'gone out' is the same thing
as 'not being at home'. So it gets another 1.
And the rule HS, by which you concluded 5, is also valid, so it gets a 1 as well.
So you know the probabilities of 1, 2, 3, 4, and HS all being true
separately; and if you multiply them, you get the probability of all
being true at once.
So you multiply (1x1x.9x1x1) and conclude that there is a probability
of .9 that your wife has gone out.
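For what it's worth, the arithmetic is trivially scriptable. A minimal sketch (assuming, as the argument does, that the premises are independent, so the joint probability is just the product):

```python
# Probabilities assigned to each step of the argument
probabilities = {
    "1 (LEM)": 1.0,
    "2 (no sign of her)": 1.0,
    "3 (if 2, not at home)": 0.9,
    "4 (not at home => gone out)": 1.0,
    "HS (the rule itself)": 1.0,
}

# Probability of all steps holding at once = product of the parts
p_conclusion = 1.0
for step, p in probabilities.items():
    p_conclusion *= p

print(p_conclusion)  # 0.9
```

The same loop handles longer chains with more uncertain premises; the product just shrinks accordingly.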
I'm sure you're thinking, "Well, duh!" This is a pretty obvious inference.
But that's why I used it as an example. More complex chains of
reasoning, using different premises of different degrees of
probability, work in exactly the same way.
More information about the Neur-sci mailing list