ashwin kelkar <ashwin_k18 at my-deja.com> writes:
> your point is interesting, but then there is one small drawback: what
> about logic? MHHA-A will realise that his female is happy with
> someone else, so he will eventually lay off, and he, also being a
> thinking creature, will think: "I go to all the trouble to kill for
> her, and every time she says she is happier elsewhere." Conclusion:
> "find someone else!" It is as simple as that. Logic will turn him
> somewhere else, or maybe even instinct. It is not necessary for
> emotions like hurt and all those kinds to make him do this.
> My question is a serious one; please do not make fun of it this way.
I think the answer comes in two parts. First, humans evolved from
lower animals, which could not reason (at least not in anything like
the fancy way that *we* do), so nature had to come up with a
decision-making system that did not depend on reasoning. The
evolutionary time over which humans have been able to reason has not
been long enough to permit a completely different kind of control
system to evolve.
Second, reason alone is never a sufficient basis for an action.
Reasoning must always begin with axioms, such as "dying should be
avoided". It is extremely difficult, though, to come up with a set of
axioms that lead to the "correct" decisions in all situations, because
there always seem to be exceptions. For example, dying should usually
be avoided, but not if it can save the lives of one's children. If
there *are* usable axioms, they probably look something like "choose
the action that maximizes the fitness of your genes" -- but the
problem with *that*, of course, is that it presupposes that people are
capable of figuring out the effect of a given action on the fitness of
their genes.
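The trouble with fixed axioms can be sketched in a few lines of code. Everything here is invented for illustration: the rules, the numbers, and the "rescue" situation are hypothetical, but they show how each exception forces a hand-written patch to the axiom.

```python
# Hypothetical sketch of decision-making from a fixed axiom, and why
# exceptions pile up. All rules and situations here are made up.

def decide_v1(situation):
    # Axiom: dying should be avoided.
    if situation["risk_of_death"] > 0.5:
        return "avoid"
    return "proceed"

def decide_v2(situation):
    # Patched axiom: ...unless the action can save one's children.
    if situation["risk_of_death"] > 0.5 and not situation["saves_children"]:
        return "avoid"
    return "proceed"

# A risky rescue of one's children:
rescue = {"risk_of_death": 0.9, "saves_children": True}

# decide_v1 refuses the rescue; decide_v2 handles it only because an
# exception was written in by hand, and every new situation threatens
# to demand yet another exception.
```

Each new exception is a new special case in the code, which is exactly the difficulty described above.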
People who work in artificial intelligence have had great difficulty
getting systems to behave in intelligent ways by reasoning logically
from fixed sets of axioms. My impression is that a better approach is
based on "value-driven decision systems", which assign a "goodness"
rating to any given situation, and choose actions that they believe
will maximize goodness. If you consider that emotions are the brain's
way of encoding the goodness of the situation, then I think their
existence and utility begin to make sense.
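The value-driven idea can also be sketched briefly. The features, weights, and action model below are all hypothetical; the point is only the loop itself: rate situations with a single goodness score, then pick the action whose predicted outcome scores highest.

```python
# Minimal sketch of a value-driven decision system. The state features,
# weights, and action effects are invented purely for illustration.

def goodness(state):
    """Assign one 'goodness' rating to a situation. Here it is a
    weighted sum of features; in an animal, emotions would play
    this evaluative role."""
    return 2.0 * state["safety"] + 1.0 * state["food"] - 3.0 * state["injury"]

def predict(state, action):
    """Crude model of what each action does to the situation."""
    new = dict(state)
    if action == "fight":
        new["injury"] += 0.5
        new["food"] += 0.2
    elif action == "flee":
        new["safety"] += 0.4
    elif action == "forage":
        new["food"] += 0.6
    return new

def choose(state, actions):
    """Pick the action whose predicted outcome maximizes goodness."""
    return max(actions, key=lambda a: goodness(predict(state, a)))

state = {"safety": 0.2, "food": 0.1, "injury": 0.0}
best = choose(state, ["fight", "flee", "forage"])  # "flee" scores highest here
```

Nothing in the loop reasons from axioms; the "knowledge" lives entirely in the goodness function, which is the role the post suggests emotions play in the brain.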