Security and HCI aspects of the Heisenberg Uncertainty Principle
Arthur T. Murray
uj797 at victoria.tc.ca
Mon Sep 27 09:03:53 EST 1999
"To observe a phenomenon is to change the phenomenon" normally
applies only at the microphysical level of quantum mechanics, but
Heisenberg's uncertainty principle is worth keeping in mind
in the security and Human-Computer Interaction (HCI) aspects of AI.
One prophetic oracle on the "Technological Singularity" describes
various scenarios of how Artificial Intelligence (AI) is coming.
Readers of the newsgroup comp.security.misc are well advised to
consider the computer security aspects of AI far in advance.
It will be too late to build in security controls once the AI is
of equal IQ with humans and is about to wrest control from you.
Since AI systems are being designed, coded and set free right now,
don't say you weren't warned by messages such as this one.
The public domain Forthmind AI is going through some major changes
in preparation for a new release.
The "spy" function that allows insufficiently paranoid security agents
to observe the spoken and unspoken thoughts of the AI is now old hat.
Readers of the newsgroup comp.human-factors will be pleased
to learn of the latest HCI-relevant, security-adumbrant AI features
now being incorporated into the market-dominant public domain AI:
during the pause in robot thinking while the AI waits for input,
where previously the computer simply looped through thousands of
clock cycles and drew a line of dots across the HCI screen, the
ghost of Heisenberg has now put the impatient Forthmind to work
on tasks which are not strictly thinking but which support the
mechanisms of machine intelligence: the gradual DECAY of
activation levels in concepts which almost, but not quite,
entered the stream of consciousness, and (yet to come) the
sweeping and scouring of old memories for associations to be
brought forward and redeposited before memory is recycled.
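The idle-time housekeeping described above can be sketched in Python.
All names and constants here (Concept, decay_step, idle_loop, the decay
rate and threshold) are illustrative assumptions, not identifiers from
the actual Forthmind source:

```python
import time

DECAY_RATE = 0.9   # assumed fraction of activation retained per idle tick
THRESHOLD = 20.0   # assumed activation needed to enter "consciousness"

class Concept:
    """A concept node carrying a residual activation level."""
    def __init__(self, name, activation):
        self.name = name
        self.activation = activation

def decay_step(concepts):
    """One idle tick: damp concepts that almost, but not quite, fired."""
    for c in concepts:
        if c.activation < THRESHOLD:
            c.activation *= DECAY_RATE

def idle_loop(concepts, user_has_typed, tick=0.01):
    """Instead of spinning through clock cycles and drawing dots,
    do useful housekeeping while waiting for human input."""
    while not user_has_typed():
        decay_step(concepts)
        time.sleep(tick)   # brief pause between housekeeping ticks
```

The (yet to come) memory sweep would slot in as another call inside
idle_loop, alongside decay_step.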
Saluting it or not, run this idea up your security/HCI flagpole:
you, the user and perhaps the range safety officer at the test
facility for AI-autonomous mobile robots, are interacting with
your machine counterpart (get over it -- by snapping out of it),
and you know that the cyborg is programmed to stop and listen to
you for a few seconds between each internal thought of its own.
With the new (activation-damping) DECAY feature, how long you
hesitate in the human-computer interaction determines how much
of its mind the AI loses while waiting for you to enter input.
Well, it doesn't actually lose its mind, but it forgets a lot of
things that would otherwise pile up huge activations and begin to
interfere with the "veridicality" or truthfulness of its logic.
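Assuming a fixed per-tick retention factor (0.9 here, purely an
illustrative number), the forgetting described above falls off
exponentially with the length of the human's hesitation:

```python
DECAY_RATE = 0.9   # assumed per-tick retention factor, not from the real AI

def activation_after_wait(activation, idle_ticks, rate=DECAY_RATE):
    """Residual activation on a sub-threshold concept after the user
    hesitates for idle_ticks housekeeping ticks."""
    return activation * rate ** idle_ticks
```

With these assumed numbers, a hesitation of 10 ticks leaves about
35 percent of a concept's activation, while 50 ticks leaves under
1 percent: the longer you pause, the more the observed mind changes.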
Here is where the HCI people and the security no-necks need to
get over their mutual aversion and come up with a plan to stick
it to the man -- userman.html or progman.html -- for how to operate
the AI mind when the very act of observing the AI changes the AI.