self-modifying code [(Re: TOP 10 (names in comp.sci)]

Andrea Chen fallinghawks at earthlink.net
Sun Aug 9 13:12:07 EST 1998


Terje Mathisen wrote:
> 
> Andrea Chen wrote:
> >         Electronic associative memories are possible and I believe do exist in
> > certain caches.  Extending the size of the data held within and allowing
> > the programmer direct use of these mechanisms could allow a flexible
> > highlevel (call by name) system.
> 
> Yes, but if that api is only used once, then it is (almost by
> definition) not timecritical, if you use it many times, then you will
> still be better off to do a one-time lookup of the actual address and
> then patch that in, i.e. standard link/load behaviour for any os.
> 


	I was in the midst of a lot of work last night and went along with
this; then I realized that you had cut out the relevant examples.
In many computer programs you will find routines (often central) of the
form:
	If X then Y 
		else if X1 then Y1 
			......

These are often disguised as a case statement, but unless it is a
computed case the examination is sequential, and if this sits at the
center of a complex environment there may be hundreds of conditions.
There are indeed ways to optimize such branch searches directly in the
code, but this doesn't seem to be done all that often.
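	To make the shape of the problem concrete, here is roughly what the
sequential form looks like in C (the condition strings and handler names
are invented purely for the example):

#include <stdio.h>
#include <string.h>

/* Hypothetical handlers, invented for the example. */
static void do_list(void)   { printf("list directory\n"); }
static void do_button(void) { printf("handle left button\n"); }

/* Sequential dispatch: every unmatched condition costs another compare. */
static void dispatch(const char *cond)
{
    if (strcmp(cond, "ls") == 0)
        do_list();
    else if (strcmp(cond, "left button") == 0)
        do_button();
    /* ... possibly hundreds more conditions ... */
}
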
	If the condition can be expressed as text (e.g. "ls" or "left
button"), then I have found that a hash table often provides a more
rapid lookup of the desired name and can return the pointer directly.
Dictionary structures also provide increased flexibility in that you can
add new names, delete names, and change the value of names.  This
provides polymorphism: a special set of functions added to a dictionary
can "inherit" the previous set of definitions, which might vary even
where names are shared.  In other words, a simple structure provides
some of the desired features of object-oriented programming.
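	To make this concrete, here is a minimal C sketch of the kind of
dictionary I have in mind (the bucket count, hash constants, and names
are just placeholders): each dictionary points at the one it inherits
from, so a lookup that misses locally falls through to the older
definitions.

#include <stdlib.h>
#include <string.h>

typedef void (*handler)(void);

/* One name -> handler binding, chained within a hash bucket. */
struct entry {
    const char   *name;
    handler       fn;
    struct entry *next;
};

#define NBUCKETS 64

struct dict {
    struct entry *bucket[NBUCKETS];
    struct dict  *parent;       /* dictionary whose definitions we inherit */
};

/* Ordinary multiply/xor string hash (FNV-1a style constants). */
static unsigned hash(const char *s)
{
    unsigned h = 2166136261u;
    while (*s)
        h = (h ^ (unsigned char)*s++) * 16777619u;
    return h % NBUCKETS;
}

/* Add or redefine a name in this dictionary only; the caller keeps the
   name string alive.  Error handling omitted. */
static void define(struct dict *d, const char *name, handler fn)
{
    unsigned h = hash(name);
    struct entry *e;
    for (e = d->bucket[h]; e; e = e->next)
        if (strcmp(e->name, name) == 0) { e->fn = fn; return; }
    e = malloc(sizeof *e);
    e->name = name;
    e->fn   = fn;
    e->next = d->bucket[h];
    d->bucket[h] = e;
}

/* Look a name up here first, then in the inherited dictionaries. */
static handler find(const struct dict *d, const char *name)
{
    for (; d; d = d->parent) {
        const struct entry *e;
        for (e = d->bucket[hash(name)]; e; e = e->next)
            if (strcmp(e->name, name) == 0)
                return e->fn;
    }
    return NULL;
}

	A dictionary layered on top can thus redefine a shared name for its
own modules without disturbing, or losing, the definition underneath.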
	Now it may be that the difference between a hashing algorithm (whose
members are stored in cache) and an actual electronic associative memory
is small; it seems to me that it could be.  But it would still be a
matter of several dozen cycles compared with 2 or 3.  This becomes
relevant if your system is assessing small (e.g. Forth-sized) modules,
which often take only a few dozen cycles themselves.  At that point one
might argue against associative memory, but still argue for a built-in
hashing algorithm that used all those transistors to do the shifts and
whatnot in a few cycles.
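	To put rough numbers on it (made up purely to show the proportions):
if a module does 30 cycles of useful work, a 40-cycle hashed lookup more
than doubles its cost, while a 2 or 3 cycle associative lookup adds only
about 10 percent.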
	So much depends on the problem you are solving.  For example, you are
assuming a one-time patch will put things in place, while I'm assuming a
dynamic environment in which things change and one wants interpreter
flexibility provided (ideally) at the fastest possible speed.  This is
in fact one of the reasons for self-modifying code.
	The general technique I'm suggesting consists of organizing a list of
(named) modules in response to certain conditions, rather than compiling
segments of code in response to those conditions.
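	As a last rough sketch, again in C and again with invented names, the
response to a condition becomes a small, editable table of module
pointers rather than a compiled-in chain of branches:

typedef void (*module_fn)(void);

/* One named module in the response list. */
struct step {
    const char *name;
    module_fn   fn;
};

/* The response to one condition is just data: a short list of modules
   that can be reordered, extended, or trimmed while the system runs.
   Bounds checking omitted for brevity. */
static struct step response[32];
static int n_steps = 0;

static void add_module(const char *name, module_fn fn)
{
    response[n_steps].name = name;
    response[n_steps].fn   = fn;
    n_steps++;
}

static void run_response(void)
{
    for (int i = 0; i < n_steps; i++)
        response[i].fn();
}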



