self-modifying code [(Re: TOP 10 (names in comp.sci)]

Andrea Chen fallinghawks at earthlink.net
Sat Aug 8 22:57:05 EST 1998


Mentifex wrote:
> Mentifex wrote:
> 
> Dennis Ritchie <dmr at bell-labs.com> = rara avis in Internexu:
> >
> >John R. Mashey quoted Reiser, and then remarked on the
> >"interestingness" and "excitingness" of dealing with
> >on-the-fly compilation in modern architectures:
> >> [...]


	I wonder if the major stumbling block may not be the "paradigm" of
compilation.  I have done some experimenting with "associative"
languages.  On certain problems these can beat standard compiled
solutions, because a hash-table lookup of one of several hundred names
can outperform the usual linear search through those conditions in a
(possibly poorly written) piece of traditional compiled code.  Altering
the code also becomes simpler, because changes can be made by adding or
altering definitions.  Knopf warned against associative structures
because of possible logical flaws, but I tend to believe the higher
level of abstraction actually reduces problems.
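	To make the contrast concrete, here is a minimal Python sketch (the
handler names are made up); the dictionary plays the role of the
associative lookup:

    # Hypothetical handlers; real code would do real work here.
    def do_open(arg):  return "opened " + arg
    def do_close(arg): return "closed " + arg
    def do_read(arg):  return "read " + arg

    # Compiled-style dispatch: a linear chain of tests, N/2 on average.
    def dispatch_linear(name, arg):
        if name == "open":
            return do_open(arg)
        elif name == "close":
            return do_close(arg)
        elif name == "read":
            return do_read(arg)
        # ... imagine several hundred more branches ...

    # Associative dispatch: one hash probe however many names exist.
    handlers = {"open": do_open, "close": do_close, "read": do_read}

    def dispatch_assoc(name, arg):
        return handlers[name](arg)

    # Altering the "code" is just altering a definition:
    handlers["open"] = lambda arg: "opened (v2) " + arg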
	Electronic associative memories are possible, and I believe already
exist in certain caches.  Extending the amount of data they hold, and
giving the programmer direct use of these mechanisms, could allow a
flexible high-level (call-by-name) system.
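	A call-by-name system of that sort can at least be mimicked in
software by binding names to unevaluated expressions and re-evaluating
them through the associative lookup on every reference (again only a
Python sketch):

    env = {}

    def define(name, thunk):
        env[name] = thunk              # store the expression, not a value

    def ref(name):
        return env[name]()             # look up by name, evaluate on use

    define("x", lambda: 2)
    define("y", lambda: ref("x") + 3)  # y refers to x by name, not pointer
    print(ref("y"))                    # -> 5
    define("x", lambda: 10)            # redefine x; y's meaning follows
    print(ref("y"))                    # -> 13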
> >
> >> [...]
> >Perhaps most important, it's hard to make a good marriage
> >between on-the-fly compilation and any kind of portability
> >even between this and next year's CPU chip, let alone across
> >platforms. 

	Is portability the long-term direction?  It seems to me an
alternative would be increasingly specialized architectures devoted to
certain problems.
	For example, one of the big money items today is servers for SQL.
The base of this language is simple set theory, which is used to build
the more complex relational operations.
	Now take the programmer-accessible associative memory I proposed.
Dump in the members (or the first few hundred) of the first set.  With
some extension of the logic it seems possible to then take the members
of the second set sequentially and test whether each is a member
(intersection) or not (difference) of that first set.  It seems to me
that it would be possible to increase the speed of (some) SQL
operations by orders of magnitude.  Such a chip might lack
sophisticated floating-point operations and many other basics of
standard designs, but it could dominate a multibillion-dollar niche.
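	In software terms, the two phases look like this (a minimal Python
sketch with made-up key values; the built-in set stands in for the
associative memory):

    orders   = {101, 102, 103, 107, 110}   # keys of the first relation
    invoices = {102, 104, 107, 111}        # keys of the second relation

    # Phase 1: dump the members of the first set into the hash.
    assoc = set(orders)

    # Phase 2: stream the second set past it, one probe per member.
    intersection = {k for k in invoices if k in assoc}       # {102, 107}
    difference   = {k for k in invoices if k not in assoc}   # {111}

This is essentially the probe phase of what database people call a hash
join; the proposal amounts to doing that phase in specialized hardware.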
	Similarly, for massive text processing, a design which searched for
dozens of delimiters at one time could parse much more rapidly.
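	In software the single pass might look like the following (Python;
the delimiter set is purely illustrative).  Each character costs one
associative probe, which a specialized design could run against all the
delimiters in parallel:

    DELIMS = set(" \t\n,;:.!?()[]{}\"'")

    def tokenize(text):
        # One left-to-right scan; one set probe per character covers
        # dozens of delimiters at once.
        tokens, start = [], None
        for i, ch in enumerate(text):
            if ch in DELIMS:
                if start is not None:
                    tokens.append(text[start:i])
                    start = None
            elif start is None:
                start = i
        if start is not None:
            tokens.append(text[start:])
        return tokens

    print(tokenize("dozens of delimiters, at one time!"))
    # -> ['dozens', 'of', 'delimiters', 'at', 'one', 'time']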


> >It's an altogether neat idea (cf. Java JIT compilers)
> >that will continue to be of use, but it is hard; beautiful
> >solutions on one platform today can turn out to be very
> >difficult to sustain on another platform.
> >
> 2).  All memeticists, please ask around, just what is Mentifex doing
>      here that is EXTREMELY BRAZEN, but a memeticist's dream hitchride?
> 
>    Self-modifying code:  Build an AI, then let it modify itself...

	Building an "AI" is really something.  I think the memetic goal
might be building a complex set of interrelated symbols which respond
and interact with other symbols.  It would seem to provide a way of
creating "memetic complexes" (though whether these are similar to those
in the human mind is another matter).  I don't think the issue here is
(at least for now) one of speed (which would justify low-level
architecture) but rather one of designing the complex of memes
(including memetic regulators) and finding a high-level language which
makes the task simpler.  I personally find that standard LISPs have a
bit too much clutter.  I used to play with a language called TRAC
("Text Reckoning And Compiling", a loose use of "compiling").  Rather
than taking the atom as the primal structure, it was based on easily
modified text with direct associative (rather than pointer-based)
addressing.  The few people I showed it to found it relatively easy to
use.  But as a general rule, people interested in things like "memes"
are not (I feel) interested in building formal models; and having made
some attempts myself, I feel that useful structures are very, very
difficult and perhaps beyond our capacity.  Actual pragmatic models
seem to be beyond our current capacities of expression.  If they could
be designed, then it would be relatively simple to build languages to
express them.  Issues of speed (low-level design) would only become
relevant once they started to work.
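	For flavor, here is a toy Python sketch of the TRAC idea, loosely
modeled on its ds ("define string") and cl ("call") primitives; the
expansion rule is greatly simplified and is not the real TRAC
algorithm:

    forms = {}

    def ds(name, text):
        # Define string: bind a name to a piece of text.
        forms[name] = text

    def cl(name):
        # Call: fetch text by name and expand any #(cl,x) references.
        text = forms[name]
        while "#(cl," in text:
            start = text.index("#(cl,")
            end = text.index(")", start)
            text = text[:start] + cl(text[start + 5:end]) + text[end + 1:]
        return text

    ds("greeting", "hello")
    ds("sentence", "#(cl,greeting), world")
    print(cl("sentence"))        # -> "hello, world"
    ds("greeting", "goodbye")    # altering a definition alters behavior
    print(cl("sentence"))        # -> "goodbye, world"

The point is not the toy interpreter but the addressing: every form is
reached by name through the hash, and redefining a name immediately
changes everything that refers to it, which is the kind of
self-modification under discussion here.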
	I do believe that if many scholars worked with languages which
allowed intricate connections of symbols along with logical
manipulation (including self-modification), techniques might slowly
evolve.  But once again the issue isn't hardware, nor even primarily
software, but the perceived need for such a tool.  Look at the
rudimentary search mechanisms and index schemes (rarely fully utilized)
on the web, and it seems that the sophisticated organization of
knowledge isn't a high priority.  The issue isn't one of technology but
of priorities and mindset.



