I don't know if this is the right newsgroup to be addressing, but here goes.
I'm a PhD student at Brunel University, Uxbridge, in the UK, interested in
biologically-inspired learning systems. In particular, I am interested in the
various evolutionary paradigms (e.g. neo-Darwinian, "neo-Lamarckian",
process-oriented/dialectic, etc.), and I was wondering whether anybody in the
group could point me in the right direction re: papers/books that attempt
to explain the following observation:
That the size of the genome (DNA content) of the salamander is
    120 x 10^12 daltons,
whilst that of the human being is
    3.6 x 10^12 daltons.
If one were to define complexity PURELY on the basis of genome size, one would
have to conclude that the salamander is MORE complex than the human being.
Does this mean that we need to look at the underlying STRUCTURE/SYNTAX of the
genome to resolve this issue?

I read in a book edited by Mae-Wan Ho & Sidney Fox, entitled EVOLUTIONARY
PROCESSES AND METAPHORS, that replication of large sections of the genome
could serve either of two purposes:
(i) Generation and storage of new functions (via various operators
    such as mutation, recombination, etc.) WITHOUT destroying
    existing functions, i.e. function-preserving adaptation;
(ii) Robustness of existing functions whilst the genome is being
    modified (by the various operators).
Please don't flame me for any lack of understanding I may be exhibiting; I
would appreciate some HONEST answers, i.e. is there an answer to this
question, or is the issue still within the realms of conjecture and debate?
Could you please also suggest some texts that I might consult to gain clarity
on this subject, bearing in mind that my background is in digital
systems/computer science? I am interested in how the issue of genome size
might bear on optimization techniques, in particular genetic algorithms.
Thanks in advance,