Is the GD Rose paper out?
Arne Elofsson
arne at hodgkin.mbi.ucla.edu
Thu Jul 13 23:46:00 EST 1995
In article <3u3t5m$pdb at saba.info.ucla.edu> legrand at tesla.mbi.ucla.edu (Scott Le Grand) writes:
> In article <1995Jul12.182420.12069 at alw.nih.gov>, johnk at spasm.niddk.nih.gov (John Kuszewski) writes:
> > I think that part of this is explained by his having "solved" these
> > structures in short pieces (because the program is computationally
> > expensive).
> This would be a lame excuse if true... We live in an era of inexpensive
> 300 MHz desktop workstations... In fact, even within the LINUS paper, there are
> numerous instances of working with larger fragments i.e. the GroES prediction.
> My biggest problem with the paper is the 12 day turnaround from submission
> to acceptance. There are numerous ambiguities in the description of the
> methods (what proteins was it trained on? How do you assemble overlapping
> fragments? How were the fragments for the results selected? How consistent
> are independent LINUS runs on the same fragment? Why oh why did they neglect
> to show the DHFR data?) which should have been caught by the referees and fixed
> by the authors.
Yeah, I can agree that 12 days seems very, very short. (Any reviewers want to
identify themselves?) However, it must be assumed that the simulations shown
(and the DHFR) were the only ones (with these parameters) done at the time of
submission.
They do not assemble overlapping fragments, and they do not claim that they do.
I do not agree that this paper is more ambiguous than many other papers.
The problem is that it was so extremely hyped before the publication.
It is quite certain that they optimised their (very simple) parameters
on this training set, or a part of it, but they do not claim anything
else, so you cannot hold that against them. (For instance, what do you think
Jim did when he optimised his parameters for the 3d-1d paper?)
> > To start another thread, are models of that resolution useful for
> > anything?
> A wonderfully controversial question. I'm in the school of thought that
> if I look at a model and it "looks" like the native structure (I know, horribly
> subjective), then it is useful no matter what the RMSD. One of the big
> problems with the results section in this paper is that the authors
> usually do not show us a complete model of the predicted structure, but only
> seemingly arbitrarily chosen fragments which "worked"...
If they did that (which I really doubt), it would be fraud and scientific misconduct.
I have the feeling that all they actually did is what is shown in
the paper. And if you want to look at structures, everything is there in
Molscript pictures. What more can you ask for?
Rose only claims to predict fragments of 50 aa. (However, in the JHJ interview
they could "not see any reason why it should not work for whole proteins,"
but that may be the interviewer's choice of words.)
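For readers outside the field: the RMSD being argued about above is the root-mean-square deviation of corresponding (usually C-alpha) atoms after optimally superposing model and native structure. A minimal sketch of that number, using the standard Kabsch rotation; this is generic background, not code from LINUS or anyone in this thread, and the function name and NumPy dependency are my own choices.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal rigid
    superposition (Kabsch algorithm: centering + SVD rotation)."""
    P = P - P.mean(axis=0)                   # center both sets on origin
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                              # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid improper (mirror) rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation of P onto Q
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum() / len(P))
```

A model that merely has the right secondary structure but wrong packing can still show a large RMSD, which is why "looks like the native structure" and RMSD can disagree.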
> > |> It is interesting that such a simple method seems to work that well.
> > Precisely. I just saw Andrej Sali give a talk on MODELLER, and its
> > output is amazingly good. However, he's using a very large empirical
> > database. LINUS does extremely well for having so little starting
> > information.
> If LINUS is really predicting secondary structure as well as it seems
> (I'm betting that it's not), then it does seem that the whole game's a lot
> simpler than we thought. I can submit some apocryphal data here. In my PhD
> work, I used a Sippl potential to predict several protein structures. It did
> a wonderful job of secondary structure prediction on melittin, pancreatic
> polypeptide, and crambin (as good as LINUS I would daresay, but this was all
> helix and coil prediction and easy targets), but it did a miserable job packing
> things together. This work is summarized in Molecular Simulations 13:299-320.
> A lot of the figures in the LINUS paper look familiar to me.
But you could not predict any sheets. (:
And even if they do not do such a great job on tertiary structure packing,
it is much better than your PhD work.
Skolnick also wrote in his 1994 papers (Kolinski & Skolnick, Proteins 1994)
that their potential performed very well in predicting sec.str. They
claimed to have a paper in preparation, but at least I have not seen it.
Their "sec.str. prediction" is probably as good, for approximately as many
targets, as Rose's. However, their targets were less diverse, and that was
not at all the focus of the papers. (Skolnick also had to use slightly
different potential functions for one protein (ubiquitin?).)
> > One last question: Are there any other algorithms that predict
> > secondary structure as well as LINUS?
> A tough question. That requires testing LINUS on a set of
> proteins not involved in its development and comparing it to
> the performance on those same proteins by PhD and GOR (assuming
> GOR does not use them in its database either). Ignore arguments
> that LINUS is not based on amino acid identity. If training set
> data is involved in any way in the development of a method, then
> it is not fair to rate the predictive power of a method by its
> performance on training set data. It is only fair to conclude
> that the method has learned how to reproduce the training set.
> The only fair test is on external data. The upcoming predictive
> targets Moult is putting together should be a wonderful example
> of this.
We do not actually need this, as long as people report what they do.
If you optimize your parameters so that they work very well on
a small set of proteins (as Rose probably did), it is not bad
science to report that. Even if it would not work on anything
outside the test set, it might be very useful and interesting.
However, you are right that it is much more impressive to predict
a completely independent test set.
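The fair-test point can be made concrete. A minimal sketch of scoring a secondary-structure method only on proteins held out of its development, using per-residue three-state (H/E/C) accuracy, the usual Q3 number. The function names and the toy predictor are hypothetical, not from LINUS, PhD, or GOR.

```python
def q3(predicted, observed):
    """Fraction of residues whose three-state (H/E/C) assignment
    matches between two equal-length secondary-structure strings."""
    assert len(predicted) == len(observed)
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

def evaluate(predict, held_out):
    """Average Q3 of `predict` (sequence -> H/E/C string) over a
    held-out list of (sequence, observed_sec_str) pairs -- proteins
    that played no role in tuning the method's parameters."""
    scores = [q3(predict(seq), obs) for seq, obs in held_out]
    return sum(scores) / len(scores)
```

Run the same evaluation on the training proteins and on the held-out proteins; a large gap between the two averages is exactly the "learned to reproduce the training set" effect Scott describes.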
See you at J-club on Monday.
From: Arne Elofsson
Email: arne at hodgkin.mbi.ucla.edu