UNIX GCG Site: How much CPU power?

micha at amber.biophys.uni-duesseldorf.de
Fri Sep 2 16:36:08 EST 1994


Keith Robison (robison at mito.harvard.edu) wrote:
: Our department is considering upgrading the central computer which is
: used to run GCG. We will definitely go with a UNIX box.  I was wondering
: if folks would be kind enough to share their experiences with how
: fast a machine you need.

: Key points which I can think of are:

: 	1) Architecture & speed (Sun, HP, SGI, Alpha, etc)
DEC AXP 5000/800 (>180 MHz CPU clock), 128 MB memory, with
	1 GB system disk internal
	4 GB external disk for user programs, data, GCG programs and data
		( FULL ) - fast SCSI, DEC disk
	another 4 GB to come RSN ...

and VAXStation 4000/60 ( ?? speed ?? ) with 1 GB disk total for user stuff.

(VMS-Cluster, for the sake of completeness :-) )

Due to space limitations we have built only the EMBL database; with a second
disk the others may follow ... Fragmentation can't be avoided, as we are
running very disk-intensive RNA folding jobs here. :-(
Most user accounts are being moved to the VAXStation now, with batch work
staying on the AXP.

: 	2) How many total users in department
20-30 users, most of them seldom active.

: 	3) A reasonable guess as to the number of simultaneous users
3-5 most of the time, plus 1-2 batch jobs (on the AXP side). Interactive users
don't notice batch jobs running ...
Connections to the system are by telnet or DECNet; network bandwidth is
saturated only by X ( online rotation of protein models ) or by cluster disk
traffic ( backups from one disk to another on a different system ).

: 	4) Is your current configuration working? (i.e. are people
: 	   happy with response time)
Perfectly happy, with one exception: DEC GKS graphics under DECWindows
(not the display itself - that is almost too fast to watch, the screen updates
within one video frame - it's the menu window creation times that drive you
crazy or make impatient users click around on anything underneath, with the
obvious results :-( ). I suspect VMS process creation overhead is responsible
for that; results are better on Unix.
Stability: the AXP has been running since March, fairly stable aside from the
Ethernet cluster problems (and one SCSI controller failure).

: 	5) Any relevant comments
Lots of X applications ( the user interface to come ) will keep the X server
busy ( it already takes a fair bit of our CPU time, about 30 minutes per day ).
'Fast' disks seem to be the only bottleneck (besides process creation times)
on our box.

One comment on the only Unix GCG system I know as a user:
our computing center set up the GCG package on their Convex C210, which
carries a heavy interactive load plus some quantum chemists running MO
packages of up to 100 MB core size.
One 'benchmark' example: a 360 nt query run with fasta against em_ro:
AXP: 36 seconds CPU time, 2 min 11 sec elapsed time ( disk-limited )
Convex: > 450 seconds CPU time; at a 20 % CPU quota that means at least
450 / 0.20 = 2250 seconds, i.e. well over half an hour elapsed time.
No trace of vector processing speed here ( BLAST results may vary, as
I recall ).
Another Convex GCG system is the DKFZ-Heidelberg service - when we stopped
working there, they had load averages > 15 ( our Convex freezes above 12 ),
on the same hardware platform as ours!
I think we could beat that with our VAXStation ...

One thing that stands _against_ a Unix system (from my point of view, as the
local 'manual on legs') is the fact that on VMS we have GenHelp and even
GenManual ( VMSHelp style ), while on Convex GCG they have the man pages,
that is, raw-formatted versions of the original VMS help sources (all
help-level keys preserved :-) ) piped through the man system: you asked for
help? Here you get it - now find the relevant piece of information with grep !!

We finally formatted the help files into HTML for a WWW server ...
(support your local computing center)
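
In case it is useful to anyone, here is a minimal sketch of the kind of
conversion meant above - a hypothetical Python script with made-up directory
names, not the script we actually run - which simply wraps each plain-text
help file in an HTML page:

    # Hypothetical sketch: wrap each plain-text GCG help file in a minimal
    # HTML page (escaping &, <, >) so a WWW server can serve it.
    # Directory names are assumptions, not our real layout.
    import html
    import pathlib

    SRC = pathlib.Path("gcghelp")      # plain-text help files (assumed)
    DST = pathlib.Path("htdocs/gcg")   # WWW server document tree (assumed)
    DST.mkdir(parents=True, exist_ok=True)

    for helpfile in sorted(SRC.glob("*.hlp")):
        text = helpfile.read_text(errors="replace")
        page = ("<html><head><title>GCG help: %s</title></head>\n"
                "<body><pre>\n%s\n</pre></body></html>\n"
                % (helpfile.stem, html.escape(text)))
        (DST / (helpfile.stem + ".html")).write_text(page)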

: Please post or E-mail; I will post a summary of E-mailed responses.


: Keith Robison
: Harvard University
: Department of Cellular and Developmental Biology
: Department of Genetics / HHMI

: robison at mito.harvard.edu 

	Michael Schmitz
	Biophysics Uni-Duesseldorf, Germany




