XPLOR usage patterns

Ethan A Merritt merritt at u.washington.edu
Sat Jan 11 15:46:05 EST 1997


In article <5b5pg5$sr7 at news.psc.edu>,
Ravishankar Subramanya <ravi at nereid.psc.edu> wrote:
>
> We at the Pittsburgh Supercomputing Center are seeking input on the utility
> of providing a very large, very fast version of XPLOR on our parallel Cray
> T3E (32GB, 300GFLOPS). 

Do you have reason to believe that large memory and a parallel machine 
would in fact result in a large speed increase?  

That has not been my experience with the existing XPLOR code. 
I have benchmarked XPLOR for typical X-ray refinement and MD runs on our
single and SMP DEC Alphas.  (DEC 4100 5/400, 2100 5/250 machines with
3 CPUs, 500MB to 1GB memory).  I have not found any significant
difference in run times when XPLOR jobs are given access to 1GB of
physical memory, as compared to runs limited to 64MB.  The
use of the DXML libraries to parallelize FFT computations onto multiple
processors does result in a speed increase, but not a dramatic one.  An
XPLOR job running on two processors will typically show a load of
110%-150%, with elapsed clock time correspondingly reduced.  Increasing
this to 3 processors shows a load of 200%+, but does not further
reduce the elapsed clock time over a two-processor run.
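The load figures above can be turned into a rough parallel-efficiency
number (useful CPU work divided by total CPU capacity).  A back-of-the-
envelope sketch, using the loads quoted above; the helper function is
mine, for illustration only, and not part of XPLOR or DXML:

```python
def parallel_efficiency(cpu_load_percent, n_cpus):
    """Fraction of the available CPU capacity doing useful work.

    cpu_load_percent: aggregate load as reported by the OS, where
    100% means one fully busy CPU (so n fully busy CPUs read as n*100%).
    """
    return cpu_load_percent / (100.0 * n_cpus)

# Two-processor run at the high end of the observed 110%-150% load:
print(parallel_efficiency(150, 2))   # 0.75 of two CPUs' capacity used

# Three-processor run at 200%+ load, with no further wall-clock gain,
# so at most ~2/3 of three CPUs' capacity is going to useful work:
print(parallel_efficiency(200, 3))
```

The point of the arithmetic is that adding the third processor raised
the load but not the efficiency, which is consistent with the FFT
parallelization saturating at two useful CPUs.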

Now this may of course be due to inadequacies in the DXML libraries,
rather than an inherent limitation.  The point is, if you know how to
increase XPLOR throughput by modifying the code to be more parallel,
or to allow distributed processing, that may well be useful to more
people than would simply building a version for a specific Cray T3E
machine.

I would be interested in hearing from anyone who has tried customizing
XPLOR to run in a distributed computing environment (e.g. multiple
threads running on networked CPUs).  I made a brief stab at this
using DEC's "Parallel Software Environment" package, but didn't get
very far.

				Ethan A Merritt
				merritt at u.washington.edu


