
VMS memory management for GCG

Jasper Rees - tel +44 865 275567 jasper@molbiol.ox.ac.uk
Sat Jul 24 05:54:33 EST 1993


In article <930723.142647.6853@medinfo.rochester.edu>, charles@MEDINFO.ROCHESTER.EDU ("Charles A. Alexander") writes:

> This question is directed more towards the system administrators who run GCG
> on their various systems.  Particularly, VMS platforms.
> 
> We just acquired a new VAX 4000 Model 90 with 32 MB of memory.  It's been 

Just a suggestion, but I should aim to get more in it than that. (Ours has 64,
which seems about right for this usage.) I should call your DECdroid in and
make it clear that he did not sell you enough memory and that he should have
examined your needs in more detail before letting you have a machine with as
little as 32 MB, and make life quite uncomfortable... this because the upgrade
path from 32 MB is not as trivial as you would like, so you are going to have
to get a good trade-in for the two 16 MB boards that you have. The 90 has
nothing on the motherboard, so you are going to have to replace them both if I
remember correctly. You can put in two 16 MB's, or one or two 64's. You might
be able to mix, I forget.

> running like a real champ for us except whenever there are more than 3 
> FASTA jobs in the batch queue.  Then I get PAGEFILE fragmentation errors.
> 
> Now, the PAGEFILE.SYS  on our system is 65000 and the SWAPFILE.SYS is 2800.
> Based on DEC's rule of thumb those are pretty good page sizes.

We have hung our 48 MB Model 60 at that level :(( I have two pagefiles now, one
at 100,000 blocks and a secondary at 200,000. The swapfile is 10,000. Though
with 64 MB in the Model 90 we don't actually need that much any more.
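
For reference, adding a secondary pagefile is something along these lines
(sizes in 512-byte blocks; the PAGEFILE1.SYS name and location are just my
example, put it wherever suits you):

  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> CREATE SYS$SPECIFIC:[SYSEXE]PAGEFILE1.SYS /SIZE=200000
  SYSGEN> INSTALL SYS$SPECIFIC:[SYSEXE]PAGEFILE1.SYS /PAGEFILE
  SYSGEN> EXIT

Put the INSTALL command in SYS$MANAGER:SYPAGSWPFILES.COM so the file comes
back at boot, and use SHOW MEMORY /FILES /FULL to keep an eye on how much of
each file is actually in use.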

> I also extended the WS (extent,set & def) quotas for each user.

To what?

These are our default UAF settings:

Maxjobs:         0  Fillm:       100  Bytlm:        32768
Maxacctjobs:     0  Shrfillm:      0  Pbytlm:           0
Maxdetach:       0  BIOlm:        40  JTquota:       1024
Prclm:           8  DIOlm:        40  WSdef:         1024
Prio:            4  ASTlm:       100  WSquo:        10240
Queprio:         0  TQElm:        10  WSextent:     20480
CPU:        (none)  Enqlm:       100  Pgflquo:      70000

Note that Pgflquo is only this large to run the QuickSearch software. Nothing 
else needs that much. But this gives you some idea of the potential problem 
with a 65000 block pagefile. 
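
(For completeness, we set these with AUTHORIZE, roughly as below; the numbers
are ours, not gospel, and changes only take effect at the user's next login.
MODIFY DEFAULT only changes the template for new accounts; existing accounts
need modifying individually, SMITH here being just an example username.)

  $ RUN SYS$SYSTEM:AUTHORIZE
  UAF> MODIFY DEFAULT /WSDEFAULT=1024 /WSQUOTA=10240 /WSEXTENT=20480
  UAF> MODIFY SMITH /PGFLQUOTA=70000
  UAF> EXIT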

> However, this problem still persists in peak load hours, if those FASTA/BATCH 
> jobs get around 4 or 5.

Why not reduce the number of concurrent batch jobs? The overall throughput is
not going to be improved by running 4 or 5 simultaneously. I should allow 2 at
most. And then work out the best way to queue things so that people don't block
the queue with large jobs and prevent the small stuff from working through. You
can use several queues: put Tfasta and Profilesearch on a lower priority batch
queue than Fasta. And make sure your queues have enough WSEXTENT; we have the
lower priority ones set with 40,000 pages. No point giving users plenty if the
queue won't allow them to use it!
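
For instance (queue names made up, pick your own):

  $ INITIALIZE /QUEUE /BATCH /START /JOB_LIMIT=2 -
        /BASE_PRIORITY=3 /WSEXTENT=40000 GCG_FAST
  $ INITIALIZE /QUEUE /BATCH /START /JOB_LIMIT=1 -
        /BASE_PRIORITY=1 /WSEXTENT=40000 GCG_SLOW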

In theory all these programs can be modified to submit to an appropriate queue
based on the size of the sequence, whether it is protein or DNA, whether it is
a complete Genembl:* search etc. etc... to get the small stuff on higher
priority queues. In practice it makes a big difference to the return time the
user gets if you have a busy machine; if you only have a few batch jobs at any
one time then you shouldn't have a problem.
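
A minimal sketch of the idea as a DCL wrapper, using the hypothetical queue
names from above and assuming P1 is the job's command file and P2 the sequence
length; adapt to taste:

  $ ! Route big searches to the slow queue, small ones to the fast queue.
  $ IF F$INTEGER(P2) .GT. 10000
  $ THEN
  $     SUBMIT /QUEUE=GCG_SLOW /NOPRINTER 'P1'
  $ ELSE
  $     SUBMIT /QUEUE=GCG_FAST /NOPRINTER 'P1'
  $ ENDIF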

Fasta is not especially memory hungry; wait until you have a user with a
2000-base MFOLD... (45,000 blocks of output file later, and probably 20 hours
of cputime :) So if you have this problem, limit MFOLD to a priority 0 queue
with a job limit of 1. I think MFOLD ships with about a 500-base limit by
default, so this may not be a problem unless you have demand for it to be
increased.
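
Along the lines of (again, the queue name is just my example):

  $ INITIALIZE /QUEUE /BATCH /START /JOB_LIMIT=1 /BASE_PRIORITY=0 GCG_MFOLD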

> I have done all the things that GCG recommends like setting PGFLQUOTA to at 
> least 25000 (27,500) and VirtualPageCnt to the same (~ 73000).  

I have:

Parameter Name            Current    Default     Min.     Max.     Unit  Dynamic
--------------            -------    -------    -------  -------   ----  -------
VIRTUALPAGECNT             139072       9216       512   1200000 Pages

So double yours, and then check the dependencies. 
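
You can check and change it by hand with SYSGEN; note that VIRTUALPAGECNT is
not dynamic, so the new value only takes effect after a reboot:

  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> USE CURRENT
  SYSGEN> SHOW VIRTUALPAGECNT
  SYSGEN> SET VIRTUALPAGECNT 139072
  SYSGEN> WRITE CURRENT
  SYSGEN> EXIT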

In general I guess you are going to need to tweak things until they work, then
run AUTOGEN (you have run AUTOGEN on this system, haven't you? :) and then work
on the output of that to see what *it* thinks you need, and then go up from
there for anything you want to increase. But do it when the system is busy... 
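
The usual drill: put your floor values in SYS$SYSTEM:MODPARAMS.DAT so AUTOGEN
won't undo them, e.g.

  MIN_VIRTUALPAGECNT = 139072

and run it in feedback mode up to TESTFILES first, so you can review what it
proposes before letting it set anything:

  $ @SYS$UPDATE:AUTOGEN SAVPARAMS TESTFILES FEEDBACK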

Finally, these are all OpenVMS/VAX numbers, for OpenVMS/AXP just multiply by a 
factor of at least 2. 

> But I really 
> think it is just the size of PAGEFILE.SYS and SWAPFILE.SYS that's the issue.
> Just how big is big enough?  Any ideas, comments or formulas?  My apologies if 
> the solution is glaringly obvious.

General comment though: Just not enough RAM... which seems to be a general 
problem. (Even at GCG :)... but that is Irv's story.. )

Solution?  Throw money at it. 

> Charles Alexander 
> University of Rochester Med. Ctr.
> Div. of Medical Informatics,
> 601 Elmwood Ave., Box BPHYS
> Rochester, New York 14642

Jasper Rees
Oxford University Molecular Biology Data Centre
and The Sir William Dunn School of Pathology

jasper@molbiol.ox.ac.uk


