CTOS provides the capability to define and use a caching service to improve performance on server workstations and on cluster workstations, even those without local disk. The purpose of this article is to unravel the mysteries associated with the care and use of the Cache.
Unlike the older "SPA" product, the CTOS II cache is a "write-through" cache, capable of caching both read-only and writable files. When a program issues a write to a file in the cache, both the cache block and the disk file are updated before control returns to the program. And of course, like any cache, read operations improve dramatically if the data to be read is already resident in the cache. At present, the same "write-through" cache is implemented under CTOS III, CTOS II and CTOS/XE.
Configuring the Cache Service
There are three basic parameters in Config.sys (or WSnnn>Config.sys or SrpConfig.x.sys) which control the cache. They are:
:CacheService: - used to define and activate the caching service. Note that you can also use :FileCacheService: with the same parameters; either token is acceptable.
:FileCacheDefaultEnable: - used to inform the caching service whether it should attempt to cache all eligible files opened on local disk drives. The default answer is YES.
:AgentCacheDefaultEnable: - used to inform the caching service that it should attempt to cache any eligible file that is opened on the server's disk drives (that is, across the cluster lines). The default answer is NO.
If you do not define a cache (:CacheService:) then you will not get any cache performance improvements, since the cache will not be active.
Let's look at the parameters used to define the cache and activate the caching service:
:CacheService: (BlockSize=4096, BlockCount=2048, MinWorkingSetBlockCount=32)
The first parameter, BlockSize, is set at 4096 bytes, and should not be changed for current versions of CTOS. You can eliminate the BlockSize parameter altogether, as 4096 is the default value.
The second parameter, BlockCount, controls the size of the cache. Simple arithmetic shows that 256 cache blocks equal one megabyte of cache. You must allocate a minimum of 512K bytes of cache memory if you wish to activate cache (128 blocks), and the maximum is up to you and the amount of memory on your processor.
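To make the arithmetic concrete for the example entry above (an illustration, not a sizing recommendation):
    2048 blocks x 4096 bytes/block = 8,388,608 bytes (an 8 megabyte cache)
    Minimum:  128 blocks x 4096 bytes/block = 524,288 bytes (512K)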
The third parameter, MinWorkingSetBlockCount, specifies the minimum number of cache blocks which the caching service must keep "unlocked" and available for replacement. Note that this value can be set to zero, in which case, when all cache blocks are "locked", caching of any new disk blocks is effectively turned off.
Placing a :CacheService: entry similar to the example above in your Config.sys file activates the caching service. But whether, or how, the caching service operates is also under your control.
Controlling the Cache Service
There are several means of controlling caching available to you. Every file created on every disk in the system has a set of bits in the file header which control caching. The bits can specify one of three possibilities:
* prevent caching of this file,
* enable caching of this file, or
* "don't care".
The default, which is set at file creation time, is "don't care".
To prevent a file from being cached, you use the command "Disable Caching", which sets the bit in the file header for that file that prevents the file from being cached by the caching service. Conversely, to specifically enable a file to be cached, you use the "Enable Caching" command to set the appropriate bit in the file's file header that informs the Caching Service to go ahead and cache read and write requests for the file. Both of these commands can also be used on an entire volume by specifying the volume name only (such as [Sys]), in the list of files parameter.
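As a sketch of how these commands might be used from the Executive (the file and directory names here are hypothetical, and the exact form fields may vary by release):
    Disable Caching
        File list    [Sys]<Accounting>Ledger.dat
    Enable Caching
        File list    [Sys]<Accounting>Ledger.dat
    Disable Caching
        File list    [Sys]
The last example uses the volume-only form described above to operate on the entire [Sys] volume.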
When you define the cache (with :CacheService:), the Caching Service is activated, and will automatically cache any file from local disks, except those which have been specifically disabled from caching. In other words, the entry:
:FileCacheDefaultEnable:YES
is the default, and to prevent automatic caching of all files from local disks, you would have to specify NO to the above parameter.
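Putting the configuration entries together, a minimal Config.sys fragment that activates the cache but turns off automatic caching of local files might look like this (the BlockCount shown is purely illustrative):
    :CacheService: (BlockCount=2048, MinWorkingSetBlockCount=32)
    :FileCacheDefaultEnable:NO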
The Agent Cache Feature
The "Agent Cache" feature refers to files which are opened by applications at the local workstation but which reside on disk at the server. If the parameter in Config.sys reads:
:AgentCacheDefaultEnable:YES
then any files which reside on the server and are not cache-disabled will be cached in the local workstation's cache, as long as they are opened in Read Mode. The default for this parameter is NO, so if you want Agent Caching, be sure to specify YES.
Once a file which resides at the server is opened (or re-opened) in Modify Mode, the Caching Service purges the cache blocks for that file from the cache pool at the local workstation. The local Caching Service does not coordinate updates to the cache with the Caching Service on the server. Note that if the file were cached at the server, it would still be cached there; only the entries in the local workstation cache are purged.
Note that to use the Agent Cache feature, the size of the X-Block must be large enough to hold a complete cache block. In CTOS II 3.3.1 and later OS's, and CTOS/XE 3.0.6 and later, the default size of the X-Block was changed from the original 2656 bytes to 4160 bytes to accommodate cache blocks. If the size of the X-Block at either the workstation or the server is less than 4160, the Agent Cache feature will not function.
The Agent Cache feature can be implemented on any CTOS II or CTOS III workstation OS, with or without a local file system (vClstr, vClstrLfs, pClstr, pClstrLfs). It is, of course, meaningless when configured on a Server OS (and the parameter in Config.sys is ignored by Server OS's).
Locking Files into Cache
You also have the ability to force the Caching Service to "lock" the blocks for certain files in the local cache. The "Lock In Cache" command provides this capability. You can specify a list of files to be permanently locked into the cache, making their cache blocks ineligible for replacement from the cache. This gives you a similar function to the old "SPA" features of earlier versions of CTOS.
As a general rule, however, you should let the Caching Service determine what is best to keep in the cache, as its algorithms are quite efficient, and it most often "knows" better than you what should be in the cache and what is not needed. To reverse the effects of this "locking", you use the "Unlock Cache" command, and the cache service will then purge those blocks from the cache and make those cache blocks eligible for re-use immediately.
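For illustration, locking a frequently-read file into the cache and later releasing it might look something like this from the Executive (the file name is hypothetical, and we are assuming Unlock Cache accepts the same file list parameter; check the command form on your release):
    Lock In Cache
        File list    [Sys]<Sys>PriceTable.dat
    Unlock Cache
        File list    [Sys]<Sys>PriceTable.dat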
All about Cache Memory Disk
There is another use for the cache as well: "cache memory disk". If you wish to have a cache memory disk, you must define and install the cache service, because it is the Caching Service which allocates "cache memory disk". An example of a cache memory disk configuration entry:
:MassStorage: (Class=CacheMemory, Unit=0, Device=M0, Password=M0,
    Volume=MemDisk1, MaxSectors=2000, MaxDirectories=6,
    MaxSysFiles=20, MaxFiles=200, MaxTempFiles=180)
The "MaxWhatever" parameters above (MaxDirectories, MaxSysFiles, MaxFiles, and MaxTempFiles) are all calculated from MaxSectors and do not have to be specified. As long as the value for MaxTempFiles is greater than zero, a <$000> directory will be automatically created for you.
Cache Memory Disk is allocated from the same memory as the cache pool itself. This is important to understand. As your cache memory disk grows, the Caching Service "locks" in the cache blocks that pertain to the cache memory disk. The "locked" blocks remain in the cache pool -- they are no longer eligible for replacement. In other words, once a cache pool block is used for cache memory disk, it cannot be used for any other purpose.
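As a rough sizing sketch for the example entry above (using the standard 512-byte disk sector, so that each 4096-byte cache block holds 8 sectors, and ignoring file system overhead such as the volume home block and FileHeaders.sys):
    2000 sectors x 512 bytes/sector = 1,024,000 bytes
    1,024,000 bytes / 4096 bytes per cache block = 250 cache blocks
In other words, if [MemDisk1] fills completely, roughly 250 blocks of the cache pool become permanently locked until the next boot.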
Furthermore, cache memory disk is dynamically allocated by the Caching Service. This means that you can actually configure a cache memory disk that is larger than the amount of cache memory available. The only time you run into a problem is if you actually fill up the cache memory disk to the full capacity of cache memory. It amounts to the same thing as entirely filling a disk volume, but the error code returned is 1414 instead of 230.
One other important note: to use cache memory disk on a diskless workstation, that workstation must be running a ClstrLfs operating system. The pClstr and vClstr operating systems do not have code for file system operations (naturally), and reading and writing disk (even memory-disk) files locally is a File System function. You do not have to use a ClstrLfs OS to get Agent Caching to work; Agent Cache functions correctly with any flavor of CTOS II or III workstation OS.
Remember, Cache Memory Disk is treated exactly like a normal disk - it is managed by the file system and it has a volume home block, master file directory, fileheaders.sys (but no secondary file headers) and allocation bitmap, just like a real CTOS disk drive. You can use "Format Disk" on a cache memory disk volume; in fact, if you do not specify a volume name for the cache memory disk, then it will be created uninitialized and you must then use Format Disk to initialize it. If you specify surface tests for a cache memory disk in the Format Disk command, it has the effect of immediately "locking in" all cache pool blocks which are needed for that cache memory disk.
Scratch Volume in Cache Memory Disk
If you define your scratch volume as [M0] (or the volume name you've chosen), the operating system will write all temporary files into the cache memory disk instead of using a real disk volume. This can save both significant time (disk access vs. memory access) and disk space. Just remember that cache memory disk lasts only from one boot to the next -- it is never written to an actual disk device.
The Caching Service does not release cache pool blocks when a file is deleted from cache memory disk. Once a block has been allocated to cache memory disk, it is locked for that purpose and will not be unlocked until the processor is re-booted. When a file is deleted, however, the "sectors" that it occupied are "free" and will be re-used. It is just that the cache block itself is permanently allocated to the cache memory disk and will not be released back to the pool for cache replacement purposes.
Swap Volume in Cache Memory Disk?
Generally speaking, it is not advisable to use cache memory disk for your swap file volume. There may be some advantage when you're running a number of real-mode programs under CTOS II, but under CTOS III there is no advantage at all. CTOS II places all real-mode programs in the first megabyte of memory, and if more than one real-mode program is to be run under Context Manager, a swap is forced when bringing in a new real-mode context. CTOS III uses the capabilities of the Intel 80386 and higher processors, which let real-mode programs run in "virtual 8086 mode", giving each such program its own 1MB address space for operation.
We recommend against using cache memory disk for the swap file except in very limited circumstances. Use the memory for cache, or for executable programs, instead.
When NOT to Cache
Some files should probably not be cached at all. We have found that ISAM files and ISAM index files should generally be disabled from caching. The reason is that ISAM has its own buffer space, which is effectively its own cache, and if you allow the file cache or agent cache to also buffer the data, you wind up double-caching and slowing down the performance, often significantly.
To prevent the ISAM data and index files from being cached by both ISAM and the Caching Service, simply run the Disable Caching command and supply "[Vol]<Dir>*.ISAM" and "[Vol]<Dir>*.IND" as the file list parameters. ISAM performance is affected mostly by the specific parameters that you can "tune" using the ISAM Configure command. In general, if you don't cache any ISAM files, your ISAM performance will improve.
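For example, to disable caching for ISAM data and index files in a hypothetical <ISAMData> directory on [Sys] (substitute your own volume and directory names):
    Disable Caching
        File list    [Sys]<ISAMData>*.ISAM
    Disable Caching
        File list    [Sys]<ISAMData>*.IND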
Similarly, DAM files should not be cached. These files, by their very nature, tend to be accessed randomly. There is little advantage to caching such files, as the cache block actually holds 8 sectors' worth of data, and reading that much data (when the object is rapid access to data) can potentially defeat the speed advantage of a random-access file.
Special Notes on the XE-530
The XE-530 was the first CTOS processor which supported the Caching Service. It also provided support for a special type of "disk" known simply as memory disk, as opposed to the cache memory disk we've been discussing here. Both types are supported on the XE-530. The memory disk on the XE-530 is a different type of animal: it is significantly slower than cache memory disk, but it has the additional feature that it can span several processor boards.
This type of "memory disk" was invented to allow efficient booting of the XE-530 (or the older XE-520) from QIC tape. The "memory disk" on the bootable QIC tape contained run files and workstation images so that the XE would run with [M0] as its [Sys] volume, solely for the purpose of initializing a real disk to be the [Sys] volume. There is no other reasonable use for this XE-type memory disk. Do not define it unless you are constructing a bootable QIC tape for your XE-530. It is not supported on workstations. The :MassStorage: parameter for this older type of memory disk was Class=Memory (instead of CacheMemory).
The XE-530 does support the same type of cache memory disk as is supported on workstations. All of the configuration parameters are identical for the XE, but you can only define a cache memory disk on a processor which has local disks attached to it (a GP/SI board). Again, the reason for this is that the cache memory disk is managed by the file system, and GP boards that don't have the SCSI controller board attached do not have the file system code in the OS that runs on them.
On the XE-530, you can also define a third type of cache, known as a remote cache. This type of cache is a separate pool, used as a cache for another board in the XE-530. Typically, this type of cache is used to provide a large cache for an older FP board (80186-based with ST-506 disks), which was limited in the amount of memory it could address. You can define a "remote cache pool" for an FP board, and that cache pool can be defined on a GP board and be considerably larger than the amount of memory on the FP board.
An example of the entries in SrpConfig.sys looks like:
:RemoteCachePool: (Name=Swimming, BlockCount=256)
:RemoteCacheClient: (Name=FP00, Pool=Swimming)
You must define the configuration entries for the client processor board before you define the above entries on whatever board will host the cache pool. Notice that in the example we've defined a 1 megabyte cache for an FP board that only has 768K of memory to begin with. You must define the name of the pool, and when you define the client, you must use the same name for the pool. The above entries would be located in the :Processor:GP00 portion of SrpConfig.x.sys.
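To check the arithmetic behind that 1 megabyte figure, using the 4096-byte block size discussed earlier:
    256 blocks x 4096 bytes/block = 1,048,576 bytes (1 megabyte)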
Monitoring the Cache
There is a utility which you can use to examine the performance of the Caching Service. It is called "Cache Status". This utility provides a lot of useful information concerning the current state of your cache. It is most useful if you answer "yes" to the prompt "Details?" on the command form. A sample of the display is shown below:
System Memory Size       :  4194304
Cache Pool Size          :  1057358  ( 25%)
Block Size               :     4096
Block Count (total)      :      256
Min Working Set Blocks   :       20  (  7%)
Current Blocks Valid     :       23  (  8%)
    Available            :      233  ( 91%)
    Locked-in            :       21  (  8%)
    Busy                 :        2  (  0%)
Total Cache Probes       :      835
    Blocks Obtained      :      175  ( 20%)
    Hits                 :      151  ( 86%)
    Blocks Unavailable   :        0  (  0%)
The information shown is, for the most part, self-explanatory. The important numbers to examine carefully are the number of blocks currently "locked in" and the percentage of "hits".
If the number of blocks which are "locked in" begins to get too large, then that means that the number available for the Caching Service to use for replacement is getting smaller. Of course, some of the "locked in" blocks may represent files which you have specifically placed there using the "Lock In Cache" command. And other "locked" blocks represent blocks allocated to cache memory disk (if there is one defined) or to Agent Cache.
The percentage of "hits" represents the times that the Caching Service found the desired data in the cache, as opposed to having to go to disk to obtain the data. Generally speaking, if the percentage of "hits" is above 75%, your cache is functioning well. If the percentage of "hits" is below 50%, then you should make the cache larger, "lock" fewer files, or take a look at the cache memory disk, as it may be interfering with the overall operation of the cache.
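To make the arithmetic concrete, the percentages in the sample display above appear to be computed against the figure they are nested under (a reading based on the sample numbers themselves, not on the manual):
    175 blocks obtained / 835 total probes  = about 20%
    151 hits / 175 blocks obtained          = about 86%
By the rule of thumb above, an 86% hit rate indicates a healthy cache.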
The other numbers, for the most part, are not particularly important, though they give you a picture of what's happening. Keep the "hits" percentage high and the "locked" percentage low, and you've done your job.
If you have used the "Disable Caching" command on some files and wish to know if a particular file is disabled or enabled, use the Cache Status command and specify the file or files on the command line ("File List"). Cache Status will tell you, for each file, whether it is disabled or enabled for caching.
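A sketch of that usage, with a hypothetical file list:
    Cache Status
        File List    [Sys]<ISAMData>*.ISAM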
Cache Programming
The CTOS Procedural Interface manual provides a good description of the system-common procedures used by the Caching Service. These calls are available to any CTOS programmer who wishes to use the cache pool concept. Basically, you can allocate some memory, initialize a cache pool, and use the calls to manage your cache pool, taking advantage of the "LRU" algorithms which are part of the caching system-common routines. Refer to the manual for details.
Summary
The CTOS Caching Service provides the means to significantly improve system performance. It can handle caching of local disk files, files from the server and cache memory disk. Configuration and control of the cache is accomplished by simple entries in Config.sys (or SrpConfig.sys).
When used properly, cache can be your friend. When used incorrectly, it can be a source of problems, instead of a solution to them. Good luck with the cache.