
Chapter 12
Practical Applications of DLB Plus Reports and Graphs

DLB Plus is an invaluable tool for system managers and MIS directors. The following section is devoted to explaining how the DLB Plus reports and graphs can be used for both day-to-day system management and long-range capacity planning.

12.1 RSHOT - Hot File Analysis

There are many means available to improve system performance on a VAX. Many of these require that the system manager know which files are being most heavily accessed. Usually, there are obvious choices, but sometimes these heavily used files can be hard to identify.

Prior to DLB Plus, there was no easy way to make this determination. The "Hot File Analysis" report identifies the files with high I/O activity. The files are listed in descending order by their total number of I/Os or by their I/O rates.

You can use the information from this report and the methods suggested in Chapter 14, Reducing OpenVMS I/O Bottlenecks, to help reduce file I/O bottlenecks.

12.2 RSCACHE - Suggested Files for Data Caching

Data files that have a high percentage of read I/Os are very good candidates for local and global buffering or data caching. The "Files Suggested for Data Buffering or Data Caching" report presents the system manager with a list of files that are candidates for local or global buffering or for data caching.

A file is included on this list if its read percentage is seventy-five percent (75%) or more. If a file is already buffered and it still appears on this list, the number of local and global buffers set for the file should be increased.

You can use the information from this report and the methods suggested in Chapter 14, Reducing OpenVMS I/O Bottlenecks, to help resolve these problems.

12.3 RSFRAG - File Fragmentation Analysis

OpenVMS creates and extends files in groups of contiguous disk blocks. Each group is called a FRAGMENT or EXTENT. As files are created and extended, the number of file fragments increases. When a file is badly fragmented, a single logical I/O request can require accessing multiple file fragments, which in turn causes multiple physical I/O requests and excessive disk head movement. This is called a split I/O.

Each process that opens a file copies into its address space the locations of SOME of the file's fragments. By default, only eight fragments are copied into this file mapping area. If the process tries to access a fragment of the file not currently mapped, the file header must be reread into memory and a new set of file fragments copied into the mapping area. This is called a window turn.

Each file has at least one 512-byte file header. The file header contains information such as the file organization, record size, backup date, revision date, protection, and extent mapping data. If the file requires more fragments than can be mapped in the primary file header, additional file headers must be created to hold the extent mapping data. The file becomes a "multi-header" file.
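
As a quick spot check outside of DLB Plus, the Digital-provided DUMP utility can display a file's header(s), including the extent mapping data. The following is a minimal sketch; the file name is only an example:

    $ DUMP/HEADER/BLOCKS=COUNT:0 SALESM.DAT 

The map area of the display lists one retrieval pointer per fragment. A long list of retrieval pointers, or more than one header, marks the file as a candidate for defragmentation.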

Badly fragmented files cause poor performance.

You can use the information from this report and the methods suggested in Chapter 14, Reducing OpenVMS I/O Bottlenecks, to address file fragmentation issues.

12.4 RSIMAGE, RDIMAGE, GIMAGE, GRIMAGE - Analysis by Image

The "I/O Analysis by Image" reports and graphs are used to determine the I/O load that an image places on your system. This information is helpful in scheduling operations to make the best use of your system's capacity. An image that is using a lot of system resources might be rescheduled for a non-peak time.

The information from this report is very helpful in capacity planning. If the system manager knows that the number of users of a particular application will grow, the manager can now determine the exact impact this will have on the system.

12.5 RSUSER, RDUSER, GUSER, GRUSER - Analysis by User

It is often difficult for a system manager to know exactly what specific users are doing...what kinds of resources they are consuming. As the number of users of a system grows, this task becomes even more difficult. The "I/O Analysis by User" report shows the system manager exactly which images a user is running and which files are being accessed. This allows the system manager to identify growth and usage patterns.

For capacity planning, it is important to know what impact specific users are having on the system. For example, if the sales staff doubles, the impact it has on the system will also roughly double. These reports show the system impact of each user.

12.6 RSDEVICE, RDDEVICE - Analysis by Device

The information provided by the "I/O Analysis by Device" report helps the system manager to balance the load on disk storage devices. By spreading I/O activity across the various devices evenly, system performance can be improved.
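
As a complementary live view of per-device activity, the Digital-provided MONITOR utility can be run alongside this report. A minimal sketch:

    $ MONITOR DISK/ITEM=QUEUE_LENGTH 

Devices whose I/O request queues are consistently longer than those of the other disks are candidates for having some of their most active files moved elsewhere.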

12.7 RSPID, RDPID - Analysis by Process ID

The "I/O Analysis By PID" reports assist the manager in assessing the impact various users have on the system. A single user can have many processes running on the system at the same time. By identifying each process and its I/O impact, the manager can better understand the impact of each user.

12.8 RDFILE - Analysis by File

Often it is useful to know which processes are accessing a given file. Using this information you can plan for expanded use of a given file or application.

The "Analysis By File" report shows you for each file:

12.9 LHOLDER, LWAITER - Holding/Waiting Processes

From time to time a process is granted a resource (such as a lock on a record) and then does not release it for a long time. When this happens, other processes that want access to the resource must wait, perhaps forever. Prior to DLB Plus, it was difficult or impossible to locate the holding process (the process holding the locked record).

The LHOLDER report lists each process holding locks that other processes are waiting for. For each holder, all processes waiting for the holder's resource are listed (waiters). The LWAITER report lists each waiter and the process holding locks that the waiter needs.

The LIVE menu item on the Master Menu will display the LHOLDER and LWAITER report options.

Note

In a cluster environment, the LHOLDER and LWAITER reports must be run from the node where the waiters are logged on.


Chapter 13
Creating Batch I/O Activity Reports

The Batch I/O Scan Master Menu option creates a report of a batch command file's I/O activity.

This report is helpful when you need more information on what a batch process is doing. For example, if you have a batch process that is running longer than you think it should, you can run the Batch I/O Scan report to see how much time each command is taking. Then, you can use the I/O analysis information to see where the batch procedure can be streamlined.

The Batch I/O Scan report shows, for each command in the batch procedure, the files it accessed, the read and write counts, and the elapsed time.

Before this report can be created, several instruction lines must be added to the batch command file and the batch file must be run. The following example explains how to set up the batch process so the Batch I/O Scan report can be run.

13.1 Example

Setting Up the Command File

The sample command file, SCANTEST.COM, contains the following lines:


    $ intouch/source user:[ds]scantest 
    $ dir scantest.int 
    $ define/user sys$output x.tmp 
    $ type user:[ds]login.com 

In order to analyze the I/O activity of the SCANTEST.COM command file when it is run, several instruction lines need to be added to the beginning of the command file. The required instructions are contained in TTI_DLB:DLB_SCANLOG.COM. The lines are:


    $! 
    $! DLB_SCANLOG.COM - Set up conditions for Batch I/O scanning 
    $!                 - Copyright 1995,1996 Touch Technologies, Inc. 
    $! 
    $ set prefix "$!11%D !8%T$" 
    $ set watch/class=major file 
    $ set verify 

You can use an editor to add the lines. You can do either of the following:

  1. add this single line to the beginning of your command file:


        $ @tti_dlb:dlb_scanlog.com 
    

  2. copy the contents of the TTI_DLB:DLB_SCANLOG.COM file into the beginning of your command file.

After the lines are added to the SCANTEST.COM file, the command file will look like this:


 *  $ @tti_dlb:dlb_scanlog.com 
    $ intouch/source user:[ds]scantest 
    $ dir scantest.int 
    $ define/user sys$output x.tmp 
    $ type user:[ds]login.com 
 
 
 * added line 

or like this:


 *  $! 
 *  $! DLB_SCANLOG.COM - Set up conditions for Batch I/O scanning 
 *  $!                 - Copyright 1995,1996 Touch Technologies, Inc. 
 *  $! 
 *  $ set prefix "$!11%D !8%T$" 
 *  $ set watch/class=major file 
 *  $ set verify 
    $ intouch/source user:[ds]scantest 
    $ dir scantest.int 
    $ define/user sys$output x.tmp 
    $ type user:[ds]login.com 
 
 
 * added lines 

Run the Batch Command File

Submit and run the batch command file as you normally would.

When the batch command file is run, I/O activity information will be stored in the LOG file and you will be able to create a Batch I/O Report.

Creating the Report

Select the Batch I/O Scan item from the Master Menu.

You will be asked for the name of the LOG file to scan and report on. For this example, you would enter SCANTEST (.LOG is not required). The wildcard characters * and % can be used (e.g. SCAN*).



 DLB Plus x.x             Dynamic Load Balancer PLUS                19-Jan-1996 
                              Scan Batch LOG files                              
 
Log file: 
 
 
 
 
 
 
 
 
 
 
Log file to scan? scantest_____________________________________________________ 
 
 
EXIT = Exit                                                \ = Back  HELP = Help

After you enter the log file to scan, you will be asked whether you want to proceed.

Select Yes to proceed with report creation.

The report will look like this:


January 19, 1996 11:32              DLB Plus                           Page:   1 
                    I/O Scan of USER_ROOT:[DS]SCANTEST.LOG;1 
 
Command 
  Filename                                             Reads    Writes Total I/O 
-------------------------------------------------- --------- --------- --------- 
$ intouch/source user:[ds]scantest 
  INTOUCH.EXE                                             33         0        33 
  SCANTEST.INT                                             1         0         1 
  SALES_MASTER.STR                                         1         0         1 
  SALES_MASTER.DAT                                         4         0         4 
  SALES_MASTER.DEF                                        60         0        60 
  SALES_DETAIL.STR                                         1         0         1 
  SALES_DETAIL.DAT                                        22         2        24 
  SALES_DETAIL.DEF                                         4         0         4 
                                                   --------- --------- --------- 
  Elapsed  00:00:02                                      126         2       128 
 
$ dir scantest.int 
  DS.DIR                                                   1         0         1 
                                                   --------- --------- --------- 
  Elapsed  00:00:00                                        1         0         1 
 
$ type user:[ds]login.com 
  LOGIN.COM                                                1         0         1 
  X.TMP                                                    0         2         2 
                                                   --------- --------- --------- 
  Elapsed  00:00:01                                        1         2         3 
 
                                                   ========= ========= ========= 
*** Log File Totals ***   Elapsed  00:00:03              128         4       132 


Chapter 14
Reducing OpenVMS I/O Bottlenecks

There are two major actions that can be taken to eliminate the delays caused by high file I/O counts: speeding up the I/O operations and eliminating unnecessary I/O operations. Both are described in the following sections.

14.1 Speeding up I/O Operations

Speeding up a file's I/O operations can be accomplished by moving the file to a faster or less busy device, or by spreading the file across multiple spindles (as in a shadow set). Both read and write operations can be sped up using this method.
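
For example, a heavily accessed file can be copied to a less busy disk and then located through a logical name, so that applications do not have to change. The following is only a sketch; the device names, directory, and logical name are hypothetical, and the copy should be made while the file is closed:

    $ COPY/CONTIGUOUS DISK$BUSY:[DATA]SALESM.DAT DISK$FAST:[DATA]SALESM.DAT 
    $ DEFINE/SYSTEM SALES_DATA DISK$FAST:[DATA]   ! programs open SALES_DATA:SALESM.DAT 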

14.2 Eliminating I/O Operations

Eliminating file I/O operations can be accomplished in a number of ways. Some of these ways include:
  <> Host-based data caching speeds up file reads
  <> RMS file conversion speeds up both reads and writes
  <> RMS global buffering speeds up file reads
  <> RMS local buffering speeds up both reads and writes
  <> Disk defragmentation speeds up both reads and writes
  <> File defragmentation speeds up both reads and writes

Note

Both RMS local buffering and global buffering can be requested for a given file.

14.3 Host Based Data Caching

Host-based data caching uses free memory for high-speed data caching. I/O requests to the file are intercepted by the caching system. If the I/O request is a write operation, the data is passed through to the disk device, and no speedup occurs. If a read I/O request is intercepted and the requested data is already in the memory data cache, the request is satisfied with a very fast memory move; no actual I/O to the disk occurs. Host-based data caching systems are available from a number of commercial software vendors.

14.4 RMS File CONVERSION

As RMS-based files are written to, they become internally fragmented and disorganized. Over time, both read and write operations cause extra physical I/O operations to the RMS file. The Digital-provided CONVERT utility can be used to defragment and reorganize RMS files. To convert the file SALESM.DAT, enter the following at the DCL prompt:


        $ CONVERT salesm.dat  salesm.new 
 
        $ RENAME  salesm.new  salesm.dat;   ! note the trailing ";" 

This two-step process safely converts and reorganizes an RMS file.
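
CONVERT can also re-tune the file's internal design (bucket sizes, fill factors, areas) instead of simply rebuilding it with its current design. The sequence below is a sketch of the usual approach using the Digital-provided ANALYZE/RMS_FILE and EDIT/FDL utilities; check the exact qualifier forms against your OpenVMS documentation:

    $ ANALYZE/RMS_FILE/FDL/OUTPUT=salesm_stats.fdl salesm.dat 
    $ EDIT/FDL/ANALYSIS=salesm_stats.fdl/NOINTERACTIVE salesm.fdl 
    $ CONVERT/FDL=salesm.fdl salesm.dat salesm.new 
    $ RENAME salesm.new salesm.dat; 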

Note

If the CONVERT fails, DO NOT DO THE RENAME. THIS ENSURES THE INTEGRITY OF YOUR ORIGINAL UNCONVERTED FILE.

14.5 RMS Buffering

RMS moves data from the disk into memory buffers. From the buffers, data is moved into the application program. Whenever the requested data cannot be found in a data buffer, RMS must access the disk to find the data. Accessing the disk is much slower than getting information from a data buffer.

RMS provides two types of file data buffers: local buffers and global buffers.

Local data buffers are not shared among processes. Local buffers can only be accessed by the process that they were created for. When RMS opens an indexed file, by default it creates two local data buffers.

Global data buffers are shared among processes. Global buffers can be accessed by all processes that have the file open. By default RMS does not create any global data buffers.

File I/Os can be reduced using either or both of these buffering methods. However, increased buffering requires additional system resources. To avoid running out of system resources, both SYSGEN and AUTHORIZATION (SYSUAF) parameter changes are needed (see Section 14.5.4, Authorization Parameter Changes).
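
The parameters involved are listed in Section 14.5.4. As an illustration of where to look, RMS global buffers are implemented as global sections, so the SYSGEN parameters GBLPAGES and GBLSECTIONS must be large enough; per-user quotas are changed with AUTHORIZE. The following is a minimal sketch (the user name SMITH is hypothetical):

    $ RUN SYS$SYSTEM:SYSGEN 
    SYSGEN> SHOW GBLPAGES 
    SYSGEN> SHOW GBLSECTIONS 
    SYSGEN> EXIT 
    $ RUN SYS$SYSTEM:AUTHORIZE 
    UAF> SHOW SMITH 
    UAF> EXIT 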

14.5.1 RMS Local Buffering

RMS indexed files with high file I/O counts can benefit from increased local buffering. As the number of local buffers is increased, more I/O requests can be satisfied from the local buffer cache. In some cases, even write requests can be sped up using local buffering (for deferred write operations).

The number of local buffers used by RMS indexed files can be set on either a per-process basis or system wide. In either case, the Digital-provided SET RMS_DEFAULT command (abbreviated as SET RMS in the examples below) is used to specify the number of local buffers.

For example, to set the number of local buffers used for indexed files for ALL users on the system to eight, the following DCL command is used:


    $ SET RMS/SYSTEM/INDEX/BUFFER=8 

To set the number of local buffers used for indexed files for JUST THIS PROCESS to ten, the following DCL command is used:


    $ SET RMS/INDEX/BUFFER=10 
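
You can verify the current settings with the SHOW RMS_DEFAULT command, which displays both the process and the system-wide multibuffer counts:

    $ SHOW RMS_DEFAULT 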

14.5.2 RMS Global Buffering

RMS based files with high read I/O percentages (75% or greater) can benefit from increased global buffering. As the number of global buffers is increased, more read I/O requests can be satisfied from the global buffer cache. Write requests are written directly to the disk and are not sped up by global buffering.

To specify the number of global buffers to be used on a file, the file must be closed. To set the number of global buffers on file SALESM.DAT to thirty, the following DCL command is used:


    $ SET FILE salesm.dat/GLOBAL=30 
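
You can confirm the setting with DIRECTORY/FULL, which reports the file's global buffer count among its file attributes:

    $ DIRECTORY/FULL salesm.dat 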

14.5.3 Monitoring RMS Cache Hits

OpenVMS versions 5.5 and higher provide a utility for monitoring RMS buffer caching activity. To perform RMS monitoring, the file to be monitored must first have the statistics option set.

In order to SET the statistics option on a file, the file must be closed. To set statistics on the file SALESM.DAT, the following DCL command is used:


    $ SET FILE salesm.dat/STATISTICS 

After the statistics option has been set on the file, the following MONITOR command is used:


    $ MONITOR RMS/FILE=salesm.dat/ITEM=CAC 

The Digital-provided MONITOR RMS utility provides both LOCAL and GLOBAL buffer caching information. The higher the cache hit percentage shown in the display, the better the I/O performance of the file.

Example 14-1 Caching Information


                      VAX/VMS Monitor Utility 
                        RMS CACHE STATISTICS 
                           on node TTI 
                         18-Sep-1995 12:52:11 
TTI_SALES:SALESM.DAT;1 
Active Streams:   2           CUR        AVE        MIN        MAX 
 
  Local Cache Hit Percent    37.00      36.65       0.00      40.00 
  Local Cache Attempt Rate   51.16       5.53       0.00      51.16 
  Global Cache Hit Percent   57.00      57.02       0.00     100.00 
  Global Cache Attempt Rate  31.89       3.50       0.00      31.89 
  Global Buf Read I/O Rate   13.95       1.48       0.00      13.95 
  Global Buf Write I/O Rate   0.00       0.00       0.00       0.00 
  Local Buf Read I/O Rate     0.00       0.02       0.00       0.33 
  Local Buf Write I/O Rate    0.00       0.00       0.00       0.00 
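
When monitoring is complete, the statistics option can be removed in the same way it was set (again, with the file closed):

    $ SET FILE salesm.dat/NOSTATISTICS 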

