To use FairCom DB Professional to full advantage, a number of important concepts should be understood. Many of these concepts are common approaches used within the C programming language; some are more specific to FairCom's approach to file access using FairCom DB.
This chapter is a thorough introduction to core data and index component definitions needed to build a successful database application.
- Data Record Positions - A FairCom DB data file is a collection of data records and other metadata. Simple rules apply to locating the positions of the records in the file.
- Data and Index Definitions - At the surface, FairCom DB stores data in two types of files: data files and index files. Dive deeper into the file modes in these sections.
- c-tree Keys - FairCom DB maintains keys in an index file that permits records to be rapidly searched.
- Key Segment Modes - The Key Segment Mode value tells c-tree how to locate and interpret the information necessary to create an ISAM key segment key.
- File Recovery - FairCom DB updates information stored in the first record of the file, called the "header," whenever you add a record or key. If this update is interrupted before the header can be updated, c-tree provides functions for file recovery.
- Advanced Space Reclamation - FairCom DB uses a space management index to make space available for reuse when a variable-length record changes size and is moved in the file.
- Online Compact - FairCom DB supports a feature to compact and rebuild indexes while files are opened and in use.
- Transaction Processing Overview - FairCom DB provides a set of functionality to support transaction processing, ensuring that any committed updates to the data and index files remain available even in the event of a system failure. (For more information on transaction processing, see Data Integrity.)
- Default Temporary File Path in Standalone and LOCLIB Models - FairCom DB supports the ability to set a default path for c-tree temporary files created when rebuilding or compacting c-tree files.
- Error Handling - This section breaks down the details in the various parts of the error codes returned by c-tree functions.
- Low-Level Functions - FairCom DB includes a full range of low-level functions that provide developers with complete access to all the features of c-tree.
- c-tree Constraints - This section details the few constraints that developers should acknowledge when planning an efficient FairCom DB application.
Data Record Positions
Data record positions are absolute byte offsets from the beginning of the data file, where the first byte of the file is at offset zero. For fixed record length data files, the byte offsets corresponding to the beginning of each record are guaranteed to be a multiple of the record length specified for the data file at the time of creation. Therefore, a data record position in a fixed record length data file can be converted to a relative record number by dividing the data record position by the record length.
Buffer Regions
As a reminder, many c-tree functions require you to pass a pointer to a buffer area where either a data record or key value will be written. c-tree has no way to determine if the area in which it will write the requested information is actually large enough to contain the information. If it is not, c-tree will contaminate data and/or executable code. CAREFULLY review all areas used as buffers to ensure that they are sufficiently large.
Fixed-Length Data Record Delete Flag
FairCom DB uses the first byte of fixed-length data records for a delete flag. When a fixed-length data record is deleted, because of a call to either DeleteRecord() or ReleaseData(), the first byte is set to 0xff. The rebuild and compact utilities check this first byte to determine if a record is active or deleted. Also, whenever a record is about to be reused by NewData(), which is automatically called by AddRecord(), the first byte is checked to be sure that an active record is not about to be reused by mistake.
It is not necessary to reserve this byte if your application stores only ASCII characters in the first byte; however, you should not store binary data in the first byte or set the first field in your DODA to a binary or numeric type.
To protect against this error, c-tree checks the beginning of records for certain values. AddRecord() and ReWriteRecord() return FBEG_ERR (553) if a record begins with either a delete flag (0xFF) or a resource mark (0xFEFE). This capability can be disabled:
- For non-server applications, disable the check by compiling with NO_ctBEHAV_CHECKFIX defined.
- For client/server applications, the configuration keyword COMPATIBILITY NO_CHECKFIX turns off this check.
Resource Records in Fixed-Length Files
Resource records embedded in fixed-length data files begin with the resource mark 0xFEFE in their first two bytes. See Resources for more details.
Data and Index Definitions
FairCom Server stores data in files. Two main types of files are discussed in this section:
- Data files, which store the data found in database tables.
- Index files, which provide rapid access to the data by storing an index to the data.
This section also discusses different file formats, file modes, and other details.
File Formats
FairCom DB (in releases V7 and later) supports two different, but related, file formats:
- Standard file format - The default format for all versions of c-tree Plus and FairCom DB.
- Extended file format - A format containing an extended header, which supports additional features, such as Huge and segmented file support. See Extended Feature Support.
The extended 8-byte file creation routines (e.g., CreateDataFileXtd8(), CreateIFileXtd8(), etc.) create Extended files by default. Extended files are not compatible with c-tree Plus V6. However, the extended 8-byte file creation functions will create Standard files when using the ctNO_XHDRS extended file mode. The existing file creation functions (e.g., CreateDataFile(), CreateIFileXtd(), etc.) create only Standard files.
The FairCom Drivers read both Standard and Extended files and create Standard files.
Data and Index File Numbering
Once a file is opened for use it is referred to by its file number. Therefore, each data and index file must be assigned a unique number by the programmer. Data files, index files and index members all draw file numbers from the same pool; therefore, data files and indexes cannot have the same number assigned.
An index member is an index stored in the same physical file with other indexes. The first additional index member in an index file is given a keyno one greater than the keyno of the host index. The second index member in an index file is given a keyno two greater than the host keyno, and so on.
For example, if data file 0 has 4 total indexes (1 primary index and 3 members), the total consumed file numbers would be 5 and would be numbered 0-4 (the primary index would be file number 1).
The maximum number of c-tree files that can be simultaneously opened is controlled by #define MAXFIL found in ctopt2.h, located in the ctree/include directory. MAXFIL also controls the largest file number that may be used. To adjust the define from its default value of 1024 (2048 for client libraries), place the following define in ctoptn.h to override the MAXFIL define:

#define ctMAXFIL 150 /* set the max number of files to 150 */
Multi-user, non-server implementations using a dummy lock file have additional file numbering constraints. See the Multi-User Concepts chapter of this manual for further details.
File Modes
When c-tree files are created and/or opened, their characteristics are partly determined by the filmod parameter. Some of these modes will be discussed in more detail in later chapters. filmod determines:
- If a file is opened in a standard manner or in a special manner which avoids problems with limited file descriptors;
- If a file is to be shared among multiple users or used exclusively by one user;
- If a data file is composed of fixed-length or variable-length records;
- The method of transaction management for the file;
- If a file is to contain Resources;
- If a file is to be a Superfile.
This section explains several important aspects of file modes:
- Values of the filmod parameter.
- The relationships between file modes.
- Considerations when opening virtual files.
- Fixed versus Variable-length Records
- Multi-User File Mode
- I/O Management
- More information about file modes
filmod Values
At file creation and open, the filmod parameter is used to specify the important file characteristics discussed above. Due to the number of different options it is not practical to show you all possible combinations.
When you are asked to supply the filmod parameter you can OR the various options together. For example:
(ctVIRTUAL | ctEXCLUSIVE | ctFIXED)
(ctPERMANENT | ctSHARED | ctFIXED | ctTRNLOG | ctDUPCHANEL)
Certain options are mutually exclusive, so they should not be used together. Only one option should be used from each of the following groups:
ctVIRTUAL or ctPERMANENT
ctEXCLUSIVE or (ctSHARED and/or ctREADFIL)
ctFIXED or ctVLENGTH
ctPREIMG or ctTRNLOG
When creating a data or index file you MUST specify one option from each of the FIRST THREE groups shown above.
Most file modes are applicable to data and index files. The following four exceptions should only be applied to data files:
ctSUPERFILE ctVLENGTH ctCHECKLOCK ctCHECKREAD
The following table defines the values for the file modes. You should always use the defined name for the values if possible. However, you need the numeric values if you are using an ISAM parameter file.
These values can be found in ctport.h.
| File mode | Decimal value | Hexadecimal value |
|---|---|---|
| ctEXCLUSIVE | 0 | 0x0000 |
| ctSHARED | 1 | 0x0001 |
| ctVIRTUAL | 0 | 0x0000 |
| ctPERMANENT | 2 | 0x0002 |
| ctFIXED | 0 | 0x0000 |
| ctVLENGTH | 4 | 0x0004 |
| ctREADFIL | 8 | 0x0008 |
| ctPREIMG | 16 | 0x0010 |
| ctTRNLOG | 48 | 0x0030 |
| ctWRITETHRU | 64 | 0x0040 |
| ctCHECKLOCK | 128 | 0x0080 |
| ctDUPCHANEL | 256 | 0x0100 |
| ctSUPERFILE | 512 | 0x0200 |
| ctCHECKREAD | 1024 | 0x0400 |
| ctDISABLERES | 2048 | 0x0800 |
| ctMIRROR_SKP | 8192 | 0x2000 |
| ctOPENCRPT | 16384 | 0x4000 |
| ctLOGIDX | 32768 | 0x8000 |
| ctInsertOnly | 134217728 | 0x08000000 |
File Mode Relationships
The file modes within c-tree break down into three categories: default, permanent, and temporary.
Default File Modes
If no file mode is specified, the default file modes are used. The default file modes are listed below:
- ctEXCLUSIVE
- ctVIRTUAL
- ctFIXED
If ctSHARED is not specified, the file mode includes ctEXCLUSIVE by default. If ctPERMANENT is not specified, the file mode includes ctVIRTUAL by default. If ctVLENGTH is not specified, the file mode includes ctFIXED by default.
Permanent File Modes
This category of file modes is set at file creation time. The permanent file modes are listed below:
- ctFIXED or ctVLENGTH
- ctPREIMG or ctTRNLOG
- ctLOGIDX
- ctSUPERFILE
- ctDISABLERES
- ctWRITETHRU
- ctCHECKLOCK or ctCHECKREAD
Any of these file modes used when the file is created are attached to the file. Subsequent opens do not need to include these file mode attributes.
The file must be completely recreated to add or remove ctVLENGTH or ctSUPERFILE modes. If you subsequently desire to enable resources, they can be permanently added by EnableCtResource(). UpdateFileMode() can modify the other permanent file modes.
The last three file modes, ctWRITETHRU, ctCHECKLOCK, and ctCHECKREAD, can be added at file open time as temporary modes if they are not already present as permanent modes. This means the feature will be available while the file is opened. Once the file is closed, the feature is not available, unless the file mode is present the next time the file is opened.
Temporary File Modes
The rest of the file modes available in c-tree can be dynamically changed when the file is opened. Ordinarily, it is best to have the data file and its associated indices opened with the same file modes, except for ctVLENGTH, which does not apply to indices.
Read-only and Shared File Modes
c-tree’s interpretation of “opening a file in read-only” mode (ctREADFIL) needs some clarification. When a file is opened with ctREADFIL mode, c-tree requires that all users who have this file open be in “read-only” mode, and will not allow the file to be opened if anyone else has it open for writes or updates. In addition, once you have the file opened with ctREADFIL, no one else can open it unless they also use ctREADFIL. This gives you a way to ensure that no data changes will occur to the file while you have it opened in ctREADFIL mode.
This definition sometimes causes confusion for users who want to open the file “read-only” for themselves, say a query program, yet want to allow other programs to continue updating the file. To accommodate this situation, support was added in c-tree V7.12 for (ctREADFIL | ctSHARED). Opening a file with the ctREADFIL mode OR-ed with the ctSHARED mode protects the opening application from writes (i.e., enforces read-only), while allowing other programs alternate access privileges.
Therefore c-tree supports both modes:
- ctREADFIL: restrict all user access to “read-only” ensuring that no one else can modify the file from underneath you.
- ctREADFIL | ctSHARED: enforce read-only for your specific application while allowing others full access.
An attempt to update data or resources to a file opened in ctREADFIL mode will result in a SWRT_ERR (458).
Note: This only applies to client/server and bound server applications. For standalone models, if both ctREADFIL and ctSHARED are turned on, ctSHARED is ignored.
Insert Only - prevent updates and deletes
A file mode (ctInsertOnly) is available to prevent updates and deletes to data files. This mode also prevents all changes using the low-level WRTREC/WRTVREC. To enable this mode at create time, include the ctInsertOnly bit in XCREBLK.splval passed to CreateIFileXtd8().
PUTHDR() has been extended to support the new mode: ctInsertOnlyhdr.
To enable this mode on existing files, call PUTHDR(datno, YES, ctInsertOnlyhdr). To disable on existing files, call PUTHDR(datno, NO, ctInsertOnlyhdr).
This requires exclusive access to the file.
An ADMIN (or other user) must call PUTHDR to disable ctInsertOnly mode to be able to update the file.
PUTHDR calls for ctInsertOnly can be replicated.
Virtual File Open
The following file modes are related to virtual file open:
- ctPERMANENT
- ctVIRTUAL
Many operating systems and/or C compiler run-time libraries limit the number of files that can be opened at one time. To free the application code from worries over the number of files opened at one time, c-tree implements two types of file opens: ctPERMANENT and ctVIRTUAL. A permanent file open causes the file to be opened and stay open until the program executes a file close. This is the traditional type of file open. A virtual file open causes the file to be opened, but allows it to be transparently closed and reopened in order to allow other virtual files to be used.
When it is necessary for a virtual file to be temporarily closed, c-tree selects the least recently used virtual file. This file will remain closed until it is next used, at which time it will be automatically reopened. This strategy causes c-tree to use all available file descriptors.
This activity is transparent to both the user and the programmer. You do not have to treat virtual files any differently than you would a permanent file. There are only two effects from using virtual files:
- The user may notice a slowdown in performance when the virtual files are being opened and closed.
- The programmer will be less likely to have to deal with run-time errors caused when more files are opened than the operating environment supports.
Applications that need more files open than the operating environment permits will run into this limit. By setting the filmod of some of the files to ctVIRTUAL, c‑tree allows the application to have more files open than the limit. Alternatively, some compilers provide a way to increase the number of file handles available to an application; this option is advisable in standalone multi-user applications using a large number of files. Consult your compiler documentation for instructions on increasing the file handle limit.
Fixed versus Variable-length Records
The following file modes are related to fixed and variable-length records:
- ctFIXED
- ctVLENGTH
When creating a data file, filmod specifies whether the data records are fixed-length or variable-length. Once a data file is created, its record-length characteristics cannot be changed (i.e. ctFIXED or ctVLENGTH file mode and fixed-length size). The remaining variable-length portions of the record are truly dynamic. Because variable-length records require a little more processing and some extra code, it is advisable to use fixed-length records unless variable-length records are actually necessary.
The file modes are ctFIXED and ctVLENGTH, which can be used with the other file mode constants. The ctVLENGTH file mode should not be applied to index files.
Fixed-length data files can be created with record lengths in the range from 5 bytes to 65,535 bytes.

Variable-length data files are created with a defined minimum record length. The minimum record length specifies the smallest data record that will be added to the file. Conceptually, this minimum record length corresponds to the fixed-length fields that may be part of your record structure. The minimum fixed-length portion may be set to zero, indicating all fields for this particular record are in the variable-length portion. FairCom recommends placing fields of a constant length, including all numeric fields (e.g., LONG, COUNT), in the fixed-length portion.

The standard ISAM-level retrieval functions, e.g., GetRecord(), NextRecord(), etc., retrieve only the fixed-length portion of variable-length records. ReReadVRecord() is called to retrieve the variable-length portion. This way, the typically small fixed-length portion can be quickly read, without the overhead of reading the variable-length portion. Variable-length ISAM-level retrieval functions, e.g., GetVRecord(), NextVRecord(), etc., retrieve the entire variable-length record.
With variable-length records, all of the fields in the variable-length portion are themselves variable-length fields terminated with a delimiter. This delimiter is typically the NULL character, unless you have changed the value of the delimiter with the SetVariableBytes() function. The following figure demonstrates the recommended method for creating variable-length field buffers. The delimiter is shown as a null byte, but it can be set to any delimiter.

[Figure: recommended variable-length field buffer layout - each variable-length field is followed by a delimiter byte]

Once the data is packed into the buffer as illustrated in the previous schematic, the record buffer and buffer length can be passed to AddVRecord() for insertion into the variable-length data file.
The maximum fixed-length portion is 65522 bytes. The total length of a variable-length record can be up to 2 GB, as long as the application has buffer space to hold a record this large. c-tree capacity limits are available in FairCom DB Capacity Planning.
If you are not going to use a field in the variable-length portion of the record to build an ISAM key, you can put any type of field there that you wish; c-tree treats the variable-length portion as a contiguous string of bytes. If you are going to build an ISAM key with a variable-length field, you have to be careful. By using a Record Schema (see Record Schemas), you can place any type of field in the variable-length portion of the record, although even with a Record Schema the application is still responsible for packing and unpacking the variable-length record. All fields in the variable-length portion must be packed as opposed to aligned. No padding bytes are allowed between fields.
Multi-User File Mode
The following file modes are related to multi-user file mode:
- ctEXCLUSIVE
- ctSHARED
- ctREADFIL
- ctCHECKLOCK
- ctCHECKREAD
When c-tree is used in a network or multi-tasking environment, it is necessary to specify whether individual files are shared among the users or opened exclusively for one user. File modes can also specify whether the FairCom Server should check for proper lock ownership. How to use these modes is explained in detail in Multi-User Concepts and FairCom Server. The ctCHECKLOCK and ctCHECKREAD file modes should not be applied to index files.
Important: Note the following sequence regarding the ctCHECKLOCK mode:
FRSREC()
TRANBEG(ctTRNLOG|ctENABLE)
RWTREC()
At first, one might expect RWTREC() would fail because the call to FRSREC() did not lock the record. However, because TRANBEG() set the ISAM lock mode to ctENABLE, RWTREC() acquired a write lock on the record before checking if it was locked.
Be aware of this behavior: even though an application uses ctCHECKLOCK, in this situation it could read a record without a lock and then update it, even though another client might have changed the record before the update acquired its lock.
I/O Management
The following file modes are related to I/O management:
- ctDUPCHANEL
- ctWRITETHRU
With the FairCom Server you can control how the Server uses I/O channels. This mode is discussed in FairCom Server.
ctWRITETHRU forces the operating system to flush all disk cache buffers when a data write occurs. This parameter can slow performance of the file handler. On the other hand, it is an important feature to use if you want to ensure that all data writes are put to the disk immediately. It is particularly important if you are in an operating environment where the system crashes a lot, and you are not using transactions and the ctTRNLOG mode. However, ctWRITETHRU does not guarantee that operating system buffers have been flushed.
Do not use ctWRITETHRU if automatic recovery is active for the file. Writes may be forced to disk on pre-imaged files. The ctWRITETHRU file mode is ignored for ctTRNLOG files, but ctPREIMG files do not ignore the ctWRITETHRU file mode allowing developers to force writes to disk while maintaining atomicity with ctPREIMG. Indexes are placed into standard ctWRITETHRU mode except that changes in the number of index entries are not written to disk until the commit. Data files are placed into a modified mode in which file updates and header updates are only output at commit.
The FairCom Server can automatically force ctWRITETHRU on all files not protected with transaction control. See the COMPATIBILITY FORCE_WRITETHRU keyword in the FairCom Server Administrator’s Guide and in Migrating Your Application Between Operational Models in this guide.
More About File Modes
Transaction Mode
The following file modes are related to transaction mode:
- ctPREIMG
- ctTRNLOG
- ctLOGIDX
With the FairCom Server or single-user mode you can specify the method of transaction management for a file. Transaction management is discussed in more detail in Data Integrity. ctLOGIDX should only be applied to index files, not data files.
Mirroring Skip Mode
The following file mode is related to mirroring skip mode:
- ctMIRROR_SKP
With the FairCom Server or Single User mode, it is possible to disable mirroring at an individual file level using the ctMIRROR_SKP file mode. See Data Integrity for details.
Resources
The following file mode is related to resources:
- ctDISABLERES
Resources are special variable-length records used to store information in a data file that are not in the same format as the regular data records. Normally they are enabled, but under certain circumstances, you may want to disable them. See Resources for further details.
Superfiles
The following file mode is related to superfiles:
- ctSUPERFILE
A Superfile is a c-tree file that may contain an arbitrary number of c-tree data and index files. This permits a group of files to be moved or copied as a single physical unit. This minimizes dependency on system file descriptors and simplifies file administration. See Superfiles in this manual for further details.
This file mode should not be applied to index files. When a data file is placed in a superfile, all associated index files are placed in the superfile automatically.
Corrupt Files
The following file mode is related to corrupt files:
- ctOPENCRPT
As discussed in detail in File Recovery, each data and index file has a header record. If the header record is compromised, you will not be able to open the file successfully. Typically, you will receive an error code of FCRP_ERR (14). Use RebuildIFile() to rebuild the compromised data file and index.
There are, however, some instances when you want to be able to open a file that has been corrupted. For instance, to use DeleteCtFile() to get rid of the damaged file.
c-tree provides a file mode of ctOPENCRPT for this purpose. If you OR in this mode when you open the file, you will be able to bypass the normal error checking done by c-tree.
Warning: Do not use ctOPENCRPT except in those rare cases requiring it and only for the purpose of deleting the file or doing some very low level processing.
We cannot guarantee the success of any c-tree function other than DeleteCtFile() if you use this mode. Extensive processing of a corrupted file may result in serious problems.
Extended File Creation Block Structure
The key to many of the extended features in c-tree is the extended header in Extended files. With the exception of the specification of segmented files, all of the extended file definition capabilities are specified through a parameter block, the XCREblk structure, containing the additional information needed to implement the extended header.
A file supporting 8-byte addresses and/or using any of the extended file attributes uses an extended file header instead of the 128-byte header used by Standard 4-byte c‑tree files. Some of the additional bytes are used for the contents of the XCREblk structure, made up of sixteen 4-byte integers, seven of which are reserved for future use. The XCREblk structure passes the extended file attributes to the file creation routines.
typedef struct XCREblk {
LONG x8mode; /* extended file modes; see Extended File Modes (x8mode) */
ULONG segsiz; /* host segment size (MB) */
LONG mxfilzhw; /* high word max file size */
LONG mxfilzlw; /* low word max file size */
LONG fxtsiz; /* first file extent size */
LONG lxtsiz; /* file extent size */
LONG segmax; /* maximum number of segments */
ULONG dskful; /* disk full threshold */
ULONG filkey; /* file encryption key */
LONG prtkey; /* relative key# for partition key */
LONG splval; /* special value parameter */
LONG rs4[4]; /* reserved for future use */
LONG callparm; /* call specific parameter */
} XCREblk;
GetXtdCreateBlock(), declared as:
NINT GetXtdCreateBlock (FILNO filno, pXCREblk pxcreblk);
fills the structure pointed to by pxcreblk with the contents of an XCREblk corresponding to the extended create attributes for filno. Use GetXtdCreateBlock() to retrieve the XCREblk for an existing file so that an extended file can be created with the same extended create attributes.
Note: It is legitimate to call GetXtdCreateBlock() for files created in Standard or Extended format. The resulting XCREblk from a call for a Standard format file can be used in a call to an extended create routine, and the result should be a Standard format (V6 compatible) file.
Extended File Modes (x8mode)
During file creation, the x8mode member of the XCREblk structure can specify new file modes defined in ctport.h:
| Symbolic Constant | Value | Explanation |
|---|---|---|
| ctFILEPOS8 | 0x00000040 | Huge file support. |
| ctFILESEGM | 0x00000008 | Segmented file. Setting the host segment’s size limit, segsiz, or calling SetFileSegments(), automatically sets this mode. |
| ctNO_XHDRS | 0x00010000 | Forces a Standard header instead of Extended (128-byte V6 header). |
| ctNOENCRYP | 0x00000100 | If file encryption has been enabled by a call to SetEncryption(), an individual file can disable encryption by OR-ing ctNOENCRYP into the x8mode member of the XCREblk structure. |
| ctNOSPACHK | 0x00000080 | Turn off disk full checking for this file only. |
| ctSEGAUTO | 0x00200000 | Use up to segmax automatic file segments of segsiz MB. SetFileSegments() is not required. |
| ctPARTAUTO | 0x00800000 | Partitioned file support. Auto partition naming based on host file name. |
| ctTRANDEP | 0x00000010 | Transaction Dependent Create and Delete. |
| ctRSTRDEL | 0x00000800 | Restorable Deletes. All the features of ctTRANDEP plus the ability to roll back file deletions. |
| ctTRANMODE | 0x00000200 | Auto switch to ctLOGFIL. |
| ctPIMGMODE | 0x00000400 | Auto switch to ctSHADOW. |
| ctADD2END | 0x10000000 | Disable deleted space management and always append new records to the end of the file. In V11 and later, automatically enables "add unique keys first" to prevent space from being allocated in the data file that is not reused if a record ADD operation fails with a duplicate key error. |
| ct6BTRAN | 0x20000000 | Enables six-byte transaction number support. |
| ctNO6BTRAN | 0x40000000 | Disables six-byte transaction number support. |
| ctMEMFILE | 0x80000000 | Temporary memory-resident file. ctTRNLOG is not allowed for memory files, so be sure to adjust any IFIL structures to ctPREIMG instead. |
Special Cache Value Parameter (splval)
Use the splval member of the XCREblk structure to set one of the following parameters:
| Symbolic Constant | Value | Explanation |
|---|---|---|
| ctFILEOPENED | 0x00000001L | PUTIFIL() ordinarily assumes that the IFIL structure passed in will be used to open the associated data file, update the IFIL info, and close the data file. If the splval member of the XCREblk has the ctFILEOPENED bit turned on in a call to PUTIFILX8(), then it is assumed that the tfilno member of the IFIL structure has been set to the data file number of an already opened (exclusively) file. Further, the file is not closed upon exit from the routine. |
| ctKEEPOPEN | 0x00000004L | When using memory files, this value makes the data file persist even after the last close (that is, when a file close operation causes the user count of the file to drop to zero). |
| ctKEEPOPENoffAtLogoff | 0x00040000L | Clear ctKEEPOPEN mode on logoff. |
| ctAUTOMKDIR | 0x00001000L | Causes FairCom Server to automatically create directories that do not exist when creating a c-tree data or index file (similar to the AUTO_MKDIR configuration keyword). Affects only the corresponding file: to automatically create directories for all the physical files being created by the Xtd8 call, set the ctAUTOMKDIR bit on the splval field for each XCREblk array entry that corresponds to a physical file. |
| ctFLEXREC | 0x00008000L | Flexible record format. In V11.5 and later, when used in file creation, this causes the file to be created with Hot Alter Table support. |
ISAM data file create option to turn off ctKEEPOPEN mode for data file when connection terminates
c-tree Server supports an ISAM data file create option that causes the connection to turn off the ctKEEPOPEN attribute for the data file and its associated indexes when the connection terminates. This feature is useful for an application that creates a memory file and wants the memory file to be deleted when the connection terminates, even if the client loses its connection to the server.
To use this feature, OR in the value ctKEEPOPENoffAtLogoff into the splval field of the data file's XCREblk structure array element that you pass to CREIFILX8().
Extended File Properties
The Extended File Creation Block supports several other properties, listed below:
Host Segment Size (segsiz)
The segsiz member of the XCREblk structure permits the host’s segment size to be specified.
Note: Segment sizes are specified as multiples of a megabyte: 1048576 bytes. For example, to specify that the host segment size will be one gigabyte, 1 GB, set segsiz to 1024. It is not necessary to specify the host segment size in XCREblk because SetFileSegments() can optionally set the host segment size as well as the other segment sizes.
Maximum Number of Segments (segmax)
The segmax member of the XCREblk structure specifies the maximum number of segments permitted for the file, including the host segment. This value can be overridden with SetFileSegments().
Max File Size Limit (mxfilzhw, mxfilzlw)
The mxfilzhw and mxfilzlw members of the XCREblk structure permit a maximum file size limit to be specified for a file using 4-byte or 8-byte file addresses. The file size limit is specified in bytes.
- For files over 4 GB, mxfilzhw must be a non-zero value.
- For each multiple of 4 GB, add one to the mxfilzhw member.
For example, to limit the file size to 8 GB, set the (mxfilzhw, mxfilzlw) pair to (2, 0). To limit the size to 10 GB, use the pair of values (2, 2147483648), that is, two multiples of 4 GB plus a 2 GB remainder. To indicate no file size limit, set both values to zero.
File Extent Size (lxtsiz, fxtsiz)
c-tree Plus V6.x files are limited to a file extension size of under 64KB. The lxtsiz member of the XCREblk structure can be used to define file extension sizes of up to 2 GB. lxtsiz is specified in bytes. The fxtsiz member specifies the first file extension size applied when the file is created and may be up to 2 GB. For example, to start a file with an initial size of 100 MB, and to have the file grow by an additional 10 MB as more space is required, use these values:
fxtsiz |
100 * 1024 * 1024 = 104857600 |
lxtsiz |
10 * 1024 * 1024 = 10485760 |
Individual File Disk Space Threshold (dskful)
FairCom DB gives you the ability to turn on/off disk full checking on a file-by-file basis and the ability to define the disk full threshold on a file-by-file basis.
To turn off disk full checks for a particular file, create the file using the 8-byte extended creates (e.g., CreateDataFileXtd8()) and set the ctNOSPACHK bit in the x8mode member of the XCREblk structure on.
To turn on file by file checking with a file specific disk full threshold, set the dskful member of the XCREblk structure to the desired limit at file create time using the extended 8-byte create functions.
To specify the size, in bytes, of the disk-full threshold to be used whenever the file is extended, place the non-zero limit in the dskful member of the XCREblk structure. If extending the file would leave less free disk space than the threshold specified in dskful, the write operation causing the file extension fails with the error SAVL_ERR (583).
File Encryption Key (filkey)
Whether or not SetEncryption() has been called, setting the filkey member of the XCREblk structure to a non-zero value causes the file to be encrypted using the value of the filkey member as the encryption key.
Relative Partition Key Number (prtkey)
The partition key can be set when the file is created using the prtkey parameter of the extended file creation block. This value defaults to 0, indicating the first key associated with the data file. Set this value to the relative key number for the desired index if the default is not appropriate for your application.
Extended Feature Support
The following features are specified in the Extended File Creation Block. Create files with the 8-byte creation functions (e.g., CreateDataFileXtd8(), CreateIFileXtd8(), etc.), or convert them with the c-tree conversion utility, ctcv67, to enable these features:
- Huge files (up to 16,000,000 terabytes)
- Segmented files (one logical file over multiple physical files)
- Partitioned files
- Transaction-dependent File Creates and Deletes
- Extended File Extent size
- File-specific Disk Full Checks
- File-specific Encryption
- Maximum File Size Limits
The following sections describe these extended features in more detail:
Xtd8 File Creation Functions
The extended 8-byte file creation functions are the same as their 4-byte counterparts except that a parameter has been added: a pointer to the XCREblk structure, or an array of XCREblk structures, defined in Extended File Creation Block Structure. For example, look at the 4-byte create data file extended function below:
COUNT CreateDataFileXtd(FILNO datno, pTEXT filnam,
UCOUNT datlen, UCOUNT xtdsiz, COUNT filmod,
LONG permmask, pTEXT groupid, pTEXT fileword)
The above example then becomes the extended 8-byte create data file function below:
COUNT CreateDataFileXtd8(FILNO datno, pTEXT filnam,
UCOUNT datlen, UCOUNT xtdsiz, COUNT filmod,
LONG permmask, pTEXT groupid, pTEXT fileword,
pXCREblk pxcreblk)
pxcreblk points to a single XCREblk structure, defined later in this chapter.
A create function referencing more than one file, such as CreateIFileXtd(), is modified in the same manner, except the pxcreblk parameter points to an array of XCREblk structures, one for each physical file referenced by the function. For example, look at the code below:
COUNT CreateIFileXtd(pIFIL ifilptr, pTEXT dataextn,
pTEXT indxextn, LONG permmask, pTEXT groupid,
pTEXT fileword)
The above code becomes the code below:
COUNT CreateIFileXtd8(pIFIL ifilptr, pTEXT dataextn,
pTEXT indxextn, LONG permmask, pTEXT groupid,
pTEXT fileword, pXCREblk pxcreblk)
If the IFIL structure pointed to by ifilptr describes a data file with 10 indexes, but the indexes are contained in two physical index files, pxcreblk must point to an array of three XCREblk structures, one for each physical file. If any of these files do not require the extended features, the corresponding XCREblk structure is zero filled.
The complete list of file creation functions includes: CreateDataFileXtd8(), CreateIFileXtd8(), CreateIndexFileXtd8(), PermIIndex8(), RebuildIFileXtd8(), and TempIIndexXtd8(). These functions are detailed in c-tree Function Descriptions.
Huge File Support
FairCom DB provides support for 8-byte addresses for file offsets. This provides a maximum file size of (4 GB)², or more than 16,000,000 terabytes, per file.
FairCom supports huge files in two ways:
- Huge files - 8-byte offsets as described in this chapter.
- Segmented files - logical files distributed across multiple physical files, as described in Segmented File Support.
Huge File Basics
The following step-by-step instructions walk you through the process of using huge files in general. Details are added throughout this chapter and an example is included in Huge File Creation Example.
Compatibility Note: Files created with standard ISAM create calls in FairCom DB V9 and later include extended headers with the ctFILEPOS8 and ct6BTRAN attributes by default. This feature can be disabled with the COMPATIBILITY REVERT_TO_V6HDR keyword should backward compatibility require it. Standalone applications can disable this support by setting the cth6flg global variable to any non-zero value.
- Create a library supporting huge files. The default ctHUGEFILE define activates both Huge File and Segmented File support in the FairCom DB client libraries.
- Use the Xtd8 creation functions described in Xtd8 File Creation Functions, and further information in c-tree Function Descriptions:
- Use the ctFILEPOS8 extended file mode in the XCREblk structure, described later in this chapter.
- Create an array of XCREblk structures, one for each physical data and index file to be created.
- Create the file(s) using an Xtd8 create function using the XCREblk array.
Note: Any index referencing a data file created using 8-byte file addresses must also use 8-byte file addresses. A ctFILEPOS8 data file requires a ctFILEPOS8 index. A ctFILEPOS8 index supporting duplicate keys must allow for 8-bytes in its key length for the automatic tiebreaker that FairCom DB automatically appends to the key value. See Huge File Creation Example for more details.
The Xtd8 create functions always create Extended files, even if no extended features are requested (unless the ctNO_XHDRS file mode is turned on). Files created with the original API (e.g., CreateIFileXtd()) are in the Standard c-tree format. A c-tree V6 application receives a FVER_ERR (43) when attempting to open an Extended file.
Note: Files are created in ctEXCLUSIVE mode, so you must close and reopen the file after it is created to allow it to be shared. Since files are created in ctEXCLUSIVE mode, this is a convenient time to execute additional configuration functions, such as SetFileSegments() and PutDODA().
Huge File Creation Example
This example demonstrates creation of ISAM files supporting 8-byte file addresses. For simplicity, it uses a single data file with a single index with a single segment.
/* define extended file attributes */
XCREblk creblks[2] = {
{ ctFILEPOS8, /* x8mode: support HUGE data file */
0, 0, 0, 0,
1048576}, /* lxtsiz: 1MB file extensions */
{ ctFILEPOS8, /* x8mode: support HUGE index file */
0, 0, 0, 0,
1048576} /* lxtsiz: 1MB file extensions */
};
/* define ISAM attributes */
ISEG seg = { 4, 6, 0};
IIDX idx = { 14, 4, 1, 0, 0, 1, &seg};
/* Note that the segment is 6 bytes, but the key allows
* duplicates, so the key length is 14, allowing 8 bytes
* for the offset tiebreaker.
*/
IFIL fil = { "bigdata", -1, 128, 0,
ctSHARED | ctVLENGTH | ctTRNLOG, 1, 0,
ctSHARED | ctVIRTUAL | ctTRNLOG, &idx};
InitISAM(10,5,32); /* Initialize c-tree */
/* create HUGE ISAM files */
CreateIFileXtd8( &fil, NULL, NULL, 0L, NULL, NULL,
creblks /* pointer to array of XCREblks */
);
CloseIFile(&fil);
OpenIFileXtd(&fil,NULL,NULL,NULL);
CloseISAM();
/* Note the use of standard c-tree API functions after the
* initial creation of the file. The file type is invisible to
* the application unless low level calls are used or an
* explicit reference is made to a file offset.
*/
Record Offsets Under Huge File Support
FairCom DB does not rely on native 64-bit (8-byte) integer support. Creating files greater than 4 GB without segmented files requires a file I/O library supporting 64-bit file addresses; this is a separate issue from native 64-bit integer support in the compiler or operating system.
c-tree uses the standard c-tree 32-bit oriented API with both 32-bit files and 64-bit files. However, with 64-bit files, two calls handle the higher order 32-bits (4-bytes): ctSETHGH() and ctGETHGH(). These routines are only needed with ISAM and low-level functions that return or use as input explicit file positions. For example, AddKey() has an input parameter that passes an explicit record address, and FirstKey() returns a record address. These functions need the additional API calls. AddRecord() and FirstRecord() do not use explicit record addresses and do not need these additional API calls, but CurrentFileOffset() does.
The ctSETHGH() routine:
NINT ctSETHGH(LONG highword);
is called before a routine requiring a record address as an input parameter. ctSETHGH() always returns NO_ERROR (0). The ctGETHGH() routine:
LONG ctGETHGH(void);
is called after a routine that returns, or sets an output parameter to, a record address.
Note: ctGETHGH() clears the stored value on read. If the following function call requires this value, call ctSETHGH() to reset the value.
To minimize the effect on performance in client-server environments, ctGETHGH() and ctSETHGH() do not make separate calls to the FairCom Server. Instead, the information needed by ctGETHGH() or supplied by ctSETHGH() is cached on the client side.
For sample code, see the example in the ctSETHGH() function description. See ctSETHGH and ctGETHGH for more information.
The list of function calls that require processing of the high-order 4 bytes by using ctSETHGH() or ctGETHGH() includes the following:
| Short Name | Long Name |
|---|---|
| AddKey | |
| BATSET when used with BAT_RET_POS or BAT_RPOS | DoBatch |
| BATSETX when used with BAT_RET_POS or BAT_RPOS | DoBatchXtd |
| TransactionHistory | |
| LockList | |
| DeleteKeyBlind | |
| DeleteKey | |
| GetKey | |
| KeyAtPercentile | |
| FirstKey | |
| CurrentFileOffset | |
| GETFIL when using mode BEGBYT | GetCtFileInfo |
| GETRES when used with mode RES_POS | GetCtResource |
| GetGTEKey | |
| GetGTKey | |
| VDataLength | |
| LoadKey | |
| LockCtData | |
| LastKey | |
| GetLTEKey | |
| GetLTKey | |
| NewData | |
| NewVData | |
| NextKey | |
| GetORDKey | |
| PreviousKey | |
| ReadVData | |
| ReadIsamData | |
| ReadIsamVData | |
| ReadData | |
| ReReadVRecord | |
| ReleaseData | |
| ReleaseVData | |
| SetRecord | |
| SystemConfiguration | |
| UpdateRecordOffsetForKey | |
| UpdateCtResource | |
| WriteData | |
| WriteVData |
Segmented File Support
Segmented file support allows a single logical file to occupy multiple physical files. This allows data files to exceed the physical file size limits imposed by the operating system, and when combined with Huge File Support, provides the added benefit of very large file sizes. The physical file segments can be kept together or distributed over multiple volumes.

Segmented File Basics
Files can be segmented automatically or you can choose the size and location of specific segments. The following step-by-step instructions walk you through the process of using segmented files in general. Details are added throughout this chapter.
- Create a library supporting huge files. The ctHUGEFILE define activates both Huge File and Segmented File support in the c-tree client libraries by default.
- Use the Xtd8 file creation functions described in Xtd8 File Creation Functions and further information in c-tree Function Descriptions.
- The ctFILEPOS8 file mode permits huge files. This mode is required if the logical file will exceed 4GB total file size, but is not required for segmented files in general.
- The ctSEGAUTO file mode allows automatic segment generation. See Automatic Segments for more details.
- Generate an XCREblk structure for each physical file to be created.
- Use SetFileSegments() to establish the initial segments. This step is NOT required with automatic segment generation.
- Define a SEGMDEF structure detailing the segments to be used.
- Execute SetFileSegments() on the file with the SEGMDEF structure.
The extended creation functions can only provide minimal information for segmented files. The XCREblk structure only holds the size of the host segment and the maximum number of segments. To fill in the details, the SetFileSegments() function, short name ctSETSEG(), specifies segment definitions for newly created files dynamically while the file is open. Also, SetFileSegments() can optionally set or change the size limit on the host segment. However, SetFileSegments() can only be called for files created with the Xtd8 API, which causes the file to have an extended header.
The Xtd8 create functions always create Extended files, even if no extended features are requested (unless the ctNO_XHDRS file mode is turned on). Files created with the original API (e.g., CreateIFileXtd()) are in the Standard c-tree format. A c-tree V6 application receives a FVER_ERR (43) when attempting to open an Extended file.
Note: Files are created in ctEXCLUSIVE mode, so you must close and reopen the file after it is created to allow it to be shared. Since files are created in ctEXCLUSIVE mode, this is a convenient time to execute additional configuration functions, such as SetFileSegments() and PutDODA().
Automatic Segments
The ctSEGAUTO file mode allows automatic segment generation. When this file mode is used, SetFileSegments() is not required. Simply set the host segment size and maximum number of segments (the segsiz and segmax members of the Extended File Creation Block). All additional segments will be the same size as the host file and will be stored in the same directory. The segment names generated for a file start by adding ".001" to the existing file name, then incrementing the numeric extension. For example, the automatic segment names for the file sample.dat start with sample.dat.001 and continue with sample.dat.002, and so on.
This option is supported by the ctcv67 conversion utility.
SEGMDEF Structure
The segmented file specifications in a call to SetFileSegments() are expressed through the SEGMDEF structure, which is made up of a pointer to a segment name and a 4-byte integer holding the segment size in MB. The segment definitions are stored in the host, or first, file segment as a special FairCom resource.
There is no limit on the number of segments except that the ASCII strings containing the segment names and sizes are limited to just over 8100 bytes. This means that even if each segment requires a 255-byte file name, a minimum of 30 segments could still be defined within the size constraint of the special resource.
The segment definitions can be changed on the fly, even while the file is being updated.
A SEGMDEF structure is used to specify each segment. It is defined in ctport.h as:
typedef struct segmdef {
pTEXT sgname; /* pointer to segment name */
LONG sgsize; /* size of segment in MB */
} SEGMDEF;
sgname points to the name of the segment. The sgname parameter conveys not only a name to use for the segment, but, by virtue of any path or device specification used in the segment name, where the segment should be located.
sgsize specifies the size of a segment in MB. Therefore, if you have the segment size in bytes, divide it by 1,048,576 to get MB.
SetFileSegments Function
The SetFileSegments() function is defined as follows:
COUNT SetFileSegments(FILNO filno, NINT aseg, NINT tseg, pSEGMDEF pseg)
Parameter |
Description |
filno |
The file number of an open file |
aseg |
Specifies the number of active segments, i.e., the segments created immediately. aseg must be at least one and less than or equal to tseg. |
tseg |
Specifies the total number of segments pointed to by pseg. |
pseg |
Points to an array of SEGMDEF structures described below. |
If the first segment definition pointed to by pseg has an sgname pointing to the empty string (i.e., *sgname == '\0', not sgname == NULL), the sgsize member of that structure becomes the host segment size limit. Only the last segment can have a size limit of zero, which is interpreted as no limit. If the first segment definition pointed to by pseg does not have an sgname pointing to the empty string, then its sgname applies to the second segment.
Additional segments automatically become active as needed, up to the maximum set in tseg. The segments are used in the order defined by the array of SEGMDEF structures pointed to by pseg.
The file referenced by filno must be opened in ctEXCLUSIVE mode the first time SetFileSegments() is called. Note that a file which has been created and not yet closed is always in ctEXCLUSIVE mode, regardless of the file mode specified in the create call. After the segment definitions have been established by the first call to SetFileSegments(), SetFileSegments() can be called again to modify the segment definitions, even while the file is being updated in client/server models. However, three restrictions apply:
- A segment size cannot be reduced below the actual physical size of the segment.
- A segment size cannot be increased if the segment has already reached its previously specified size.
- SetFileSegments() cannot rename a segment that is in use. A segment is in use once data beyond the segment header information has been written to it; an active segment is not in use merely because it exists on disk.
In short, a call to SetFileSegments() can, in real time, change where segments will reside (provided the segment is not already in use) and how large they are (subject to the size restrictions above).
Note: The fxtsiz member of the XCREblk structure cannot be set higher than the size of the first (host) segment during a file create. Doing so results in a SEGM_ERR (674) error, signifying the need for more segments, which do not yet exist because SetFileSegments() has not yet been called.
File Segment Example
In this example, the segmented file consists of a host segment limited to 500 MB, a second segment limited to 1 GB, and a final segment unlimited in size. However, the file will not be a HUGE file, so the total size will be limited to 4 GB.
XCREblk creblk = {
0, /* FILEPOS8 is NOT on: not HUGE */
500, /* 1st segment size is 500MB */
0, /* no specified file size limit (HW) */
0, /* no specified file size limit (LW) */
104857600, /* file created at size 100MB */
10485760, /* file extend 10MB at a time */
3, /* maximum number of segments is 3 */
0 /* no disk full threshold */
};
SEGMDEF segdef[2] = {
{"d:dataseg.2",1024}, /* 1024MB = 1GB size limit */
{"e:dataseg.3",0} /* no limit on segment size */
};
/* create data file, specifying the host segment size */
CreateDataFileXtd8(
10, /* data file number */
"c:hostseg.dat", /* data file name */
384, /* record length */
0, /* creblk specifies large extent sizes */
TRNLOG, /* support transaction processing */
0L, /* no permission mask */
NULL, /* no group ID */
NULL, /* no password */
&creblk); /* pointer to extended create block */
/* specify definitions for the two other segments */
SetFileSegments(
10, /* data file number */
1, /* one active segment (the host segment) */
2, /* two segment definitions to be passed */
segdef); /* pointer to the segment definitions */
Partitioned File Support
The FairCom Server supports a unique feature known as Partitioned Files. A partitioned file logically appears to be one file (or, more accurately, one data file and its associated index files), but is actually a set of files whose contents are partitioned by the value of the partition key. Both the data files and index files are partitioned. This permits data within a defined range of partition key values to be rapidly purged or archived, instead of deleting each record in that range one by one.
See Partitioned Files for details on this feature.
Data File Extension
When the logical size of the data file reaches the physical size of the data file, the server locks the data file header, physically extends the file, updates the physical and logical size values in the file header, and then releases the header lock.
The amount by which the server extends the file when needed depends on the data file extension size set when creating the file. If you specify 0 for the data file extension size for a transaction controlled (ctTRNLOG) file, the server extends by the following amount:
- If the data record length is <= 2048, extends by 16384 bytes
- If the data record length is > 2048 and <= 32768, extends by 32768 bytes
- If the data record length is > 32768, extends by the data record length.
The degree of data file header contention varies with the number of clients adding to the file at the same time, rather than with the file size. However, the operations that occur under the header lock should be very fast, and once the space is acquired each writer can proceed in parallel to perform the remaining operations.
c-tree Keys
Key values permit data records to be located quickly, avoiding lengthy sequential searches. c-tree maintains keys in an index file that can be rapidly searched. Each key in a c-tree index file has a data record pointer associated with it.
Target key values are passed to the c-tree routines by a pointer to the first byte of the key value. Keys in c-tree should not be handled as normal character strings. No restrictions are placed on the individual byte values comprising the key. In particular, a null or zero byte DOES NOT signify the end of a key value. Therefore, you should use a function like cpybuf() to manipulate keys.
When you define a key value to c-tree, you establish the length of the key. c‑tree does not pad the target key values to their full length for you. It assumes that you have completely filled in the target key value to the defined length specified at the time the index file is created. If you are not careful to pad your keys to their full length, you may find that your index searches will not work the way you expect.
Note: Target key buffers must be the full length of the key to avoid memory corruption errors. TransformKey() reads and writes the full key length, even when using automatic transformation. See TransformKey for more information.
In the following example, we set up an input buffer of 128 bytes, greater than any key lengths in the application, and initialize the full length to 0x20, the space character, before taking a target value.
#define INPBUFSIZ 128
TEXT inpbuf[INPBUFSIZ];
TEXT tmpbuf[INPBUFSIZ];
. . .
ctsfill(inpbuf,32,INPBUFSIZ);              /* pre-fill the key buffer with spaces (0x20) */
if (fgets(tmpbuf,INPBUFSIZ,stdin) != NULL) /* gets() is unsafe and would also plant a '\0' in the padding */
    memcpy(inpbuf,tmpbuf,strcspn(tmpbuf,"\r\n")); /* copy only the typed bytes */
ISAM Keys
When using c-tree ISAM routines, key value definitions are provided in an incremental ISAM structure. The keys can be compound keys formed from various fields in the data record. Individual segments of the key are defined by the key segment parameters, as discussed in ISAM Functions.
Duplicate Keys
When you create a new index you can set the index up for keys that are always unique, or you can allow duplicate keys. If the index file is set up for duplicate keys, then the 4-byte data record position is automatically appended on the right end of the key to create a unique key value. The programmer must increase the key length by 4 bytes to allow for this addition.
This suffix DOES NOT appear in your data record. DO NOT create extra fields in the record structures to accommodate the suffix.
For example: a key length of 12 will have space for 8 bytes of an actual key value and 4 bytes for the automatic record position suffix.
When a key value is returned from one of the low level index file search routines (e.g., NextKey()), the last 4 bytes of the returned value will contain the associated data record position. Recall that the key value found in the index by a search routine is returned by placing a copy of the entry into the area pointed to by idxval. When using a value found in an index that supports duplicate keys, be sure to strip off the last 4 bytes before displaying the key, or using it for a computation if it is a numerical key, since the suffix is not a proper part of the key value.
Note: For indexes associated with Standard data files, a duplicate key length includes 4 bytes for the associated record position, which is used to break ties. If an index is created for a HUGE data file, then the key length must include 8 bytes for the associated record position.
Sequence Numbers
c-tree supports another mechanism to distinguish among key values. The ISAM-level segment mode 3, SRLSEG, indicates that the key segment will automatically be filled with a 4-byte, ever increasing sequence number. Each time a record is added to the file, the sequence number is incremented. Instead of specifying that the index supports duplicates, the index can be set for unique keys and a 4-byte segment defined as part of the key with a segment mode of 3.
Unlike the duplicate key approach above, this approach DOES require that a 4-byte region of the file be reserved for the sequence number which c-tree will automatically initialize.
Alternative Key Types
FairCom DB supports a number of basic key types:
- Fixed-length keys (0)
- Do not allow an ISAM update to change the key value - KTYP_NOISMKEYUPD (0x80)
- Variable length keys with padding compression - KTYP_VLENGTH (0x608)
- Variable length keys with simple RLE key compression - KTYP_VLENGTH_SRLE (0xE00)
Legacy key compression options:
- Leading-character compression COL_PREFIX (0x04)
- Padding compression COL_SUFFIX (0x08)
- Leading/padding compression COL_BOTH (0x0C)
The key type is set using the ikeytyp member of the IIDX Structure.
Normally you will use fixed-length keys.
FairCom DB offers two types of legacy key compression controlled by the IIDX.ikeytyp bits, COL_PREFIX (0x04) and COL_SUFFIX (0x08), which may be combined. These can provide good compression within index nodes, but carry the following major limitations:
- Deleted nodes using these modes may not be reclaimed. This could lead to performance issues on indexes with many updates/deletes.
- The compression algorithms could impart a significant CPU cost on key operations, especially for larger node sizes (PAGE_SIZE).
- For indexes under transaction control (ctPREIMG or ctTRNLOG) leaf nodes are limited to the same maximum number of keys as a non-compressed index, but still incur the above performance degradation. Only the internal nodes of the index provide any file system space savings.
In V12, support was added for true variable length keys that can provide key compression without the above limitations.
There are currently two recommended compression modes (as of January 2024) that provide support for variable-length keys:
- IIDX.ikeytyp = KTYP_VLENGTH (0x0608)
Use this for indexes over variable-length fields (such as street addresses or file system paths) that may have a lot of variation in the length of the last index segment. This eliminates the key padding bytes on disk without the heavy performance cost of the legacy COL_SUFFIX (0x08) key compression. Anyone using the legacy padding compression (or COL_BOTH) should investigate this improved alternative.
- IIDX.ikeytyp = KTYP_VLENGTH_SRLE (0x0E00)
This is suggested for indexes over fixed- or variable-length fields that may contain many repeating binary 0, ASCII space (0x20), or ASCII '0' values. Potential examples that may compress well include 8-byte integers, UNICODE keys, and padded binary or string data.
Legacy Key Compression Details:
If there is a reasonable amount of duplication in the leading characters of the key values, you can decrease the size of your index file by choosing leading-character compression. If your key lengths vary in size you may find that padding compression will decrease your index size (remember that you should be padding keys to the same length). Key type 0x0C (COL_BOTH) combines both types of compression. With compound (multi-segment) keys, c-tree builds the complete key value (for keylen bytes), then applies the compression logic. For this reason, only the last segment can be compressed.
Using one or both of the compression modes increases the CPU time to process a key since the key values must be compressed and expanded, and the key values cannot be located at fixed positions within the node. The increased CPU time is eventually offset by the reduced disk head movement due to the reduced size of the index. As the index grows, the compression can lead to fewer levels in the tree structure, compared with a fixed-length tree, because of the increase in branching, due to more keys in each node. A reduction in the number of levels leads directly to a reduction in the number of disk accesses.
Note: Key compression imposes a significant performance impact, especially when deleting records. Use this feature only when absolutely necessary to keep index space requirements to a minimum.
Leading character compression and padding compression each require a byte to store the extent of the compression. The maximum key length compressed is restricted to 255. Keys of greater length will only have compression applied to the leading and/or trailing 255 bytes. Further, employing the compression modes when no compression is possible will cause wasted storage since the bytes used to note the extent of the compression will be stored in addition to the key values.
The default padding character is an ASCII blank (0x20). This can be changed for a particular index file by using the SetVariableBytes() function. See SetVariableBytes for more details.
Fixed Length Keys
This is the simplest key type. If you are not sure what key type to use, choose this. Key type 0 is for fixed length keys that have little chance for significant leading character redundancy. Even if there is a possibility of key compression, you may still choose key type 0 if you wish to avoid the increased code size (about 4k bytes) or increased computation required to process the compressed keys.
Leading Character Compression Keys
Key type 4 is for fixed length keys that are likely to have significant leading character duplication among the key values. The common beginnings of successive key values are represented by a compression byte count. Consider the following sequence of key values:
"AABBCCDD"
"AABBCDEF"
"AABCDEFG"
With leading character compression enabled, these 24 bytes could be represented by the following 18 bytes:
"AABBCCDD"
5"DEF"
3"CDEFG"
The integers preceding the key values indicate the number of leading characters that the key value shares in common with the immediately preceding key value.
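The count can be computed as the length of the common prefix shared with the preceding key. The sketch below illustrates the arithmetic only; it is not c-tree's internal node layout:

```c
#include <stddef.h>

/* Length of the common prefix shared by two key values. */
static size_t common_prefix(const char *prev, const char *cur, size_t keylen)
{
    size_t n = 0;
    while (n < keylen && prev[n] == cur[n])
        n++;
    return n;
}

/* Bytes needed to store a key under leading character compression:
   one count byte plus the non-shared suffix. The first key value in a
   node is stored in full. */
static size_t compressed_size(const char *prev, const char *cur, size_t keylen)
{
    return 1 + (keylen - common_prefix(prev, cur, keylen));
}
```

Applied to the three 8-byte keys above, this yields 8 + 4 + 6 = 18 bytes, matching the example.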
Note: Key compression imposes a significant performance impact, especially when deleting records. Use this feature only when absolutely necessary to keep index space requirements to a minimum.
Padding Compression Keys
Key type 8 is for variable-length keys for which not much leading character duplication is expected. Trailing padding compression enables variable-length keys to be stored in the index without incurring the wasted storage of fixed-length keys (with the key length set to the maximum possible length). Assuming a maximum key length of 15 bytes, the fixed-length representation of the following key values requires 30 bytes:
"ABCDEF"
"DEFGHIJK"
The compressed form below would save 14 bytes:
"ABCDEF"9
"DEFGHIJK"7
Only the last segment is compressed on compound (multi-segment) keys.
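The stored size under padding compression is the non-padded prefix plus one byte recording the pad count. A minimal sketch of that arithmetic (illustrative only, assuming the default ASCII blank pad byte):

```c
#include <string.h>
#include <stddef.h>

/* Size of one key value under trailing padding compression: the
   non-padded prefix plus one byte noting how many pad bytes were
   stripped from the tail. */
static size_t padded_compressed_size(const char *key, size_t maxlen, char pad)
{
    size_t end = maxlen;
    while (end > 0 && key[end - 1] == pad)
        end--;
    return end + 1;
}
```

For the 15-byte example above, "ABCDEF" stores in 7 bytes and "DEFGHIJK" in 9, so 30 - 16 = 14 bytes are saved, as stated.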
Note: Key compression imposes a significant performance impact, especially when deleting records. Use this feature only when absolutely necessary to keep index space requirements to a minimum.
Combined Compression Keys
Key type 12 provides the maximum key compression, combining both leading character and padding compression.
KTYP_NOISMKEYUPD mode prevents ISAM record update from changing index key value
It is possible to create an index that does not allow an ISAM update to change the key value. On such an index, an ISAM record add can add a key value and an ISAM record delete can delete a key value, but an ISAM record update cannot change a key value. (Note the special case for a variable-length record: if an ISAM update causes the record to move, the key is allowed to change its record offset from the old offset to the new offset; but the key value itself is not allowed to change.)
To create an index that uses this feature, OR the KTYP_NOISMKEYUPD bit into the key type (ikeytyp) field of the IIDX structure for that index file before calling CREIFIL() to create the index.
An ISAM record update that attempts to change a key value for an index that has this property enabled fails with error NOISMKEYUPD_ERR (1001).
If extended header support is enabled and the data file contains an extended header, a new flmode3 bit is used to indicate that the data file has one or more indexes that use the "no ISAM key update" feature. This bit is used to allow PUTIFIL() to determine if it needs to open the index files to check if their key type header field needs to be updated to reflect the state of this feature in the IIDX structures passed to PUTIFIL(). PUTIFIL() can be used to turn this bit on or off. The changes take effect immediately.
The splval bit ctIDXOPENED can be used with the ctFILEOPENED bit in a call to PUTIFILX8(). This bit indicates that the caller already has the index files open. If this bit is not specified, PUTIFILX8() attempts to open the index files. In the server model, if the index files are already open, the server's open attempt fails, but PUTIFILX8() is then able to check whether the caller already has the index files open. In standalone mode, this check is not active, and the PUTIFILX8() call fails in this situation.
Variable-Length Key Compression
[01/2024] New "Alternative Key Types" have been added to support additional compression options. These key types support the compression of indexes with the ISAM API:
- KTYP_VLENGTH (0x608) - Eliminates the key padding bytes on disk without the heavy performance costs of the legacy COL_SUFFIX key compression. This is suggested for indexes over variable-length fields that may have a lot of variation in length, such as street addresses and filesystem paths. Anyone using COL_SUFFIX (8) should investigate this improved alternative.
- KTYP_VLENGTH_SRLE (0xE00) - Simple RLE compression without the heavy performance costs of legacy key compression. This is suggested for indexes over fields that may have a lot of repeating binary 0, ASCII space, or ASCII 0 values. This may be beneficial for keys over many data types, such as 8-byte integers, binary data, or other variable-length data.
These KTYP_ values are bits that can be set in the IIDX.ikeytyp for each index.
Deferred Indexing
Deferred Index Maintenance brings new speed to index maintenance operations.
c-tree’s ISAM add, delete, and update operations dutifully perform key inserts and deletes on every index associated with a data file: a record add inserts a key value for each associated index; a record delete deletes a key value for each associated index; and a record update deletes the old key and inserts the new key for each associated index (for those keys whose values changed). However, each additional index operation can impose a measurable performance loss, especially when numerous indexes are involved. If these immediate key operations could be held off for a period of time, applications could gain substantial performance in many circumstances.

Two application usages are considered strong candidates for deferred index processing.
- Deferred build of a permanent index added to an existing table. Creating such an index typically requires waiting for the index to be loaded with all key values before control returns to the application. With deferred processing, the load can be sent to a background task, and the index becomes available only once the build is complete. The index is "eventually consistent" until it has been populated with all existing data plus any data added during this initial period. After all keys are loaded, it behaves in either synchronous or deferred mode, depending on its permanently assigned deferred attributes.
- A particularly interesting usage is an application deployed with a core set of primary indexes that also allows end users to create additional alternate indexes. With so many indexes, core application performance can be adversely impacted. If the additional index maintenance overhead is deferred, core application performance remains largely unaffected while the application uses its primary indexes.
FairCom addressed these challenges with new deferred maintenance modes. By delaying select index operations, applications can create and/or update files with large numbers of indexes very quickly.
A key piece of background support for this functionality is the ability to create an index file while the data file is open in shared mode.
Permanent Deferred Index Mode
This mode addresses the second scenario described above. A deferred indexing attribute can be specified at index creation, enabling ISAM record add, delete, and update operations to delay key insert/delete operations for that index file. To enable it, OR the KTYP_DFRIDX bit into the ikeytyp field of that index’s IIDX structure that you pass to the CREIFIL() function.
ISAM operations now avoid the additional overhead of directly updating these deferred indexes. With deferred indexing enabled, a background thread performs key insert and delete operations on the deferred index files asynchronously. This mode becomes a permanent attribute of the index; that is, the index always remains in deferred mode.
Background Load of a Regular or Deferred Index
A permanent (as compared to a temporary) index can be created as a regular or permanent deferred index with the data file open in shared mode by setting the dxtdsiz field of the IFIL parameter passed to PRMIIDX() to one of the following modes:
- ctNO_IDX_BUILD
If using ctNO_IDX_BUILD, the index can be loaded later by either:
- Calling RBLIIDX() with the data file open in exclusive mode (which is already supported), or
- Calling ctDeferredIndexControl() with opcode of DFKCTLqueueload to queue the index load to the background index load thread.
- ctQUEUE_IDX_BUILD
With ctQUEUE_IDX_BUILD defined, the index load is immediately queued to a background index load thread after the index is created.
Note that when a regular index (non-deferred) is created with the data file open in shared mode, all connections that already have the associated data file open will internally open the new index and their ISAM record add/delete/update operations will affect this new index. This means that if an error occurs when updating the new index (for example if the add or update attempts to add a key that already exists in that index), the operation fails with that error. By contrast, updates to a deferred index that is created with the data file open in shared mode are queued to a background thread.
Background Thread Processing
For files under full transaction control (ctTRNLOG), deferred operations are written to the transaction logs. A background thread reads the operations directly from the transaction log, calling an optional callback function, and optionally applies the operations to any deferred indexes.
For atomicity-only files (ctPREIMG) and non-transaction-controlled files, deferred operations are written to an in-memory queue. A background thread reads operations from the queue, calling an optional callback function, and optionally applies the operations to any deferred indexes.
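The queue-and-drain pattern used for non-TRNLOG files can be pictured with a simple ring buffer consumed by a background thread. This is a conceptual illustration of the mechanism only, not FairCom's implementation; all names here are hypothetical:

```c
#include <stddef.h>

typedef enum { OP_KEY_INSERT, OP_KEY_DELETE } defer_opcode;

typedef struct {
    defer_opcode op;
    long         keyval;    /* simplified: a numeric key value */
} defer_op;

#define QCAP 64

typedef struct {
    defer_op ops[QCAP];
    size_t   head, tail;
} defer_queue;

/* Producer side: an ISAM add/delete/update enqueues the key operation
   instead of updating the deferred index directly. */
static int defer_enqueue(defer_queue *q, defer_opcode op, long keyval)
{
    size_t next = (q->tail + 1) % QCAP;
    if (next == q->head)
        return -1;                           /* queue full */
    q->ops[q->tail].op = op;
    q->ops[q->tail].keyval = keyval;
    q->tail = next;
    return 0;
}

/* Consumer side: the background thread drains the queue, invoking an
   optional callback for each operation and applying it to a toy
   "index" (here just an array). Returns the number of operations
   applied. */
static int defer_drain(defer_queue *q,
                       void (*callback)(const defer_op *),
                       long *keys, size_t *nkeys)
{
    int applied = 0;
    while (q->head != q->tail) {
        defer_op *op = &q->ops[q->head];
        if (callback)
            callback(op);
        if (op->op == OP_KEY_INSERT)
            keys[(*nkeys)++] = op->keyval;   /* simplified "insert" */
        else if (*nkeys > 0)
            (*nkeys)--;                      /* simplified "delete" */
        q->head = (q->head + 1) % QCAP;
        applied++;
    }
    return applied;
}
```

The real server performs the drain on a dedicated thread; the essential property shown here is that the foreground record operation pays only the cost of the enqueue.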
Note: Deferred Index features are not available for older compilers: Visual Studio VS2005 and older.
Transaction Log Limit for Deferred Indexing
Note: This is a Compatibility Change for FairCom DB V11.5 and later.
The deferred index processing thread for TRNLOG files reads transaction logs and informs the server of its minimum transaction log requirements.
By default, FairCom DB Server limits the number of active transaction logs kept for deferred index processing to 50. This default was chosen to avoid potentially running out of critical drive space.
The following configuration option was introduced for controlling this limit:
- MAX_DFRIDX_LOGS <max_logs> - Maximum number of logs to be held specifically for the deferred indexing thread.
This configuration setting does not impact your FairCom DB Server's ability to retain any necessary logs required for Automatic Recovery.
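For example, a site running long deferred index builds might raise the limit in ctsrvr.cfg (the value 100 below is purely illustrative; only the option name comes from this document):

```
MAX_DFRIDX_LOGS 100
```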
For more information about this setting (and the related setting for Replication Agent logs), see the FairCom DB Server Administrator's Guide.
Monitoring and Controlling Deferred Indexing
A Deferred Indexing API for monitoring and controlling deferred indexing is available.
A complete Deferred Indexing utility, dfkctl, is also available with many options ready to use.
For more information, see the ctDeferredIndexControl() function.
Queuing an Index Load
This example shows how to use ctDeferredIndexControl() to queue an index load to a background index load thread. The data file must be open by the calling connection. Set the datno field of the DFKQIL structure to the data file number. The idxlst field of the DFKQIL structure holds the file numbers of the indexes for which you want to queue an index load operation.
DFKCTL dfkctl;   /* deferred index control structure */
DFKQIL dfkqil;
LONG keynos[1];
NINT rc;

memset(&dfkctl,0,sizeof(dfkctl));
memset(&dfkqil,0,sizeof(dfkqil));
dfkctl.verson = DFKCTL_VERS_V01;
dfkctl.opcode = DFKCTLqueueload;
dfkctl.bufsiz = sizeof(dfkqil);
dfkctl.bufptr = (pTEXT) &dfkqil;
dfkqil.verson = DFKQIL_VERS_V01;
dfkqil.datno = datno;
dfkqil.numidx = 1;
keynos[0] = datno + 1;  /* index file number */
dfkqil.idxlst = keynos;
if ((rc = ctDeferredIndexControl(&dfkctl)) != NO_ERROR) {
	printf("Error: Failed to schedule index load: %d\n", rc);
	goto err_ret;
}
printf("Successfully scheduled index load.\n");
Counting the Number of Deferred Operations
A count of the number of operations that have been deferred for the index is stored in the index file’s header. A new extended GETFILX() API function has been introduced with additional parameter options and can be used to read this value with the DFRKOPS mode.
Function prototype
ctCONV NINT ctDECL GETFILX( FILNO filno,COUNT mode, pVOID bufptr, pNINT pbuflen );
Example
LONG8 ndfrkops;
NINT rc, buflen = sizeof(LONG8);

if ((rc = GETFILX(keyno, DFRKOPS, &ndfrkops, &buflen))) {
	printf("Error: Failed to get deferred operation count: %d\n", rc);
} else {
	printf("Number of deferred index operations: " ctLLnd(10) "\n", ndfrkops);
}
Selected Deferred Index features extended to non-deferred indexes
In V11 and later, selected Deferred Index features have been extended to regular (non-deferred) indexes. c-tree Server now supports the following abilities:
- A regular (non-deferred) index can be created with the data file open in shared mode.
- The background loading of a regular or deferred index can be queued to the index load thread.
Creation of regular index with data file open in shared mode:
PRMIIDX() can be used to create a regular (non-deferred) index with the data file open in shared mode by setting the dxtdsiz field of the IFIL parameter passed to PRMIIDX() to either ctNO_IDX_BUILD or ctQUEUE_IDX_BUILD.
If using ctNO_IDX_BUILD, the index can be loaded later by either:
- Calling RBLIIDX() with the data file open in exclusive mode (which was already supported), or
- Calling ctDeferredIndexControl() with opcode of DFKCTLqueueload to queue the index load to the background index load thread.
When using ctQUEUE_IDX_BUILD, the index load is immediately queued to the background index load thread after the index is created.
Note that when a regular index is created with the data file open in shared mode, all connections that already have the associated data file open will internally open the new index and their ISAM record add/delete/update operations will affect this new index. This means that if an error occurs when updating the new index (for example if the add or update attempts to add a key that already exists in that index), the operation fails with that error. By contrast, updates to a deferred index that is created with the data file open in shared mode are queued to a background thread.
Queueing index load:
This example shows how to use ctDeferredIndexControl() to queue the index load to the background index load thread. The data file must be open by the calling connection. Set the datno field of the DFKQIL structure to the data file number. The idxlst field of the DFKQIL structure holds the file numbers of the indexes for which you want to queue an index load operation.
DFKCTL dfkctl;   /* deferred index control structure */
DFKQIL dfkqil;
LONG keynos[1];
NINT rc;

memset(&dfkctl,0,sizeof(dfkctl));
memset(&dfkqil,0,sizeof(dfkqil));
dfkctl.verson = DFKCTL_VERS_V01;
dfkctl.opcode = DFKCTLqueueload;
dfkctl.bufsiz = sizeof(dfkqil);
dfkctl.bufptr = (pTEXT) &dfkqil;
dfkqil.verson = DFKQIL_VERS_V01;
dfkqil.datno = datno;
dfkqil.numidx = 1;
keynos[0] = datno + 1;  /* index file number */
dfkqil.idxlst = keynos;
if ((rc = ctDeferredIndexControl(&dfkctl)) != NO_ERROR) {
	printf("Error: Failed to schedule index load: %d\n", rc);
	goto err_ret;
}
printf("Successfully scheduled index load.\n");
Up to 4X Faster Indexes with Smaller Indexes Using Variable-Length Compressed Key Storage
FairCom DB indexing has always been optimized for the fastest possible data access. Indexes are composed of keys; however, not all keys are composed the same. For example, consider indexing based on file paths. Path strings are highly variable in length. Indexing data based on this type of key definition requires defining a key length of the absolute possible maximum, while most keys are substantially shorter. This introduces much wasted space into an index structure, as b-tree indexes are constructed of fixed-sized nodes for efficiency of reading and writing. Each node read or write is generally performed as a single I/O operation, so the more keys per node, the better the I/O throughput when reading keys in an index. Further, large variably sized keys force a large index node size to enforce a three-key-per-node minimum. FairCom has addressed this wasted-space challenge for greatly reduced index sizes, leading to less I/O and, ultimately, increased performance.
In FairCom DB, we obtained considerable index performance gains for large key lengths. The new Variable-Length Key Compression ("Vlen Keys") demonstrated up to a 6x reduction in storage space and up to 28x faster performance in testing with the FairCom DB Server. The ideal scenario for this level of performance gain involves a reasonably long field (perhaps 32 bytes or larger) whose actual contents are roughly half of the maximum field length or less for most records.
By default, FairCom DB indexes store keys as fixed-length values. This requires few CPU cycles; however, it increases storage space and the resulting I/O. Taking advantage of the fact that CPU processing is orders of magnitude faster than I/O, we created a new compressed, variable-length index structure that significantly reduces storage space, with a concomitant reduction of I/O when reading and updating index data.
This effort required advanced optimizations to our Index Node/Page handling. Now we have the ability to store more index entries per Node/Page, resulting in more "bang for the buck" on each disk I/O. These internal improvements do not affect your existing application code. To take advantage of this new advanced index compression, simply rebuild indexes with the new settings detailed in the sections that follow.
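The keys-per-node argument above can be made concrete with a little arithmetic. The node size, maximum key length, and average key length below are illustrative assumptions, not c-tree defaults, and per-key overhead (record offsets, length bytes) is ignored:

```c
#include <stddef.h>

/* Approximate keys per node when every key is stored at its declared
   maximum length. */
static size_t keys_per_node_fixed(size_t node_size, size_t max_key_len)
{
    return node_size / max_key_len;
}

/* Approximate keys per node when keys are stored at their actual
   (average) length under variable-length key storage. */
static size_t keys_per_node_varlen(size_t node_size, size_t avg_key_len)
{
    return node_size / avg_key_len;
}
```

With an 8 KB node, a 1,000-byte declared maximum, and a 100-byte average actual length, variable-length storage fits roughly ten times more keys per node, which translates directly into fewer node reads per key lookup.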

Variable-Length Key Performance Results
Substantial improvements in performance were seen when using key compression. The following tests were conducted using the FairCom DB Server on a table with 1 million records and a single index over a 1,000-character field that contained a file name with long paths.

- Insert column is the time in seconds to insert 1,000,000 records.
- Read column is the time in seconds to read 100,000 random records.
- Update column is the time in seconds to update 100,000 random records.
- Delete column is the time in seconds to delete 100,000 records (no duplicates).
- Rebuild column is the time in seconds to rebuild the full 1,000,000 record table.
FairCom DB API Compression
New index node types provide options for reducing index size. [01/2024] The new NAV (FairCom DB API) index modes are as follows:
- Variable-Length Keys with Compression: CTINDEX_VARIABLE index mode provides variable length keys with padding compression to squeeze more capacity into each page block. Recommended for indexes over variable length data.
- RLE Compression: CTINDEX_VARIABLE_RLE index mode uses variable length keys with a simple RLE key compression of bytes 0x0, 0x20, 0x30 resulting in shorter index entries and smaller index files. Recommended for indexes over variable length data with multiple segments, binary data, or large numeric data types (8 byte integers).
Example
The following example demonstrates how to use one of these modes:
pIndex = ctdbAddIndex(hTableCustMast, "cm_custnumb_idx", CTINDEX_VARIABLE_RLE, NO, NO);
For more information, see ctdbAddIndex in the FairCom DB API documentation.
ISAM API Compression
[01/2024] New "Alternative Key Types" have been added to support additional compression options. These key types support the compression of indexes with the ISAM API:
- KTYP_VLENGTH (0x608) - Eliminates the key padding bytes on disk without the heavy performance costs of the legacy COL_SUFFIX key compression. This is suggested for indexes over variable-length fields that may have a lot of variation in length, such as street addresses and filesystem paths. Anyone using COL_SUFFIX (8) should investigate this improved alternative.
- KTYP_VLENGTH_SRLE (0xE00) - Simple RLE compression without the heavy performance costs of legacy key compression. This is suggested for indexes over fields that may have a lot of repeating binary 0, ASCII space, or ASCII 0 values. This may be beneficial for keys over many data types, such as 8-byte integers, binary data, or other variable-length data.
These KTYP_ values are bits that can be set in the IIDX.ikeytyp for each index.
See the section in the FairCom ISAM for C Developer's Guide titled IIDX Structure.
Utilities to Confirm Index Compression Modes
The file information utility, ctinfo, displays index compression modes as included in the key type when displaying index information.
Example
IIDX #2 {
/* key length */ 4,
/* key type */ 2048, (0x0800 = KTYP_KEYCOMPSRLE)
/* duplicate flag */ 0,
/* null key flag */ 0,
/* empty character */ 0,
/* number of segments */ 1,
/* r-tree symbolic index */ cm_custnumb_idx,
/* alternate index name */ 0000000000000000,
/* alternate collating seq */ 0000000000000000,
/* alternate pad byte */ 0000000000000000,
};
Option to Automatically Enable c-tree Key Compression When Creating an Index
FairCom DB now supports a configuration option that causes all indexes that contain a key segment using the Unicode key segment mode to be created with c-tree leading and padding key compression. By default, this option is off. To enable this option, add UNCSEG_KEYCOMPRESS YES to ctsrvr.cfg. This option can also be enabled or disabled at runtime by calling the ctSETCFG() function or by using the ctadmn utility's option to change a server configuration option value.
Key Segment Modes
An ISAM key is composed of one or more key segments. The Key Segment Mode value tells c-tree how to locate the information necessary to create the key segment (the segment position and segment length) and how to translate it, if necessary (the segment mode).
The following table lists the valid key segment mode values. When creating an Incremental ISAM ISEG structure, use the mode value or the symbolic constant. When using an ISAM parameter file, the Key Segment Description record requires the mode value. The segment position interpretation column describes which of the three segment position types defined at the end of the previous section this mode represents.
| Mode Value | Symbolic Constant | Segment Position Interpretation | Explanation |
|---|---|---|---|
| 0 | REGSEG | Absolute byte offset | Regular segment. No transformation. |
| 1 | INTSEG | Absolute byte offset | Unsigned integer/long. |
| 2 | UREGSEG | Absolute byte offset | Lower-case letters converted to upper case. Not compatible with SQL usage. |
| 3 | SRLSEG | Absolute byte offset | Automatic 4-byte/8-byte sequence number. |
| 4 | VARSEG | Relative field # | Variable-length segment. No transformation, pad to length. |
| 5 | UVARSEG | Relative field # | Lower-case letters are converted to upper case, pad to length. |
| 6 | YOURSEG1 | | Reserved for your use. |
| 7 | YOURSEG2 | | Reserved for your use. |
| 8 | SGNSEG | Absolute byte offset | Signed integer/long. |
| 9 | FLTSEG | Absolute byte offset | Floating point (float or double). |
| 10 | DECSEG | Absolute byte offset | Scaled BCD. |
| 11 | BCDSEG | Absolute byte offset | RESERVED FOR FUTURE USE. |
| 12 | SCHSEG | Schema field number | Transform according to the underlying data type, pad varying-length string fields. |
| 13 | USCHSEG | Schema field number | Transform according to the underlying data type, convert lower case letters to upper case, pad varying-length string fields. |
| 14 | VSCHSEG | Schema field number | Transform according to the underlying data type, pad fixed or varying-length string fields. |
| 15 | UVSCHSEG | Schema field number | Transform according to the underlying data type, convert lower case letters to upper case, pad fixed or varying-length string fields. |
These values can be OR'd with the other segment modes.
| Mode Value | Symbolic Constant | Segment Position Interpretation | Explanation |
|---|---|---|---|
| 16 | DSCSEG | Descending segment mode | Force the segment to be collated in descending (instead of ascending) order. See Descending Key Segment Values. |
| 32 | ALTSEG | Alternative Collating segment mode | Allows the key segment to be stored in other than the standard ASCII collating sequence. See Alternative Collating Sequence. |
| 64 | ENDSEG | END segment mode | Used for searching the trailing end of a key segment. See END of Key Segment. |
| 256 | RECBYT | RECBYT segment mode | Used for improved variable-length deleted space management. See RECBYT Segment Mode. |
| 257 | SCHSRL | Schema field number | Schema-based SRLSEG: Automatic 4- or 8-byte sequence number. |
| 258 | BITSEG | Schema field number of bitmask field | Bitmask segment: set soffset to schema field number of null bit mask; set slength to schema field number of target field. |
| 259 | ALLNULLDUPSEG | Ignored | Allow duplicate key values when all of the indexed fields are NULL: set soffset to zero (it is ignored); set slength to size of record offset (4 or 8). |
| 260 | ANYNULLDUPSEG | Ignored | Allow duplicate key values when any of the indexed fields are NULL: set soffset to zero (it is ignored); set slength to size of record offset (4 or 8). |
Don’t be overwhelmed by the number of key segment modes. For fixed-length records there are just a few variations. If you are not sure what to use, start with REGSEG. This takes the value just as it is found in the record, at a given position. If the data field is an unsigned integer, signed integer, floating point or a BCD number, use the appropriate key segment mode. c-tree does the necessary transformations to make the key sort in the appropriate order.
Another basic mode is UREGSEG, which translates lower-case letters into upper-case letters. If you look at the ASCII collating sequence, you will see that all lower-case letters come after all upper-case letters. If you had “ABC”, “DEF” and “bcd” as key values and did not use UREGSEG, they would sort in the sequence:
ABC DEF bcd
Most likely, this is not the order that you want the keys in. By using UREGSEG, the lower case letters will be translated to upper case, so the sequence would be:
ABC BCD DEF
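The effect of UREGSEG can be sketched as an uppercase transform applied to the segment before it is stored and compared. This is a conceptual illustration, not c-tree's internal code:

```c
#include <ctype.h>
#include <string.h>
#include <stddef.h>

/* Build a key segment image the way a UREGSEG-style mode would: copy
   the field bytes, converting lower-case letters to upper case, and
   null-terminate so the images can be compared with strcmp(). */
static void uregseg_transform(char *dst, const char *src, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        dst[i] = (char)toupper((unsigned char)src[i]);
    dst[len] = '\0';
}
```

After the transform, "bcd" sorts between "ABC" and "DEF", as in the example above, instead of after them as raw ASCII would dictate.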
Fields in the variable-length portion of a variable-length record must be handled a little differently. Since we cannot predict an absolute byte offset for these fields, we have to be able to calculate a position by counting field delimiters. Use VARSEG or UVARSEG. Since the key must be a particular length, c-tree pads it to the proper length.
Key segment modes 12-15 relate to Record Schemas, described in Record Schemas.
For more information, see ISEG Structure.
Key Value Assembly
FairCom DB stores the key value in the index AFTER requested segments are concatenated and transformed according to the segment mode. For ease of data portability, c-tree stores all numeric key values in High/Low format (MSB, LSB) regardless of the underlying CPU architecture. However, c-tree does not automatically translate your key target buffers in the same way.
It is essential that all target values be transformed prior to key value searches, especially when using numeric segments. For numeric segments, be sure to select the proper segment mode. Segment mode value 1 should be used for unsigned integer and long values, while segment mode value 8 is for signed integers and longs. Floats and doubles should receive segment mode value 9. Any ISAM function that is passed a key target buffer, such as GetRecord(), expects a fully and properly formed key target.
The key target buffer can be manually built using C routines. Be sure to build the key target according to the index and segment definitions. See TransformKey in the function reference section for an example of building a three-segment key target buffer.
After building the key target, call TransformKey() to perform the key value transformations prior to calling the key search function. If you use an extended function call, such as InitISAMXtd(), to initialize c-tree, TransformKey() is automatically called for each ISAM search routine, unless you indicate that you do not want the automatic transformation via the user profile mask.
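The High/Low (MSB-first) storage rule is what makes byte-wise comparison of numeric keys agree with numeric order. A sketch of the idea for an unsigned 4-byte value follows; TransformKey() performs the real conversions for all segment modes, so this is illustration only:

```c
#include <stdint.h>
#include <string.h>

/* Store an unsigned 32-bit value in High/Low (MSB-first) order so that
   memcmp() on the key bytes agrees with numeric comparison, regardless
   of the CPU's native byte order. */
static void key_put_u32(unsigned char key[4], uint32_t v)
{
    key[0] = (unsigned char)(v >> 24);
    key[1] = (unsigned char)(v >> 16);
    key[2] = (unsigned char)(v >> 8);
    key[3] = (unsigned char)(v);
}
```

On a Low/High (little-endian) machine, storing the native bytes directly would break this property, which is why an untransformed key target fails to match the index.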
TransformKey
TransformKey() converts a key target according to the ISAM definitions for a given index. For example, storing a single segment key value made up of a signed long (requires segment mode value 8) results in the value being stored in High/Low format in the index file. To retrieve this signed long from a Low/High machine using GetRecord() requires flipping the bytes in the target. Passing the key target to TransformKey() before passing it to GetRecord() causes the target value to be reversed from Low/High to High/Low to match the index so that a successful match is found.
TransformKey() expects all the necessary segments to be concatenated into a TEXT buffer pointed to by the target parameter. Since TransformKey() constructs the translated key in place, ensure that the key area TEXT buffer is at least as large as the key length, including suffix, defined for the index. See TransformKey in the function reference section for a detailed example.
CurrentISAMKey
The function CurrentISAMKey() returns a pointer to an already assembled key value for the specified index file from the current ISAM record.
Sequence Number Segments
Segment modes 3, SRLSEG, and 257, SCHSRL, are the only segment modes that place data in the data record. All the other modes simply extract data from the record, optionally transform it, and make it part of a key value. The automatic sequence number feature of c-tree permits the application data record to contain an increasing or decreasing (as explained below) 4-byte/8-byte data field. When SRLSEG or SCHSRL is used, each time a new record is added to the data file, c-tree increments the data file’s serial, or sequence, number. SRLSEG and SCHSRL cause this sequence number to be placed in the record structure at the byte offset indicated by the key segment information.
More than one key may include a SRLSEG/SCHSRL key segment, but each of these segments must have an identical byte offset. That is, the different keys using the serial number use the same 4-byte/8-byte region of the data record.
To update the serial number when a record is rewritten, delete the record and add it as a new record. ReWriteRecord() does not change the serial number.
Descending Key Segment Values
Any of the key segment modes presented in the previous section can be modified to force the key segment to be collated in descending (instead of ascending) order by adding a value of 16 to the mode or by OR-ing the mode with the symbolic constant DSCSEG.
For example, a segment mode of two will cause the following key values to be sorted in ascending order:
"ABC"
"BCD"
"DEF"
A segment mode of 18 (2 + 16) stores the key values in descending order:
"DEF"
"BCD"
"ABC"
Because this modification is performed at the segment level, it is possible to have some segments collating in ascending order while others collate in descending order in the same key value. For example, a customer order key may be stored in increasing order by customer number, while orders for the same customer are stored in descending order by date. This would make the most current orders come first for each customer.
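One common way to implement a descending segment is to store the byte-wise complement of the ascending key image, so that ordinary ascending comparisons within the node visit the original values in descending order. This is a conceptual sketch of that technique, not necessarily how c-tree implements DSCSEG internally:

```c
#include <string.h>
#include <stddef.h>

/* Complement each byte of an ascending key image. Comparing the
   complemented images in ascending order yields the original values
   in descending order. */
static void descending_transform(unsigned char *dst,
                                 const unsigned char *src, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        dst[i] = (unsigned char)~src[i];
}
```

Because the transform is per-segment, ascending and descending segments can be freely mixed within one compound key, as in the customer/date example above.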
Remember the information about preparing keys for searches discussed earlier in IMPORTANT - Key Value Assembly.
Alternative Collating Sequence
Any of the key segment modes can be modified to use an alternative collating sequence by adding 32 to the mode, or by OR-ing the mode with the symbolic value ALTSEG. For example, a mode of 32 (0 + 32) stores the key segment in an order other than the standard ASCII collating sequence. This is especially useful for extended character sets associated with languages that do not collate correctly according to the standard ASCII sequence.
To use an alternate collating sequence, Resources must be enabled. Resources are explained in Resources. Once resources are enabled, there are two methods for storing the alternate sequence.
- Automatic at index creation: If the file definitions are being stored using the Incremental File Structures, and the index has not been created, set altseq in the IIDX structure to point to the array of sorted integers. When the index is created, the alternate collating array is automatically stored in a resource record in the index file.
- Manual after index created: If the file definitions are being stored using parameter files, or if the index is already created, use SetAlternateSequence() to store the alternate collating sequence as a resource in that file. Any key segment that is set to use an alternate sequence will use the one for that index file.
Regardless of using option 1 or option 2 to store the sorted integer array, segment mode 32 (ALTSEG) must be applied to the appropriate segment definitions.
Note: If you change the alternate collating sequence for an index file that already contains key values, you must rebuild the index or you will not be able to predict the sequence in which the keys will appear.
GetAltSequence() brings the alternate collating sequence into memory for examination and update.
END of Key Segment
ENDSEG is used for searching the trailing end of a key segment. ENDSEG automatically right justifies the string, filling the left-hand portion of the segment with the value PADDING (#define PADDING ' '), then flipping the value.
Note: This segment mode supports STRING field types ONLY.
The following example contains a duplicate allowed index over customer name. The code prompts the user to enter a customer name for the search. Since the index uses the ENDSEG segment mode, just the last few letters may be entered. For example, a target of 'son' would retrieve 'Anderson' and 'Emerson' from the following list of names: Adams, Anderson, Barrington, Emerson.
#define c_rec_len (4 * namebuf + 4)
#define namebuf 30
TEXT inpbuf[namebuf], target[namebuf + sizeof(long)];
COUNT tar_length, error;
struct {
TEXT ch_name[namebuf]; /* customer name */
TEXT child1[namebuf];
TEXT child2[namebuf];
TEXT child3[namebuf];
LONG ch_num; /* customer number */
} childimage;
ISEG segments[] = {
{0, 30, ENDSEG}}; /* first 30 bytes of ch_name */
IIDX indices[] = { /* duplicate-allowed index, therefore key length = 30 + 4 */
{34, 0, 1, 1, 32, 1, &segments[0], "child_key"}};
IFIL file[] =
{"child", 5, c_rec_len, 4096, 1, 1, 4096, 1,
&indices[0], "ch_name", "child3", 0};
memset(&childimage, ' ', 4 * namebuf); /* pad strings */
memset(target, ' ', namebuf);
memset(inpbuf, ' ', namebuf);
printf("\n\rEnter name to retrieve by ENDSEG: ");
fgets(inpbuf, namebuf, stdin); /* safer than gets() */
inpbuf[strcspn(inpbuf, "\n")] = '\0'; /* strip the trailing newline */
cpybuf(target, inpbuf, namebuf);
tar_length = strlen(target);
TransformKey(6, target); /* optional if automatic transformations
are enabled by an extended initialization call such as INTISAMX */
if ((error = FRSSET(6, target, &childimage, tar_length)))
printf("Error on FRSSET(), error = %d decimal", error);
else { /* no error: print the result */
printf("\nSuccessful retrieval for the ENDSEG INDEX ");
printf("\n\nName: %s", childimage.ch_name);
}
RECBYT Segment Mode
The RECBYT segment mode specifies an index segment containing the character 'D' in the first byte. The rest of the segment is padded with NULL bytes; however, there is no reason to use a segment length greater than one.
To take complete advantage of the variable-length (VLEN) space management capabilities, create an index using only a RECBYT segment with a segment length of one that allows duplicates. Since a duplicate key adds the record position as part of the actual key value, this results in a 5-byte (9-byte for huge files) duplicate key with a single byte ('D' for active data records) for the segment. This is referred to as a "RECBYT Index".
Use NbrOfKeyEntries() on a RECBYT index for an exact count of variable-length records in the data file.
Note: The RECBYT index enables backward physical traversal of variable-length records. By default, the FairCom DB API creates this index on every table. Because it is an extra index, it slows inserts, updates, and deletes. Unless you need to walk variable-length records backwards, disable the creation of this index by adding NORECBYT to the table create mode.
Improved Deleted Space Management
For non-transaction controlled variable-length files or multi-user standalone variable-length files, FairCom DB will always consider coalescing space when trailing space is marked deleted. For example, deleting records in reverse physical order will obtain maximum space coalescing. An available RECBYT index in these cases allows coalescing in both directions for optimal space usage.
For transaction-controlled variable-length files, this variable-length space management does not take place and deleted variable record space is not coalesced. For records that don't change much in size, space is still efficiently reused based on the space management index present in the file. Only when a RECBYT index exists for this case will FairCom DB check and possibly merge the adjacent deleted record space for transaction controlled files.
The following example adds a RECBYT index to the ISEG/IIDX structures in ctixmg.c:
ISEG segments[] = {
{0,15,5}, {4,5,2}, /* First index key segments */
{0,4,1}, /* Second index key segments */
{4,9,2}, {0,9,5}, /* Third index key segments */
{0,1,RECBYT} /* RECBYT key segment with a
segment length of one. Segment offset is ignored */
};
IIDX ndxs[] = {
{24, 4,1,1,32,2, segments},
{ 4, 0,0,0, 0,1,&segments[2]},
{22,12,1,1,32,2,&segments[3],NULL,"vcusti.2"},
{ 5, /* 1 byte segment length plus 4 byte offset */
0, 1, /* Allow Duplicates */
0,0,1,&segments[5]}
};
File Recovery
Information about each data file and index is stored in the first record of the file, called the header record. This information is modified whenever you add a record or key. In many cases, this information is not written back to the header record on disk immediately, particularly in single user situations. This improves the speed of the c‑tree system, but also makes it possible to have incorrect information in the header if the program crashes before the header is updated.
If this situation occurs, c-tree detects that the header has been compromised. An appropriate error code is returned when opening a compromised data file or index.
c-tree provides a high-level function, RebuildIFile(), which rebuilds the data file header and other information, and recreates the index. To use this function you must build an incremental ISAM structure for the file and indexes, even if you are not using ISAM functions.
c-tree also provides a standalone IFIL-based rebuild utility, ctrbldif, that can be used with ISAM parameter files. See the section titled ctrbldif - IFIL-based Rebuild Utility (ctrbldif - IFIL-based Rebuild Utility, /doc/ctreeplus/ctrbldif-util.htm) in this manual.
If your keys cannot be described properly in an incremental ISAM structure, you will have to rebuild the index yourself. A general process would be:
- Use RebuildIFile() to rebuild the data file header.
- Erase the damaged index.
- Sequentially process the data file, adding each record's key to a new index.
If the data portion of the file appears to be corrupted, most likely due to a hardware problem, the c-tree compact utility may be useful. See the section titled ctcmpcif - IFIL-based Compact Utility for further details.
In V11 and later, an option permits an ADMIN group member to open a file with a bad resource chain by using the ctOPENCRPT | ctDISABLERES file modes. A file can now be opened even if it has an invalid resource chain. However, the ctOPENCRPT file mode alone is not sufficient to open a file in this state; include both ctDISABLERES and ctOPENCRPT for the file open to proceed.
On a server, the user opening the file must belong to the ADMIN group as passwords cannot be checked with resources disabled. The file mode will be forced to read-only (ctREADFIL).
Note: Some resources such as encryption and compression are essential to reading a file; if their resources cannot be processed then the file open operation will continue to fail.
Advanced Space Reclamation
Variable-length records use a variable space management index to make available space when a variable record changes size and is moved in the file. Rather than leave deleted space unavailable, the space management index allows future insertions to take advantage of this deleted space along with the best fit.
The delete node queue thread makes empty index nodes available for reuse, avoiding costly file I/O overhead in creating new nodes. When an index node becomes empty, the thread that emptied the node adds an entry to a delete node queue. The delete node thread reads entries from the delete node queue, prunes the empty nodes from the index tree and adds them to a list of nodes that are available for reuse. Since the operations performed to make empty index nodes reusable may require a huge number of disk operations, the delete node queue thread is started only when FairCom DB is idle.
File Access by Delete Node and Space Reclamation Thread
Logic was added in V11 to control file access by the Delete Node and Space Reclamation threads. These changes are available for FairCom DB when atomic operation support is enabled. This feature is intended to avoid a client being unable to open a c-tree file in exclusive mode because an internal thread has the file open.
Online Compact and Rebuild
Online file compact and rebuild support is available starting with FairCom DB V13.
For background: FairCom DB reclaims the space consumed by deleted records when new records are inserted (and, for variable-length files, when records increase in size), so a compact operation is typically only needed in the following scenarios:
- A compact operation is being used as a convenient way to reformat the file. For example, changing the PAGE_SIZE of the file, adding or removing encryption, adding or removing data compression or index compression, and so forth.
- A Hot Alter Table operation has been called and you want all the records converted to the latest schema version.
- A large number of records have been deleted and the disk space consumed by the deleted records is needed immediately. You do not want to wait until new records are inserted to reclaim the space.
The FairCom compact and rebuild operations available prior to V13 require files to be opened in exclusive mode. This is problematic for systems with minimal downtime windows, so an online facility for managing these processes is now available.
The online compact and rebuild operations allow for the files to be actively used by other users while these operations take place. These operations are generally slower than the compact and rebuild versions that require the file to be opened exclusively.
An online compact cannot be used to change encryption or compression settings.
During an online compact operation, if the file is under full transaction control, the indexes are automatically rebuilt using Online Rebuild.
Online Rebuild is only available for indexes under full transaction control (ctTRNLOG file mode).
See also
- CMPIFIL CompactFile function and its extended versions
- ctcmpcif compact utility
- ctdbRebuildTable
- RebuildIFile function and its extended versions
- ctrbldif rebuild utility
Transaction Processing Overview
There are two major aspects to transaction processing: atomicity and automatic recovery. These are related yet distinct, and not all database products supply both as FairCom does. Both are discussed in detail in Data Integrity. Atomicity ensures that either all elements of a transaction are committed or none are. Automatic recovery ensures that any committed updates will be applied to the data and index files even in the event of a system crash.
FairCom provides a set of functions and file modes that cover both aspects of transaction processing. This allows you to specify the type of transaction processing used on a file-by-file basis in one of three modes:
- The ctPREIMG file mode allows atomicity by caching updates to files and only changing the files when the entire transaction is complete.
- The ctTRNLOG file mode adds automatic recovery to the ctPREIMG file mode by logging the cached updates to a transaction log file.
- A file with neither ctPREIMG nor ctTRNLOG specified supports no transaction processing. This allows you to disable transaction processing for temporary or non-critical files to improve performance.
The transaction processing API functions work with the file modes to complete the process:
| Function | Description |
|---|---|
| Begin() | Begins the transaction. All updates are tracked from this point. |
| Commit() | Ends the transaction and posts all updates to the files. |
| Abort() and AbortXtd() | Ends the transaction without posting updates to the files. |
| SetSavePoint() | Establishes a drop-back position (savepoint) within a transaction. |
| RestoreSavePoint() | Drops back to the last savepoint without aborting the transaction. |
For more information on transaction processing, see Data Integrity.
Default Temporary File Path in Standalone and LOCLIB Models
FairCom DB supports the ability to set a default path for c-tree temporary files in standalone or LOCLIB mode. This feature is useful for setting the path for temporary files created when rebuilding or compacting c-tree files. FairCom DB supports this ability with the configuration keyword TMPNAME_PATH.
To use this feature, first initialize FairCom DB and then set the c-tree global variable ct_tmppth to point to a buffer containing the name of the desired temporary path. Include the path separator at the end of the path name. The buffer may be dynamically allocated. If so, you must also free the buffer when it is no longer needed and set ct_tmppth to NULL.
Example
/* Allocate buffer for temporary file path. */
if (!(ct_tmppth = mballc(1, MAX_NAME)))
printf("Error: Failed to allocate %d bytes for temp path buffer\n", MAX_NAME);
/* Set temporary file path. */
#ifdef ctPortWIN32
strcpy(ct_tmppth, "temp\\");
#else
strcpy(ct_tmppth, "temp/");
#endif
/* Call a function such as CMPIFIL that uses temp path. */
...
/* Done with temp path buffer, so free it. */
if (ct_tmppth) {
mbfree(ct_tmppth);
ct_tmppth = NULL;
}
Error Handling
c-tree functions generally return either an error code or a data record position. Functions that return an error code return a value of zero to indicate no error has occurred, or a non-zero value that designates the type of error. Error codes are found in cterrc.h. Functions which return a data record position return a zero value (which is never a legitimate data record position) to indicate an error or unsuccessful index search.
uerr_cod
For low-level functions, the error code can be found in the global variable uerr_cod.
isam_err and isam_fil
ISAM functions place the error code in the global variable isam_err. Since ISAM functions work with multiple files, check the global variable isam_fil for the file number associated with the error.
sysiocod
The global variable sysiocod is set to the value of errno when an I/O error occurs. errno is the C Language run-time error variable which is automatically maintained by the C runtime library. Unlike uerr_cod, sysiocod is NOT reset by new calls to c‑tree. It is only set to errno if an open, create, seek, read, write, or lock function fails. When the following error values are returned it is usually beneficial to display sysiocod:
- FNOP_ERR (12)
- DCRAT_ERR (17)
- SEEK_ERR (35)
- READ_ERR (36)
- WRITE_ERR (37)
- DLOK_ERR (42)
The meaning of a sysiocod value can be found in the compiler documentation or the compiler's errno.h file.
ctcatend()
When a fatal problem has occurred (such as memory corruption), a ctcatend() message may appear on the screen or in CTSTATUS.FCS if using the FairCom Server. The letters depict the following:
| Letter | Description |
|---|---|
| M | Transaction mode. |
| L | Location of ctcatend() in code. Each ctcatend() call uses this value as a unique identifier. |
| E | uerr_cod. See c-tree Error Codes or cterrc.h. |
| F | File number involved. |
| P | Position parameter in hexadecimal, sometimes corresponding to a physical record offset. |
ISAM and Low-Level Functions
c-tree includes a full range of low-level functions. These functions provide complete access to all the features of c-tree. You may require specific control over the data and index file manipulations, managing each data and index file individually.
See Using Low-Level Functions for more information on this advanced API layer.
Note: This API is not typically recommended for Client/Server due to the extra number of calls that must be made at low-level, greatly increasing network traffic.
c-tree Constraints
While c-tree is designed to minimize artificial limitations, there are a few constraints that should be acknowledged to optimize operation and allow developers to plan an efficient application. These few constraints are listed below and described in more detail afterward:
- File-Related Limits
- File ID Overflow
- Serial Number Segments
- Transaction High-Water Marks
- Transaction Log Numbering
- File Size
- Record Size
- Enforce Maximum Disk Read/Write Sizes on Windows
File-Related Limits
FairCom DB has very few hard limits, and some can be adjusted. The following limits should be kept in mind as you work with FairCom DB:
- Max fixed-length record size: 64KB
- Max variable-length record size: 2GB
- Max number of records per file: No limit
- Maximum file size: 16 Exabytes (18 million terabytes)
- Maximum number of open files: 32,767
- Maximum number of indices per data file: Defaults to 64 (can be increased using MAX_DAT_KEY in ctsrvr.cfg to a theoretical limit of 32767, although a practical limit exists well before this value)
- Maximum number of segments per index: Defaults to 16 (can be increased using MAX_KEY_SEG in ctsrvr.cfg to a theoretical limit of 32767, although a practical limit exists well before this value)
File ID Overflow
Each time a transaction controlled c-tree data file or index file is opened, the value of its file ID number is increased. If your system has a large number of files, this value can increase a fair amount with each day of processing.
- The upper limit for this value is: 4,294,963,200
If the upper limit is hit, the Server process will shut down.
- The value at which a “Pending File ID Overflow” warning message first appears is: 4,227,858,432
The message “Pending File ID Overflow” will be written to CTSTATUS.FCS. A new entry will be logged every time another 10,000 numbers are used.
From the time the first warning message appears, you have at most 67,104,768 additional data file and index file opens before this value hits this limit.
When the transaction file numbers have been exhausted, error 534 and the following message will be logged in CTSTATUS.FCS:
- User# 00018 Pending File ID overflow: 534
- User# 00018 O18 M18 L58 F-1 Pfffff003x (recur #1) (uerr_cod=534)
If you get error 534, you must do a transaction log reset.
Serial Number Segments
The ISAM-level segment mode 3, SRLSEG, indicates that the key segment will automatically be filled with a signed 4- or 8-byte sequence number. An OSRL_ERR (44) error is returned when the sequence number overflows. A signed 4-byte value is limited to 2 GB, so 4-byte sequence numbers should not be used where more than 2^31 (2,147,483,648) sequenced entries will be required. A signed 8-byte sequence number is limited to 2^63 (approximately 9,223,372,036,854,775,808).
GetSerialNbr() returns the current sequence number for data file datno used in ISAM applications using a SRLSEG key segment mode. GetSerialNbr() returns a long integer containing the current sequence number, so use ctGETHGH() to obtain the high word portion of the 8-byte value. If an error occurs (such as data file not in use), GetSerialNbr() returns a zero value. Check uerr_cod for the error code.
If a SRLSEG exists, RebuildIFile() finds the highest serial number in use and updates the file header with that value.
To manually manage serial numbers, use the OPS_SERIAL_UPD status_word described in the SetOperationState() function description.
Transaction High-Water Marks
The FairCom Server and the single-user standalone operational model with transaction processing use a system of transaction number high-water marks to maintain consistency between transaction-controlled index files and the transaction log files. When log files are erased, the high-water marks maintained in the index headers permit the new log files to begin with transaction numbers that are consistent with the index files.
With FairCom Server, if a transaction high-water mark exceeds 0x3ffffff0 (1,073,741,808) for version 7.x servers and earlier or 0x3ffffffffff0 (70,368,744,177,648) for version 8.x servers and later, then the transaction numbers will overflow, causing problems with index files. On file open, an error MTRN_ERR (533) is returned if an index file's high-water mark exceeds this limit. If a new transaction causes the system's next transaction number to exceed this limit, the transaction will fail with an OTRN_ERR (534).
This should be an unusual occurrence except on systems that are continuously processing significant volumes of transactions.
Before these errors occur, the FairCom Server issues warnings when the transaction numbers approach the transaction limit. These warnings are issued periodically to the system monitor and the CTSTATUS.FCS file.
To aid in debugging if spurious MTRN_ERRs occur, the server configuration keyword TRAN_HIGH_MARK takes a long integer as its argument and specifies a warning threshold. If an index file’s header contains a high-water mark in excess of this threshold, the file’s name is listed in CTSTATUS.FCS.
Use the CleanIndexXtd() function or the ctclntrn utility to reset index file high-water marks to zero. This permits new logs to start over with small transaction numbers. To fully reset the high transaction number requires the following steps:
- Shut down the FairCom Server cleanly, restart the server so it can do automatic recovery with no clients attached, and then shut it down cleanly a second time. Performing two shutdowns in a row ensures the application files are up-to-date and there are no pending recovery items so all SO*.FCS and L*.FCS files can safely be removed. Be sure not to overlook the following FCS files:
- FAIRCOM.FCS
- CTSYSCAT.FCS file (used for ODBC)
- SYSLOGDT.FCS and SYSLOGIX.FCS
- DFRKSTATEDT.FCS and DFRKSTATEIX.FCS
- RECBINDT.FCS and RECBINIX.FCS
- REPLFFCHGDT.FCS and REPLFFCHGIX.FCS
- REPLOGDT.FCS and REPLOGIX.FCS
- REPLOGSHIPDT.FCS and REPLOGSHIPIX.FCS
- REPLSTATEDT.FCS and REPLSTATEIX.FCS
- All the REPLSTATEIX_*.FCS
- REPLSYNCDT1.FCS and REPLSYNCIX1.FCS
- REPLSYNCDT2.FCS and REPLSYNCIX2.FCS
- SEMCOUNT.FCS
- SEQUENCEDT.FCS and SEQUENCEIX.FCS
- CTSTATUS.FCS
- SNAPSHOT.FCS
- Use the CleanIndexXtd() function or the ctclntrn utility to clean all indexes used by the FairCom Server, including your application index files, superfiles, and variable-length data files. Executing 'ctclntrn FAIRCOM.FCS' (and likewise for the other FCS files listed above) cleans all the member indexes in that system superfile; the same applies to any application superfiles.
- If you wish to verify the cleanup process, you can use the cthghtrn utility in the server utils directory to verify that the transaction high-water mark in the files you cleaned is zero or a reasonably low number.
- After completing the cleaning process, verify that there are no *.FCS files in the server directory other than the FCS files cleaned and verified above. If you are using SERVER_DIRECTORY (now deprecated), LOCAL_DIRECTORY, LOG_EVEN, LOG_ODD, START_EVEN, START_ODD, or a similar keyword in your ctsrvr.cfg file that takes a directory, be sure to check that path for any existing *.FCS files and remove them.
- When you are satisfied that you have completely cleaned all files, restart the c-tree Server. As soon as the server is up and operational, cat or type the CTSTATUS.FCS file prior to attaching any clients and verify there are no "Pending TRANSACTION # overflow" messages indicating that you have missed cleaning or removing a system (*.FCS) file.
- You can easily monitor the current transaction value in your application by checking the return of the Begin() function and verifying it against the 0x3ffffff0 or 0x3ffffffffff0 threshold. This ensures that you know well in advance about any impending transaction number overflow and allows you to prevent an unexpected server shutdown.
Note: If you perform the high-water transaction clean operation to reset your high-water mark and then perform a restore that has unclean files or transaction logs, you will need to perform the clean operation again.
Transaction Log Numbering
The limit for transaction log numbering is high enough that it is unlikely any production system will encounter it. FairCom DB allows transaction log numbers to reach values over 2 billion. Notice that this limit refers to the numbering of transaction log files, which is not related to transaction numbering (see Transaction High-Water Marks). If a system should get close to the 2-billion mark, the Server Transaction Logs can be reset by safely removing the logs, following the FairCom DB Server Best Practice Upgrade Procedure, documented in the section of the Knowledgebase titled Upgrading from Previous Editions.
File Size and Operating System Limits
Different platforms support different maximum file sizes. The limit is imposed by the data type for the value used to seek to a given offset in a file. Older operating systems use a 4-byte signed offset, allowing physical files up to 2 GB. Newer platforms use 4-byte unsigned offsets (4 GB) or 8-byte unsigned offsets (16,000,000 terabytes = 18 exabytes).
On systems supporting only 4-byte signed offsets:
- Standard c-tree files can grow up to 2 GB in size.
- Extended files (also referred to as "huge" files) can be segmented into multiple physical files up to 2 GB each, allowing a single logical file to grow:
- To 4 GB using only segmented file support (using segments <= 2 GB each)
- Up to 16,000,000 terabytes using segmented file support and huge file support (using segments <= 2 GB each)
The following platforms are limited to 2 GB files:
- AT&T SVR4
- SunOS
- HP-UX 10
- IBM AIX 3.2-4.1
- Linux (before kernel 2.4)
- SCO OpenServer/UnixWare
- Macintosh (7-9, OS X)
- QNX 2 & 4
On systems supporting a 4-byte unsigned offset:
- Standard c-tree files can grow up to 4 GB in size.
- Extended files using huge file support and segmented file support can grow to 18 exabytes (using segments <= 4 GB each).
The following platforms fall into this category:
- Windows 95 and 98 - FAT file system
- Solaris 2.6 (Intel/SPARC)
On systems supporting 8-byte offsets:
- Extended format files with huge file support can grow up to 18 exabytes.
- Segmented file support is optional, but convenient for allocating portions of files to different volumes.
- Standard c-tree files can grow up to 4 GB in size.
The following platforms fall into this category:
- Windows NT/2000 and above - NTFS file system
- HP-UX 11
- AIX 4.2 and above
- Solaris 7 and above (Intel and SPARC)
- Linux (kernel 2.4 or later)
- FreeBSD
File Handles/Descriptors
FairCom DB must request a file handle (file descriptor) for each file it has open.
The number of concurrent file handles is limited to 32,768.
Record Size
Fixed-length records have a limit of 64 kilobytes. The record length for a data file is specified with the FairCom DB UCOUNT (unsigned 2-byte integer) datlen parameter.
Variable-length records are limited to 2 GB, with a length represented by the signed 4-byte varlen parameter.
Note: A data record must reside entirely in memory to be added, and must be read into memory as a unit, so memory constraints can also limit record size.
Enforce Maximum Disk Read/Write Sizes on Windows
A large variable-length record write operation can fail with Windows error 1450 ("Insufficient system resources exist to complete the requested service") when the data file resides on a network drive. c-tree normally writes the record in a single call to the WriteFile() Win32 API function. To avoid this error, c-tree supports setting a maximum disk read and write size. Read or write operations that exceed this size are performed as a series of reads or writes of the specified maximum size.
To set a maximum disk read and write size, compile c-tree with #define ctMAX_IO_SIZE <maxbytes>, where <maxbytes> is the maximum number of bytes c-tree will read or write in a single system call.
Note: #define ctMAX_IO_SIZE is supported on Windows systems only and has no effect on other platforms.