Backups and Data Integrity

The purpose of any data backup is to protect data integrity. Periodically backing up application data allows a Server Administrator to recover from problems or to roll back a database to a prior point in time.

There are several ways to back up data controlled by a FairCom Server:

  • Using a standard backup utility while the server is shut down.
  • Using the dynamic dump capability while the server is operational.
  • Using VSS backup integration on Windows.
  • Using ctQuiet/Quiesce for external backups while the server is running.

When possible, FairCom recommends shutting down the server periodically to allow a full backup. This has the advantage of simplicity, since all files can be backed up and restored without using the transaction logs to ensure the data and index files are synchronized. This is especially helpful for applications that do not use transaction control to maintain database integrity. The Administrator can simply restore the files and continue operation.
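The offline procedure can be sketched in shell. Everything here is illustrative: the directory is a temporary stand-in for the server's working directory, and the commented-out shutdown and restart lines are placeholders for your own environment's commands.

```shell
# Cold (offline) backup sketch. The directory below is a temporary
# stand-in for the server's working directory; the shutdown and restart
# commands are placeholders for your environment.
SERVER_DIR="$(mktemp -d)"
touch "$SERVER_DIR/FAIRCOM.FCS" "$SERVER_DIR/CTSTATUS.FCS"   # demo files only

# 1. Shut the server down cleanly first (placeholder invocation):
#      ctstop -s FAIRCOMS -u ADMIN -p ADMIN

# 2. Archive the entire directory: data, index, and *.FCS system files
#    can all be copied safely because the server is not running.
BACKUP="$SERVER_DIR.tar.gz"
tar -czf "$BACKUP" -C "$SERVER_DIR" .

# 3. Restart the server once the archive completes (placeholder):
#      ./ctreesql &

tar -tzf "$BACKUP"    # verify the archive is readable
```

Because the server is down, no transaction-log coordination is needed; the archive is internally consistent by construction.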

WARNING: Files under FairCom DB control should never be copied or backed up using third-party software while FairCom DB is operational.

 

FairCom Server Files

The FairCom Server creates special system files to maintain various kinds of information required to recover from problems. The following list details the files that are created, along with the information the System Administrator responsible for them needs. As the Administrator, be sure these files are backed up when appropriate and used for recovery when necessary.

Note: For compatibility with all operating systems, the names of these files use only upper-case characters.

FairCom Server Status Log

When it starts up, and while running, the FairCom Server keeps track of critical information concerning its status, e.g., when it started, whether any error conditions have been detected, and whether it shut down properly. This information is saved in chronological order in a text file, the FairCom Server Status Log, CTSTATUS.FCS. To control the size of CTSTATUS.FCS, or to maintain inactive logs as T*.FCS files, use the CTSTATUS_SIZE keyword. See the keyword description for more detail.

Administrative Information Tables

The FairCom Server creates and uses the file FAIRCOM.FCS to record administrative information concerning users and user groups. This file can be encrypted with the ADMIN_ENCRYPT keyword. See Configuring the FairCom Server for details.

Transaction Management Files

The FairCom Server creates the following files for managing transaction processing:

I0000001.FCS
I0000002.FCS

I0000002.FCS is an empty file generated at startup by any c-tree database engine with transaction support enabled. This file marks ownership of the process directory to avoid colliding with other FairCom DB processes that may generate their own independent transaction log files. The dump restore utility (ctrdmp) is the most common case where this is reported. When a running process detects this file, error TCOL_ERR (537) is returned, indicating the collision.

Note: It is important to safeguard these files; however, only the S*.FCS and D0000001.FCS files should remain after a normal server shutdown.
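A quick post-shutdown check can verify this. The sketch below creates a stand-in directory; in practice, point SERVER_DIR at the real server working directory.

```shell
# After a normal shutdown, only checkpoint files (S*.FCS) and the delete
# node queue (D0000001.FCS) should remain. Anything else (leftover
# L*.FCS transaction logs, for instance) suggests an unclean shutdown.
SERVER_DIR="$(mktemp -d)"   # stand-in for the server's working directory
touch "$SERVER_DIR/S0000000.FCS" "$SERVER_DIR/S0000001.FCS" "$SERVER_DIR/D0000001.FCS"

leftover=$(cd "$SERVER_DIR" && ls *.FCS 2>/dev/null | grep -v -e '^S.*\.FCS$' -e '^D0000001\.FCS$')
if [ -z "$leftover" ]; then
  echo "clean shutdown: only expected .FCS files remain"
else
  echo "unexpected files after shutdown: $leftover"
fi
```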

File Name Mapping

FairCom DB maintains a mapping of file names to file numbers. This transient information is stored in the D0000000.FCS file.

Delete Node Queue

D0000001.FCS maintains a list of emptied index nodes. These nodes are eventually cleaned up by the delete node thread and remain available through this queue for reuse.

Active Transaction Logs

Information concerning ongoing transactions is saved on a continual basis in a transaction log file. A chronological series of transaction log files is maintained during the operation of the FairCom Server. Transaction log files containing the actual transaction information are saved as standard files. They are given names in sequential order, starting with L0000001.FCS (which can be thought of as “active FairCom Server log, number 0000001”) and counting up sequentially (i.e., the next log file is L0000002.FCS, etc.).

The FairCom Server saves up to four active logs at a given time. When there are already four active log files and another is created, the lowest numbered active log is either deleted or saved as an inactive transaction log file, depending on how the FairCom Server is configured (see inactive transaction logs).

Every new session begins with the FairCom Server checking the most recent transaction logs (i.e., the most recent 4 logs, which are always saved as “active” transaction logs) to see if any transactions need to be undone or redone. If so, these logs are used to perform an automatic recovery. When configuring the FairCom Server, the odd and even numbered logs can be written to different physical devices. See Configuring the FairCom Server.

Checkpoint Files

S0000000.FCS and S0000001.FCS are generated during transaction log checkpoints. These files are used to "kick start" recovery by pointing to known good transaction states.

Inactive Transaction Logs

Transaction log files no longer active (i.e., they are not among the 4 most recent log files) are deleted by default. To save inactive transaction log files when new active log files are created, add the KEEP_LOGS configuration option to the server configuration with a positive number indicating the number of logs to keep. In this case, an inactive log file is created from an active log file by renaming the old file, keeping the log number (e.g., L0000001) and changing the file’s extension from “.FCS” to “.FCA.” The Administrator may then safely move, delete, or copy the inactive, archived transaction log file.
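As an illustration of this naming convention, archived logs can be swept to long-term storage with a short shell step; the directories here are temporary stand-ins.

```shell
# KEEP_LOGS renames inactive logs from .FCS to .FCA, keeping the log
# number. The .FCA files may be safely moved, deleted, or copied; this
# sketch sweeps them into an archive directory (stand-in paths).
SERVER_DIR="$(mktemp -d)"; ARCHIVE_DIR="$(mktemp -d)"
touch "$SERVER_DIR/L0000001.FCA" "$SERVER_DIR/L0000002.FCA"   # demo archived logs

mv "$SERVER_DIR"/L*.FCA "$ARCHIVE_DIR"/
ls "$ARCHIVE_DIR"
```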

Temporary Stream Files

The server creates five stream files at startup. These files prevent errors when the operating system has used a large number of file handles and the server needs a stream file. The file names begin with FREOPEN, followed by a distinguishing character, and end with .FCS. These temporary files are used for internal server operations and should automatically be deleted during a normal server shutdown.

Optional Server System Event Log

The FairCom Server maintains two optional system files: SYSLOGDT.FCS and SYSLOGIX.FCS. SYSLOGDT.FCS is a FairCom DB data file with a record for each recordable system event. Unlike the CTSTATUS.FCS file, the system log files can be encrypted so entries cannot be added, deleted, or modified with a simple text editor, and vendors can add application-specific entries to the log. See Configuring the FairCom Server or your vendor’s documentation for information on the SYSLOG keywords appropriate to your application.

In case of a system failure, be sure to save all the system files (i.e., the files ending with “.FCS”). CTSTATUS.FCS may contain important information about the failure. When there is a system catastrophe, such as a power outage, there are two basic possibilities for recovery:

  • When the power goes back on, the system will use the existing information to recover automatically.
  • The Administrator will need to use information saved in previous backups to recover (to the point of the backup) and restart operations.

Learn more at FairCom Server Status Monitoring Utility, ctsysm.

Deferred Index State File

The DFRKSTATEDT.FCS file stores a list of deferred index states. Learn more at Deferred Indexing in the FairCom DB C and C++ Programmer's Guide.

Global Sequences

The SEQUENCEDT.FCS file stores global sequences.

Replication State File

REPLSTATEDT.FCS

REPLSTATEIX.FCS

Once the c-tree Replication Agent has started and successfully connected to both the source and target servers, it creates a set of files on the target server containing information about the current state and position of replication within the source transaction logs. This allows the Replication Agent to pick up from a previous session should a network connection fail or the agent be paused for administrative purposes.

For more information, see the Replication Agent Guide.

SQL Swap Files

The LTS* files are created by large SQL queries: when sorting for a query grows beyond a set memory bound, the sort containers are moved to disk. These files are temporary and are automatically removed after query execution completes. An unexpectedly large query can generate them.

For information about SQL swap files (LTS_* files), see SETENV TPE_TMPDIR.

 

Copying FairCom Server Controlled Files

WARNING: FairCom Server controlled files should only be copied, moved, or deleted when FairCom DB is shut down. Copying, moving, or deleting files while FairCom DB is in operation can lead to unpredictable errors and data integrity concerns and is never advised.

When a file open is attempted, FairCom DB checks if either a file with the same name is open, or if a file with the same unique ID is open. In either case, the match means a physical file open is not required. Instead, the open count for the file is incremented. The unique file ID permits different applications and/or client nodes to refer to the same file with different names, i.e., different drive or path mappings. However, if two different files have the same ID, problems arise because the second file will not actually be opened. The ID is constructed so that no two files could have the same ID unless someone copies one file on top of another.

When a file without a matching name matches the unique file ID, FairCom DB attempts to determine if they are actually different files. If so, it automatically generates a new unique ID for the file. In either case, a message to the system console indicates the names of the two files involved. If this information is not critical to your operation, suppress this message by adding the following entry to the FairCom DB configuration file:

MONITOR_MASK  MATCH_FILE_ID

 

Server Unique File Detection - Network/Remote/UNC File Names

As the FairCom Server manages file open/close operations for multiple users, it is critical for it to recognize uniqueness of a file. Different users can refer to the same physical file using different file names through aliases, relative paths, device mappings or SUBST commands. Internally, the FairCom Server has tests to determine if two files with different names are really different, or are actually the same physical file being accessed with different paths or alias names. If this internal unique file test is not accurate, the FairCom Server may attempt to open the files as two separate physically different files. This presents many problems as the FairCom Server is then managing two separate caches for the same file - data integrity can no longer be enforced in these situations.

In some cases, when confronted with file names mapped across a network or referred to with UNC syntax, such as “\\mymachine\c\mydata\myfile.dat”, this internal unique file test incorrectly determines that two separate files are being addressed when the same file is actually being accessed: one user is using a physical name while another is using a UNC name. This problem, while uncommon, leads to serious consequences. As such, FairCom DB provides a number of protections.

COMPATIBILITY NO_UNIQFILE

This option disables attempts to determine if two files accessed with different file names (or paths) and which have identical FairCom DB file IDs are actually the same or different files. This support is added in case our tests for uniqueness are somehow incomplete and lead to unintended file ID reassignments. These modifications give FairCom DB the capability to disable the uniqueness test when files are suspected of having the same internal, “unique” 12-byte ID.

COMPATIBILITY EXACT_FILE_NAMES

This option mandates that all opens of a file use exactly the same file name. If the internal name test determines that two names refer to the same physical file, the second open with a different name is not permitted, and error EXCT_ERR (642) is returned.

There is a subtle interaction with the NO_UNIQFILE keyword defined above. The possible outcomes for all the combinations of keywords and files are in the table below. A file is represented by a lower case path, an uppercase name, and a numeric file ID. For example, pA1 has path ‘p’, name ‘A’, and file ID 1. pA1 and qA1 are the same file accessed with different paths; pA1 and qB1 are different files mistakenly having the same file ID; and pA1 and qB2 are two different files with different file IDs (as expected).

Four possible keyword combinations are possible:

Standard   Neither NO_UNIQFILE nor EXACT_FILE_NAMES
NoUnique   Only NO_UNIQFILE
Exact      Only EXACT_FILE_NAMES
Both       Both NO_UNIQFILE and EXACT_FILE_NAMES

In the outcomes table below, the first (successful) open is for file pA1. The second open is as indicated. In actuality, only the second open for qB1 requires some adjustment or an error return.

Second Open   Standard                       NoUnique                                              Exact                          Both
qA1           NO_ERROR                       NO_ERROR                                              EXCT_ERR (642)                 EXCT_ERR (642)
qB1           Modify B’s file ID; NO_ERROR   Incorrectly treat B as a shared open of A; NO_ERROR   Modify B’s file ID; NO_ERROR   EXCT_ERR (642)
qB2           NO_ERROR                       NO_ERROR                                              NO_ERROR                       NO_ERROR

The uniqueness test (which is based on system dependent calls) may incorrectly indicate that two files are unique when they are the same. This occurs with certain mappings and/or aliases masking the sameness of the files. If this occurs, the first row of the above table becomes:

Second Open   Standard                                                                  NoUnique   Exact                                                                     Both
qA1           File ID incorrectly reassigned; same file opened as two different files   NO_ERROR   File ID incorrectly reassigned; same file opened as two different files   EXCT_ERR (642)

The most conservative approach is to turn on both keywords, but of course this requires the same name (and path) to be used for a file on all opens. If the uniqueness test is without weakness, then the standard setting (i.e., neither keyword) works best.

Local File Test

Because of the potential problems with network file names, and because FairCom recommends against placing FairCom Server files or logs on network drives (i.e., drives NOT on the local machine running the FairCom Server executable), a warning message is logged to CTSTATUS.FCS when a network file is detected. Besides the potential problem for the unique file test, placing data, index, or server log files on a mounted network drive introduces additional network overhead and jeopardizes the server’s performance. The warning is only issued on the first such occurrence to avoid unnecessary overhead. If either of the COMPATIBILITY keywords NO_UNIQFILE or EXACT_FILE_NAMES is active, the issue described above is not in play, and FairCom DB automatically disables this test.

Due to the possibility of a performance hit, the COMPATIBILITY NO_TEST_LOCAL keyword is available to turn off the check of whether a file is local or remote.

 

Automatic Recovery

As described in Starting the FairCom Server, each time the FairCom Server starts it checks the current active transaction logs and determines if any transactions must be undone or redone. If any recovery operation is required, the FairCom Server does it automatically. The Administrator need not do anything. The FairCom Server displays messages indicating the beginning and the end of the recovery. When automatic recovery completes, the FairCom Server is ready to use.

Performance Improvements in V12

Automatic Recovery could take a long time to complete if many files were referenced in the transaction logs. Backup recovery logic has been enhanced to greatly speed this process.

As an example, this performance enhancement reduced the time of ctrdmp restoring 1500 data and 1500 index files from 406 sec. to 63 sec.


 

Recovery in Alternate Locations with REDIRECT

The REDIRECT feature allows a file originating in one directory structure to be repositioned in another directory location during a dynamic dump restore. This support has been extended to FairCom DB automatic recovery.

Redirection rules can be specified by using the following configuration entry one or more times in the server configuration file ctsrvr.cfg:

REDIRECT <old path> <new path>

The REDIRECT entry redirects filename references in the transaction logs during automatic recovery to the specified new filename. This option is useful when FairCom DB data and index files are moved to a different location (on the same system or on another system) before running automatic recovery.

To specify an empty string for one of the REDIRECT arguments use a pair of double quotes ("").

Examples

If a file originally existed with the name and path C:\Documents and Settings\Administrator\c-tree Data\customer.dat and now exists as the file D:\Documents and Settings\Guest\customer.dat, the following option will allow automatic recovery to proceed and find the file in its new location:

REDIRECT "C:\Documents and Settings\Administrator\c-tree Data" "D:\Documents and Settings\Guest"

Here’s a similar example using Unix paths, where the original file is named /users/administrator/c-tree data/customer.dat and the file now exists as /users/guest/customer.dat:

REDIRECT "/users/administrator/c-tree data" "/users/guest"

Note: Use double quotes when a filename contains spaces.

Updating IFIL Filenames

As a result of redirection, if the IFIL resource of a file contains a path, that path will be incorrect after the file is redirected to its new location. To support copying FairCom DB files from one directory to another (on the same system or on a different system) and accessing them in their new location, any filename paths in a FairCom DB data file’s IFIL resource must be updated.

The FairCom DB configuration option REDIRECT_IFIL <filename> automatically modifies redirected files on the server. When this option is specified, on server startup (after automatic recovery completes) the file named <filename> is opened and its list of file names is read. <filename> is a text file listing one FairCom DB data file per line. For each file listed, FairCom DB opens the file and applies the filename redirection rules (specified with one or more REDIRECT options) to change the data and index file paths in the file’s IFIL resource.
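Putting the pieces together, a hypothetical setup might write the list file and configuration entries like this; all file names and paths are illustrative only.

```shell
# Sketch: a REDIRECT_IFIL list file plus matching ctsrvr.cfg entries.
# File and path names are illustrative, not authoritative.
CFG_DIR="$(mktemp -d)"

# REDIRECT_IFIL expects one FairCom DB data file per line:
cat > "$CFG_DIR/redirect_list.txt" <<'EOF'
customer.dat
orders.dat
EOF

# Matching server configuration entries:
cat >> "$CFG_DIR/ctsrvr.cfg" <<'EOF'
REDIRECT "/users/administrator/c-tree data" "/users/guest"
REDIRECT_IFIL redirect_list.txt
EOF

cat "$CFG_DIR/ctsrvr.cfg"
```

On startup the server would then apply the REDIRECT rule to the IFIL paths of each file named in redirect_list.txt.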

Refer to the FairCom DB ctredirect standalone utility to manually modify files that may have been moved.

 

Options for Faster Auto-Recovery

The following keywords, described in Advanced Configuration Keywords, reduce FairCom Server disaster recovery time. Reducing auto-recovery time comes at the expense of server throughput during normal operation. Fast auto-recovery is typically a vital consideration in time-sensitive, mission-critical applications, such as a PBX controller or an embedded automation control application. If your application instead requires the fastest possible data access during normal operation, this section will not be of interest.

CHECKPOINT_FLUSH

CHECKPOINT_IDLE

CHECKPOINT_INTERVAL

FORCE_LOGIDX

RECOVER_DETAILS

RECOVER_FILES

RECOVER_MEMLOG

TRANSACTION_FLUSH

 

Database Backup Guide

The "dynamic dump" feature provides an administrator a safe, secure method of backing up data while FairCom DB is operational. The Administrator can schedule a dump of specific files, which may be all files necessary for recovery or a subset of them. The dump executes while FairCom DB is actively processing transactions and is transparent to users.

FairCom DB performs a dump at first opportunity on or after a scheduled time. When beginning a scheduled dump, FairCom DB temporarily halts new transactions and starts the actual dump as soon as all active transactions complete or abort after a predetermined delay period.

The default behavior is to wait for all pending transactions to complete, which may cause a timeout status error, NTIM_ERR (156), to be logged. This error implies nothing about the dump itself, other than that it waited for transactions to complete. To abort pending transactions after a specified number of seconds and start the dynamic dump, use the !DELAY option.

When a dynamic dump needs to abort a transaction, it puts the transaction into an error state; it does not actually abort the transaction itself. With the transaction in an error state, it cannot progress or commit, so the connection that owns the transaction must abort it. Records locked in the transaction remain locked until the connection aborts the transaction or detects that it has been canceled. When this occurs, the application may receive error TABN_ERR (78), indicating the dynamic dump wait was exhausted and the transaction was aborted.

Once the dump commences, transactions can process as usual.

Note: The dynamic dump and recovery processes are intended primarily for files under transaction processing control. Non-transaction controlled files can be dumped with certain restrictions. See Dump Files Without Transaction Control for more information.

The following sections describe the dump and recovery utilities:

Process         Utility   Explanation
Dynamic Dump    ctdump    Dumps data during server operation.
Dump Recovery   ctrdmp    Restores files to their state as of the last dump.
Rollback        ctrdmp    Rolls the database back to an earlier time following a dump recovery.
Roll Forward    ctfdmp    Rolls the database forward to a later time following a dump recovery.

 

Scheduling a Database Backup

There are two ways to schedule dynamic dumps:

Server Configuration

The FairCom Server configuration file may be used to schedule dynamic dumps. In the file, the keyword DUMP is followed by the name of the script file defining the dump. The path to this script is relative to the server's working directory.

Dynamic Dump Utility

The dynamic dump utility, ctdump, is a separate utility for the Administrator to use at any time while the server is active.

To schedule an ad hoc dynamic dump with ctdump use the following procedure:

  1. While FairCom DB is running, start the utility program ctdump as any other program in the environment.
  2. Enter the password for the ADMIN administrator account.
  3. Enter the current FairCom Server name (if assigned a different name than the default). See Basic Configuration Options for information on SERVER_NAME.
  4. Enter the name (with path if necessary) of the dynamic dump script file.

The FairCom Server confirms that it has scheduled the requested dynamic dump.

Once a dynamic dump has completed, files may be used for Dump Recovery and/or Rollback.

 

ctdump - Schedule Backup Utility

Note: For a complete discussion of dynamic dumps, see Backups and Data Integrity in the FairCom Server Administrator's Guide.

Operational Model:

  • Client

ctdump schedules a dynamic dump on the fly. The backup definition script may be located either on the server, or passed in from the client.

Command Syntax

ctdump  [-s svn] [-u uid] [-p upw] [-t script] [-b bufsiz] [-m] [-n] [-c] [-o backup_filename] [-x]

Options

  • -s svn - c-tree Server name
  • -u uid - User name
  • -p upw - User password
  • -t script - Dump script name
  • -b bufsiz - Use buffer size of bufsiz bytes
  • -c - Send dump script from client
  • -m - Minimize progress notifications
  • -n - Send progress notifications to client
  • -o backup_filename - Write dump stream from server to file on client
  • -x - Write dump stream from server to stdout

For options available when scripting a dynamic dump, see Dynamic Dump Options.

The following demonstrates example usage of this utility:

ctdump -u ADMIN -p ADMIN -t thescript -s FAIRCOMS

Secure Authentication File

This utility supports the use of an encoded password file. Encoded password files keep user IDs and passwords from plain view when the utility is used within a script file. They are created with the ctcmdset utility. The plain text form of the file should be:

; User Id
USERID ADMIN
; User Password
PASSWD <pass>

Use the -1 option to specify the name of the encoded file.
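For example, the plain-text file can be generated and then encoded. The ctcmdset and ctdump invocations below are shown commented out and are illustrative only; check your release's documentation for their exact argument forms.

```shell
# Build the plain-text credentials file that ctcmdset encodes. The
# password value here is a placeholder; the commented-out invocations
# are illustrative, not authoritative.
WORK="$(mktemp -d)"
cat > "$WORK/pwfile.txt" <<'EOF'
; User Id
USERID ADMIN
; User Password
PASSWD ADMIN
EOF

# Encode the file, then pass the encoded file to ctdump with -1:
#   ctcmdset <arguments per your release's documentation>
#   ctdump -1 <encoded file> -t dump.txt -s FAIRCOMS

wc -l < "$WORK/pwfile.txt"
```

Keeping only the encoded file in scripts means the ADMIN credentials never appear in plain text on disk or in process listings.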

Dump and Restore Version Compatibility

The ctrdmp utility is used to restore a dynamic dump, and the ctfdmp utility can be used to roll forward. Occasionally an update to the FairCom Database Engine may cause an incompatibility between versions. For this reason, you must use the ctrdmp utility from the same release that created the dump, so it is important to save a copy of the ctrdmp utility compatible with each dump file. The ctfdmp, ctldmp, and ctrdmp utilities display the FairCom DB version used to compile them when they are run.

Compression and Encryption

The ctdump utility's output can be piped through compression and encryption utilities.

Windows backup compressed with 7z compression utility:

ctdump -s FAIRCOMS -u admin -p ADMIN -t dump.txt -c -x | 7z a -si backup.7z

Unix backup compressed and AES encrypted with standard utilities:

./ctdump -s FAIRCOMS -u admin -p ADMIN -t dump.txt -c -x | gzip|openssl enc -aes-256-cbc -salt >backup.gz.aes

Tip! Unix/Linux Alternative

This method also works with server-side dynamic dump backups that predate this support. It can be implemented on Unix systems using named pipes (mkfifo) and sending the dump to the named pipe.

  1. Create a named pipe:
    1. mkfifo testpipe
  2. The pipe blocks until both sides are in use. So we start our compression command:
    1. cat testpipe | gzip > backup.gz &
  3. Now we create our dynamic dump, with "testpipe" as the dump file. (This example uses the client output, however it should work the same from the server):
    1. ctdump -s FAIRCOMQ -u admin -p ADMIN -t dump.txt -c -o testpipe
      
      c-tree(tm) Version 11.8.0.61131(Build-180802) Dynamic Backup Utility
      
      Copyright (C) 1992 - 2018 FairCom Corporation
      
      ALL RIGHTS RESERVED.
      
      Reading dump stream from server with buffer size of 100000
      
      Dynamic Dump has been successfully written to the file testpipe.
  4. The backup.gz file contains the dump file.

The same approach can be used to compress and encrypt (and potentially other operations provided by system utilities):

cat testpipe|gzip|openssl enc -aes-256-cbc -salt >backup.gz.aes

To extract the dump:

cat backup.gz.aes | openssl enc -aes-256-cbc -d |gzip -d >backup.ctree

Use ctrdmp to extract the files from backup.ctree.
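The same pipeline can be exercised end to end without the server. The sketch below substitutes a plain file for the dump stream and adds -pbkdf2 and an inline pass phrase so it runs non-interactively; in production, supply the pass phrase from a protected source rather than the command line.

```shell
# Round trip of the compress-and-encrypt pipeline using a stand-in file
# in place of the ctdump stream. -pbkdf2 and -pass are additions for
# non-interactive use; they are not part of the original example.
WORK="$(mktemp -d)"
echo "stand-in for a dynamic dump stream" > "$WORK/dump.orig"

# Compress and encrypt (same shape as: cat testpipe|gzip|openssl ...):
gzip < "$WORK/dump.orig" |
  openssl enc -aes-256-cbc -salt -pbkdf2 -pass pass:example > "$WORK/backup.gz.aes"

# Decrypt and decompress to recover the stream for ctrdmp:
openssl enc -aes-256-cbc -d -pbkdf2 -pass pass:example < "$WORK/backup.gz.aes" |
  gzip -d > "$WORK/backup.ctree"

cmp "$WORK/dump.orig" "$WORK/backup.ctree" && echo "round trip OK"
```

Verifying the round trip this way before relying on it is cheap insurance that the chosen utilities and options can actually reproduce the dump stream.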

Error Codes

The following error codes are related to dynamic dump operations:
 

Error Name   Error Code   Explanation
FUNK_ERR     13           Cannot determine file type. Possibly a c-tree V4.3 file?
READ_ERR     36           Failed to read file; either a corrupted or non-c-tree file.
TCOL_ERR     537          Transaction log collision. Two sets of transaction logs in the same directory?
FCPY_ERR     796          Immediate dump restore file copy failed.
DRST_ERR     797          Immediate dump restore failed.


 

dyndmp and dyndmpsetopt

The dynamic dump client can send a script to the server and receive a dump stream and/or status messages from the server. The following capabilities are available for scripting a dynamic dump:

  1. When scheduling a dynamic dump, the client can send the dump script to the server.
  2. When running a dynamic dump, the client can request that status messages be sent to it while the dump is performed and/or the dump stream file can also be sent to the client process.

To use these options, call the function dyndmpsetopt() before calling dyndmp():

extern NINT dyndmpsetopt(NINT option,pVOID value);
 

The following are the supported options. All options are disabled by default.

  • DDOPT_SENDSCRIPT - Send dump script to server. Set value to the script name, or set it to NULL to disable this option. Example:

dyndmpsetopt(DDOPT_SENDSCRIPT, "script.txt");

  • DDOPT_RECVSTREAM - Receive dump stream from server. Set value to YES to enable this option or NO to disable this option. Example:

dyndmpsetopt(DDOPT_RECVSTREAM, (pVOID) YES);

  • DDOPT_RECVSTATUS - Receive status messages from server. Set value to YES to enable this option or NO to disable this option. Example:

dyndmpsetopt(DDOPT_RECVSTATUS, (pVOID) YES);

  • DDOPT_SETCALLBK - Set callback function. Set value to the callback function pointer, or set it to NULL to disable the use of the callback function. Example:

extern ctCONV NINT mycallback(pVOID pctx,pVOID pdata,NINT datalen,NINT opcode);

dyndmpsetopt(DDOPT_SETCALLBK, &mycallback);

  • DDOPT_SETCONTEXT - Set callback function context pointer. Set value to the context pointer that will be passed to the callback function. Example:

mystruct mycontext;

dyndmpsetopt(DDOPT_SETCONTEXT, &mycontext);

  • DDOPT_SETBUFSIZ - Set communication buffer size. Set value to the buffer size to use. Example:

dyndmpsetopt(DDOPT_SETBUFSIZ, (pVOID) 100000);

Notes:

1) The dump options remain in effect for all dynamic dumps performed by the current connection until they are changed.

2) When the DDOPT_RECVSTREAM or DDOPT_RECVSTATUS options are used, the following dynamic dump script options are ignored:

COPY_NONCTREE - Non-ctree files cannot be copied.

DATE and TIME - No scheduling of dump for later time.

EXT_SIZE - Only one dump extent is created.

FREQ - No repeat of dump.

SEGMENT - No segmenting of dump stream.

The ctdump utility supports these features through command-line options:

usage: ctdump  [-s svn] [-u uid] [-p upw] [-t script] [-b bufsiz] [-m] [-n] [-c] [-o backup]

Options:

  • -s svn - c-tree Server name
  • -u uid - User name
  • -p upw - User password
  • -t script - Dump script name
  • -b bufsiz - Use buffer size of bufsiz bytes
  • -c - Send dump script from client
  • -m - Minimize progress notifications
  • -n - Send progress notifications to the client
  • -o backup_filename - Write dump stream from server to file on client

Example:

# ctdump -u ADMIN -p ADMIN -o backup.fcd -c -t script.txt -s FAIRCOMS -n

Results:

c-tree(tm) Version 11.1.0.46197(Build-150826) Dynamic Backup Utility
Copyright (C) 1992 - 2015 FairCom Corporation
ALL RIGHTS RESERVED.
Reading dump stream from server with buffer size of 100000
 Start dump. Estimated dump size:      2691072
FAIRCOM.FCS
 86% 100% [     2328576 of      2326528 bytes]
SYSLOGDT.FCS
 89% 100% [       86016 of        81920 bytes]
SYSLOGIX.FCS
 99% 100% [      266240 of       262144 bytes]
S0000000.FCS
100% 100% [        4096 of          128 bytes]
S0000001.FCS
100% 100% [        4096 of          128 bytes]
L0000001.FCS
100% 100% [        2048 of          679 bytes]
   End dump.    Actual dump size:      2705408
Dynamic Dump has been successfully written to the file backup.fcd.


 

Backup Script Options

The FairCom dynamic dump provides a variety of options. These options are included in a script file. This section describes the script file and lists the script keywords and arguments available for defining a dynamic dump.

 

Backup Script File

The dump script file is a plain text file that specifies your options: for example, when to perform a dump, at what interval to repeat it, and which files to include in it.

Format

The script file consists of a series of instructions, each of which is given by a keyword followed by a space and an argument (e.g., the keyword !DAY followed by the argument “WED”). All script keywords begin with an ‘!’ and are not case sensitive (i.e., !DAY = !Day). Arguments are strings of letters, numbers, and punctuation, in the format shown below for each keyword (e.g., WED). New lines divide script keywords and arguments. Keep each keyword/argument pair on a separate line, as in the sample script shown after the list of keywords.

With the following two exceptions, the order of keywords does not matter:

  • The next-to-last script keyword must be !FILES, followed by a list of the files to be dumped, one file name per line. Do NOT include a file name on the same line as the !FILES keyword.
  • The last script keyword in the script file must be !END, which takes no argument.
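These ordering rules can be checked mechanically before a script is submitted. Below is a minimal sketch in Python (an illustration of the documented layout rules, not a FairCom utility):

```python
def check_script_order(lines):
    """Check that a dump script ends with !FILES, one file name per
    line, and then !END -- a sketch of the documented layout rules,
    not FairCom code."""
    # Keep only non-blank lines; keywords begin with '!'.
    lines = [ln.strip() for ln in lines if ln.strip()]
    if not lines or lines[-1].upper() != "!END":
        return "script must end with !END"
    keywords = [i for i, ln in enumerate(lines) if ln.startswith("!")]
    # The keyword before !END must be !FILES, alone on its line.
    if len(keywords) < 2 or lines[keywords[-2]].upper() != "!FILES":
        return "!FILES must be the next-to-last keyword, on its own line"
    # At least one file name must follow !FILES.
    if keywords[-2] == len(lines) - 2:
        return "no file names listed after !FILES"
    return None  # layout is valid

assert check_script_order(["!TIME 23:00:00", "!FILES",
                           "FAIRCOM.FCS", "!END"]) is None
assert check_script_order(["!FILES", "!END"]) == "no file names listed after !FILES"
```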

Example

!TIME   23:00:00
!DAY    Sun
!DUMP   SYSTEM.BAK
!DELAY  600
!FREQ   168
!FILES
FAIRCOM.FCS
!END

This script schedules a weekly dump, at 11:00 PM on Sunday nights. The only file included in the dump is FAIRCOM.FCS. The system will wait until 11:10 PM (i.e., 600 seconds delay, after the starting time of 11:00 PM) for all active transactions to complete and then it will abort any transactions still active and begin the actual dump. The dump data will be saved in the file named SYSTEM.BAK.

Note: The FairCom DB Server configuration file can also control the way lingering transactions are aborted.

Note: The server opens the dynamic dump script when the dump is scheduled, reads its contents, then closes it. At dump execution time the server opens the script again, reads the contents and then closes it before proceeding to dump the files.

 

!BLOCK_RETURN

Forces ctdump to wait until the dynamic dump completes before returning. Without !BLOCK_RETURN, ctdump returns as soon as the FairCom Server receives the dump request. By waiting for completion, !BLOCK_RETURN lets the System Administrator determine when the dump is finished, for example to alert a System Administrator when a dynamic dump completes.

 

!COMMENT

Default: Off

Informs the FairCom Server that the remainder of the script file is for documentation purposes only and is not to be evaluated. Do not place keywords after this keyword.

 

!COPY_NONCTREE

To include non c-tree files in a dynamic dump, use the dump keyword !COPY_NONCTREE. Any file included in the !FILES section of the FairCom Server dynamic dump script that receives error FUNK_ERR (13) or error READ_ERR (36) on c-tree open will be treated as a non c-tree file and copied directly to the dump stream base directory. More accurately, to the dump stream base directory plus any subdirectory included in the file's name.

If the destination directory does not exist, the FairCom Server will attempt to create it. If this directory creation fails a FCPY_ERR (796) is reported.

Note: No check is made for wildcard specifications in the c-tree and non-ctree file sections matching the same filename. In that case, the file is included in the dump as a c-tree file and then also copied as a non-ctree file.

 

!DATE <mm/dd/yyyy>

Date to perform the dynamic dump or rollback. If the date has already passed, the !FREQ interval is applied to find the next scheduled time. If no !DATE or !DAY is specified, today is assumed.

 

!DAY <day of week>

Instead of a date, a day of the week may be used to schedule the dump: SUN, MON, TUE, WED, THR, FRI, or SAT. If no date, time, or day of week is specified, the dump is scheduled for immediate execution.

 

!DELAY <seconds>

Number of seconds to wait until aborting active transactions.

If zero, the FairCom Server will not abort active transactions. The dump waits indefinitely until all active transactions have completed and no new transactions will be permitted to begin.

If the delay value is greater than zero, the FairCom Server waits until either the delay has expired or all active transactions have completed. At this point, it begins the dynamic dump and permits new transactions to start up. If all transactions have not completed, the FairCom Server aborts those transactions still in progress, with one of two error messages:

  • TABN_ERR (78) - indicates the transaction has been abandoned.
  • SGON_ERR (162) - a generic error indicating a break in communication between the FairCom Server and the application.

Note: The default behavior is to wait for all pending transactions to complete. This may cause a timeout status (NTIM_ERR, 156) to be logged to inform the DBA of this situation. That error does not imply anything about the dump other than that it waited for transactions to complete. If the DBA prefers to abort all pending transactions and start the dynamic dump immediately, the !DELAY script option can be used to abort the pending transactions after the specified number of seconds.

Option for Abandoning Dump

When the Dynamic Dump begins, by default it prevents new transactions from beginning and waits indefinitely for open transactions to complete. Once no transactions are active, a dump checkpoint is created, the dump creation begins, and transactions are allowed to resume. If the dump script option !DELAY <N> is specified, the Dynamic Dump waits up to N seconds for active transactions to complete before terminating them. A negative value (N < 0) causes the dump to be abandoned with error DUMP_ABANDONED_ERR (1162) if active transactions still remain after |N| seconds.
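The !DELAY decision rules can be summarized in a small model. This Python sketch (an illustration of the documented behavior, not FairCom code) maps a delay setting and transaction activity to the documented outcome:

```python
def delay_outcome(delay_seconds, txn_done_after):
    """Model the !DELAY rules: given the configured delay and the time
    (in seconds) at which the last active transaction would complete,
    return what the dump does. Illustration only, not FairCom code."""
    if delay_seconds == 0:
        return "wait"                 # wait indefinitely; never abort
    wait = abs(delay_seconds)
    if txn_done_after <= wait:
        return "dump"                 # transactions finished in time
    if delay_seconds > 0:
        return "abort-then-dump"      # abort stragglers (TABN_ERR/SGON_ERR)
    return "abandon"                  # N < 0: DUMP_ABANDONED_ERR (1162)

assert delay_outcome(0, 9999) == "wait"
assert delay_outcome(600, 30) == "dump"
assert delay_outcome(600, 3000) == "abort-then-dump"
assert delay_outcome(-600, 3000) == "abandon"
```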

 

!DUMP <dump file>

The name of the file or device into which all the data for all the dump files will be stored.

Note: If a file by this name already exists, it will be deleted at the beginning of the dump and the new dump file replaces the old file.

Note: There must be sufficient space for the dump file, which is limited to the maximum file size for the operating system (2 GB on some systems). If enough space is not available, the dump fails. A failure due to insufficient disk space will not corrupt anything, but additional space must be allocated before a dynamic dump is completed.

 

!END

Terminates the instructions in the script file. Place !END immediately after the !FILES keyword and list of files. !END takes no argument.

 

!ENDSEGMENT

Terminates the list of segments when specifying individual segment size and location.

 

!EXT_SIZE <bytes | NO>

Change the default extent (segment) size from 1 GB, or disable segmenting with NO. See Backup as Multiple Extent Files for more details.

 

!FILES

The !FILES keyword is followed by names of files to include in the dynamic dump or rollback. This must be the next to last keyword in the script file and it takes no arguments.

File names begin on the line following the !FILES keyword, one file per line; do not list a file name on the same line as !FILES. The !END keyword, on its own line, terminates the list.

We strongly suggest that FAIRCOM.FCS be included in your list.

Members of a superfile cannot be individually “dumped.” The entire superfile must be dumped; that is, the name of the host superfile, not a superfile member, is placed in the list of files.

The * and ? wildcards are supported.

See !RECURSE for other options.

NOTE: Dynamic dump supports cloning a data file. This means that an empty copy of the data file is backed up. To use this feature, precede the file name with the text <clone>. Here's an example showing myfile.dat being cloned and all files matching the wildcard cust*.dat being cloned (thedata.dat is a full backup):

!FILES
<clone>myfile.dat
<clone>cust*.dat
thedata.dat
!END

 

Wildcard Support for File Names

Dynamic dump backup and restore scripts specify the names of FairCom DB data and index files that are to be backed up or restored, delimited by the !FILES and !END script keywords. It is possible to specify filenames using the typical asterisk (*) and question mark (?) wildcard symbols. See FairCom DB Standard Wildcards.

In addition, the dynamic dump script keyword !RECURSE controls directory recursion when using wildcards. The !RECURSE keyword only applies when processing a !FILES entry containing a wildcard. Keep in mind that it is possible to specify standard file names and wildcard file names, one after the other, in the script. For example:

!RECURSE YES
!FILES
myfile.dat
cust*.dat
thedata.dat
*.idx
!END

There are three parameters for the !RECURSE keyword:

  • !RECURSE NO - Do not recurse subdirectories.
  • !RECURSE YES - Recurse underlying directories (max depth 32).
  • !RECURSE MATCH_SUBDIR - File names and directory names must match.

In the case of MATCH_SUBDIR, not only does the file name require a match on the wildcard pattern, but only directory names which match the pattern will be considered for recursion.

The dynamic dump is specifically designed for FairCom DB data and index files, including superfiles. Keep in mind that a wildcard pattern can also match non-c-tree files (see Back Up Non-ctree Files). The following definitions cause all files within the server’s LOCAL_DIRECTORY to be considered. If any non-ctree files are encountered, the dynamic dump rejects them, and a message is written to the CTSTATUS.FCS file if the DIAGNOSTICS DYNDUMP_LOG keyword is active. A rejection does NOT terminate the dump; it proceeds to the next file.

   !FILES *.*
   !END

   !FILES *
   !END

Please remember that the dynamic dump does not support individual superfile member names. Specify the host superfile name in the script to back up the members. Here are examples of wildcard names:

  • the pattern m*t matches mark.dat but does not match mark.dtx
  • the pattern *dat matches markdat and mark.dat
  • the pattern *.dat only matches mark.dat (not markdat)
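These patterns follow ordinary shell-style globbing. Assuming FairCom's * and ? behave like standard wildcards (as the examples above indicate), the behavior can be reproduced with Python's fnmatch module:

```python
from fnmatch import fnmatchcase

# '*' matches any run of characters (including '.'); '?' matches one.
assert fnmatchcase("mark.dat", "m*t")        # starts with m, ends with t
assert not fnmatchcase("mark.dtx", "m*t")    # ends with x, not t
assert fnmatchcase("markdat", "*dat")        # '*dat' needs no dot
assert fnmatchcase("mark.dat", "*dat")
assert fnmatchcase("mark.dat", "*.dat")      # '*.dat' requires a literal '.'
assert not fnmatchcase("markdat", "*.dat")
```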

 

Wildcards Exclude and Include Files in Backups

The backup script supports wildcard file names to easily specify a set of files to back up. A frequent situation is having a directory of many files and wanting to exclude several from a backup; for example, some files are transient and frequently re-created, so there is no need to include them.

V11.9 and later provide the !EXCLUDE_FILES keyword, which excludes files matching the given wildcard specifications from the backup.

  • !EXCLUDE_FILES must be specified before the !FILES section in the dump script.
  • !EXCLUDE_FILES also applies to the !NONCTREEFILES list of files.

Example

Back up all files matching *.dat except for files that match test.*.

Reminder: !FILES and !EXCLUDE_FILES must each appear on a line by itself.

!DUMP backup.fcb
!EXCLUDE_FILES
test.*
!FILES
*.dat
!END
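The include/exclude interaction can be sketched as a two-pass filter. This Python illustration (not FairCom code) assumes standard wildcard matching:

```python
from fnmatch import fnmatchcase

def select_files(candidates, include, exclude):
    """Apply !FILES include patterns, then drop anything matching an
    !EXCLUDE_FILES pattern -- a sketch of the documented behavior."""
    picked = [f for f in candidates
              if any(fnmatchcase(f, pat) for pat in include)]
    return [f for f in picked
            if not any(fnmatchcase(f, pat) for pat in exclude)]

files = ["cust.dat", "test.dat", "cust.idx", "test.log"]
assert select_files(files, ["*.dat"], ["test.*"]) == ["cust.dat"]
```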

 

FairCom DB Files to NOT Include in a Backup

Certain c-tree housekeeping files should not be included in your dynamic dumps. Restoring dumps that include these files can produce DCRE_ERR (444) errors, because they collide with the housekeeping files of the restore process itself. The following files should be excluded from your list of files to back up:

  • L*.FCS (Transaction Log files)
  • I*.FCS
  • S0000000.FCS (Transaction start file)
  • S0000001.FCS (Transaction start file)
  • D*.FCS

WARNING: Don't use *.FCS in your file list.

Exceptions:

  • FAIRCOM.FCS - Maintains user and group security information. ALWAYS back up this file.
  • SEQUENCEDT.FCS / SEQUENCEIX.FCS - Sequence number pool and index. If using the sequence number feature, these files are a must to back up.
  • SYSLOGDT.FCS / SYSLOGIX.FCS - System logs. If using this feature, consider backing up these files.

 

!FREQ <hours>

Hours between successive dumps. For example, 24 to repeat the dump once a day, or 168 to repeat the dump once a week.
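Combined with !TIME/!DATE, the !FREQ interval determines the next run when the scheduled time has already passed. A Python sketch of that rule (a hypothetical helper, not FairCom code):

```python
from datetime import datetime, timedelta

def next_run(scheduled, freq_hours, now):
    """If the scheduled time has passed, apply the !FREQ interval
    repeatedly until a future time is found -- a sketch of the
    documented scheduling rule."""
    step = timedelta(hours=freq_hours)
    while scheduled <= now:
        scheduled += step
    return scheduled

# Weekly dump (!FREQ 168) scheduled for Sunday 23:00 that has passed:
sched = datetime(2024, 1, 7, 23, 0)    # a Sunday
now = datetime(2024, 1, 10, 12, 0)     # the following Wednesday
assert next_run(sched, 168, now) == datetime(2024, 1, 14, 23, 0)
```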

 

!IMMEDIATE_RESTORE

The FairCom Server dynamic dump script file keyword, !IMMEDIATE_RESTORE instructs the dynamic dump to be immediately followed by a dump restore. This allows transaction-consistent files to be available immediately in a native file system form as opposed to embedded files in the dynamic dump stream file.

A key consideration is where the dynamic dump restore utility, ctrdmp, can run: it cannot run in the active server directory. If it does, error TCOL_ERR (537) results, indicating that ctrdmp conflicted with an existing server operation.

The natural solution is to run ctrdmp in the directory that receives the dump stream file, which is called the dump stream base directory. In essence, this requires that !DUMP <streamFileSpec> use a file name including a path where the dump restore should run. For example, a dynamic dump script entry of the form

!DUMP   I:\dump\mydumpstream

will cause the dump stream file mydumpstream to be created in the dump stream base directory I:\dump. If !IMMEDIATE_RESTORE is part of the dump script, then the automatically launched ctrdmp is also executed in the I:\dump directory.

It is recommended to launch ctrdmp.exe from a batch file called ctrdmp, which can reside in the server directory. The executable can reside elsewhere (i.e., in the dump stream base directory) and the batch file can call it using a path. The batch file can also do cleanup before (and after) a restore takes place, such as archiving from a prior restore.

Upon restoration of files, the enhanced dump restore will also automatically create any required directory hierarchies for previously backed up files. If an immediate restore operation fails, the server sets the error code for the dynamic dump operation to DRST_ERR (797, immediate dump restore failed).

 

!IMMEDIATE_RESTORE_LOGFILE

The !IMMEDIATE_RESTORE_LOGFILE option can be specified in a dynamic dump script to cause FairCom Server to write the output of the ctrdmp utility during the immediate restore to the specified file. This can be useful for troubleshooting failed immediate restores. For example, to write immediate restore output to the file ctrdmp.log in FairCom Server's working directory (in the LOCAL_DIRECTORY directory if that option is used), use this dump script option:

!IMMEDIATE_RESTORE_LOGFILE ctrdmp.log

Note that the !IMMEDIATE_RESTORE_LOGFILE option is only used by FairCom Server during the backup phase. It is ignored by the ctrdmp utility.

 

!IMMEDIATE_RESTORE_WITH_PROGRESS

FairCom Server reads the output of the dump restore utility during immediate restore and sends progress notifications to the client that invoked the dump (if requested). To use the notification feature during an immediate restore of a dynamic dump, follow these steps:

  1. Install a version of the ctrdmp utility that supports the !IMMEDIATE_RESTORE_WITH_PROGRESS option into a directory that is in FairCom Server's path.
  2. Use the new dump script option !IMMEDIATE_RESTORE_WITH_PROGRESS in the dump script rather than the !IMMEDIATE_RESTORE dump script option.
  3. Use the -n option when running the ctdump utility to request FairCom Server to send dynamic dump progress notifications to the ctdump utility.

 

!NONCTREEFILES

A dynamic dump script also supports listing specific files to be backed up as non-ctree files. If the !FILES list contains the !NONCTREEFILES keyword, all files following that keyword are treated as non c-tree files. Wildcard specifications are allowed. The !NONCTREEFILES keyword must appear after the !FILES keyword.

Also see the alternative method !COPY_NONCTREE

Note: The !NONCTREEFILES script keyword does not require specifying the !COPY_NONCTREE option in the script.

 

!PROTECT and !PROTECT_LOW

The keyword !PROTECT, without an argument, added to a dynamic dump script file suspends updates to each non-transaction file while it is being dumped. This ensures the file’s data integrity. The associated index files for a data file are not guaranteed to be consistent with the data file because the files are not dumped at the same time. With transaction files, the files are consistent because transaction log entries are used to bring all files back to the same point in time, i.e., the effective dump time. In most situations it is more efficient to dump only the data files and rebuild to recreate the indexes.

The update suspension is enforced only at the ISAM level unless the keyword !PROTECT_LOW is used instead. !PROTECT and !PROTECT_LOW are mutually exclusive options; the last one in the script is used. FairCom suggests using !PROTECT_LOW when using low-level function calls.

Whether !PROTECT or !PROTECT_LOW is used, resource updates are suspended at the AddResource(), DeleteResource(), and UpdateResource() entry points.

 

!RECURSE <YES | NO | MATCH_SUBDIR>

Default is NO. Controls directory recursion when using wildcards. The !RECURSE keyword only applies when processing a !FILES entry containing a wildcard. In the case of MATCH_SUBDIR, not only does the file name require a match on the wildcard pattern, but only directory names which match the pattern will be considered for recursion.

 

!SEGMENT

See details in Segmented Dynamic Dump.

 

Dynamic dump !REPLICATE

Enables replication of the files specified in the !FILES section, including a superfile host and its members.

You can enable replication on one or more files during your dynamic dump "hot" backup operation. This allows an easy way to begin syncing files between systems.

A dynamic dump script can specify the !REPLICATE option in the !FILES section to instruct the dynamic dump to enable replication support for the files whose names follow the !REPLICATE option. Replication commences after the dynamic dump has achieved a quiet transaction state. Like other files listed in the !FILES section, these files are also backed up by the dynamic dump. For example, the following script causes the dynamic dump to enable replication for the file test2.dat if it does not already have replication enabled:

!DUMP mybackup.fcd
!FILES
FAIRCOM.FCS
test1.dat
test1.idx
test2.idx
!REPLICATE
test2.dat
!END

If enabling replication fails for any file, the dynamic dump logs an error message to CTSTATUS.FCS and terminates. Possible causes of such an error include:

  1. Specifying, after the !REPLICATE keyword, the name of a non-ctree file or a file that does not meet the requirements for replication.
  2. A file open in exclusive mode at the time of the dump; the dynamic dump cannot enable replication for such a file.

 

!TIME <hh:mm:ss>

Time of day, on a 24-hour clock, to perform the dynamic dump or rollback. If the time has already passed, then the !FREQ interval is used to find the next scheduled time. If a !DATE or !DAY is specified without a time, then the time defaults to 23:59:59.

The script requires the use of leading zeros for the hour, minute, and second so that each contains two digits. For example, the valid entry for 6:00 is:

!TIME 06:00:00

The following is not a valid entry (notice the single digit, "6", for hours):

!TIME 6:00:00

If no time, day, or date is specified the dump begins immediately.
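The leading-zero requirement amounts to a strict hh:mm:ss format on a 24-hour clock, which can be expressed as a simple pattern check (a sketch, not a FairCom validator):

```python
import re

# Two digits each: hour 00-23, minute and second 00-59.
TIME_RE = re.compile(r"^([01]\d|2[0-3]):[0-5]\d:[0-5]\d$")

assert TIME_RE.match("06:00:00") is not None   # leading zero: valid
assert TIME_RE.match("23:59:59") is not None
assert TIME_RE.match("6:00:00") is None        # single-digit hour: invalid
```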

 

Dynamic Dump callback event notifications during immediate restore

For developers who invoke a dynamic dump and receive notifications, the dynamic dump callback function also receives event notifications during the immediate restore. When the user-defined dynamic dump callback function is invoked with the opcode parameter set to ctDDCBKstatus, the pdata parameter is of type pDDST. The DDST structure's phase field indicates which event occurred. The event values for immediate restore are:

1. DDP_BEGINRESTORE: start of dump restore
2. DDP_BEGINFILE, DDP_PROGFILE, and DDP_ENDFILE events occur for each file restored from the backup.
3. DDP_ENDRESTORE: end of dump restore; pddst->errcod is zero if it succeeded or non-zero if it failed.
4. DDP_BEGINROLBAK: start of rollback to dump start time
5. DDP_ENDROLBAK: end of rollback to dump start time
6. DDP_BEGINCLNIDX: start cleaning index nodes
7. DDP_ENDCLNIDX: end of cleaning index nodes
8. DDP_BEGINUPDIFL: start of IFIL update
9. DDP_ENDUPDIFL: end of IFIL update

See ctdumpCallback() in ctree\source\ctdump.c for an example of a callback function that handles these events.

Stop when client callback function returns an error

If an error occurs in the client-side dynamic dump callback function, the server will not continue to perform the dump. In V11.2.3 and later, the logic has been modified so that any error returned by the client's dynamic dump callback function terminates the dynamic dump.

Note: This is a Compatibility Change.

 

Dynamic Dump status and error messages

The dynamic dump client notification option sends a status message to the client when a file listed in the dump script fails to open (e.g., error 13 or 14).

The ctdump utility prints a message when the dump completes but some files could not be included in the dump (error 442).

The -m option of the ctdump utility shows minimal output (suppressing the % complete output).

Sample output:


# ctdump -u ADMIN -p ADMIN -s FAIRCOMS -t dump.scr -n -m

FairCom DB(tm) Version 11.3.0.7876(Build-160501) Dynamic Backup Utility

Copyright (C) 1992 - 2016 FairCom Corporation
ALL RIGHTS RESERVED.

Reading dump stream from server with buffer size of 100000


Error: Failed to back up file atomrd.idx: error code 13.

 Start dump. Estimated dump size:     94769152

Successfully backed up file atomrd.dat. File size: 65536
Successfully backed up file mark.dat. File size: 56868864
Successfully backed up file mark.idx. File size: 37814272
Successfully backed up file S0000000.FCS. File size: 128
Successfully backed up file S0000001.FCS. File size: 128
Successfully backed up file L0000529.FCS. File size: 2040

   End dump.    Actual dump size:     94779392

Dynamic dump completed, but some files could not be backed up.

Dynamic Dump completed, but some files could not be backed up (..\dump.scr). See Server Operation Log: CTSTATUS.FCS (442)

 


 

FairCom DB Files to Include for a Successful Backup Strategy

A FairCom DB SQL dictionary is composed of several files. You will need to back up all of these files if you want to be able to restore the entire SQL dictionary from your backup. By backing up the correct set of files, you will be able to do a full restore and have your SQL dictionary ready-to-go.

The following files need to be backed up if you want to be able to restore the entire SQL dictionary:

  • FAIRCOM.FCS
  • ctdbdict.fsd
  • *.dat in the ctreeSQL.dbs folder
  • *.idx in the ctreeSQL.dbs folder
  • ctreeSQL.fdd in ctreeSQL.dbs\SQL_SYS

The !FILES section of your dynamic dump script will look like this:


!FILES 
FAIRCOM.FCS
ctdbdict.fsd
ctreeSQL.dbs\*.dat
ctreeSQL.dbs\*.idx
ctreeSQL.dbs\SQL_SYS\ctreeSQL.fdd
!END

More generally, the following files are FairCom internal files that need to be included in backups to allow recovery to function without SKIP_MISSING_FILES YES (in the event these files are changed during the backup interval):

  • FAIRCOM.FCS
  • SYSLOG*.FCS
  • SEQUENCE*.FCS
  • DFRKSTATE*.FCS
  • ctdbdict.fsd
  • *.dbs\SQL_SYS\*.fdd
  • RSTPNT*.FCS
  • REPLSTATE*.FCS (created on the target server by the Replication Agent)

Testing the Backup

The following test should demonstrate that you have backed up everything you need:

  1. Use the dynamic dump utility, ctdump, to back up your files into SYSTEM.BAK.
    1. The !FILES section of your dynamic dump script should include the entries shown earlier.
  2. Shut down your FairCom Server and rename your C:\<faircom>\server\data folder to a new (unused) folder name, such as data.old:
    1. C:\<faircom>\server\data.old
  3. Create a new data folder and copy the following files to this location:
    1. ctrdmp.exe
    2. SYSTEM.BAK
    3. Your backup script (the text file that contains the !FILES section shown above)
  4. Run ctrdmp (see the FairCom Database Restore Guide) to restore your files in place.
  5. Now start your FairCom Server and connect using FairCom DB Explorer. You should be able to see your restored SQL tables.

 

Defer Backup I/O for Improved Performance

When a dynamic dump runs, the disk read and write operations of the backup process can slow the performance of other database operations. FairCom DB supports an option that allows an administrator to reduce the performance impact of the dynamic dump.

The FairCom DB configuration option:

  • DYNAMIC_DUMP_DEFER <milliseconds>

This option sets a time in milliseconds that the dynamic dump thread will sleep after each write of a 64KB block of data to the dump backup file.

An application developer can also use the c-tree ctSETCFG() API function to set the DYNAMIC_DUMP_DEFER value. For example, the following call specifies a 10-millisecond DYNAMIC_DUMP_DEFER time:

  • ctSETCFG( setcfgDYNAMIC_DUMP_DEFER, "10" );

The DYNAMIC_DUMP_DEFER value set by a call to ctSETCFG() takes effect immediately, so this API call can be used by administrators to adjust the speed of a running dynamic dump depending on the amount of other database activity.

Note: The maximum allowed DYNAMIC_DUMP_DEFER time is 5000 milliseconds, set at compile-time. If a value is specified that exceeds this limit, the DYNAMIC_DUMP_DEFER time is set to DYNAMIC_DUMP_DEFER_MAX.

The FairCom DB Administrator utility, ctadmn, was also updated to support the dump sleep time option to change this value at run time. The "Change Server Settings" menu is available from the main menu of the ctadmn utility.

Defer Interval

The DYNAMIC_DUMP_DEFER option causes the dynamic dump to pause for the specified number of milliseconds each time it writes 64 KB of data to the dynamic dump stream file. For large backups, even the smallest DYNAMIC_DUMP_DEFER value of 1 millisecond adds significant time to the dynamic dump. For example, a 100 GB dump is roughly 1,600,000 blocks, so a 1 ms defer adds roughly 1600 seconds.

An additional keyword, DYNAMIC_DUMP_DEFER_INTERVAL, specifies the number of 64 KB blocks that are written before the DYNAMIC_DUMP_DEFER sleep is performed. For example, DYNAMIC_DUMP_DEFER_INTERVAL 16 would cause the DYNAMIC_DUMP_DEFER sleep to occur after every 64 KB * 16 = 1 MB of data written to the dump stream file.

Note: If a value greater than 5000 is specified for DYNAMIC_DUMP_DEFER_INTERVAL, the value is set to 5000. If a value less than 1 is specified, the value is set to 1.

This option can be set by the ctSETCFG() API function:

  • ctSETCFG( setcfgDYNAMIC_DUMP_DEFER_INTERVAL, "16" );

A new menu option to set this value has been added to option 10 of the FairCom DB Server Administration (ctadmn) menu.
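The overhead of these settings is easy to estimate: the dump sleeps once per DYNAMIC_DUMP_DEFER_INTERVAL blocks of 64 KB written. A Python sketch of the arithmetic (illustration only, not FairCom code):

```python
def defer_overhead_seconds(dump_bytes, defer_ms, interval_blocks=1):
    """Estimate the extra time DYNAMIC_DUMP_DEFER adds: the dump sleeps
    defer_ms after every interval_blocks writes of 64 KB."""
    blocks = dump_bytes // (64 * 1024)
    sleeps = blocks // interval_blocks
    return sleeps * defer_ms / 1000.0

gb = 1024 ** 3
# 100 GB is 1,638,400 blocks, so a 1 ms defer adds about 1638 s:
assert round(defer_overhead_seconds(100 * gb, 1)) == 1638
# DYNAMIC_DUMP_DEFER_INTERVAL 16 cuts the number of sleeps by 16x:
assert round(defer_overhead_seconds(100 * gb, 1, 16)) == 102
```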

 


Back Up Non-ctree Files

Two alternative methods are available in the FairCom Server dynamic dump feature to allow ANY file to be backed up.

Specifying non-ctree Files

A dynamic dump script also supports listing specific files to be backed up as non-ctree files. If the !FILES list contains the !NONCTREEFILES keyword, all files following that keyword are treated as non c-tree files. Wildcard specifications are allowed. The !NONCTREEFILES keyword must appear after the !FILES keyword.

Non c-tree Files Dynamic Dump Script Example

!DUMP backup.dmp
!FILES
*.dat
*.idx
!NONCTREEFILES
*.log
*.txt
*.cfg
!END

Alternative Method

To include non c-tree files in a dynamic dump, use the dump keyword !COPY_NONCTREE. Any file included in the !FILES section of the FairCom Server dynamic dump script that receives error FUNK_ERR (13) or error READ_ERR (36) on c-tree open will be treated as a non c-tree file and copied directly to the dump stream base directory. More accurately, to the dump stream base directory plus any subdirectory included in the file’s name.

If the destination directory does not exist, the FairCom Server will attempt to create it. If this directory creation fails a FCPY_ERR (796) is reported.

Note: No check is made for wildcard specifications in the c-tree and non-ctree file sections matching the same filename. In that case, the file is included in the dump as a c-tree file and then also copied as a non-ctree file.

Non-ctree File Keywords

!NONCTREEFILES

!COPY_NONCTREE

Note: The !NONCTREEFILES script keyword does not require specifying the !COPY_NONCTREE option in the script.

 

Back Up Files Without Transaction Control

It is possible to back up data files that are not under transaction control while the FairCom Server remains running. Of course, the safest way to perform a complete backup of data and index files while the FairCom Server remains running is to ensure that all your files are under transaction control. This way you are sure that all data and index files are completely synchronized, and updates to the files can continue during a dynamic dump.

Some developers choose not to implement transaction control for one reason or another. In some cases, developers migrating from the FairCom DB Standalone Multi-user model, FPUTFGET, to the FairCom Server, choose to use the FairCom Server in an FPUTFGET-like manner. An FPUTFGET-like server is defined with the following FairCom Server keywords:

COMPATIBILITY  FORCE_WRITETHRU

COMPATIBILITY  WTHRU_UPDFLG

Although it is possible to define a non-transaction controlled file within a dynamic dump backup script, there is no protection against updates to this file. In other words, it is possible for the file to be updated during the dynamic dump. Updating a file controlled by transaction processing is okay, because the dump restore process can use the transaction logs to restore to a consistent state. However, if files NOT under transaction control are updated while they are being backed up they cannot be backed up in a consistent state.

The keyword !PROTECT, without an argument, when added to a dynamic dump script file causes the non-transaction files to be dumped cleanly by suspending any updates while each file is dumped. At this point, the associated index files for a data file are not guaranteed to be consistent with the data file because the files are not dumped at the same time. Updates are only suspended while the data file is being backed up.

This technique ensures the data file is backed up in a known state. The restore process for a non-transaction control file MUST be complemented with an index rebuild. Because protection is for data files only, under most situations, the indexes are not worth dumping since they must be rebuilt.

Note: !PROTECT suspends updates at the ISAM level only. The keyword !PROTECT_LOW also suspends low-level updates in addition to the ISAM level. FairCom suggests using !PROTECT_LOW when using low-level function calls.

 

Automatically Restore Backup for Ready-to-Use Files

The FairCom Server dynamic dump script file keyword !IMMEDIATE_RESTORE instructs the dynamic dump to be immediately followed by a dump restore. The idea is to make transaction-consistent files available immediately in native file system form, rather than as embedded files in the dynamic dump stream file.

A key issue is where the dynamic dump restore utility, ctrdmp, can run: it cannot run in the current server directory. If it does, error TCOL_ERR (537) results, indicating that ctrdmp conflicted with an existing server operation. The natural solution is to run ctrdmp in the directory that receives the dump stream file, called the dump stream base directory. In essence, this requires that !DUMP <streamFileSpec> use a file name including the path where the dump restore should run. For example, a dynamic dump script entry of the form

!DUMP   I:\dump\mydumpstream

will cause the dump stream file mydumpstream to be created in the dump stream base directory I:\dump. If !IMMEDIATE_RESTORE is part of the dump script, then the automatically launched ctrdmp is also executed in the I:\dump directory.

Upon restoration of files, the enhanced dump restore will also automatically create any required directory hierarchies for previously backed-up files. If an immediate restore operation fails, the server sets the error code for the dynamic dump operation to DRST_ERR (797, immediate dump restore failed).
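Combining the pieces above, a script that dumps to I:\dump and immediately restores there might look like this sketch (the data file name is illustrative):

!DUMP   I:\dump\mydumpstream
!IMMEDIATE_RESTORE
!FILES
    mydata.dat
!END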

 

Backup as Multiple Extent Files

The Dynamic Dump backup feature defaults to breaking-up the backup file (stream file) into multiple physical files (segments). This gets around individual file size limits imposed by the host OS (e.g., 2GB for a typical Unix system). Each backup file segment defaults to 1GB. There is no limit on the number of backup files (segments) supported.

Use the !EXT_SIZE keyword to change the segment size at runtime (up to 2000MB) by setting the argument of !EXT_SIZE to the desired number of bytes. Set the argument to NO to disable this feature and limit the dump to one file up to the OS maximum file size.
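For example, the following script entry (the value is illustrative) limits each backup segment to roughly 500 MB (524,288,000 bytes):

!EXT_SIZE  524288000

while this form disables segmentation, limiting the dump to one file up to the OS maximum file size:

!EXT_SIZE  NO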

When a backup stream file is broken into segments, they are named as follows: original.001, original.002, etc., unless the original dump file has a name of the form name.nnn, where nnn represents digits. For example, if the original dump file is named dump.str, the first additional segment, created after dump.str reaches the extent size, will be dump.str.001. However, if the original dump file is named dump.111, then the first extent will be dump.112.

On some systems, the dynamic dump extent names formed from the original dump stream file name by adding .001, .002, etc. are not legal. Therefore, the extent name is first checked internally. If it is not legal, the original dump stream file name is modified to produce a safe name in one of the following ways:

  • Replace name extension, if any, in original with numeric name extensions (.001, .002, etc.).
  • If the original name had no name extension, truncate the name to 8 bytes, and add numeric name extensions.
  • If the original name had no name extensions and is not more than 8 bytes, use the name FCSDDEXT.001 for the first dump stream segment, incrementing the numeric name extension as needed.

 

Back Up as Segmented Files

The FairCom Server and the ctdump and ctrdmp utilities support dynamic dumping of segmented files and the creation of segmented (stream) dump files.

Segmented dump files are different from the !EXT_SIZE feature that automatically breaks the dump file into 1GB ‘extents’. Dumping to segmented files allows you to take advantage of huge file support and to specify the file size and location for each dump file segment.

  • To dump segmented data or index files, simply list the host (main) file in the !FILES list and the segments will be managed automatically.
  • To cause the output dump file produced by the dynamic dump itself to be segmented, use these script entries:

!SEGMENT        <size of host dump file in MB>

<dump seg name> <size of segment in MB>

...

<dump seg name> <size of segment in MB>

!ENDSEGMENT

The host dump file is the file specified in the usual !DUMP entry. Only the last segment in the !SEGMENT / !ENDSEGMENT list can have a zero size specified, which means unlimited size.

For example, assume bigdata.dat is a segmented file with segment names bigdata.sg1, bigdata.sg2, and bigdata.sg3, and the index file bigdata.idx is not segmented. To dump these files into a segmented dump file, use the script:

!DUMP   d:\bigdump.hst

!SEGMENT  50

    e:\bigdump.sg1 75

    f:\bigdump.sg2 0

!ENDSEGMENT

!FILES

    bigdata.dat

    bigdata.idx

!END

The host dump file is up to 50 MB on volume D:, the first dump file segment is up to 75 MB on volume E:, and the last dump file segment is as large as necessary on volume F:.

 

Mirrored File Backups

Mirrored files are supported during dynamic dump and dump recovery as follows:

  1. If a mirrored file should be opened for use by an application during a dynamic dump, the dump script should contain the “mirrored” name, i.e., the name with the vertical bar (‘|’). For example, sales.dat|msales.dat.
  2. If this is not done and the dynamic dump opens the primary file by itself (because it is not in use), a client that then opens the primary|mirror combination gets MNOT_ERR (551, file already opened without mirror). To avoid blocking users from gaining file access, specify primary files with their mirrors in dynamic dump scripts so they are opened together.
  3. The dump recovery program recreates both the primary and mirror files. It reproduces the primary file, and copies it over the mirror file.
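Following point 1 above, a dump script for a mirrored file lists the combined name; a minimal fragment using the example names might be:

!FILES
    sales.dat|msales.dat
!END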

 

Backup Progress Messages Displayed in Function Monitor

During backup testing, watching the progress of a running dynamic dump can be beneficial. Adding the keyword DIAGNOSTICS DYNDUMP_LOG writes low-level progress messages to the CTSTATUS.FCS file. If the FUNCTION_MONITOR YES keyword is also active, dynamic dump progress information will also be written to the function monitor.

 

 

Mask Routine Backup Messages in CTSTATUS.FCS

Normally, a dynamic dump writes the names of all the files it backs up to the FairCom Server status log, CTSTATUS.FCS. The FairCom DB configuration option:

CTSTATUS_MASK DYNAMIC_DUMP_FILES 

can be used to suppress the logging of the names of the files backed up by a dynamic dump operation. This option reduces the amount of information logged in CTSTATUS.FCS for easier analysis by an administrator.

Run Time Configuration

The ctSETCFG() function can be used to dynamically turn this option on or off while FairCom DB is running.

Examples

To turn the option on:

    ctSETCFG("setcfgCTSTATUS_MASK", "DYNAMIC_DUMP_FILES");

To turn the option off:

    ctSETCFG("setcfgCTSTATUS_MASK", "~DYNAMIC_DUMP_FILES");

 

Killing a Running Backup

To kill a dynamic dump, execute ctadmn and list the active clients. The dynamic dump appears with its COMM PROTOCOL set to DYNAMIC DUMP. Use the kill clients option to terminate it. This allows a backup procedure to be canceled (killed) after it has been submitted to the FairCom Server.

 

FairCom Database Restore Guide

 

   
Administration Guide
Database Administrator's Guide
Audience: Developers
Subject: Installing, configuring, and maintaining the FairCom DB Database
Copyright: © Copyright 2025, FairCom Corporation. All rights reserved. For full information, see the FairCom Copyright Notice.

 

In the event of a catastrophic system failure that renders the transaction logs or the actual data files unreadable, it is necessary to use a dynamic dump or a complete backup to restore data to a consistent, well-defined state. This is known as a dynamic dump recovery.

Note: If you make your own system backups when the FairCom Server is not in operation, and include the file FAIRCOM.FCS, you can restore from that backup in the event of a catastrophic failure.

 

Running the Recovery Utility

The ctrdmp utility (see ctrdmp - Backup Restore or System Rollback) provides dynamic dump recovery. This utility is itself a FairCom Server (a bound server), so there are important points to observe when running it:

  1. Be sure the particular FairCom Server undergoing a recovery is not running when ctrdmp starts. Two FairCom Servers operating simultaneously interfere with each other.
  2. Because it is a FairCom Server, ctrdmp generates temporary versions of all system files associated with a FairCom Server (i.e., files with the extension “.FCS,” as described above). Therefore, the dynamic dump file and the ctrdmp utility should be moved to a directory that is not in (or under) the FairCom Server's working directory. This is so the system files created by the recovery program will not overwrite working FairCom Server files. The temporary files are automatically deleted when recovery completes successfully unless !FORWARD_ROLL is in the recovery script. In that case, the S*.FCS files are renamed to S*.FCA and kept in the directory.

After taking these preliminary steps, do the following to recover a dynamic dump:

  1. Start ctrdmp the same way as any normal program in the environment.
  2. When prompted, enter the name of the dynamic dump script file to be used for the recovery.

Note: The same script file used to perform the dump can be used to restore the dump. If a forward roll is planned, include the !FORWARD_ROLL keyword.

The dump recovery begins automatically and produces a series of messages reporting the progress of the recovery:

  • Each recovered, i.e., recreated, file is listed as it is completed.
  • After all specified files have been recovered, a message indicates that the recovery log, i.e., the transaction log, is being checked and that the recovered files were restored to their state as of a given time, namely the time the dynamic dump started.
  • A final message indicates the dump recovery process finished successfully.

Note: If the dynamic dump data was encrypted with Advanced Encryption (for example, AES), then the ctsrvr.pvf password file must be present with the master key information to decrypt and play back this data.


 

 

ctrdmp - Backup Restore or System Rollback

Used to restore backups created with ctdump.

Operational Model:

Standalone

Usage:

ctrdmp [ dumpscript ] [ -x ]

 

  • dumpscript - The name of a valid dynamic dump restore script
  • -x - Read dump stream from stdin

A successful ctrdmp completion always writes the following message to CTSTATUS.FCS:

DR: Successful Dump Restore Termination

A failed ctrdmp writes the following message to CTSTATUS.FCS when ctrdmp terminates normally:

DR: Dump Restore Error Termination...: <cterr>

where <cterr> is the error code.

When the -x option is specified, the !DUMP keyword in the dump script is ignored and the dump stream is read from standard input.

This might be combined with the ctdump output redirection to pipeline a backup and restore operation:

ctdump -s FAIRCOMS -u admin -p ADMIN -t test6.dmp -c -x | ctrdmp test6.dmp -x

Note: If an error occurs during the restore phase, no backup exists on disk.

If encrypted files are being restored and input redirection is used, ctrdmp is not able to prompt for the master password during the recovery phase of the restore. In this case, an alternate means of providing the master password is required, such as using the CTREE_MASTER_KEY_FILE environment variable.

If for some reason ctrdmp terminates prematurely (for example, a fatal error causes ctrdmp to terminate abnormally), the “Dump Restore Error Termination...” message might not be written to CTSTATUS.FCS. In that case, ctrdmp might have written error messages to standard output or to CTSTATUS.FCS before terminating that help explain why it terminated prematurely.

Note: A 32-bit ctrdmp could fail with error 75 if run on transaction logs created by a 64-bit FairCom Server, which might support more than 2048 connections.

The ctfdmp, ctldmp, and ctrdmp utilities display the FairCom DB version used to compile them when they are run.

Dump and Restore Version Compatibility

The ctrdmp utility (see ctrdmp - Backup Restore or System Rollback) is used to restore a dynamic dump (see the FairCom Database Restore Guide), and the ctfdmp utility can be used to roll forward (see the FairCom Database Forward Roll Guide). Occasionally an update to the FairCom Database Engine may cause an incompatibility between versions. For this reason, you must use the ctrdmp from the same release that created the dump, so it is important to save a copy of the ctrdmp utility compatible with each dump file.

Restore Recovery Options

The ctrdmp utility now supports the RECOVER_DETAILS and RECOVER_MEMLOG transaction recovery options (the same options that FairCom Server supports).

If you specify !RECOVER_DETAILS YES in your dump restore script, ctrdmp will log progress messages to the file CTSTATUS.FCS as it performs its automatic recovery.

Environment Variable for Advanced Encryption Password

If this utility has advanced encryption enabled, it can read an encrypted password file instead of being prompted to enter the master password. To enable this, set the environment variable CTREE_MASTER_KEY_FILE to the name of the encrypted master password file.
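For example, on a Unix-style shell, the environment variable might be set as follows before running the restore (the file path is illustrative):

export CTREE_MASTER_KEY_FILE=/secure/master.pvf
ctrdmp restore.scr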

ctrdmp supports options to help analyze recovery behavior. The following options behave like their corresponding server configuration options:

  • !DIAGNOSTICS TRAN_RECOVERY logs detailed recovery steps to RECOVERY.FCS.
  • !RECOVER_DETAILS YES logs additional recovery progress messages to CTSTATUS.FCS.


 

Restore Script Options

A recovery script, similar to the dynamic dump script, is used with the recovery utility. In general, this is the same script used to make the dynamic dump. (Hint! Back up this script file with your c-tree files so it's readily available!)

The following keywords, with arguments in the same format as dump script options, control the recovery process without affecting the dump itself.


 

 

!CLNIDXX

Each file restored by ctrdmp has its index nodes cleaned of residual key-level locks and their associated transaction values. This permits the file's transaction high-water mark to be reset to zero.

After successful restore, the status log will show a message similar to:

User# 00001    DR: CLNIDXX called...: 19

In the example above, 19 indicates the number of files cleaned.

 

!CONVERT_PATHSEP

Convert path separators to the operating system's native path separator. Used when restoring a dynamic dump that was created on an operating system that uses a different path separator.

For details, see ctrdmp options to convert path separators to operating system's native path separator.

 

!DELETE

Default: !SKIP

The opposite of !SKIP. It causes an existing file to be deleted and replaced by the recovered file.

 

!DIAGNOSTICS TRAN_RECOVERY

In V11.5 and later, the Dynamic Dump restore utility supports enabling diagnostic logging of its recovery process. The logging is the same transaction recovery logging that FairCom Server supports and is written to the file RECOVERY.FCS. This logging can be useful for analyzing the dynamic dump restore recovery process. Note that the logging can slow down the recovery time.

To enable this logging during dynamic dump restore, add the option !DIAGNOSTICS TRAN_RECOVERY to your dump restore script.

 

!#FCB <number of files>

Default: 30000 files

When restoring a large number of files from a dynamic dump backup with ctrdmp, the dump restore can fail with error FNUM_ERR (22, file number out of range). If you encounter this error with a very large number of restored files, use the !#FCB option in your ctrdmp script file to increase this value.

Prior to V9, the default was 100, which could lead to error 22 in situations where this number was too low for ctrdmp.

 

!FORWARD_ROLL

Default: No forward roll is performed

If planning to do a forward roll after a dump recovery, this keyword must be in the recovery script. The keyword is ignored during dynamic dump (backup). When present during dump recovery, this keyword causes a transaction start file to be restored with the archive file extension (i.e., S*.FCA). Be sure to rename the file from S*.FCA to S*.FCS before starting the forward roll. See Running the Forward Dump Utility for System Recovery for more information on rolling forward.

 

!PAGE_SIZE <bytes per buffer>

Default: 8192 bytes

The number of bytes per buffer page, rounded down to a multiple of 128.

Note: Required only if the FairCom DB configuration file changes the default page size.

 

!RECOVER_DETAILS

!RECOVER_DETAILS YES 

If you specify !RECOVER_DETAILS YES in your dump restore script, ctrdmp will log progress messages to the file CTSTATUS.FCS as it performs its automatic recovery.

 

!RECOVER_MEMLOG

!RECOVER_MEMLOG N 

where N is the number of transaction logs to be read

In situations where Dynamic Dump restore involves many transaction logs, adding this to the Dynamic Dump Script and specifying a number of transaction logs can speed up recovery. ctrdmp will read N logs into memory and process them from there.

 

!REDIRECT <old path> <new path>

Default: no redirect

Redirect files dumped from the old path into the new path. See Define Alternative Restore Destinations (Restore To Redirected Destination) for more information.

It is often necessary to redirect all files in the backup to a new location. When <entirepath> is used as the first path name in the !REDIRECT option, all directory names are redirected to the specified directory name. The new directory name can be a full or a relative path.

Example

Using this option in a dynamic dump restore script will cause all files to be restored to the directory C:myrestoredir:

!REDIRECT <entirepath> C:myrestoredir

 

!REDIRECT_IFIL

Automatically modifies the IFIL resource of redirected files on the server, applying the filename redirection rules to the data and index file paths stored in that resource. Without this, the paths contained in the IFIL resource would be incorrect after the file was redirected to a new location during automatic recovery. To support copying c-tree files from one directory location to another, any filename paths in a c-tree data file's IFIL resource must be updated.

For more information, see the REDIRECT_IFIL configuration keyword.

 

!SKIP

Default: !SKIP

Skip recovery for any file listed under the !FILES keyword if the file already exists.

Note: Be aware of the differences between using !SKIP with a recovery and with a rollback (see System Rollback in the FairCom Database Rollback Guide), where it must be used with caution.

Note: Only the files specified by the !FILES keyword will be restored. It is not necessary to restore all files contained in the dump.

 

Restore To Redirected Destination

By default, restoring a backup returns files to their original directory, because file paths are included as part of the filenames in the transaction logs.

To change this default behavior, it is possible to “redirect” the destination of files during a dynamic dump restore.

The dynamic dump script used during restore may contain one or more of the following redirection directives:

!REDIRECT  <old path> <new path>

Note: To specify an empty string for one of the !REDIRECT arguments, use a pair of double quotes ("").

The !REDIRECT keyword substitutes <new path> for <old path> wherever it is found in the names of restored files. !REDIRECT does not take a literal destination path; rather, it takes a find-and-replace pattern that modifies the paths from the !FILES lines to produce the paths the files are restored to.

This is often necessary when the restored directory is no longer the same as the original file environment. Consider the case of moving files from one server to another with a slightly different directory structure, notably the case in Windows with a different drive identifier, for example C: to D:. It is also useful for developers to obtain a live “snapshot” of a customer’s database and restore it to an alternative destination for testing, debugging or other purposes.

Keep in mind that FairCom DB SQL includes relative paths as part of the filename as referenced in the transaction logs, making this necessary when moving data between FairCom DB SQL database directories.

If you find that files are missing after a restore operation, retry the restore with a !REDIRECT directive pointing to a known good directory location.

Example 1

The following directives cause files that were backed up using absolute names to be restored into the directory temp (relative to the current directory during restore) and files that were backed up from the directory local (relative to the server working directory) to be restored into the absolute directory \temp\local:

!REDIRECT        \        temp\

!REDIRECT   local\        \temp\local\

The following will add temp\ to the path name of all restored files:

!REDIRECT ""     temp\

Example 2

The following will strip d: from any restored files starting with d: (or D:):

!REDIRECT d:     ""

Example 3

The following redirects files from one volume mount to another.

Given original files defined as this:
!FILES         /mnt/data01/*.dat

!FILES         /mnt/data01/*.idx

 

!REDIRECT      /mnt/data01/    /mnt/data02/

Note: The !REDIRECT keyword only affects the restore operation and is ignored when the script is used for the backup process.

 

Restoring Transaction Dependent Files

When transaction logs are used to recover, roll back (ctrdmp), or roll forward (ctfdmp), FairCom DB scans the transaction logs to determine active transactions and to open the files that are updated. When a file cannot be opened, execution may terminate, typically with error FNOP_ERR (12). The SKIP_MISSING_FILES option allows the assessment of active transactions and updated files to complete, and the recovery/rollback/roll-forward then finishes, possibly skipping operations on files that could not be opened.

However, this is not always safe. Some of the skipped files may have been created, renamed, or deleted, and if they are transaction-dependent (TRANDEP) files, the transaction logs contain sufficient information to permit them to be properly updated. Adding SKIP_MISSING_FILES means that non-TRANDEP files may in fact be skipped even though they should have been present.

To avoid requiring SKIP_MISSING_FILES when TRANDEP files are in use, a default behavior (V9.1.1 and later) effectively treats TRANDEP files as though SKIP_MISSING_FILES is turned on. For files without TRANDEP activity, however, recovery, rollback, or roll-forward may still terminate if unexpected missing files are encountered.

This behavior can be turned off by adding the COMPATIBILITY NO_AUTO_SKIP configuration keyword to the FairCom DB configuration file, ctsrvr.cfg.

Note: An unexpected FNOP_ERR can still occur for a TRANDEP file; however, this change should greatly reduce the number of unexpected FNOP_ERR errors.

 

FairCom Database Rollback Guide

 

   

 

System rollback restores the system to its status as of a specified point in time. For example, if company payroll processing was started at 1:00 PM and something went awry later in the afternoon, a system rollback can reset the system to the way it was at 1:00 PM, so processing could start again. If other applications using transaction processing files were running while the payroll processing was under way, these other files would also be rolled back to their 1:00 PM state. The Administrator should be aware of all files and related data that will be affected before starting a rollback to avoid interfering with multiple, unrelated systems sharing a FairCom DB Server.

A rollback, like recovery, involves a backup/restore script with different options to control how the rollback is to be done.

 

Running the Rollback Utility

The ctrdmp utility performs a system rollback as well as dynamic dump recovery. ctrdmp checks the first keyword in the script file. If the first line is !ROLLBACK the script is used for a rollback. If it isn’t, the script is considered a dynamic dump script and used for a dump or a recovery.

Note: As in dump recovery, be sure the particular FairCom Server undergoing the rollback is not running when ctrdmp starts, since ctrdmp is a FairCom Server and the two FairCom DB Servers operating simultaneously interfere with each other. Typically, error TCOL_ERR (537, transaction log collision) is observed under these conditions.

Perform a rollback as follows:

  1. Collect ctrdmp, the transaction log files covering the period from the target time to present, and the current log files into a working directory.
  2. Start ctrdmp the same way as any program in the environment.
  3. When prompted, enter the name of the rollback script file to be used.

The rollback begins automatically and produces a series of messages reporting recovery progress. A message returns when the utility completes a successful rollback.

A successful ctrdmp completion outputs the following message to CTSTATUS.FCS:

DR: Successful Dump Restore Termination

A failed ctrdmp outputs the following to CTSTATUS.FCS:

DR: Dump Restore Error Termination...: <cterr>

where <cterr> is a c-tree error code number.

If for some reason ctrdmp terminates prematurely (for example, a fatal error causes ctrdmp to terminate abnormally), the “Dump Restore Error Termination...” message may not be written to CTSTATUS.FCS. In that case, ctrdmp may have written error messages to standard output or to CTSTATUS.FCS before terminating that help explain the premature termination.

 

Script File for Rollback

The format of the Rollback script is the same as a dynamic dump script. Accepted rollback options are as follows:

 

 

In This Section

!COMMENT

!DATE <mm/dd/yyyy> (rollback)

!#FCB <number of files>

!FILES

!ROLLBACK

!SKIP

!TIME <hh:mm:ss> (rollback)


 

 

!COMMENT

Default: Off

Informs the FairCom Server that the remainder of the script file is for documentation purposes only and is not to be evaluated. Do not place keywords after this keyword.

 

!DATE <mm/dd/yyyy> (rollback)

Specifies the log date to which you want to roll back transactions.

The !DATE keyword is optional. It defaults to the current date.

 

!#FCB <number of files>

Default: 30000 files

When restoring a large number of files from a dynamic dump backup with ctrdmp, the dump restore can fail with error FNUM_ERR (22, file number out of range). If you encounter this error with a very large number of restored files, use the !#FCB option in your ctrdmp script file to increase this value.

Prior to V9, the default was 100, which could lead to error 22 in situations where this number was too low for ctrdmp.

 

!FILES

The !FILES keyword is followed by names of files to include in the dynamic dump or rollback. This must be the next to last keyword in the script file and it takes no arguments.

Filenames begin on the line following the !FILES keyword, one line for each file; file names should not be listed on the same line as the !FILES keyword. The !END keyword, on a line by itself, terminates the list of files.

We strongly suggest that FAIRCOM.FCS be included in your list.

Members of a superfile cannot be individually “dumped.” The entire superfile must be dumped; that is, the name of the host superfile, not a superfile member, is placed in the list of files.

The * and ? wildcards are supported.

See !RECURSE for other options.

NOTE: Dynamic dump supports cloning a data file. This means that an empty copy of the data file is backed up. To use this feature, precede the file name with the text <clone>. Here's an example showing myfile.dat being cloned and all files matching the wildcard cust*.dat being cloned (thedata.dat is a full backup):

 

!FILES

<clone>myfile.dat

<clone>cust*.dat

thedata.dat

!END


 

!ROLLBACK

!ROLLBACK must be the first entry in the script. It takes no argument. If !ROLLBACK is not the first entry, the script is interpreted as a dynamic dump script.

 

!SKIP

Default: !SKIP

Skip recovery for any file listed under the !FILES keyword if the file already exists.

Note: Be aware of the differences between using !SKIP with a recovery and with a rollback (see System Rollback in the FairCom Database Rollback Guide), where it must be used with caution.

Note: Only the files specified by the !FILES keyword will be restored. It is not necessary to restore all files contained in the dump.

 

!TIME <hh:mm:ss> (rollback)

This keyword specifies the time to which you want to roll back the transaction state.

The !TIME keyword is required for rollback.
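Putting the keywords together, a minimal rollback script might look like this sketch, matching the 1:00 PM payroll example above (the date and file names are illustrative):

!ROLLBACK
!DATE 10/15/2024
!TIME 13:00:00
!FILES
    FAIRCOM.FCS
    payroll.dat
!END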

 

FairCom Database Forward Roll Guide

The forward dump utility, ctfdmp, can be used to recover from a catastrophic failure following the successful execution of a dynamic dump or from a full backup made after a safe, clean, controlled shutdown of the system.

Note: If you perform a rebuild or a compact on a data file that has transaction logging enabled, you will not be able to roll that file forward past the time of the rebuild/compact operation until a new backup has been completed. The act of compacting or rebuilding a file causes changes to the file that are not stored in the transaction logs. Attempting to roll forward will fail with “FWD: Roll-Forward Error Termination...12” unless the !SKIP option is enabled, and the forward roll operation will then proceed, excluding the affected files, which will be listed in CTSTATUS.FCS.

Preparing to Use the Forward Dump Utility

To prepare for using the forward dump utility, ctfdmp, follow these guidelines:

  1. Set the KEEP_LOGS configuration option to retain all log files. This setting causes log files no longer required for automatic recovery to be renamed instead of deleted. The extensions of log files are changed from .FCS to .FCA, which changes the transaction log files from “active” to “inactive”. These “old” log files may be needed to roll forward.
  2. Make periodic, complete backups using a dynamic dump or offline backup. The following files must be included in a complete backup:
    • All data and index files.
    • The file FAIRCOM.FCS.
    • The S*.FCS files (automatically included in dynamic dump).
  3. Following a safe, complete backup, save all transaction log files created until the next complete backup. Active transaction log files have names of the form L<log number>.FCS, with the number incremented by 1 for each new active transaction log. As specified in the KEEP_LOGS configuration value, when the FairCom Server creates a new active log it renames the active log being replaced from L<log number>.FCS to L<log number>.FCA and saves it as an inactive transaction log file.

Normally, when archiving all the logs, you would reestablish your forward-roll starting point on a regular basis by means of a new dynamic dump. This could be done weekly, keeping the previous week's dump and accumulated logs as a further backup (a "grandfather" approach). Archiving is easy to automate: the server automatically renames inactive logs to *.FCA, and once renamed, the server will not access them again, so they can be archived without causing problems for the server.

Running the Forward Dump Utility for System Recovery

If the system has a catastrophic failure and preparations have been made as recommended above, the data can be recovered as follows:

  1. Restore the contents of the most recent backup, which can be a dynamic dump or a standard backup, provided it includes the files listed in step 2 of the preparation guidelines above.

Note: If the restore is from a dynamic dump, be sure to include the !FORWARD_ROLL keyword in the dump recovery script. This keyword causes creation of a transaction start file for the recovered logs. The transaction start file will be named S*.FCA. After the restore is complete, rename S*.FCA to S*.FCS.

  2. Load all transaction log files saved between the time of that backup and the time of the catastrophic failure, and rename all inactive transaction log files in this group (i.e., log files with the extension .FCA) to give them the extension of an active transaction log file (i.e., extension .FCS).

The following files should be present: the S*.FCS file created by ctrdmp, the data and index files restored from the dynamic dump, and all L*.FCS and L*.FCA (renamed to L*.FCS) files that have been archived in the default directory.

  3. Start the forward dump utility, ctfdmp, as you would any other program in the environment. See below for command-line arguments and other considerations. The ctfdmp utility can be used only when the FairCom Server is stopped, unless you follow the guidelines listed later in this section.

The forward dump will proceed without any further instructions.
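The renaming of inactive logs back to the active extension can be done with a short shell loop (the directory path is illustrative):

```shell
#!/bin/sh
# Give archived inactive logs (*.FCA) the active extension (*.FCS)
# so ctfdmp will read them. DIR is an illustrative path.
DIR=./recovery_dir
for f in "$DIR"/L*.FCA; do
  [ -e "$f" ] || continue
  mv "$f" "${f%.FCA}.FCS"
done
```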

Note: Only transaction-processed files will be updated beyond the state they held when the backup was made.

Command-Line Arguments

ctfdmp accepts the command-line arguments shown below. The first two arguments are needed only if the application uses more than the default number of files (#FCB) or a PAGE_SIZE larger than the default; if either is used, both must be specified, as illustrated below. !SKIP is optional; it prevents an error termination when a file required during the forward roll is not accessible. Exercise extreme care with !SKIP, since the forward roll has no way of ensuring the integrity of data for files that are skipped.

CTFDMP [!#FCB <number of files>]

       [!PAGE_SIZE <bytes per buffer page>]

       [!SKIP]

Using ctfdmp while FairCom Server is Running

The ctfdmp utility should be used only when the FairCom Server is stopped. It can be used while the FairCom Server is running only if:

  • the utility is run in a directory other than the directory in which the server stores its transaction logs

and

  • the files being rolled forward are not in use by the server.

Note: If the dynamic dump data was encrypted with Advanced Encryption (for example, AES), then the ctsrvr.pvf password file must be present with the master key information to decrypt and play back this data.

Directories

This utility does not read a script file or the ctsrvr.cfg file.

If LOCAL_DIRECTORY is used in the ctsrvr.cfg file, the supplied path is not part of the file names stored in the transaction logs, so ctfdmp expects to find the data and index files in the process working directory. For example, references to the file “foo.dat” in the transaction logs will not contain the relative path “data”, so all the data, index, and *.FCS files would need to be in the same directory.


Forward Roll Path Redirection

While restoring from a backup with the forward roll utility, ctfdmp supports applying file name redirection rules to the file names that appear in the transaction logs. To use this feature, run ctfdmp with the !REDIRECT option, specifying the name of a text file that contains the redirection rules. For example, the following command indicates that the file redir.txt contains redirection rules:

ctfdmp !REDIRECT redir.txt

Each line in the file contains the portion of the file name to replace and its replacement. Place double quotation marks around a string if it contains spaces. A line that begins with a semicolon is ignored.

Examples

To replace an empty path with output\:

;Replace empty path with output\

"" output\

To replace Program Files(x86) with Program Files:

"Program Files(x86)" "Program Files"

To replace production\ with test\:

production\ test\
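The rule format above can be illustrated with a small Python sketch that parses a rules file and applies the first matching rule to a file name. Treating an empty find-string as a prefix for names with no path is an assumption based on the first example above, not verified ctfdmp behavior:

```python
import re

def _tokens(line):
    """Split a rule line into tokens; double quotes may wrap strings
    containing spaces (or the empty string)."""
    return [m.group(1) if m.group(1) is not None else m.group(2)
            for m in re.finditer(r'"([^"]*)"|(\S+)', line)]

def parse_rules(text):
    """Return (find, replace) pairs; lines starting with ';' are comments."""
    rules = []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith(";"):
            continue
        find, replace = _tokens(line)[:2]
        rules.append((find, replace))
    return rules

def redirect(name, rules):
    """Apply the first matching rule to a transaction-log file name."""
    for find, replace in rules:
        if find == "":
            # Assumed: an empty find-string prefixes names with no path.
            if "\\" not in name and "/" not in name:
                return replace + name
        elif find in name:
            return name.replace(find, replace, 1)
    return name
```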

 

Transaction Log Dump

A transaction log dump is not something a Server Administrator typically needs to use, but we explain it here for completeness. Developers most often use this functionality as an aid to design, code, and debug an application being developed for use with a FairCom Server.

ctldmp is a utility providing a partial dump of transaction log files. It attempts to create an ASCII log from the records in the transaction log and displays it on the screen, converting only the first 39 bytes of each record.

The ctfdmp, ctldmp, and ctrdmp utilities display the FairCom DB version used to compile them when they are run.

 

Options for Transaction Log Dump

The keywords for defining a transaction log dump use the same format as the dynamic dump script file, but they are not placed in a separate file or script. Instead, they are entered on the command line along with the name of the program when starting it.

The keywords and arguments for ctldmp, the transaction log dump utility, are:

DATE <mm/dd/yyyy>

Begin dumping transactions as of the date specified. If no date is specified, begin dumping transactions from the beginning of the log file.

LOG <number>

Dump transactions beginning with the specified log. If no log number is specified, dump all log files meeting all other specifications.

OFF

Value of the position entry (offset) in the log dump that must be matched for the transaction to be listed. It is the ‘P’ field that follows the transaction number in the dump listing.

POS

Byte position in log to start the dump.

TIME <hh:mm:ss>

Begin dumping transactions as of the time specified. If a date is also specified, the date and time are used together; if not, the current date is the default.

TRAN

Transaction number which must be matched in order for the transaction to be listed.

TYPE

Transaction type, which must be matched for the transaction to be listed. The following code numbers correspond to the specified transaction type:

TYPE Value Explanation
02 Add key value.
04 Delete key value.
07 Begin transaction.
08 Commit transaction.
09 Abort transaction.
12 New record image.
14 Old record image.
15 Difference of old/new record images.
26 Server checkpoint.
27 Open file.
28 Create file.
29 Delete file.
30 Close file.
31 Client login.
32 Client logoff.
34 End of log segment.
40 Abandon transaction.

CHKPNT yes

Outputs detailed information for each checkpoint encountered during a dump. For example, with the following command line, the log dump begins with log file L0000100.FCS, dumps only checkpoints, and lists detailed information about each checkpoint. Transaction types are defined in ctopt2.h and in the table above.

ctldmp log 100 type 26 chkpnt yes
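When post-processing ctldmp output in a script, the type codes from the table above can be mapped to readable names. A minimal Python sketch (the helper name is ours, not a FairCom API):

```python
# Transaction type codes, transcribed from the table above.
TRAN_TYPES = {
    2: "Add key value", 4: "Delete key value",
    7: "Begin transaction", 8: "Commit transaction", 9: "Abort transaction",
    12: "New record image", 14: "Old record image",
    15: "Difference of old/new record images",
    26: "Server checkpoint", 27: "Open file", 28: "Create file",
    29: "Delete file", 30: "Close file",
    31: "Client login", 32: "Client logoff",
    34: "End of log segment", 40: "Abandon transaction",
}

def tran_type(code):
    """Name for a T<nn> field from a ctldmp listing (e.g. T26)."""
    return TRAN_TYPES.get(code, "Unknown")
```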

 

Running a Transaction Log Dump

Note: Like ctrdmp, ctldmp is itself a FairCom Server; therefore, the particular FairCom Server that generated the transaction logs being dumped should not be running while ctldmp runs. Error TCOL_ERR (537, transaction log collision) is typically observed under these conditions.

Running a transaction log dump is a one-step process: start ctldmp as you would any program in the environment, followed by up to three keyword/argument pairs specifying date, time, and log number. ctldmp runs automatically, without prompting for any information, and informs you when the transaction log dump is complete.

 

ctldmp Option to Create Transaction Start Files from Checkpoints in Transaction Log Files

The ctldmp utility allows you to create transaction log start files for the transaction logs that it scans. Specify the csf ("create start files") option to use this feature.

The start files are named S<lognumber>_<sequencenumber>.FCA, where lognumber is the transaction log number and sequencenumber is the checkpoint number in that log (starting from 1).

These start files can be renamed to S0000000.FCS and S0000001.FCS so that FairCom Server can use the checkpoint positions as starting points for automatic recovery.

Example:

# ctldmp log 1254 csf

 

getlogfil

LOGPOS:1254-002473fax #0:690034176 P0051a057f3-00x  F0000-000 T26 U03 A0000x L139672 042 CHKPNT

0000e0007fa0a000a0009200920073000000000071001200e000000000005100000000000000000

300054001cb100002000020042002c00000030008a20f0003400000040004b20100000000000000

........q............""...""..r<..........x.... ..............T................

Created start file S0001254_0001.FCA for log position 0x002473fa

 

        Sat May 25 01:19:31 2013

 

LOGPOS:1254-0026a624x #0:690034177 P0051a057f3-00x  F0000-000 T26 U17 A0000x L10684 042 CHKPNT

ffffe000f720a000a0009200920092000000000092001200e000000000007200000000000000000

bfff6400a3400000200002004200820000003000c200f0003400000040008300100000000000000

.........s$..........""...""...""...........""... ..............x#.............

Created start file S0001254_0002.FCA for log position 0x0026a624

 

        Sat May 25 01:19:31 2013
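Given the start files created above, one sketch for putting a chosen checkpoint into service is to copy that start file to both start-file names. The log number is from the example above; copying the same file to both names is our sketch, not a verified FairCom procedure:

```shell
#!/bin/sh
# Copy the chosen checkpoint's start file to both start-file names so
# automatic recovery begins at that checkpoint.
cp S0001254_0001.FCA S0000000.FCS
cp S0001254_0001.FCA S0000001.FCS
```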

 

Controls for Performance AND Safety of Non-Transaction Updates

(In this discussion, a cache page that has been updated and has not yet been written to the file system is called a "dirty page.")

FairCom DB offers multiple levels of transaction protection for your data. For performance reasons, some applications do not require the recoverability that full transaction processing provides. However, these applications may be vulnerable to data loss should a system failure occur. If FairCom Server terminates abnormally, updates to data and index files that are not under full transaction control are lost if those updates have not yet been written from c-tree's in-memory data and index caches to the file system. The following factors typically reduce the number of dirty pages that exist:

  1. When an updated cache page is being reused, the updated page is written to the file system cache.
  2. When all connections close a c-tree file, FairCom Server writes the updated pages to the file system cache before closing the file.
  3. An internal thread periodically checks if FairCom Server is idle, and if so it writes updated pages to the file system cache.

However, the combination of using very large data and index caches, keeping files open for long periods of time, and having constant activity on the system increases the likelihood that more dirty cache pages exist.

It is possible to define a vulnerability window that limits the potential loss of updates to your non-transaction data and index files. FairCom Server supports options to write dirty cache pages to the file system within a specified time period, so no more than a set amount of time can pass during which updated data remains unflushed to disk.

The following FairCom Server configuration options set the time limit (in seconds) that a data cache page or index buffer can remain dirty before it is written to the file system cache. The default time limit is IMMEDIATE (flush updates for non-tran files to the file system as soon as possible). Specify a value to set a time limit in seconds. Specify OFF to disable time limit-based flushing.


NONTRAN_DATA_FLUSH_SEC   <time_limit_in_seconds>

NONTRAN_INDEX_FLUSH_SEC  <time_limit_in_seconds>
 

These options can also be changed using the ctSETCFG() API function or the ctadmn utility.

Monitoring Non-Transaction Data Flush

Fields have been added to the system snapshot structure (ctGSMS) to hold the non-tran flush settings and statistics. See Time limit on flushing updated data and index cache pages for TRNLOG files in the FairCom DB Programmer's Reference.

Tuning Non-Transaction Data Flush

These FairCom Server configuration options set the number of counter buckets for the dirty data page and index buffer lists:


NONTRAN_DATA_FLUSH_BUCKETS   <number_of_buckets>

NONTRAN_INDEX_FLUSH_BUCKETS  <number_of_buckets>
 

The default number of counter buckets is 10. Setting the option to zero disables the use of the counter buckets.

Non-Transaction Flush Diagnostics

The configuration option DIAGNOSTICS BACKGROUND_FLUSH can be used to enable logging of flush thread operations to the file NTFLS.FCS.

The configuration option DIAGNOSTICS BACKGROUND_FLUSH_BUCKETS can be used to enable logging of flush counter bucket statistics to the file NTFLSBKT.FCS. Each time a text snapshot is written to the file SNAPSHOT.FCS file, the bucket statistics are written to the NTFLSBKT.FCS file.

 

Checkpoint Requirements

FairCom DB periodically writes checkpoints to the transaction logs. Automatic recovery, rollback, and forward roll use the most recent checkpoint listed in the transaction start files as the starting point. This section explains the conditions that a checkpoint must meet to be used for these operations.

Forward Roll

A forward roll can only be started from a checkpoint that is logged when all of the following conditions are true:

  1. No transactions are active.
  2. No abort node list entries exist (unless the ctrdmp utility is used and the forward roll starts from the begin dump checkpoint, in which case such entries are allowed).
  3. No index buffers contain unflushed updates for committed transactions.
  4. No data cache pages contain unflushed updates for committed transactions.

Due to these requirements, there is no guarantee that a checkpoint logged by calling CTCHKPNT() can be used as the starting point for a forward roll. If a forward roll is attempted from a checkpoint that does not meet these requirements, the forward roll fails with error 510 (RFCK_ERR, "active checkpoint at start of forward roll").

Three options are available for generating a checkpoint that can be guaranteed to be usable for a forward roll operation:

  1. Perform a dynamic dump: This is probably the most commonly-used option to provide a starting point for a forward roll operation. The dynamic dump achieves a quiet transaction state and flushes all updated index buffers and data cache pages for transaction-controlled files. Then it writes a "begin dump" checkpoint to the transaction logs and allows transaction activity to resume. The dynamic dump writes the specified data and index files to the dump stream file, and then it writes an "end dump" checkpoint and copies the transaction logs (the logs containing these two checkpoints and all logs between these two logs) to the dump stream file. When the ctrdmp utility is run, it reads the data and index files from the dump stream file and recovers them to their state as of the begin dump checkpoint. If you include the !FORWARD_ROLL option in the dump restore script, ctrdmp creates a start file that points to the begin dump checkpoint, which can be renamed from an FCA extension to FCS to serve as the starting point for a forward roll.
  2. Call ctQUIET(): Call ctQUIET() with a mode that ensures that the forward roll transaction state requirements are met. For example, use ctQTblockALL | ctQTflushAllFiles. While the server is quiesced, make a copy of the data files, index files, and transaction logs. The logs will contain a checkpoint that can be used to roll forward.
  3. Shut down FairCom Server cleanly: Shut down FairCom Server cleanly so that all clients disconnect and all files are closed, and FairCom Server writes a clean final checkpoint to the transaction log. The message "Perform system checkpoint" in CTSTATUS.FCS indicates that the final checkpoint was written. For example:

Wed Oct 5 10:04:01 2016

 - User# 00021 Server shutdown initiated

Wed Oct 5 10:04:03 2016

 - User# 00021 Communications terminated

Wed Oct 5 10:04:03 2016

 - User# 00021 Perform system checkpoint

Wed Oct 5 10:04:03 2016

 - User# 00021 Server shutdown completed

A final checkpoint (logged at a clean server shutdown) should also have the required attributes. If a checkpoint does not meet the conditions listed above, a forward roll beginning at that checkpoint fails with RFCK_ERR (510).

Rollback

Rollback can be started from any checkpoint. Starting with a point-in-time copy of the data files, index files, and transaction logs (acquired by using dynamic dump or ctQUIET() for example), rollback begins with the most recent checkpoint listed in the transaction start files. First, automatic recovery is performed to bring the files up to the state of the last committed transaction in the transaction logs, and then rollback undoes operations back to the requested point-in-time.

Calling CTCHKPNT

Although rollback can use any checkpoint, the checkpoint requirements for forward roll mean that a call to CTCHKPNT() is not guaranteed to be usable for forward roll. Note that each time a checkpoint is logged, the transaction start files are updated, so only the two most recent checkpoints will be listed in the two start files. The start files are used to provide the starting checkpoint position to forward roll and rollback. By calling CTCHKPNT() you are simply updating the start files more frequently. The two start files will never refer to more than two checkpoints at a time, and FairCom DB automatically writes checkpoints to the transaction logs periodically (typically at least three checkpoints per log).

Also remember that a forward roll or rollback requires more than just a starting checkpoint: the state of the data and index files must correspond to the current state of the transaction logs and the position of the starting checkpoint. To roll forward or back, you will need to have saved a point-in-time copy of the data files, index files, and transaction logs. Unless you save off this complete set of files when you call CTCHKPNT(), that checkpoint will not be useful in rolling forward or rolling back.

 

Hot Backups Stream to STDOUT

Database backups are essential, and should be taken at every opportunity and at regular intervals. However, backups are vulnerable to inappropriate access and can consume additional storage space. Encrypting and/or compressing these critical data streams is more important than ever.

The FairCom Database Engine can now redirect backups (dynamic dumps) to an operating system’s standard output (stdout) channel. This makes it easy to use operating system utilities and third-party tools to process backups before you write them to disk. Any file utility that can read and write data from stdin and stdout can take advantage of this feature. In addition, chaining several processes together is very easy. This greatly reduces post-processing steps such that your backups are ready for immediate archiving. For example, you can pipe a backup into gzip to compress the stream, pipe results into ccrypt and encrypt them, and finally pipe the resulting file directly to FTP and transfer to another environment.
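The pattern looks like the sketch below, with gzip standing in for the whole processing chain; in practice the stream would come from ctdump rather than cat (the ctdump command line in the comment is a sketch, not a verified invocation):

```shell
#!/bin/sh
# Streaming-backup pattern: pipe a byte stream through a processing
# chain, then reverse the chain before restoring.
#   real use (sketch): ctdump ... -x | gzip -c | ccrypt -e > backup.gz.cpt
printf 'backup payload' > backup.bin          # stand-in for the dump stream
cat backup.bin | gzip -c > backup.bin.gz      # compress on the way out
gzip -dc backup.bin.gz > restored.bin         # undo processing before restore
cmp -s backup.bin restored.bin && echo "round trip OK"
```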

The ctdump (ctdump - Schedule Backup Utility, Submit Backup Utility) utility has a new option (‑x) that redirects a backup stream to stdout and sends error and informational messages to stderr.

Before restoring these backups, you must undo, in reverse order, any processing you applied to the stream. And don’t lose or forget your encryption keys!

See ctdump (ctdump - Schedule Backup Utility, Submit Backup Utility)

Restore Backups Direct from STDIN (ctrdmp)

The ctrdmp utility now supports restoring a dynamic dump stream redirected to its standard input. To enable this behavior, run ctrdmp with the -x option, in addition to providing the proper redirection.

 

Volume Shadow Copy Service (VSS) Support

The Volume Shadow Copy Service (VSS) is a Microsoft technology built into Windows operating systems starting with Windows XP. This service allows taking manual or automatic backup copies, or “snapshots”, of a logical drive. Snapshots have two primary purposes:

  • They allow the creation of consistent backups of a volume, ensuring that the contents cannot change while the backup is being made.
  • They avoid problems with file locking.

By creating a read-only copy of the volume, backup programs are able to access every file without interfering with other programs writing to those same files.

FairCom Server provides VSS support through its VSS writer, which controls how FairCom DB data is set to a consistent state at the beginning of a VSS operation and how that consistency is maintained throughout the process.

The VSS writer is an integral component of the VSS support provided by FairCom DB. This component is supplied as a Windows dynamic link library (c-treeACEVSSWriter.dll) and can optionally be loaded by FairCom DB at startup.


VSS Configuration

VSS_WRITER YES

With this option enabled, FairCom DB loads the Volume Shadow Copy Service (VSS) writer DLL (c-treeACEVSSWriter.dll) and initializes the VSS writer when the server starts. A FULL CONSISTENCY VSS backup of FairCom database files will be created, which is comparable to cleanly shutting down the FairCom server and then backing up all files.

For application files that are under full transaction control, a FULL CONSISTENCY VSS backup is more time-consuming and invasive than strictly required. Application files with the ctTRNLOG property are under full transaction control. If fully transaction-controlled files and the database transaction logs (L*.FCS and S0*.FCS) are included in backups, normal database automatic recovery takes place at startup and properly restores CRASH CONSISTENT backed-up data. Do NOT enable VSS_WRITER YES in ctsrvr.cfg for application files that are under full transaction control.

Note: VSS backups require the Volume Shadow Copy service to be running. If this Windows service is set to start manually or is off by default, it needs to be started before VSS backup will work.

The following message is logged in CTSTATUS.FCS indicating the VSS writer has been started:

Mon Sep 13 14:11:27 2010

 - User# 00001 VSS Init: Successfully started the VSS writer.

If you run the command “vssadmin list writers” on a machine with FairCom DB Server running and a properly configured VSS, the list should include c‑treeACEVSSWriter.

Compatibility Notes

  • The FairCom VSS writer is compatible with the backup utilities provided in the server versions of Windows.
  • The Windows backup software provided in desktop versions of Windows (e.g., the Enterprise editions of Windows 7 and Windows 8) is not a VSS-compatible backup provider and therefore will not work with the FairCom VSS writer.
  • Windows Server backup (2008 & 2012) is a VSS provider and works with the FairCom VSS writer.
  • Acronis Backup has been tested on Windows 7 (both 32-bit and 64-bit) and works correctly with the FairCom VSS writer when configured with ctsrvr.dds.
  • The Novastor backup utility has been tested on non-server versions of Windows and works correctly with the FairCom VSS writer when configured with ctsrvr.dds.
  • Other third-party backup utilities may work with the FairCom VSS writer if they are VSS-compatible backup providers. Please check with the manufacturer of your backup utility for information about VSS compatibility.

User Permissions

FairCom VSS Writer is intended to be run by users with Administrator permissions. To avoid permission issues when running the VSS Writer with a user that is not an Administrator, you must perform the following operations on the Windows registry:

  1. Run the Windows regedit command.
  2. Browse to the key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS\VssAccessControl
  3. Insert a new REG_DWORD value whose name uses the syntax DOMAINNAME\USERNAME. For example, if your domain is MYDomain and your user name is User, enter: MYDomain\User
  4. Set the newly created key to the hexadecimal value of 1.
  5. Restart the computer to apply the changes.
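Steps 2 through 4 correspond to a registry fragment like the following (.reg file format; the domain and user names are the placeholders from the example above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS\VssAccessControl]
"MYDomain\\User"=dword:00000001
```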

Files to Be Backed Up

The VSS writer needs a list of the files that are considered to be under the server's control. This information must be located in the file ctsrvr.dds, residing in the server's working directory (where faircom.exe is located). For the VSS backup, only entries between !FILES and !END are relevant. There is no directory recursion, so wildcards are not matched in subdirectories.


!FILES

C:\FairCom\ctreeSDK\ctreeAPI\bin.sql\ctreeSQL.dbs\test1.dat

C:\FairCom\ctreeSDK\ctreeAPI\bin.sql\ctreeSQL.dbs\test1.idx

ctreeSQL.dbs\*.dat

ctreeSQL.dbs\*.idx

ctreeSQL.dbs\SQL_SYS\*

!END
 

This information tells the backup utility which files are under FairCom DB control. If the set of files being backed up does not intersect the set of files listed in ctsrvr.dds, the VSS service does not interact with the FairCom DB VSS writer, resulting in an invalid backup of any files open by the server.

While testing, it is recommended that you run the FairCom DB SQL Server with DIAGNOSTICS VSS_WRITER in ctsrvr.cfg. When the VSS writer is correctly configured, you should see entries logged to CTSTATUS.FCS like those listed in VSS Diagnostic Logging.

 

VSS Diagnostic Logging

The following FairCom DB configuration option enables VSS writer diagnostic logging (it can also be enabled dynamically with server administration utilities):

DIAGNOSTICS VSS_WRITER

When enabled, the VSS writer logs diagnostic messages to CTSTATUS.FCS. These messages indicate the sequence of operations to which the VSS writer is responding. Some examples are shown below:


Tue Sep 14 15:44:05 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnIdentify called

Tue Sep 14 15:44:07 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnPrepareBackup called

Tue Sep 14 15:44:07 2010

 - User# 00016 VSS Diag: [0x1098] (+) Component: CtreeACE

Tue Sep 14 15:44:07 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnPrepareSnapshot called

Tue Sep 14 15:44:07 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnFreeze called

Tue Sep 14 15:44:07 2010

 - User# 00016 VSS Diag: [0x1098] QuietCtree(ctQTblockALL | ctQTflushAllFiles)...

Tue Sep 14 15:44:08 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnThaw called

Tue Sep 14 15:44:08 2010

 - User# 00016 VSS Diag: [0x1098] QuietCtree(ctQTunblockALL)...

Tue Sep 14 15:44:08 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnPostSnapshot called

Tue Sep 14 15:44:10 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnIdentify called

Tue Sep 14 15:44:25 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnIdentify called

Tue Sep 14 15:44:26 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnBackupComplete called

Tue Sep 14 15:44:26 2010

 - User# 00016 VSS Diag: [0x1098]     c-treeACEVSSWriter::OnBackupShutdown called
 

Note: The VSS writer always logs error messages to the Windows event log, even if the DIAGNOSTICS VSS_WRITER option is not specified in the configuration file.

 

Dynamic API Interaction

The following SQL built-in procedures can enable VSS Writer support:

call  fc_set_sysconfig('vss_writer',  'YES');

call  fc_set_sysconfig('diagnostics', 'VSS_WRITER');
 

The ctSETCFG() API function (SetConfiguration()) can be used to change the VSS configuration dynamically.

Examples

If the FairCom DB VSS writer is not running, the following call starts it:

ctSETCFG(setcfgVSS_WRITER, "YES");

If the FairCom DB VSS writer is running, the following call stops it:

ctSETCFG(setcfgVSS_WRITER, "NO");

The following call enables VSS writer diagnostic logging:

ctSETCFG(setcfgDIAGNOSTICS, "VSS_WRITER");

The following call disables VSS writer diagnostic logging:

ctSETCFG(setcfgDIAGNOSTICS, "~VSS_WRITER");