.cmd, .substitutions and .template explanations#


.cmd workflow#

The recommended way of setting up SSH Monitor is to have a single .cmd file per target to monitor, i.e. a single SSH Monitor IOC program per target. Here is why:

The record scanning/processing step is not multi-threaded (see https://epics.anl.gov/core-talk/2019/msg00197.php). So every record is scanned/processed one by one, including the aSub records calling SSH commands. If one or multiple targets are unreachable, then the SSH commands of the associated aSub records will time out, one after the other. This can result in a total scanning/processing time longer than the one defined by the SCAN fields of the associated aSub records, i.e. a “scan processing over-run” will happen.

Even without unreachable targets (let’s say that all targets are reachable via SSH), if a lot of targets are monitored in a single IOC program, “scan processing over-runs” can still occur. The reason is that some SSH shell instructions can take more than a second to complete: if there are a lot of them, then - inevitably - the total scanning/processing time will exceed the one defined with the SCAN field.

So, the best way to avoid such “scan processing over-run” problems is to configure one IOC program per target. This way, IOC programs run in parallel without interfering with each other.

Important

When running multiple IOC programs in parallel on the same machine, make sure that each IOC program defines a unique EPICS_CAS_SERVER_PORT with the epicsEnvSet IOC Shell command. You can refer to the syntax used in this .cmd and .substitutions how-to (detailed in the .cmd explanations below). The reasons are given in the TCP ports limitations explanations.
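As a minimal sketch, two .cmd files running in parallel on the same machine could differ like so (the port and prefix values are purely illustrative; check the TCP ports limitations explanations for values suitable to your site):

```
# st_target1.cmd
epicsEnvSet("EPICS_CAS_SERVER_PORT", "5064")
epicsEnvSet("PREFIX", "target1:")

# st_target2.cmd
epicsEnvSet("EPICS_CAS_SERVER_PORT", "5066")
epicsEnvSet("PREFIX", "target2:")
```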


.cmd explanations#

The st_tests.cmd file is a reference file that can be used as an example for all your SSH Monitor related .cmd files; it is also used for the SSH Monitor tests. This file is similar to the one below:

#!../../bin/linux-x86_64/myTargetMonitoring

< envPaths

epicsEnvSet("IOCSH_PS1", "Target1 Monitoring> ")
epicsEnvSet("SUBSTITUTIONS_FILES", "${TOP}/db/*.sub*")
epicsEnvSet("EPICS_CAS_SERVER_PORT", "5064")  
                                              
epicsEnvSet("PREFIX", "target1:")
epicsEnvSet("SSH_OPTS_AND_ARGS", "target-sshmonitor-user@192.168.1.3 -i /home/host-sshmonitor-user/.ssh/host_to_target_ssh_monitor_ed25519_key -o BatchMode=yes -o PasswordAuthentication=no -o ConnectionAttempts=3 -x -o ControlMaster=auto -o ControlPath=~/.ssh/%C -o ControlPersist=60s")

dbLoadDatabase("${SSHMONITOR}/dbd/menuScan.dbd")
dbLoadDatabase("${TOP}/dbd/myTargetMonitoring.dbd")
myTargetMonitoring_registerRecordDeviceDriver(pdbbase)

var(dbRecordsOnceOnly, 1)

dbLoadTemplate("${TOP}/db/example_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/processors_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/memory_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/partitions_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/connection_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/db/archiver_appliance_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/db/your_own_specific_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/iocBoot/${IOC}/your_other_own_specific_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")

system('cat "${SUBSTITUTIONS_FILES}" | grep "CATEGORY = " | tr -d " " | tr "\"" " " | awk "{print $1 $2}" | grep -v "\#" 2>/dev/null | uniq -D | grep . && (echo "ERROR: illegal categories: multiple CATEGORY macros share the same name!"; kill -9 $PPID) || echo "INFO: substitutions files ${SUBSTITUTIONS_FILES} are OK"')

iocInit

Here are the details for each line:

  • #!../../bin/linux-x86_64/myTargetMonitoring
    

    instructs the program loader to run the myTargetMonitoring program (located at ../../bin/linux-x86_64/myTargetMonitoring, relative to the .cmd file), passing the content of the .cmd file as the first argument. This is a shebang. Note that the content of this .cmd file is a script written in a dedicated EPICS scripting language called IOC Shell (iocsh).

  • < envPaths
    

    runs the content of the envPaths file, next to the .cmd file. The envPaths file is automatically generated at build time and contains EPICS environment variables, specifying — most of the time — the location of the Top (with the ${TOP} environment variable), the location of the IOC program in the iocBoot directory of the Top (with the ${IOC} environment variable), the location of the EPICS base, etc.
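    For illustration only, a generated envPaths file typically looks like the following (the paths here are placeholders, not the ones generated for your Top):

```
epicsEnvSet("TOP", "/path/to/myTargetMonitoringTop")
epicsEnvSet("IOC", "iocMyTargetMonitoring")
epicsEnvSet("EPICS_BASE", "/path/to/epics-base")
epicsEnvSet("SSHMONITOR", "/path/to/sshmonitor")
```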

  • epicsEnvSet("IOCSH_PS1", "Target1 Monitoring> ")
    

    changes the prompt of the IOC Shell. This is very useful when running multiple IOC programs in parallel, because it helps identify them clearly.

  • epicsEnvSet("SUBSTITUTIONS_FILES", "${TOP}/db/*.sub*")
    

    sets an environment variable locating your .substitutions files. The SUBSTITUTIONS_FILES environment variable will be used later to check the content of the .substitutions files (it is used in the system(...) line below).

  • epicsEnvSet("EPICS_CAS_SERVER_PORT", "5064")
    

    specifies the TCP port used by EPICS for Channel Access communications. When running multiple IOC programs on the same machine, it is recommended to set a unique port for each of the IOC programs for reasons given in the TCP ports limitations explanations.

  • epicsEnvSet("PREFIX", "target1:")
    

    sets the prefix for all the records’ names (in this example, the prefix is target1:).

    Important

    This prefix should be unique across all the .substitutions files. E.g. a unique target name.

  • epicsEnvSet("SSH_OPTS_AND_ARGS", "target-sshmonitor-user@192.168.1.3 -i /home/host-sshmonitor-user/.ssh/host_to_target_ssh_monitor_ed25519_key -o BatchMode=yes -o PasswordAuthentication=no -o ConnectionAttempts=3 -x -o ControlMaster=auto -o ControlPath=~/.ssh/%C -o ControlPersist=60s")
    

    configures the SSH options and arguments (as described by man ssh on the host used to send SSH commands).

    Tip

    Recommended options and arguments to use:

    • -i /path/to/your/ssh/key: select the path to the file from which the identity (private key) for public key authentication is read.

      • Note that if the remote user you are connecting to has an empty password, then you don’t actually need to set up a key-based SSH connection. A regular SSH connection will be enough, because there will be no password prompt interrupting the SSH Monitor IOC program. In this case, you can set the -i option to /dev/null.

        However, for obvious security reasons, I strongly suggest not allowing empty passwords for your users, and setting up a key-based SSH connection instead (as described in the SSH how-to).

    • -o BatchMode=yes: disable user interaction such as password prompts and host key confirmation requests (this option is useful in scripts and other batch jobs where no user is present to interact with).

    • -o PasswordAuthentication=no: prevents a password authentication prompt from appearing.

    • -o ConnectionAttempts=n: number of connection attempts to make (one per second) before exiting.

    • -x: disables X11 forwarding.

    • SSH_Monitor-target-user-name@target-ip-address: sets the target user and IP address.

    Tip

    Very useful options to use:

    • Avoid closing and reopening a new SSH connection for each instruction/PV, by adding the following options:

      • -o ControlMaster=auto: Enables the sharing of multiple sessions over a single network connection. If set to auto, it creates a master session automatically, but if there is a master session already available, subsequent sessions are automatically multiplexed.

      • -o ControlPath=~/.ssh/%C: Specifies the path to the control socket used for connection sharing. In this path, %C generates a SHA1 hash (depending on the target information like username, hostname, port number) in order to get a short and unique socket name.

      • -o ControlPersist=600s: When used in conjunction with ControlMaster, specifies that the master connection should remain open (for 600 seconds here) in the background (waiting for future client connections)

    • Go through SSH intermediate machine(s) (jump hosts):

      • -J intermediate-user@intermediate-ip-address: connect to the target by first making an SSH connection through an intermediate machine. Multiple jump hops may be specified, separated by comma characters.

      • With the previous option, a second -i argument might be needed to get the SSH key for the intermediate machine.
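    As a sketch (reusing the placeholder host names above; these commands are meant to be run manually to understand the multiplexing behavior, not by SSH Monitor itself), a multiplexed jump-host connection can be exercised like so:

```shell
# The first call opens a master connection (kept alive 60s) through the jump host:
ssh -o ControlMaster=auto -o ControlPath=~/.ssh/%C -o ControlPersist=60s \
    -J intermediate-user@intermediate-ip-address \
    target-sshmonitor-user@192.168.1.3 "uptime"

# Later calls with the same ControlPath reuse the master connection.
# The master can be inspected and closed with ssh's -O control commands:
ssh -o ControlPath=~/.ssh/%C -O check target-sshmonitor-user@192.168.1.3
ssh -o ControlPath=~/.ssh/%C -O exit  target-sshmonitor-user@192.168.1.3
```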

    Warning

    Options to avoid:

    • -o ConnectTimeout=n: specifies n second(s) of timeout when connecting to the SSH server. It might not be a good idea to change the default timeout, because it is actually based on the default TCP timeout of the system (see the ConnectTimeout option in $ man ssh_config). Configuring one IOC program per target should be enough to not have to worry about lowering the SSH timeout. The risk of modifying the SSH timeout behavior is missing some SSH connections and ending up with incorrect PVs. More details about the TCP timeout here: https://stackoverflow.com/a/15485308. With or without configuring the SSH timeout: in any case, it might be worth setting the ConnectionAttempts option to 3 or 4, in order to increase the reliability of the program (even on a shaky network).

  • dbLoadDatabase("${SSHMONITOR}/dbd/menuScan.dbd")
    

    allows access to more EPICS SCAN intervals, which SSH Monitor needs. Replace the ${SSHMONITOR} macro with the one specified in your myTargetMonitoring/configure/RELEASE file, if the macro name isn’t SSHMONITOR already.

  • dbLoadDatabase("${TOP}/dbd/myTargetMonitoring.dbd")
    

    loads the automatically generated myTargetMonitoring.dbd file (database definition file).

  • myTargetMonitoring_registerRecordDeviceDriver(pdbbase)
    

    registers the record types, device support and other “registrars” defined in the loaded database definition files.

  • var(dbRecordsOnceOnly, 1)
    

    causes an error to be thrown if multiple records share the same name (but it won’t prevent the IOC program from starting). This feature is very useful for SSH Monitor, because a lot of record names are defined by the user with macros, so it’s quite easy to make mistakes with unfortunate copy/pastes, ending up with duplicated macros (leading to duplicated record names).

    Unfortunately, var(dbRecordsOnceOnly, 1) won’t prevent the IOC program from starting if an error is thrown. In fact, it appears that no such aborting mechanism is available right now in EPICS: https://epics.anl.gov/tech-talk/2019/msg00730.php.

    See also

    See the database definition documentation for more details.

  • dbLoadTemplate("${TOP}/db/example_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
    dbLoadTemplate("${TOP}/db/processors_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
    dbLoadTemplate("${TOP}/db/memory_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
    dbLoadTemplate("${TOP}/db/partitions_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
    dbLoadTemplate("${TOP}/db/connection_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
    #dbLoadTemplate("${TOP}/db/archiver_appliance_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
    #dbLoadTemplate("${TOP}/db/your_own_specific_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
    #dbLoadTemplate("${TOP}/iocBoot/${IOC}/your_other_own_specific_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
    

    loads the desired .substitutions files. See the next section for more details about the .substitutions file syntax.

  • system('cat "${SUBSTITUTIONS_FILES}" | grep "CATEGORY = " | tr -d " " | tr "\"" " " | awk "{print $1 $2}" | grep -v "\#" 2>/dev/null | uniq -D | grep . && (echo "ERROR: illegal categories: multiple CATEGORY macros share the same name!"; kill -9 $PPID) || echo "INFO: substitutions files ${SUBSTITUTIONS_FILES} are OK"')
    

    runs a system command (your host machine is expected to have a POSIX shell for the command to run with) that looks for the “CATEGORY” macro in your .substitutions files. It is a common error to duplicate this macro, causing record names / PV names to be duplicated. So if the system(...) command finds duplicated “CATEGORY” macros, it throws an error message and stops the IOC program.

    Also, note that this type of system command is available in the IOC Shell because the line myTargetMonitoring_DBD += system.dbd has been added to the myTargetMonitoringApp/src/Makefile file, while installing SSH Monitor.

    See also

    See the installation how-to guide for more details.
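To see what the duplicate-category detection reacts to, here is a simplified, self-contained variant of the check (the file content and category values are made up for the demo, and it uses sort | uniq -d instead of the uniq -D pipeline shown above):

```shell
# Create a throwaway file imitating .substitutions content with a duplicated CATEGORY
TMP=$(mktemp)
cat > "$TMP" <<'EOF'
{ CATEGORY = "processors" }
{ CATEGORY = "memory" }
{ CATEGORY = "processors" }
EOF

# Normalize the CATEGORY lines and keep only the values that appear more than once
DUPES=$(grep 'CATEGORY = ' "$TMP" | tr -d ' ' | sort | uniq -d)
if [ -n "$DUPES" ]; then
  echo "ERROR: illegal categories: multiple CATEGORY macros share the same name!"
fi
rm -f "$TMP"
```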

  • iocInit
    

    starts the IOC program.

See also

More details about .cmd files can be found here: specs/IOCInit.html


.template explanations#

There is a single .template file used by SSH Monitor: ssh_monitor_core.template.

It can be implemented one or multiple times by any .substitutions file (in the file "${TOP}/db/ssh_monitor_core.template" { ... } part of the .substitutions file).

This .template file is heavily documented in the reference documentation (using Doxygen).

See also

See the reference of ssh_monitor_core.template for more details.


.substitutions explanations#

In order to create and configure a .substitutions file for SSH Monitor (e.g. myTargetMonitoringTop/iocBoot/iocMyTargetMonitoring/target1.substitutions), you can look at the SSH Monitor .substitutions files as references/examples (they are used for the SSH Monitor tests).

Tip

In those files, inside a macro value, some characters need to be escaped if you don’t want them to be interpreted by the EPICS parser:

  • inside double-quotes, escape $ by \\\\

  • inside double-quotes, escape ( by \\

  • inside double-quotes, escape ) by \\
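For example (the first line is taken from the memory-monitoring example used later in this guide; the second is a hypothetical instruction just to show escaped parentheses):

```
SCALAR_6_INSTRUCTION = "free -m | grep 'Mem:' | awk '{print \\\\$7}'"
SCALAR_7_INSTRUCTION = "echo \\(this is escaped\\)"
```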

You’ll also notice that those files are divided into two parts:

The global { ... } part#

This part allows you to set different macros used across the whole .substitutions file. Here are some details about those macros:

  • PREFIX: This macro must (as in mandatory) be specified in the global { ... } part.

    • This macro is a prefix for all the record names defined through this file.

    • A string format is expected as a value.

    • Important

      This prefix should be unique across all the .substitutions files. E.g. a unique target name.

    • The PREFIX macro is set through the .cmd file via the PREFIX_MACRO and the PREFIX macros.

    • See, in the previous .cmd section, the explanations about the epicsEnvSet("PREFIX", "...") line, for more details about the content of this macro.

  • SSH_OPTS_AND_ARGS: This macro must (as in mandatory) be specified in the global { ... } part.

    • This macro specifies the SSH options and arguments used by the SSH command executed on the host (as described by man ssh on the host machine).

    • A string format is expected as a value.

    • The SSH_OPTS_AND_ARGS macro is set through the .cmd file via the SSH_OPTS_AND_ARGS_MACRO and the SSH_OPTS_AND_ARGS macros.

    • See in the previous .cmd section, the explanations about the epicsEnvSet("SSH_OPTS_AND_ARGS", "...") line, for more details about the content of this macro.

  • SSH_OPTS_AND_ARGS_LENGTH: This macro can (as in optional) be specified in the global { ... } part.

    • This macro specifies the maximum size (in characters) of the previous SSH_OPTS_AND_ARGS macro.

    • An integer number format is expected as a value.

    • Its default value is 1024 (characters).

The file "${TOP}/db/ssh_monitor_core.template" { ... } part#

This part allows implementing the same ssh_monitor_core.template file multiple times, with a different set of macros each time.

Each macro set is defined in its own scope ({ ... }) within the file "${TOP}/db/ssh_monitor_core.template" { ... } part: each new scope implements the same ssh_monitor_core.template file, but with different macros.
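Assuming the macro-forwarding names used in the .cmd section above (PREFIX_MACRO and SSH_OPTS_AND_ARGS_MACRO) and a made-up "memory:" category, a minimal .substitutions file can be sketched like so (see the reference .substitutions files for the exact syntax expected by SSH Monitor):

```
global
{
    PREFIX = "$(PREFIX_MACRO)",
    SSH_OPTS_AND_ARGS = "$(SSH_OPTS_AND_ARGS_MACRO)"
}
file "${TOP}/db/ssh_monitor_core.template"
{
    {
        CATEGORY = "memory:",
        SCAN = "30 second",
        PV_NAME_SCALAR_6 = "available_ram",
        SCALAR_6_INSTRUCTION = "free -m | grep 'Mem:' | awk '{print \\\\$7}'",
        LOAD_SCALAR_6 = ""
    }
}
```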

Here are the details about how to implement the ssh_monitor_core.template file with a new scope ({ ... }) inside file "${TOP}/db/ssh_monitor_core.template" { ... }:

  • A new scope must define the CATEGORY macro with a unique value (unique in the file)

    • A string format is expected as a value.

    • E.g. CATEGORY = "my:very:unique:category:name".

    • Important

      Don’t forget to set a value for the CATEGORY macro; do not leave it empty!

    • Warning

      Also, be careful not to duplicate CATEGORY values inside a .substitutions file, or some associated records might be misnamed or share the same name: if that happens, one record could overwrite others!

    • Note that the CATEGORY macro will be part of the PV names defined in the same scope. Each PV name will be prefixed like so: ${PREFIX}${CATEGORY}.

  • A new scope can optionally define the SCAN macro with one of the following values, in order to specify the time interval at which you want the data to be retrieved (60 seconds by default):

    • “1 hour”

    • “30 minute”

    • “20 minute”

    • “15 minute”

    • “10 minute”

    • “5 minute”

    • “1 minute”

    • “60 second” (DEFAULT value if not specified)

    • “30 second”

    • “20 second”

    • “10 second”

    • “5 second”

    • “2 second”

    • “1 second”

    • “.5 second”

    • “.2 second”

    • “.1 second”

    Note

    Unlike with other fields, the SCAN macro is specified for a whole scope / category. This is due to the fact that only one aSub record is defined in the ssh_monitor_core.template file (you can also check this file for more details about the aSub record), and thus, only one associated SCAN field. So, in every SSH Monitor .substitutions file, the SCAN macro is generally declared next to the CATEGORY macro.

    Tip

    If you want to override the default SCAN value, e.g. when loading SSH Monitor default .substitutions files, then you can either:

    1. (This is the preferred method) Load the SSH Monitor .substitutions files from your .cmd file, e.g. like so:

      dbLoadTemplate("${TOP}/db/example_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}, SCAN_MACRO=1 minute")

      Note that the SCAN_MACRO=1 minute macro will override the default SCAN value.

    2. Modify the SCAN macro directly in the .substitutions file.

  • A new scope can also optionally define the CACHE_STATUS_NAME macro. This macro will specify the name of the PV indicating the status of the cache used by the new scope.

    • A string format is expected as a value.

    • It defaults to “CacheStatus”, which doesn’t usually need to be changed.

  • A new scope can also optionally define the OSV_CACHE_STATUS macro. This macro specifies the severity of the PV indicating the status of the cache used by the new scope (when its value reaches 1).

    • A specific string format is expected as a value: it can be either MINOR or MAJOR.

    • It defaults to “MAJOR”, which doesn’t usually need to be changed.

  • A new scope is composed of “instructions”. An “instruction” is a set of macros that will specify what shell instruction to run on the target machine in order to retrieve the expected data.

  • A new scope can define up to 15 scalar “instructions” and up to 5 string “instructions”.

    • A “scalar instruction” is just a shell instruction which is expected to return a double-precision floating-point number format.

    • A “string instruction” is just a shell instruction which is expected to return a sequence of characters.

  • A new “scalar instruction” must (as in mandatory) define the following macros:

    • PV_NAME_SCALAR_n: specifies a PV.

      • A string format is expected as a value.

      • Note that n in PV_NAME_SCALAR_n is a unique ID number (unique in the scope) which has to range between 1 and 15.

      • E.g. PV_NAME_SCALAR_6 = "available_ram".

    • SCALAR_n_INSTRUCTION: specifies a shell instruction, whose output is expected to be a double-precision floating-point number format.

      • A string format is expected as a value.

      • Note that n in SCALAR_n_INSTRUCTION is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. SCALAR_6_INSTRUCTION = "free -m | grep 'Mem:' | awk '{print \\\\$7}'".

      • Note that it is possible to run the shell instruction on the host machine instead of the target. To do so, add #LOCAL at the end of the SCALAR_n_INSTRUCTION value, e.g. SCALAR_6_INSTRUCTION = "uname --kernel-release | cut -c1-1 #LOCAL". In this case, the SSH options specified in SSH_OPTS_AND_ARGS are not taken into account.

      • Note that it is possible to log the shell instruction in the IOC Shell. To do so, add #DEBUG at the end of the SCALAR_n_INSTRUCTION value, e.g. SCALAR_6_INSTRUCTION = "uname --kernel-release | cut -c1-1 #DEBUG".

      • Note that it is possible to do both at the same time: run the shell instruction on the host machine and log it in the IOC Shell. To do so, add #LOCAL+DEBUG, or #DEBUG+LOCAL, at the end of the SCALAR_n_INSTRUCTION value, e.g. SCALAR_6_INSTRUCTION = "uname --kernel-release | cut -c1-1 #LOCAL+DEBUG".

    • LOAD_SCALAR_n: a comment-macro used to uncomment/load the whole record called ${PREFIX}${CATEGORY}${PV_NAME_SCALAR_n} (located in the ${TOP}/db/ssh_monitor_core.template file).

      • Note that n in LOAD_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • This comment-macro must (as in mandatory) be empty in order for the record called ${PREFIX}${CATEGORY}${PV_NAME_SCALAR_n} to be loaded.

      • E.g. LOAD_SCALAR_6 = "".

  • Optionally, a “scalar instruction” can also define the SCALAR_n_INSTRUCTION_LENGTH macro, which specifies the number of characters allocated for SCALAR_n_INSTRUCTION. Its default value is 1024, so if the value of SCALAR_n_INSTRUCTION contains more than 1024 characters, you will have to specify this macro, or the associated shell instruction won’t work. You can also use it to set a tailor-made length in order to improve memory efficiency.

    • An integer number format is expected as a value.

    • It defaults to 1024.

    • Note that n in SCALAR_n_INSTRUCTION_LENGTH is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

    • E.g. if SCALAR_6_INSTRUCTION is set like so: SCALAR_6_INSTRUCTION = "free -m | grep 'Mem:' | awk '{print \\\\$7}'" (a 43-character shell instruction), then you can set SCALAR_6_INSTRUCTION_LENGTH like so: SCALAR_6_INSTRUCTION_LENGTH = "44" (43 + 1 end-of-line character).
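When sizing SCALAR_n_INSTRUCTION_LENGTH, it can help to measure the instruction with wc -c. The sketch below counts the instruction as the shell sees it (escapes already resolved), which can be smaller than the character count written in the .substitutions file, so round up generously:

```shell
# The instruction as it will actually be executed (escapes already resolved)
INSTRUCTION="free -m | grep 'Mem:' | awk '{print \$7}'"

# printf %s emits the string without a trailing newline, so wc -c counts
# exactly the instruction's bytes; add 1 for the end-of-line character
LEN=$(printf %s "$INSTRUCTION" | wc -c)
echo "SCALAR_n_INSTRUCTION_LENGTH must be at least $((LEN + 1))"
```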

  • Optionally, a “scalar instruction” can also define the following macros in order to specify some fields of its record (like precision, description, alarm thresholds, alarm severity, etc.):

    • PREC_SCALAR_n: specifies the floating point precision (i.e. the number of digits to show after decimal point) with which to display the value of the associated “scalar instruction”.

      • An integer number format is expected as a value.

      • It defaults to 16.

      • The associated PREC field will be set to this macro.

      • Note that n in PREC_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. PREC_SCALAR_6 = "32".

    • DESC_SCALAR_n: specifies the description of the associated “scalar instruction”.

      • A string format is expected as a value.

      • Must not exceed 40 characters.

      • The associated DESC field will be set to this macro.

      • Note that n in DESC_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. DESC_SCALAR_6 = "a short description for my PV".

    • LOAD_DESC_SCALAR_n: a comment-macro used to uncomment/load the previous DESC field.

      • This comment-macro must be empty in order for the DESC_SCALAR_n macro to be loaded.

      • E.g. LOAD_DESC_SCALAR_6 = "".

    • EGU_SCALAR_n: specifies the “engineering unit” of the associated “scalar instruction”.

      • A string format is expected as a value.

      • The associated EGU field will be set to this macro.

      • Note that n in EGU_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. EGU_SCALAR_6 = "%".

    • LOAD_EGU_SCALAR_n: a comment-macro used to uncomment/load the previous EGU field.

      • This comment-macro must be empty in order for the EGU_SCALAR_n macro to be loaded.

      • E.g. LOAD_EGU_SCALAR_6 = "".

    • HYST_SCALAR_n: specifies the hysteresis factor of the associated “scalar instruction”. This factor allows preventing alarm chattering from an input signal that is close to one of the limits and suffers from significant readout noise. The value of the associated “scalar instruction” must change by at least the hysteresis factor before the alarm status and severity are impacted.

      • A double-precision floating-point number format is expected as a value.

      • The associated HYST field will be set to this macro.

      • Note that n in HYST_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. HYST_SCALAR_6 = "0.042".

    • LOAD_HYST_SCALAR_n: a comment-macro used to uncomment/load the previous HYST field.

      • This comment-macro must be empty in order for the HYST_SCALAR_n macro to be loaded.

      • E.g. LOAD_HYST_SCALAR_6 = "".

    • DRVL_SCALAR_n: specifies the drive low limit of the associated “scalar instruction”. With this limit, the value of the associated “scalar instruction” is clipped to the following range: from DRVL to DRVH inclusive (provided that DRVH > DRVL).

      • A double-precision floating-point number format is expected as a value.

      • The associated DRVL field will be set to this macro.

      • Note that n in DRVL_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. DRVL_SCALAR_6 = "0".

    • LOAD_DRVL_SCALAR_n: a comment-macro used to uncomment/load the previous DRVL field.

      • This comment-macro must be empty in order for the DRVL_SCALAR_n macro to be loaded.

      • E.g. LOAD_DRVL_SCALAR_6 = "".

    • DRVH_SCALAR_n: specifies the drive high limit of the associated “scalar instruction”. With this limit, the value of the associated “scalar instruction” is clipped to the following range: from DRVL to DRVH inclusive (provided that DRVH > DRVL).

      • A double-precision floating-point number format is expected as a value.

      • The associated DRVH field will be set to this macro.

      • Note that n in DRVH_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. DRVL_SCALAR_6 = "100".

    • LOAD_DRVH_SCALAR_n: a comment-macro used to uncomment/load the previous DRVH field.

      • This comment-macro must be empty in order for the DRVH_SCALAR_n macro to be loaded.

      • E.g. LOAD_DRVH_SCALAR_6 = "".

    • HOPR_SCALAR_n: specifies the high operating range of the associated “scalar instruction”, i.e. the upper display limit for its value. If this field is defined, it must be in the range: DRVL <= LOPR <= HOPR <= DRVH. If so, it will act as an upper limit for the VAL, HIHI, HIGH, LOW, and LOLO fields, for client display only.

      • A double-precision floating-point number format is expected as a value.

      • The associated HOPR field will be set to this macro.

      • Note that n in HOPR_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. HOPR_SCALAR_6 = "100".

    • LOAD_HOPR_SCALAR_n: a comment-macro used to uncomment/load the previous HOPR field.

      • This comment-macro must be empty in order for the HOPR_SCALAR_n macro to be loaded.

      • E.g. LOAD_HOPR_SCALAR_6 = "".

    • LOPR_SCALAR_n: specifies the low operating range of the associated “scalar instruction”, i.e. the lower display limit for its value. If this field is defined, it must be in the range: DRVL <= LOPR <= HOPR <= DRVH. If so, it will act as a lower limit for the VAL, HIHI, HIGH, LOW, and LOLO fields, for client display only.

      • A double-precision floating-point number format is expected as a value.

      • The associated LOPR field will be set to this macro.

      • Note that n in LOPR_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. LOPR_SCALAR_6 = "0".

    • LOAD_LOPR_SCALAR_n: a comment-macro used to uncomment/load the previous LOPR field.

      • This comment-macro must be empty in order for the LOPR_SCALAR_n macro to be loaded.

      • E.g. LOAD_LOPR_SCALAR_6 = "".

    • HIGH_SCALAR_n: specifies the high alarm limit threshold of the associated “scalar instruction”.

      • A double-precision floating-point number format is expected as a value.

      • The associated HIGH field will be set to this macro.

      • Note that n in HIGH_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. HIGH_SCALAR_6 = "80".

    • LOAD_HIGH_SCALAR_n: a comment-macro used to uncomment/load the previous HIGH field.

      • This comment-macro must be empty in order for the HIGH_SCALAR_n macro to be loaded.

      • E.g. LOAD_HIGH_SCALAR_6 = "".

    • HSV_SCALAR_n: specifies the severity of the previous high threshold.

      • A specific string format is expected as a value: it can be either MINOR or MAJOR.

      • The associated HSV field will be set to this macro.

      • Note that n in HSV_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. HSV_SCALAR_6 = "MINOR".

    • LOAD_HSV_SCALAR_n: a comment-macro used to uncomment/load the previous HSV field.

      • This comment-macro must be empty in order for the HSV_SCALAR_n macro to be loaded.

      • E.g. LOAD_HSV_SCALAR_6 = "".

    • HIHI_SCALAR_n: specifies the high-high alarm limit threshold of the associated “scalar instruction”.

      • A double-precision floating-point number format is expected as a value.

      • The associated HIHI field will be set to this macro.

      • Note that n in HIHI_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. HIHI_SCALAR_6 = "90".

    • LOAD_HIHI_SCALAR_n: a comment-macro used to uncomment/load the previous HIHI field.

      • This comment-macro must be empty in order for the HIHI_SCALAR_n macro to be loaded.

      • E.g. LOAD_HIHI_SCALAR_6 = "".

    • HHSV_SCALAR_n: specifies the alarm severity of the high-high threshold.

      • A specific string format is expected as a value: it can be either MINOR or MAJOR.

      • The associated HHSV field will be set to this macro.

      • Note that n in HHSV_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. HHSV_SCALAR_6 = "MAJOR".

    • LOAD_HHSV_SCALAR_n: a comment-macro used to uncomment/load the previous HHSV field.

      • This comment-macro must be empty in order for the HHSV_SCALAR_n macro to be loaded.

      • E.g. LOAD_HHSV_SCALAR_6 = "".

    • LOW_SCALAR_n: specifies the low alarm limit threshold.

      • A double-precision floating-point number format is expected as a value.

      • The associated LOW field will be set from this macro.

      • Note that n in LOW_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. LOW_SCALAR_6 = "20".

    • LOAD_LOW_SCALAR_n: a comment-macro used to uncomment/load the previous LOW field.

      • This comment-macro must be empty in order for the LOW_SCALAR_n macro to be loaded.

      • E.g. LOAD_LOW_SCALAR_6 = "".

    • LSV_SCALAR_n: specifies the alarm severity of the low threshold.

      • A specific string format is expected as a value: it can be either MINOR or MAJOR.

      • The associated LSV field will be set from this macro.

      • Note that n in LSV_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. LSV_SCALAR_6 = "MINOR".

    • LOAD_LSV_SCALAR_n: a comment-macro used to uncomment/load the previous LSV field.

      • This comment-macro must be empty in order for the LSV_SCALAR_n macro to be loaded.

      • E.g. LOAD_LSV_SCALAR_6 = "".

    • LOLO_SCALAR_n: specifies the low-low alarm limit threshold.

      • A double-precision floating-point number format is expected as a value.

      • The associated LOLO field will be set from this macro.

      • Note that n in LOLO_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. LOLO_SCALAR_6 = "10".

    • LOAD_LOLO_SCALAR_n: a comment-macro used to uncomment/load the previous LOLO field.

      • This comment-macro must be empty in order for the LOLO_SCALAR_n macro to be loaded.

      • E.g. LOAD_LOLO_SCALAR_6 = "".

    • LLSV_SCALAR_n: specifies the alarm severity of the low-low threshold.

      • A specific string format is expected as a value: it can be either MINOR or MAJOR.

      • The associated LLSV field will be set from this macro.

      • Note that n in LLSV_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. LLSV_SCALAR_6 = "MAJOR".

    • LOAD_LLSV_SCALAR_n: a comment-macro used to uncomment/load the previous LLSV field.

      • This comment-macro must be empty in order for the LLSV_SCALAR_n macro to be loaded.

      • E.g. LOAD_LLSV_SCALAR_6 = "".

    • ADEL_SCALAR_n: specifies the dead-band for archive monitors. The ADEL and MDEL fields specify a minimum delta which a changing value must surpass before the value-change monitors are invoked. If these fields have a value of zero, a monitor is triggered every time the value changes; if they have a value of -1, monitors are triggered every time the record is processed. The ADEL field is used by archive monitors and the MDEL field by all other types of monitors.

      • A double-precision floating-point number format is expected as a value.

      • The associated ADEL field will be set from this macro.

      • Note that n in ADEL_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. ADEL_SCALAR_6 = "-1".

    • LOAD_ADEL_SCALAR_n: a comment-macro used to uncomment/load the previous ADEL field.

      • This comment-macro must be empty in order for the ADEL_SCALAR_n macro to be loaded.

      • E.g. LOAD_ADEL_SCALAR_6 = "".

    • MDEL_SCALAR_n: specifies the dead-band for value-change monitors. The ADEL and MDEL fields specify a minimum delta which a changing value must surpass before the value-change monitors are invoked. If these fields have a value of zero, a monitor is triggered every time the value changes; if they have a value of -1, monitors are triggered every time the record is processed. The ADEL field is used by archive monitors and the MDEL field by all other types of monitors.

      • A double-precision floating-point number format is expected as a value.

      • The associated MDEL field will be set from this macro.

      • Note that n in MDEL_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.

      • E.g. MDEL_SCALAR_6 = "-1".

    • LOAD_MDEL_SCALAR_n: a comment-macro used to uncomment/load the previous MDEL field.

      • This comment-macro must be empty in order for the MDEL_SCALAR_n macro to be loaded.

      • E.g. LOAD_MDEL_SCALAR_6 = "".

    • See also

      See guides/EPICS_Process_Database_Concepts.html for more details about alarm specification with EPICS.
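    As an illustration, the alarm and dead-band macros above can be combined in one .substitutions row. The excerpt below is a hypothetical sketch using the standard EPICS pattern syntax; the PV name and threshold values are illustrative, and a real row must also list the mandatory scalar macros (PV_NAME_SCALAR_6, its shell instruction, LOAD_SCALAR_6, etc.) described earlier in this section.

    ```
    # Hypothetical excerpt: alarm thresholds and dead-band for scalar instruction n=6.
    # Empty comment-macros ("") uncomment/load the corresponding fields.
    file "$(TOP)/db/ssh_monitor_core.template" {
        pattern { PV_NAME_SCALAR_6, HIGH_SCALAR_6, LOAD_HIGH_SCALAR_6, HSV_SCALAR_6, LOAD_HSV_SCALAR_6, LOLO_SCALAR_6, LOAD_LOLO_SCALAR_6, LLSV_SCALAR_6, LOAD_LLSV_SCALAR_6, ADEL_SCALAR_6, LOAD_ADEL_SCALAR_6 }
            { "cpu_load", "80", "", "MINOR", "", "10", "", "MAJOR", "", "-1", "" }
    }
    ```

    Here the record would go to MINOR alarm above 80 and to MAJOR alarm below 10, and every processing would post an archive monitor (ADEL = -1).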

  • A new “string instruction” must define the following macros:

    • PV_NAME_STRING_n: specifies a PV.

      • A string format is expected as a value.

      • Note that n in PV_NAME_STRING_n is a unique ID number (unique within the scope) which must range between 1 and 5.

      • E.g. PV_NAME_STRING_4 = "last_10_errors_log".

    • STRING_n_INSTRUCTION: specifies a shell instruction, whose output is expected to be a sequence of characters.

      • A string format is expected as a value.

      • Note that n in STRING_n_INSTRUCTION is the same unique ID number used in the previous PV_NAME_STRING_n macro.

      • E.g. STRING_4_INSTRUCTION = "cat /var/log/errors.log | grep -i --max-count=10 'error'"

      • Note that it is possible to specify that the shell instruction should be executed on the host machine instead of the target. To do so, add #LOCAL at the end of the STRING_n_INSTRUCTION value, e.g. STRING_4_INSTRUCTION = "uname --all #LOCAL". In this case, the SSH options and arguments specified in SSH_OPTS_AND_ARGS are not taken into account.

      • Note that it is possible to specify that the shell instruction should be logged in the IOC Shell. To do so, add #DEBUG at the end of the STRING_n_INSTRUCTION value, e.g. STRING_4_INSTRUCTION = "uname --all #DEBUG".

      • Note that it is possible to specify both at the same time: that the shell instruction should be executed on the host machine and logged in the IOC Shell. To do so, add #LOCAL+DEBUG, or #DEBUG+LOCAL, at the end of the STRING_n_INSTRUCTION value, e.g. STRING_4_INSTRUCTION = "uname --all #LOCAL+DEBUG".

    • LOAD_STRING_n: a comment-macro used to uncomment/load the whole record called ${PREFIX}${CATEGORY}${PV_NAME_STRING_n} (located in the ${TOP}/db/ssh_monitor_core.template file).

      • Note that n in LOAD_STRING_n is the same unique ID number used in the previous PV_NAME_STRING_n macro.

      • This comment-macro must be empty (this is mandatory) in order for the record called ${PREFIX}${CATEGORY}${PV_NAME_STRING_n} to be loaded.

      • E.g. LOAD_STRING_4 = "".
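  As an illustration, the three mandatory macros of a “string instruction” can be combined in one .substitutions row. The excerpt below is a hypothetical sketch using the standard EPICS pattern syntax, reusing the n=4 examples from above.

  ```
  # Hypothetical excerpt: a minimal "string instruction" row for n=4.
  # The empty LOAD_STRING_4 value uncomments/loads the whole record.
  file "$(TOP)/db/ssh_monitor_core.template" {
      pattern { PV_NAME_STRING_4, STRING_4_INSTRUCTION, LOAD_STRING_4 }
          { "last_10_errors_log", "cat /var/log/errors.log | grep -i --max-count=10 'error'", "" }
  }
  ```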

  • Optionally, a “string instruction” can also define the STRING_n_INSTRUCTION_LENGTH macro, which specifies the number of characters used in STRING_n_INSTRUCTION. The default value is 1024; if the value of STRING_n_INSTRUCTION contains more than 1024 characters, you must set this macro, or the associated shell instruction will not work. You can also set it to a tailor-made length in order to improve memory efficiency.

    • An integer number format is expected as a value.

    • It defaults to 1024.

    • Note that n in STRING_n_INSTRUCTION_LENGTH is the same unique ID number used in the previous PV_NAME_STRING_n macro.

    • E.g. if STRING_4_INSTRUCTION is set like so: STRING_4_INSTRUCTION = "cat /var/log/errors.log | grep -i --max-count=10 'error'" (a 56-character shell instruction), then you can set STRING_4_INSTRUCTION_LENGTH like so: STRING_4_INSTRUCTION_LENGTH = "57" (56 characters + 1 for the trailing end-of-line character).
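  A quick way to measure an instruction's length before setting this macro is shown below; the helper variable name is illustrative, and the instruction is the n=4 example from above.

  ```shell
  # Measure a shell instruction's length, then add 1 for the trailing
  # end-of-line character when setting STRING_n_INSTRUCTION_LENGTH.
  instruction="cat /var/log/errors.log | grep -i --max-count=10 'error'"
  printf '%s' "$instruction" | wc -c   # 56, so set the macro to "57"
  ```
  
  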

  • Optionally, a “string instruction” can also define the following macros in order to specify some records fields:

    • STRING_n_RESULT_MAX_LENGTH: specifies the maximum size (number of characters) of the result of the associated “string instruction”.

      • An integer number format is expected as a value.

      • It defaults to 1024.

      • The associated NELM field will be set from this macro.

      • Note that n in STRING_n_RESULT_MAX_LENGTH is the same unique ID number used in the previous PV_NAME_STRING_n macro.

      • E.g. STRING_4_RESULT_MAX_LENGTH = "2048".

    • DESC_STRING_n: specifies the description of the associated “string instruction”.

      • A string format is expected as a value.

      • Must not exceed 40 characters.

      • The associated DESC field will be set from this macro.

      • Note that n in DESC_STRING_n is the same unique ID number used in the previous PV_NAME_STRING_n macro.

      • E.g. DESC_STRING_4 = "a short description for my PV".

    • LOAD_DESC_STRING_n: a comment-macro used to uncomment/load the previous DESC field.

      • This comment-macro must be empty in order for the DESC_STRING_n macro to be loaded.

      • E.g. LOAD_DESC_STRING_4 = "".
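As an illustration, the optional “string instruction” macros can be added to the same row as the mandatory ones. The excerpt below is a hypothetical sketch using the standard EPICS pattern syntax; the values are illustrative and reuse the n=4 examples from above.

```
# Hypothetical excerpt: a "string instruction" row for n=4 with the
# optional length, result size, and description macros.
file "$(TOP)/db/ssh_monitor_core.template" {
    pattern { PV_NAME_STRING_4, STRING_4_INSTRUCTION, LOAD_STRING_4, STRING_4_INSTRUCTION_LENGTH, STRING_4_RESULT_MAX_LENGTH, DESC_STRING_4, LOAD_DESC_STRING_4 }
        { "last_10_errors_log", "cat /var/log/errors.log | grep -i --max-count=10 'error'", "", "57", "2048", "a short description for my PV", "" }
}
```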


See also

The syntax and format of a .substitutions file are covered in this documentation.

See also

The syntax and format of a .template file are covered in this documentation.

Important

Finally, in the .substitutions files, it is strongly recommended to comment generously, in order to make clear which shell instruction does what, and what each scope / category is intended for.