.cmd, .substitutions and .template explanations#
What’s a .cmd?#
See also
For more details about how to set up a .cmd, see the .cmd and .substitutions how-to.
What’s a .substitutions?#
See also
For an introduction to the .substitutions file, see its definition: .substitutions. See the associated sources of this definition for more details.
See also
For more details about how to set up a .substitutions, see the .cmd and .substitutions how-to.
See also
For more details about the .substitutions files, see the .substitutions references.
What’s a .template?#
See also
For more details about the .template file, see the .template references.
.cmd workflow#
The recommended way of setting up SSH Monitor is to have a single .cmd file per target to monitor, i.e. a single SSH Monitor IOC program per target. Here is why:
The record scanning/processing step is not multi-threaded
(see https://epics.anl.gov/core-talk/2019/msg00197.php).
So every record will be scanned/processed one by one,
including the aSub records
calling SSH commands.
If one or multiple targets are unreachable,
then all the SSH commands of the associated
aSub records
will time out, one after the other.
This can result in a total scanning/processing time
longer than the interval defined by the SCAN fields of the associated aSub records,
i.e. a “scan processing over-run” will happen.
Even without unreachable targets (let’s say that all targets are reachable via SSH), if there really are a lot of monitored targets in a single IOC program, then it might also end up with “scan processing over-runs”. The reason is that some SSH shell instructions can take more than a second to complete: if there are a lot of them, then, inevitably, the total scanning/processing time will also exceed the interval defined with the SCAN field…
So, the best way to avoid such “scan processing over-run” problems is to configure one IOC program per target. This way, IOC programs will run in parallel without interfering with each other.
Important
When running multiple IOC programs in parallel on the same machine,
make sure that each IOC program defines a unique EPICS_CAS_SERVER_PORT
with the epicsEnvSet IOC Shell command.
You can refer to the syntax used in the .cmd and .substitutions how-to
(detailed in the .cmd explanations below).
The reasons are given in the TCP ports limitations explanations.
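For illustration, here is a hypothetical sketch of how two target-specific .cmd files could reserve distinct ports; the file names, prefixes, and port numbers are placeholders, not part of SSH Monitor itself:

```
# st_target1.cmd
epicsEnvSet("EPICS_CAS_SERVER_PORT", "5064")
epicsEnvSet("PREFIX", "target1:")

# st_target2.cmd (a second IOC program running on the same machine)
epicsEnvSet("EPICS_CAS_SERVER_PORT", "5066")
epicsEnvSet("PREFIX", "target2:")
```

Port 5065 is skipped in this sketch because it is the default Channel Access beacon/repeater port.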
.cmd explanations#
The st_tests.cmd file is a reference file that can be used as an example for all your SSH Monitor related .cmd files; it is also used for the SSH Monitor tests. This file is similar to the one below:
#!../../bin/linux-x86_64/myTargetMonitoring
< envPaths
epicsEnvSet("IOCSH_PS1", "Target1 Monitoring> ")
epicsEnvSet("SUBSTITUTIONS_FILES", "${TOP}/db/*.sub*")
epicsEnvSet("EPICS_CAS_SERVER_PORT", "5064")
epicsEnvSet("PREFIX", "target1:")
epicsEnvSet("SSH_OPTS_AND_ARGS", "target-sshmonitor-user@192.168.1.3 -i /home/host-sshmonitor-user/.ssh/host_to_target_ssh_monitor_ed25519_key -o BatchMode=yes -o PasswordAuthentication=no -o ConnectionAttempts=3 -x -o ControlMaster=auto -o ControlPath=~/.ssh/%C -o ControlPersist=60s")
dbLoadDatabase("${SSHMONITOR}/dbd/menuScan.dbd")
dbLoadDatabase("${TOP}/dbd/myTargetMonitoring.dbd")
myTargetMonitoring_registerRecordDeviceDriver(pdbbase)
var(dbRecordsOnceOnly, 1)
dbLoadTemplate("${TOP}/db/example_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/processors_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/memory_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/partitions_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/connection_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/db/archiver_appliance_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/db/your_own_specific_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/iocBoot/${IOC}/your_other_own_specific_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
system('cat "${SUBSTITUTIONS_FILES}" | grep "CATEGORY = " | tr -d " " | tr "\"" " " | awk "{print $1 $2}" | grep -v "\#" 2>/dev/null | uniq -D | grep . && (echo "ERROR: illegal categories: multiple CATEGORY macros share the same name!"; kill -9 $PPID) || echo "INFO: substitutions files ${SUBSTITUTIONS_FILES} are OK"')
iocInit
Here are the details for each line:
#!../../bin/linux-x86_64/myTargetMonitoring instructs the program loader to run the myTargetMonitoring program (located at ../../bin/linux-x86_64/myTargetMonitoring, relative to the .cmd file), passing the content of the .cmd file as the first argument. This is a shebang. Note that the content of this .cmd file is a script written in a dedicated EPICS scripting language called IOC Shell (iocsh).
< envPaths runs the content of the envPaths file, next to the .cmd file. The envPaths file is automatically generated at build time and contains EPICS environment variables, specifying, most of the time, the location of the Top (the ${TOP} environment variable), the location of the IOC program in the iocBoot directory of the Top (the ${IOC} environment variable), the location of the EPICS base, etc.
epicsEnvSet("IOCSH_PS1", "Target1 Monitoring> ") changes the prompt of the IOC Shell. This is very useful when running multiple IOC programs in parallel, because it helps identify them clearly.
epicsEnvSet("SUBSTITUTIONS_FILES", "${TOP}/db/*.sub*") sets an environment variable locating your .substitutions files. The SUBSTITUTIONS_FILES environment variable will be used later to check the content of the .substitutions files (in the system(...) line below).
epicsEnvSet("EPICS_CAS_SERVER_PORT", "5064") specifies the TCP port used by EPICS for Channel Access communications. When running multiple IOC programs on the same machine, it is recommended to set a unique port for each IOC program, for the reasons given in the TCP ports limitations explanations.
epicsEnvSet("PREFIX", "target1:") sets the prefix for all the records’ names (in this example, the prefix is target1:).
Important
This prefix should be unique across all the .substitutions files, e.g. a unique target name.
epicsEnvSet("SSH_OPTS_AND_ARGS", "target-sshmonitor-user@192.168.1.3 -i /home/host-sshmonitor-user/.ssh/host_to_target_ssh_monitor_ed25519_key -o BatchMode=yes -o PasswordAuthentication=no -o ConnectionAttempts=3 -x -o ControlMaster=auto -o ControlPath=~/.ssh/%C -o ControlPersist=60s")
configures the SSH options and arguments (as described by man ssh on the host used to send SSH commands).
Tip
Recommended options and arguments to use:
-i /path/to/your/ssh/key: selects the path to the file from which the identity (private key) for public key authentication is read.
Note that if the remote user you are connecting to has an empty password, then you actually don’t need to set up a key-based SSH connection. A regular SSH connection will be enough, because there will be no password prompt interrupting the SSH Monitor IOC program. In this case, you can set the -i option to /dev/null. However, I strongly suggest not allowing empty passwords for your users (for obvious security reasons) and setting up a key-based SSH connection (as described in the SSH how-to).
-o BatchMode=yes: disables user interaction such as password prompts and host key confirmation requests (this option is useful in scripts and other batch jobs where no user is present to interact with).
-o PasswordAuthentication=no: prevents a password authentication prompt from appearing.
-o ConnectionAttempts=n: n connection attempt(s) to try before exiting (one per second).
-x: disables X11 forwarding.
SSH_Monitor-target-user-name@target-ip-address: sets the target user and IP address.
Tip
Very useful options to use:
Avoid closing and reopening a new SSH connection for each instruction/PV, by adding the following options:
-o ControlMaster=auto: enables the sharing of multiple sessions over a single network connection. If set to auto, it creates a master session automatically, but if a master session is already available, subsequent sessions are automatically multiplexed.
-o ControlPath=~/.ssh/%C: specifies the path to the control socket used for connection sharing. In this path, %C generates a SHA1 hash (based on target information like username, hostname, and port number) in order to get a short and unique socket name.
-o ControlPersist=600s: when used in conjunction with ControlMaster, specifies that the master connection should remain open in the background (for 600 seconds here), waiting for future client connections.
Connect through SSH intermediate machine(s):
-J intermediate-user@intermediate-ip-address: connects to the target by first making an SSH connection to an intermediate (jump) machine. Multiple jump hops may be specified, separated by commas.
With the previous option, a second -i argument might be needed to provide the SSH key for the intermediate machine.
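As an illustrative sketch (user names, addresses, and key paths are placeholders), an SSH_OPTS_AND_ARGS value going through one jump host could look like:

```
epicsEnvSet("SSH_OPTS_AND_ARGS", "target-user@10.0.0.5 -J jump-user@10.0.0.1 -i /home/host-user/.ssh/target_key -i /home/host-user/.ssh/jump_key -o BatchMode=yes -o PasswordAuthentication=no -x")
```

Multiple -i options are accepted by ssh; the identities are tried in sequence.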
Warning
Options to avoid:
-o ConnectTimeout=n: specifies n second(s) of timeout when connecting to the SSH server. It might not be a good idea to change the default timeout, because it is actually based on the default TCP timeout of the system (see the ConnectTimeout option in $ man ssh_config). Configuring one IOC program per target should be enough to avoid having to lower the SSH timeout. The risk of modifying the SSH timeout behavior is missing some SSH connections and ending up with incorrect PVs. More details about the TCP timeout here: https://stackoverflow.com/a/15485308. With or without configuring the SSH timeout, it might be worth setting the ConnectionAttempts option to 3 or 4, in order to increase the reliability of the program (even on a shaky network).
dbLoadDatabase("${SSHMONITOR}/dbd/menuScan.dbd") gives access to additional EPICS SCAN intervals, which is needed by SSH Monitor. Replace the ${SSHMONITOR} macro with the one specified in your myTargetMonitoring/configure/RELEASE file, if the macro name isn’t SSHMONITOR already.
dbLoadDatabase("${TOP}/dbd/myTargetMonitoring.dbd") loads the automatically generated myTargetMonitoring.dbd file (database definition file).
myTargetMonitoring_registerRecordDeviceDriver(pdbbase) registers the “registrars” defined in the loaded database definitions.
var dbRecordsOnceOnly 1
causes an error to be thrown if multiple records share the same name (but it won’t prevent the IOC program from starting). This feature is very useful for SSH Monitor, because a lot of record names are defined by the user with macros, so it’s quite easy to make mistakes with unfortunate copy/pastes, ending up with duplicated macros (leading to duplicated record names).
Unfortunately, var dbRecordsOnceOnly 1 won’t prevent the IOC program from starting if an error is thrown. In fact, it appears that no such aborting mechanism is available right now in EPICS: https://epics.anl.gov/tech-talk/2019/msg00730.php.
See also
See the database definition documentation for more details.
dbLoadTemplate("${TOP}/db/example_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/processors_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/memory_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/partitions_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
dbLoadTemplate("${TOP}/db/connection_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/db/archiver_appliance_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/db/your_own_specific_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
#dbLoadTemplate("${TOP}/iocBoot/${IOC}/your_other_own_specific_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}")
loads the wanted .substitutions files. See the next section for more details about the .substitutions files syntax.
system('cat "${SUBSTITUTIONS_FILES}" | grep "CATEGORY = " | tr -d " " | tr "\"" " " | awk "{print $1 $2}" | grep -v "\#" 2>/dev/null | uniq -D | grep . && (echo "ERROR: illegal categories: multiple CATEGORY macros share the same name!"; kill -9 $PPID) || echo "INFO: substitutions files ${SUBSTITUTIONS_FILES} are OK"')
runs a system command (expecting your host machine to have a POSIX shell to run the command with) that looks for the “CATEGORY” macro in your .substitutions files. It is a common error to duplicate this macro, causing record names / PV names to be duplicated. So if the system(...) command finds duplicated “CATEGORY” macros, it throws an error message and stops the IOC program.
Also, note that this type of system command is available in the IOC Shell because the line myTargetMonitoring_DBD += system.dbd was added to the myTargetMonitoringApp/src/Makefile file while installing SSH Monitor.
See also
See the installation how-to guide for more details.
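The core of that duplicate-CATEGORY check can be reproduced on its own. Here is a minimal sketch, simplified from the pipeline above, that runs against a throwaway file (the file name and CATEGORY values are hypothetical):

```shell
# Create a throwaway .substitutions fragment with a duplicated CATEGORY value.
cat > /tmp/demo.substitutions <<'EOF'
  CATEGORY = "cpu:"
  CATEGORY = "ram:"
  CATEGORY = "cpu:"
EOF

# Keep only the CATEGORY assignments, strip spaces, and print duplicates.
# Any non-empty output means two scopes share the same CATEGORY value.
grep 'CATEGORY = ' /tmp/demo.substitutions | tr -d ' ' | sort | uniq -d
# → CATEGORY="cpu:"
```

The real check in the .cmd file additionally kills the IOC process (kill -9 $PPID) when a duplicate is found.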
iocInit
starts the IOC program.
See also
More details about .cmd files can be found here: specs/IOCInit.html
.template explanations#
There is a single .template file used by SSH Monitor: ssh_monitor_core.template.
It can be implemented one or multiple times
by any .substitutions file
(in the file "${TOP}/db/ssh_monitor_core.template" { ... } part
of the .substitutions file).
This .template file is heavily documented in the reference documentation (using Doxygen).
See also
See the reference of ssh_monitor_core.template for more details.
.substitutions explanations#
In order to create and configure a .substitutions file for SSH Monitor
(e.g. myTargetMonitoringTop/iocBoot/iocMyTargetMonitoring/target1.substitutions),
you can look at the SSH Monitor .substitutions files
as references/examples (they are used for the SSH Monitor tests).
Tip
In those files, inside a macro value, some characters need to be escaped if you don’t want them to be interpreted by the EPICS parser:
inside double quotes, escape $ with \\\\
inside double quotes, escape ( with \\
inside double quotes, escape ) with \\
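As a quick illustration of those escaping rules (the macro and its value mirror the examples used later in this document):

```
# The awk field reference $7 is written \\\\$7 so that the EPICS parser
# passes a literal $7 through to the shell:
SCALAR_1_INSTRUCTION = "free -m | grep 'Mem:' | awk '{print \\\\$7}'"
```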
You’ll also notice that those files are divided into two parts:
The global { ... } part#
This part sets the macros used across the whole .substitutions file. Here are the details of those macros:
PREFIX: this macro must (as in mandatory) be specified in the global { ... } part. This macro is a prefix for all the record names defined through this file.
A string format is expected as a value.
Important
This prefix should be unique across all the .substitutions files, e.g. a unique target name.
The PREFIX macro is set through the .cmd file via the PREFIX_MACRO and PREFIX macros.
See, in the previous .cmd section, the explanations about the epicsEnvSet("PREFIX", "...") line for more details about the content of this macro.
SSH_OPTS_AND_ARGS: this macro must (as in mandatory) be specified in the global { ... } part. This macro specifies the SSH options and arguments used for the SSH command executed on the host (as described by man ssh on the host machine).
A string format is expected as a value.
The SSH_OPTS_AND_ARGS macro is set through the .cmd file via the SSH_OPTS_AND_ARGS_MACRO and SSH_OPTS_AND_ARGS macros.
See, in the previous .cmd section, the explanations about the epicsEnvSet("SSH_OPTS_AND_ARGS", "...") line for more details about the content of this macro.
SSH_OPTS_AND_ARGS_LENGTH: this macro can (as in optional) be specified in the global { ... } part.
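Putting these macros together, the global part typically looks like the following sketch (matching the dbLoadTemplate calls shown in the .cmd section, which pass in PREFIX_MACRO and SSH_OPTS_AND_ARGS_MACRO):

```
global {
    PREFIX = "$(PREFIX_MACRO)",
    SSH_OPTS_AND_ARGS = "$(SSH_OPTS_AND_ARGS_MACRO)"
}
```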
The file "${TOP}/db/ssh_monitor_core.template" { ... } part#
This part allows implementing the same ssh_monitor_core.template file multiple times, with a
different set of macros each time.
Each macro set is defined in its own scope ({ ... })
within the file "${TOP}/db/ssh_monitor_core.template" { ... } part.
So each new scope ({ ... }) inside file "${TOP}/db/ssh_monitor_core.template" { ... }
will implement the same ssh_monitor_core.template file,
but with different macros.
Here are the details about how to implement the ssh_monitor_core.template file
with a new scope ({ ... }) inside file "${TOP}/db/ssh_monitor_core.template" { ... }:
A new scope must define the CATEGORY macro with a unique value (unique in the file).
A string format is expected as a value.
E.g. CATEGORY = "my:very:unique:category:name".
Important
Don’t forget to set a value for the CATEGORY macro; do not leave it empty!
Warning
Also, be careful not to duplicate CATEGORY values inside a .substitutions file, or some associated records might be misnamed or share the same name: if that happens, one record could overwrite others!
Note that the CATEGORY macro will be part of the PV names defined in the same scope. Each PV name will be prefixed like so: ${PREFIX}${CATEGORY}.
A new scope can optionally define the SCAN macro with one of the following values (in order to specify the time interval at which you want the data to be retrieved; 60 seconds by default):
“1 hour”
“30 minute”
“20 minute”
“15 minute”
“10 minute”
“5 minute”
“1 minute”
“60 second” (DEFAULT value if not specified)
“30 second”
“20 second”
“10 second”
“5 second”
“2 second”
“1 second”
“.5 second”
“.2 second”
“.1 second”
Note
Unlike other fields, the SCAN macro is specified for a whole scope / category. This is due to the fact that only one aSub record is defined in the ssh_monitor_core.template file (you can also check this file for more details about the aSub record), and thus only one associated SCAN field. So, in every SSH Monitor .substitutions file, the SCAN macro is generally declared next to the CATEGORY macro.
Tip
If you want to override the default SCAN value, e.g. when loading the SSH Monitor default .substitutions files, then you can either:
(This is the preferred method) Load the SSH Monitor .substitutions files from your .cmd file, e.g. like so: dbLoadTemplate("${TOP}/db/example_monitoring.substitutions", "PREFIX_MACRO=${PREFIX}, SSH_OPTS_AND_ARGS_MACRO=${SSH_OPTS_AND_ARGS}, SCAN_MACRO=1 minute"). Note that the SCAN_MACRO=1 minute macro will override the default SCAN value.
Or modify the SCAN macro directly in the .substitutions file.
A new scope can also optionally define the CACHE_STATUS_NAME macro. This macro specifies the name of the PV indicating the status of the cache used by the new scope.
A string format is expected as a value.
It defaults to “CacheStatus”, which doesn’t really need to be changed.
A new scope can also optionally define the OSV_CACHE_STATUS macro. This macro specifies the severity of the PV indicating the status of the cache used by the new scope (if its value reaches 1).
A specific string format is expected as a value: it can be either MINOR or MAJOR.
It defaults to “MAJOR”, which doesn’t really need to be changed.
A new scope is composed of “instructions”. An “instruction” is a set of macros that will specify what shell instruction to run on the target machine in order to retrieve the expected data.
A new scope can define up to 15 scalar “instructions” and up to 5 string “instructions”.
A “scalar instruction” is simply a shell instruction whose output is expected to be a double-precision floating-point number.
A “string instruction” is simply a shell instruction whose output is expected to be a sequence of characters.
A new “scalar instruction” must (as in mandatory) define the following macros:
PV_NAME_SCALAR_n: specifies a PV name.
A string format is expected as a value.
Note that n in PV_NAME_SCALAR_n is a unique ID number (unique in the scope) which has to range between 1 and 15.
E.g. PV_NAME_SCALAR_6 = "available_ram".
SCALAR_n_INSTRUCTION: specifies a shell instruction whose output is expected to be a double-precision floating-point number.
A string format is expected as a value.
Note that n in SCALAR_n_INSTRUCTION is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.
E.g. SCALAR_6_INSTRUCTION = "free -m | grep 'Mem:' | awk '{print \\\\$7}'".
Note that it is possible to specify that the shell instruction should be executed on the host machine instead of the target. To do so, add #LOCAL at the end of the SCALAR_n_INSTRUCTION value, e.g. SCALAR_6_INSTRUCTION = "uname --kernel-release | cut -c1-1 #LOCAL". In this case, the SSH options specified in SSH_OPTS_AND_ARGS are not taken into account.
Note that it is possible to specify that the shell instruction should be logged in the IOC Shell. To do so, add #DEBUG at the end of the SCALAR_n_INSTRUCTION value, e.g. SCALAR_6_INSTRUCTION = "uname --kernel-release | cut -c1-1 #DEBUG".
Note that it is possible to specify both at the same time: the shell instruction is executed on the host machine and logged in the IOC Shell. To do so, add #LOCAL+DEBUG, or #DEBUG+LOCAL, at the end of the SCALAR_n_INSTRUCTION value, e.g. SCALAR_6_INSTRUCTION = "uname --kernel-release | cut -c1-1 #LOCAL+DEBUG".
LOAD_SCALAR_n: a comment-macro used to uncomment/load the whole record called ${PREFIX}${CATEGORY}${PV_NAME_SCALAR_n} (located in the ${TOP}/db/ssh_monitor_core.template file).
Note that n in LOAD_SCALAR_n is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.
This comment-macro must (as in mandatory) be empty in order for the record called ${PREFIX}${CATEGORY}${PV_NAME_SCALAR_n} to be loaded.
E.g. LOAD_SCALAR_6 = "".
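Putting the mandatory macros together, a minimal sketch of one scope defining a single scalar instruction could look like this (the CATEGORY, SCAN, and instruction values are illustrative):

```
file "${TOP}/db/ssh_monitor_core.template" {
    {
        CATEGORY = "memory:",
        SCAN = "30 second",
        PV_NAME_SCALAR_6 = "available_ram",
        SCALAR_6_INSTRUCTION = "free -m | grep 'Mem:' | awk '{print \\\\$7}'",
        LOAD_SCALAR_6 = ""
    }
}
```

With PREFIX set to target1:, this scope would produce a PV named target1:memory:available_ram.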
Optionally, a “scalar instruction” can also define the SCALAR_n_INSTRUCTION_LENGTH macro. This macro specifies the number of characters used in SCALAR_n_INSTRUCTION. The default value is 1024, so if the value of SCALAR_n_INSTRUCTION contains more than 1024 characters, then you will have to specify it, or the associated shell instruction won’t work. You can also use this macro to specify a tailor-made length in order to improve memory efficiency.
An integer number format is expected as a value.
It defaults to 1024.
Note that n in SCALAR_n_INSTRUCTION_LENGTH is the same unique ID number used in the previous PV_NAME_SCALAR_n macro.
E.g. if SCALAR_6_INSTRUCTION is set like so: SCALAR_6_INSTRUCTION = "free -m | grep 'Mem:' | awk '{print \\\\$7}'" (a 43-character-long shell instruction), then you can set SCALAR_6_INSTRUCTION_LENGTH like so: SCALAR_6_INSTRUCTION_LENGTH = "44" (43 + 1 end-of-line character).
Optionally, a “scalar instruction” can also define the following macros in order to specify some fields of its record (like precision, description, alarm thresholds, alarm severity, etc.):
PREC_SCALAR_n: specifies the floating-point precision (i.e. the number of digits to show after the decimal point) with which to display the value of the associated “scalar instruction”.
DESC_SCALAR_n: specifies the description of the associated “scalar instruction”.
LOAD_DESC_SCALAR_n: a comment-macro used to uncomment/load the previous DESC field.
This comment-macro must be empty in order for the DESC_SCALAR_n macro to be loaded.
E.g. LOAD_DESC_SCALAR_6 = "".
EGU_SCALAR_n: specifies the “engineering unit” of the associated “scalar instruction”.
LOAD_EGU_SCALAR_n: a comment-macro used to uncomment/load the previous EGU field.
This comment-macro must be empty in order for the EGU_SCALAR_n macro to be loaded.
E.g. LOAD_EGU_SCALAR_6 = "".
HYST_SCALAR_n: specifies the hysteresis factor of the associated “scalar instruction”. This factor allows preventing alarm chattering from an input signal that is close to one of the limits and suffers from significant readout noise. The value of the associated “scalar instruction” must change by at least the hysteresis factor before the alarm status and severity are affected.
LOAD_HYST_SCALAR_n: a comment-macro used to uncomment/load the previous HYST field.
This comment-macro must be empty in order for the HYST_SCALAR_n macro to be loaded.
E.g. LOAD_HYST_SCALAR_6 = "".
DRVL_SCALAR_n: specifies the drive low limit of the associated “scalar instruction”. With this limit, the value of the associated “scalar instruction” is clipped to the following range: from DRVL to DRVH inclusive (provided that DRVH > DRVL).
LOAD_DRVL_SCALAR_n: a comment-macro used to uncomment/load the previous DRVL field.
This comment-macro must be empty in order for the DRVL_SCALAR_n macro to be loaded.
E.g. LOAD_DRVL_SCALAR_6 = "".
DRVH_SCALAR_n: specifies the drive high limit of the associated “scalar instruction”. With this limit, the value of the associated “scalar instruction” is clipped to the following range: from DRVL to DRVH inclusive (provided that DRVH > DRVL).
LOAD_DRVH_SCALAR_n: a comment-macro used to uncomment/load the previous DRVH field.
This comment-macro must be empty in order for the DRVH_SCALAR_n macro to be loaded.
E.g. LOAD_DRVH_SCALAR_6 = "".
HOPR_SCALAR_n: specifies the high operating range of the associated “scalar instruction”, which is the upper display limit for its value. If this field is defined, it must be in the range: DRVL <= LOPR <= HOPR <= DRVH. If so, it will act as an upper limit for the VAL, HIHI, HIGH, LOW, and LOLO fields, only for client display.
LOAD_HOPR_SCALAR_n: a comment-macro used to uncomment/load the previous HOPR field.
This comment-macro must be empty in order for the HOPR_SCALAR_n macro to be loaded.
E.g. LOAD_HOPR_SCALAR_6 = "".
LOPR_SCALAR_n: specifies the low operating range of the associated “scalar instruction”, which is the lower display limit for its value. If this field is defined, it must be in the range: DRVL <= LOPR <= HOPR <= DRVH. If so, it will act as a lower limit for the VAL, HIHI, HIGH, LOW, and LOLO fields, only for client display.
LOAD_LOPR_SCALAR_n: a comment-macro used to uncomment/load the previous LOPR field.
This comment-macro must be empty in order for the LOPR_SCALAR_n macro to be loaded.
E.g. LOAD_LOPR_SCALAR_6 = "".
HIGH_SCALAR_n: specifies the high alarm limit threshold of the associated “scalar instruction”.
LOAD_HIGH_SCALAR_n: a comment-macro used to uncomment/load the previous HIGH field.
This comment-macro must be empty in order for the HIGH_SCALAR_n macro to be loaded.
E.g. LOAD_HIGH_SCALAR_6 = "".
HSV_SCALAR_n: specifies the alarm severity of the previous high threshold.
LOAD_HSV_SCALAR_n: a comment-macro used to uncomment/load the previous HSV field.
This comment-macro must be empty in order for the HSV_SCALAR_n macro to be loaded.
E.g. LOAD_HSV_SCALAR_6 = "".
HIHI_SCALAR_n: specifies the high-high alarm limit threshold of the associated “scalar instruction”.
LOAD_HIHI_SCALAR_n: a comment-macro used to uncomment/load the previous HIHI field.
This comment-macro must be empty in order for the HIHI_SCALAR_n macro to be loaded.
E.g. LOAD_HIHI_SCALAR_6 = "".
HHSV_SCALAR_n: specifies the alarm severity of the high-high threshold.
LOAD_HHSV_SCALAR_n: a comment-macro used to uncomment/load the previous HHSV field.
This comment-macro must be empty in order for the HHSV_SCALAR_n macro to be loaded.
E.g. LOAD_HHSV_SCALAR_6 = "".
LOW_SCALAR_n: specifies the low alarm limit threshold.
LOAD_LOW_SCALAR_n: a comment-macro used to uncomment/load the previous LOW field.
This comment-macro must be empty in order for the LOW_SCALAR_n macro to be loaded.
E.g. LOAD_LOW_SCALAR_6 = "".
LSV_SCALAR_n: specifies the alarm severity of the low threshold.
LOAD_LSV_SCALAR_n: a comment-macro used to uncomment/load the previous LSV field.
This comment-macro must be empty in order for the LSV_SCALAR_n macro to be loaded.
E.g. LOAD_LSV_SCALAR_6 = "".
LOLO_SCALAR_n: specifies the low-low alarm limit threshold.
LOAD_LOLO_SCALAR_n: a comment-macro used to uncomment/load the previous LOLO field.
This comment-macro must be empty in order for the LOLO_SCALAR_n macro to be loaded.
E.g. LOAD_LOLO_SCALAR_6 = "".
LLSV_SCALAR_n: specifies the alarm severity of the low-low threshold.
LOAD_LLSV_SCALAR_n: a comment-macro used to uncomment/load the previous LLSV field.
This comment-macro must be empty in order for the LLSV_SCALAR_n macro to be loaded.
E.g. LOAD_LLSV_SCALAR_6 = "".
ADEL_SCALAR_n: specifies the dead-band for archive monitors. The ADEL and MDEL fields specify a minimum delta which a changing value must surpass before the value-change monitors are invoked. If these fields have a value of zero, a monitor is triggered every time the value changes; if they have a value of -1, monitors are triggered every time the record is processed. The ADEL field is used by archive monitors and the MDEL field by all other types of monitors.
LOAD_ADEL_SCALAR_n: a comment-macro used to uncomment/load the previous ADEL field.
This comment-macro must be empty in order for the ADEL_SCALAR_n macro to be loaded.
E.g. LOAD_ADEL_SCALAR_6 = "".
MDEL_SCALAR_n: specifies the dead-band for value-change monitors. The ADEL and MDEL fields specify a minimum delta which a changing value must surpass before the value-change monitors are invoked. If these fields have a value of zero, a monitor is triggered every time the value changes; if they have a value of -1, monitors are triggered every time the record is processed. The ADEL field is used by archive monitors and the MDEL field by all other types of monitors.
LOAD_MDEL_SCALAR_n: a comment-macro used to uncomment/load the previous MDEL field.
This comment-macro must be empty in order for the MDEL_SCALAR_n macro to be loaded.
E.g. LOAD_MDEL_SCALAR_6 = "".
See also
See guides/EPICS_Process_Database_Concepts.html for more details about alarm specification with EPICS.
See also
See https://epics-docs-mj-test.readthedocs.io/projects/epics-base/en/latest/record-reference.html for more details about all the fields of all the records.
A new “string instruction” must define the following macros:
PV_NAME_STRING_n: specifies a PV name.
A string format is expected as a value.
Note that n in PV_NAME_STRING_n is a unique ID number (unique in the scope) which has to range between 1 and 5.
E.g. PV_NAME_STRING_4 = "last_10_errors_log".
STRING_n_INSTRUCTION: specifies a shell instruction whose output is expected to be a sequence of characters.
A string format is expected as a value.
Note that n in STRING_n_INSTRUCTION is the same unique ID number used in the previous PV_NAME_STRING_n macro.
E.g. STRING_4_INSTRUCTION = "cat /var/log/errors.log | grep -i --max-count=10 'error'".
Note that it is possible to specify that the shell instruction should be executed on the host machine instead of the target. To do so, add #LOCAL at the end of the STRING_n_INSTRUCTION value, e.g. STRING_4_INSTRUCTION = "uname --all #LOCAL". In this case, the SSH options and arguments specified in SSH_OPTS_AND_ARGS are not taken into account.
Note that it is possible to specify that the shell instruction should be logged in the IOC Shell. To do so, add #DEBUG at the end of the STRING_n_INSTRUCTION value, e.g. STRING_4_INSTRUCTION = "uname --all #DEBUG".
Note that it is possible to specify both at the same time: the shell instruction is executed on the host machine and logged in the IOC Shell. To do so, add #LOCAL+DEBUG, or #DEBUG+LOCAL, at the end of the STRING_n_INSTRUCTION value, e.g. STRING_4_INSTRUCTION = "uname --all #LOCAL+DEBUG".
LOAD_STRING_n: a comment-macro used to uncomment/load the whole record called ${PREFIX}${CATEGORY}${PV_NAME_STRING_n} (located in the ${TOP}/db/ssh_monitor_core.template file).
Note that n in LOAD_STRING_n is the same unique ID number used in the previous PV_NAME_STRING_n macro.
This comment-macro must (as in mandatory) be empty in order for the record called ${PREFIX}${CATEGORY}${PV_NAME_STRING_n} to be loaded.
E.g. LOAD_STRING_4 = "".
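Similarly, a minimal sketch of a scope defining a single string instruction (illustrative values, placed inside the file "${TOP}/db/ssh_monitor_core.template" { ... } part):

```
{
    CATEGORY = "logs:",
    PV_NAME_STRING_4 = "last_10_errors_log",
    STRING_4_INSTRUCTION = "cat /var/log/errors.log | grep -i --max-count=10 'error'",
    LOAD_STRING_4 = ""
}
```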
Optionally, a “string instruction” can also define the STRING_n_INSTRUCTION_LENGTH macro. This macro specifies the number of characters used in STRING_n_INSTRUCTION. The default value is 1024, so if the value of STRING_n_INSTRUCTION contains more than 1024 characters, then you will have to specify it, or the associated shell instruction won’t work. You can also use this macro to specify a tailor-made length in order to improve memory efficiency.
An integer number format is expected as a value.
It defaults to 1024.
Note that n in STRING_n_INSTRUCTION_LENGTH is the same unique ID number used in the previous PV_NAME_STRING_n macro.
E.g. if STRING_4_INSTRUCTION is set like so: STRING_4_INSTRUCTION = "cat /var/log/errors.log | grep -i --max-count=10 'error'" (a 56-character-long shell instruction), then you can set STRING_4_INSTRUCTION_LENGTH like so: STRING_4_INSTRUCTION_LENGTH = "57" (56 + 1 end-of-line character).
Optionally, a “string instruction” can also define the following macros in order to specify some record fields:
STRING_n_RESULT_MAX_LENGTH: specifies the maximum size (number of characters) of the result of the associated “string instruction”.
DESC_STRING_n: specifies the description of the associated “string instruction”.
LOAD_DESC_STRING_n: a comment-macro used to uncomment/load the previous DESC field.
This comment-macro must be empty in order for the DESC_STRING_n macro to be loaded.
E.g. LOAD_DESC_STRING_4 = "".
See also
The syntax of a .substitutions file is covered in this documentation.
See also
The format of a .substitutions file is covered in this documentation.
See also
The syntax and format of a .template file is covered in this documentation.
Important
Finally, in the .substitutions files, it is strongly recommended to use plenty of comments in order to explain clearly which shell instruction does what, and what each scope / category is intended for.