Test Case Specification
Introduction
This page describes the structure and content of test case files.
Test Case File Organization
Fuze™ Test utilizes directory hierarchy scoping to help automatically discover test cases.
Test Case Specification
The following sections describe the specification for the configuration and control within test cases.
Note
The JSON specification does not support comments, but you can include comment-like content in your test suite by using keys that are otherwise unused (no-ops). Any unused key is valid; in the examples shown here, the keys commentX or desc are typically used.
Test Case File Naming
You must use the following naming specification for test case file names:
tc_<name>.json
where <name> is any freeform text string that makes sense in your system and adheres to your environment’s file naming specification.
Test Case Content
The following sections define the contents of test case files.
Basic Features
Every test case file must contain a single test case. A test case name should be the same as its file’s name, without the .json extension. Not following this convention may cause unexpected results, such as the test case not being executed: for a test case to be executed, its file name without the .json extension must be listed in a targeted test suite.
This is a convenience measure. The tester can compose a test suite from a single directory-listing command, without being required to open every individual test case file to get the name from its content.
File Structure and Fields
A test case file is composed of a set of identifying/clarifying information, and one or more commands.
{
"desc": "<test case description>,
"name": "<name>",
"id": "<alphanumeric_value>", # optional
"testcmds": [
{<testcommand[0]>},
{<testcommand[n]>}
]
}
| Field | Description |
|---|---|
| name | Name of the test case. Must match the file name without the .json extension. |
| id | Test case ID: a reference that makes sense to you. Optional. |
| testcmds | List of test commands. Test commands are defined in the following sections. |
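For illustration, a minimal but complete test case file, tc_echo_smoke.json, might look like the following (the command and expected output are hypothetical, but every field shown is defined above):
{
    "desc": "Smoke test: verify the host echoes a known string",
    "name": "tc_echo_smoke",
    "id": "SMOKE001",
    "testcmds": [
        {
            "type": "tcs",
            "cmd": "echo hello",
            "ret_code": 0,
            "expout": [ "hello" ],
            "failpattern": [ "FAIL", "ERROR" ]
        }
    ]
}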
Test Commands
A test case is composed of an ordered list of one or more commands.
Recognized fields are shown here. Any additional/unrecognized fields are ignored during processing and can be used as comments.
Command types supported by Fuze™ Test are defined in the following sections.
1. Simple Test Command (tcs)
The Simple Test Command specifies a command to execute in the host OS context, with an expected return code and a list of strings to find in the output.
The following shows the specification of the tcs command.
{
"type": "tcs",
"desc": "<description>",
"cmd": "<host_cmd+args>",
"timeout_in_ms": <value>, # optional
"retryhandler": [ # optional
"<cmd[0]>",
"<cmd[n]>"
],
"retrypattern": [ # optional
"<string_match[0]>",
"<string_match[n]>"
],
"retrycount": <value>, # optional
"ret_code": <expected_return_code>,
"expout": [
"<string_match[0]>",
"<string_match[n]>"
],
"failpattern": [
"<string_match[0]>",
"<string_match[n]>"
]
}
| Field | Description |
|---|---|
| cmd | The host or host controller command to execute, including its arguments. The command must use the correct syntax of the host system OS. |
| timeout_in_ms | Max time in milliseconds the command should complete within. Exceeding this time results in test case failure. Fractions are supported. Optional. |
| retryhandler | A list of commands, or a reference to another test case, to execute if the original command fails. If this option is omitted or the list is empty, the retry feature is disabled. If any iteration of this handler fails, the original failing command and any further retries are considered failed, regardless of any remaining retries specified by retrycount. Optional. |
| retrypattern | List of 0 or more strings that, if found in the output when the command fails, trigger the retryhandler. If the list is empty or this option is omitted, the retry feature is disabled. Optional. |
| retrycount | The number of retries allowed for a failed command, after executing the retryhandler. This option is ignored if the retry feature is disabled. Optional. |
| ret_code | Expected return code. |
| expout | List of strings to verify are in the output of the command. All listed strings must be present for the test case to pass. The test case may pass if no strings are listed, but specifying at least one expected string is recommended. |
| failpattern | List of optional string values that, if any one is detected in the output of the test command, causes the test to fail. Case insensitive. |
Example: tcs
{
"type": "tcs",
"desc": "Verify event triggers image capture",
"cmd": "test-app.exe --acquire eventdetect --bridge_crc_on --exit_timeout=5",
"timeout_in_ms": 10000,
"ret_code": 0,
"expout": [
"RESULT : acquire : prepare_image_time : ",
"RESULT : acquire : DONE"
],
"failpattern": [
"FAIL",
"ERROR" ]
},
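A hypothetical tcs command using the retry feature might look like the following. The command, flags, and patterns are illustrative; the retryhandler entries use the object form shown in the css example later on this page:
{
    "type": "tcs",
    "desc": "Query device status, retrying once after a bus reset",
    "cmd": "test-app.exe --status",
    "retrypattern": [ "bus busy" ],
    "retryhandler": [
        {
            "type": "tcs",
            "cmd": "test-app.exe --bus_reset",
            "ret_code": 0,
            "expout": [],
            "failpattern": []
        }
    ],
    "retrycount": 1,
    "ret_code": 0,
    "expout": [ "STATUS : OK" ],
    "failpattern": [ "FAIL", "ERROR" ]
},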
2. Check for File Existence (cfe)
The Check for File Existence test command presents the parameters needed to check a file’s existence or characteristics as passing criteria. This command is provided in lieu of expecting the test case developer to specify a command such as ls to determine the file’s existence, in order to avoid issues with the particular execution location and file path specification. The file, and any ancillary script specified, will be found by the ATF using a search that starts with the local execution folder (i.e., producttest/) and expands in scope until the workspace’s base folder is reached.
If check[exists] is set to true, then at test case execution start, the target file specified by fname will be removed, if it exists, by moving it to the folder producttest\__CFE_TRASHCAN__. The file’s existence is then checked when the cfe test command is executed.
The following shows the specification of the cfe command.
{
"type": "cfe",
"desc": "<description>",
"fname": "<name_of_file>",
"timeout_in_ms": <timeout_in_milliseconds>, # optional
"check": {
"exists": <true_or_false>, # or
"size": <size_in_bytes>, # or
"process": {
"script": "<name_of_script>",
"ret_code": <expected_return_code>
}
}
}
| Field | Description |
|---|---|
| type | Command type identifier: check for file existence. |
| fname | Name of the file to check. |
| timeout_in_ms | Max time in milliseconds the command should complete within. Exceeding this time results in test case failure. Fractions are supported. Optional. |
| check | Key-value pair indicating the processing to apply. One of the following: exists, size, or process. |
Example: cfe
{
"type": "cfe",
"desc": "Verify the store_blob_large_template.out file",
"fname": "store_blob_large_template.out",
"check": { "exists": true }
},
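The other check variants follow the same shape. A hypothetical size check and a hypothetical post-processing check might look like the following (the file and script names are illustrative; the script is located by the same search described above):
{
    "type": "cfe",
    "desc": "Verify the capture file is exactly 1024 bytes",
    "fname": "capture.bin",
    "check": { "size": 1024 }
},
{
    "type": "cfe",
    "desc": "Validate the capture file with a checker script",
    "fname": "capture.bin",
    "check": {
        "process": {
            "script": "check_capture.py",
            "ret_code": 0
        }
    }
},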
3. Compare N Values to Static Specification (css)
The Compare N Values to Spec test case type presents the parameters needed to check command outputs (or results) against a static specification. This test command can be extended to compare an “average” by setting the loop value greater than 1.
The following shows the specification of the css command.
{
"type": "css",
"desc": "<description>",
"cmd": "<host_cmd+args>",
"timeout_in_ms": <timeout_value_in_milliseconds>, # optional
"retryhandler": [ # optional
"<cmd[0]>",
"<cmd[n]>"
],
"retrypattern": [ # optional
"<string_match[0]>",
"<string_match[n]>"
],
"retrycount": <value>, # optional
"ret_code": <expected_return_code>,
"expout": [
"<string_match[0]>",
"<string_match[n]>"
],
"failpattern": [
"<string_match[0]>",
"<string_match[n]>"
],
"loop": <number_of_loops>, # optional
"cmpout": [
{
"cmptag": "<tag_to_compare>",
"cmpfunc": "<comparison_function>",
"cmpspec": [ <comparison_specifications> ]
}
] # optional
}
| Field | Description |
|---|---|
| type | Command type identifier. |
| cmd | Command to execute in the context of the OS. |
| timeout_in_ms | Max time in milliseconds the command should complete within. Exceeding this time results in test case failure. Fractions are supported. Optional. |
| retryhandler | A list of commands, or a reference to another test case, to execute if the original command fails. If this option is omitted or the list is empty, the retry feature is disabled. If any iteration of this handler fails, the original failing command and any further retries are considered failed, regardless of any remaining retries specified by retrycount. Optional. |
| retrypattern | List of 0 or more strings that, if found in the output when the command fails, trigger the retryhandler. If the list is empty or this option is omitted, the retry feature is disabled. Optional. |
| retrycount | The number of retries allowed for a failed command, after executing the retryhandler. This option is ignored if the retry feature is disabled. Optional. |
| ret_code | Expected return code. |
| expout | List of strings to verify are in the output of the command. All listed strings must be present for the test case to pass. The test case may pass if no strings are listed, but specifying at least one expected string is recommended. |
| failpattern | List of optional string values that, if any one is detected in the output of the test command, causes the test to fail. Case insensitive. |
| loop | Number of times to execute the command. If greater than 1, the average output is compared to the spec. Optional. |
| cmpout | Comparison output specifications. Optional. |
Example: css
{
"type": "css",
"desc": "Verify the same, plus the timing",
"on_sensor": true,
"on_sensor_toggle_ms": [ 500, 0 ],
"cmd": "test-app.exe --acquire --exit_timeout=5",
"timeout_in_ms": 10000,
"retrypattern": [ "there was a failure" ],
"retryhandler": [
{
"type": "etc",
"testcasename": "tc_fix_the_problem_etc"
},
{
"type": "tcs",
"cmd": "echo Finished retry handler",
"ret_code": 0,
"expout": [],
"fail pattern": []
}
],
"retrycount": 3
"ret_code": 0,
"loop": 1,
"expout": [
"RESULT : acquire : DONE"
],
"failpattern": [ "FAIL", "ERROR" ],
"cmpout": [
{
"cmptag": "RESULT : acquire : prepare_image_time : ",
"cmpfunc": ">",
"cmpspec": [500]
}
]
}
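To use the averaging behavior described above, a variant of this example could set loop greater than 1 so that the value extracted for the cmptag is averaged across runs before being compared to the spec. A hypothetical sketch:
{
    "type": "css",
    "desc": "Average prepare_image_time over 5 runs and compare to spec",
    "cmd": "test-app.exe --acquire --exit_timeout=5",
    "timeout_in_ms": 10000,
    "ret_code": 0,
    "loop": 5,
    "expout": [ "RESULT : acquire : DONE" ],
    "failpattern": [ "FAIL", "ERROR" ],
    "cmpout": [
        {
            "cmptag": "RESULT : acquire : prepare_image_time : ",
            "cmpfunc": ">",
            "cmpspec": [500]
        }
    ]
}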
4. Compare N Values to Configurable Specification (ccs)
In addition to the Base Test Command fields, the Compare Configurable Value test case type adds parameters that allow run-time, dynamic configuration of the compare value. In this case, pre-defined strings are used as substitution patterns that are changed at run-time to the configured values; this substitution is done by the Executive using the PTA’s API.
{
"type": "ccs",
"desc": "<description>",
"cmd": "<host_cmd+args>",
"timeout_in_ms": <timeout_value_in_milliseconds>, # optional
"retryhandler": [ # optional
"<cmd[0]>",
"<cmd[n]>"
],
"retrypattern": [ # optional
"<string_match[0]>",
"<string_match[n]>"
],
"retrycount": <value>, # optional
"ret_code": <expected_return_code>,
"expout": [
"<string_match[0]>",
"<string_match[n]>"
],
"failpattern": [
"<string_match[0]>",
"<string_match[n]>"
],
"loop": <number_of_loops>, # optional
"cmpout": [
{
"cmptag": "<tag_to_compare>",
"cmpfunc": "<comparison_function>",
"cmpspec": [ <comparison_specifications> ]
}
] # optional
}
| Field | Description |
|---|---|
| type | Command type identifier. |
| cmd | Command to execute in the context of the OS. |
| timeout_in_ms | Max time in milliseconds the command should complete within. Exceeding this time results in test case failure. Fractions are supported. Optional. |
| retryhandler | A list of commands, or a reference to another test case, to execute if the original command fails. If this option is omitted or the list is empty, the retry feature is disabled. If any iteration of this handler fails, the original failing command and any further retries are considered failed, regardless of any remaining retries specified by retrycount. Optional. |
| retrypattern | List of 0 or more strings that, if found in the output when the command fails, trigger the retryhandler. If the list is empty or this option is omitted, the retry feature is disabled. Optional. |
| retrycount | The number of retries allowed for a failed command, after executing the retryhandler. This option is ignored if the retry feature is disabled. Optional. |
| ret_code | Expected return code. |
| expout | List of strings to verify are in the output of the command. All listed strings must be present for the test case to pass. The test case may pass if no strings are listed, but specifying at least one expected string is recommended. |
| failpattern | List of optional string values that, if any one is detected in the output of the test command, causes the test to fail. Case insensitive. |
| loop | Number of times to execute the command. If greater than 1, the average output is compared to the spec. Optional. |
| cmpout | Comparison output specifications. Optional. |
Example: ccs
{
"type": "ccs",
"cmd": "fw-test.exe -e --get_parameter_const LDO_SettleTime",
"loop": 1,
"ret_code": 0,
"expout": ["RESULT : get_parameter_const_emb : SUCCESS"],
"failpattern": ["FAIL", "ERROR"],
"cmpout": [
{
"cmptag": "RESULT : get_parameter_const_emb : LDO_SettleTime :",
"cmpfunc": "==",
"cmpspec": [200]
}
]
}
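The example above compares against a literal value. The dynamic-configuration aspect of ccs replaces such a literal with a pre-defined substitution pattern that the Executive resolves at run-time via the PTA’s API. A hypothetical sketch, where the pattern name __LDO_SETTLE_SPEC__ is illustrative rather than a built-in:
{
    "type": "ccs",
    "cmd": "fw-test.exe -e --get_parameter_const LDO_SettleTime",
    "ret_code": 0,
    "expout": ["RESULT : get_parameter_const_emb : SUCCESS"],
    "failpattern": ["FAIL", "ERROR"],
    "cmpout": [
        {
            "cmptag": "RESULT : get_parameter_const_emb : LDO_SettleTime :",
            "cmpfunc": "==",
            "cmpspec": ["__LDO_SETTLE_SPEC__"]
        }
    ]
}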
5. Control DUT Power (cdp)
This test command allows the Device Under Test (DUT) to be powered on, off, or both (cycled), either synchronously or asynchronously, before proceeding to the next command in the test case.
{
"type": "cdp",
"poweroff": <true/false>,
"poweron": <true/false>,
"sync": <true/false>,
"reverse": <true/false>
}
| Field | Description |
|---|---|
| type | Command type identifier. |
| poweroff | Boolean value indicating whether to power off the DUT. |
| poweron | Boolean value indicating whether to power on the DUT. |
| sync | Boolean value indicating whether the power operations should be synchronized before moving to the next command. |
| reverse | Boolean value to reverse the power operation sequence. If set to true, the order of the power off/on operations is reversed. |
Example: cdp
{
"type": "cdp",
"poweroff": true,
"poweron": true,
"sync": true,
"reverse": false
}
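Two further hypothetical variants: an asynchronous power-off only, and a reversed cycle (assuming the default sequence is power off, then power on):
{
    "type": "cdp",
    "poweroff": true,
    "poweron": false,
    "sync": false,
    "reverse": false
},
{
    "type": "cdp",
    "poweroff": true,
    "poweron": true,
    "sync": true,
    "reverse": true
}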
6. Execute Test Case (etc)
This command type implements dynamic, inline substitution of one test case’s commands into another test case’s command list at run-time. The referenced test case’s command list is inserted in place into the referencing test case’s command list. This allows test case reuse, for example, to support the SQA team’s basic/enhanced test case concept, where the team reuses simple/basic test cases as building blocks for enhanced test cases.
This command avoids having to maintain the same coherent sequence of commands in more than one file. It can be parameterized by using locally defined “macros”. “Locally defined” means the parent/calling test case defines the macros. These macros can be set to any value(s) by the parent/calling test case. The target test case is implemented in terms of these macros.
Warning
Using previously defined (Test Suite or built-in) run-time macros for this feature IS NOT RECOMMENDED.
Attempting to re-define an existing macro defined at a higher level, or larger scope, in an etc command can lead to an ambiguous condition. If this is done, the ATF’s macro scoping is not necessarily obvious, but it follows these rules:
Macros defined in a Test Suite will be “covered” by the same macro being defined in an etc test case. This is because Test Suite and Test Case macro substitutions are done by Fuze™ Test at the time a test case is executed.
Macros bound to immutable characteristics of the test cycle, such as package and hardware derived values (specified in the “Run-time” Macros table in this document), will render this feature “ignored” in a test case; the standard definition is preferred in this case. This is because built-in macro substitutions are done by ATF at the time a test case is configured.
Fuze™ Test considers both the module under test and the companion firmware to be “in scope” for a test cycle, so software module test cases can use etc commands to reference not only other test cases within the module’s project/module scope, but also firmware test cases.
To accommodate the possibility of staggered future stack development, any firmware can be specified as companion firmware in software modules’ test cases, regardless of the firmware’s project scope.
While a software module can reference firmware test cases using etc commands, the opposite is not necessarily true. This is because a firmware-only test cycle does not necessarily include the components required to execute software module test cases.
Warning
Never use etc commands to reference test cases “up the stack”.
Rationale: when testing has a firmware module in scope, there is no guarantee that anything required to test a software module executing “up the stack” will be downloaded, deployed, and available for use in the firmware test case.
| Field | Description |
|---|---|
| type | Command type identifier. |
| testcasename | The name of the test case from which to dynamically substitute its command list. |
| macro_subs | Map of macros and their corresponding values to be substituted in the target test case. Optional. |
Example: etc
{
"type": "etc",
"testcasename": "tc_gen3_g3fw_fw-test_--cal_dci"
}
Example: etc with Macro Substitution
Parent/Calling Test Case Command:
{
"type": "etc",
"testcasename": "tc_echo-macro",
"macro_subs": { "__PARAM1__": "spi", "__PARAM2__": "ram" }
}
Target Test Case (tc_echo-macro) Command:
{
"type": "tcs",
"cmd": "echo __PARAM1__ and __PARAM2__",
"ret_code": 0,
"expout": [ "spi", "ram" ],
"failpattern": [ ]
}
Commands, Features, and Considerations Applicable to All Test Case Commands
The following sections provide useful information applicable across all test case commands.
Tool Command Referencing and Paths
You most likely have custom tool applications and scripts to communicate with, program, exercise, and acquire data from your DUT system. Fuze™ Test is specifically designed to utilize and support such software programs.
Custom executables and input files referenced by test commands should be entered with no specific pathing. The search paths for these files (either absolute or relative to the producttest subfolder), which Fuze™ Test searches recursively, as well as the expected file extensions, must be represented in the resources configuration file (see Setup Configuration Files).
Custom executables must be maintained under source control if they are used in test cases that will be executed by formal test cycles.
Note
Custom executables may be stored anywhere under the configured search paths, but it is recommended that custom scripts be stored under atf\producttest\testcases\... in a subfolder intuitively named for the highest scope at which it is intended to be used. Because using these executables in test cases requires no specific pathing (Fuze™ Test finds them and dynamically pre-pends their paths at run-time if the search paths are correctly configured), the folders containing executables can be moved around as needed.
Custom executable file extensions must be accounted for in the environment configuration’s “file_extensions” entry.
The run-time version of test case files, configured for execution in the current test cycle’s context (e.g., file paths, scoped validation), can be found in <repo root path>\producttest\config.
Fuze™ Test assumes that the first space-delimited text entry of a test command’s “cmd” entry is an executable, either custom or available via the platform’s PATH environment variable.
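The exact schema of the resources configuration file is covered in Setup Configuration Files. As a purely illustrative sketch (the file_extensions key is referenced above; the search_paths key and all values here are hypothetical), a configuration entry might look like:
{
    "search_paths": [
        "producttest/testcases",
        "producttest/tools"
    ],
    "file_extensions": [ ".exe", ".py", ".sh" ]
}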
Validation Scoping (Test Case Reuse)
Test case commands’ validation specifications can be specified with or without scoping. This scoping can be specified against certain test cycle input parameters, much in the same way that test suites can be scoped. Scoped validation parameters are automatically selected at run-time during the PTA configuration stage of execution.
Test Case Macros
Test Case Macros in Fuze™ Test are reusable sequences of commands that perform common actions within test cases or suites.
Macros are not a separate feature of the framework, but an organizational pattern used to avoid duplication.
Typical use cases for Macros:
Power cycling a DUT
Resetting a test environment
Establishing communication links
Loading default configuration states
Macros are implemented by:
Defining a JSON list of commands in a shared file.
Importing or including them into test cases or suites.
Referencing them in a consistent way across multiple test cases.
Macros should be maintained in a common directory and follow naming conventions that clearly describe their purpose, as in the sketch below.
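For example, a power-cycle macro can be kept in a shared test case file and pulled into other test cases with an etc command. A hypothetical shared file, tc_macro_power_cycle.json:
{
    "desc": "Macro: power cycle the DUT",
    "name": "tc_macro_power_cycle",
    "testcmds": [
        { "type": "cdp", "poweroff": true, "poweron": true, "sync": true, "reverse": false }
    ]
}
A test case that uses the macro then references it in its command list:
{ "type": "etc", "testcasename": "tc_macro_power_cycle" }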
Test Case Types
While all Fuze™ Test test cases use the same JSON structure, they are typically categorized by engineering purpose.
Common Test Case Types:
Setup Test Case: Prepares the DUT or environment for testing. Safe to run multiple times.
Functional Test Case: Verifies a specific feature, function, or requirement.
Stress Test Case: Exercises the DUT under load or repeated cycles.
Cleanup Test Case: Restores the DUT or testbed to a known safe state.
These types are logical guidelines only, but following them improves clarity, reuse, and maintainability of test content.
Control and Acquisition
Control and Acquisition refers to how Fuze™ Test interacts with DUTs and lab hardware.
Control: Executes actions on the DUT. Sends commands via supported interfaces.
Acquisition: Captures responses or measurements from the DUT. Verifies correct behavior.
Common Control Interfaces (requires implementation):
GPIO Pin Control
UART/Serial Communication
I2C / SPI / CAN Bus Interaction
Power Control Devices
Specialized Lab Equipment Interfaces
Acquisition Methods:
Matching response strings or patterns.
Reading hardware states (GPIO read).
Capturing measurement data from instruments.
Test Cases specify control and acquisition using the TYPE and CMD fields in their command lists.
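For example, a hypothetical acquisition command that reads a GPIO state might look like the following, assuming a GPIO mechanism supporting a read command is registered (see the reference example under Mechanism Control):
{
    "TYPE": "GPIO",
    "CMD": "read",
    "ARGS": ["pin17"]
}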
Mechanism Control
Mechanisms in Fuze™ Test are software classes that implement control over specific hardware interfaces.
All mechanisms:
Inherit from TestCommandRunnerIF.
Implement the run() method.
Receive command dictionaries defined in the test case JSON.
Execute control or acquisition logic.
Example Mechanisms:
CfeRunner — Handles CFE command execution.
GpioRunner — Controls GPIO pin states.
SerialRunner — Sends and receives data over serial/UART.
PowerRunner — Controls power supplies or relays.
Custom mechanisms can be developed for any lab hardware by implementing the required interface and registering them in the PTA framework.
Test Case Command Reference Example:
{
"TYPE": "GPIO",
"CMD": "toggle",
"ARGS": ["pin17"]
}
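As a minimal sketch, assuming a Python implementation: a custom mechanism handling the GPIO command above could look like the following. TestCommandRunnerIF and the run() method are the interface described in this section; the import path, constructor signature, and GPIO driver calls are illustrative assumptions.
# Hypothetical import path; the real module location depends on the PTA framework.
from pta.mechanisms import TestCommandRunnerIF

class GpioRunner(TestCommandRunnerIF):
    """Controls GPIO pin states based on test case command dictionaries."""

    def __init__(self, gpio_driver):
        # gpio_driver is a hypothetical hardware abstraction for the lab setup.
        self.gpio = gpio_driver

    def run(self, command: dict) -> bool:
        # command is the dictionary defined in the test case JSON,
        # e.g. {"TYPE": "GPIO", "CMD": "toggle", "ARGS": ["pin17"]}
        cmd = command.get("CMD")
        args = command.get("ARGS", [])
        if cmd == "toggle":
            for pin in args:
                self.gpio.toggle(pin)  # hypothetical driver call
            return True
        if cmd == "read":
            # Acquisition: succeed only if every pin reports a state.
            return all(self.gpio.read(pin) is not None for pin in args)
        return False  # unrecognized CMD: fail the command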