Updated: 9/4/2024
Case Study 3 explores the gDS schema and code needed to support the "reconfiguration" that occurs when testing a complex system. For example, if you have a system with 3 hosts, and you add a fourth host during the execution of a test, how, exactly, do you extend the data store to cover the new host? If you upgrade the software in a host, how do you take it out of service while that operation is occurring, and how do you later put it back into service so the test platform and system under test continue to operate correctly?
This case study presents the techniques and code to do exactly that.
At this time the nfsTest platform lets the user create system faults manually. Work is underway to generate the faults automatically.
Linux's NFS (Network File System) support provides the "complex system" functionality needed by Case Study 3. NFS allows a server host to export a file system that's local to it, and a client host to import that file system and use it as if it were local. NFS is not all that "complex," but it serves in this case study as a useful example.
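For readers new to NFS, the commands below illustrate a manual export/import cycle on Debian/Ubuntu hosts. This is background only; nfsTest automates these steps, and the directory and IP address shown here are hypothetical:

sudo apt install nfs-kernel-server # On the server: install the NFS server
echo "/srv/share *(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports # Export a local directory
sudo exportfs -a # Apply the export list
sudo apt install nfs-common # On the client: install NFS client support
sudo mount -t nfs 192.168.1.102:/srv/share /mnt # Import (mount) the exported file system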
The "nfsTest" program requires four or more Linux VMs to run. One VM is the "Control" host — you run the nfsTest test platform code on this host. The other three or more VMs are "Test" hosts and run NFS under the control of the nfsTest program.
I have used both Hyper-V and VMware on my Windows laptop to run the Linux VMs.
All of the VMs must share the same username and password; this avoids the need to set up SSH keys.
A suggestion about naming the VMs: append a 4-digit number to the end of each VM name, so the data files the hypervisor creates for the VM carry the same 4-digit number and can be deleted when the VM is deleted (some hypervisors don't automatically delete a VM's "data" file when the VM is deleted). The number also uniquely identifies each host, which this environment depends on. Don't reuse the numbers; instead, keep bumping the number up by one as you need more VMs.
Once the 4(+) VMs are established, the SSH server package must be installed (by hand) and started on each of them so they are accessible by CLI from an SSH terminal utility. See the commands below, or the "loadSSH" file in the repository, for how to install SSH:
sudo apt update # Update the package lists
sudo apt install openssh-server # Install and start the SSH server
sudo systemctl status ssh # Check the server status if necessary
Once SSH is loaded and started on all the hosts, nfsTest can be used to perform the rest of the configuration.
Log onto each host through its GUI, get its IP address (hostname -I), and place the IP addresses in the nfsTest.conf file. Also, adjust the "server count" and "client count" entries in nfsTest.conf.
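For illustration, nfsTest.conf might look something like the sketch below. The layout and IP addresses here are hypothetical; see the actual nfsTest.conf in the repository for the real format:

server count = 1 # Hosts that will export file systems
client count = 2 # Hosts that will import file systems
192.168.1.101 # Control host
192.168.1.102 # Test host
192.168.1.103 # Test host
192.168.1.104 # Test host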
Start nfsTest and confirm that it recognizes all the control and test VMs.
Run the nfsTest "insw" command to load the necessary test code onto the test hosts.
See the file nfsTest.dd in the GitHub repository.
The 4 "resources" and the tables that represent them are:
The schema describes the relationships between the resources. It also describes the resource states (Inactive → Activating → Active → Deactivating → Inactive) and the use counts kept for each instance of a resource.
The key to handling configuration changes during test runs is to design the schema, and the instantiation of it, to support every configuration to be tested before the testing starts. That way, you don't have to write code that creates and deletes rows in tables, or creates and deletes relationships between rows, "on the fly." (The sketch after the list below illustrates this.)
The four components of this are:
-- The tables defined in the schema.
-- The rows in the tables described in the above schema.
-- The threads running against the rows created above.
-- The state machine code run by each thread above
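Here is a minimal plain-Python sketch of the pre-instantiation idea. It does not use the real gDS-generated API; the table layout, field names, and the MAX_HOSTS bound are all hypothetical:

# Plain-Python sketch (not the gDS API); every name here is hypothetical.
# Every row the largest test configuration could need is created up front;
# afterward, threads flip row states instead of creating or deleting rows.
MAX_HOSTS = 8 # Upper bound chosen before testing starts

gHost = []
for i in range(MAX_HOSTS):
    gHost.append({
        "hostNumber":     i,
        "state":          "Inactive", # Inactive -> Activating -> Active -> Deactivating
        "exportUseCount": 0,          # Users of the file system this host exports
        "importUseCount": 0,          # File systems this host currently imports
    })

Adding a "fourth host" mid-test then means activating one of these pre-created rows, not extending the data store.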
For example, a gHost row may carry use counts for both the export and import file systems that may reside on that host. A host (in this particular test environment) cannot run as both a server and a client at the same time, but because both fields are present, the host can operate as a server or as a client at different times as the testing progresses.
This technique is used throughout the test environment. Threads with different capabilities are started against components that may not initially be ready to run in a given scenario. But the threads all run all the time, asking every few seconds, "Am I supposed to start operating as designed now?"
Overall, this implements a matrix of cooperating state machines. See the routines "manageHost," "manageExportFS," "manageImportFS," and "manageFSUser" in "nfsTest" to see the code executed by the state machines.
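The sketch below shows the general shape of one of these polling state machine threads. It is a simplified illustration in the spirit of those routines, not the actual nfsTest code; the row fields and timing are hypothetical, and a real transition would do actual work (start NFS, mount a file system, and so on):

import threading
import time

def manageHostSketch(row):
    # Simplified polling loop (illustration only, not the real manageHost).
    while True:
        if row["state"] == "Activating":
            # ...bring the host into service here (e.g., start its NFS role)...
            row["state"] = "Active"
        elif row["state"] == "Deactivating":
            # Drain first: deactivate only when nothing is using this host.
            if row["exportUseCount"] == 0 and row["importUseCount"] == 0:
                row["state"] = "Inactive"
        time.sleep(2) # Every few seconds: "Am I supposed to operate now?"

row = {"state": "Inactive", "exportUseCount": 0, "importUseCount": 0}
threading.Thread(target=manageHostSketch, args=(row,), daemon=True).start()
row["state"] = "Activating" # Another state machine requests activation
time.sleep(3)
print(row["state"]) # Expected output: Active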