Updated 11/12/23
Case Study 3 explores the gDS schema and code needed to support the "reconfiguration" that occurs in the testing of a complex system. For example, if you have a system with 3 hosts and you add a fourth host during the execution of a test, how, exactly, do you extend the data store to cover the new host? If you upgrade the software on a host, how do you take it out of service while that operation is occurring, and how do you later put it back into service so the test platform and system under test continue to operate correctly?
Case Study 3 presents techniques and working code on how to do this.
Linux's "NFS" feature is used to provide the "complex system" functionality needed by the case study. NFS allows a (server) host to export a file system that's local to it, and another (client) host to import that file system and use it locally. NFS is not that "complex", but it serves as a useful example for this case study.
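As a rough illustration (this is not nfsTest's code, and the host address and paths are made up), an NFS export/import pair boils down to a handful of shell commands. The sketch below just prints them, since actually running them requires root and the NFS packages (nfs-kernel-server on the server, nfs-common on the client):

# Illustrative sketch only, not nfsTest code: the shell commands behind one
# NFS export/import pair. Host address and paths below are made up.
def export_fs_cmds(path="/srv/testfs"):
    # Server side: publish the directory, then re-export everything in /etc/exports
    return [
        f"echo '{path} *(rw,sync,no_subtree_check)' >> /etc/exports",
        "exportfs -a",
    ]

def import_fs_cmds(server="192.168.1.51", path="/srv/testfs", mount_point="/mnt/testfs"):
    # Client side: mount the exported directory so it can be used locally
    return [
        f"mkdir -p {mount_point}",
        f"mount -t nfs {server}:{path} {mount_point}",
    ]

if __name__ == "__main__":
    # Dry run: print what would be executed on the server and client hosts
    for cmd in export_fs_cmds() + import_fs_cmds():
        print(cmd)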
The "nfsTest" program requires 4 or more Linux VMs to run. One VM is the "Control" host - you run nfsTest on this host. The other 3 or more VMs are "Test" hosts and run NFS under the control of the nfsTest program.
I have used both Hyper-V and VMware on my Windows laptop to run the VMs (see the Case Study 1 video on how to do this). If you can't run the code you can view the video below.
The 4 VMs must share the same username and password; this eliminates the need for ssh keys.
A suggestion about naming the VMs: add a 4-digit number to the end of each VM name so the data files created by the hypervisor for the VM carry the same 4-digit number and can be deleted when the VM is deleted (some hypervisors don't automatically delete the VM's "data" file when the VM gets deleted). The number also uniquely identifies each host; without it, the hosts are hard to tell apart in this environment. Don't reuse the numbers; instead keep bumping the number up as you need more VMs.
Once the 4(+) VMs are established, the "ssh" package must be installed (by hand) and started on each of them so they are accessible by CLI from an ssh terminal utility. See the commands below, or the "loadSSH" file in the repository, for the command lines used to load ssh:
sudo apt update # Update libraries
sudo apt install openssh-server # Install and start the ssh server
sudo systemctl status ssh # List server status if necessary
Once ssh is loaded and started on all the hosts, nfsTest can be used to perform the rest of the configuration.
Log onto each host through its GUI, get its IP address (hostname -I), and place the IP address in the nfsTest.conf file. Also, adjust the "server count" and "client count" entries in the nfsTest.conf file.
Start nfsTest and verify that it recognizes all the control and test VMs.
Run the "insw" command in nfsTest to load the necessary code onto the test hosts.
View the first few minutes of the video to see how to run nfsTest.
See the file nfsTest.dd in the GitHub repository.
The 4 "resources", and the tables that represent them, are:
-- Hosts (gHost) - The hosts on which all of the following functionality runs.
-- Exported file systems (gExportFS) - These are exported by NFS on a given server host.
-- Imported file systems (gImportFS) - These import a given exported file system from above and are mounted on a given client host.
-- File system users (gFSUser) - These run on a given client host and perform I/O operations on the imported file systems on that host.
The schema describes the relationships between these resources. It also describes the resource states (Inactive => Activating => Active => Deactivating => Inactive) and the use counts kept for each instance of a resource.
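The actual definitions are in nfsTest.dd, in gDS schema syntax. As a purely hypothetical sketch (the field names below are invented for illustration), the same information expressed as plain Python dictionaries might look like this:

# Hypothetical sketch of the four tables; the real definitions live in
# nfsTest.dd and use the gDS schema syntax, not Python. Field names invented.
STATES = ["Inactive", "Activating", "Active", "Deactivating"]

gHost = {
    "HostName": None, "IPAddr": None, "State": "Inactive",
    "ExportFSUseCount": 0,   # how many exported file systems currently live on this host
    "ImportFSUseCount": 0,   # how many imported file systems are currently mounted on it
}

gExportFS = {
    "HostRef": None,         # the server host this file system is exported from
    "ExportPath": None, "State": "Inactive",
    "UseCount": 0,           # how many gImportFS rows currently import it
}

gImportFS = {
    "ExportFSRef": None,     # the exported file system this row imports
    "HostRef": None,         # the client host it is mounted on
    "MountPoint": None, "State": "Inactive",
    "UseCount": 0,           # how many gFSUser rows currently do I/O on it
}

gFSUser = {
    "ImportFSRef": None,     # the imported file system this user does I/O against
    "HostRef": None,         # the client host the user's thread runs on
    "State": "Inactive",
}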
The key to handling configuration changes during test runs is to design the schema, and the code that subsequently uses it, to support every configuration to be tested before the testing starts. That way you don't have to write code that creates and deletes table rows, and relationships between rows, "on the fly". (A sketch of this idea follows the list below.)
The 4 components of this are:
-- The tables defined in the schema.
-- The rows in the tables described in the above schema.
-- The threads running against the rows created above.
-- The state machine code run by each thread above.
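A loose sketch of that idea follows. The names are invented and the real row-creation calls are generated by gDS, but the shape is the same: every row any planned configuration could need is created before testing starts, one long-lived thread is started per row, and rows a scenario doesn't use simply stay Inactive.

# Illustrative only (names invented): create every row the planned
# configurations could need, then start one thread per row.
import threading, time

def new_row(**fields):
    return {"State": "Inactive", "UseCount": 0, **fields}

def build_data_store(host_ips, server_count):
    hosts   = [new_row(IPAddr=ip) for ip in host_ips]
    exports = [new_row(Host=h) for h in hosts[:server_count]]
    imports = [new_row(ExportFS=e, Host=c)
               for e in exports for c in hosts[server_count:]]
    users   = [new_row(ImportFS=i, Host=i["Host"]) for i in imports]
    return hosts + exports + imports + users

def manage_row(row):
    # Stand-in for the per-row state machine sketched further below; it runs
    # for the life of the test, whether or not the row is currently in use.
    while True:
        time.sleep(2)        # poll: "am I supposed to start operating now?"

def start_threads(rows):
    for row in rows:
        threading.Thread(target=manage_row, args=(row,), daemon=True).start()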
For example, a gHost row may carry use counts for both the exported and imported file systems that may reside on that host. However, a host (in this particular test environment) cannot run as both a server and a client at the same time. But because both fields are present, the host can operate as a server or as a client at different times as the testing progresses.
This technique is used throughout the test environment. Threads with different capabilities are started against components that may not initially be active in a given scenario. But all the threads run all the time, asking every few seconds, "Am I supposed to start operating as designed now?"
Overall, this implements a matrix of cooperating state machines. See the routines "manageHost", "manageExportFS", "manageImportFS" and "manageFSUser" in "nfsTest" to see the code executed by the state machines.
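As a rough sketch of the shape of one such routine (see manageExportFS in nfsTest for the real code; the helpers and field names below are invented), each state machine is just a loop that polls its row every few seconds and reacts to state changes made by the test:

# Rough sketch only; do_export/undo_export are hypothetical stand-ins for the
# work the real manageExportFS routine does on the server host.
import threading, time

def do_export(row):
    print("exporting", row)      # would update /etc/exports and run exportfs -a

def undo_export(row):
    print("unexporting", row)    # would remove the export again

def manage_export_fs(row, stop):
    # Runs for the life of the test, polling the row's state every few seconds
    while not stop.is_set():
        if row["State"] == "Activating":
            do_export(row)
            row["State"] = "Active"
        elif row["State"] == "Deactivating" and row["UseCount"] == 0:
            # only tear the export down once no importer is still using it
            undo_export(row)
            row["State"] = "Inactive"
        time.sleep(2)

if __name__ == "__main__":
    # Tiny demonstration: activate, then deactivate, one gExportFS-style row
    stop, row = threading.Event(), {"State": "Activating", "UseCount": 0}
    t = threading.Thread(target=manage_export_fs, args=(row, stop))
    t.start()
    time.sleep(3); row["State"] = "Deactivating"
    time.sleep(3); stop.set(); t.join()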