Case Study 3 — nfsTest

Overview

Case Study 3 explores the gDS schema and code needed to support the "reconfiguration" that occurs in the testing of a complex system. For example, if you have a system with 3 hosts, and you add a fourth host during the execution of a test, how, exactly, do you extend the data store to cover the new host? If you upgrade the software in a host, how do you take it out of service while that operation is occurring, and how do you later put it back into service so the test platform and system under test continue to operate correctly?

Case Study 3 presents techniques and working code that show how to do this.

NFS Operation

Linux's NFS feature provides the "complex system" functionality needed by the case study. NFS allows a (server) host to export a file system that is local to it and another (client) host to import that file system and use it locally. NFS itself is not all that "complex," but it provides a useful example for this case study.
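As an illustration of the underlying operations nfsTest automates (not necessarily the exact commands it issues), the basic export/import cycle can be driven from Python like this. The host name and paths are hypothetical, and the sketch assumes nfs-kernel-server is installed on the server and nfs-common on the client:

#!/usr/bin/env python3
# Illustration only: the basic NFS export (server side) and import
# (client side) operations. Host name and paths are hypothetical.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# --- On the server host (requires nfs-kernel-server) ---
run("sudo mkdir -p /srv/nfs/share0")
run("echo '/srv/nfs/share0 *(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports")
run("sudo exportfs -ra")          # (Re)export everything in /etc/exports

# --- On the client host (requires nfs-common) ---
run("sudo mkdir -p /mnt/share0")
run("sudo mount -t nfs server1001:/srv/nfs/share0 /mnt/share0")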

nfsTest Prerequisites and Setup

The "nfsTest" program requires four or more Linux VMs to run. One VM is the "Control" host — you run nfsTest on this host. The other three or more VMs are "Test" hosts and run NFS under the control of the nfsTest program.

I have used both Hyper-V and VMware on my Windows laptop to run the VMs (see the Case Study 1 video on how to do this). If you can't run the code, you can view the video below.

The VMs must all share the same username and password; this eliminates the need to distribute SSH keys.
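With a shared username and password, a test driver can authenticate over SSH with the password alone. A minimal sketch using the paramiko library follows (nfsTest's actual SSH transport may differ; the host and credentials are placeholders):

#!/usr/bin/env python3
# Sketch: run a command on a test host using password-based SSH.
# Requires "pip install paramiko"; host/credentials are placeholders.
import paramiko

def runRemote(host, user, password, command):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        stdin, stdout, stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

print(runRemote("192.168.1.101", "tester", "secret", "hostname -I"))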

A suggestion about naming the VMs: add a 4-digit number to the end of each VM name so that the data files the hypervisor creates for the VM carry the same 4-digit number and can be found and deleted when the VM is deleted (some hypervisors don't automatically delete a VM's data file when the VM is deleted). The number also uniquely identifies each host; without it, the hosts are hard to tell apart in this environment. Don't reuse the numbers; instead, keep bumping them up as you need more VMs.

Once the 4(+) VMs are established, the SSH server package must be installed (by hand) and started on each of them so they are reachable by CLI from an SSH terminal utility. See the commands below or the "loadSSH" file in the repository:

sudo apt update                    # Update the package lists
sudo apt install openssh-server    # Install and start the SSH server
sudo systemctl status ssh          # Check the server status if necessary

Once SSH is loaded and started on all the hosts, nfsTest can be used to perform the rest of the configuration.

Log onto the hosts through their GUI, get each host's IP address (hostname -I), and place the IP addresses in the nfsTest.conf file. Also adjust the "server count" and "client count" entries in nfsTest.conf.
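The repository's nfsTest.conf defines the real format; purely as a hypothetical illustration, a simple key/value layout and the code to read it might look like this (the key names shown are invented for the example):

#!/usr/bin/env python3
# Hypothetical illustration of reading host IPs and counts from a conf
# file; see nfsTest.conf in the repository for the actual format.
# Example file contents (hypothetical):
#   serverCount = 1
#   clientCount = 2
#   host = 192.168.1.101
#   host = 192.168.1.102
#   host = 192.168.1.103
hosts, counts = [], {}
with open("nfsTest.conf") as f:
    for line in f:
        line = line.split("#")[0].strip()   # Drop comments and blanks
        if not line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if key == "host":
            hosts.append(value)
        else:
            counts[key] = int(value)
print(hosts, counts)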

Start nfsTest and verify that it recognizes all the control and test VMs.

Run the "insw" command on nfsTest to load the necessary code onto the test hosts.

View the first few minutes of the video to see how to run nfsTest.

The nfsTest Schema

See file nfsTest.dd from the GitHub repository.

The 4 "resources" and the tables that represent them are:

    -- Hosts (gHost) — The three resources below run on these hosts (though not all at the same time):

    -- Exported file systems (gExportFS) — These are exported by NFS on a given server host.

    -- Imported file systems (gImportFS) — These import a given exported file system from above and are mounted on a given client host.

    -- File system users (gFSUser) — These run on a given client host and perform I/O operations on the imported file systems on that host.

The schema describes the relationships between the resources. It also describes the resource states (Inactive → Activating → Active → Deactivating → Inactive) and the use counts kept for each resource instance.
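The .dd file is the authoritative definition; purely as an illustration of the relationships, states, and use counts it captures, the four tables could be pictured as rows shaped like these (all field names here are hypothetical, since the real tables are generated by gDS from nfsTest.dd):

#!/usr/bin/env python3
# Hypothetical row shapes illustrating the relationships in nfsTest.dd.
from dataclasses import dataclass

STATES = ["Inactive", "Activating", "Active", "Deactivating"]

@dataclass
class GHost:
    name: str
    state: str = "Inactive"
    exportUseCount: int = 0   # Exported FSs currently hosted here
    importUseCount: int = 0   # Imported FSs currently mounted here

@dataclass
class GExportFS:
    hostIx: int               # Row index of the gHost that exports it
    path: str
    state: str = "Inactive"
    useCount: int = 0         # Importers currently using this export

@dataclass
class GImportFS:
    hostIx: int               # Client gHost it is mounted on
    exportIx: int             # gExportFS row it imports
    mountPoint: str
    state: str = "Inactive"
    useCount: int = 0         # FS users currently doing I/O on it

@dataclass
class GFSUser:
    hostIx: int               # Client gHost it runs on
    importIx: int             # gImportFS it performs I/O against
    state: str = "Inactive"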

Cooperating State Machines

The key to handling configuration changes during test runs is to design the schema, and the instantiation of it, to support all possible configurations before the testing starts. That way, no code has to create and delete rows in tables, or create and delete relationships between rows, "on the fly."

The four components of this are:

    -- The tables defined in the schema
    -- The rows in those tables
    -- The threads running against the rows created above
    -- The state machine code run by each thread above

For example, a gHost row carries use counts for both the exported and imported file systems that may reside on that host. In this particular test environment, a host cannot run as both a server and a client at the same time; but because both fields are present, the host can operate as a server or as a client at different times as the testing progresses.

This technique is used throughout the test environment. Threads with different capabilities are started against components that may not initially be active in a given scenario. But the threads all run all the time, asking every few seconds, "Am I supposed to start operating as designed now?".
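A minimal sketch of that always-running, poll-and-act pattern follows; the row shape and state names follow the illustration above, and the real logic lives in the manage* routines in nfsTest:

#!/usr/bin/env python3
# Sketch of one cooperating state machine: a thread that runs for the
# life of the test and polls its row's target state every few seconds.
import threading, time

def manageResource(row, stopEvent, pollSeconds=3):
    while not stopEvent.is_set():
        if row["targetState"] == "Active" and row["state"] == "Inactive":
            row["state"] = "Activating"
            # ... bring the resource into service here ...
            row["state"] = "Active"
        elif row["targetState"] == "Inactive" and row["state"] == "Active":
            row["state"] = "Deactivating"
            # ... wait for use counts to drain, take out of service ...
            row["state"] = "Inactive"
        time.sleep(pollSeconds)  # "Am I supposed to start operating now?"

row = {"state": "Inactive", "targetState": "Inactive"}
stop = threading.Event()
t = threading.Thread(target=manageResource, args=(row, stop), daemon=True)
t.start()
row["targetState"] = "Active"       # The test scenario flips the target...
time.sleep(4); print(row["state"])  # ...and the thread acts on it
stop.set(); t.join()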

Overall, this implements a matrix of cooperating state machines. See the routines "manageHost," "manageExportFS," "manageImportFS," and "manageFSUser" in "nfsTest" to see the code executed by the state machines.

CASE STUDY 3 VIDEO (A)
