
TESTING COMPLEX SYSTEMS

“The goal of testing is to guarantee a system meets the needs of its creators.”


WEBSITE SUMMARY


What's the focus of this website?

Testing a complex software system is essential to ensuring its reliability, stability, and proper operation. Creating a custom “test platform” allows for exhaustive testing, tailored to the operation of the system being tested. The focus of this website is to help practitioners build exemplary test platforms:

    -- By using this website and its associated webinars and videos.

    -- By consulting on the topic.

===========================================

This website asserts that one way to test some complex computer systems is to subject them to a random combination of functional, load, and stress operations, all the while observing and validating their behavior. Stress operations can include automatic hardware and software failures, configuration changes, and upgrade and downgrade operations.

This type of testing aims to augment unit and functional tests with automation that covers more of the “logical test space,” and covers it faster, than traditional methods alone can enumerate.
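As a rough illustration of the idea (the operation names and state below are hypothetical stand-ins, not code from the case studies), a driver can randomly interleave functional, load, and stress operations while validating the system's behavior after every step:

```python
import random

# Hypothetical stand-ins for real operations against a system under test.
def do_functional_op(state):
    state["items"] += 1            # e.g. create a record via the app's API

def do_load_op(state):
    state["load_cycles"] += 1      # e.g. fire a burst of routine requests

def do_stress_op(state):
    state["faults_injected"] += 1  # e.g. kill a process, drop a network link

def verify(state):
    # Validation runs after every operation, not just at the end.
    assert state["items"] >= 0

def run_mix(steps, seed=0):
    """Randomly interleave functional, load, and stress operations."""
    rng = random.Random(seed)      # seeded so a failing run can be replayed
    ops = [do_functional_op, do_load_op, do_stress_op]
    state = {"items": 0, "load_cycles": 0, "faults_injected": 0}
    for _ in range(steps):
        rng.choice(ops)(state)     # random sequencing covers more test space
        verify(state)              # observe and validate behavior throughout
    return state

state = run_mix(100)
```

Seeding the random generator is the one design choice worth noting: it makes any randomly discovered failure sequence reproducible.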

Three case studies are an integral part of this website. The case studies come complete with running code housed in the public GitHub repository (browse to or “git clone” the URL below):

https://github.com/talborough/testingComplexSystems

Explanatory videos for each case study appear further along in this website. In total, this website gives the practitioner something to run, something to analyze, and something to draw upon as they size up and consider the automation of their particular situation.

What exactly is a “test platform”?

The term “test platform” is a broad concept, similar to “operating system.” In the context of this website, a test platform is a Python program running on a Linux system. The Python / Linux pairing provides specific functionality used by the test platform.

A test platform is designed to interface with, drive, load, stress (fail parts of), and verify the complex system being tested. These activities are executed concurrently and in random sequences. The platform also provides a capable UI that helps its test and development users get the most out of working with it.

The global data store (gDS) and code generator

It's no surprise that a “DBMS” is found at the center of many programs. The same is true for a test platform. gDS is the data-handling code used in all three case studies, and it has been used in several “production” test platforms I have built over the last several years. gDS is *not* a DBMS, but it provides critical features found in every DBMS at little to no “cost.” It has proven to be a simple and effective tool.

gDS allows the developer to specify a set of shared memory data tables and then add and manage data in those tables. The tables can be relationally organized if desired. Ordinary Python syntax is used to navigate and manipulate data in the tables. This arrangement produces clear, readable programs. Each case study video spends some time reviewing the code it's showcasing and showing the viewer how to read, write, and work with gDS.
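To make that description concrete, here is a hand-written sketch of the table shape such generated code manages: parallel shared-memory column lists indexed by row number, with a row-index column tying a child table to its parent. All names here are illustrative assumptions; the actual gDS generated API is shown in the case studies, not here.

```python
from multiprocessing import Manager

# Each table is a set of parallel shared-memory column lists; a row is
# the same index across a table's columns. (Illustrative layout only.)
mgr = Manager()

# "Host" table: two columns.
Host_Name  = mgr.list()
Host_State = mgr.list()

# "Disk" table: relationally tied to Host via a row-index column.
Disk_Path      = mgr.list()
Disk_HostIndex = mgr.list()

def add_host(name):
    Host_Name.append(name)
    Host_State.append("up")
    return len(Host_Name) - 1          # the row index acts as the key

def add_disk(path, host_index):
    Disk_Path.append(path)
    Disk_HostIndex.append(host_index)
    return len(Disk_Path) - 1

h = add_host("node1")
add_disk("/dev/sda", h)

# Ordinary Python navigates the relation -- no query language needed:
for d in range(len(Disk_Path)):
    owner = Host_Name[Disk_HostIndex[d]]   # host that owns this disk
```

Because the columns live in `multiprocessing.Manager` lists, every concurrent worker process sees the same tables.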

There is no “core” to gDS; there is only a code generator. Given a data declaration file (a schema), the generator creates Python code that defines, and helps manipulate data in, shared-memory tables. The code generator is provided in the public GitHub code repository.
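A toy version of that “schema in, data-handling code out” idea might look like the following. The schema format and the emitted code are invented for illustration; the real generator in the repository takes a different input format and emits far more capable code.

```python
# Hypothetical mini-schema: table name -> list of column names.
SCHEMA = {
    "Host": ["Name", "State"],
    "Disk": ["Path", "HostIndex"],
}

def generate(schema):
    """Emit Python source defining one shared list per table column."""
    lines = [
        "from multiprocessing import Manager",
        "mgr = Manager()",
    ]
    for table, columns in schema.items():
        for col in columns:
            # One shared-memory column list per <Table>_<Column> pair.
            lines.append(f"{table}_{col} = mgr.list()")
    return "\n".join(lines)

code = generate(SCHEMA)
```

Generating the definitions from a single declaration keeps every table and column described in exactly one place.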

What else makes an effective test platform?

An effective test platform performs the following actions (randomly and concurrently as needed):

  • Configure the system under test to get it up and running initially.
  • Discover the components of the system under test as it is started each time.
  • Let the user easily specify the test configuration and execution parameters.
  • Invoke all normal application-level functionality.
  • Trigger all application-level error conditions.
  • Fail all system-level components (hosts, networks, databases, etc.).
  • Exceed all operating limits.
  • Always verify application and system operation (checking for lost or corrupted data and for invalid operations or states).
  • Verify operation during system upgrade and rollback.
  • Grow in parallel with the implementation of the system being tested so that developers have feedback on their work.
  • Be usable by both the development and the test staff. Give them both a UI / window into the operation of the system being tested and the test platform's operation.

CONTACT US

Write to us if you have any questions or suggestions!
