Author: Max Melzer
15 February 2022

Running Website Tests in Parallel with QF-Test

From time to time, customers approach us asking if QF-Test can be used to run tests in parallel. The advantages are obvious: running multiple tests at once can reduce overall test execution time significantly (as long as raw system performance is not the bottleneck) while saving a lot of infrastructure overhead, because everything can run on one machine.

Our answer for years has been that this "is possible to a limited degree", which is still true. But in this blog post, I want to dive a little deeper and see how, and to what degree exactly, this can be done.

The hardest problem with running anything in parallel on one machine is the I/O interfaces: your computer usually has only one mouse and keyboard attached for input, and only one desktop running at a time to display output.

This means your test suites must stay within the following boundaries to allow them to run in parallel:

  • Tests should not rely on hard system events for simulating keyboard or mouse input.
  • Tests must not rely on an exclusive desktop or user session.
  • Tests must be able to run in arbitrary order and must not depend on one another.

Running a Test Suite in Multiple Threads

Launching a QF-Test test suite multiple times in parallel is the easy part: You can use the batch mode on the command line:

qftest -batch -threads 3 -run /Path/to/test/suite.qft

This will kind of work, but the results will most likely be a jumbled mess, since each thread will try to launch and control the same client, which will eventually fail in some way. We still have more work to do.

Starting Separate Browser Clients per Thread

First we need to be able to launch separate browser clients for each thread. QF-Test separates clients by their name, so you can use the built-in ${qftest:thread} variable when naming your client: In the "Variable Definitions" section of your test suite root step, set "client" to something like web${qftest:thread}. (Storing the client name in a variable called $(client) is only a convention, so you have to make sure your test suite actually follows it.)

Then, you should use something like the following Jython server script to configure every client browser with its own separate profile for settings, cookies, and caches:

 

import tempfile

# Name of the client controlled by the current thread, e.g. "web1"
client = rc.lookup("client")
# Give each client its own browser profile directory below the system temp directory
profilepath = "%s/%s-profile" % (tempfile.gettempdir(), client)
rc.setLocal("profilepath", profilepath)
print "Using profile path %s for client %s" % (profilepath, client)

 

Don't forget to use the new $(profilepath) variable in the "Start Web Engine" node to have each browser use a different temporary directory for profile data.

Browsers without User Session in Headless Mode

If you now launch your test suite in batch mode, you will see multiple browser windows showing up and all running through your tests. This is nice, but there is still a big problem: the browser windows all "steal" the window focus from each other on every interaction, which will very likely hurt test stability. To work around this, you'll need to use a headless browser, i.e. a browser which runs invisibly in the background, without requiring an actual window or desktop.

To use a headless browser in your tests, set the browser type to "headless" (or "headless-firefox", "headless-chrome", or "headless-edge") in the "Browser type" option of your "Start Web Engine" step. Be mindful that headless browsers don't support all the features that normal browsers do.

Alternatively, you can experiment with setting rc.setOption(Options.OPT_WEB_ASSUME_HEADLESS, true) in a SUT script between "Start Web Engine" and "Open Browser Window" to tell QF-Test to treat a normal browser window as headless.
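
A minimal sketch of such a SUT script in Jython could look like this (an illustration only; depending on your QF-Test version you may have to make the Options class available in the script, as described in the manual):

# SUT script between "Start Web Engine" and "Open Browser Window":
# tell QF-Test to handle the browser window as if it were headless.
rc.setOption(Options.OPT_WEB_ASSUME_HEADLESS, True)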

If you run your test suite again, you will see ... nothing (which is a good thing in this case). You should probably add some debug logging to your suite in strategic places to make sure everything runs as expected. You can write to stdout at any time via good old println (or print() in Jython).
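
For example, a tiny Jython server script like the following prints which thread and client it is currently running in (a sketch only; the message text is made up, and the ${qftest:thread} lookup follows the same pattern used elsewhere in this post):

# Minimal debug output, written to stdout / the terminal in batch mode
thread = rc.lookup("qftest", "thread")
client = rc.lookup("client")
print "Thread %s: reached checkpoint in client %s" % (thread, client)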

Dividing Your Tests Between Threads

Until now, we have only been running the same tests in multiple clients. This is not the kind of parallel testing we're after here. We'd like each of our threads to handle a subset of tests, not have all tests run on each thread.

For this you will need to adjust your test structure itself. It is completely up to you how you want (or can) split up your tests. One very simple way is to use the modulo operator % to split your test cases evenly between your threads. You can use the special variables ${qftest:thread} and ${qftest:threads} to figure out which thread you are in and how many threads there are in total, and then enclose each test case in an "if" conditional with a statement like this:

$(test_case_index) % ${qftest:threads} == ${qftest:thread}
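
To see what this condition does, here is a plain Python illustration (not a QF-Test script; the index and thread numbering are just for demonstration): with three threads, consecutive test case indices are assigned round-robin.

threads = 3
for test_case_index in range(6):
    print "test case %d -> thread %d" % (test_case_index, test_case_index % threads)
# test case 0 -> thread 0, test case 1 -> thread 1, test case 2 -> thread 2,
# test case 3 -> thread 0, and so on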

Or, more elegantly, you could implement a Dispatcher via a TestRunListener at the beginning of your test suite with the following Groovy server script:

 

synchronized(binding) {
    if (! binding.hasVariable("handledSteps")) {
        // Shared, thread-safe map of test cases already claimed by one of the threads
        binding.setVariable("handledSteps", new java.util.concurrent.ConcurrentHashMap())
        // Per-thread counter of how often each test case node has been entered
        binding.setVariable("localStepIndices", ThreadLocal.withInitial({[:]}))
        // Register the TestRunListener
        notifications.addObserver(notifications.NODE_ENTERED, { args ->
            def step = args.event.getNode()
            if (step.getType() == "TestCase") {
                def thread = Thread.currentThread()
                def handled = binding.getVariable("handledSteps")
                def id = step.getUid()
                def localStepIndices = binding.getVariable("localStepIndices")
                def localStepIndex = localStepIndices.get().get(id)
                localStepIndex = localStepIndex == null ? 0 : localStepIndex+1
                localStepIndices.get().put(id,localStepIndex)
                id = "${id}#${localStepIndex}"

                def existing = handled.putIfAbsent(id,thread)
                if (existing != null) {
                    // Skip test case ${id} in ${thread}, it is already being executed by ${existing}
                    rc.skipTestCase()
                } else {
                    // Execute test case ${id} in ${thread}
                }
            }
        })
    }
}

 

We Did It! Or Did We?

If you added some print statements to your test suite, you should now see that QF-Test will run your test batches out of order. Congratulations, you did it!

Still, there remain some caveats.

Firstly, when editing or adding tests in your suite, you will have to be very careful to not break any of the boundaries established at the start of this post.

Also, the algorithm for splitting your tests between threads is far from optimal from a performance perspective. Different test batches could take very different amounts of time, resulting in idle threads. A more complete solution would include some dispatch scheduler to assign test batches to free threads on-demand.
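
To illustrate the idea, here is a conceptual sketch in plain Python (not a QF-Test script; batch names and the worker count are made up): idle workers pull the next batch from a shared queue instead of getting a fixed slice assigned up front.

import threading
import Queue  # Python 2 / Jython module name

# Hypothetical test batches waiting to be executed
batches = Queue.Queue()
for name in ["login", "search", "checkout", "admin", "reports"]:
    batches.put(name)

def worker(worker_id):
    while True:
        try:
            batch = batches.get_nowait()
        except Queue.Empty:
            return  # queue drained, this worker is done
        print "Worker %d runs batch %s" % (worker_id, batch)
        # ... actually run the batch here ...

workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()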

Working around all of these caveats can easily get very complicated. So complicated that it may in fact be faster, cheaper, and more stable to run multiple instances of QF-Test on separate machines or VMs instead, and use some existing dispatch infrastructure.

So in the end, we come back to where we started. Parallel test execution inside of QF-Test really is possible to a limited degree. But you should investigate whether it's a good fit for your test suites and your System Under Test before diving in head-first and investing a lot of time and effort.

Addendum 1: Licensing

If you want to run multiple instances of QF-Test in parallel, you'll need an individual license for every instance, just like when QF-Test is running in a CI/CD environment.

When launching in batch mode, QF-Test will let you know if your licenses are insufficient for the number of threads.

Addendum 2: Launching Parallel Tests in Daemon Mode

Depending on your environment, you may also want to try QF-Test's daemon mode to run multi-threaded tests remotely.

To start the daemon process, run the following command on the host:

qftest -daemon -daemonport=3544

Then, you could trigger parallel tests on the host via the following Jython server script:

 

from de.qfs.apps.qftest.daemon import DaemonRunContext
from de.qfs.apps.qftest.daemon import DaemonLocator
from java.util import Properties

host = "localhost"
port = 3544
testcase = "%s" % (rc.lookup("qftest","suite.path"))
timeout = 60 * 1000
# Variable bindings passed to each test run (none needed for this example)
bindings = Properties()

def calldaemon(host, port, testcase, timeout=0, count=1):
    daemon = DaemonLocator.instance().locateDaemon(host, port)
    trd = daemon.createTestRunDaemon()
    
    contexts = trd.createContexts(count)
    if not contexts:
        raise UserException("Could not create %d run contexts. Not enough licenses available?" % count)
        
    # start tests
    for i in range(0,count):
        contexts[i].runTest("%s#Run web tests.Test %d" % (testcase, i % 2), bindings)
        
    # wait for end
    for i in range(0,count):
        if not contexts[i].waitForRunState(DaemonRunContext.STATE_FINISHED, timeout):
            # Run did not finish, terminate it
            contexts[i].stopRun()
            contexts[i].rollbackDependencies()
            if not contexts[i].waitForRunState(DaemonRunContext.STATE_FINISHED, 5000):
                # Context is deadlocked
                rc.logError("No reply from daemon RunContext.")
            rc.logError("Daemon call did not terminate and had to be stopped.")
        result = contexts[i].getResult()
        log = contexts[i].getRunLog()
        rc.addDaemonLog(log)
        contexts[i].release()
        
    return result

result = calldaemon(host, port, testcase, timeout, 4)
rc.logMessage("Result from daemon: %d" % result)

 


For more tips and information about parallel load testing, see the user manual chapter "Performing GUI-based load tests".
