
Mailing List - Entries of 2012



Re: [QF-Test] How to perform QF-Test suite (formal) reviews? -- Some ideas...


  • Subject: Re: [QF-Test] How to perform QF-Test suite (formal) reviews? -- Some ideas...
  • From: Martin Moser <martin.moser@?.de>
  • Date: Wed, 01 Feb 2012 17:13:11 +0100

Hi all,

We at QFS have rules similar to those Klaus-Peter mentioned in his e-mail. For example, we try to follow the Java naming scheme for packages and procedures.

I would like to add some more aspects to Klaus-Peter's list:

1.) Besides a naming scheme for procedures, variables etc. it's valuable to decide how you treat procedure parameters. Do you want procedures to declare all required parameters in the procedure header? Should they have default values? When calling a procedure, should testers specify every parameter or only those that actually differ from the defaults?

2.) What information do you want to put into the report? You can use Test-step nodes, the name attribute of check nodes or a call to rc.logMessage with report=true to make your HTML/XML reports more readable, so you need to clarify whether using those features should be common practice in your team. In our case we prefer a very chatty HTML report.
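
As a minimal Groovy sketch of the rc.logMessage variant (the message text is invented, and I'm assuming the positional boolean form of the report flag here -- please verify the exact signature in the manual):

        // Goes into the run-log only:
        rc.logMessage("Filled in the customer form")
        // Goes into the run-log AND the HTML/XML report:
        rc.logMessage("Customer form submitted successfully", true)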

3.) Use at least the @author tag for test-cases as well, so that you can track down who implemented the automated test-case in QF-Test.

4.) If you have an external test spec you can add HTML links to the Comment attribute of the test-case or test-set nodes in QF-Test in order to reach this information from the QF-Test reports. You can also consider generating "skeleton" test-suites out of your test management system, as our current integration mechanisms for imbus TestBench or TestLink already do.

5.) If you work on several projects in parallel, try to figure out which project requires which QF-Test configuration, especially regarding component recognition. If you don't want to maintain and distribute separate option files, you could also consider changing the options by script in your start sequence.

6.) As Klaus-Peter already mentioned, try to establish something like your own custom test patterns so that you check things in a consistent way, e.g. checking for the existence of a result label should be done the same way in all your dialogs. Use procedures for this.

7.) If you work with several test-suites for several modules, try to keep your package structure consistent, e.g. all modules contain a "check" package for checks. In order to avoid duplicate procedure names in your structure you can also think about introducing a top-level package named like the module itself, as it's done in qfs.qft.

8.) Describe global variables, especially global variables set by Jython and Groovy scripts. Ask yourself whether they are required and when they should be deleted or reset.
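
As a Groovy sketch of how to make the lifecycle of such a global explicit ("loggedInUser" is just an invented name):

        // In a setup node: create the global and document it right here
        rc.setGlobal("loggedInUser", "alice")

        // In the matching cleanup node: reset it so the next test-case
        // cannot accidentally depend on a stale value
        rc.setGlobal("loggedInUser", "")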


I hope I could add some useful points to your list.


Best Regards,
Martin


--On Tuesday, January 24, 2012 15:30:19 +0100 "Berg, Klaus-Peter" <klaus-peter.berg@?.com> wrote:

Hi all,

in the following post I have tried to collect and present some ideas on a
formal QF-Test suite review for *functional* tests.

1. Because you can do load testing (and even performance testing) with
QF-Test as well, it is necessary to specify first which kind of test
suite is being reviewed. Usually this will be a *functional* test suite.

2. You will need a test specification (test spec) store, e.g., HP Quality
Center or another tool that is able to handle test specs. This
specification should have a specific outline that addresses all necessary
topics. In Sara Ford's blog I found a good short description of what
should be inside such a spec and what the difference between a test plan
and a test spec is:
http://saraford.net/2004/10/28/developing-a-test-specification/ You can
find other good descriptions on the Internet as well, but in general
every company has developed its own standard for this kind of document.

3. It would be good to have a kind of "link" between the test spec and
your QF-Test suite with test-sets and test-cases. Naming the test suites,
test-sets/cases according to the naming "format" of your test spec
manager tool may be an option but this may also lead to quite "poor" or
"ugly" test names like "23580_01_02_01_Project Editor". Another option
would be to have a tool that can analyze QF-Test suite
test-set/test-cases names (.qft files are "only" XML ;-)) and make
connections between specs and these names in order to have some kind of
"traceability"! (The IEEE Standard Glossary of Software Engineering
Terminology defines *traceability* as "the degree to which a relationship
can be established between two or more products of the development
process, especially products having a predecessor-successor or
master-subordinate relationship to one another." [IEEE-610]).
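
Since .qft files are XML, such an analysis tool could start out as a few
lines of Groovy. A rough sketch (the element and attribute names are
assumptions -- inspect your own .qft files for the real structure):

        // Print the names of all test-case nodes of a suite
        def suite = new XmlSlurper().parse(new File("MySuite.qft"))
        suite.depthFirst()
             .findAll { it.name() == "TestCase" } // assumed element name
             .each { println it.@name }           // assumed name attribute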


Given a good test specification, now let us take a closer look at the
test "implementation" using QF-Test.

4. It is good practice to keep a common test layout inside your file
system, held as a working copy under a source code control system like
Subversion, CVS, or ClearCase.

What should be versioned? At least:
- your QF-Test suites that contain the actual test-sets/test-cases,
- the corresponding property files, e.g., with login information, etc.
  (for QF-Test load property nodes),
- other property files that may be read and analyzed by test scripts in
  Groovy or Jython,
- QF-Test suites that only contain common procedures ("libraries"),
- (large) Groovy and Jython files that implement QF-Test custom checkers
  or that are helper classes for your procedures.

Should QF-Test itself be versioned? There are pros and cons. At least for
regression tests you need to know which version of QF-Test you were
running and which version of the SUT you have tested. Because we are
testing GUIs here, it is quite common to have the SUT version number
available inside the SUT (e.g., on the opening screen and in Help->About).
Perhaps with some assistance from your development group you can access
this version number by means of a SUT script (quite often this version
number is a public static final String).
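
In Groovy such a SUT script could look roughly like this (the class and
field names are pure placeholders for whatever your development group
provides):

        // Read a public static final String version field via reflection
        def versionClass = Class.forName("com.example.myapp.Version") // placeholder
        def version = versionClass.getField("VERSION").get(null)      // placeholder
        rc.logMessage("SUT version under test: " + version, true)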

Besides your scripts, you will need to put specific engineering
information under version control if your SUT has to be prepared in ways
your tests cannot perform themselves or simply require as a precondition.

When thinking about a file/folder structure it seems reasonable to
distinguish between a "base" directory with common files for most or all
of your test suites and the "real" test suites that contain your actual
tests; the latter are usually further subdivided according to your
business domain and the kind of tests that are executed (functional
test/load test/performance test).
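
One possible layout along those lines (just an illustration, not a
prescription):

        tests/
            base/              common library suites, shared property files, scripts
            moduleA/
                functional/    functional test suites for module A
                load/          load test suites for module A
            moduleB/
                functional/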

There is a special place inside the QF-Test installation directory to
store library files; e.g., for Groovy you will use the place described in
chapter "13.7.3 Groovy packages" and chapter "29.1.6 Library path -
Directories holding test-suite libraries" of the current QF-Test manual.

Analyzing the conformance of a "test" to such a structure should also be
part of a formal test review.

5. For large Groovy or Jython files it is necessary to have a common
header comment, just like for Java files. This comment should contain a
short description of the file's purpose, a copyright remark, an @author
tag, and an @since tag describing the specific Groovy/Jython or QF-Test
version that is (at least) required to run this script (you know, every
QF-Test version is bound to a specific Groovy and Jython version; if you
use language features in your script that are only available since
version x.y.z+ you will have to document that). BTW: For Groovy you can
just open the Groovy terminal from the QF-Test menu bar and type:
can just open the Groovy terminal from the QF-Test menu bar and type:

groovy:000> println "Running Groovy version: " +
org.codehaus.groovy.runtime.InvokerHelper.getVersion()
Running Groovy version: 1.7.10
===> null
groovy:000>

in order to find out the current Groovy version inside QF-Test.

6. Not only your test suites and test-sets/cases must have meaningful
names; you will also need a naming scheme, especially for your procedures
and variables. You can either follow the conventions used by the QF-Test
base library "qfs.qft" -- including its procedure comment conventions! --
or create your own naming scheme; but whatever you do, do it in a
consistent way. So, IMO it would make sense to follow the Sun Java naming
conventions for procedure and package names as well as for variables and
constants (this also applies to your scripts, e.g. Groovy scripts! -- and
should be reviewed). In the long run it will provide better readability
and maintainability.

When creating libraries you can use qfs.qft as a template
(...qfs/qftest/qftest-3.x.x/include/qfs.qft). When opening this library
you will see that it contains only 3 types of nodes:
- Procedures
- Extras (empty)
- Windows and components

So, during the walkthrough, you should check if your own libraries follow
this scheme, i.e., library suites do NOT contain any test-sets etc.!
However, depending on the kind of procedures they contain, they MAY have
a "Windows and components" tree or not. You can express this distinction
by naming your libraries like this:
- libraries WITH a "Windows and components" tree:
  <MyLib>ComponentsAndProcedures.qft
- libraries with an EMPTY "Windows and components" tree:
  <MyLib>Procedures.qft

7. Be *consistent* in your naming: e.g., do NOT call a package "Util" in
one suite and "Utils" in another library suite.

8. If your specs tell you that there is some screen content (e.g. text
field content) to check, then we should look for adequate "check text"
nodes (or rc.checkEqual() etc. in scripts) in your suites!
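
In a script, the equivalent of such a "check text" node could look like
this Groovy SUT script sketch ("customerNameField" is a placeholder for
your recorded component id, and I'm assuming the (value, expected, name)
argument order of rc.checkEqual -- please verify against the manual):

        // Compare the content of a text field with the value from the spec
        def field = rc.getComponent("customerNameField")
        rc.checkEqual(field.text, "John Miller", "customer name after save")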

9. If your test spec requires some windows/dialogs to appear or disappear
after mouse button/menu clicks, it is NOT enough to "check" that there
are no exceptions during replay! You will have to look for a "Wait for
component to appear/disappear" node or some other kind of "check" for the
existence or non-existence of a window after a timeout! E.g., if your top
window is a JDesktopPane you can check which internal frames are
currently open.
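
As an SUT script sketch of the JDesktopPane case (the component id
"desktop" is a placeholder for your recorded id):

        // List the internal frames currently open on the desktop pane
        def desktop = rc.getComponent("desktop")
        desktop.allFrames.each { frame ->
            rc.logMessage("Open internal frame: '${frame.title}', visible=${frame.visible}")
        }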

10. If your test spec defines specific pre- and postconditions for a test
I would expect to see these conditions expressed as setup/cleanup nodes
and dependencies. If you are using QF-Test dependencies (in the long run,
you will ;-)) please review whether they follow the guidelines given in
the QF-Test manual (chapter 12) and especially look for proper
cleanup/error/exception handling in this area!

11. During a walkthrough you can click on the test suite's top node
inside QF-Test and check the suite's include files and reverse includes
(dependencies) - looking for correct relative names, especially with
respect to a common library directory ("library path" settings) as
mentioned in topic (4), and checking whether some suites are missing or
out-of-date (not actually used, only copy/paste leftovers ;-)).

12. When looking at test steps it is worth identifying LARGE sets of
steps that use if/then/else/loop nodes together with lots of QF-Test
procedure calls to access and evaluate table columns etc. If your company
policy is "do not use scripting if ever possible" -- then just leave them
untouched -- otherwise check if you can transform the whole set into one
clean script that sets a global return variable, or even check the
possibility of creating a custom check (chapter 39.3 "Implementing custom
checks with the Checker interface"). It could be worth the effort with
respect to readability and usability (you can use a custom check like any
other QF-Test check, even with the "Record checks" tool).
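
As an illustration of the script variant, here is an SUT script sketch
that could replace a chain of if/then/else nodes scanning a JTable
("resultTable", the column index and the search value are invented for
the example):

        // Scan the first column of a JTable for a given value and hand the
        // result back to the surrounding test via a global variable
        def table = rc.getComponent("resultTable")
        def found = false
        for (row in 0..<table.rowCount) {
            if (table.getValueAt(row, 0) == "Miller") {
                found = true
                break
            }
        }
        rc.setGlobal("customerFound", found.toString())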

13. Check your error/exception logging: Here are three rules I recommend
for Groovy scripts and QF-Test procedure parameters:

a) In Groovy SUT and server scripts you can use a catch block like this:

        try {
            float test = 2.0/0.0  // throws Division by zero exception
            println "Division OK" // should never happen
        }
        catch (Exception e) {
            def message = "Expected exception was thrown"
            println("$message: ${e.message}")     // write to QF-Test terminal or Java console
            rc.logError("$message: ${e.message}") // write to QF-Test run-log
        }

Perhaps you would expect "System.err.println" in the catch clause instead
of "println", because we are reporting an EXCEPTION. However, if you
execute

        binding.variables.each { key, value -> println "$key=$value" }

you will see that only

        out=java.io.PrintStream@618ac5

is bound, NOT "err". Maybe this is the reason for the behaviour mentioned
above... at least in server scripts... but I'm not sure...

b) In CustomChecker Groovy code, where the variable "rc" is not fully
available, you can write

        qf.logError("$message: ${e.message}")

in the catch clause instead (see QF-Test manual "37.7 The qf module").

c) In QF-Test procedure call parameters you can use a 'message' parameter
like this to log the exception text:

        Exception while running SUT: $[rc.getCaughtException().getMessage()]

or

        Exception while running SUT: $[rc.getCaughtException().getLocalizedMessage()]

but only if the procedure is called inside a catch block where
rc.getCaughtException() is available and != null!

Example: qfs.utils.writeMessageIntoFile(message, file, true).

14. Variables "from the outside": If you are running QF-Test from the
command line, inside an Ant or Maven script, or controlled by a test
automation framework, it is necessary to review which variables are
handed over, whether these variables are located in the right place
inside a test suite, and whether their names are correct.
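
For reference, a batch invocation that hands over such variables might
look like this (suite and variable names are invented; see the command
line reference in the manual for the exact options):

        qftest -batch -variable user=alice -variable environment=staging MySuite.qft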


It would be nice to share this proposal and collect ideas/comments from
other users of the forum (or from people at QFS itself ;-))

Best regards,
Klaus-Peter Berg






--
Martin Moser                           martin.moser@?.de
Quality First Software GmbH            http://www.qfs.de
Tulpenstr. 41                          Tel: +49 8171 38648-14
DE-82538 Geretsried                    Fax: +49 8171 38648-16
GF: Gregor Schmid, Karlheinz Kellerer  HRB München 14083