[Webtest] New Warning test result

"Nate Oster" <noster@numbersix.com>
Fri, 18 May 2007 15:40:09 -0400


Marc and WebTest committers,

Congrats on WebTest 2.5!  The reporting improvements are fantastic, and
we're already switching over to the mostly-self-documenting webtest.xml
file for running our scripts.  Now that the results reporting code is
stabilized again, I think it might be time to reconsider adding explicit
support for Warning as a test result.

There are a number of recurring issues and current JIRA enhancement
requests that would be greatly simplified by introducing a new result
type to WebTest.  For example, JIRA WT-244 ("Perform a set of steps on
every response") would be pretty simple to implement with some Groovy
code if you had a simple way of *not failing* a test case when one of
the validation steps failed.  You could simply recast the "failure"
exception as a "warning" instead.
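
To make that concrete, here's roughly what I have in mind, using the
same groovyScript extension trick Marc shows below (I'm assuming
StepFailedException and GroupStep are the right classes to hook;
corrections welcome):

<groovyScript description="sketch: downgrade failures to warnings">
import com.canoo.webtest.engine.StepFailedException

public class WarnOnFailure
  extends com.canoo.webtest.steps.control.GroupStep
{
  public void doExecute() throws Exception
  {
    try
    {
      // run the nested validation steps
      super.doExecute();
    }
    catch (final StepFailedException e)
    {
      // today the best we can do is log and swallow the failure;
      // a first-class Warning result would surface it in the report
      log('WARNING: ' + e.getMessage());
    }
  }
}

project.addTaskDefinition('warnOnFailure', WarnOnFailure)
</groovyScript>

Wrapping the WT-244 per-response checks in <warnOnFailure> would then
record problems without killing the test case - except that today the
"warning" is only a log line, which is exactly why a real Warning
result would help.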

Similarly, my current project wants to create a "Workaround" step, so
that we can mark code that's expected to fail because of a known bug and
provide an alternative set of steps.  However, we don't want these
"patched" test scripts to simply PASS(!) - we want to indicate that
something is "suspect" about them.  A simple Warning result would be
perfect for this.

So, "Warning" would be a lot like JUnit's "Error" result.  It's not a
"Failure" of the test, but it's not the expected result either.  I know
that sounds odd with a declarative-procedural test automation tool, but
it actually exposes a generic mechanism and consistent reporting that
could really improve the design, maintainability, and extensibility of
our tests.

I'm especially interested in how easy it would be to take advantage of
this result type by extending Step with a Groovy macro.  The "Workaround
step" idea, for example, would be easy to implement by extending
GroupStep with a little Groovy.  We tried to introduce this ourselves,
but the results reporting code was undergoing so much change, and we
were so unfamiliar with the WebTest internals, that it proved difficult
to keep up.
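
For example (a sketch, assuming GroupStep is the right base class; the
"suspect" marking is the part that's missing today):

<groovyScript description="sketch: workaround step">
public class Workaround
  extends com.canoo.webtest.steps.control.GroupStep
{
  String bug   // id of the known bug this workaround patches over

  public void doExecute() throws Exception
  {
    // run the alternative steps in place of the ones the bug breaks
    super.doExecute();
    // today we can only log; a Warning result would let the report
    // mark this test "suspect" instead of a plain PASS
    log('WORKAROUND active for bug ' + bug);
  }
}

project.addTaskDefinition('workaround', Workaround)
</groovyScript>

Used as <workaround bug="..."> ... alternative steps ... </workaround>,
a Warning result would let the report flag every place we've patched
around a known bug.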

What do you think?  Should I raise a Feature request to introduce a
"Warning" test result?

Thanks!
Nate Oster

To: webtest@lists.canoo.com
From: Marc Guillemot <mguillemot@yahoo.fr>
Date:  Mon, 16 Apr 2007 09:44:22 +0200
Subject: [Webtest] Re: Resume after failing step
Reply-To: webtest@lists.canoo.com
Reply-To: Marc Guillemot <mguillemot@yahoo.fr>

Hi Christoph,

if you really want to have "non failing" errors (I still believe that it
is better to split your tests, but I don't know all the details of your
case), you can easily extend WebTest with something like this:

<groovyScript description="define custom step">
public class MySafeFailureContainer
  extends com.canoo.webtest.steps.control.GroupStep
{
  public void doExecute() throws Exception
  {
    try
    {
      // run the nested steps normally
      super.doExecute();
    }
    catch (final Exception e)
    {
      // just dismiss the failure so the webtest continues
    }
  }
}

project.addTaskDefinition('mySafeFailureContainer',
  MySafeFailureContainer)
</groovyScript>

and then you can use this custom task inside your steps just like any
other:
<mySafeFailureContainer>
  ... some steps ...
</mySafeFailureContainer>

Here groovyScript refers to the Groovy Ant task (not the WebTest step)
as defined by the not-yet-but-very-soon-documented utility webtest.xml
from the latest builds. The task definition must naturally occur before
the task is used.
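
Roughly (just a sketch, the way you pull in the utility webtest.xml may
differ in your setup):

<project name="myTests" default="test">
  <!-- makes the groovyScript task (and the WebTest tasks) available -->
  <import file="webtest.xml"/>

  <target name="test">
    <!-- 1. define the custom step -->
    <groovyScript description="define custom step">
      ... class definition and addTaskDefinition as above ...
    </groovyScript>

    <!-- 2. only then use it -->
    <webtest name="safe failure demo">
      <mySafeFailureContainer>
        ... some steps ...
      </mySafeFailureContainer>
    </webtest>
  </target>
</project>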

As already said, this will generate reports that are difficult to
analyse. On the other hand, this shows how easy it is to extend WebTest
;-)

Marc.

Marc Guillemot wrote:
> Seems to be similar to Denis' case and would make sense (if wisely
> used). Perhaps it can be implemented based on WT-251, but this requires
> some changes in the xml report format, as currently only one
> error/failure is saved.
>
> Marc.
>
> Michal wrote:
>> Marc,
>>
>> when I started to work with webtest I also missed an extended
>> haltonfailure feature.
>> For example, in our tests, after we load a page we do common
>> verifications for valid urls, 404 errors, we load css, images, etc...
>> If this test fails it is not critical to the whole <webtest> section;
>> we would like to know about the error, but at the same time we would
>> like to continue testing.
>>
>> Therefore it would be excellent if we were able to create a step
>> like <verifyTitle text="my title" onfailure="continuetests" />, which
>> would indicate that this step is not critical for the whole webtest,
>> and that subsequent steps should be executed as if <verifyTitle> had
>> passed even if it failed.
>>
>> What do you think?
>>
>> Marc Guillemot wrote:
>>> haltonfailure and haltonerror both allow you to configure whether
>>> the build should fail or not, i.e. whether Ant should stop the
>>> execution after </webtest>.
>>> In both cases the execution of the steps is stopped after the
>>> failing step.
>>>
>>> Generally I find it quite strange to continue a test when something
>>> has failed. How should the results be judged at the end? Correct or
>>> not correct?
>>> Nevertheless it may make sense in some cases. Denis mentioned a case
>>> to me last week where he had some <verifyXxxx/> steps to check the
>>> Italian translation of some text. The application behaved correctly,
>>> but some of the texts were wrong, and it would have been more useful
>>> to complete the execution and show all the wrong texts at once.
>>>
>>> @Christoph
>>> is your case comparable with Denis' one?
>>> Perhaps you should refactor your tests. If you have a long way to go
>>> to a particular page and want to perform different actions from
>>> there, what about splitting it? A first test would just go to the
>>> particular page and store the session information in a file
>>> (probably the cookies that hold the session). Then each single "test
>>> from the particular page" could just reuse this session by setting
>>> the cookie value from what is read from the file. This would give
>>> you far better information in the report: you would see precisely
>>> which single "test from the particular page" works and which fails.
>>>
>>> Marc.