[Webtest] New Warning test result

Marc Guillemot <mguillemot@yahoo.fr>
Tue, 5 Jun 2007 03:50:17 -0700 (PDT)


Hi Nate,

Oops, I should have been faster to answer, sorry.

I don't understand the relationship between WT-244, non-breaking failures,
and Groovy.

We've discussed here for some time (I think this was Denis' suggestion)
adding the possibility to collect verification failures, so that a <webtest>
fails only at the end and all verification failures can be displayed at
once. I don't know whether an issue has been opened for that or not. If I
understand you correctly, you would like to have such results shown as,
let's say, yellow (because neither green nor red) in the overview report? I
have to think more about that, but it could make sense to me too. I've
already had tests where I had to implement a workaround due to a known issue
in the application, and where I wasn't satisfied to see them green (thanks
to the workaround) in the report.
The report format surely needs to be changed for that, because currently
only one error/failure is possible per <webtest> and the result of a
<webtest> can only be successful or failed. So you can't achieve this just
by extending Step.
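
To illustrate the "collect and fail at the end" idea in plain Groovy (a
sketch only, not WebTest API; the page title and the checks are made up):

def pageTitle = 'wrong title'              // stand-in for the page under test
def checks = [
    title    : { assert pageTitle == 'my title' },
    notEmpty : { assert pageTitle.size() > 0 },
]
def failures = []
checks.each { name, check ->
    try {
        check()
    } catch (AssertionError e) {
        failures << "$name: ${e.message}"  // collect instead of aborting
    }
}
if (failures) {
    throw new AssertionError(
        "${failures.size()} verification(s) failed:\n${failures.join('\n')}")
}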

Please open a new issue for that.

Marc.


Nate Oster wrote:
> 
> 
> Marc and WebTest committers,
> 
> Congrats on WebTest 2.5!  The reporting improvements are fantastic, and
> we're already switching over to the mostly-self-documenting webtest.xml
> file for running our scripts.  Now that the results reporting code has
> stabilized again, I think it might be time to reconsider adding explicit
> support for Warning as a test result.
> 
> There are a number of recurring issues and current JIRA enhancement
> requests that would be greatly simplified by introducing a new result
> type to WebTest.  For example, JIRA WT-244 "Perform a set of steps on
> every response," would be pretty simple to implement with some Groovy
> code if you had a simple way of *not failing* a test case when one of
> the validation steps failed.  You could simply recast the "failure"
> exception as a "warning" instead.
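> 
> Roughly what I mean, as a plain-Groovy sketch (illustration only: there
> is no Warning result in WebTest today, and the 'warnings' list and the
> page content below are made up):
> 
> def response = '<html>wrong content</html>'   // stand-in for the page
> def warnings = []
> try {
>     assert response.contains('expected text')  // stand-in validation step
> } catch (AssertionError e) {
>     warnings << e.message  // downgrade: record a warning instead of failing
> }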
> 
> Similarly, my current project wants to create a "Workaround" step, so
> that we can mark code that's expected to fail because of a known bug and
> provide an alternative set of steps.  However, we don't want these
> "patched" test scripts to simply PASS(!) - we want to indicate that
> something is "suspect" about them.  A simple Warning result would be
> perfect for this.
> 
> So, "Warning" would be a lot like JUnit's "Error" result.  It's not a
> "Failure" of the test, but it's not the expected result either.  I know
> that sounds odd with a declarative-procedural test automation tool, but
> it actually exposes a generic mechanism and consistent reporting that
> could really improve design, maintainability, and extensibility of our
> tests.  
> 
> I'm especially interested in how easy it would be to take advantage of
> this result type by extending Step with a Groovy macro.  The "Workaround
> step" idea, for example, would be easy to implement by extending
> GroupStep with a little Groovy.  We tried to introduce this ourselves,
> but the results reporting code was undergoing so much change, and we're
> so unfamiliar with the WebTest internals, that it proved difficult to
> keep up.
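> 
> Roughly the shape we tried, as a sketch only (modeled on the
> groovyScript example quoted below; the setResultWarning() hook is made
> up, since no such reporting API exists yet):
> 
> <groovyScript description="define workaround step (sketch)">
> public class WorkaroundStep
>   extends com.canoo.webtest.steps.control.GroupStep
> {
>   public void doExecute() throws Exception
>   {
>     super.doExecute(); // run the alternative steps for the known bug
>     // hypothetical hook: mark this result as a Warning rather than PASS
>     // getContext().setResultWarning("known bug XYZ worked around");
>   }
> }
> project.addTaskDefinition('workaround', WorkaroundStep)
> </groovyScript>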
> 
> What do you think?  Should I raise a Feature request to introduce a
> "Warning" test result?
> 
> Thanks!
> Nate Oster
> 
>  
> To: webtest@lists.canoo.com
> From: Marc Guillemot <mguillemot@yahoo.fr>
> Date:  Mon, 16 Apr 2007 09:44:22 +0200
> Subject: [Webtest] Re: Resume after failing step
> Reply-To: webtest@lists.canoo.com
> Reply-To: Marc Guillemot <mguillemot@yahoo.fr>
> 
> Hi Christoph,
> 
> if you really want to have "non failing" errors (I still believe that it
> is better to split your tests, but I don't know all the details of your
> case), you can easily extend WebTest with something like this:
> 
> <groovyScript description="define custom step">
> public class MySafeFailureContainer
>   extends com.canoo.webtest.steps.control.GroupStep
> {
>   public void doExecute() throws Exception
>   {
>     try
>     {
>       super.doExecute(); // execute the nested steps as usual
>     }
>     catch (final Exception e)
>     {
>       // just dismiss: any failure of the nested steps is swallowed
>     }
>   }
> }
> 
> project.addTaskDefinition('mySafeFailureContainer',
> MySafeFailureContainer)
> 
> </groovyScript>
> 
> and then you can use this custom task inside your steps just like any
> other:
> <mySafeFailureContainer>
>   .. some steps...
> </mySafeFailureContainer>
> 
> Here groovyScript refers to the Groovy Ant task (not the WebTest step)
> as defined by the not-yet-but-very-soon-documented utility webtest.xml
> from the latest builds. The definition must naturally occur before the
> task is used.
> 
> As already said, this will generate reports that are difficult to
> analyse. On the other hand, it shows how easy it is to extend WebTest
> ;-)
> 
> Marc.
> 
> Marc Guillemot wrote:
>> Seems to be similar to Denis' case and would make sense (if wisely
>> used). Perhaps it can be implemented based on WT-251, but this requires
>> some changes in the XML report format, as currently only one
>> error/failure is saved.
>> 
>> Marc.
>> 
>> Michal wrote:
>>> Marc,
>>>
>>> when I started to work with webtest I also missed an extended
>>> haltonfailure feature.
>>> For example, in our tests, after we load a page we do common
>>> verifications for valid URLs, 404 errors, loading of CSS, images, etc...
>>> If such a verification fails it is not critical to the whole <webtest>
>>> section; we would like to know about the error but at the same time we
>>> would like to continue testing.
>>>
>>> Therefore it would be excellent if we were able to create a step
>>> like <verifyTitle text="my title" onfailure="continuetests" />, which
>>> would indicate that this step is not critical for the whole webtest and
>>> that the following steps should be executed as if <verifyTitle> had
>>> passed even if it failed.
>>>
>>> What do you think?
>>>
>>> Marc Guillemot wrote:
>>>> haltonfailure and haltonerror both allow you to configure whether the
>>>> build should fail or not, i.e. whether Ant should stop the execution
>>>> after </webtest>.
>>>> In both cases the execution of the steps is stopped after the failing
>>>> step.
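>>>>
>>>> For example, on the <config> element (the host value is made up):
>>>>   <config host="myhost" haltonfailure="false" haltonerror="false" />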
>>>>
>>>> Generally I find it quite strange to continue a test when something
>>>> has failed. How should the results be considered at the end? Correct
>>>> or not correct?
>>>> Nevertheless it may make sense in some cases. Denis mentioned a case
>>>> to me last week where he had some <verifyXxxx/> steps to check the
>>>> Italian translation of some text. The application behaved correctly
>>>> but some of the texts were wrong, and it would have been more useful
>>>> to complete the execution and show all the wrong texts at once.
>>>>
>>>> @Christoph
>>>> is your case comparable with Denis' one?
>>>> Perhaps you should refactor your tests. If you have a long way to go
>>>> to a particular page and want to perform different actions from there,
>>>> what about splitting it? A first test would just go to the particular
>>>> page and store the session information in a file (probably the cookies
>>>> that allow the session to be held). Then each single "test from the
>>>> particular page" could just reuse this session by setting the cookie
>>>> value from what is read from the file. This would give you far better
>>>> information in the report: you would see precisely which single "test
>>>> from the particular page" works and which fails.
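>>>>
>>>> As a rough sketch of the idea in plain Groovy (the file name and the
>>>> cookie value are made up; how you obtain the real cookie depends on
>>>> your test):
>>>>
>>>> def cookieFile = new File('session.cookie')
>>>> // test 1: after reaching the particular page, persist the session cookie
>>>> def sessionCookieValue = 'JSESSIONID=abc123'   // stand-in value
>>>> cookieFile.text = sessionCookieValue
>>>> // tests 2..n: before the first step, read it back
>>>> def restoredCookie = cookieFile.text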
>>>>
>>>> Marc.
> 
> _______________________________________________
> WebTest mailing list
> WebTest@lists.canoo.com
> http://lists.canoo.com/mailman/listinfo/webtest
> 
> 
