[Webtest] Extensibility of Canoo
Tue, 29 Oct 2002 14:31:02 +0100
> Why doesn't TestSpecification (and TestStepSequence) implement
I guess when starting we tried to do the simplest solution that
could possibly work and just didn't think about that option.
(And we did not assume being so successful :-)
> If this question has been raised before, and there's good reason,
> please shoot me down now so I don't start trying to implement it.
Extensibility was discussed before, but from a different perspective.
As of now, subclassing TestSpecification has been the way around this.
> What downsides would you see to this?
We should examine what it means to the TaskSpecification.
At the moment we only have one, which is kind of nice.
I would not like the user to deal with multiple scenarios here.
The implementation would need to ensure that only WebTest tasks can be
nested (i.e. not, for example, the "copy" task).
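A minimal sketch of such a restriction: an `addTask` that rejects
anything that is not a WebTest step. The `Task`/`TaskContainer` types
below are simplified stand-ins for Ant's `org.apache.tools.ant`
classes, and `WebTestStep` is a hypothetical marker base class, not
the actual WebTest API.

```java
// Stand-ins for Ant's Task and TaskContainer interfaces.
interface Task { void execute(); }

interface TaskContainer { void addTask(Task task); }

// Hypothetical marker base class for WebTest steps.
abstract class WebTestStep implements Task { }

class TestSpecification implements Task, TaskContainer {
    private final java.util.List<Task> steps = new java.util.ArrayList<>();

    // Reject anything that is not a WebTest step, e.g. a generic "copy" task.
    public void addTask(Task task) {
        if (!(task instanceof WebTestStep)) {
            throw new IllegalArgumentException(
                "Only WebTest steps may be nested, got: "
                + task.getClass().getName());
        }
        steps.add(task);
    }

    // Run the nested steps in the order they were added.
    public void execute() {
        for (Task step : steps) {
            step.execute();
        }
    }

    public int stepCount() { return steps.size(); }
}
```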
Providing a consistent DTD becomes impossible. We would need to tackle
this the way Ant does for the general case.
The documentation is affected in the same way.
We need to think thoroughly about an "extensibility API" more
specialized than "Task", e.g. to ensure consistent behaviour in
checking attributes, logging, and handling failure and error cases.
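One way such an API could look is a template method that fixes the
order of attribute checking, logging and error handling, so every step
behaves consistently and subclasses only fill in two hooks. All names
here are hypothetical, not the actual WebTest API.

```java
// Sketch of an extensibility API narrower than a raw Ant Task.
abstract class AbstractWebTestStep {
    private final java.util.List<String> log = new java.util.ArrayList<>();

    // Subclasses implement only these two hooks.
    protected abstract void verifyParameters();
    protected abstract void doExecute() throws Exception;

    // The fixed skeleton: check attributes first, then log consistently,
    // then translate any failure into a uniform error.
    public final void execute() {
        verifyParameters();
        log.add("start " + getClass().getSimpleName());
        try {
            doExecute();
            log.add("ok");
        } catch (Exception e) {
            log.add("failed: " + e.getMessage());
            throw new RuntimeException("step failed", e);
        }
    }

    public java.util.List<String> getLog() { return log; }
}
```

With this in place, a step author cannot forget the attribute check or
produce logging that differs from the other steps.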
> The tasks would all now have to extend Task (and implement
> TaskContainer in the case of not/repeat). Actually, if
> AbstractTestStepSpecification should be changed to extend Task, I
> wouldn't see too much change at all... most of TestStepSequence
> could be happily thrown away and replaced with a lovely little
> addTask method.
> Should be possible to just use the TaskAdapter on the classes if
> you want the code to be kept non-ant specific.
I don't really care about being ant specific.
> Config would have to be a task, which seems intuitively a little
> funny, but not a problem.
Hm, we would need to enforce ordering here (e.g. that the config comes first).
Such a restructuring would also be an opportunity to go for a
"configref" attribute (analogous to classpathref).
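A rough sketch of how a "configref" attribute could resolve a shared
config by id, analogous to Ant's classpathref. The `Project` registry
below stands in for Ant's project-level reference table, and all names
are hypothetical.

```java
// A shared configuration object that several test specs could reference.
class Config {
    String host;
    Config(String host) { this.host = host; }
}

// Stand-in for Ant's project-level reference table.
class Project {
    private final java.util.Map<String, Object> refs = new java.util.HashMap<>();
    void addReference(String id, Object value) { refs.put(id, value); }
    Object getReference(String id) { return refs.get(id); }
}

class TestSpec {
    private Project project;
    private Config config;

    void setProject(Project p) { this.project = p; }

    // An inline nested <config> would still be possible...
    void addConfig(Config c) { this.config = c; }

    // ...while configref looks up a shared one by id.
    void setConfigref(String refId) {
        Object ref = project.getReference(refId);
        if (!(ref instanceof Config)) {
            throw new IllegalArgumentException(
                refId + " does not refer to a config");
        }
        this.config = (Config) ref;
    }

    Config getConfig() { return config; }
}
```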
> I'll probably start trying a quick and dirty implementation over
> the next couple of days, so any thoughts on this would be greatly
> appreciated.
What worked best for us is to start with a functional test.
We first come up with a typical usage, add it to the
selfTestImpl.xml, let it run (it will surely fail) and then go
for the simplest way to make it pass.
Everything evolves from there.
I'm curious to see your results.