>Hi, Hilmar.
>
>>Methinks the testing scheme should be adapted to the specific system; I doubt a generic solution would work well.
>>
>>You can instruct people who know the system well to test it with relatively standard data, but also to introduce extreme conditions: e.g., what happens if the user presses ESC instead of clicking "Close", or tries to input a number outside the valid range? It has often happened to me that I tested my software every time with the same standard data (laziness to think about the details) and that the users of the system then came up with "innovative" ways to make it crash, sometimes with real data, sometimes by accident.
>>
>>It would also help to have the system tested by people who do NOT know all the details of how it works. They may come up with the "innovative errors" that regular users might not even think of, because they already have an established routine.
>
>Well, this is an informal, not automated, kind of testing, and it has many of these problems. I guess most of the books Kevin read refer specifically to automated and systematic testing approaches. See my reply to him to get an idea.
>
>See you,
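The automated, systematic kind of testing mentioned above could cover exactly the "extreme conditions" Hilmar describes. A minimal sketch in Python (the function `parse_quantity` and its 1..100 range are made-up examples, not anything from the discussed system):

```python
# Sketch of an automated boundary test, as opposed to informal
# manual testing with the same standard data every time.
# parse_quantity() is a hypothetical example: it accepts
# integers in the range 1..100 and rejects everything else.

def parse_quantity(text):
    """Parse user input into a quantity, rejecting out-of-range values."""
    value = int(text)  # raises ValueError on non-numeric input
    if not 1 <= value <= 100:
        raise ValueError(f"quantity {value} out of range 1..100")
    return value

def run_tests():
    # Standard data -- the case lazy manual testing always covers.
    assert parse_quantity("42") == 42
    # Boundary values -- the edges of the valid range.
    assert parse_quantity("1") == 1
    assert parse_quantity("100") == 100
    # "Innovative" inputs users come up with: out of range,
    # negative, empty, non-numeric.
    for bad in ("0", "101", "-5", "", "abc"):
        try:
            parse_quantity(bad)
        except ValueError:
            pass  # expected: bad input must be rejected
        else:
            raise AssertionError(f"accepted bad input: {bad!r}")

run_tests()
```

Once such checks exist, they run the same way every time, so the "established routine" of the testers no longer limits what gets exercised.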
Sorry? I see no reply to him.
Difference in opinions hath cost many millions of lives: for instance, whether flesh be bread, or bread be flesh; whether whistling be a vice or a virtue; whether it be better to kiss a post, or throw it into the fire... (from Gulliver's Travels)