>Two things to consider when judging the quality of an examination:
>1. the level of technical know-how of the author or authors.
As one of the MCP exam authors, I represented the low end of the exam SMEs as far as formal qualifications go.
>2. the skill in technical writing. It takes a good grasp of technical writing to produce a quality examination. IMO, I am not a book author, but the only difference between an exam and a book is this: BOOK - the author gives you the answer. EXAM - the author asks you if you know the answer.
>
MS put significant technical editing resources into the exam design. The requirements for correctness and knowledge testing are far more stringent than those for BrainBench.
>In addition, the examination reflects the 'standard' of the authors - it might seem low or high to some. Here in East Asia, the exam is quite fair - not easy, not difficult - maybe because of the language barrier.
>
The MCP exams are offered in a number of foreign languages - in fact, one of the requirements for the design of test scenarios was the ability to pose the questions and answers unambiguously in several different languages. That was built into the editorial review that went on throughout the alpha and beta process. The responses were reviewed by the technical editors, the other SMEs, and by members of the VFP team for correctness. The BrainBench content is not subject to the same process, and MS doesn't claim that MCP certification shows master-level skill with the product.
In addition, the required skill set for the MCP is published before the exam is taken; the BrainBench exam doesn't offer this. For an exam that focuses on syntax in much of its content, and that allows you to use online reference materials, it really doesn't demonstrate much from my POV. YMMV, but its assertion that a score of 4 represents mastery of the product is at best questionable.
Those who can, do; those who can't, design bad exams.