The XOO7, XMach-1, and XMark benchmarks have been designed to investigate the performance of different XMS. All three benchmarks provide complex data sets that capture the essential characteristics of XML data, such as recursive elements and varied data types. In terms of simplicity, XMach-1 has the most straightforward database description: it models a document, which is easy for users to understand. XOO7 adapts and extends the OO7 database of modules, which can also be understood quite easily. XMark simulates an auction scenario, a rather specialized domain, whose many elements and attributes may be difficult for users to understand.
Table 17.6 shows the query coverage of the three benchmarks. Clearly, XMark and XOO7 cover most of the functionalities. It stands to reason that XMach-1 covers the fewest functionalities, since it has relatively few queries. We observe that both XOO7 and XMach-1 provide simple queries that test one or two functionalities each, while the majority of the queries in XMark are complex and cover many features. The latter can make the results of a query difficult to analyze, since it may be unclear which feature contributes most to the response time. Furthermore, some queries in XMark may not be executable or applicable at all if the system under test supports only a subset of the complex features.
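To make the contrast concrete, the following sketch shows a single-feature query next to a query that combines several features, using Python's standard library as a stand-in query processor. The document and element names (`site`, `item`, `price`, `category`) are hypothetical and merely evoke an auction-style data set; they are not taken from any benchmark's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical auction-style document; element names are illustrative only.
doc = ET.fromstring("""
<site>
  <item id="i1"><price>10</price><category>art</category></item>
  <item id="i2"><price>25</price><category>books</category></item>
  <item id="i3"><price>40</price><category>art</category></item>
</site>
""")

# Simple query: exercises a single feature (element selection by path),
# so its cost is easy to attribute.
all_items = doc.findall("item")

# Complex query: combines path navigation, attribute access, predicate
# evaluation on text content, and type conversion in one request, making
# it harder to tell which feature dominates the response time.
expensive_art = [
    item.get("id")
    for item in doc.findall("item")
    if item.findtext("category") == "art"
    and int(item.findtext("price")) > 20
]

print(len(all_items))   # 3
print(expensive_art)    # ['i3']
```

Timing only the second query would conflate navigation, filtering, and conversion costs, which is precisely the analysis difficulty noted above for XMark's complex queries.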
The data sets of all three benchmarks can be scaled by varying the number of elements in the XML files. XOO7 allows users to change the file size both depthwise and breadthwise. XMark scales the database by a given factor (e.g., 10 times). XMach-1, which assumes that individual XML files are small, varies the database size by changing the number of XML files instead.
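The distinction between depthwise and breadthwise scaling can be sketched with a small synthetic document generator. This is a minimal illustration, not XOO7's actual generator; the element names (`module`, `component`) are hypothetical placeholders for a benchmark's DTD.

```python
import xml.etree.ElementTree as ET

def build_tree(depth, fanout):
    """Build a synthetic XML tree with the given nesting depth and fan-out.

    Depthwise scaling raises `depth` (more nesting levels); breadthwise
    scaling raises `fanout` (more siblings per node).
    """
    root = ET.Element("module")
    frontier = [root]
    for level in range(depth):
        next_frontier = []
        for parent in frontier:
            for i in range(fanout):
                child = ET.SubElement(parent, "component", id=f"{level}-{i}")
                next_frontier.append(child)
        frontier = next_frier if False else next_frontier  # keep new level
    return root

def count_elements(root):
    """Total number of elements, i.e., the size of the data set."""
    return sum(1 for _ in root.iter())

# Depthwise scaling: deeper nesting with a small fan-out.
deep = build_tree(depth=4, fanout=2)   # 1 + 2 + 4 + 8 + 16 = 31 elements

# Breadthwise scaling: shallow nesting with a large fan-out.
broad = build_tree(depth=2, fanout=8)  # 1 + 8 + 64 = 73 elements

print(count_elements(deep), count_elements(broad))
```

Both trees grow the element count, but they stress different aspects of an XMS: deep trees exercise recursive navigation, while broad trees exercise iteration over many siblings, which is why XOO7 exposes both knobs separately.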
As we have shown, the quality of an XMS benchmark can be assessed against four criteria: simplicity, relevance, portability, and scalability. Our study of the extent to which the three major existing XMS benchmarks, XOO7, XMach-1, and XMark, meet these criteria shows that defining a complete XMS benchmark remains a challenging, ongoing task.