This appendix contains the answers to the sample certification exam questions in Appendix L.
A SYSADM privileges are the highest set of privileges available in DB2 ESE.
D To drop DB2 instance db2inst1, appusr2 must have SYSADM privileges. Therefore, the System Administrator must add the user appusr2 to the group usr1grp.
B If you set TRUST_ALLCLNTS=YES and TRUST_CLNTAUTH=SERVER, then authentication for trusted non-DRDA clients will be performed at the SERVER.
B Kerberos authentication types are supported only on clients and servers running Windows 2000, Windows XP, and Windows .NET operating systems. In addition, both the client and server machines must belong to the same Windows domain.
D With an authentication type of KRB_SERVER_ENCRYPT at the server and clients that support the Kerberos security system, the effective system authentication type is KERBEROS. If the clients do not support the Kerberos security system, then the effective system authentication type is equivalent to SERVER_ENCRYPT.
C For sensitive data, consider revoking the SELECT privilege from PUBLIC, then granting the SELECT privilege as required to specific users.
E If DB2LDAPCACHE=NO and DIR_CACHE=NO, always read the information from LDAP. If DB2LDAPCACHE=NO and DIR_CACHE=YES, always read the information from LDAP once and insert it into the DB2 cache.
B When running on Windows operating systems, DB2 supports using either the IBM LDAP client or the Microsoft LDAP client to access the IBM SecureWay Directory Server. If you want to select the IBM LDAP client explicitly, you must use the db2set command to set the DB2LDAP_CLIENT_PROVIDER registry variable to IBM.
A To enable LDAP explicitly, set the DB2_ENABLE_LDAP registry variable to YES using the db2set command.
A Set DB2LDAPCACHE=NO to avoid LDAP information being cached in the database, node, and DCS directories.
B The authorization check is always performed by the LDAP server, not by DB2 or the operating system. The LDAP authorization check is not related to DB2 authorization.
C There are four DB2 servers, with one entry per server, so at the minimum you will need to reserve four entries in the /etc/services file. Following best practices, however, you will need to add an additional entry for each database partition on each server within the partitioned database environment. Therefore, you will need a total of 16 entries reserved in the /etc/services file.
A There is one and only one db2nodes.cfg file for each instance in a partitioned database environment.
D Any number of DB2 instances can be configured to function as federated servers. Federated servers use DRDA communication protocols to communicate with DB2 family instances. Federated servers use other protocols to communicate with non-DB2 family instances (not a single protocol).
D The processing of segments of a query at a data source instead of at the federated server is called function pushdown.
B The query optimizer generates local and remote access plans for processing a query fragment, based on resource cost. DB2 then chooses the plan it believes will process the query with the least resource cost.
C The CPU at the MYSERV server is five times faster than the CPU at the federated server.
B The data source has started to run on an upgraded CPU that's only half as fast as the federated server.
A The data source has started to run on an upgraded CPU that's only half as fast (or two times slower) as the federated server.
B The network communications data rate to the data source MYSERV is at 2,000,000 bytes per second.
D For a result set of 10,000 records with a row length of 100 bytes, the number of blocks required to send the data is 10,000 x 100 / 65,535, or approximately 16 blocks.
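The block count above can be verified with a quick calculation, assuming the 65,535-byte block size given in the answer:

```python
import math

BLOCK_SIZE = 65_535   # maximum block size in bytes, per the answer
ROWS = 10_000         # records in the result set
ROW_LENGTH = 100      # bytes per record

total_bytes = ROWS * ROW_LENGTH              # 1,000,000 bytes
blocks = math.ceil(total_bytes / BLOCK_SIZE) # partial blocks still count
print(blocks)  # 16
```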
A Only a REFRESH DEFERRED materialized query table can be defined as MAINTAINED BY USER. Only insert, update, and delete are valid actions for materialized query table using the MAINTAINED BY USER option.
B If LOGSECOND is set to -1, the database is configured with infinite active log space. LOGPRIMARY cannot be set to -1.
D The MIRRORLOGPATH database configuration parameter allows the database to write an identical copy of log files to a different path.
D Changes to the table are not logged if you created the declared global temporary table with the NOT LOGGED option. Otherwise, the declared global temporary table is logged; LOGGED is the default option.
E If BLK_LOG_DSK_FUL is set to YES, applications will hang when DB2 encounters a log disk full error. You are then able to resolve the error, and the transaction can continue, because DB2 will attempt to create a log file every five minutes until it succeeds and will write a message to the administration log.
B To collect a complete set of log files up to a known point in time you can initiate on demand log archiving by invoking the ARCHIVE LOG command. The issuer of this command cannot have a connection to the specific database, although other users may be connected.
B The other options can provide additional information related to this problem, but ARCHIVE.LOG and USEREXIT.ERR are the most critical files to examine to determine why the user exit failed.
B This best describes the incremental delta function. A delta (incremental delta) backup image is a copy of all database data that has changed since the last successful backup operation of any type (full, incremental, or delta).
B To enable online full backup, you must set LOGRETAIN to YES and set TRACKMOD to YES to allow incremental backup.
C To perform an offline backup of all database partitions for the SAMPLE database, you must ensure that no connection exists to the database, back up the catalog partition first, then back up the remaining database partitions. The backup command will fail if you stop the DB2 instance.
B To perform a restore from the recent offline backup of all database partitions for SAMPLE database, you must restore the catalog partition first, then all other database partitions of the SAMPLE database. Using TAKEN AT is required because there are multiple database backup images residing in the /dbbackup/SAMPLE directory.
C To roll forward a table space that resides on a database partition number 2 to the end of log, you must specify the database partition number associated with the given table space name, TS1.
E All of the options listed are valid for the db2inidb tool. In addition, the usage described for each option is correct.
D Log shipping is the process of copying whole log files to a standby server, either from an archive device or through a user exit program running against the database on the primary server. The database on the standby server is continuously rolling forward through the log files produced by the primary server.
E You can suspend a subset of one or more partitions to create a split mirror for performing offline backups. If the catalog partition is included in the subset, it must be the last partition to be suspended.
C After the failover, the database partitions 0 and 1 are running on server2.
C After the failover, the database partitions that were executing on server2, which is defined as database partitions 2 and 3, start up on server1. Because server1 is already running database partitions 0 and 1, database partitions 2 and 3 will be started as logical database partitions 2 and 3 with the logical ports 2 and 3.
B The ALLOW READ ACCESS option prevents all write access to the table by other applications, but it allows read access to preloaded data.
A The ALLOW NO ACCESS option locks the table exclusively and allows no access to the table data while the table is being loaded. This is the default option.
B and C When you receive SQL6048 from db2start, the cause is that you don't have the $HOME/.rhosts file for communication among hosts, or the file permissions may not be correct.
B The statement cannot be processed because of a problem with the db2nodes.cfg file. The main reason is that the hostname/port pair at line "2" of the db2nodes.cfg file is not unique: the port number at line "2" is the same as the port number at line "1" for the same hostname. To correct this problem, set the port number for server1 at line "1" to 0.
C You have only two ports defined in /etc/services on server1 and server2. Now you try to fail over database partition 3, and you are not allowed to use the third port. Ensure that the DB2 port range defined in the /etc/services file is large enough to handle all possible partitions to be started. To correct the problem, increase the value for DB2_db2inst1_END in /etc/services to at least 50503.
A There is one buffer pool, the IBMDEFAULTBP, created when a database is created.
C There are three table spaces created when a database is created: SYSCATSPACE, TEMPSPACE1, and USERSPACE1.
C There are three database partition groups created when a database is created, regardless of whether or not the database is partitioned: IBMCATGROUP, IBMDEFAULTGROUP, and IBMTEMPGROUP.
A Because all of the table spaces have a 4-KB page size and the default page size for a database is 4 KB, only one buffer pool is REQUIRED.
B Because the table spaces have a 4-KB and 8-KB page size and the default page size for a database is 4 KB, a minimum of two buffer pools are REQUIRED.
A To create a table space, a buffer pool with the same page size must already exist.
C or D Because the row size is > 8 KB, the table must be created in a table space with either a 16-KB or a 32-KB page size.
B Adding up the column lengths, the total row length is larger than the maximum row length (4,005 bytes) for a 4-KB page and, therefore, requires a minimum 8-KB page size.
B Each DMS table space has three extents of overhead, plus a tag of one extent in size, plus there must be two extents available to create an object; this is a total of six extents.
B For a 4-KB page size, only 5 rows could fit on a page; this would require 20,000,000 pages, which would be larger than allowed.
For an 8-KB page size, 11 rows could fit on a page; this would require 9,090,909 pages, which is allowed.
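The page math above can be sketched as follows; the 100,000,000-row count is not stated in the answer and is inferred here from the 5 rows/page x 20,000,000 pages figures:

```python
# Inferred row count: 5 rows/page x 20,000,000 pages
ROWS = 5 * 20_000_000        # 100,000,000 rows

pages_4k = ROWS // 5         # 20,000,000 pages -- exceeds the table space limit
pages_8k = ROWS // 11        # 9,090,909 full pages at 11 rows per page -- allowed

print(pages_4k)  # 20000000
print(pages_8k)  # 9090909
```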
D The table would use one object for the object map and one for the data object, as well as one object for the index's object map and one index object because the table has a primary key that would cause DB2 to create an index, for a total of four.
B The db2empfa tool enables multipage allocation, or allocation of an extent at a time. This can have a significant effect for bulk inserts.
B Adding up the size of the columns gives:
2000 + 250 + 2000 + 4 + 4 + 4
Even without the overhead, this totals 4,262 bytes, which is over 4,005, so it would require a minimum page size of 8 KB.
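Summing the column sizes listed in the answer confirms the row exceeds the 4-KB page maximum row length:

```python
# Column sizes in bytes, as listed in the answer
columns = [2000, 250, 2000, 4, 4, 4]
row_length = sum(columns)

print(row_length)           # 4262
print(row_length > 4005)    # True -- too large for a 4-KB page
```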
C Large table spaces can now contain indexes, as well as LOBs and Long Varchars.
B The $N parameter can append the database partition number to a container name in the create tablespace statement.
C Remove is not a valid option; a container can be dropped using the DROP option.
A The page size of the catalog table space cannot be changed; it is the default page size, i.e., 4 KB.
D Because both containers are 2,000 pages, increasing all containers by an additional 3,000 pages will make them both 5,000 pages.
A The NUMBLOCKPAGES defines the number of pages that should exist in the block-based area of the buffer pool for block-based prefetching.
A SYSCTRL is the lowest authority/privilege from the list that can alter a buffer pool.
C If there is not enough memory available to create a new buffer pool, an error message will be returned to the user, and the buffer pool will be allocated after the database is stopped and restarted.
D The value AUTOMATIC tells DB2 to allocate what is needed when it starts. By setting DATABASE_MEMORY to a real value, the specified amount of memory is reserved for future use.
In this case, 200,000 + 50,000 = 250,000. Therefore, the lowest value given greater than 250,000 is 300,000.
C There must be a minimum of one buffer pool for every page size. In the list, we see 8-KB and 16-KB page sizes. However, the catalog table space uses a 4-KB page size; therefore, there are three page sizes, so three buffer pools are required.
D A buffer pool cannot be dropped while a table space is associated with it.
C The memory can be re-allocated within DB2 to other buffer pools, etc., but will still be held by DB2 until the database is stopped and restarted.
B Because MAXDEGREE = 4, the select will use a degree of parallelism of 4. Therefore, there will be 4 subagents, plus the coordinating agent, for a total of 5.
C There will be a block index on (C2,C1,C3) and two dimension indexes: one on C3 and one on (C1,C2).
C The data will then use six extents.
B and D Both (current date / 100) and (current date / 10000) increase as the value of current date increases, so they are monotonic. Dividing by 100 will remove the day part of the date; dividing by 10000 will remove the day and month part of the date. In both cases, the result will not decrease as the date increases.
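Treating CURRENT DATE as a yyyymmdd integer (a common encoding, assumed here only for illustration), the two expressions behave exactly as the answer describes:

```python
# Sample dates encoded as yyyymmdd integers, in increasing order
dates = [20240101, 20240102, 20240131, 20240201, 20241231, 20250101]

year_month = [d // 100 for d in dates]    # dividing by 100 drops the day part
year_only  = [d // 10000 for d in dates]  # dividing by 10000 drops day and month

# Both derived sequences never decrease as the date increases (monotonic)
assert year_month == sorted(year_month)
assert year_only == sorted(year_only)
print(year_month[0], year_only[0])  # 202401 2024
```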
B To maintain optimal performance in an OLTP workload, the data pages need to be in the buffer pools, so I/O can be avoided. Sorting should be avoided because it is costly and degrades performance.
C There are no current statistics for the table MYTAB and its indexes; therefore, the optimizer may be choosing sub-optimal access plans. To allow the optimizer to choose the best access plans, statistics should be gathered for the table and its indexes.
B Because each lock requires 72 bytes of lock list, each user will require approximately 527.5 pages of the lock list. Therefore, with 20,000 pages of the lock list, the maximum number of users would be 37.
B Because each lock requires 72 bytes of lock list, each user will require approximately 527.5 pages of the lock list. Therefore, for 50 concurrent users, there would need to be approximately 26,400 pages of the lock list. The value of 30,000 is the minimum value in the list that is greater than this value.
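The lock list arithmetic in the two answers above can be checked using the 527.5 pages-per-user figure given (which assumes 72 bytes per lock and 4-KB lock list pages):

```python
PAGES_PER_USER = 527.5   # approximate lock list pages per user, from the answer

# With 20,000 lock list pages, how many users fit?
max_users = int(20_000 / PAGES_PER_USER)
print(max_users)  # 37

# How many pages are needed for 50 concurrent users?
pages_needed = 50 * PAGES_PER_USER
print(pages_needed)  # 26375.0 -- roughly 26,400; 30,000 is the smallest option above it
```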
B Although clustered columns will help in the retrieval of data, sorting, joining, and ordering can be done most efficiently using indexes so that sorts are not required.
B The unique index on column c1 that includes the column c2 will allow index-only access when columns c1 and c2 are queried in the same SQL statement. All other indexes would require either a sort or access to the data object to retrieve the values from column c2.
B DB2 UDB Version 7.x supported only Type I indexes. When a Version 7 database is migrated to Version 8, the indexes are not changed to Type II indexes during the migration process. To convert Type I indexes to Type II indexes, the tables need to be reorganized.
D The REORG command has an option (CONVERT) to convert Type I indexes to Type II indexes. By default, they will not be converted. For example:
REORG INDEXES ALL FOR TABLE user.table1 CONVERT
C The REORG command cannot gather statistics for a table as the table is being operated on.
C CARD is not a column in the SYSCAT.INDEXES view, and NLEAF and NLEVELS are not valid columns in the SYSCAT.TABLES view, so these would not work. A table may not have indexes but may still have statistics, so the SYSCAT.TABLES view must be queried. If the STATS_TIME column is '-', then statistics have not been run for the table.
A Increasing the requestor I/O block size (RQRIOBLK) will allow DB2 to send and receive larger blocks of data, thereby reducing the number of blocked cursor requests.
C The SHEAPTHRES_SHR parameter is new in Version 8 and limits the maximum amount of memory that can be used for shared sorts within a database. Be aware that this is a HARD limit, unlike the SHEAPTHRES configuration parameter.
C Enabling intra-partition parallelism using the INTRA_PARALLEL parameter allows the DB2 optimizer to consider shared sorts when optimizing statements.
C Increasing the sort heaps and the associated sort heap thresholds can help improve performance; however, this may cause other implications. Therefore, these changes should be made with caution.
A Setting the maximum degree of parallelism for each instance to 8 will allow each application within an instance to use up to eight subagents to process an SQL request. If each instance has a number of applications, this may overwhelm the server. Therefore, for this configuration, using a maximum degree of parallelism of 4 will allow multiple applications to run concurrently within each instance.
B The connection concentrator is activated when the maximum number of connections is set higher than the maximum number of coordinating agents.