Profiler Usage Scenarios

This chapter has covered many of the technical aspects of SQL Profiler, but what about some practical applications? Beyond the obvious uses of identifying what SQL an application is submitting, this section takes a look at a few scenarios in which the SQL Profiler can be useful. These scenarios are presented to give you some ideas of how SQL Profiler can be used. The monitoring and analysis capabilities of SQL Profiler are limited only by your creativity and ingenuity.

Analyzing Slow Stored Procedures or Queries

After you have identified that a particular stored procedure is running slowly, what should you do next? You might want to look at the estimated execution plan for the stored procedure, looking for table scans and sections of the plan that have a high cost percentage. But what if the execution plan has no obvious problems? This is when you should consider using SQL Profiler.

You can set up a trace on the stored procedure that captures the execution of each statement within it along with its duration in milliseconds. Here's how:

  1. Create a new trace using the SQLProfilerTSQL_Duration template.

  2. Add the SP:StmtCompleted event from the stored procedure event class to the trace.

  3. Add a filter on the Duration column with the duration not equal to 0. You can also set the filter to a larger number to exclude more of the short-running statements.

If you are going to run the procedure from Query Analyzer, then you might want to add a filter on the SPID column as well. Set it equal to the process ID for your session; the SPID is displayed at the bottom of the Query Analyzer window next to your username in parentheses. This will trace only those commands that are executed from your Query Analyzer window.

When you run the trace and execute the stored procedure, you will see only those statements in the procedure that have a non-zero duration. The statements will be listed in ascending duration order. Look to the bottom of the Profiler output window to find your longer running statements. Now you can isolate these statements, copy them to Query Analyzer, and perform a separate analysis on them to determine your problem.
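The same trace can also be scripted with the server-side trace procedures instead of the Profiler GUI. The following is a minimal sketch, not a definitive implementation: event ID 45 is SP:StmtCompleted and column IDs 13, 12, and 1 are Duration, SPID, and TextData per the sp_trace_setevent documentation; the trace file path and the SPID value of 51 are assumptions you would replace with your own.

-- Hypothetical server-side equivalent of the trace described above
declare @TraceID int
exec sp_trace_create @TraceID output, 0, N'C:\Traces\SlowProc'  -- assumed file path
exec sp_trace_setevent @TraceID, 45, 13, 1     -- SP:StmtCompleted: Duration
exec sp_trace_setevent @TraceID, 45, 12, 1     -- SP:StmtCompleted: SPID
exec sp_trace_setevent @TraceID, 45, 1, 1      -- SP:StmtCompleted: TextData
exec sp_trace_setfilter @TraceID, 13, 0, 1, 0  -- Duration not equal to 0
exec sp_trace_setfilter @TraceID, 12, 0, 0, 51 -- SPID = 51 (your session's SPID)
exec sp_trace_setstatus @TraceID, 1            -- start the trace

Server-side traces like this one write to a file rather than the Profiler window, which makes them better suited to unattended captures.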


Auditing Database Changes

Sometimes it is necessary to audit the physical changes to your database: for instance, to monitor when tables are added, columns are added to a table, or stored procedures are dropped. This type of auditing can be accomplished with SQL Profiler as follows:

  1. Create a new trace using a Blank template; this will leave the selection of all the events, data columns, and filters to you.

  2. Add the Object:Created event from the Objects event class and the Audit Object Derived Permission event from the Security Audit event class to the trace.

  3. Add the StartTime, DBUserName, NTUserName, ObjectID, ObjectName, and TextData data columns to your trace.

When you run this trace, you will capture the user and the change that was made. The object that was created or altered is identified in the ObjectName column. For some of the modifications, the TextData column also includes the actual statement that was used to modify the database.
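To verify the trace, you can run a few representative DDL statements while it is active. The object names below are hypothetical; each statement should appear in the trace output with your user name and, for some events, the full statement in the TextData column.

-- Hypothetical DDL statements the audit trace should capture
CREATE TABLE TestAudit (ID int NOT NULL)       -- table added
GO
ALTER TABLE TestAudit ADD Descr varchar(50)    -- column added to a table
GO
CREATE PROCEDURE TestAuditProc AS SELECT 1     -- procedure created
GO
DROP PROCEDURE TestAuditProc                   -- procedure dropped
DROP TABLE TestAudit
GO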

Identifying Ad Hoc Queries

One problem that can plague a production system is the execution of ad hoc queries against the production database. If you want to identify these ad hoc queries, the application, and the users that are running them, then SQL Profiler is your tool. You can create a trace as follows:

  1. Create a new trace using the SQLProfilerStandard template.

  2. Remove the Not Like SQL Profiler condition from the ApplicationName filter. Then add new ApplicationName filter values of Like SQL Query% and Like Microsoft%.

When this trace is run, you will be able to identify database access that is happening via SQL Query Analyzer or Microsoft Access. The user, the duration, and the actual SQL statement will be captured. An alternative would be to change the ApplicationName filter to trace application access for all application names that are not like the name of your production applications, such as Not Like MyOrderEntryApp%.
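If you script this trace on the server side, the same ApplicationName filters can be expressed with sp_trace_setfilter. This is a hedged sketch: column ID 10 is ApplicationName, comparison operator 6 is LIKE and 7 is NOT LIKE, logical operator 1 joins filters with OR, and the trace is assumed to already exist in @TraceID.

-- Hypothetical ApplicationName filters for an existing trace in @TraceID
exec sp_trace_setfilter @TraceID, 10, 1, 6, N'SQL Query%'  -- SQL Query Analyzer
exec sp_trace_setfilter @TraceID, 10, 1, 6, N'Microsoft%'  -- Microsoft Access, etc.

-- Alternative: exclude your own production applications instead
-- exec sp_trace_setfilter @TraceID, 10, 0, 7, N'MyOrderEntryApp%'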

Identifying Performance Bottlenecks

Another common problem with database applications is identifying performance bottlenecks. For example, your application is running slowly, but you're not sure why. You tested all of the SQL statements and stored procedures used by the application, and they were relatively fast. Yet you find that some of the application screens are slow.

Is it the database server? Is it the client machine? Is it the network? These are all good questions, but what is the answer? SQL Profiler can help you get this answer.

You can use the same trace that was used in the previous "Identifying Ad Hoc Queries" scenario. For this scenario, you need to specify an ApplicationName filter with the name of the application that you want to trace. You might also want to apply a filter to a specific NTUserName to further refine your trace and avoid gathering trace information for users other than the one that you have isolated.

After you have started your trace, exercise the slow-running application screens. Look at the trace output and take note of the duration of the statements as they execute on the database server. Are they relatively fast? How much time was spent on the execution of the SQL statements and stored procedures relative to the response time of the application screen? If the total database duration is 1,000 milliseconds (1 second), and the screen takes 10 seconds to refresh, then other factors such as the network or the application code need to be examined.

Monitoring Auto-Update Statistics

As discussed in Chapter 34, SQL Server will update index statistics automatically as data is changed in a table. In some environments, excessive auto-updating of statistics can take place and affect system performance. SQL Profiler can be used to monitor auto-updating of statistics as well as automatic statistics creation.

To monitor auto-updating of statistics, create a trace and include the AutoStats event from the Objects event class. Also select the Integer Data, Success, and Object ID data columns. When the AutoStats event is captured, the Integer Data column contains the number of statistics updated for a given table, the Object ID is the ID of the table, and the TextData column contains the names of the affected columns together with either an Updated: or Created: prefix. The Success column indicates whether the operation succeeded or failed.

If you see an excessive number of AutoStats events on a table or index, and the duration is high, it could be affecting system performance. You might want to consider disabling auto-update for statistics on that table and schedule statistics to be updated periodically during non-peak periods.
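One way to disable auto-update at the table level is the sp_autostats system procedure, which toggles the setting for a table or an individual index. The table name below is hypothetical.

-- Disable auto-update of statistics on a (hypothetical) busy table
EXEC sp_autostats 'dbo.OrderDetail', 'OFF'

-- Then update statistics manually during non-peak periods
UPDATE STATISTICS dbo.OrderDetail

-- Re-enable automatic updating later if desired
EXEC sp_autostats 'dbo.OrderDetail', 'ON'

Running sp_autostats with only the table name, and no ON or OFF argument, reports the current setting for each index on the table.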

Monitoring Application Progress

The 10 user-configurable events can be used in a variety of ways, including tracking the progress of an application or procedure. For instance, perhaps you have a complex procedure that is subject to lengthy execution. You can add debug logic in this procedure to allow for real-time benchmarking via SQL Profiler.

The key to this type of profiling is the sp_trace_generateevent stored procedure, which enables you to raise a User Configurable event. The procedure must reference one of the User Configurable event IDs (82 to 91), which correspond to User Configurable events 0 through 9. For example, if you execute the procedure with @eventid = 82, the UserConfigurable:0 event will capture it.

Listing 7.5 contains a sample stored procedure that will (in debug mode) trigger the trace events that SQL Profiler can capture:

Listing 7.5 A Stored Procedure That Raises User Configurable Events for SQL Profiler
CREATE PROCEDURE SampleApplicationProc (@debug bit = 0)
AS
declare @userinfoParm nvarchar(128)

--if in debug mode, then launch event for Profiler
--    indicating start of application proc
if @debug = 1
begin
       SET @userinfoParm = 'Proc Start: ' + convert(varchar(30),getdate(),120)
       EXEC sp_trace_generateevent @eventid = 83, @userinfo = @userinfoParm
end

--Real world would have complex proc code executing here
--The WAITFOR statement was added to simulate processing time
WAITFOR DELAY '00:00:05' --5 second delay

--if debug mode, then launch event indicating next significant stage
if @debug = 1
begin
       SET @userinfoParm = 'Proc Stage One Complete: '
                              + convert(varchar(30),getdate(),120)
       EXEC sp_trace_generateevent @eventid = 83, @userinfo = @userinfoParm
end

--Real world would have more complex proc code executing here
--The WAITFOR statement was added to simulate processing time
WAITFOR DELAY '00:00:05' --5 second delay

--if debug mode, then launch event indicating next significant stage
if @debug = 1
begin
       SET @userinfoParm = 'Proc Stage Two Complete: '
                              + convert(varchar(30),getdate(),120)
       EXEC sp_trace_generateevent @eventid = 83, @userinfo = @userinfoParm
end

--You get the idea


Now you need to set up a new trace that includes the UserConfigurable:1 event and also includes the TextData data column. After this is complete, you will be able to launch the sample stored procedure from Listing 7.5 and get progress information via SQL Profiler as the procedure executes. The execution command for the procedure follows, and the resulting SQL Profiler results are shown in Figure 7.7.

EXEC  SampleApplicationProc @debug = 1 
Figure 7.7. User-configurable trace results.
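If you prefer to define this trace with the server-side procedures rather than the Profiler GUI, a minimal sketch follows. Event ID 83 corresponds to UserConfigurable:1 and column IDs 1 and 14 are TextData and StartTime per the sp_trace_setevent documentation; the trace file path is an assumption.

-- Hypothetical server-side trace capturing UserConfigurable:1 events
declare @TraceID int
exec sp_trace_create @TraceID output, 0, N'C:\Traces\AppProgress'  -- assumed path
exec sp_trace_setevent @TraceID, 83, 1, 1    -- UserConfigurable:1: TextData
exec sp_trace_setevent @TraceID, 83, 14, 1   -- UserConfigurable:1: StartTime
exec sp_trace_setstatus @TraceID, 1          -- start the trace

With this trace running, each debug-mode stage of SampleApplicationProc appears as a row whose TextData column carries the progress message.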

