The Specialist
Database News You Can Use
A monthly newsletter for Oracle users
January 2007
From Database Specialists
Expert onsite and remote DBA services
for your Oracle databases
Tel: 415-344-0500 • Toll-Free: 1-888-648-0500

Ask DB Specialists: More on Statspack and Execution Plans

You’re invited to submit your Oracle-related questions to us at askdbspecialists01@dbspecialists.com. Please include your name and telephone number in your email.

This month’s question comes to us from Doug in Kansas City, MO: I found your column in the November 2006 Specialist about getting execution plan history from Statspack extremely helpful. But sometimes when I run the sprepsql.sql script I get the error “ORA-20101: ERROR: Hash value chosen does not exist in end snapshot”. How can I get the execution plan information?

Terry Sutton of the Database Specialists team responds: I’ve often had the same frustration when trying to get SQL statement execution plans out of Statspack. If the SQL statement isn’t in the ending snapshot that you choose when you run the report, you’ll get the ORA-20101 error. So how do we find out which snapshots have information on the query we are interested in? I’ve found the following query useful for this task, as well as many others:


COL snap_ids  FORMAT a15            HEADING "Snap IDs"
COL times     FORMAT a20            HEADING "Date/Time"
COL logread   FORMAT 99,999,999,999 HEADING Logical|Reads
COL execs     FORMAT 999,999,999    HEADING Executions
COL reads_per FORMAT 999,999,999    HEADING LReads|/Exec
COL hash      FORMAT 999999999999   HEADING HashValue

SELECT   snap_ids, times, logread, execs, reads_per, hash
FROM     (
         SELECT LAG (s.snap_id, 1, 0) OVER (ORDER BY s.snap_id) ||
                '-' || s.snap_id snap_ids,
                TO_CHAR (LAG (s.snap_time, 1) OVER (ORDER BY s.snap_id),
                         'MM-dd hh24:mi') || '-' ||
                TO_CHAR (s.snap_time, 'hh24:mi') times,
                (e.buffer_gets -
                  NVL ((LAG (e.buffer_gets, 1, 0)
                        OVER (ORDER BY e.snap_id)), 0)) logread,
                (e.executions -
                  NVL ((LAG (e.executions, 1, 0)
                        OVER (ORDER BY e.snap_id)), 0)) execs,
                DECODE (e.executions -
                         NVL ((LAG (e.executions, 1, 0)
                               OVER (ORDER BY e.snap_id)), 0),
                        0, TO_NUMBER (NULL),
                        (e.buffer_gets -
                          NVL ((LAG (e.buffer_gets, 1, 0)
                                OVER (ORDER BY e.snap_id)), 0)) /
                        (e.executions -
                          NVL ((LAG (e.executions, 1, 0)
                                OVER (ORDER BY e.snap_id)), 0))) reads_per,
                e.hash_value hash
         FROM   stats$sql_summary e, stats$snapshot s
         WHERE  e.hash_value = '&hash_value'
         AND    s.snap_id = e.snap_id
         AND    s.snap_time BETWEEN &beg_time AND &end_time
         ) ss
WHERE    SUBSTR (ss.snap_ids, 1, 2) <> '0-'
ORDER BY ss.snap_ids;

Obviously you don’t want to type this query each time (or even once), so save it as a script, say, get_qlog_time.sql. Then run the script in the PERFSTAT schema when you want to mine Statspack data for information about a specific SQL statement.

You’ll be prompted for the statement hash value to search for, along with a starting and ending date and time. I usually enter the dates in a form similar to “SYSDATE - 1”. Here’s an example:

SQL> @get_qlog_time
Enter value for hash_value: 2074560633
old  17:        where e.hash_value = '&hash_value'
new  17:        where e.hash_value = '2074560633'
Enter value for beg_time: sysdate - .2
Enter value for end_time: sysdate
old  19:        and s.snap_time between &beg_time and &end_time) ss
new  19:        and s.snap_time between sysdate - .2 and sysdate) ss

                                 Logical              LReads
Snap IDs     Date/Time             Reads  Executions   /Exec   HashValue
------------ ------------------ -------- ----------- ------- -----------
54177-54178  01-12 09:30-10:00   213,473      55,538       4  2074560633
54178-54179  01-12 10:00-10:30   260,161      68,679       4  2074560633
54179-54180  01-12 10:30-11:00   211,789      55,764       4  2074560633
54180-54181  01-12 11:00-11:30   249,013      65,484       4  2074560633
54181-54182  01-12 11:30-12:00   246,877      63,694       4  2074560633
54182-54183  01-12 12:00-12:30   240,715      62,505       4  2074560633
54183-54184  01-12 12:30-13:00   224,420      57,978       4  2074560633
54184-54185  01-12 13:00-13:30   207,265      54,463       4  2074560633

8 rows selected.

If all you wanted was to identify which Statspack snapshots contain executions of the query you’re investigating, this output gives you that. But it also shows how many times the query was executed in each period, and how many logical reads it performed.
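With snapshot boundaries in hand, you can go back to sprepsql.sql and request the execution plan for an interval in which the statement actually appears, avoiding the ORA-20101 error. A session might look something like this (using the snapshot IDs and hash value from the sample output above; the script’s prompts may vary slightly by Oracle version):

```
SQL> @?/rdbms/admin/sprepsql

...

Enter value for begin_snap: 54177
Enter value for end_snap: 54178
Enter value for hash_value: 2074560633
```

The script then prompts for a report name and writes out the SQL text, statistics, and execution plan(s) captured for that statement between the two snapshots.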

This concept can be expanded to study any statistic which is saved by Statspack. I use a similar query to track database statistics which are in v$sysstat, and another which tracks statistics for a given wait event. These queries can be very useful in dealing with performance problems. If we use get_qlog_time.sql, we can see a history of the performance of a query over time, which is great if you’ve changed something (like adding an index) and want to see whether the change helped.
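As an illustration of extending the concept, here is a sketch of what such a query against the Statspack system-statistics data might look like. This is not the exact script mentioned above, just the same LAG-delta pattern applied to the stats$sysstat table in the PERFSTAT schema, with the statistic name (for example, 'physical reads') supplied via a &stat_name substitution variable:

```sql
-- Sketch: per-interval deltas for one V$SYSSTAT statistic saved by Statspack.
-- Same structure as get_qlog_time.sql: LAG() pairs each snapshot with its
-- predecessor, and the outer filter drops the artificial first "0-" row.
SELECT   snap_ids, times, delta
FROM     (
         SELECT LAG (s.snap_id, 1, 0) OVER (ORDER BY s.snap_id) ||
                '-' || s.snap_id snap_ids,
                TO_CHAR (LAG (s.snap_time, 1) OVER (ORDER BY s.snap_id),
                         'MM-dd hh24:mi') || '-' ||
                TO_CHAR (s.snap_time, 'hh24:mi') times,
                t.value -
                  LAG (t.value, 1, 0) OVER (ORDER BY t.snap_id) delta
         FROM   stats$sysstat t, stats$snapshot s
         WHERE  t.name = '&stat_name'
         AND    s.snap_id = t.snap_id
         AND    s.snap_time BETWEEN &beg_time AND &end_time
         ) ss
WHERE    SUBSTR (ss.snap_ids, 1, 2) <> '0-'
ORDER BY ss.snap_ids;
```

Note that because V$SYSSTAT values are cumulative since instance startup, the delta between snapshots is what tells you how much activity occurred in each interval; an instance restart between snapshots would make a delta appear negative.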

This is just an introduction to the power of the data gathered by Statspack. There are many ways to use this data beyond the standard reports provided by Oracle. If you’re interested in this area, I recommend you visit Tim Gorman’s web site (http://www.evdbt.com/library.htm), where you can find a useful white paper entitled Performance Diagnostics using STATSPACK data, as well as additional SQL scripts for mining Statspack data.

NoCOUG Winter Conference

Don’t miss the Winter Conference of the Northern California Oracle Users Group. Oracle Corporation’s Juan Loaiza, who introduced the SAME Storage Method (Stripe And Mirror Everything), will give the keynote address.

So, mark your calendar right now for Thursday, February 8 at Oracle Corporation’s conference center in Redwood Shores.

This full-day educational event will also feature ten technical presentations, vendor exhibits, book raffles, networking, and more—all geared towards Oracle DBAs, developers, architects, and team leads.

Don’t forget to stop by the Database Specialists booth and say hello!

Database Specialists at RMOUG Training Days

Two members of the Database Specialists team have been invited to share their expertise at the Rocky Mountain Oracle Users Group Training Days 2007 taking place in Denver, Colorado February 14-15, 2007.

Senior Staff Consultant Terry Sutton returns for his fourth year at the conference. He’ll share a real-life example in his session:

  • Tracing Individual Users in Connection-pooled Environments with Oracle 10g.

Roger Schrag will lead sessions on two topics:

  • Moving Oracle Databases Across Platforms without Export/Import
  • What You Can Do When Your Database Runs Out of Temp Space

RMOUG Training Days packs over 90 technical sessions into a two-day event for all levels and includes educational tracks for DBAs and Developers.

For more information about RMOUG Training Days 2007, visit http://www.rmoug.org/training.htm.

A Fountain of Information for DBAs and Developers

Courtesy of the speakers of the last meeting of the Northern California Oracle Users Group, you now have access to slides and presentation notes on a variety of topics addressed at the group’s last conference. Here’s a sampling of what you’ll find on NoCOUG’s website at http://www.nocoug.org/presentations.html:


  • Why You Can’t See Your Real Performance Problems, by Cary Millsap, Hotsos Enterprises
  • Planning and Installing a RAC Database, by Caleb Small, caleb.com
  • Root Cause and Other Urban DBA Legends, by Brian Hitchcock, Sun Microsystems
  • RAC Performance Tuning Best Practices, by Sri Subramaniam, Oracle Corporation
