November 2007

Ask DBSpecialists: Determining Application Mix of Your Oracle Database
You’re invited to submit your Oracle-related questions to us at askdbspecialists11@dbspecialists.com. Please include your name and telephone number in your email. This month’s question comes to us from Worton of Woodland Park, Colorado: I hear a lot about configuring an Oracle database with respect to the type of application being run. I think we have an OLTP-type system, but we often get large queries that sort a lot of data for reports. I am afraid our database is turning into an OLAP environment. Is there any way I can determine the true application mix of my Oracle databases?

James F. Koopmann of the Database Specialists team responds:

This is a common problem in many databases: the initial implementation might be OLTP, as in your case, but as the acceptance and value of the data grow, more and more people begin to use it for reporting purposes. It is our goal as DBAs to be continually watchful of how our databases are being used so that we can make intelligent decisions about how to architect them and keep information flowing within our organizations. (More on the topic of configuration decisions in a future issue.)

 

First let’s visit what OLTP and OLAP systems commonly look like.

An OLTP workload has these characteristics:

Read and write intensive

Performance measured by IOPS and average latency (I/O turn-around time)

Typical usage consisting of large bursts of small reads followed by inserts and updates

Primarily driven by the database issuing small random I/Os

An OLAP workload has these characteristics:

Read intensive

Performance measured by throughput in megabytes per second (MBPS)

Driven by read-only activity except during the load stage

Large multi-block read/write I/O streams containing multiple 1-MB I/Os

 

Within Oracle 10g there are a few statistics we can use to determine the type of I/O requests being issued and, ultimately, what type of application we are running.

 

Within the gv$sysstat view we have:

'physical read total IO requests' – Total Reads

'physical write total IO requests' – Total Writes

 

'physical read total multi block requests' – Large Reads

'physical write total multi block requests' – Large Writes

 

'physical read total bytes' – Total Bytes Read

'physical write total bytes' – Total Bytes Written

 

From these we can then calculate:

Small Reads = Total Reads – Large Reads

Small Writes = Total Writes – Large Writes

 

The purpose for doing this is to categorize the type of I/O. Remember, small I/O typically means OLTP and large I/O typically means OLAP. From a storage perspective, small I/O indicates performance that is IOPS driven, while large I/O indicates performance that is MBPS driven. If an existing Oracle 10g Release 2 (10gR2) database is already servicing the application mix you intend to reconfigure on a storage array, you can use the following SQL command to determine the workload.

 

set linesize 100
set head off
SELECT 'Number of Small Reads :'||
       sum(decode(name,'physical read total IO requests',value,0)-
           decode(name,'physical read total multi block requests',value,0)),
       'Number of Small Writes:'||
       sum(decode(name,'physical write total IO requests',value,0)-
           decode(name,'physical write total multi block requests',value,0)),
       'Number of Large Reads :'||
       sum(decode(name,'physical read total multi block requests',value,0)),
       'Number of Large Writes:'||
       sum(decode(name,'physical write total multi block requests',value,0)),
       'Total Bytes Read :'||
       sum(decode(name,'physical read total bytes',value,0)),
       'Total Bytes Written :'||
       sum(decode(name,'physical write total bytes',value,0))
  FROM gv$sysstat;

 

When we run this SQL statement we get something like the following:

Number of Small Reads :205903

Number of Small Writes:106883

Number of Large Reads :40298

Number of Large Writes:2791

Total Bytes Read :4188587008

Total Bytes Written :2009381888

 

Statistics contained in the GV$ views are cumulative, so you must obtain samples at the beginning and end of a peak I/O cycle. Subtract the beginning statistics from the ending statistics to obtain the true quantity of reads and writes that have occurred. You can then use these figures to determine the actual I/O being done and classify the database application mix and the workload type.

 

For instance, if you run the previous SQL command again after 10 minutes, you might produce the following statistics.

Number of Small Reads :500211

Number of Small Writes:123474

Number of Large Reads :142981

Number of Large Writes:8010

Total Bytes Read :22232604961

Total Bytes Written :5586081648

 

Calculate the IOPS for small and large I/O, the percentage of reads to writes, and the MBPS throughput.

 

Small Read IOPS =(500211-205903)/(10*60) = 490 IOPS

Small Write IOPS =(123474-106883)/(10*60) = 27 IOPS

Total Small IOPS =(294308+16591)/(10*60) = 518 IOPS

I/O Percentage of Reads to Writes = 95:5

Large Read IOPS =(142981-40298)/(10*60) = 171 IOPS

Large Write IOPS =(8010-2791) /(10*60) = 8 IOPS

Total Large IOPS =(102683+5219) /(10*60) = 179 IOPS

I/O Percentage of Reads to Writes = 95:5

Total MBPS Read =((22232604961-4188587008)/(10*60))/1048576=28 MBPS

Total MBPS Written =((5586081648-2009381888) /(10*60))/1048576=5 MBPS

Total MBPS =((18044017953+3576699760)/(10*60))/1048576=34 MBPS
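As a sanity check, the arithmetic above can be reproduced in a few lines of a scripting language. The sketch below hard-codes the two samples from this article; the variable names are my own, purely for illustration:

```python
# Reproduce the workload arithmetic from the two 10-minute samples above.
INTERVAL_SECS = 10 * 60

start = {"small_reads": 205903, "small_writes": 106883,
         "large_reads": 40298, "large_writes": 2791,
         "bytes_read": 4188587008, "bytes_written": 2009381888}
end = {"small_reads": 500211, "small_writes": 123474,
       "large_reads": 142981, "large_writes": 8010,
       "bytes_read": 22232604961, "bytes_written": 5586081648}

# Cumulative statistics: subtract the starting sample from the ending one.
delta = {k: end[k] - start[k] for k in start}

small_read_iops  = delta["small_reads"]  // INTERVAL_SECS  # 490
small_write_iops = delta["small_writes"] // INTERVAL_SECS  # 27
large_read_iops  = delta["large_reads"]  // INTERVAL_SECS  # 171
large_write_iops = delta["large_writes"] // INTERVAL_SECS  # 8

# Throughput in MBPS (1 MB = 1048576 bytes)
mbps_read    = delta["bytes_read"]    / INTERVAL_SECS / 1048576  # ~28
mbps_written = delta["bytes_written"] / INTERVAL_SECS / 1048576  # ~5
```

Running this reproduces the figures shown above (allowing for rounding).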

 

And now you can plainly see that this particular system is a classic OLTP application. You might want to run these tests at different times of the day, and maybe even at night, to see how your database is actually being used.
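If you take these samples regularly, the final classification step can even be automated. The sketch below is illustrative only; the function name and the 50% threshold are my own inventions, not anything Oracle provides:

```python
def classify_workload(small_io: int, large_io: int) -> str:
    """Rough heuristic: a workload dominated by small random I/O looks
    OLTP; one dominated by large multi-block I/O looks OLAP.
    The 50% threshold is arbitrary, chosen for illustration only."""
    total = small_io + large_io
    if total == 0:
        return "idle"
    return "OLTP-leaning" if small_io / total > 0.5 else "OLAP-leaning"

# Interval deltas from the worked example above:
# small I/O = 294308 reads + 16591 writes; large I/O = 102683 + 5219
print(classify_workload(294308 + 16591, 102683 + 5219))  # OLTP-leaning
```

In practice you would want a threshold tuned to your storage and business needs, but even this crude split makes trends visible when run from a scheduled job.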

 

Having Problems with Auto Restart of Oracle?
James F. Koopmann

 

Within /etc/init.d/dbora there is the following command line to start the Oracle database on reboot:

su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME"

For some installations this causes the following message/prompt to be generated.

Your default context is user_u:system_r:unconfined_t.

 

You can typically see this message in the /var/log/messages file, and unless someone is at the console to quickly answer [n], the start of Oracle will fail or hang.

 

A solution to get around this is to change the /etc/init.d/dbora script to use runuser instead of the su command, allowing the system to start Oracle services non-interactively with success.
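Concretely, the change is a one-line edit in /etc/init.d/dbora, swapping the su invocation for runuser (variables as in the original script). This is a config fragment, shown here as a sketch of the change rather than a complete init script:

```shell
# Before: may prompt for an SELinux security context and hang the boot
# su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME"

# After: runuser runs the command under the Oracle owner's login
# environment without the interactive context prompt
runuser -l $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME"
```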

Oracle OpenWorld Has Come and Gone
James F. Koopmann

 

Many of us, myself included, are again sitting back and watching Oracle OpenWorld slip through our fingers, wishing we could partake of the wonderful content being provided. Don’t let everything slip by. Go and visit the OpenWorld website (http://www.oracle.com/openworld/2007/index.html), as there is still valuable information you can get without actually being there. You can:

  1. Watch some of the Keynote speakers from Oracle
  2. Watch daily video highlights
  3. Download collateral
  4. Get daily news
  5. Purchase the DVD

One of my typical tricks is to find the presentation I would have liked to see and then search the net for the author and presentation title. Presenters typically give the same presentation at different venues, and you might be able to download it.

Conference Roundup
RMOUG Training Days

The Rocky Mountain State puts on a wonderful two-day conference every year in downtown Denver. Most of the Database Specialists, Inc. team will be in attendance and would enjoy seeing you there. Save the dates of February 13 & 14, 2008 and come say hello. Visit the RMOUG site for more details at http://www.rmoug.org.

 

 

NoCOUG Winter Conference

The Northern California Oracle Users Group will hold its Winter Conference on February 19, 2008. The conference will be at the Oracle Conference Center in Redwood Shores, and there are still opportunities for you to speak. Please share the knowledge. Get updates at http://www.nocoug.org.

 

Call Database Specialists, Inc. when you need remote DBA services or onsite support for your mission-critical Oracle database systems. Visit our website for no-cost resources, white papers, conference presentations and handy scripts. We can help increase your uptime, improve performance, minimize risk, and reduce costs.

 

Sincerely,

 
Lisa Loper
Database Specialists, Inc.
