This article covers Oracle SQL/PLSQL performance analysis: its aims and the tools available.
Performance tuning is a broad and somewhat complex topic when it comes to Oracle databases. Two of the biggest questions faced by the average DBA are where to start and what to do. All you may know is that someone (a user) reports a slow or poorly performing application or query. Where do you even begin when faced with this situation?
Oracle performance tuning is a complex, continuous, and iterative process. It is continuous because the performance needs and specifications of the system change over time, as do the data, the statistics, and the execution plans. Parameters and problematic queries or objects should be re-tuned for best performance each time. (This is the DBA's perspective, of course; developers must consider many more things, such as reliability, portability, and modularity of code.)
There are several approaches and methodologies for tuning an Oracle database. Consider the following areas for tuning. The order in which the steps are listed should be maintained to prevent tuning side effects; for example, there is no point increasing the buffer cache if you can reduce I/O by rewriting a SQL statement. The steps are:
1- Database Design (if it's not too late):
Poor system performance usually results from a poor database design. One should generally normalize to 3NF; selective denormalization can then provide valuable performance improvements. When designing, always keep the data access path in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, and so on.
2- Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by writing optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.
3- Memory Tuning:
Properly size your database memory areas (shared pool, buffer cache, log buffer, etc.) by looking at your buffer hit ratios. Pin large objects in memory to prevent frequent reloads.
4- Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, and so on.
5- Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.
6- Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.
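The buffer hit ratios mentioned under memory tuning can be read from V$SYSSTAT. The following is a minimal sketch of the classic formula; treat the result as a rough indicator only (it assumes SELECT privilege on V$SYSSTAT):

```sql
-- Classic buffer cache hit ratio from V$SYSSTAT (rough indicator only)
select 1 - (phy.value / (cur.value + con.value)) as buffer_cache_hit_ratio
  from v$sysstat cur,
       v$sysstat con,
       v$sysstat phy
 where cur.name = 'db block gets'
   and con.name = 'consistent gets'
   and phy.name = 'physical reads';
```

A low ratio suggests data is frequently read from disk, but a high ratio does not by itself prove the cache is well sized.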
The change from a top-down structured approach to a principle-based "make it stop hurting" one is part of the problem. Gathering statistics is obviously important because how else do you know if you have improved (or worsened) the problem? Still, to some degree with either approach, you are left with the original two questions: what do I look for, and how do I make it better? If the structured approach left you scratching your head, the principled approach only adds to the confusion.
What would help the novice tuner is a list of items or areas to evaluate (configure, diagnose, and tune) in each of the following areas:
• Tuning the Buffer Cache
• Tuning the Redo Log Buffer
• Tuning the Shared Pool Memory
• Tuning the Program Global Area
• Optimizing Data Storage
• Optimizing Tablespaces
• Tuning Undo Segments
• Detecting Lock Contention
• Tuning SQL
An important facet of database system performance tuning is the tuning of SQL statements. SQL tuning involves three basic steps:
1- Identifying high-load or top SQL statements that are responsible for a large share of the application workload and system resources, by reviewing past SQL execution history available in the system.
2- Verifying that the execution plans produced by the query optimizer for these statements perform reasonably.
3- Implementing corrective actions to generate better execution plans for poorly performing SQL statements.
These three steps are repeated until the system performance reaches a satisfactory level or no more statements can be tuned.
High-load SQL statements can be identified by:
1- Automatic Database Diagnostic Monitor
2- Automatic Workload Repository
3- V$SQL view
4- Custom Workload
5- SQL Trace
In practice, we usually use SQL Trace together with a tool to analyze the trace, then focus on the problem and tune. The most common tool for analyzing a SQL trace is TKPROF; there are also simpler tools such as Autotrace and Runstats.
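As a sketch of the V$SQL approach, the following query lists the top statements currently in the shared SQL area by buffer gets (it assumes SELECT privilege on the view; the 60-character substr is only for readability):

```sql
-- Top 10 SQL statements by buffer gets, from the shared SQL area
select sql_id, executions, buffer_gets, disk_reads,
       substr(sql_text, 1, 60) as sql_text_head
  from (select * from v$sql order by buffer_gets desc)
 where rownum <= 10;
```

Ordering by disk_reads or by buffer_gets/executions instead highlights different kinds of expensive statements.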
The autotrace utility is a very underutilized feature of SQL*Plus. It offers statement tracing and instant feedback on any successful SELECT, INSERT, UPDATE or DELETE statement. The utility requires a plan table (for the Explain Plan) under the current user's schema. In addition, it requires the PLUSTRACE or DBA role for the user executing the trace. The source for the PLUSTRACE role can be found in $ORACLE_HOME/sqlplus/admin/plustrce.sql.
Autotrace is simpler: it gives statistics only about the query, and it works only for single SQL statements, not for PL/SQL blocks.
Preparing to Analyse Statements
Before the AUTOTRACE command can be used, a table called PLAN_TABLE must exist in your user account.
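In releases before 10g, the plan table is typically created by running the utlxplan.sql script shipped with the database (from 10g onward a shared PLAN_TABLE is provided automatically, so this step can usually be skipped):

```sql
-- Create PLAN_TABLE in the current schema
-- ("?" expands to ORACLE_HOME in SQL*Plus)
@?/rdbms/admin/utlxplan.sql
```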
Also, the user running Autotrace must have the PLUSTRACE role granted. By default, the PLUSTRACE role is not defined; once it exists, you can grant it to a user with the following command:
grant plustrace to user1 ;
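The role itself is created by running the plustrce.sql script shipped with SQL*Plus as a privileged user; a sketch:

```sql
-- As SYS: create the PLUSTRACE role, then grant it to the tracing user
-- ("?" expands to ORACLE_HOME in SQL*Plus)
connect / as sysdba
@?/sqlplus/admin/plustrce.sql
grant plustrace to user1;
```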
You can control the report by setting the AUTOTRACE system variable.
SET AUTOTRACE OFF: No AUTOTRACE report is generated. This is the default.
SET AUTOTRACE ON EXPLAIN: The AUTOTRACE report shows only the optimizer execution path.
SET AUTOTRACE ON STATISTICS: The AUTOTRACE report shows only the SQL statement execution statistics.
SET AUTOTRACE ON: The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY: Like SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any.
Note: SET TIMING ON is a separate setting that writes the elapsed time of each statement to the output.
SQL> set timing on
SQL> set autotrace on
SQL> select last_name from employees where department_id=10;
Plan hash value: 2056577954

--------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |     1 |    11 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMPLOYEES         |     1 |    11 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | EMP_DEPARTMENT_IX |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPARTMENT_ID"=10)

Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
5 consistent gets
0 physical reads
0 redo size
340 bytes sent via SQL*Net to client
376 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
This example executes a query to list some data in the employees table. Oracle prints four sets of data:
• The actual results of the query
• Execution time of the query (because of the command: "set timing on")
• An execution plan for the query
• The statistics for the query
The execution plan describes the sequence of operations that Oracle performed to obtain the result for the query. In this example, the SELECT statement was resolved by an index range scan of EMP_DEPARTMENT_IX followed by a table access by index ROWID on the EMPLOYEES table; that is, the matching rows were located through the index rather than by reading the whole table.
The statistics provide a summary of how much work Oracle performed to execute the query. The statistics are:
• recursive calls: The number of internal calls Oracle has made to execute the command.
• db block gets: The number of blocks read in current mode to answer the query.
• consistent gets: The number of blocks read in read-consistent mode, i.e. as of a single point in time, possibly reconstructed from undo; these reads do not interfere with other users' changes.
• physical reads: The number of blocks read from disk.
• redo size: The amount of redo generated, in bytes.
The db block gets, consistent gets and physical reads together give the number of blocks that were read, whether from the buffer cache or from disk. For many queries the number of physical reads is low because the data is already in the database buffers; if the number of physical reads is high, the query can be expected to be slow, as there will be many disk accesses.
• bytes sent via SQL*Net to client: The number of bytes sent across the network from the server to the client.
• bytes received via SQL*Net from client: The number of bytes sent across the network from the client to the server.
• SQL*Net roundtrips to/from client: The number of exchanges between client and server.
• sorts (memory): The number of data sorts performed in memory.
• sorts (disk): The number of data sorts performed on disk.
• rows processed: The number of rows processed by the query.
The bytes received/sent via SQL*Net indicate how much data is being moved across the network. This is important as moving a lot of data across the network may affect the network's performance.
The sorts indicate the amount of work done in sorting data during the execution of the query. Sorts are important as sorting data is a slow process.
You can find more examples on the topic: Oracle Performance Analysis: Autotrace workshop
Runstats is another simple utility that provides useful information for evaluating the performance of both SQL statements and PL/SQL procedures. It is best suited to comparing two different approaches. It shows two vital sets of statistics:
• The elapsed-time difference between the two approaches. This very simply shows which approach is faster by the wall clock.
• How many resources each approach takes. This can be even more meaningful than the wall-clock timings; for example, if one approach is faster than the other but takes thousands of latches (lightweight locks), you might avoid it simply because it will not scale as well.
The way this test harness works is by saving the system statistics and latch information into a temporary table. We then run a test and take another snapshot. We run the second test and take yet another snapshot. Now we can show the amount of resources used by approach 1 and approach 2.
The runstats package was written by Thomas Kyte, and you can find more information about it, including how to deploy it, on:
To use runstats, you must first create the runstats package (once). The package code is on the site above; once the package has been created, you do not need to run its code again.
The second important point to consider: runstats measures the time elapsed between the executions of its procedures, so you have to run the steps of your test immediately after one another.
exec runStats_pkg.rs_start;
**FIRST SQL OR PL/SQL CODE TO COMPARE**
exec runStats_pkg.rs_middle;
**SECOND SQL OR PL/SQL CODE TO COMPARE**
exec runStats_pkg.rs_stop;
You have to run the whole block above in one go, each line immediately after the previous one, so first write your code to a text file, then paste the whole block into the console. Don't forget to include an extra newline at the end of what you paste, so that the last line, "exec runStats_pkg.rs_stop;", also runs immediately.
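As a hypothetical worked example (tables t1 and t2 with the shape of ALL_OBJECTS, and an installed runstats package, are assumed), comparing a single set-based INSERT against a row-by-row PL/SQL loop might look like this:

```sql
exec runStats_pkg.rs_start;

-- Approach 1: single set-based statement
insert into t1 select * from all_objects;
commit;

exec runStats_pkg.rs_middle;

-- Approach 2: row-by-row processing
begin
  for r in (select * from all_objects) loop
    insert into t2 values r;
  end loop;
  commit;
end;
/

-- Optional threshold: print only statistics differing by more than 500
exec runStats_pkg.rs_stop(500);
```

The report typically shows the set-based approach doing far fewer latch gets, which is exactly the scalability signal runstats is designed to expose.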
You can find examples with runstats on the topic: Oracle Performance Analysis: Runstats workshop
SQL TRACE AND TKPROF
The SQL Trace facility and TKPROF let you accurately assess the efficiency of the SQL statements an application runs. For best results, use these tools together with EXPLAIN PLAN rather than using EXPLAIN PLAN alone. They provide more advanced statistics for each SQL statement that runs, including recursively called SQL statements and even Oracle's internal SQL statements that modify the data dictionary.
The SQL Trace facility provides performance information on individual SQL statements and recursive statements. It generates the following statistics for each statement:
• Parse, execute, and fetch counts
• CPU and elapsed times
• Physical reads and logical reads
• Number of rows processed
• Misses on the library cache
• Username under which each parse occurred
• Each commit and rollback
• Wait event data for each SQL statement, and a summary for each trace file
• Row operations showing the actual execution plan of each SQL statement
If the cursor for the SQL statement is closed, SQL Trace also provides row source information that includes:
• Number of rows, number of consistent reads, number of physical reads, number of physical writes, and time elapsed for each operation on a row
When the SQL Trace facility is enabled for a session or for an instance, performance statistics for all SQL statements executed in that session or instance are placed into trace files. Using the SQL Trace facility can have a severe performance impact and may result in increased system overhead, excessive CPU usage, and inadequate disk space, so it is best used on a test system, not a production system.
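On 10g and later, tracing for the current session can be switched on and off with the DBMS_MONITOR package (assuming EXECUTE privilege on it); a sketch:

```sql
-- Enable SQL Trace for the current session, including wait events
exec dbms_monitor.session_trace_enable(waits => true, binds => false);

-- ... run the workload to be traced ...

exec dbms_monitor.session_trace_disable;
```

Setting binds => true additionally records bind variable values, at the cost of larger trace files.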
Trace files are mostly hard to read directly, so there are utilities that format them and produce useful statistics for us. TKPROF is the most common: it ships with the Oracle release, formats the contents of a trace file, and places the result in a readable output file.
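A typical TKPROF invocation from the operating system prompt (the file names here are hypothetical) looks like:

```shell
# Format a raw trace file; sort statements by execute/fetch elapsed time
# and skip SYS-owned recursive SQL
tkprof orcl_ora_12345.trc report.txt sort=exeela,fchela sys=no
```

The sort option brings the most expensive statements to the top of the report, which is usually where tuning effort pays off first.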
The Performance Tuning Guide is a good document that covers this whole area. Follow the link for SQL Trace and the usage of TKPROF: http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm#sthref2019
You can find examples with SQL Trace and TKPROF on the topic: Oracle Performance Analysis: TKPROF workshop
10046 and 10053 Events
If you've got an interest in Oracle performance tuning, you'll probably have seen references to something called event 10046. Event 10046 is an undocumented but well-known feature of the Oracle database that generates detailed information on how a statement is parsed, what the values of bind variables are, and which wait events occurred during a particular session. Activating event 10046 gives us a more detailed trace.
For methods that require tracing levels, the following are valid values:
0 - No trace. Like switching sql_trace off.
2 - The equivalent of regular sql_trace.
4 - The same as 2, but with the addition of bind variable values.
8 - The same as 2, but with the addition of wait events.
12 - The same as 2, but with both bind variable values and wait events.
To ensure that wait event information is written to the trace file for the session, enable tracing with the following SQL statement:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
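When you are done, switch the event off again (or end the session), since tracing carries a real overhead:

```sql
-- Disable 10046 tracing for the current session
ALTER SESSION SET EVENTS '10046 trace name context off';
```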
With 10gR2, the 10053 event provides very detailed information on how and why the cost-based optimizer chose the execution plan for a query.
To enable the 10053 event for SQL trace, enable tracing with the following SQL statement:
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
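The resulting trace file is written to the directory pointed to by the user_dump_dest parameter, which can be queried (assuming access to V$PARAMETER):

```sql
-- Locate the directory where session trace files are written
select value from v$parameter where name = 'user_dump_dest';
```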
For more information, please see the references below. I have also written code-example entries for each method; follow the workshop topics.