Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘Scripts’ Category

Counting rows fast

A friend of mine asked me last night basically this: “How is it that SQLTXPLAIN counts rows?”. In particular, he was referring to the use of the SAMPLE clause of the SELECT statement. Look at this piece of a SQLT log:

SQL_ID a9x1kc4ymyhkz
--------------------
SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e4
FROM "XYPZ"."INSTRUMENT" SAMPLE (.01) t

SQL_ID 025v6k1032t69
--------------------
SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e5
FROM "XYPZ"."POSITION_COMPOSITION" SAMPLE (.001) t

SQL_ID 8rby3340xpd9k
--------------------
SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e5
FROM "XYPZ"."POSITION_EVENT" SAMPLE (.001) t

WHY is it that SQLT has to count rows?

SQLT has to count rows so it can report DBA_TABLES.NUM_ROWS and COUNT(*) side by side for each Table. This is an easy way to see if your statistics are way off, and this mechanism has existed in SQLT since well before DBA_TAB_MODIFICATIONS came into existence. Actually, SQLT uses both methods to health-check how stale your Table statistics are.

The conundrum here is: “I use SQLT because I want to diagnose a performance issue on a QUERY on top of large Tables, but I do not want SQLT to take a long time just to produce a COUNT(*) of my Tables…”.

Fast versus Precise

In Performance tuning, there is always a trade-off: you want X but you sacrifice Y. Counting rows is no different. Do you want it faster? Then you sacrifice precision. The SAMPLE clause of the SELECT statement allows you to do exactly that (syntax below):

SAMPLE [ BLOCK ] ( sample_percent ) [ SEED ( seed_value ) ]

So, if you specify a 10% sample size then you have to multiply the COUNT(*) by 10. If you sample 1% you multiply the COUNT(*) by 100. In large Tables, if you sample, let’s say, 0.1%, your multiplier becomes 1,000, which is the same as 1e3 (10**3 or 10^3, depending on where you went to school). The sample size can be as small as 0.000001 and as large as, but not including, 100. It represents a probability more than an actual sample size.

The optional BLOCK clause simply says: sample blocks instead of rows. And the optional SEED clause tries to provide some consistency in the result of the count when you use the same value for two executions of the exact same count. This SEED clause takes a value between 0 and 4,294,967,295.
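
To make the arithmetic concrete, here is a minimal sketch (the table name is made up) of a 0.1% row sample scaled back by 1e3, and of a repeatable block sample using SEED:

-- 0.1% row sample, so scale the count back by 1,000 (table name is hypothetical)
SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e3
  FROM "SCOTT"."BIG_TABLE" SAMPLE (0.1) t;

-- 1% block sample with a fixed SEED, so two executions return the same estimate
SELECT COUNT(*) * 1e2
  FROM "SCOTT"."BIG_TABLE" SAMPLE BLOCK (1) SEED (42) t;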

How does SQLT count rows?

SQLT has over 40 tool parameters. One of them is count_star_threshold with a seeded value of 10,000.

SQLT includes a small algorithm (below) that determines the size of the SAMPLE according to the estimated size of the Table itself, by looking at its statistics as per DBA_TABLES.NUM_ROWS. No statistics? Then skip the sample and do a normal full scan. If the Table is expected to be smaller than the count_star_threshold, then do a full scan as well, capped at up to 10x this threshold. After that, use a sample size inversely proportional to the Table size: the bigger the Table, the smaller the sample.

SQLT also forces a full Table scan and invokes Parallel Execution (PX) as a method to expedite the count. This count can be really fast on Exadata systems as you can imagine.

 /* -------------------------
 *
 * private perform_count_star
 *
 * called by: sqlt$i.common_calls and sqlt$i.remote_xtract
 *
 * ------------------------- */
 PROCEDURE perform_count_star (p_statement_id IN NUMBER)
 IS
 l_sql VARCHAR2(32767);
 l_number NUMBER;
 l_count NUMBER;
 BEGIN
 write_log('=> perform_count_star');

IF sqlt$a.get_param_n('count_star_threshold') = 0 THEN
 write_log('skip "count_star" as per corresponding parameter');
 ELSE
 FOR i IN (SELECT owner, table_name, num_rows, source
 FROM &&tool_administer_schema..sqlt$_dba_all_tables_v
 WHERE statement_id = p_statement_id
 ORDER BY
 owner, table_name)
 LOOP
 IF i.num_rows IS NULL THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*)
FROM "'||i.owner||'"."'||i.table_name||'" t WHERE ROWNUM <= :number';
 l_number := sqlt$a.get_param_n('count_star_threshold');
 ELSIF i.num_rows < sqlt$a.get_param_n('count_star_threshold') THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*)
FROM "'||i.owner||'"."'||i.table_name||'" t WHERE ROWNUM <= :number';
 l_number := sqlt$a.get_param_n('count_star_threshold') * 10;
 ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e1) THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e1
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
 l_number := 1e1;
 ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e2) THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e2
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
 l_number := 1e0;
 ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e3) THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e3
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
 l_number := 1/1e1;
 ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e4) THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e4
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
 l_number := 1/1e2;
 ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e5) THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e5
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
 l_number := 1/1e3;
 ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e6) THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e6
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
 l_number := 1/1e4;
 ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e7) THEN
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e7
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
 l_number := 1/1e5;
 ELSE
 l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e8
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
 l_number := 1/1e6;
 END IF;

l_sql := REPLACE(l_sql, ':number', l_number);
 write_log('num_rows='||i.num_rows||' sql='||l_sql);
 l_count := NULL;

BEGIN
 EXECUTE IMMEDIATE l_sql INTO l_count;
 write_log(l_count||' rows counted');
 EXCEPTION
 WHEN OTHERS THEN
 write_log('** '||SQLERRM);
 write_log(l_sql||' failed with error above. Process continues.');
 END;

IF l_count IS NOT NULL THEN
 IF i.source = 'DBA_TABLES' THEN
 UPDATE &&tool_repository_schema..sqlt$_dba_tables
 SET count_star = l_count
 WHERE statement_id = p_statement_id
 AND owner = i.owner
 AND table_name = i.table_name;
 ELSIF i.source = 'DBA_OBJECT_TABLES' THEN
 UPDATE &&tool_repository_schema..sqlt$_dba_object_tables
 SET count_star = l_count
 WHERE statement_id = p_statement_id
 AND owner = i.owner
 AND table_name = i.table_name;
 END IF;
 END IF;
 END LOOP;

COMMIT;
 END IF;

write_log('<= perform_count_star');
 END perform_count_star;

Conclusion

Counting rows is like counting beans: you can count them one at a time, or you can take some shortcuts. If you are willing to sacrifice some precision for the sake of gaining performance, then consider using the SAMPLE clause of the SELECT statement.

Carlos Sierra’s shared Scripts and Presentations

I recently delivered 3 sessions at the East Coast Oracle Users Group (ECO). During these sessions I offered to share the actual Presentations and some of the Scripts I used during the 3rd session. I plan to keep updating and expanding both scripts and presentations. They also now show on the right side of this page. Feel free to use, share and recycle any of my scripts and presentations.

Written by Carlos Sierra

November 12, 2013 at 7:23 am

SQL Monitor and SQL ASH Scripts are now available

I just uploaded two new scripts to my set of “Shared Scripts”. These are sqlmon.sql and sqlash.sql. They both input a SQL_ID and produce txt/html/zip output.

The sqlmon.sql produces SQL Monitor Reports for all executions available on SQL Monitor for a given SQL. It is RAC aware and its output includes both text and “active” reports. You must have an Oracle Tuning Pack license on the system where you may want to use it.
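
By the way, if you only need a quick one-off report and cannot download the script, the underlying API that sqlmon.sql presumably wraps is DBMS_SQLTUNE.REPORT_SQL_MONITOR (Tuning Pack required). Below is just a minimal sketch of that call, not sqlmon.sql itself:

SET LONG 1000000 LONGC 1000000 PAGES 0 TRIMS ON;
-- one 'active' (html) report for the given SQL_ID; use type => 'TEXT' for plain text
SELECT DBMS_SQLTUNE.report_sql_monitor(
         sql_id       => '&&sql_id.',
         type         => 'ACTIVE',
         report_level => 'ALL') report
  FROM DUAL;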

The sqlash.sql produces Active Session History (ASH) Reports for one SQL_ID. It is also RAC aware and output includes both text and html formats. It produces these ASH reports from both memory and AWR. You must have an Oracle Diagnostics Pack license on the system where you may want to use it.

These two scripts are lightweight: they install nothing and change nothing. If you need a more robust diagnostic tool in these areas, please use SQLHC and/or SQLT. On the other hand, for a quick understanding of one SQL you may want to use the trio: planx.sql, sqlash.sql and sqlmon.sql.

Written by Carlos Sierra

October 15, 2013 at 9:03 pm

YASTS: Yet Another SQL Tuning Script: planx.sql

Having SQLTXPLAIN and SQLHC available, WHY do I need yet another way to display execution plans?

The new planx.sql script reports execution plans for one SQL_ID from all RAC nodes and from AWR. It is lightweight and installs nothing. It produces a list of performance metrics for the given SQL out of gv$sqlstats, gv$sqlstats_plan_hash, gv$sql and dba_hist_sqlstat. It also displays execution plans from gv$sql_plan_statistics_all and dba_hist_sql_plan. It is RAC aware. It also reports on io_saved when executed on Exadata.

Most stand-alone lightweight scripts I have seen only report plans from the connected RAC node. This script reports from all RAC nodes. The AWR piece is optional: if your site does not have a Diagnostics Pack license you can say so when executing this script, and all access to AWR data is simply skipped. Output is plain text and it executes in seconds.

I will be using this planx.sql as my first step in the analysis of queries performing slowly. If I need more, then I will use SQLHC or SQLTXPLAIN. This planx.sql script, as well as some others, is beginning to populate my new shared directory of “free” scripts. The link is at the right of the screen, and also here. Quite often I write small scripts to do my job; now they will have a new home there. A readme provides a one-line description of each script.

Conclusion

The new planx.sql is an alternative to plain DBMS_XPLAN.DISPLAY_CURSOR. It displays plans from all RAC nodes and, optionally, from AWR. It also reports relevant performance metrics for all recorded execution plans. It is fast and installs nothing.

Written by Carlos Sierra

October 9, 2013 at 6:06 pm

Carlos Sierra Shared Scripts

Every so often I write small scripts for my personal use. Recently I wrote this mini tiny version of a snapper on v$mystat: basically a script with no parameters that, if executed right before and after a SQL I am investigating, spits out the delta between two snaps of v$mystat taken just before and after my SQL. This way I can quickly validate some of the “cell” counts used by Exadata, for example. Of course, if I want a more robust script, with the capability to monitor a session other than the one I am connected to, I would use snapper.sql created by Tanel Poder. In my case, I just needed something extra light, with no installation of any object into the database. This mini tiny script is named mystat.sql and it is available here. As time passes by, I will be writing other small and useful scripts and I will upload them to the same location. So feel free to visit this Google Drive place every so often. If I upload something exciting I will blog about it. Until then, I just have two new scripts: mystat.sql and tkprof.sql.
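
For the curious, the anonymous block below is only a rough sketch of that before/after delta idea, not the actual mystat.sql: it snapshots v$mystat (joined to v$statname), runs one statement of your choice, snapshots again, and prints the non-zero deltas. It assumes your session can select from those two views.

SET SERVEROUT ON SIZE 1000000;
DECLARE
  TYPE stat_tab IS TABLE OF NUMBER INDEX BY VARCHAR2(128);
  l_before stat_tab;
  l_dummy  NUMBER;
BEGIN
  -- snap 1: remember the current value of every session statistic
  FOR r IN (SELECT n.name, m.value
              FROM v$mystat m, v$statname n
             WHERE m.statistic# = n.statistic#)
  LOOP
    l_before(r.name) := r.value;
  END LOOP;
  -- the SQL under investigation (replace with your own)
  EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM dba_objects' INTO l_dummy;
  -- snap 2: print only the statistics that changed
  FOR r IN (SELECT n.name, m.value
              FROM v$mystat m, v$statname n
             WHERE m.statistic# = n.statistic#)
  LOOP
    IF r.value - l_before(r.name) <> 0 THEN
      DBMS_OUTPUT.put_line(RPAD(r.name, 50)||TO_CHAR(r.value - l_before(r.name)));
    END IF;
  END LOOP;
END;
/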

Written by Carlos Sierra

October 4, 2013 at 10:43 am

Posted in Scripts

Function to compute SQL_ID out of SQL_TEXT

Ric Van Dyke from Hotsos asked me yesterday if I knew of a PL/SQL function that inputs a SQL_TEXT and outputs a SQL_ID. I kind of remembered Tanel Poder had written something in this regard. And I also recalled seeing an API under DBMS_UTILITY that would provide the SQL_ID given a SQL_TEXT. When I researched both, I was able to get from Tanel a function to generate the SQL HASH_VALUE out of a given SQL_ID, and from DBMS_UTILITY another API to get the SQL HASH_VALUE out of a SQL_TEXT provided as a VARCHAR2. But I could not find one written in PL/SQL to compute the SQL_ID given the SQL_TEXT. Then Slavik Markovich gave me the insight that I needed to build a PL/SQL function that computes the SQL_ID out of the SQL Text. Slavik’s function is written in Python.

So, thanks to Ric for his challenging question/request, and to Tanel and Slavik for their valuable insight, I was encouraged to develop this function. Feel free to use it! I have seen the same question asked a few times in the past, so at least I know we have such a function now!

GRANT EXECUTE ON SYS.DBMS_CRYPTO TO csierra;
CREATE OR REPLACE FUNCTION compute_sql_id (sql_text IN CLOB)
RETURN VARCHAR2 IS
 BASE_32 CONSTANT VARCHAR2(32) := '0123456789abcdfghjkmnpqrstuvwxyz';
 l_raw_128 RAW(128);
 l_hex_32 VARCHAR2(32);
 l_low_16 VARCHAR(16);
 l_q3 VARCHAR2(8);
 l_q4 VARCHAR2(8);
 l_low_16_m VARCHAR(16);
 l_number NUMBER;
 l_idx INTEGER;
 l_sql_id VARCHAR2(13);
BEGIN
 l_raw_128 := /* use md5 algorithm on sql_text and produce 128 bit hash */
 SYS.DBMS_CRYPTO.hash(TRIM(CHR(0) FROM sql_text)||CHR(0), SYS.DBMS_CRYPTO.hash_md5);
 l_hex_32 := RAWTOHEX(l_raw_128); /* 32 hex characters */
 l_low_16 := SUBSTR(l_hex_32, 17, 16); /* we only need lower 16 */
 l_q3 := SUBSTR(l_low_16, 1, 8); /* 3rd quarter (8 hex characters) */
 l_q4 := SUBSTR(l_low_16, 9, 8); /* 4th quarter (8 hex characters) */
 /* need to reverse order of each of the 4 pairs of hex characters */
 l_q3 := SUBSTR(l_q3, 7, 2)||SUBSTR(l_q3, 5, 2)||SUBSTR(l_q3, 3, 2)||SUBSTR(l_q3, 1, 2);
 l_q4 := SUBSTR(l_q4, 7, 2)||SUBSTR(l_q4, 5, 2)||SUBSTR(l_q4, 3, 2)||SUBSTR(l_q4, 1, 2);
 /* assembly back lower 16 after reversing order on each quarter */
 l_low_16_m := l_q3||l_q4;
 /* convert to number */
 SELECT TO_NUMBER(l_low_16_m, 'xxxxxxxxxxxxxxxx') INTO l_number FROM DUAL;
 /* 13 pieces base-32 (5 bits each) make 65 bits. we do have 64 bits */
 FOR i IN 1 .. 13
 LOOP
 l_idx := TRUNC(l_number / POWER(32, (13 - i))); /* index on BASE_32 */
 l_sql_id := l_sql_id||SUBSTR(BASE_32, (l_idx + 1), 1); /* stitch 13 characters */
 l_number := l_number - (l_idx * POWER(32, (13 - i))); /* for next piece */
 END LOOP;
 RETURN l_sql_id;
END compute_sql_id;
/
SHOW ERRORS;
-- test
SET SERVEROUT ON;
SELECT compute_sql_id('select ''Slavik'' from dual') FROM DUAL;
-- you should get 29schpgjyfxux
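
A quick way to sanity-check the function (assuming you can select from V$SQL) is to compare its output against the SQL_ID the database already computed for a few cursors in the shared pool:

-- computed_sql_id should match sql_id on every row
SELECT sql_id, compute_sql_id(sql_fulltext) computed_sql_id
  FROM v$sql
 WHERE ROWNUM <= 5;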

Written by Carlos Sierra

September 12, 2013 at 5:51 pm

Posted in Scripts

SQL Tuning with SQLTXPLAIN 2-day Workshop

SQLTXPLAIN is a SQL Tuning tool widely used by the Oracle community. Available through My Oracle Support (MOS) under document 215187.1, this free tool is available for download and use to anyone with MOS access. It has helped thousands of times to expedite the resolution of SQL Tuning issues, and many Oracle DBAs and Developers benefit from its use on a daily basis.

Stelios Charalambides has done an excellent job writing a book on this topic. In his book Stelios covers many aspects of SQLTXPLAIN and some related topics. I highly recommend getting a copy of this book if you want to learn more about SQLTXPLAIN. It is available at Amazon and many other retailers.

The new 2-day SQLTXPLAIN Workshop offered by Enkitec (an Oracle Platinum business partner and my employer) is a completely new course that interleaves “how to use SQLTXPLAIN effectively” with important related SQL Tuning topics such as Plan Flexibility and Plan Stability. This hands-on workshop offers participants the unique opportunity to fully understand the contents of SQLTXPLAIN and its vast output through an interactive session. About half the time is dedicated to short guided labs, while the other half uses presentations and demos. This workshop is packed with lots of content. It was a real challenge packaging so much info into only two days, but I am very pleased with the result. It became a 2-day intensive knowledge-transfer hands-on workshop on SQLTXPLAIN and SQL Tuning!

The first session of this workshop is scheduled for November 7-8 in Dallas, Texas. I expect this pilot session to fill up fast. Other sessions, including onsite ones, will be offered during 2014. I hope to meet many of you face to face on November 7!

Gathering CBO Statistics in Oracle Applications

When we refer to Oracle Applications we usually think of E-Business Suite (EBS), but there are other large Oracle Applications that require some special consideration when it comes to CBO statistics gathering: Siebel and PeopleSoft (PSFT). There are some others, but I would say EBS, Siebel and PSFT make up the bulk of them all.

Gathering Stats on EBS

This is easy, since the only supported method is FND_STATS. That means using plain DBMS_STATS or ANALYZE is simply not supported. FND_STATS is a wrapper on top of DBMS_STATS that includes some extra functionality. The most important addition is controlling which columns get Histograms. The set of columns EBS requires to have Histograms is small and well controlled. FND_STATS can be executed from within the application (Concurrent Request, i.e. batch job) or directly from SQL*Plus.

If you are using EBS, then be sure FND_STATS is the only active method to gather statistics. In other words, automatic jobs to gather statistics using DBMS_STATS or custom scripts must be disabled.
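
For reference only, this is roughly how FND_STATS is invoked from SQL*Plus (as the APPS user). The schema and table below are just examples, and the exact procedure signatures vary by EBS release, so check the FND_STATS package in your own environment:

-- sketch only: gather statistics for one schema and for one table using FND_STATS
EXEC fnd_stats.gather_schema_statistics('AR');
EXEC fnd_stats.gather_table_stats('AR', 'RA_CUSTOMER_TRX_ALL');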

Gathering Stats on Siebel

Siebel documents how CBO statistics must be gathered in a white paper titled “Performance Tuning Guidelines for Siebel CRM Applications on Oracle Database“. This paper is authored by James Qiu, Paul Blokhin and Mehdi Gerami (Oracle Development) and its latest version is dated December 2010. It was rewritten back in 2010 with the contribution of Abel Macias, Dave Henriquez and myself. Out of this paper we created the coe_siebel_stats.sql and coe_siebel_profile.sql scripts. Both are well documented. Please refer to Metalink (MOS) note 781927.1 to download the latest version of the white paper and the actual scripts.

If you are using Siebel on databases 10.2 or higher, you really want to follow the white paper provided in MOS 781927.1 and implement coe_siebel_stats.sql. We have noticed that since its inception a couple of years ago, the number of performance issues reported in Siebel applications has been reduced significantly.

Gathering Stats on PSFT

For PeopleTools 8.49 and higher, on databases 10.2.0.4 and higher, I recommend the use of Metalink (MOS) note 1322888.1 (Improving Statistics in Oracle RDBMS for PeopleSoft Enterprise). The author of this note is John Houghton, who works for the Center of Expertise (CoE) on the PSFT side of the house. In this MOS note you will find a zip file that contains a couple of documents explaining how to use the pscbo_stats.sql script, as well as the actual script. There is some implementation effort, but once pscbo_stats.sql is implemented your CBO statistics are gathered following best practices for PSFT. I had the opportunity to collaborate with John in the creation and testing of this pscbo_stats.sql script. It is currently on its 2nd major release.

If you are on PSFT 8.49 or higher, you want to implement pscbo_stats.sql out of MOS 1322888.1. Feedback on the results of using this tool has been positive in most cases.

Conclusion

The Cost-based Optimizer (CBO) requires good statistics as input in order to produce an optimal plan. In a perfect world we would use DBMS_STATS with no parameters. Still, some specific application design characteristics, like very skewed data or the use of small tables and temporary tables, require a specific setup to gather statistics tailored to the needs of these applications. FND_STATS, coe_siebel_stats.sql and pscbo_stats.sql are all wrappers on top of DBMS_STATS that make fine-grained use of DBMS_STATS parameters to gather statistics better suited to their applications. For EBS the use of FND_STATS is a must; for the other two the use of their corresponding scripts is not a must, but it is strongly recommended.
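
As a point of reference for the kind of fine-grained DBMS_STATS usage these wrappers apply, the sketch below (table and column names are made up) shows the plain DBMS_STATS equivalent of “no histograms by default, histograms only on the few columns the application needs”:

BEGIN
  DBMS_STATS.gather_table_stats(
    ownname    => user,
    tabname    => 'ORDERS',  -- hypothetical table
    method_opt => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 STATUS',  -- histogram only on STATUS
    cascade    => TRUE);
END;
/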

Written by Carlos Sierra

July 25, 2013 at 7:23 am

Finding the Predicates that are different on the same Execution Plan

OK, this may not be common, but when it happens it becomes very hard to spot. A couple of cases:

  1. You have a SQL with multiple child cursors. They all could have the same Plan Hash Value (PHV), or maybe more than one. Performance-wise, even the same PHV can exhibit differences for multiple reasons, and one of them is the possibility of having the same PHV but not exactly the same filter or access Predicates. If you have two or more Predicates on the same Plan Operation, the difference could be the order of these Predicates, and if their execution performance is quite different, one version could perform better than the other.
  2. You have two or more versions of a core query, where some of these versions have slightly different Predicates (a typical case where the SQL is dynamically assembled by an application). This family of similar queries would have different SQL_IDs, but they may or may not produce the same PHV. When they do, it is hard to find in which Plan Operations the different Predicates are used.

I have seen many cases where we have to scratch our heads thinking: “Why do I have such different performance when it is clear we have the same Execution Plan?”. If you are having one of those issues, you may want to use one of the two scripts below. The first inputs a SQL_ID and outputs a list of those Predicates for each PHV where there is more than one version for any Plan Operation ID. The second inputs a PHV and performs the same analysis regardless of whether the PHV is associated with one or many SQL_IDs. Both are RAC aware. Both are stand-alone and install nothing.

Script to find differences in access/filter Predicates for a given SQL_ID

SPO difpred1.txt;
SET HEA ON LIN 80 NEWP 1 PAGES 100 LIN 300 TRIMS ON TI OFF TIMI OFF;

-- shows which plan lines have different predicates for given sql_id
WITH d AS (
SELECT sql_id,
 plan_hash_value,
 id,
 COUNT(DISTINCT access_predicates) distinct_access_predicates,
 COUNT(DISTINCT filter_predicates) distinct_filter_predicates
 FROM gv$sql_plan_statistics_all
 WHERE sql_id = '&&sql_id.'
 GROUP BY
 sql_id,
 plan_hash_value,
 id
HAVING MIN(NVL(access_predicates, 'X')) != MAX(NVL(access_predicates, 'X'))
 OR MIN(NVL(filter_predicates, 'X')) != MAX(NVL(filter_predicates, 'X'))
)
SELECT v.plan_hash_value,
 v.id,
 'access' type,
 v.inst_id,
 v.child_number,
 v.access_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.sql_id = d.sql_id
 AND v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_access_predicates > 1
 UNION ALL
SELECT v.plan_hash_value,
 v.id,
 'filter' type,
 v.inst_id,
 v.child_number,
 v.filter_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.sql_id = d.sql_id
 AND v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_filter_predicates > 1
 ORDER BY
 1, 2, 3, 6, 4, 5;

SET HEA ON LIN 80 NEWP 1 PAGES 14 LIN 80 TRIMS OFF TI OFF TIMI OFF;
SPO OFF;

Output Sample:

old 8: WHERE sql_id = '&&sql_id.'
new 8: WHERE sql_id = 'cy9pxhp4y6u78'

PLAN_HASH_VALUE ID TYPE INST_ID CHILD_NUMBER
--------------- ---------- ------ ---------- ------------
PREDICATES
--------------------------------------------------------------------------------
 3724264953 2 filter 1 0
("C2"='WHATEVER' AND TRANSLATE(TO_CHAR(SIN("N4")),'1','a')='a')

3724264953 2 filter 1 1
(TRANSLATE(TO_CHAR(SIN("N4")),'1','a')='a' AND "C2"='WHATEVER')

Script to find differences in access/filter Predicates for a given PHV

SPO difpred2.txt;
SET HEA ON LIN 80 NEWP 1 PAGES 100 LIN 300 TRIMS ON TI OFF TIMI OFF;

-- shows which plan lines have different predicates for given phv
WITH d AS (
SELECT plan_hash_value,
 id,
 COUNT(DISTINCT access_predicates) distinct_access_predicates,
 COUNT(DISTINCT filter_predicates) distinct_filter_predicates
 FROM gv$sql_plan_statistics_all
 WHERE plan_hash_value = &&plan_hash_value.
 GROUP BY
 plan_hash_value,
 id
HAVING MIN(NVL(access_predicates, 'X')) != MAX(NVL(access_predicates, 'X'))
 OR MIN(NVL(filter_predicates, 'X')) != MAX(NVL(filter_predicates, 'X'))
)
SELECT v.id,
 'access' type,
 v.sql_id,
 v.inst_id,
 v.child_number,
 v.access_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_access_predicates > 1
 UNION ALL
SELECT v.id,
 'filter' type,
 v.sql_id,
 v.inst_id,
 v.child_number,
 v.filter_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_filter_predicates > 1
 ORDER BY
 1, 2, 6, 3, 4, 5;

SET HEA ON LIN 80 NEWP 1 PAGES 14 LIN 80 TRIMS OFF TI OFF TIMI OFF;
SPO OFF;

Output Sample:

Enter value for plan_hash_value: 2339135578
old 7: WHERE plan_hash_value = &&plan_hash_value.
new 7: WHERE plan_hash_value = 2339135578

ID TYPE SQL_ID INST_ID CHILD_NUMBER
---------- ------ ------------- ---------- ------------
PREDICATES
--------------------------------------------------------------------------------
 5 filter 72jhgzs8nb1p4 1 0
MAX("E2"."HIREDATE")=:B1

5 filter 8a4x4867rajuv 1 0
(MAX("E2"."HIREDATE")=:B1 OR :B2 IS NULL)

5 filter 8a4x4867rajuv 1 1
(MAX("E2"."HIREDATE")=:B1 OR :B2 IS NULL)

Written by Carlos Sierra

June 18, 2013 at 6:10 am

Using DBMS_XPLAN to display Cursor Plans for a SQL in all RAC nodes

Just sharing a lightweight script that uses DBMS_XPLAN to display Execution Plans for a given SQL_ID out of all RAC nodes. Nothing fancy, just a workaround to the lack of an “instance” parameter on DBMS_XPLAN.DISPLAY_CURSOR.

SET ECHO OFF FEED OFF VER OFF SHOW OFF HEA OFF LIN 2000 NEWP NONE PAGES 0 LONG 2000000 LONGC 2000 SQLC MIX TAB ON TRIMS ON TI OFF TIMI OFF ARRAY 100 NUMF "" SQLP SQL> SUF sql BLO . RECSEP OFF APPI OFF AUTOT OFF;
COL inst_child FOR A21;
BREAK ON inst_child SKIP 2;
SPO rac_xplan.txt;
PRO Current Execution Plans (last execution)
PRO
PRO Captured while still in memory. Metrics below are for the last execution of each child cursor.
PRO If STATISTICS_LEVEL was set to ALL at the time of the hard-parse then A-Rows column is populated.
PRO
SELECT RPAD('Inst: '||v.inst_id, 9)||' '||RPAD('Child: '||v.child_number, 11) inst_child, t.plan_table_output
 FROM gv$sql v,
 TABLE(DBMS_XPLAN.DISPLAY('gv$sql_plan_statistics_all', NULL, 'ADVANCED ALLSTATS LAST', 'inst_id = '||v.inst_id||' AND sql_id = '''||v.sql_id||''' AND child_number = '||v.child_number)) t
 WHERE v.sql_id = '&&sql_id.'
 AND v.loaded_versions > 0;
SPO OFF;
SET ECHO OFF FEED 6 VER ON SHOW OFF HEA ON LIN 80 NEWP 1 PAGES 14 LONG 80 LONGC 80 SQLC MIX TAB ON TRIMS OFF TI OFF TIMI OFF ARRAY 15 NUMF "" SQLP SQL> SUF sql BLO . RECSEP WR APPI OFF AUTOT OFF;

Sample Output

Current Execution Plans (last execution)

Captured while still in memory. Metrics below are for the last execution of each child cursor.
If STATISTICS_LEVEL was set to ALL at the time of the hard-parse then A-Rows column is populated.

Inst: 1 Child: 0 Plan hash value: 3724264953

---------------------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
 ---------------------------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1 | | | 220 (100)| | 1 |00:00:00.01 | 784 |
 | 1 | SORT AGGREGATE | | 1 | 1 | 45 | | | 1 |00:00:00.01 | 784 |
 |* 2 | TABLE ACCESS FULL| T1 | 1 | 1 | 45 | 220 (1)| 00:00:03 | 0 |00:00:00.01 | 784 |
 ---------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
 -------------------------------------------------------------

1 - SEL$1
 2 - SEL$1 / T1@SEL$1

Outline Data
 -------------

/*+
 BEGIN_OUTLINE_DATA
 IGNORE_OPTIM_EMBEDDED_HINTS
 OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
 DB_VERSION('11.2.0.3')
 ALL_ROWS
 OUTLINE_LEAF(@"SEL$1")
 FULL(@"SEL$1" "T1"@"SEL$1")
 END_OUTLINE_DATA
 */

Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - filter(("C2"='WHATEVER' AND TRANSLATE(TO_CHAR(SIN("N4")),'1','a')='a'))

Column Projection Information (identified by operation id):
 -----------------------------------------------------------

1 - (#keys=0) COUNT(*)[22]
Inst: 1 Child: 1 Plan hash value: 3724264953

-----------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost | A-Rows | A-Time | Buffers |
 -----------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1 | | | 123 | 1 |00:00:01.88 | 784 |
 | 1 | SORT AGGREGATE | | 1 | 1 | 45 | | 1 |00:00:01.88 | 784 |
 |* 2 | TABLE ACCESS FULL| T1 | 1 | 1 | 45 | 123 | 0 |00:00:01.88 | 784 |
 -----------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
 -------------------------------------------------------------

1 - SEL$1
 2 - SEL$1 / T1@SEL$1

Outline Data
 -------------

/*+
 BEGIN_OUTLINE_DATA
 IGNORE_OPTIM_EMBEDDED_HINTS
 OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
 DB_VERSION('11.2.0.3')
 OPT_PARAM('_optimizer_cost_model' 'io')
 ALL_ROWS
 OUTLINE_LEAF(@"SEL$1")
 FULL(@"SEL$1" "T1"@"SEL$1")
 END_OUTLINE_DATA
 */

Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - filter((TRANSLATE(TO_CHAR(SIN("N4")),'1','a')='a' AND "C2"='WHATEVER'))

Column Projection Information (identified by operation id):
 -----------------------------------------------------------

1 - (#keys=0) COUNT(*)[22]

Note
 -----
 - cpu costing is off (consider enabling it)

Written by Carlos Sierra

June 17, 2013 at 9:26 am

Posted in Scripts
