Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘Plan Stability’ Category

How to upgrade Dynamic Sampling on a Query already using an imported SQL Profile


Problem

I have this query that references a couple of Global Temporary Tables (GTT). These GTTs have no CBO Statistics, thus Dynamic Sampling (DS) is used on them. The problem is that the default value of 2 at the instance level samples only 64 blocks, and these GTTs are large. Testing with DS on larger samples provides better quality on these dynamic statistics, and a better-performing Execution Plan for this particular SQL.

This SQL already uses an imported SQL Profile, generated by coe_xfr_sql_profile.sql out of SQLT (MOS 215187.1) under the sqlt/utl directory. This SQL Profile contains CBO Hints created out of the Outline Data contained in the other_xml column of the Plan. Still, a higher level of DS is needed for these two GTTs in order to produce a better-performing Plan. Setting DS at the instance level or session level is not an option. Modifying the query is not an option. So the question is: “how to embed a dynamic sampling hint in a SQL that already has a SQL Profile based on an Outline?”

Solution

In the script generated by coe_xfr_sql_profile.sql, add one extra Hint with the DS level that is needed. Look at the sample below, where one extra Hint has been added right below DB_VERSION. Since the two GTTs have no CBO Statistics, they invoke DS, which would now use level 6 instead of 2.

h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 6)]',
q'[ALL_ROWS]',
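
After running the modified script to re-create the SQL Profile, a quick way to confirm the new DS level took effect is to check the Note section of the plan. A minimal sketch ('&&sql_id.' is a placeholder for the SQL_ID of the tuned statement):

-- Display the plan for the child cursors of the tuned SQL; the Note
-- section should report both the SQL Profile and the dynamic sampling
-- level in use (e.g. "dynamic sampling used for this statement (level=6)").
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&&sql_id.', NULL, 'TYPICAL'));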


Written by Carlos Sierra

May 21, 2014 at 7:24 am

Why use SQLTXPLAIN


Every so often I see on a distribution list a posting that starts like this: “I upgraded my application from database release X to release Y and now many queries are performing poorly, can you tell why?”

Like everyone else on a distribution list, my first impulse is to make an educated guess permeated by a prior set of experiences. The intentions are always good, but the process is painful and time-consuming. Many of us have seen this kind of question, and many of us have good hunches. Still, I think our eagerness to help blinds us a bit. The right thing to do is to step back and analyze the facts, and I mean all the diagnostics supporting the observation.

What is needed to diagnose a SQL Tuning issue?

The list is large, but I will enumerate some of the most important pieces:

  1. SQL Text
  2. Version of the database (before and after upgrade)
  3. Database parameters (before and after)
  4. State of the CBO Statistics (before and after)
  5. Changes on Histograms
  6. Basics about the architecture (CPUs, memory, etc.)
  7. Values of binds if SQL has them
  8. Index comparison, including state (visible? usable?)
  9. Execution Plan (before and after)
  10. Plan stability? (Stored Outlines, Profiles, SQL Plan Management)
  11. Performance history as per evidence on AWR or StatsPack
  12. Trace from Event 10053 to understand the CBO
  13. Trace from Event 10046 level 8 or 12 to review Waits
  14. Active Session History (ASH) if 10046 is not available

I could keep adding bullets to the list, but I think you get the point: there are simply too many things to check! And each takes some time to collect. More importantly, the state of the system changes over time, so you may need to re-collect the same diagnostics more than once.
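
As a rough illustration of the effort involved, here is a minimal sketch of collecting just a few items from that list by hand in SQL*Plus (SQLT automates all of this and much more; '&&sql_id.' is a placeholder you supply):

-- 1. SQL Text
SELECT sql_fulltext FROM v$sqlarea WHERE sql_id = '&&sql_id.';
-- 2. Version of the database
SELECT banner FROM v$version;
-- 3. Non-default database parameters
SELECT name, value FROM v$parameter WHERE isdefault = 'FALSE';
-- 9. Current Execution Plan
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&&sql_id.'));

And that covers only four of the fourteen items, for the "after" state only.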

SQLTXPLAIN to the rescue

SQLT, or SQLTXPLAIN, has been available on MetaLink (now MOS) under note 215187.1 for over a decade. In short, SQLT collects all the diagnostics listed above and a lot more. That is WHY Oracle Support uses it every day. It simply saves a lot of time! So, I always encourage fellow Oracle users to make use of this FREE tool and expedite their own SQL Tuning analysis. When time permits, I volunteer to help on an analysis. So, if you get to read this, and you want to help yourself using SQLT but feel intimidated by this little monster, please give it a try and contact me for assistance. If I can help, I will; if I cannot, I will let you know.

Conclusion

It is fun to guess WHY a SQL is not performing as expected, and trying different guesses is educational, but very time-consuming. If you want to actually find root causes before trying to fix your SQL, you may want to collect relevant diagnostics. SQLT is there to help, and if installing this tool is not something you can do in the short term, then consider the SQL Health-Check script (SQLHC).


SQL using Literals instead of Binds. Are all Literals evil?


Every so often I see systems where there is a good amount of SQL that uses Literals instead of Binds, and executes enough times to create a large number of Cursors. Is this a red flag? As with many questions regarding performance, I would say the right answer is: it all depends.

Of course we want to use Binds instead of Literals in order to reduce the frequency of Hard Parses, and in turn reduce CPU consumption and space utilization in the Shared Pool. Does that mean we want to replace all Literals with Binds? Do we declare war on Literals? In my opinion, the answer is simply: NO.

If a SQL has a Predicate on a date column, or a key column, then I would expect the Number of Distinct Values (NDV) for such a column to be high, in some cases as high as the number of rows in the Table (unique values, for example). On the other hand, if the Predicate is on one of those columns that denotes a code, like Process Type or Status, and the NDV is small, then I’d rather keep the Literal in place. Especially if the data in such a column is skewed and I have (or plan to have) Histograms on it.

What do I propose?

  1. If the SQL is executed sporadically, then it does not matter (Literals or Binds).
  2. If the SQL executes frequently, and the Predicate in question is on a Column where the Number of Distinct Values (NDV) is high, then use a Bind instead of a Literal (for this Predicate).
  3. If the SQL executes frequently, and the Predicate in question is on a Column where the NDV is low, then use a Literal (for this Predicate). This assumes the NDV for these Literals is also small.
  4. Regardless of whether you use a Literal or a Bind for a particular Predicate: if the data in a Column referenced by a Predicate is heavily skewed, gather Statistics with Histograms on this Column.

Follow-up question: when is the NDV high or low? The answer is also: it all depends. I personally prefer to see Literals if the NDV for this Column (and this Literal) is less than 10 (or so).
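
To make the proposal concrete, here is a hypothetical example (table and column names are illustrative only): customer_id has a high NDV, so it takes a Bind, while status has a low NDV with skewed data and a Histogram, so it keeps its Literal:

SELECT o.order_id, o.order_date, o.amount
  FROM orders o
 WHERE o.customer_id = :b_customer_id  -- high NDV: use a Bind
   AND o.status = 'PENDING';           -- low NDV, skewed data: keep the Literal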

Why is having Binds and Literals in the same SQL better than having all Binds?

If we have a good set of CBO Statistics, and we have Histograms on skewed data, and we are using bind peeking, and we are on 11g, and Adaptive Cursor Sharing (ACS) is enabled, and we plan on using SQL Plan Management (SPM), then we are in for a treat:

With all the “ands” above, by using Binds on Predicates with high NDV and Literals on those with low NDV, we will end up having a small number of different SQL_IDs for what we consider “the same SQL”. Each incarnation of this SQL could potentially have its own set of optimal Execution Plans created by ACS and the CBO (by making use of Histograms on the data and Selectivity Profiles on ACS). Then, with the aid of SPM, we could provide stability to those multiple optimal Execution Plans for each version of the SQL. That means that SQL Q1 with Literal L1 could have a different set of optimal plans than Q1 with Literal L2.
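
A quick way to observe ACS working on one of these incarnations is to inspect the bind-sensitivity flags of its child cursors; a minimal sketch ('&&sql_id.' is a placeholder you supply):

-- Once ACS ramps up, expect IS_BIND_SENSITIVE = 'Y' and, after enough
-- executions with different selectivities, IS_BIND_AWARE = 'Y' children.
SELECT child_number, plan_hash_value, executions,
       is_bind_sensitive, is_bind_aware, is_shareable
  FROM v$sql
 WHERE sql_id = '&&sql_id.'
 ORDER BY child_number;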

Conclusion

Replacing some Literals with Binds, but not all Literals, sounds like a lot of work, but the extra work may be worth the effort. In my opinion, the end result is better if we replace most, but not all (as per the proposal above). WHY? Even when ACS does a good job at finding multiple optimal plans for a SQL by using the selectivity of the Predicates, by allowing a small number of Cursors for the same SQL, given the use of Literals on columns with low NDV, we are basically reducing the times we would have to execute a SQL with a sub-optimal plan due to the current ACS ramp-up process. This extra granularity, provided by a small number of incarnations of the “otherwise same SQL”, could be crucial for tuning complex SQL or corner cases.

Written by Carlos Sierra

February 4, 2014 at 5:02 pm

Exadata Optimizations and SQLTXPLAIN Courses


I will be delivering a couple of courses soon: one in January and the second in February. I will keep posting upcoming Training and Conferences on a new link at the right margin of this blog.

Exadata Optimizations Jan 13-14

This 2-day “Exadata Optimizations” course is for Developers and DBAs who are new to Exadata and need to ramp up quickly. As the name implies, its focus is on Exadata Optimizations. We talk about Smart Scans, Storage Indexes, Smart Flash Cache, Hybrid Columnar Compression (HCC) and Parallel Execution (PX). This course is hands-on, with a fair amount of demos and labs.

SQLTXPLAIN (SQLT) Feb 20-21

This 2-day “SQL Tuning with SQLTXPLAIN” course shows how to use SQLT to actually do SQL Tuning. We go over the yin-yang of the CBO, meaning: Plan Flexibility versus Plan Stability. We use SQLT for labs, and we also go over some real-life SQL Tuning cases. If you are currently using SQLT, you are welcome to bring a SQLT Report to class and we can review it there.

Conclusion

New year, new resolutions. I will be investing part of my time sharing knowledge through formal courses and conferences. These days it is hard to find the time and budget to keep our knowledge on the edge, but again and again I see that many of our daily struggles could be mitigated by some concise technical training. So I encourage you to add some training to your list of resolutions for this new year; or at the very least, to get and read some fresh books.

Happy New Year 2014!

Written by Carlos Sierra

December 27, 2013 at 1:24 pm

Non-intrusive SQL Trace instrumentation on legacy PL/SQL code


Problem

Legacy PL/SQL code with intermittent performance degradation.

To improve the performance of this code, the first step is to diagnose it. But this code has no instrumentation whatsoever, it is in Production, and rolling any code into Production usually requires rigorous testing. So, whatever method we implement has to be lightweight and as safe as possible.

Using Oradebug is not a viable solution, mainly for two reasons: first, it misses the “head” of the transaction, so we may not get to know the SQL taking longer; and second, the internal procedures for finding out about the issue, reporting it, then acting on it may take from several minutes to hours.

Solution

  1. Identify which PL/SQL libraries are known to be problematic in terms of intermittent performance degradation.
  2. At the beginning of each callable PL/SQL Procedure or Function, add a call to DBMS_APPLICATION_INFO.SET_MODULE to set some appropriate MODULE and ACTION, for example “R252, LOAD”. Call this API also at the end, to NULL out these two parameters. This code change is very small and safe. It introduces practically no overhead. It simply labels every SQL executed by the PL/SQL library with a MODULE and ACTION that uniquely identify the code of concern.
  3. Activate SQL Trace on the module/action that needs to be traced, by calling DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE, passing parameters SERVICE, MODULE and ACTION. With this API, request a SQL Trace to be generated with WAITs and BINDs (binds are optional but desirable). Once the traces are no longer needed (reviewed by someone), turn SQL Trace off using the DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE API (see the sketch after this list).
  4. Once the SQL Trace is produced, generate a TKPROF report on it. You may want to include the parameter “sort=exeela,fchela”. This way you get the slower SQL at the top of the TKPROF report.
  5. With the SQL Trace and TKPROF, identify the slower SQL and use SQL Monitor and/or SQLT XTRACT to get more granular diagnostics (you will need to identify the SQL_ID). On any given PL/SQL library, it is common that 1~5 SQL statements consume > 80% of the Elapsed Time. Focus on these large consumers.
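
Below is a minimal sketch of steps 2 and 3. The procedure name LOAD_R252 and the service name APP_SVC are hypothetical placeholders; the real change would go into each callable Procedure and Function of the problematic library:

-- Step 2: label everything this unit executes with MODULE/ACTION.
CREATE OR REPLACE PROCEDURE load_r252 IS
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'R252', action_name => 'LOAD');
  -- ... legacy business logic goes here, unchanged ...
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => NULL, action_name => NULL);
END load_r252;
/

-- Step 3: enable SQL Trace with WAITs and BINDs for that module/action only.
EXEC DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(service_name => 'APP_SVC', module_name => 'R252', action_name => 'LOAD', waits => TRUE, binds => TRUE);

-- Once the traces have been reviewed, turn SQL Trace off.
EXEC DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(service_name => 'APP_SVC', module_name => 'R252', action_name => 'LOAD');

-- Step 4 (at the OS): tkprof <trace_file>.trc report.txt sort=exeela,fchela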

Conclusion

Producing a SQL Trace with EVENT 10046 level 8 or 12 is very useful to properly diagnose the code of a PL/SQL library that performs poorly. A follow-up on the slower SQL with SQL Monitor and/or SQLT XTRACT is in order. The method presented above is very easy and safe to implement.

Written by Carlos Sierra

December 23, 2013 at 12:23 pm

SQL Tuning with SQLTXPLAIN 2-day Workshop


SQLTXPLAIN is a SQL Tuning tool widely used by the Oracle community. Available through My Oracle Support (MOS) under document 215187.1, this free tool is available for download and use by anyone with MOS access. It has helped thousands of times to expedite the resolution of SQL Tuning issues, and many Oracle DBAs and Developers benefit from its use on a daily basis.

Stelios Charalambides has done an excellent job writing a book on this topic. In his book, Stelios covers many aspects of SQLTXPLAIN and some related topics. I highly recommend getting a copy of this book if you want to learn more about SQLTXPLAIN. It is available at Amazon and many other retailers.

The new 2-day SQLTXPLAIN Workshop offered by Enkitec (an Oracle Platinum business partner and my employer) is a completely new course that interleaves “how to use SQLTXPLAIN effectively” with important and related SQL Tuning topics such as Plan Flexibility and Plan Stability. This hands-on workshop offers participants the unique opportunity to fully understand the contents of SQLTXPLAIN and its vast output through an interactive session. About half the time is dedicated to short guided labs, while the other half uses presentations and demos. This workshop is packed with lots of content. It was a real challenge packaging so much info in only two days, but I am very pleased with the result. It became a 2-day intensive knowledge-transfer hands-on workshop on SQLTXPLAIN and SQL Tuning!

The first session of this workshop is scheduled for November 7-8 in Dallas, Texas. I expect this pilot session to fill up fast. Other sessions, including onsite ones, will be offered during 2014. I hope to meet many of you face to face on November 7!

Finding the Predicates that are different on the same Execution Plan


OK, this may not be common, but when it happens it becomes very hard to spot. A couple of cases:

  1. You have a SQL with multiple child cursors. They could all have the same Plan Hash Value (PHV), or maybe more than one. Performance-wise, even the same PHV can exhibit differences for multiple reasons, and one of them is the possibility of having the same PHV but not exactly the same filter or access Predicates. If you have two or more Predicates on the same Plan Operation, the difference could be the order of these Predicates, and if their execution performance is quite different, one version could perform better than the other.
  2. You have two or more versions of a core query, where some of these versions have slightly different Predicates (a typical case where the SQL is dynamically assembled by an application). This family of similar queries would have different SQL_IDs, but they may or may not produce the same PHV. When they do, it is hard to find in which Plan Operations the different Predicates are used.

I have seen many cases where we have to scratch our heads thinking: “Why do I have such different performance when it is clear we have the same Execution Plan?”. If you are having one of those issues, you may want to use one of the two scripts below. The first takes a SQL_ID as input and outputs a list of those Predicates for each PHV where there is more than one version for any Plan Operation ID. The second takes a PHV as input and performs the same analysis, regardless of whether the PHV is associated with one or many SQL_IDs. Both are RAC-aware. Both are stand-alone and install nothing.

Script to find differences in access/filter Predicates for a given SQL_ID

SPO difpred1.txt;
SET HEA ON NEWP 1 PAGES 100 LIN 300 TRIMS ON TI OFF TIMI OFF;

-- shows which plan lines have different predicates for given sql_id
WITH d AS (
SELECT sql_id,
 plan_hash_value,
 id,
 COUNT(DISTINCT access_predicates) distinct_access_predicates,
 COUNT(DISTINCT filter_predicates) distinct_filter_predicates
 FROM gv$sql_plan_statistics_all
 WHERE sql_id = '&&sql_id.'
 GROUP BY
 sql_id,
 plan_hash_value,
 id
HAVING MIN(NVL(access_predicates, 'X')) != MAX(NVL(access_predicates, 'X'))
 OR MIN(NVL(filter_predicates, 'X')) != MAX(NVL(filter_predicates, 'X'))
)
SELECT v.plan_hash_value,
 v.id,
 'access' type,
 v.inst_id,
 v.child_number,
 v.access_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.sql_id = d.sql_id
 AND v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_access_predicates > 1
 UNION ALL
SELECT v.plan_hash_value,
 v.id,
 'filter' type,
 v.inst_id,
 v.child_number,
 v.filter_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.sql_id = d.sql_id
 AND v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_filter_predicates > 1
 ORDER BY
 1, 2, 3, 6, 4, 5;

SET HEA ON NEWP 1 PAGES 14 LIN 80 TRIMS OFF TI OFF TIMI OFF;
SPO OFF;

Output Sample:

old 8: WHERE sql_id = '&&sql_id.'
new 8: WHERE sql_id = 'cy9pxhp4y6u78'

PLAN_HASH_VALUE ID TYPE INST_ID CHILD_NUMBER
--------------- ---------- ------ ---------- ------------
PREDICATES
--------------------------------------------------------------------------------
 3724264953 2 filter 1 0
("C2"='WHATEVER' AND TRANSLATE(TO_CHAR(SIN("N4")),'1','a')='a')

3724264953 2 filter 1 1
(TRANSLATE(TO_CHAR(SIN("N4")),'1','a')='a' AND "C2"='WHATEVER')

Script to find differences in access/filter Predicates for a given PHV

SPO difpred2.txt;
SET HEA ON NEWP 1 PAGES 100 LIN 300 TRIMS ON TI OFF TIMI OFF;

-- shows which plan lines have different predicates for given phv
WITH d AS (
SELECT plan_hash_value,
 id,
 COUNT(DISTINCT access_predicates) distinct_access_predicates,
 COUNT(DISTINCT filter_predicates) distinct_filter_predicates
 FROM gv$sql_plan_statistics_all
 WHERE plan_hash_value = &&plan_hash_value.
 GROUP BY
 plan_hash_value,
 id
HAVING MIN(NVL(access_predicates, 'X')) != MAX(NVL(access_predicates, 'X'))
 OR MIN(NVL(filter_predicates, 'X')) != MAX(NVL(filter_predicates, 'X'))
)
SELECT v.id,
 'access' type,
 v.sql_id,
 v.inst_id,
 v.child_number,
 v.access_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_access_predicates > 1
 UNION ALL
SELECT v.id,
 'filter' type,
 v.sql_id,
 v.inst_id,
 v.child_number,
 v.filter_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_filter_predicates > 1
 ORDER BY
 1, 2, 6, 3, 4, 5;

SET HEA ON NEWP 1 PAGES 14 LIN 80 TRIMS OFF TI OFF TIMI OFF;
SPO OFF;

Output Sample:

Enter value for plan_hash_value: 2339135578
old 7: WHERE plan_hash_value = &&plan_hash_value.
new 7: WHERE plan_hash_value = 2339135578

ID TYPE SQL_ID INST_ID CHILD_NUMBER
---------- ------ ------------- ---------- ------------
PREDICATES
--------------------------------------------------------------------------------
 5 filter 72jhgzs8nb1p4 1 0
MAX("E2"."HIREDATE")=:B1

5 filter 8a4x4867rajuv 1 0
(MAX("E2"."HIREDATE")=:B1 OR :B2 IS NULL)

5 filter 8a4x4867rajuv 1 1
(MAX("E2"."HIREDATE")=:B1 OR :B2 IS NULL)

Written by Carlos Sierra

June 18, 2013 at 6:10 am

Has my Plan changed or not?


We have learned from Kerry Osborne’s recent post on SQL Gone Bad – But Plan Not Changed?, and a prior post from Randolf Geist on PLAN_HASH_VALUE – How equal (and stable?) are your execution plans – part 1, that having the same Plan Hash Value (PHV) is no guarantee that we actually have the same Execution Plan, if we were to consider the Access and Filter Predicates of the Plan Operations. So, what does it mean? It means that if you get inconsistent performance out of the same Execution Plan (same PHV), it is not enough to just check whether you have the same PHV. You need to be aware of the Predicates as well. Of course there are other possible explanations for inconsistent performance out of the same PHV, like skewed data combined with the use of binds, but that is another story.

Same Plan but different PHV

The opposite of “same PHV but different Plan” is also possible. So, after we understood that we can have the same PHV but not quite the same Plan if we include the Predicates in the comparison, the question became: can we also have cases with the same Plan (including Predicates) but a different PHV? And the answer is YES.

First, discard the keyword STORAGE, which we can consider not strong enough to make a Plan Operation different. Of course this is questionable, but as of today the PHV is blind to the STORAGE keyword, so a Plan with or without this keyword renders the same PHV.

What about system-generated names like ‘SYS_TEMP%’, ‘index$_join$_%’ and ‘VW_ST%’? In all three cases the PHV will be different, but the Plan is actually the same, with the exception of the system-generated names. So, if you just look at the PHV and see that it is different, it is still possible that you actually have the same Plan.

What if, in the case of having the same index name, the set of columns or their order is different? In these cases you may look at the PHV and see the same value, and indeed it is the same Plan; but if the columns of a referenced index have changed, is it really the same Plan? In my opinion it is not! You could be spinning on an issue where you have the same Plan and different Performance, but an Index changed its columns.

SQLT to the rescue

Since we have the two cases “same PHV but different Plan” and “same Plan but different PHV”, reviewing and ruling out these two possibilities on a complex Execution Plan can be cumbersome and time-consuming. That is WHY SQLT incorporated the concept of the SQLT Plan Hash Value (SQLT_PHV) since version 11.4.0.5 (from May 20, 2010). First came SQLT_PHV1, then SQLT_PHV2 (on SQLT 11.4.2.4, on February 18, 2011). So we now have PHV, SQLT_PHV1 and SQLT_PHV2, as you can see below.

[Table: Execution Plans summary showing the PHV, SQLT_PHV1 and SQLT_PHV2 columns for each Plan]

As you can see in the Table footnote: SQLT_PHV1 considers id, parent_id, operation, options, index_columns and object_name. SQLT_PHV2 also includes access and filter Predicates. So, when comparing Plans, the values of PHV, SQLT_PHV1 and SQLT_PHV2 can give you a better idea of whether your Plan is actually the same or not. SQLT COMPARE also uses the 3 values to determine if your Plan is the same or not, and in cases where the PHV is the same but a Predicate on a Plan Operation is different, it highlights in red the specific Plan Operation that has a different Predicate. Pretty cool, right? I guess I will have to blog about SQLT COMPARE some time soon then…

Oh, also be aware that AWR does not store Plan Predicates, so if your Plan only exists in AWR you may be blind to these Predicates; but if EXPLAIN PLAN FOR renders the same Plan, as seen in lines 4 and 5 above, then you can see the Predicates for 657302870 out of the “XPL” Plan instead of the “AWR” one. A little trick that becomes handy…
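
A minimal sketch of that trick (the SELECT is a placeholder; substitute the statement whose AWR-only Plan you are investigating):

-- Re-explain the statement, then read the Access/Filter Predicates from
-- the freshly generated Plan (AWR would not have stored them).
EXPLAIN PLAN FOR
SELECT /* statement under analysis */ * FROM dual;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, NULL, 'TYPICAL'));

If this freshly explained Plan renders the same PHV as the one recorded in AWR, the Predicates shown here can reasonably be read as applying to it.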

Conclusion

When it comes to Execution Plans and their Plan Hash Value, it is possible that two Plans with the same PHV are actually different if you consider the Predicates, and it is also possible to get a different PHV even if the Plan is the same, when the Plan has system-generated names. So, during your analysis, just looking at the PHV to determine if two Plans are the same or not is not enough. If you are already using SQLT, pay attention to the “Execution Plans” section, and to the SQLT PHV1 and PHV2 columns.

Written by Carlos Sierra

June 9, 2013 at 6:41 am

Oracle Queries are taking different Plans


I recently saw a post with the title “Oracle Queries are taking different Plans”, and since there was a lot of guessing, I suggested using SQLTXPLAIN to find the root cause. For some reason or another the thread is still open after 20 days, and the guessing continues. Since the problem of finding the root cause of a change in an execution plan is old and still common, I decided to talk about it here.

In Oracle, execution plans do change for one SQL over time; that is a fundamental function of the Cost-based Optimizer (CBO). Not only is this normal, it is expected and desired in most circumstances. What is not desired is when a SQL that had a good-performing plan suddenly changes to one we consider sub-optimal in terms of performance. This should be the concern and focus. So WHY does the CBO suddenly generate such a plan? The possible reasons are many, but I list here the most common ones I can think of:

  • Schema object statistics changes
  • Small sample sizes when gathering CBO schema statistics (mostly on 10g)
  • Lack of statistics in some columns referenced by predicates
  • Inconsistent values of bind variables referenced by SQL
  • System wide changes like CBO parameters or system statistics
  • Index modifications including new and dropped indexes
  • Invalid or invisible indexes
  • Large SQL text with many predicates where several plans with similar cost are possible
  • Inconsistent ramp-up process of Adaptive Cursor Sharing (ACS)
  • SQL Profiles generated by the SQL Tuning Advisor (STA)
  • An index changing its blevel (for example from blevel 2 to 3)
  • Volatile values in some columns (for example value “Pending” exists in table column for a short time)
  • Asynchronous gathering of stats on volatile data instead of gathering right between load and consumption

The list above is not comprehensive. It only includes what came to my mind while writing it, so I may have forgotten several cases. And of course there are always corner cases, like special conditions activating uncommon heuristics, or simply hitting a CBO bug.
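
Before guessing at any of the causes above, it helps to first confirm when the plan actually changed and how each plan performed. A minimal sketch using AWR (it requires a Diagnostics Pack license; '&&sql_id.' is a placeholder you supply):

-- Plan history and average elapsed seconds per plan, from AWR snapshots.
SELECT snap_id, plan_hash_value, executions_delta,
       ROUND(elapsed_time_delta / 1e6 / NULLIF(executions_delta, 0), 3) avg_ela_secs
  FROM dba_hist_sqlstat
 WHERE sql_id = '&&sql_id.'
 ORDER BY snap_id;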

Guessing WHY a plan has changed is very time-consuming and requires multiple queries against the metadata of the database. Since most of the metadata we are interested in is dynamic by nature, we base our guesswork on an ever-changing environment. Of course the root cause becomes elusive, and we may give up for some time, or jump to false conclusions.

This is where SQLTXPLAIN (MOS 215187.1) or its younger and lighter sister SQLHC (MOS 1366133.1) come to the rescue. These free tools, especially SQLTXPLAIN (SQLT), capture the static and dynamic environment around the one SQL we want to analyze.

By having a consistent view of all the elements affecting the execution plan, it becomes much easier to find the root cause of a plan change. And since SQLT also includes current and past details about parameters, statistics and bind variable values, we can even correlate those values to prior execution plans. So I encourage you to try SQLTXPLAIN on a SQL the next time you are confronted by the old question: “WHY are Oracle Queries taking different Plans?”.

I am aware SQLTXPLAIN is intimidating. Its output is exuberant, but the point is that it includes what we need to find the root cause of most, if not all, SQL Tuning cases where a plan has changed.

I have presented this SQLT tool at several conferences during the past year or so. I think it is appropriate now to put together a 3-day workshop to master this tool… Just an idea at this time.

A couple of myths about Plan Stability and CBO Statistics


Every so often I get asked about these two statements:

  1. Bind Peeking requires Histograms in order to generate multiple Execution Plans.
  2. Freezing CBO Statistics freezes an Execution Plan.

Both are myths. The truth is that you don’t have to have Histograms for a Plan using Bind Peeking to change. And even if you freeze your CBO Statistics, your Plan may change.

Range Predicates and Bind Peeking

Consider the Range Predicate “COLUMN > :b1”, where the column low/high values are “A” and “Z”. If the value of bind :b1 is “A” at the time of a hard parse, the Selectivity of this Predicate is close to 1. If the CBO peeks the value “Z”, the Selectivity is close to zero. A variation of Selectivity like this may change an Execution Plan. Thus: a column does not require a CBO Histogram for Bind Peeking to produce multiple Execution Plans.
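
A back-of-the-envelope sketch of that Selectivity swing, using the simplified linear model the CBO applies to range Predicates when there is no Histogram (exact formulas vary by version):

-- Range Predicate "COLUMN > :b1", low = 'A', high = 'Z', no Histogram.
-- Simplified model: sel = (high - peeked_value) / (high - low)
--   peeked 'A': sel = (Z - A) / (Z - A) = 1   -> Predicate keeps ~all rows
--   peeked 'Z': sel = (Z - Z) / (Z - A) = 0   -> Predicate keeps ~no rows
-- No single Plan is likely to be optimal at both extremes.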

Frozen CBO Statistics and Plan Stability

If your table contains columns based on sequences (like in a Primary Key), or if it contains DATE type columns, your Execution Plan may change even if the CBO Statistics do not change.

  1. Consider the Predicate “N1 = :b1”, where N1 is a NUMBER type column based on a sequence, with 100 distinct values and low/high values of 1 and 100. Then hard parse this SQL with values of 100 and 200. The first value may result in a Selectivity of 0.01, while the second may be close to zero. That is a significant change, and it may cause a plan to change (see the arithmetic sketch after this list).
  2. Consider the Predicate “D1 = :b2”, where D1 is a DATE type column with 100 distinct values and low/high values of 2013/01/01 and 2013/04/17. Then hard parse this SQL with values of 2013/02/14 and 2013/12/31. Same as the prior case: one value may render a Selectivity close to 0.01, while the other is close to zero.
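
A rough sketch of the arithmetic behind case 1, using the simplified linear out-of-range decay the CBO is known to apply (exact formulas vary by version; the numbers are illustrative):

-- Predicate "N1 = :b1" with frozen stats: low = 1, high = 100, NDV = 100.
--   in range (value 100):   sel = 1/NDV                                = 0.01
--   out of range (value v): sel ~ (1/NDV) * (1 - (v - high)/(high - low))
--   value 150:              sel ~ 0.01 * (1 - 50/99)                   ~ 0.005
--   value 200:              sel ~ 0.01 * (1 - 100/99)                  ~ 0
-- The further the peeked value drifts past the frozen high value, the
-- smaller the Selectivity becomes, until the Plan eventually flips.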

The question in both cases above is: WHY would I pass a value out of range? As time progresses, users tend to pass more recent values, and since the CBO Statistics are frozen, these peeked values fall out of the low/high range, departing further from the high end over time. Thus, the Predicate Selectivity diminishes gradually over time. At some point, your otherwise stable Execution Plan may suddenly change. Thus: freezing CBO Statistics does not freeze Execution Plans.

Conclusion

Since neither the lack of Histograms nor freezing CBO Statistics guarantees Plan Stability, do not rely on these two myths. If what you are looking for is Plan Stability, then use SQL Plan Management (available since 11g) or SQL Profiles (available since 10g).

Written by Carlos Sierra

April 17, 2013 at 9:50 am