Archive for the ‘SQLTXPLAIN (SQLT)’ Category
Table contains N column(s) referenced in predicates with mutating number of histogram endpoints count.
What is the meaning of this observation on a SQLTXPLAIN (SQLT) main report?
“Table contains N column(s) referenced in predicates with mutating number of histogram endpoints count.”
This observation is part of the health-checks performed by SQLT on a given SQL statement. It means the following:
- Your SQL includes predicates on column(s) with histograms; and
- The number of buckets in the histograms for these column(s) has changed by more than 10% between two consecutive versions of the statistics
For example, your SQL has a predicate like “c1 = :b1” and c1 has a histogram. Currently its histogram has 5 buckets, but in the prior version of the statistics for this column it had 4.
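If you want to eyeball the current histogram for such a column, a query along these lines works (owner, table and column names below are placeholders). Keep in mind that SQLT compares these counts across statistics versions, which this simple query does not do:

-- Current number of buckets and histogram endpoints for one column
-- (placeholder owner/table/column names).
SELECT c.num_buckets, c.histogram, c.last_analyzed,
       (SELECT COUNT(*)
          FROM dba_tab_histograms h
         WHERE h.owner = c.owner
           AND h.table_name = c.table_name
           AND h.column_name = c.column_name) endpoints
  FROM dba_tab_col_statistics c
 WHERE c.owner = 'APP_OWNER'
   AND c.table_name = 'SOME_TABLE'
   AND c.column_name = 'C1';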
How does it affect you?
It all depends. Maybe it does not affect you at all. But maybe the new value is precisely the one your SQL passes during bind peeking, and then it may impact your execution plan…
Keep in mind that “Observations” in SQLT are merely some kind of “heads-up” that something may affect your plan. Other cases of “mutating” values include:
- Indexes with mutating BLEVEL
- Tables with mutating NUM_ROWS
- Columns with mutating NDV
And the same applies to partitions and sub-partitions.
Typical case:
What happens if your SQL was using index X on its execution plan and suddenly it switches to index Y? Same join order, just one index access swapped.
If you see that X or Y has a mutating BLEVEL, you can suspect that might be the culprit. Of course you will have to review a 10053 trace to be sure, but at least the “observation” creates some awareness of this possibility.
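To see the current BLEVEL of the two indexes you can run something like this (owner and index names are placeholders; the observation compares values across statistics versions, while this query shows only the current ones):

-- Current index statistics for the two suspect indexes (placeholder names).
SELECT index_name, blevel, leaf_blocks, clustering_factor, last_analyzed
  FROM dba_indexes
 WHERE owner = 'APP_OWNER'
   AND index_name IN ('INDEX_X', 'INDEX_Y');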
SQLTXPLAIN (SQLT) 11.4.4.6 is now available
So what is new in SQLT 11.4.4.6? Honestly, not much: one fix and 6 enhancements:
- BUG: SQLT TC “restore” was allowing a column to be renamed from “a” to “b” and then from “b” to “c”, ending up with duplicate rows in the histogram, among other issues.
- ENH: New tool parameter to control the automatic generation of a script with a Custom SQL Profile for best performing plan. Custom SQL Profiles based on DBMS_SQLTUNE.IMPORT_SQL_PROFILE are now inactive by default.
- ENH: New HC detecting corrupt histograms as per bugs 8543770, 10267075, 12819221 and 12876988.
- ENH: New HC reporting Tables where there are columns referenced in predicates and for which the number of distinct values has changed more than 10% from one statistics version to the next. These abrupt changes can cause an execution plan to also change.
- ENH: New HC about CBO not using partition level histograms if stats were gathered with “INCREMENTAL” and fix for bug 13583722 has been applied.
- ENH: Scripts roxtract.sql and roxecute.sql now include System Parameters with non-default or modified values. They also report on all initialization parameters for the connecting instance.
- ENH: All SQLT main methods now ask for the SQLTXPLAIN password in-line instead of using an ACCEPT command.
What is new is that 3 satellite tools also got a new version. They can be downloaded from their own MOS note, and they are also packaged within SQLT under directory sqlt/utl.
SQL Tuning Workshop I (updated schedule of sessions)
I opened 3 sessions of the internal SQL Tuning Workshop I.
- June 11-14, Orlando, FL
- July 9-13, Toronto, Canada
- July 30-August 3, Colorado Springs, CO
This intermediate to advanced class includes these topics:
- Explaining the Explain Plan
- Statistics Foundations
- CBO Fundamentals I
- Statistics Best Practices
- CBO Fundamentals II
- Extended Statistics
- Dynamic Sampling
- Event 10053 Tracing the CBO
- Event 10046 Tracing the Execution
- SQLT Foundations
- SQLT Intermediate
- Real-life Case Studies
I still have some seats available for Orlando and Colorado Springs. For registration please follow instructions on internal link.
On Plan Stability
I was recently working a case of plan stability on 10g. I looked at the SQL text and noticed it had no binds. So it was easy to rule out the typical combination of predicates with binds on columns with histograms, which is known to cause plan instability on 10g (and 9i). 11g does a better job in this area with the implementation of Adaptive Cursor Sharing (ACS), but that is a different story for some other entry.
I looked for histograms in the tables accessed by this SQL and there were no histograms at all. So the combination of predicates with literals on columns with histograms could also be ruled out.
Anyways, this is 10g with no binds and no histograms. Indexes were not my suspect since the customer insisted the environment was not changing. All they reported was this one report with very inconsistent execution times over the past few days.
If you suspect statistics you are on the right track, but how do you prove or rule that out? Well, look at the plan cost, for the current and historical plans. Yes, you need to check in AWR whether there are any prior plans. Sure enough, the history of 3 plans showed significant cost variances. OK, same SQL, no binds, no histograms, different cost over time; thus it seems to be the statistics. Now the question changes to “what piece of the statistics is making my plan change?”
If this had been 11g, you could use DBMS_STATS APIs to compare statistics as explained by Maria in a recent blog entry. But this is 10g… On top of that, this customer is gathering statistics daily, so there were several “versions” of the statistics… Many candidates to review…
Anyways, I reviewed the current and prior plans, observing where in the plan the cost was so different, and it was clear in a couple of operations referencing the same table. Looking at the predicates of the current plan and the predicates in the SQL text, I noticed one that looked like this: “column = 'N'”. Column statistics showed that the column had only one distinct value and it was “Y”. This was when the plan was “bad”, meaning poorly performing.
Now, looking at the column history it was evident that different versions of the column statistics showed the number of distinct values fluctuating between 1 and 2. When it was 2, the low and high values were “N” and “Y”. That was the root cause of the plan changes.
When the CBO evaluated the predicate “column = 'N'” and the NDV was 1, the selectivity for this predicate was 0. When the NDV was 2, the selectivity was 0.5. This being a large table with poorly selective predicates, this one simple equality predicate was causing the overall single-table selectivity to change drastically.
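Remember that with no histogram the CBO derives the selectivity of an equality predicate mainly from the number of distinct values (roughly 1/NDV, adjusted when the literal falls outside the low/high values), so a quick look at the column statistics tells most of the story. A minimal query, using the column from this case and a placeholder owner:

-- Column statistics driving the "column = 'N'" selectivity (placeholder owner).
SELECT num_distinct, density, num_nulls, low_value, high_value, last_analyzed
  FROM dba_tab_col_statistics
 WHERE owner       = 'APP_OWNER'
   AND table_name  = 'XL_PROFSVC_CLAIM_HEADER'
   AND column_name = 'ETL_PRCS_IN_CD';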
The customer gathers statistics with a sample size of 15%, and keep in mind they gather with no histograms. So I cannot tell for sure if value “N” simply has very few rows, or if at the time they gather statistics there are no rows with value “N” and those rows are set sometime later, between the gathering of the statistics and the execution of this report.
What I did to prove or disprove what I was concluding was to implement a test case using SQLTXPLAIN. Then I executed the code below, and sure enough I was able to reproduce the “good” plan.
DECLARE
  distcnt  NUMBER;
  density  NUMBER;
  nullcnt  NUMBER;
  avgclen  NUMBER;
  srec     DBMS_STATS.STATREC;
  charvals DBMS_STATS.CHARARRAY;
BEGIN
  -- Read the current column statistics from the test case table.
  DBMS_STATS.GET_COLUMN_STATS (
    ownname => 'TC23285',
    tabname => 'XL_PROFSVC_CLAIM_HEADER',
    colname => 'ETL_PRCS_IN_CD',
    distcnt => distcnt,
    density => density,
    nullcnt => nullcnt,
    srec    => srec,
    avgclen => avgclen );
  -- Simulate the statistics version where the column shows two distinct values.
  distcnt := 2;
  density := 1 / distcnt;
  srec.epc := 2;
  charvals := DBMS_STATS.CHARARRAY();
  charvals.EXTEND(2);
  charvals(1) := 'N';
  charvals(2) := 'Y';
  srec.bkvals := NULL;
  DBMS_STATS.PREPARE_COLUMN_VALUES (
    srec     => srec,
    charvals => charvals );
  -- Write the modified statistics back.
  DBMS_STATS.SET_COLUMN_STATS (
    ownname       => 'TC23285',
    tabname       => 'XL_PROFSVC_CLAIM_HEADER',
    colname       => 'ETL_PRCS_IN_CD',
    distcnt       => distcnt,
    density       => density,
    nullcnt       => nullcnt,
    srec          => srec,
    avgclen       => avgclen,
    no_invalidate => FALSE,
    force         => TRUE );
END;
/
SQLTXPLAIN (SQLT) 11.4.4.5 is now available
SQLT XTRSBY new method
If you have a poorly-performing SQL statement on a read-only standby Data Guard database, now you can analyze it with SQLT. You just need to install SQLT on the Primary, create a database link into the read-only standby, and use the new method SQLT XTRSBY, passing the SQL_ID and the db link name. Read sqlt/sqlt_instructions.html for more details.
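A minimal sketch of that setup, assuming the XTRSBY driver script follows the same naming pattern as the other methods under sqlt/run (sqltxtrsby.sql) and using made-up names for the link, credentials and SQL_ID; sqlt_instructions.html has the authoritative syntax:

-- On the Primary, create a db link that points to the read-only standby
-- (link name, credentials and TNS alias are made up).
CREATE DATABASE LINK sqlt_stby
  CONNECT TO app_user IDENTIFIED BY app_pwd
  USING 'STBY_TNS_ALIAS';

-- Then run the XTRSBY method passing the SQL_ID and the db link name
-- (script name assumed by analogy with sqlt/run/sqltxtract.sql).
START sqlt/run/sqltxtrsby.sql 0w2qpuc6u2zsp sqlt_stby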
In addition, there are two stand-alone scripts (with no SQLT dependencies) that you can use directly on any read-only database. They are sqlt/utl/roxtract.sql and sqlt/utl/roxecute.sql. They mimic SQLT XTRACT and XECUTE but on a smaller scale. Notice that you can also use these two stand-alone scripts safely on any production environment, since they do not execute any DDL and they perform queries only (no inserts, updates or deletes). What do you get from them? You get the basics: execution plans; peeked binds; CBO statistics for Tables, Table Columns, Indexes and Index Columns; SQL Monitor Active report; 10053 trace; plus a 10046 trace and TKPROF from roxecute.sql.
If using SQLT XTRSBY on a Data Guard Primary database, you may also want to use roxtract.sql on the read-only standby database. Combined, you will get a more complete picture of your SQL. XTRSBY focuses on GV$* from the standby plus DBA* from the Primary, while roxtract.sql only looks at the standby.
As always, download latest version of SQLT from MOS 215187.1.
SQLT 11.4.4.5 changes:
Fixes:
- Removed annoying message on the main report: “sqlt$a: *** t:plan_operation%: ORA-01403: no data found”.
- Test Case (TC) script q.sql now takes values of NULL on binds instead of continuing to search for not NULL values. It also handles the string “NULL” as NULL on dba_hist_sqlbind.
Enhancements:
- SQLT is now Data Guard aware. There is a new method SQLT XTRSBY which is executed on the Primary and extracts from the standby almost everything XTRACT does. Please refer to the instructions before using this new method.
- In addition to new SQLT XTRSBY which runs in Data Guard Primary, there are two new scripts sqlt/utl/roxtract.sql and sqlt/utl/roxecute.sql which are super lite versions of SQLT XTRACT and XECUTE. The new read-only (RO) scripts can be used on any read-only database since they do not install anything and do not update anything. They provide a quick and safe view of the environment around one SQL statement.
- New instrumentation on SQLT XTRACT, XECUTE, XTRXEC, XTRSBY and XPLAIN. All these methods now produce a TKPROF for the actual execution of the tool. The new TKPROF is located in the log zip file. It helps to diagnose cases where SQLT takes longer than expected.
- Skip MDSYS objects from metadata script. Since MDSYS is part of the data dictionary, it makes sense to skip it from Metadata as we do with similar schemas.
- Column remap on SQLT TC now considers map by QUALIFIED_COL_NAME when name is other than SYS%. This helps to match more columns when we are restoring CBO schema statistics while remapping to new TC user.
- Metadata now includes the PL/SQL library that executed the SQL being analyzed. This is useful to actually see the piece of program that invoked the SQL being analyzed.
Free Webinar: Using SQLTXPLAIN to diagnose SQL statements performing poorly
SQLTXPLAIN (SQLT) is a free tool to diagnose SQL statements performing poorly. If you have used it, you know its installation and use is simple (understanding the output may still be overwhelming). But if you have never used it, you may feel intimidated by this tool. I will be doing a 1-hour Webinar on how to install SQLT and how to use its main methods: XTRACT, XECUTE, XTRXEC, XTRSBY and XPLAIN. I will allocate 45 minutes for presentation and demo, followed by 15 minutes for a Q&A session. This Webinar is free but its capacity is limited, so if you want to attend please register early. It will be on May 15 at 11am (UTC-05:00) Eastern Time (US & Canada).
Custom SQL Profile and Plan Stability on 10g
SQLTXPLAIN includes in its sqlt/utl sub-directory a script named coe_xfr_sql_profile.sql. This script inputs a SQL_ID, then asks you to choose from a list of known Plan Hash Values (PHV), and after a few seconds it outputs another script that contains the Outline of the chosen PHV together with the SQL Text for the given SQL_ID.
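A sketch of a typical run, with a made-up SQL_ID and PHV (the script prompts for them if they are not passed in-line), followed by the execution of the script it generates:

SQL> START coe_xfr_sql_profile.sql 0w2qpuc6u2zsp 1234567890
SQL> -- the generated script is named after the SQL_ID and PHV, something like:
SQL> START coe_xfr_sql_profile_0w2qpuc6u2zsp_1234567890.sql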
These are some of the things we can do with the script generated by coe_xfr_sql_profile.sql for your SQL_ID/PHV:
- If we execute the generated script with no modifications in the same system, then it creates a Custom SQL Profile with the Outline of the plan from the given SQL_ID/PHV. This Plan is independent of CBO Statistics, meaning your Plan will be stable even if you gather new statistics (or delete them).
- If we execute this generated script in a similar system (one with the same schema objects), then it creates the Custom SQL Profile from the source system in the target system. That means we basically migrated the PHV into another system. Think of this: “DEV instance exhibits a good-performing Plan. PROD only generates a poorly-performing Plan. You want to quickly migrate the good Plan from DEV into PROD”.
- Suppose you can only generate a good-performing Plan if you place a CBO Hint in your SQL, but you cannot modify the application. So, you execute a modified version of your SQL which includes the CBO Hint, then execute coe_xfr_sql_profile.sql for this modified SQL. When you open the generated script, you remove the CBO Hint from the SQL Text section. This newly generated and modified script then contains the original SQL without the Hint and the Outline of the Plan with the Hint. When executed either on the source system or a target system, it creates a Custom SQL Profile on it. This Profile associates the original SQL with the good-performing Plan.
- Your original SQL contains a CBO Hint or set of Hints and it produces a poorly-performing Plan. When you remove them you notice you get a good-performing Plan, so you execute coe_xfr_sql_profile.sql for the modified SQL (the one with the Hints removed). Then you open the generated script and replace the SQL Text in it with the original one (with the Hints in it). This generated and modified script, when executed on the source or a target system, will create a Custom SQL Profile for the original SQL with the good-performing Plan.
If, after you create a Custom SQL Profile with coe_xfr_sql_profile.sql, you change your mind and want to drop that Profile, you will find the drop command in the same generated script that created your Profile.
This coe_xfr_sql_profile.sql script uses API DBMS_SQLTUNE.IMPORT_SQL_PROFILE, which is not documented.
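For reference, the generated script essentially wraps a call along these lines. This is only a bare-bones sketch with a made-up SQL text, Hint and Profile name; the real script passes the complete Outline captured from the chosen PHV:

DECLARE
  l_sql_text CLOB := 'SELECT /* made-up example */ * FROM t WHERE c1 = :b1';
BEGIN
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
    sql_text    => l_sql_text,
    profile     => SQLPROF_ATTR(
                     'BEGIN_OUTLINE_DATA',
                     'INDEX(@"SEL$1" "T"@"SEL$1" ("T"."C1"))',  -- made-up Hint
                     'END_OUTLINE_DATA'),
    name        => 'made_up_profile_name',
    description => 'Custom SQL Profile created manually',
    category    => 'DEFAULT',
    replace     => TRUE,
    force_match => FALSE );  -- TRUE would match SQL differing only in literals
END;
/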
A word of caution:
SQL Profiles are basically a set of CBO Hints associated with a SQL Text through a “signature”. This signature is a hash function of the SQL Text. If a Hint from the Profile cannot be applied, it does not invalidate the entire Profile. In other words, the CBO will still try to apply as many Hints as possible. Imagine that one of the Hints refers to an Index and that Index is dropped; the Hint will then be silently ignored. This may produce sub-optimal Plans.
Another consideration is the effect of a Virtual Private Database (VPD), which appends some dynamic predicates to your SQL. In this case, the Plan produced with the Profile may not be optimal.
If your database is 11g or higher, you would rather use SQL Plan Management (SPM), which provides better Plan control functionality.
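For example, on 11g a known good Plan already in the cursor cache can be captured as a SQL Plan Baseline. A minimal sketch with a made-up SQL_ID:

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Load all plans currently cached for this SQL_ID as enabled baselines.
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
               sql_id          => '0w2qpuc6u2zsp',  -- made-up SQL_ID
               plan_hash_value => NULL,             -- NULL loads all cached plans
               fixed           => 'NO',
               enabled         => 'YES' );
  DBMS_OUTPUT.PUT_LINE(l_plans || ' plan(s) loaded');
END;
/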
What is SQLTXPLAIN?
OK, I will start with the basics…
SQLTXPLAIN, also known as SQLT for short, is a tool I developed by myself and for myself. At least that was the idea when its name was coe_xplain.sql. That was long ago… like in the past millennium or so.
Today, SQLT is a tool that Oracle uses to diagnose SQL statements performing poorly. So, it went from a single-user tool to one that is used by hundreds of engineers at Oracle. Thanks to the valuable contribution of ideas from some of the brightest minds at Oracle Support in the area of query tuning, SQLT was reborn between the years 2008 and 2012. In particular I thank Abel Macias, Mauro Pagano and Mark Jefferys for their constant flow of ideas and for reporting my own flaws back to me. I also thank the more than 650 participants of my two SQL Tuning Workshops, since it is in these sessions that I have learned about most of the enhancements I have implemented over the past 3 years or so.
Anyways, back to the question “What is SQLTXPLAIN?”. This is a tool developed in PL/SQL that inputs one SQL statement and outputs a zip file that contains many pieces. Some are for diagnostics, and some are for further use, since a Test Case (TC) is created. So, when someone at Oracle asks for a SQLTXPLAIN, the request is basically to install this tool and execute one of its main methods so the requestor can analyze your issue.
SQLT does not come pre-installed, nor does it come with the product. To get it to work, you need to download it first from My Oracle Support (MOS). You can find it under Doc ID 215187.1. After you download it, please unzip it and read the included file sqlt_instructions.html. These instructions basically ask you to copy the sqlt.zip file to the database server, unzip it there again, and execute sqlt/install/sqcreate.sql connected as SYS. Provide values for the in-line parameters, and SQLT should be installed in a few minutes. If you notice that the sqlplus session exits, then something went wrong. Read the *.log files and look for errors. Fix them or at least try. If you get stuck, contact the tool author (me).
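In SQL*Plus terms, the install boils down to something like this, run on the database server from the directory where sqlt.zip was unzipped (the exact prompts vary by version; sqlt_instructions.html is the authoritative reference):

$ unzip sqlt.zip
$ sqlplus / as sysdba
SQL> START sqlt/install/sqcreate.sql
SQL> -- answer the in-line parameters it prompts for, then check the *.log files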
Once SQLT is installed, you are ready to use it. Start with method XTRACT. Refer to the instructions provided with the tool. Basically, connect in sqlplus as the application user and execute sqlt/run/sqltxtract.sql passing the SQL_ID of the SQL that you want to analyze. SQLT XTRACT will generate a zip file. Open it up and look for file sqlt_sNNNNN_main.html. Try to make sense out of it. Start with the links on SQL Text, Plans Summary and Observations. Then take it from there.
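A typical XTRACT run then looks something like this (credentials and SQL_ID are placeholders):

$ sqlplus app_user/app_pwd
SQL> START sqlt/run/sqltxtract.sql 0w2qpuc6u2zsp
SQL> -- provide the SQLTXPLAIN password when prompted, then unzip the
SQL> -- generated file and open sqlt_sNNNNN_main.html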
What you will see in the main html report is basically the entire environment around your SQL. I mean in terms of performance. Everything that may affect your SQL performance: from CBO statistics, to parameters, to bind variables, etc. You will find details about cursor sharing, SQL Plan Management (SPM), SQL Profiles, Stored Outlines, etc. Tons of details. Not everything is used every time. What is important is that everything is there in case you need it. Even more, it is there in case someone with more expertise in SQL tuning may need it later on.
SQLT works on RAC. It also works in distributed environments. It will soon work on Data Guard (read-only standby) databases. If you want to see a sample of its output, download it from the same MOS note 215187.1. Feedback on this tool is always welcome.
SQLT 11.4.4.4 is now available
SQLTXPLAIN (SQLT) 11.4.4.4 was made available on April 2, 2012. Find this tool under MOS 215187.1. It contains 2 fixes and 37 enhancements:
Fixes:
- Peeked and Captured Binds in Execution Plan of MAIN was showing :B1 where predicate was “COL=:B10” (false positives). Fix display of bind peeking and bind capture when SQL contains binds like :b1 and :b10.
- Metadata script now includes the creation of Statistics Extensions.
Enhancements:
- New HC when derived stats are detected on a Table or Index (GLOBAL_STATS = ‘NO’ and PARTITIONED = ‘YES’).
- New HC when SQL Plan Baseline contains non-reproducible Plans.
- New HC indicating sizes of SQL Plan History and SQL Plan Baseline (enabled and accepted Plans).
- New HC when there is an enabled SQL Profile and there are VPD policies affecting your SQL. Plan may be unstable.
- New HC when there is more than 1 CBO environment in memory or AWR for given SQL.
- New HC when Indexes or their Partitions/Subpartitions have UNUSABLE segments.
- New HC when Indexes are INVISIBLE.
- New HC when an Index is referenced in a Plan and the index or its partitions/subpartitions are now UNUSABLE.
- New HC when an Index is referenced in a Plan and the index is now INVISIBLE.
- New HC when Table has locked statistics.
- New HC when INTERNAL_FUNCTION is present in a Filter Predicate since it may denote an undesired implicit data_type conversion.
- New HC when Plan Operations have a Cost of 0 and Cardinality of 1. Possible incorrect Selectivity.
- New HC when SREADTIM differs from the actual db file sequential read time by more than 10%.
- New HC when MREADTIM differs from the actual db file scattered read time by more than 10%.
- New HC when BLEVEL has changed for an Index, an Index Partition or an Index Subpartition according to statistics versions.
- New HC when NUM_ROWS has changed more than 10% for a Table, a Table Partition or a Table Subpartition according to statistics versions.
- New HC when Index is redundant because its leading columns are a subset of the leading columns of another Index on same Table.
- New HC when leaf blocks on a normal non-partitioned index are greater than estimated leaf blocks with a 70% efficiency.
- Active Session History sections on MAIN report include now up to 20 sessions and 20 snapshots (it was 10 and 10).
- Parameter _optimizer_fkr_index_cost_bias has been added to SQLT XPLORE.
- SQLTPROFILE and script coe_xfr_sql_profile.sql now take SQL statements with SQL Text larger than 32767 characters.
- Add metrics similar to what we have now on summary tables/indexes on SQLHC.
- Tables and Indexes sections on MAIN contain now links showing object counts instead of a constant. Similar to SQLHC.
- Execution Plans on SQLT now show, on mouse-over, schema object statistics both current and as of the plan timestamp.
- Add new columns on 11.2.0.3 for all V$, GV$ and DBA views accessed by SQLT.
- Include the reason WHY a cursor is not shared (from the XML “reason” column of gv$sql_shared_cursor).
- MAIN report heading now includes a link to the MOS SQLT document.
