SQLTXPLAIN (SQLT) 11.4.4.6 is now available
So what is new in SQLT 11.4.4.6? Honestly, not much: one fix and 6 enhancements:
- BUG: SQLT TC “restore” was allowing a column to be renamed from “a” to “b” and then from “b” to “c”, ending up with duplicate rows in the histogram, among other issues.
- ENH: New tool parameter to control the automatic generation of a script with a Custom SQL Profile for the best-performing plan. Custom SQL Profiles based on DBMS_SQLTUNE.IMPORT_SQL_PROFILE are now inactive by default (see the sketch after this list).
- ENH: New HC detecting corrupt histograms as per bugs 8543770, 10267075, 12819221 and 12876988.
- ENH: New HC reporting Tables where there are columns referenced in predicates and for which the number of distinct values has changed more than 10% from one statistics version to the next. These abrupt changes can cause an execution plan to also change.
- ENH: New HC about CBO not using partition level histograms if stats were gathered with “INCREMENTAL” and fix for bug 13583722 has been applied.
- ENH: Scripts roxtract.sql and roxecute.sql now include System Parameters with non-default or modified values. They also report on all initialization parameters for the connecting instance.
- ENH: All SQLT main methods now prompt for the SQLTXPLAIN password in-line instead of using an ACCEPT command.
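For reference, this is roughly what a Custom SQL Profile created through DBMS_SQLTUNE.IMPORT_SQL_PROFILE looks like. This is a minimal hand-written sketch, not the script SQLT generates; the SQL text, outline hints and profile name are made up for illustration:

DECLARE
  -- Outline hints that pin the desired plan (hypothetical values)
  h SYS.SQLPROF_ATTR := SYS.SQLPROF_ATTR(
    'BEGIN_OUTLINE_DATA',
    'FULL(@"SEL$1" "T"@"SEL$1")',
    'END_OUTLINE_DATA');
BEGIN
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
    sql_text    => 'SELECT /* example */ COUNT(*) FROM t WHERE col = ''N''',
    profile     => h,
    name        => 'example_custom_profile',
    description => 'Custom SQL Profile pinning the best-performing plan',
    replace     => TRUE,
    force_match => TRUE );
END;
/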
What is also new is that 3 satellite tools got new versions. Each can be downloaded from its own MOS note, and they are also packaged within SQLT under directory sqlt/utl.
All sessions of “SQL Tuning Workshop I” are now full
Regarding the 3 new sessions of the internal “SQL Tuning Workshop I” (Orlando, Toronto and Colorado Springs): all 3 sessions are now full.
SQL Tuning Workshop I (updated schedule of sessions)
I opened 3 sessions of the internal SQL Tuning Workshop I.
- June 11-14, Orlando, FL
- July 9-13, Toronto, Canada
- July 30-August 3, Colorado Springs, CO
This intermediate to advanced class includes these topics:
- Explaining the Explain Plan
- Statistics Foundations
- CBO Fundamentals I
- Statistics Best Practices
- CBO Fundamentals II
- Extended Statistics
- Dynamic Sampling
- Event 10053 Tracing the CBO
- Event 10046 Tracing the Execution
- SQLT Foundations
- SQLT Intermediate
- Real-life Case Studies
I still have some seats available for Orlando and Colorado Springs. For registration, please follow the instructions at the internal link.
On Plan Stability
I was recently working on a plan stability case on 10g. I looked at the SQL text and noticed it had no binds, so it was easy to rule out the typical combination of predicates with binds on columns with histograms, which is known to cause plan instability on 10g (and 9i). 11g does a better job in this area with the implementation of Adaptive Cursor Sharing (ACS), but that is a different story for some other entry.
I looked for histograms in the tables accessed by this SQL and there were none at all, so the combination of predicates with literals on columns with histograms could also be ruled out.
Anyways, this is 10g with no binds and no histograms. Indexes were not my suspect either, since the customer insisted the environment was not changing. All they reported was this one report with very inconsistent execution times over the past few days.
If you suspect statistics you are on the right track, but how do you prove or rule that out? Well, look at the plan cost for the current and historical plans. Yes, you need to check AWR for any prior plans. Sure enough, the history of 3 plans showed significant cost variances. OK: same SQL, no binds, no histograms, different cost over time, so it seems to be the statistics. Now the question becomes “what piece of the statistics is making my plan change?”
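If you want to try the same approach, the plan history captured in AWR can be reviewed with something like this sketch (it assumes you are licensed to query AWR, and you substitute your own SQL_ID):

-- Distinct plans and their total costs captured in AWR for one SQL_ID
SELECT DISTINCT plan_hash_value, cost
  FROM dba_hist_sql_plan
 WHERE sql_id = '&sql_id'
   AND id = 0;

-- Or display each captured plan in full
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));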
If this had been 11g, you could use the DBMS_STATS APIs to compare statistics versions, as explained by Maria in a recent blog entry. But this is 10g… On top of that, this customer gathers statistics daily, so there were several “versions” of the statistics… Many candidates to review…
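For the record, on 11g the comparison boils down to something like this (a sketch with a hypothetical owner, table and time window):

-- Compare statistics versions from the last 2 days,
-- flagging any difference above 10%
SELECT report, maxdiffpct
  FROM TABLE(DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY (
         ownname      => 'SCOTT',
         tabname      => 'EMP',
         time1        => SYSTIMESTAMP,
         time2        => SYSTIMESTAMP - INTERVAL '2' DAY,
         pctthreshold => 10 ));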
Anyways, I reviewed the current and prior plans, observing where in the plan the cost was so different, and it was clear in a couple of operations referencing the same table. Looking at the predicates of the current plan and the predicates in the SQL text, I noticed one that looked like “column = 'N'”. Column statistics showed that the column had only one distinct value, and it was “Y”. This was when the plan was “bad”, meaning poorly performing.
Now, looking at the column history, it was evident that different versions of the column statistics showed the number of distinct values (NDV) fluctuating between 1 and 2. When it was 2, the low and high values were “N” and “Y”. That was the root cause of the plan changes.
When the CBO evaluated predicate “column = 'N'” and the NDV was 1, the selectivity for this predicate was 0. When the NDV was 2, the selectivity was 0.5. This being a large table with poorly selective predicates, this one simple equality predicate was causing the overall single-table selectivity to change drastically.
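To put numbers on it, suppose the table has 10,000,000 rows (a made-up figure for illustration). With no histogram, the CBO estimates the selectivity of an equality predicate as 1/NDV:

NDV = 2: selectivity = 1/2 = 0.5 -> estimated cardinality ~ 5,000,000 rows
NDV = 1 (only known value 'Y'): predicate value 'N' falls outside the column's known value range -> selectivity ~ 0 -> estimated cardinality ~ 1 row

An estimate swinging between 1 row and 5,000,000 rows is more than enough to flip the plan.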
The customer gathers statistics with a sample size of 15%, and keep in mind they gather with no histograms. So I cannot tell for sure if value “N” simply has very few rows, or if at the time statistics are gathered there are no rows with value “N” at all, and those values are set sometime later, between the gathering of the statistics and the execution of this report.
To prove or disprove my conclusion, I implemented a test case using SQLTXPLAIN and then executed the code below. Sure enough, I was able to reproduce the “good” plan.
DECLARE
  distcnt  NUMBER;
  density  NUMBER;
  nullcnt  NUMBER;
  avgclen  NUMBER;
  srec     DBMS_STATS.STATREC;
  charvals DBMS_STATS.CHARARRAY;
BEGIN
  -- Read the current column statistics from the test case schema
  DBMS_STATS.GET_COLUMN_STATS (
    ownname => 'TC23285',
    tabname => 'XL_PROFSVC_CLAIM_HEADER',
    colname => 'ETL_PRCS_IN_CD',
    distcnt => distcnt,
    density => density,
    nullcnt => nullcnt,
    srec    => srec,
    avgclen => avgclen );
  -- Force NDV to 2, with low/high values of 'N' and 'Y'
  distcnt := 2;
  density := 1 / distcnt;
  srec.epc := 2;
  charvals := DBMS_STATS.CHARARRAY();
  charvals.EXTEND(2);
  charvals(1) := 'N';
  charvals(2) := 'Y';
  srec.bkvals := NULL;
  DBMS_STATS.PREPARE_COLUMN_VALUES (
    srec     => srec,
    charvals => charvals );
  -- Write the modified statistics back
  DBMS_STATS.SET_COLUMN_STATS (
    ownname       => 'TC23285',
    tabname       => 'XL_PROFSVC_CLAIM_HEADER',
    colname       => 'ETL_PRCS_IN_CD',
    distcnt       => distcnt,
    density       => density,
    nullcnt       => nullcnt,
    srec          => srec,
    avgclen       => avgclen,
    no_invalidate => FALSE,
    force         => TRUE );
END;
/
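After setting the statistics, a quick sanity check like this one confirms the new values took effect (same test-case owner as above):

SELECT num_distinct, density, low_value, high_value
  FROM dba_tab_col_statistics
 WHERE owner       = 'TC23285'
   AND table_name  = 'XL_PROFSVC_CLAIM_HEADER'
   AND column_name = 'ETL_PRCS_IN_CD';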
New profiler.sql script to nicely report top time consumers out of PL/SQL Profiler (DBMS_PROFILER) data
The PL/SQL Profiler has been around for more than a decade. With DBMS_PROFILER we can “profile” the execution of PL/SQL code. This is great when the poor performance happens inside the PL/SQL code itself and is not necessarily due to the SQL statements invoked from those PL/SQL libraries. In such cases Trace/TKPROF are not the right tools for the job, since they do not include PL/SQL commands.
Once we use the PL/SQL Profiler in our application, we get as output some data in 3 tables: PLSQL_PROFILER_RUNS, PLSQL_PROFILER_UNITS and PLSQL_PROFILER_DATA. This “profile” data is populated by the PL/SQL engine at the end of the “profile” run, or as we flush the “profile” data from memory.
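If you have not used it before, instrumenting a run is straightforward. A minimal sketch (the procedure being profiled is hypothetical):

BEGIN
  DBMS_PROFILER.START_PROFILER('my slow report');  -- begin collecting
  my_pkg.my_slow_procedure;                        -- code to profile (hypothetical)
  DBMS_PROFILER.STOP_PROFILER;                     -- flush data and stop collecting
END;
/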
If you are using the PL/SQL Profiler and you want to produce a nice and concise HTML report showing the top 5 code lines in terms of execution time, then you may want to use the new version of the profiler.sql script out of MOS Doc ID 243755.1. This document has been around for a decade or so, but I just rewrote the profiler.sql script. This new and much-improved version is available as of today.
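If you just want a quick look before running the script, the core of such a report boils down to a query along these lines. This is a simplified sketch, not the actual profiler.sql; :runid is the profiler run you want to inspect, and it assumes the default nanosecond units for total_time:

SELECT *
  FROM (SELECT u.unit_owner, u.unit_name, d.line#,
               d.total_occur,
               ROUND(d.total_time / 1e9, 3) AS seconds,
               s.text
          FROM plsql_profiler_data d,
               plsql_profiler_units u,
               all_source s
         WHERE d.runid = :runid
           AND u.runid = d.runid
           AND u.unit_number = d.unit_number
           AND s.owner = u.unit_owner
           AND s.name = u.unit_name
           AND s.type = u.unit_type
           AND s.line = d.line#
         ORDER BY d.total_time DESC)
 WHERE ROWNUM <= 5;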
Also, within the same document (note), you can find a sample output. It should give you an idea of how the report would look when used with your own application libraries.
SQL Health-Check (SQLHC) 3-minutes video: “…a small tool for big problems!”
A new short video about the SQL Health-Check (SQLHC) script is now available to the entire Oracle community. My special thanks to Robert Story and Steve Dixon for making this possible. See video at MOS Doc ID 1455583.1.
SQLTXPLAIN (SQLT) 11.4.4.5 is now available
SQLT XTRSBY new method
If you have a poorly-performing SQL statement on a read-only standby Data Guard database, you can now analyze it with SQLT. You just need to install SQLT on the Primary, create a database link into the read-only standby and use the new method SQLT XTRSBY, passing the SQL_ID and the database link name. Read sqlt/sqlt_instructions.html for more details.
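For illustration, the setup looks roughly like this. This is a sketch only: the link name, credentials and TNS alias are made up, and you should check sqlt/sqlt_instructions.html for the exact script name and parameter order:

-- On the Primary: create a link pointing to the read-only standby
-- (link name, user and TNS alias are hypothetical)
CREATE DATABASE LINK stby
  CONNECT TO sqltxplain IDENTIFIED BY &sqltxplain_password
  USING 'standby_tns_alias';

-- Then run the new method from the Primary, passing SQL_ID and link name
START sqlt/run/sqltxtrsby.sql &sql_id stby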
In addition, there are two stand-alone scripts (with no SQLT dependencies) that you can use directly on any read-only database. They are sqlt/utl/roxtract.sql and sqlt/utl/roxecute.sql. They mimic SQLT XTRACT and XECUTE but on a smaller scale. Notice that you can also use these two stand-alone scripts safely on any production environment, since they do not execute any DDL and they perform queries only (no inserts, updates or deletes). What do you get from them? The basics: execution plans; peeked binds; CBO statistics for tables, table columns, indexes and index columns; SQL Monitor Active report; 10053 trace; and, with roxecute.sql, 10046 trace and TKPROF.
If using SQLT XTRSBY on a Data Guard Primary database, you may also want to use roxtract.sql on the read-only standby database. Combined, they will tell you more about your SQL: XTRSBY focuses on GV$* from the standby plus DBA* from the Primary, while roxtract.sql looks only at the standby.
As always, download the latest version of SQLT from MOS 215187.1.
SQLT 11.4.4.5 changes:
Fixes:
- Removed annoying message on main “sqlt$a: *** t:plan_operation%: ORA-01403: no data found”.
- Test Case (TC) script q.sql now takes values of NULL on binds instead of continuing to search for non-NULL values. It also handles the string “NULL” as NULL in dba_hist_sqlbind.
Enhancements:
- SQLT is now Data Guard aware. There is a new method SQLT XTRSBY, which is executed on the Primary and extracts from the standby almost everything XTRACT does. Please refer to the instructions before using this new method.
- In addition to the new SQLT XTRSBY, which runs on a Data Guard Primary, there are two new scripts, sqlt/utl/roxtract.sql and sqlt/utl/roxecute.sql, which are super-lite versions of SQLT XTRACT and XECUTE. These new read-only (RO) scripts can be used on any read-only database, since they install nothing and update nothing. They provide a quick and safe view of the environment around one SQL statement.
- New instrumentation on SQLT XTRACT, XECUTE, XTRXEC, XTRSBY and XPLAIN. All these methods now produce a TKPROF for the actual execution of the tool. The new TKPROF is located in the log zip file. It helps to diagnose cases where SQLT itself takes longer than expected.
- Skip MDSYS objects in the metadata script. Since MDSYS is part of the data dictionary, it makes sense to skip it from the Metadata, as we do with similar schemas.
- Column remap on SQLT TC now considers mapping by QUALIFIED_COL_NAME when the name is other than SYS%. This helps to match more columns when restoring CBO schema statistics while remapping to the new TC user.
- Metadata now includes the PL/SQL library that executed the SQL being analyzed. This is useful to actually see the piece of program that invoked the SQL.
Free Webinar: Using SQLTXPLAIN to diagnose SQL statements performing poorly
SQLTXPLAIN (SQLT) is a free tool to diagnose SQL statements performing poorly. If you have used it, you know its installation and use are simple (understanding the output may still be overwhelming). But if you have never used it, you may feel intimidated by this tool. I will be doing a 1-hour Webinar on how to install SQLT and how to use its main methods: XTRACT, XECUTE, XTRXEC, XTRSBY and XPLAIN. I will allocate 45 minutes for the presentation and demo, followed by 15 minutes for Q&A. This Webinar is free but its capacity is limited, so if you want to attend please register early. It will be on May 15 at 11am (UTC-05:00) Eastern Time (US & Canada).
Cool 3-minute video on SQL Health-Check SQLHC
SQL Health Check…a small tool for big problems
Sorry, this link only works inside Oracle… Anyways, basically it is a cool short video that explains what SQLHC does. If you ever need a super-lite version of SQLT that installs nothing, use SQLHC. Still, you may want to install SQLT as time permits; SQLHC is not a replacement for SQLT.
