Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Should I delete my column Histograms to improve plan stability?

with 5 comments

When we combine the use of bind variables in a SQL statement with column Histograms we usually end up with multiple execution plans over time. This plan instability is due to our lack of control over the selectivity of the predicates at the time the SQL is hard parsed using bind peeking. On 11g this behavior is mitigated with the use of Adaptive Cursor Sharing (ACS), and it can be further improved by using ACS together with SQL Plan Management (SPM).

When we generate multiple execution plans for one SQL statement because of the combination of binds and Histograms, some of these plans may perform better than others over a large number of executions. Thus it is natural to consider some of these execution plans “bad plans”. If the use of ACS and SPM is not an option, then we may need to consider fixing the plan instability issue by suppressing either bind peeking or Histograms. If we are in such a case, it is better to consider the removal of Histograms first and leave bind peeking active. The reason is that bind peeking, even with no Histograms, still uses CBO column statistics, including the column low and high values, when computing selectivity.

Back to the question: Should I delete my column Histograms to improve plan stability?

If you are on 10g, or on 11g where ACS is not an option, then deleting column Histograms may be your best option to improve plan stability. Like any other change, you should test it first in a non-production environment. Even in such a test environment, you may want to restrict your test to only those tables accessed by your SQL. On 11g, DBMS_STATS.DELETE_COLUMN_STATS provides an option to delete just the Histograms while preserving all other column statistics: pass parameter COL_STAT_TYPE with value HISTOGRAM. If you want to do the same on 10g, you may want to use the SQLT HGRM module. In any case you can always restore column statistics using DBMS_STATS.RESTORE_TABLE_STATS.
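
For illustration, below is a minimal sketch of both calls on 11g; table T1, owner SCOTT and column C1 are hypothetical placeholders for your own objects.

BEGIN
  -- delete only the Histogram on SCOTT.T1.C1, preserving all other column statistics
  DBMS_STATS.DELETE_COLUMN_STATS(
    ownname       => 'SCOTT',
    tabname       => 'T1',
    colname       => 'C1',
    col_stat_type => 'HISTOGRAM');
END;
/

BEGIN
  -- if the test goes wrong, restore the statistics as they were (ex: 1 hour ago)
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'SCOTT',
    tabname         => 'T1',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' HOUR);
END;
/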

If you are considering deleting Histograms to test the effect on the execution plan and the performance of your SQL, you may want to first ask the CBO to simply ignore them. If the patch for bug 9550277 has been applied on your system then you can use the command below.

ALTER SESSION SET "_FIX_CONTROL"='9550277:1';
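
Keep in mind this fix control only affects your session. Assuming the fix defaults to disabled (value 0), you can go back to the normal behavior with:

ALTER SESSION SET "_FIX_CONTROL"='9550277:0';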

Written by Carlos Sierra

August 13, 2012 at 8:33 am

About DBA_TAB_HISTOGRAMS.ENDPOINT_VALUE

leave a comment »

If you have looked at the values in DBA_TAB_HISTOGRAMS.ENDPOINT_VALUE you may have seen some like 281422806648623000000000000000000000 when you were actually expecting something like 63736. I have used the two functions below in SQLT for many years. I got the foundations of these two out of a public source which unfortunately I did not record. Since you may or may not have SQLT, I thought it a good idea to share these two functions in case you need them.

CREATE OR REPLACE
FUNCTION get_internal_value (p_value IN VARCHAR2)
RETURN VARCHAR2
IS
  temp_n NUMBER := 0;
BEGIN
  -- encode the first 15 bytes as a base-256 number, padding with CHR(0),
  -- then round so only the leading digits remain significant
  FOR i IN 1..15
  LOOP
    temp_n := temp_n + POWER(256, 15 - i) * ASCII(SUBSTR(RPAD(p_value, 15, CHR(0)), i, 1));
  END LOOP;
  RETURN TO_CHAR(ROUND(temp_n, -21));
EXCEPTION
  WHEN OTHERS THEN
    RETURN p_value;
END get_internal_value;
/

CREATE OR REPLACE
FUNCTION get_external_value (p_value IN VARCHAR2)
RETURN VARCHAR2
IS
  temp_n NUMBER;
  temp_i INTEGER;
  my_result VARCHAR2(32767) := NULL;
BEGIN
  IF LENGTH(p_value) != 36 OR
     SUBSTR(p_value, 16) != '000000000000000000000' OR
     p_value > POWER(256, 15) OR
     p_value < POWER(256, 14) THEN
    RETURN p_value; -- cannot get external value
  END IF;

  temp_n := p_value / POWER(256, 14); -- get most significant digits

  -- decode most significant digits, then shift left by multiplying by 256
  FOR i IN 1..14
  LOOP
    temp_i := TRUNC(temp_n);
    temp_n := (temp_n - temp_i) * 256;
    IF temp_i NOT BETWEEN 32 AND 126 OR temp_n NOT BETWEEN 32 AND 126 THEN
      EXIT; -- reached the tail
    END IF;
    my_result := my_result||CHR(temp_i); -- all but last byte
  END LOOP;

  IF temp_i NOT BETWEEN 32 AND 126 THEN
    RETURN my_result||'?'; -- cannot decode
  END IF;

  -- scan to the right starting at temp_i
  FOR i IN temp_i..126
  LOOP
    IF get_internal_value(my_result||CHR(i)) = p_value THEN
      RETURN my_result||CHR(i); -- approximate value
    END IF;
  END LOOP;

  -- scan to the left starting at temp_i
  FOR i IN 32..temp_i
  LOOP
    IF get_internal_value(my_result||CHR(temp_i + 32 - i)) = p_value THEN
      RETURN my_result||CHR(temp_i + 32 - i); -- approximate value
    END IF;
  END LOOP;

  RETURN my_result||CHR(temp_i); -- this is the best we could do
EXCEPTION
  WHEN OTHERS THEN
    RETURN p_value;
END get_external_value;
/

SELECT get_external_value('281422806648623000000000000000000000')
FROM DUAL;

GET_EXTERNAL_VALUE('281422806648623000000000000000000000')
----------------------------------------------------------
63736

SELECT get_internal_value('63736') FROM DUAL;

GET_INTERNAL_VALUE('63736')
----------------------------------------------------------
281422806648623000000000000000000000
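
With these two functions in place you can decode endpoint values straight out of the dictionary. Below is a small sketch; table T1, owner SCOTT and column C1 are placeholders. Note that get_external_value simply returns its input unchanged when the value is not a 36-digit encoded string, so numeric columns pass through untouched.

SELECT endpoint_number,
       get_external_value(TO_CHAR(endpoint_value)) endpoint_value
  FROM dba_tab_histograms
 WHERE owner = 'SCOTT'
   AND table_name = 'T1'
   AND column_name = 'C1'
 ORDER BY endpoint_number;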

Written by Carlos Sierra

August 9, 2012 at 6:46 am

A healthy approach to optimizer statistics: 11g improvements and common concerns you should know about

with 5 comments

Have you ever wondered what is the “right way” to gather statistics for the Optimizer? Well, actually there is no “right way”. When someone asks me “how should we gather statistics?”, my answer always is: “it all depends”.

Mauro Pagano is presenting at the Central Florida Oracle Users Group (CFOUG) meeting this coming Monday, July 23 at 6:00 PM, in Orlando, Florida. His topic is: “A healthy approach to optimizer statistics: 11g improvements and common concerns you should know about”. This presentation covers interesting features in 11g, but more importantly: what works well and what works almost well when it comes to statistics gathering. Mauro is one of the best engineers supporting query tuning (QTUNE) at Oracle. In his present role, he has seen many issues concerning statistics. Thus Mauro has a pretty good idea of what works best and under which conditions. He will also talk about common misconceptions and mistakes.

If you are in the Central Florida area, please contact the CFOUG to attend this valuable session. The PowerPoint slides will be available through the CFOUG web page after the presentation.

Written by Carlos Sierra

July 20, 2012 at 4:52 am

OOW: Best Practices: SQL Tuning Made Easier with SQLTXPLAIN (SQLT)

leave a comment »

If you are attending Oracle Open World (OOW) this October, and you want to learn first-hand what the SQLTXPLAIN (SQLT) tool is and how it can benefit you, please join me at this session:

Session ID: CON3053
Title: Get Proactive: Best Practices: SQL Tuning Made Easier with SQLTXPLAIN (SQLT)
Date: 10/4/2012
Start Time: 12:45 PM
Room: Moscone West – 3022

Written by Carlos Sierra

July 11, 2012 at 8:10 pm

Posted in SQLTXPLAIN (SQLT)

What is new in SQLTXPLAIN 11.4.4.7, released on July 2?

leave a comment »

I uploaded a new release of SQLTXPLAIN (SQLT) today. The 20 changes for this 11.4.4.7 release are listed below, but I’d rather explain what is important and what is not.

Long ago my friend Lester Gutierrez suggested an interesting addition to SQLT. In those days he was managing an EBS performance development team; now he does the same for Fusion performance. Anyway, he mentioned it would be nice to include with SQLT XTRACT some relevant AWR reports, in particular from the time when the SQL had performed poorly. The idea here is this: since SQLT already presents lots of details about the SQL and its environment, some broader diagnostics from the time the SQL had poor execution times would help to better understand the system in terms of load and concurrency. This new feature is one of the 20 changes below. SQLT now includes up to 6 AWR reports (6 is the default value of a new configuration parameter). These 6 reports correspond to the periods between two consecutive AWR snapshots where the SQL has had the largest delta elapsed time, across all nodes in the case of RAC.

So, why limit this functionality to AWR? In this new release of SQLT, besides the new set of relevant AWR reports, SQLT also produces up to 6 ADDM reports, some ASH reports from memory and from AWR (one set per instance), up to 12 SQL Monitor Active reports, and one SQL Detail report. All driven by the SQL being analyzed. Prior releases included only one SQL Monitor Active report and, in some special cases, one SQL Detail report. So 11.4.4.7 is more aggressive in terms of including reports that are sometimes simply unknown or under-utilized. I am setting the defaults for the parameters controlling the number of these reports to some small values, at least in 11.4.4.7. I will monitor their performance as I get new cases, and I may adjust these default values in subsequent releases. In terms of SQLT performance, the impact of these new features seems marginal.

Another relevant change is the addition of a new health-check recently requested by Abel Macias. It detects bug 14013094, which in short causes Index Partition statistics to go wrong when the order of partition names on an index does not match that of its table at index creation time. I have already seen a couple of cases and I feel more will come. This new health-check is also included in the new SQL Health-Check script (SQLHC). So if the SQL being analyzed may suffer from this bug, SQLT and SQLHC will create an “Observation”; Observations are basically the outcome of the dozens of health-checks performed around the SQL.

That brings another important change. When for some reason or another it is hard to install SQLT on a system, I recommend using SQLHC. The former requires installing a SQLTXPLAIN schema and hundreds of objects in it. The latter requires no installation at all. The gap in the diagnostics generated is brutal: SQLT provides all we need to diagnose a SQL performing poorly, while SQLHC provides only some pieces. But those pieces have been carefully selected. During several interactions with a key developer from the Fusion performance team, we agreed to add some important new sections to SQLHC. So in this release of SQLHC (also 11.4.4.7), the script has several new features. It will never compete with SQLT, but now it provides a good starting point for a SQL tuning case. Think of SQLHC as a lite version of SQLT. By the way, SQLHC has its own note 1366133.1 on My Oracle Support (MOS), but it is also packaged within SQLT under sqlt/utl.

The new Parallel Execution tool (MOS 1460440.1) also received a few changes in this 11.4.4.7 release, mostly minor bug fixes. The same goes for profiler.sql (MOS 243755.1), which produces a report out of PL/SQL Profiler data. Both tools are also packaged inside SQLT under sqlt/utl, which makes them easier to access. If you are not familiar with them, you may want to go to their MOS notes and look at the output samples.

I would say the other changes, not mentioned explicitly, are less relevant. Anyway, always try to use the latest version of SQLT, available from note 215187.1. If you are a seasoned user of SQLT you know it changes often, always with some new features and some fixes. I hope you start using 11.4.4.7 soon!

  1. BUG: Readme report for PeopleSoft was showing “Gathering Statistics” twice.
  2. BUG: Parameter “_db_file_optimizer_read_count” was showing an incorrect value on the Main report.
  3. BUG: Number of executions above Execution Plans was showing zero when statistics_level was not set to ALL at the time the cursor was parsed.
  4. BUG: Avoid the error below when DBMS_METADATA is not granted to PUBLIC: 1452/29 PLS-00201: identifier ‘DBMS_METADATA’ must be declared.
  5. ENH: New HC to detect bug 14013094 on statistics gathering for partitioned tables and indexes, where partition names are created in a different order between table and index.
  6. ENH: New tool parameter tcb_export_data to control the value of parameter exportData on API DBMS_SQLDIAG.EXPORT_SQL_TESTCASE.
  7. ENH: Handle non-stopper error: ORA-06550: line 4, column 18 with “FROM applsys.fnd_product_groups”.
  8. ENH: Scripts roxtract and roxecute are now deprecated. Use sqlhc.sql and sqlhcxec.sql instead.
  9. ENH: Main report, sqlhc.sql and sqlhcxec.sql now include Plan Line and Plan summaries for Active Session History from memory and AWR.
  10. ENH: Scripts sqlhc.sql and sqlhcxec.sql now include ASH summaries, Cursor Sharing, a set of SQL Monitor reports, SQL Detail report, SQL Profiles, SQL Plan Baselines and SQL Patches.
  11. ENH: Readme for PSFT now includes instructions to gather statistics using both PSCBO_STATS and DBMS_STATS.
  12. ENH: SQL Text now shows these keywords in red: SELECT, INSERT, UPDATE, DELETE, MERGE, FROM, WHERE.
  13. ENH: COUNT(*) on tables accessed by the SQL being analyzed now uses a sample size of 1%. This makes the COUNT less accurate but much faster.
  14. ENH: SQLT main methods now produce up to 6 AWR reports for snapshots where the SQL being analyzed consumed considerable elapsed time.
  15. ENH: SQLT main methods now produce up to 12 of the most recent SQL Monitor reports for the SQL being analyzed.
  16. ENH: SQLT main methods now produce ASH reports from memory and from AWR.
  17. ENH: SQLT main methods now produce up to 6 ADDM reports for snapshots where the SQL being analyzed consumed considerable elapsed time.
  18. ENH: Workaround: disconnect ORA-07445 on SYS.DBMS_SQLTUNE_INTERNAL, which happened on some 11.2 systems.
  19. ENH: SQL Test Case (TC) now provides a script tc_pkg.sql to generate a stand-alone TC out of an implemented SQLT TC.
  20. ENH: All main methods now record into sqltxhost.log a set of 5 vmstat and sar samples.

Monitor one SQL using AWR

with 7 comments

Sometimes we want to keep an eye on a SQL statement that may not be part of the Top SQL in AWR reports. Use the API below, passing just the SQL_ID, and from then on this SQL will be captured by AWR as long as it is in memory at the time of the snapshot.

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.ADD_COLORED_SQL('98n7902rytpxn');

This API is available on 11.1 and higher.
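
When you no longer need this SQL captured by every snapshot, you can un-color it with the complementary API:

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.REMOVE_COLORED_SQL('98n7902rytpxn');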

Written by Carlos Sierra

June 29, 2012 at 5:01 pm

Posted in SQL Monitor

SQLTXPLAIN presentation in Orlando on June 25, 2012

with 2 comments

There will be a 1-hr presentation on SQLTXPLAIN (SQLT) hosted by the Central Florida Oracle Users Group (CFOUG). There will be plenty of time for questions. This is an opportunity to discuss, face to face, any current SQL tuning issues in which you may have used SQLT.

Written by Carlos Sierra

June 18, 2012 at 4:01 pm

How to start SQLT XTRACT on a SQL that executes at night?

with one comment

Someone asked recently for an easy way to execute SQLT XTRACT on a SQL that happens to execute only out of a daily batch process. This person wanted to use SQLT to capture dynamic performance data from GV$ views, so method XTRACT would help. The problem then was: “how do I get SQLT XTRACT to start at, let’s say, 10pm when I leave the office at 5:30pm?”.

Starting with SQLT 11.4.4.6, all main methods (including XTRACT) take the SQLTXPLAIN password as an inline parameter instead of through an ACCEPT command, so we can now easily execute at 5:30pm a script where we pass to XTRACT both the SQL_ID and the SQLTXPLAIN password. In the example below the password is sqltxplain (lower case):

-- script.sql which executes SQLT XTRACT with a delay of 4.5 hours
host sleep 4.5h
start run/sqltxtract.sql 51x6yr9ym5hdc sqltxplain
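
To keep the script running after you log off (assuming a UNIX-like server), you could launch SQL*Plus in the background with nohup; the connect string below is just a placeholder:

nohup sqlplus scott/tiger @script.sql > script.log 2>&1 &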

Written by Carlos Sierra

June 6, 2012 at 11:41 am

Posted in SQLTXPLAIN (SQLT)

Table contains N column(s) referenced in predicates with mutating number of histogram endpoints count.

with one comment

What is the meaning of this observation on a SQLTXPLAIN (SQLT) main report?

“Table contains N column(s) referenced in predicates with mutating number of histogram endpoints count.”

This observation is part of the health-checks performed by SQLT on a given SQL statement. It means the following:

  1. Your SQL includes predicates on column(s) with histograms; and
  2. The number of buckets in the histograms for these column(s) has changed by more than 10% between two consecutive versions of the statistics.

For example, your SQL has a predicate like “c1 = :b1” and c1 has a histogram. Currently the histogram has 5 buckets, but in the prior version of the statistics for this column it had 4.
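
A quick way to see the current number of buckets per column is a query like the one below; table T1 and owner SCOTT are placeholders:

SELECT column_name, histogram, num_buckets
  FROM dba_tab_col_statistics
 WHERE owner = 'SCOTT'
   AND table_name = 'T1';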

How does it affect you?

It all depends. Maybe it does not affect you at all. But maybe the new value is precisely the one your SQL passes during bind peeking. Then it may impact your execution plan…

Keep in mind that “Observations” in SQLT are merely a kind of “heads-up” that something may affect your plan. Other cases of “mutating” values include:

  • Indexes with mutating BLEVEL
  • Tables with mutating NUM_ROWS
  • Columns with mutating NDV

And the same applies to partitions and sub-partitions.

Typical case:

What happens if your SQL was using index X in its execution plan and suddenly it switches to index Y? Same join order, just one index access swapped.

If you see that X or Y is mutating its BLEVEL, you can suspect that might be the culprit. Of course you will have to review a 10053 trace to be sure, but at least the “observation” creates some awareness of this possibility.
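
In case you need it, a 10053 trace for the next hard parse in your session can be produced like this:

ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
-- force a hard parse and execute your SQL here, then turn the event off:
ALTER SESSION SET EVENTS '10053 trace name context off';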

Written by Carlos Sierra

June 5, 2012 at 3:16 pm