Archive for the ‘SQLTXPLAIN (SQLT)’ Category
Creating a SQL Plan Baseline from Cursor Cache or AWR
A DBA deals with performance issues often, and a SQL statement that suddenly performs poorly is a common scenario. What do we do? We proceed to “pin” an execution plan, then investigate the root cause (the latter only if the time before the next fire permits).
DBMS_SPM provides some APIs to create a SQL Plan Baseline (SPB) from the Cursor Cache, or from a SQL Tuning Set (STS), but not from the Automatic Workload Repository (AWR). For the latter, you need a two-step approach: create a STS from AWR, then load a SPB from the STS. A sketch of this two-step approach follows below. Fine, except when your next fire is waiting for you, or when deciding which is the “best” plan is not trivial.
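For reference, the two-step approach boils down to something like this minimal sketch, where the STS name, AWR snapshot range and SQL_ID are hypothetical placeholders:

DECLARE
  l_cursor DBMS_SQLTUNE.sqlset_cursor;
  l_plans  PLS_INTEGER;
BEGIN
  -- step 1: create a STS and populate it from AWR snapshots
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS_FOR_SPB');
  OPEN l_cursor FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
             begin_snap     => 1001, -- hypothetical AWR snapshot range
             end_snap       => 1002,
             basic_filter   => q'[sql_id = '0abcd1efgh2ij']', -- hypothetical SQL_ID
             attribute_list => 'ALL')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'STS_FOR_SPB', populate_cursor => l_cursor);
  -- step 2: load a SPB from the STS
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'STS_FOR_SPB');
  DBMS_OUTPUT.PUT_LINE(l_plans||' plan(s) loaded');
END;
/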
Take for example the chart below, which depicts multiple execution plans with different performance for one SQL statement. The SQL statement is actually quite simple, and the data is not significantly skewed. On this particular application, usually one-size-fits-all (meaning one-and-only-one plan) works well for most values passed on variable placeholders. Then, which plan would you choose?
Note: please get all scripts using the download column on the right
Looking at the summary of known Execution Plans’ performance below (as reported by planx.sql), we can see the same 6 Execution Plans.
The 1st Plan on the list shows an average execution time of 2.897ms according to AWR, and 0.896ms according to the Cursor Cache; the number of recorded executions for this Plan is 2,502 and 2,178 respectively. We see this Plan contains one Nested Loop, and if we look at its historical performance we notice this Plan takes less than 109ms 95% of the time, less than 115ms 97% of the time, and less than 134ms 99% of the time. We also see that during the worst recorded AWR period, this SQL performed in under 150ms (on average for that one period).
We also notice that the last plan on the list performs one execution in 120.847ms on average (as per AWR) and 181.113ms according to the Cursor Cache (on average as well). Then, “pinning” the 1st plan on the list seems like a good choice, but not too different from all but the last plan, especially when we consider both average performance and historical performance according to the percentiles reported.
PLANS PERFORMANCE
~~~~~~~~~~~~~~~~~

       Plan      ET Avg      ET Avg     CPU Avg     CPU Avg       BG Avg       BG Avg   Executions   Executions                                  ET 100th     ET 99th     ET 97th     ET 95th   CPU 100th    CPU 99th    CPU 97th    CPU 95th
 Hash Value    AWR (ms)    MEM (ms)    AWR (ms)    MEM (ms)          AWR          MEM          AWR          MEM   MIN Cost   MAX Cost  NL  HJ  MJ   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)
----------- ----------- ----------- ----------- ----------- ------------ ------------ ------------ ------------ ---------- ---------- --- --- --- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
 4113179674       2.897       0.896       2.715       0.714           96            5        2,502        2,178          8        738   1   0   0     149.841     133.135     114.305     108.411     147.809     133.007     113.042     107.390
  578709260      29.576      32.704      28.865      31.685        1,583        1,436        6,150        1,843         67        875   1   0   0     154.560      84.264      65.409      57.311     148.648      75.209      62.957      56.305
 1990606009      74.399      79.054      73.163      77.186        1,117        1,192          172          214        905      1,108   0   1   0     208.648     208.648      95.877      95.351     205.768     205.768      94.117      93.814
 1242077371      77.961                  77.182                    1,772                     8,780                     949      1,040   0   1   0     102.966      98.206      91.163      89.272     100.147      97.239      90.165      88.412
 2214147219      79.650      82.413      78.242      80.817        1,999        2,143       42,360       24,862        906      1,242   0   1   0     122.535     101.293      98.442      95.737     119.240      99.118      95.266      93.156
 1214505235     120.847     181.113     105.485     162.783          506        1,355           48           12        114        718   1   0   0     285.950     285.950     285.950     285.950     193.954     193.954     193.954     193.954
The plans performance summary above is displayed in a matter of seconds by planx.sql, sqlperf.sql and by a new script spb_create.sql. This output helps us make a quick decision about which Execution Plan is better for “pinning”, meaning: to create a SPB on it.
Sometimes such a decision is not that trivial, as we can see in the sample below. Which plan is better? I would go with the 2nd on the list. Why? Performance-wise, this plan is more stable. It does a Hash Join, so I am expecting to see a Plan with full scans, but if I can get consistent executions under 0.4s (according to the percentiles), I would be tempted to “pin” this 2nd Plan instead of the 1st one. And I would stay away from the 3rd and 5th. So maybe I would create a SPB with 3 plans instead of just one, and include on this SPB the 1st, 2nd and 4th on the list.
PLANS PERFORMANCE
~~~~~~~~~~~~~~~~~

       Plan      ET Avg      ET Avg     CPU Avg     CPU Avg       BG Avg       BG Avg   Executions   Executions                                  ET 100th     ET 99th     ET 97th     ET 95th   CPU 100th    CPU 99th    CPU 97th    CPU 95th
 Hash Value    AWR (ms)    MEM (ms)    AWR (ms)    MEM (ms)          AWR          MEM          AWR          MEM   MIN Cost   MAX Cost  NL  HJ  MJ   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)   Pctl (ms)
----------- ----------- ----------- ----------- ----------- ------------ ------------ ------------ ------------ ---------- ---------- --- --- --- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
 1917891576       0.467       0.334       0.330       0.172          119           33  554,914,504   57,748,249          6      1,188   2   0   0   6,732.017      10.592       1.628       1.572   1,420.864       1.557       1.482       1.261
   99953997       1.162       2.427       0.655       0.492           83           55   58,890,160    2,225,247         12      2,311   0   1   0     395.819     235.474     108.142      34.909      56.008      22.329      12.926       3.069
 3559532534       1.175   1,741.041       0.858      91.486          359           46   21,739,877          392          4         20   1   0   0  89,523.768   4,014.301     554.740     298.545  21,635.611     216.456      54.050      30.130
 3650324870       2.028      20.788       1.409       2.257          251          199   24,038,404      143,819         11      5,417   0   1   0     726.964     254.245      75.322      20.817     113.259      21.211      13.591       8.486
 3019880278      43.465                  43.029                   20,217                    13,349                   5,693      5,693   0   1   0      43.465      43.465      43.465      43.465      43.029      43.029      43.029      43.029
About new script spb_create.sql
Update: Scripts to deal with SQL Plan Baselines, SQL Profiles and SQL Patches
This new script is a life-saver for us, since our response time for an alert is usually measured in minutes, with a resolution (and sometimes a root cause analysis) expected in less than one hour from the time the incident is raised.
This script is quite simple:
- it provides a list of known Execution Plans including current (Cursor Cache) and historical (AWR) performance as displayed in two samples above, then
- asks which Plan Hash Values (PHVs) you want to create a SPB on; it allows you to enter up to 3 PHVs; and last
- asks if you want these plans to be set as FIXED
After you respond to the ACCEPT parameters, a SPB for your SQL is created and displayed. It does not matter if the Plan exists in the Cursor Cache and/or in AWR: the script finds the Plan and creates the SPB for you. Then finding known Execution Plans, deciding which one is a better choice (or maybe more than one), and creating a SPB can all be done very rapidly.
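For a Plan still in the Cursor Cache, “pinning” it boils down to a call like this minimal sketch (the SQL_ID is a hypothetical placeholder; the PHV is the 1st plan from the first sample above; for a Plan only in AWR the script stages it through a STS, as sketched earlier):

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- load one known plan (identified by its PHV) from the cursor cache into a SPB
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '0abcd1efgh2ij', -- hypothetical
               plan_hash_value => 4113179674,
               fixed           => 'NO' );          -- pass 'YES' to set the plan as FIXED
  DBMS_OUTPUT.PUT_LINE(l_plans||' plan(s) loaded');
END;
/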
If you still prefer to use SQL Profiles and not SPBs for whatever reason, script coe_xfr_sql_profile.sql is still around and updated. On these 12c days, and soon 18c and beyond, I’d much rather use SQL Plan Management and create SPBs though!
Anyways, enjoy these free scripts and become a faster hero “pinning” good plans. Then don’t forget to do diligent root cause analysis afterwards. I use SQLd360 by Mauro Pagano for deep understanding of what is going on with my SQL statements.
Soon, I will post about a cool free tool that automates the implementation of SQL Plan Management on a high-rate OLTP where stability is more important than flexibility (frequently changing Execution Plans). Stay tuned!
SQLT and SQLd360 interview and one-day class on Practical SQL Tuning announcement
With permission of the Northern California Oracle Users Group (NoCOUG) I am reproducing a warm interview on SQLTXPLAIN and SQLd360. During this interview Mauro Pagano and myself talk about the history behind these two free tools and how the former has evolved into the latter. You can find the full transcript of the interview here: YesSQL(T). If you want to read the entire free online NoCOUG Journal, you will discover other cool articles.
Anyways, I am glad Iggy Fernandez invited us to participate, first in this interview, and second to collaborate on the meeting planned for January. At that meeting Mauro and I will conduct a full-day workshop on “Practical SQL Tuning” (January 28) in Northern California. We hope to see many of you guys there, and please bring questions and case studies.
SQLTXPLAIN under new administration
During my 17 years at Oracle, I developed several tools and scripts. The largest and most widely used is SQLTXPLAIN. It is available through My Oracle Support (MOS) under document_id 215187.1.
SQLTXPLAIN, also known as SQLT, is a tool for SQL diagnostics, including Performance and Wrong Results. I am the original developer and author, but since the very early stages of its development, this tool has encapsulated the expertise of many bright engineers, DBAs, developers and others, who constantly helped to improve this tool on every new release by providing valuable feedback. SQLT is then nothing but the collection of many good ideas from many people. I was just the lucky guy who decided to build something useful for the Oracle SQL tuning community.
When I decided to join Enkitec back in 2013, I asked Mauro Pagano to look after my baby (I mean SQLT), and sure enough he did an excellent job. Mauro fixed most of my bugs, as he jokes about, and also incorporated some of his own :-). Mauro kept SQLT in good shape and he was able to continue improving it on every new release. Now Mauro also works for Enkitec, so SQLT has a new owner and custodian at Oracle.
Abel Macias is the new owner of SQLT, and as such he gets busy maintaining and enhancing this tool among other duties at Oracle. So, if you have enhancement requests, or positive feedback, please reach out to Abel at his Oracle account: abel.macias@oracle.com. If you come across some of my other tools and scripts, and they show my former Oracle account (carlos.sierra@oracle.com), please reach out to Abel and he might be able to route your concern or question.
Since one of my hobbies is to build free software that I also consume, my current efforts are on eDB360, eAdam and eSP. The most popular and openly available is eDB360, which basically gives you a 360-degree view of a database without installing anything. Then, Mauro is also building something cool in his own free time. Mauro is building the new SQLd360 tool, which is already available on the web (search for SQLd360). This SQLd360 tool, similar to eDB360, provides a 360-degree view, but instead of a database its focus is one SQL. And similar to eDB360, it installs nothing on the database. Both are available as “free software” for anyone to download and use. That is the nice part: everyone likes free! (especially if it is any good).
What is the difference between SQLd360 and SQLT?
Both are exceptional tools. And both can be used for SQL Tuning and for SQL diagnostics. The main differences in my opinion are these:
- SQLT has it all. It is huge and it covers pretty much all corners. So, for SQL Tuning this SQLTXPLAIN is “THE” tool.
- SQLd360 on the other hand is smaller, newer and faster to execute. It gives me what is more important and most commonly used.
- SQLT requires installing a couple of schemas and hundreds of objects. SQLd360 installs nothing!
- To download SQLT you need to log in to MOS. In contrast, SQLd360 is wide open (free software license), and no login is needed.
- Oracle Support requires SQLT, and Oracle Engineers are not exposed yet to SQLd360.
- SQLd360 uses Google charts (as well as eDB360 does) which enhance readability of large data sets, like time series for example. Thus SQLd360 output is quite readable.
- eDB360 calls SQLd360 on SQL of interest (large database consumers), so in that sense SQLd360 enhances eDB360. But SQLd360 can also be used stand-alone.
If you ask me which one I would recommend, I would answer: both! If you can use both, then that is better than using just one. Each of these two tools (SQLT and SQLd360) has pros and cons compared to the other. But at the end both are great tools. And thanks to Abel Macias, SQLT continues its lifecycle with frequent enhancements. And thanks to Mauro, we now have a new kid on the block! I would say we have a win-win for our large Oracle community!
SQLTXPLAIN PL/SQL Public APIs to execute XTRACT from 3rd party tools
Many tools offer Public APIs, which expose some functionality to other tools. SQLTXPLAIN also contains some Public APIs. They are provided by package SQLTXADMIN.SQLT$E. I would say the most relevant one is XTRACT_SQL_PUT_FILES_IN_DIR. This blog post is about this Public API and how it can be used by other tools to execute a SQLT XTRACT from PL/SQL instead of SQL*Plus.
Imagine a tool that deals with SQL statements, and with the click of a button it invokes SQLTXTRACT on a SQL of interest; after a few minutes, most files created by SQLTXTRACT show up in a pre-defined OS directory. Implementing this SQLT functionality in an external tool is extremely easy, as you will see below.
Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR takes a SQL_ID plus two optional parameters: a tag to identify output files, and a directory name. Only the SQL_ID parameter is mandatory, but I recommend passing values for all 3.
I used “Q1” as a tag to be included in all output files. And I used staging directory “FROG_DIR” at the database layer, which points to “/home/oracle/frog” at the OS layer.
In the sample below, I show how to use this Public API for a particular SQL_ID “8u0n7w1jug5dg”. I call this API from SQL*Plus, but keep in mind that if I were to call it from within a tool’s PL/SQL library, the method would be the same.
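Here is a minimal sketch of such a call, using the function overload (so we capture the STATEMENT_ID) together with the tag and directory described above; the parameter names come from the package specification shown further below:

VAR statement_id NUMBER;
BEGIN
  -- execute SQLT XTRACT and place all output files into FROG_DIR
  :statement_id := SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR(
    p_sql_id_or_hash_value => '8u0n7w1jug5dg',
    p_out_file_identifier  => 'Q1',
    p_directory_name       => 'FROG_DIR' );
END;
/
PRINT statement_id;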
Another consideration is that Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR may take several minutes to execute, so you may want to “queue” the request using a Task or a Job within the database. What is important in this blog post is to explain and show how this Public API works.
SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR parameters:
Find below a code snippet showing the API parameters. Notice this API is overloaded: the function returns the STATEMENT_ID, while the procedure returns nothing. This STATEMENT_ID is the 5-digit number you see on each SQLT execution.
CREATE OR REPLACE PACKAGE &&tool_administer_schema..sqlt$e AUTHID CURRENT_USER AS
/* $Header: 215187.1 sqcpkge.pks 12.1.03 2013/10/10 carlos.sierra mauro.pagano $ */
/*************************************************************************************/

  /* -------------------------
   *
   * public xtract_sql_put_files_in_dir
   *
   * executes sqlt xtract on a single sql then
   * puts all generated files into an os directory,
   * returning the sqlt statement id.
   *
   * ------------------------- */
  FUNCTION xtract_sql_put_files_in_dir (
    p_sql_id_or_hash_value IN VARCHAR2,
    p_out_file_identifier  IN VARCHAR2 DEFAULT NULL,
    p_directory_name       IN VARCHAR2 DEFAULT 'SQLT$STAGE' )
  RETURN NUMBER;

  /* -------------------------
   *
   * public xtract_sql_put_files_in_dir (overload)
   *
   * executes sqlt xtract on a single sql then
   * puts all generated files into an os directory.
   *
   * ------------------------- */
  PROCEDURE xtract_sql_put_files_in_dir (
    p_sql_id_or_hash_value IN VARCHAR2,
    p_out_file_identifier  IN VARCHAR2 DEFAULT NULL,
    p_directory_name       IN VARCHAR2 DEFAULT 'SQLT$STAGE' );
Staging Directory
To implement Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR in your tool, you first need to create and test a staging directory where the API will write files. This directory needs to be accessible to the “oracle” account, so I show below how to create sample directory “frog” while connected to the OS as “oracle”.
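A minimal sketch of these steps, assuming the same paths used throughout this post:

-- at the OS, connected as "oracle":
--   mkdir /home/oracle/frog
-- then at the database, connected as a DBA:
CREATE OR REPLACE DIRECTORY frog_dir AS '/home/oracle/frog';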
Since the API uses UTL_FILE, it is important that “oracle” can write into this directory, so be sure to test a UTL_FILE write after you create the directory and before you test Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR.
Use code snippet provided below to test the UTL_FILE writing into this new staging OS directory.
DECLARE
  out_file_type UTL_FILE.file_type;
BEGIN
  -- open a test file for writing to verify directory permissions
  out_file_type := UTL_FILE.FOPEN (
    location     => 'FROG_DIR',
    filename     => 'Test1.txt',
    open_mode    => 'WB',
    max_linesize => 32767 );
  -- close the handle so the test file is not left open
  UTL_FILE.FCLOSE(out_file_type);
END;
/
Executing SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR
In your tool, you can call this SQLT Public API from PL/SQL. You may want to use a Task or Job since the API may take several minutes to execute, and you do not want the user to simply wait until SQLT completes.
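For example, a one-off job could look like this minimal sketch (the job name is a hypothetical placeholder):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'SQLT_XTRACT_8U0N7W1JUG5DG', -- hypothetical job name
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[BEGIN SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR('8u0n7w1jug5dg', 'Q1', 'FROG_DIR'); END;]',
    enabled    => TRUE,   -- start the job right away
    auto_drop  => TRUE ); -- drop the job metadata once it completes
END;
/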
Conclusion
Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR is available for any 3rd party tool to use. If SQLT has been pre-installed on a system where your tool executes, then calling this API as shown above will generate a set of SQLT files in a pre-defined OS staging directory.
If the system where you install your tool does not have SQLT pre-installed, your tool can direct its users to download and install SQLT from My Oracle Support (MOS) under document 215187.1.
Once you generate all these SQLT XTRACT files into an OS staging directory, you may want to zip them, or make them visible to your tool user. If the latter, then show the “main” html report.
SQLT is an Oracle community tool hosted at Oracle MOS under 215187.1. This tool is not supported, but if you have a question or struggle while implementing this Public API, feel free to shoot me an email or post your question/concern on this blog.
Creating a SQL Plan Baseline out of a modified SQL
While delivering a session at KSCOPE 14, I was asked a very good question: can I create a SQL Plan Baseline for an Original SQL out of a Modified SQL? In other words, query Q1, which performs poorly, has SQL_ID1 and SQL_HANDLE1 with a poorly performing Execution Plan with Hash Value PHV1. With a small modification to this query, like adding a CBO Hint or removing one, we obtain query Q2, which performs well, and has SQL_ID2, SQL_HANDLE2 and PHV2. So what we want is to associate PHV2 to SQL_ID1. The question is: how do we do that with an API?
Maria Colgan explains in this blog post how to manually do exactly what is requested above. So the question becomes: can we do this manual procedure with an easy-to-use API?
Script coe_load_sql_baseline.sql below inputs SQL_ID1, SQL_ID2 and PHV2, which correspond to the Original SQL_ID, the Modified SQL_ID and the Plan Hash Value from the Modified SQL (it could have more than one plan); then it produces a SQL Plan Baseline for the Original SQL out of the Execution Plan of the Modified SQL. By doing this we can “fix” a Query performing poorly by attaching to it a good performing Execution Plan that was produced by making a small temporary change to the Original SQL, like adding or removing a small set of CBO Hints. So we can generate good Execution Plans without actually changing the application.
Script coe_load_sql_baseline.sql is provided inside SQLT (MOS 215187.1) under the sqlt/utl directory. Always use the latest version. As of June 2014 the latest version is the one below. This script does not require SQLT to be installed.
SPO coe_load_sql_baseline.log;
SET DEF ON TERM OFF ECHO ON FEED OFF VER OFF HEA ON LIN 2000 PAGES 100 LONG 8000000 LONGC 800000 TRIMS ON TI OFF TIMI OFF SERVEROUT ON SIZE 1000000 NUM 20 SQLP SQL>;
SET SERVEROUT ON SIZE UNL;
REM
REM $Header: 215187.1 coe_load_sql_baseline.sql 11.4.5.8 2013/05/10 carlos.sierra $
REM
REM Copyright (c) 2000-2013, Oracle Corporation. All rights reserved.
REM
REM AUTHOR
REM   carlos.sierra@oracle.com
REM
REM SCRIPT
REM   coe_load_sql_baseline.sql
REM
REM DESCRIPTION
REM   This script loads a plan from a modified SQL into the SQL
REM   Plan Baseline of the original SQL.
REM   If a good performing plan only reproduces with CBO Hints
REM   then you can load the plan of the modified version of the
REM   SQL into the SQL Plan Baseline of the original SQL.
REM   In other words, the original SQL can use the plan that was
REM   generated out of the SQL with hints.
REM
REM PRE-REQUISITES
REM   1. Have in cache or AWR the text for the original SQL.
REM   2. Have in cache the plan for the modified SQL
REM      (usually with hints).
REM
REM PARAMETERS
REM   1. ORIGINAL_SQL_ID (required)
REM   2. MODIFIED_SQL_ID (required)
REM   3. PLAN_HASH_VALUE (required)
REM
REM EXECUTION
REM   1. Connect into SQL*Plus as user with access to data dictionary
REM      and privileges to create SQL Plan Baselines. Do not use SYS.
REM   2. Execute script coe_load_sql_baseline.sql passing first two
REM      parameters inline or until requested by script.
REM   3. Provide plan hash value of the modified SQL when asked.
REM
REM EXAMPLE
REM   # sqlplus system
REM   SQL> START coe_load_sql_baseline.sql gnjy0mn4y9pbm b8f3mbkd8bkgh
REM   SQL> START coe_load_sql_baseline.sql;
REM
REM NOTES
REM   1. This script works on 11g or higher.
REM   2. For a similar script for 10g use coe_load_sql_profile.sql,
REM      which uses custom SQL Profiles instead of SQL Baselines.
REM   3. For possible errors see coe_load_sql_baseline.log
REM   4. Use a DBA user but not SYS. Do not connect as SYS as the staging
REM      table cannot be created in SYS schema and you will receive an error:
REM      ORA-19381: cannot create staging table in SYS schema
REM
SET TERM ON ECHO OFF;
PRO
PRO Parameter 1:
PRO ORIGINAL_SQL_ID (required)
PRO
DEF original_sql_id = '&1';
PRO
PRO Parameter 2:
PRO MODIFIED_SQL_ID (required)
PRO
DEF modified_sql_id = '&2';
PRO
WITH
p AS (
SELECT DISTINCT plan_hash_value
  FROM gv$sql_plan
 WHERE sql_id = TRIM('&&modified_sql_id.')
   AND other_xml IS NOT NULL ),
m AS (
SELECT plan_hash_value,
       SUM(elapsed_time)/SUM(executions) avg_et_secs
  FROM gv$sql
 WHERE sql_id = TRIM('&&modified_sql_id.')
   AND executions > 0
 GROUP BY plan_hash_value )
SELECT p.plan_hash_value,
       ROUND(m.avg_et_secs/1e6, 3) avg_et_secs
  FROM p, m
 WHERE p.plan_hash_value = m.plan_hash_value
 ORDER BY avg_et_secs NULLS LAST;
PRO
PRO Parameter 3:
PRO PLAN_HASH_VALUE (required)
PRO
DEF plan_hash_value = '&3';
PRO
PRO Values passed to coe_load_sql_baseline:
PRO ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PRO ORIGINAL_SQL_ID: "&&original_sql_id."
PRO MODIFIED_SQL_ID: "&&modified_sql_id."
PRO PLAN_HASH_VALUE: "&&plan_hash_value."
PRO
WHENEVER SQLERROR EXIT SQL.SQLCODE;
SET TERM OFF ECHO ON;

-- trim parameters
COL original_sql_id NEW_V original_sql_id FOR A30;
COL modified_sql_id NEW_V modified_sql_id FOR A30;
COL plan_hash_value NEW_V plan_hash_value FOR A30;
SELECT TRIM('&&original_sql_id.') original_sql_id,
       TRIM('&&modified_sql_id.') modified_sql_id,
       TRIM('&&plan_hash_value.') plan_hash_value
  FROM DUAL;

-- open log file
SPO coe_load_sql_baseline_&&original_sql_id..log;
GET coe_load_sql_baseline.log;
.

-- get user
COL connected_user NEW_V connected_user FOR A30;
SELECT USER connected_user FROM DUAL;

VAR sql_text CLOB;
VAR plan_name VARCHAR2(30);
EXEC :sql_text := NULL;
EXEC :plan_name := NULL;

-- get sql_text from memory
BEGIN
  SELECT REPLACE(sql_fulltext, CHR(00), ' ')
    INTO :sql_text
    FROM gv$sqlarea
   WHERE sql_id = TRIM('&&original_sql_id.')
     AND ROWNUM = 1;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting original sql_text from memory: '||SQLERRM);
    :sql_text := NULL;
END;
/

-- get sql_text from awr
BEGIN
  IF :sql_text IS NULL OR NVL(DBMS_LOB.GETLENGTH(:sql_text), 0) = 0 THEN
    SELECT REPLACE(sql_text, CHR(00), ' ')
      INTO :sql_text
      FROM dba_hist_sqltext
     WHERE sql_id = TRIM('&&original_sql_id.')
       AND sql_text IS NOT NULL
       AND ROWNUM = 1;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting original sql_text from awr: '||SQLERRM);
    :sql_text := NULL;
END;
/

-- sql_text as found
SELECT :sql_text FROM DUAL;

-- check if sql_text for original sql is available
SET TERM ON;
BEGIN
  IF :sql_text IS NULL THEN
    RAISE_APPLICATION_ERROR(-20100, 'SQL_TEXT for original SQL_ID &&original_sql_id. was not found in memory (gv$sqlarea) or AWR (dba_hist_sqltext).');
  END IF;
END;
/

-- check phv is found
DECLARE
  l_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO l_count
    FROM gv$sql
   WHERE sql_id = TRIM('&&modified_sql_id.')
     AND plan_hash_value = TO_NUMBER(TRIM('&&plan_hash_value.'));
  IF l_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20110, 'PHV &&plan_hash_value. for modified SQL_ID &&modified_sql_id. was not found in memory (gv$sql).');
  END IF;
END;
/

SET ECHO OFF;
DECLARE
  plans          NUMBER;
  description    VARCHAR2(500);
  sys_sql_handle VARCHAR2(30);
  sys_plan_name  VARCHAR2(30);
BEGIN
  -- create sql_plan_baseline for original sql using plan from modified sql
  plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
    sql_id          => TRIM('&&modified_sql_id.'),
    plan_hash_value => TO_NUMBER(TRIM('&&plan_hash_value.')),
    sql_text        => :sql_text );
  DBMS_OUTPUT.PUT_LINE('Plans Loaded: '||plans);

  -- find handle and plan_name for sql_plan_baseline just created
  SELECT sql_handle, plan_name
    INTO sys_sql_handle, sys_plan_name
    FROM dba_sql_plan_baselines
   WHERE creator = USER
     AND origin = 'MANUAL-LOAD'
     AND created = ( -- past 1 minute only
           SELECT MAX(created) max_created
             FROM dba_sql_plan_baselines
            WHERE creator = USER
              AND origin = 'MANUAL-LOAD'
              AND created > SYSDATE - (1/24/60));
  DBMS_OUTPUT.PUT_LINE('sys_sql_handle: "'||sys_sql_handle||'"');
  DBMS_OUTPUT.PUT_LINE('sys_plan_name: "'||sys_plan_name||'"');

  -- update description of new sql_plan_baseline
  description := UPPER('original:'||TRIM('&&original_sql_id.')||' modified:'||TRIM('&&modified_sql_id.')||' phv:'||TRIM('&&plan_hash_value.')||' created by coe_load_sql_baseline.sql');
  plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE (
    sql_handle      => sys_sql_handle,
    plan_name       => sys_plan_name,
    attribute_name  => 'description',
    attribute_value => description );
  DBMS_OUTPUT.PUT_LINE(plans||' plan(s) modified description: "'||description||'"');

  -- update plan_name of new sql_plan_baseline
  :plan_name := UPPER(TRIM('&&original_sql_id.')||'_'||TRIM('&&modified_sql_id.'));
  :plan_name := sys_plan_name; -- avoids ORA-38141: SQL plan baseline SQL_PLAN_64b0jqr2t1h3558b5ab4d does not exist
  IF :plan_name <> sys_plan_name THEN
    plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE (
      sql_handle      => sys_sql_handle,
      plan_name       => sys_plan_name,
      attribute_name  => 'plan_name',
      attribute_value => :plan_name );
    DBMS_OUTPUT.PUT_LINE(plans||' plan(s) modified plan_name: "'||:plan_name||'"');
  END IF;

  -- drop baseline staging table for original sql (if one exists)
  BEGIN
    DBMS_OUTPUT.PUT_LINE('dropping staging table "STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'))||'"');
    EXECUTE IMMEDIATE 'DROP TABLE STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'));
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('staging table "STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'))||'" did not exist');
  END;

  -- create baseline staging table for original sql
  DBMS_OUTPUT.PUT_LINE('creating staging table "STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'))||'"');
  DBMS_SPM.CREATE_STGTAB_BASELINE (
    table_name  => 'STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.')),
    table_owner => '&&connected_user.' );

  -- packs new baseline for original sql
  DBMS_OUTPUT.PUT_LINE('packaging new sql baseline into staging table "STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'))||'"');
  plans := DBMS_SPM.PACK_STGTAB_BASELINE (
    table_name  => 'STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.')),
    table_owner => '&&connected_user.',
    sql_handle  => sys_sql_handle,
    plan_name   => :plan_name );
  DBMS_OUTPUT.PUT_LINE(plans||' plan(s) packaged');
END;
/

-- display details of new sql_plan_baseline
SET ECHO ON;
REM
REM SQL Plan Baseline
REM ~~~~~~~~~~~~~~~~~
REM
SELECT signature, sql_handle, plan_name, enabled, accepted, fixed --, reproduced (avail on 11.2.0.2)
  FROM dba_sql_plan_baselines
 WHERE plan_name = :plan_name;
SELECT description
  FROM dba_sql_plan_baselines
 WHERE plan_name = :plan_name;
SET ECHO OFF;
PRO
PRO ****************************************************************************
PRO * Enter &&connected_user. password to export staging table STGTAB_BASELINE_&&original_sql_id.
PRO ****************************************************************************
HOS exp &&connected_user. tables=&&connected_user..STGTAB_BASELINE_&&original_sql_id. file=STGTAB_BASELINE_&&original_sql_id..dmp statistics=NONE indexes=N constraints=N grants=N triggers=N
PRO
PRO If you need to implement this SQL Plan Baseline on a similar system,
PRO import and unpack using these commands:
PRO
PRO imp &&connected_user. file=STGTAB_BASELINE_&&original_sql_id..dmp tables=STGTAB_BASELINE_&&original_sql_id. ignore=Y
PRO
PRO SET SERVEROUT ON;;
PRO DECLARE
PRO plans NUMBER;;
PRO BEGIN
PRO plans := DBMS_SPM.UNPACK_STGTAB_BASELINE('STGTAB_BASELINE_&&original_sql_id.', '&&connected_user.');;
PRO DBMS_OUTPUT.PUT_LINE(plans||' plan(s) unpackaged');;
PRO END;;
PRO /
PRO
SPO OFF;
HOS zip -m coe_load_sql_baseline_&&original_sql_id. coe_load_sql_baseline_&&original_sql_id..log STGTAB_BASELINE_&&original_sql_id..dmp coe_load_sql_baseline.log
HOS zip -d coe_load_sql_baseline_&&original_sql_id. coe_load_sql_baseline.log
WHENEVER SQLERROR CONTINUE;
SET DEF ON TERM ON ECHO OFF FEED 6 VER ON HEA ON LIN 80 PAGES 14 LONG 80 LONGC 80 TRIMS OFF TI OFF TIMI OFF SERVEROUT OFF NUM 10 SQLP SQL>;
SET SERVEROUT OFF;
UNDEFINE 1 2 3 original_sql_id modified_sql_id plan_hash_value
CL COL
PRO
PRO coe_load_sql_baseline completed.
How to upgrade Dynamic Sampling on a Query already using an imported SQL Profile
Problem
I have this query that references a couple of Global Temporary Tables (GTT). These GTT have no CBO Statistics, thus Dynamic Sampling (DS) is used on them. The problem is that the default value of 2 at the instance level samples only 64 blocks, and these GTT are large. Testing with DS on larger samples provides better quality on these dynamic statistics and a better performing Execution Plan for this particular SQL.
This SQL already uses an imported SQL Profile generated by coe_xfr_sql_profile.sql out of SQLT (MOS 215187.1) under the sqlt/utl directory. This SQL Profile contains CBO Hints created out of the Outline Data contained in the other_xml column of the Plan. Still, a higher level of DS is needed for these two GTT in order to produce a better performing Plan. Setting DS at the instance level or session level is not an option. Modifying the query is not an option. So the question is: “how to embed a dynamic sampling hint on a SQL that already has a SQL Profile based on an Outline”?
Solution
In the script generated by coe_xfr_sql_profile.sql, add one extra Hint with the DS level that is needed. Look at the sample below, where one extra Hint has been added right below DB_VERSION. Since the two GTT have no CBO Statistics, they invoke DS, which would now use level 6 instead of 2.
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 6)]',
q'[ALL_ROWS]',
Why use SQLTXPLAIN
Every so often I see on a distribution list a posting that starts like this: “I upgraded my application from database release X to release Y and now many queries are performing poorly, can you tell why?”
Like everyone else on a distribution list, my first impulse is to make an educated guess permeated by a prior set of experiences. The intentions are always good, but the process is painful and time consuming. Many of us have seen this kind of question, and many of us have good hunches. Still, I think our eagerness to help blinds us a bit. The right thing to do is to step back and analyze the facts, and I mean all the diagnostics supporting the observation.
What is needed to diagnose a SQL Tuning issue?
The list is large, but I will enumerate some of the most important pieces:
- SQL Text
- Version of the database (before and after upgrade)
- Database parameters (before and after)
- State of the CBO Statistics (before and after)
- Changes on Histograms
- Basics about the architecture (CPUs, memory, etc.)
- Values of binds if SQL has them
- Index comparison, including state (visible? usable?)
- Execution Plan (before and after)
- Plan stability? (Stored Outlines, Profiles, SQL Plan Management)
- Performance history as per evidence on AWR or StatsPack
- Trace from Event 10053 to understand the CBO (see the snippet after this list)
- Trace from Event 10046 level 8 or 12 to review Waits
- Active Session History (ASH) if 10046 is not available
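For reference, these two trace events are typically enabled at the session level with commands like this minimal sketch (run in the session that executes the poorly performing SQL):

-- CBO trace, written during a hard parse of the SQL of interest
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
-- SQL trace including waits and binds (level 12); level 8 is waits only
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... execute the SQL of interest here ...
ALTER SESSION SET EVENTS '10053 trace name context off';
ALTER SESSION SET EVENTS '10046 trace name context off';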
I could keep adding bullets to the list, but I think you get the point: there are simply too many things to check! And each takes some time to collect. More important, the state of the system changes over time, so you may need to re-collect the same diagnostics more than once.
SQLTXPLAIN to the rescue
SQLT, or SQLTXPLAIN, has been available on MetaLink (now MOS) under note 215187.1 for over a decade. In short, SQLT collects all the diagnostics listed above and a lot more. That is WHY Oracle Support uses it every day. It simply saves a lot of time! So, I always encourage fellow Oracle users to make use of this FREE tool and expedite their own SQL Tuning analysis. When time permits, I do volunteer to help on an analysis. So, if you get to read this, and you want to help yourself while using SQLT but feel intimidated by this little monster, please give it a try and contact me for assistance. If I can help, I will; if I cannot, I will let you know.
Conclusion
It is fun to guess WHY a SQL is not performing as expected, and trying different guesses is educational but very time consuming. If you want to actually find root causes before trying to fix your SQL, you may want to collect relevant diagnostics. SQLT is there to help, and if installing this tool is not something you can do in a short term, consider then SQL Health-Check SQLHC.
Meet “edb360”: a free tool that provides a 360-degree view of an Oracle database
Simply put: edb360 is a new free tool that provides a 360-degree view of an Oracle database.
What is “edb360“?
This “edb360” tool is the product of a collaborative effort of some very smart guys, and me. Special thanks to Frits Hoogland, Karl Arao, Randy Johnson, Martin Bach, Kyle Hailey, Tanel Poder, Alex Fatkulin, Mauro Pagano, Abel Macias, Jon Adams and Jack Agustin. These guys helped me to envision edb360, some directly and some indirectly, but their help and shared knowledge motivated me to develop edb360 and make it available today.
The edb360 tool started as a quick and dirty “script” to gather basic information about a database without knowing anything about it beforehand. The first rule for edb360 was: it has to install nothing in the database. The second rule became: it has to provide some insight about a database.
The output is presented for the most part in 3 formats: HTML, Text and Comma-separated Values (CSV). Why? HTML and Text can be easily used to consolidate important findings into a Word report. Sometimes HTML is more useful and sometimes Text is better. Then CSV is used to produce charts out of Performance Trends. Some people can visualize trends more easily with a graph (me included).
What about other tools?
Of course there are wonderful tools that can help in this arena, like Oracle Enterprise Manager (OEM) or Oracle’s Automatic Workload Repository (AWR). So why not use those tools? Well, if I had access to OEM, or I knew beforehand which time intervals I want to analyze with AWR, then I would not have a strong need to use edb360. The reality that we consultants face when we are getting acquainted with a system is that we are not given any access to the database of interest (usually production). And asking for a server account feels like asking for Coke’s secret formula: we simply cannot poke the database at our own will, and that is understandable. So, what is our second best? Please run this script that installs nothing and generates a zip file with some metadata from your system. The script is plain text and its output is also plain text (html, text and csv files). So, any DBA or System Administrator can validate that no customer confidential data is extracted or exposed. A win-win!
If the system we want to understand is an Exadata system, we can also request an Exacheck output; if it is not an Exadata system but a RAC cluster, there is Raccheck. These two tools, available through My Oracle Support (MOS), make a good companion for edb360. In other words, edb360 is not a replacement for the other two but more of an add-on or companion.
Why is edb360 free?
Why not? Often I get asked: why do you give away the tools and scripts you develop? The answer is simple: tools, scripts, white papers, blog entries like this, in my mind they all represent the same thing: sharing knowledge with our Oracle community. I wish for a community where knowledge (and tools) flows for all to benefit. Let’s say the personal time I invest building tools and scripts kind of makes up for my lame blog postings. 😉
What is the catch?
No catch. Just be aware that edb360 makes use of some DBA_HIST views and ASH data, and those are part of the Oracle Diagnostics Pack. So when you execute the tool, it asks you to indicate whether your site has those licenses. Your answer determines the scope of the output. So if you specify you have a license for the Oracle Diagnostics Pack, then your edb360 output includes pieces from AWR and ASH; else AWR and ASH are not accessed.
About versions, feedback and support
For the most part, I am committed to maintaining this tool as my personal time permits. That means I can only work on it during odd hours and not every day. Nothing different than SQLTXPLAIN during the first few years of its existence, so I am not scared. Also keep in mind this edb360 is a work in progress, and version v1405 is the first one I feel comfortable sharing with the community. In other words, it is far from perfect and I foresee it growing in multiple directions.
If you like this tool, and want to enhance its output, get SQLHC from MOS 1366133.1, and place the sqlhc.sql script into the same db360/sql directory. By doing so, you will also get 3 SQL health-checks. In other words, edb360 is SQLHC aware.
Conclusion
If you like free tools and have a use for this edb360 tool, you might as well download it and give it a shot. Nothing to lose (besides a few minutes of your spare time). A sample output is also available under the same link above.
Life is Good!
About AWR, SQLT and DB360
This blog posting is about answering this first question below, which I often get asked:
Can I use SQLTXPLAIN (SQLT) instead of AWR?
The answer is: it depends. If you are doing SQL Tuning and you already know the SQL_ID, then you may want to use SQLT XTRACT (MOS 215187.1) directly on the SQL_ID of concern. But even in that case, keep in mind that SQLT accesses AWR data, so your environment must have a valid license to use the Oracle Diagnostics Pack. In fact, when you install SQLT, it asks if you have the Oracle Tuning Pack, the Oracle Diagnostics Pack or none. Depending on how you respond, SQLT does or does not access the set of views and APIs included in those Oracle Packs. That means you can configure SQLT to access AWR data or not.
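For reference, a typical XTRACT execution looks like this sketch (assuming SQLT is already installed and you run it from the sqlt/run directory; the SQL_ID is a hypothetical placeholder, and the script prompts for the password of the SQLTXPLAIN user):

# cd sqlt/run
# sqlplus appuser
SQL> START sqltxtract.sql 0abcd1efgh2ij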
What is the difference between AWR and SQLT?
In short, the difference between these two is the scope. You use AWR to diagnose database performance, while you use SQLT to diagnose one SQL. Sometimes I explain this difference by saying: “AWR is to an entire database what SQLT is to one SQL”. One is for system-wide performance, the other is very centric to one SQL at a time.
Why does SQLT exist?
I envisioned SQLT in the late 90s when I was a road-warrior fighting fires in the area of performance, and in particular SQL performance. I found back then that Oracle-provided tools like TKPROF were excellent, but I always needed something more, like knowing the state of the CBO Statistics, Tables and Indexes, etc.
These days, my good friend Mauro Pagano from Oracle is keeping the ball rolling. He is keeping SQLT in constant motion, making it a better tool on every new version. So, I would say this: SQLT is filling some gaps that I, and many others, consider important in order to be diligent on root cause analysis for cases where a SQL performs poorly.
What is DB360?
As SQLT brings to the table several pieces of information that we need for SQL Tuning, and which are not available using out-of-the-box tools like TKPROF or SQL Monitoring, the new DB360 tool is doing something similar for the entire database: It complements what AWR provides by producing a report with meaningful information about an entire database. This DB360 is a tool that installs nothing on the database, and produces an HTML report with sections such as Configuration, Sizing, CBO Statistics, Performance Trends, etc.
Is DB360 a licensed product?
No. This DB360 tool belongs to Enkitec. It is not yet available to the Oracle community, but it will be soon. Same as with SQLT, if you have an Oracle Tuning or Diagnostics Pack, then when you execute DB360 you get to see in your DB360 report some pieces of information generated out of views and APIs covered by those Oracle Packs; else you get only the pieces which require no Oracle Pack license. Besides the restriction to limit your use of DB360 as per your Oracle Pack license, DB360 itself will be available to the Oracle community for free, and with no strings attached, same as SQLT.
Why are SQLT and DB360 free to download and use?
These tools are simply a contribution to the Oracle community. “Sharing tools is like sharing knowledge: it makes our IT experience more pleasurable”. The only payback comes when you share back with the Oracle community some of your knowledge, or some of your tools and scripts. If you have been a speaker at an Oracle Users Group, then you may relate to this gratifying experience of sharing with others what you know. At RMOUG these past 3 days, I have had the opportunity to experience once again this special sense of community, which is always eager to share and to learn from each other.
Conclusion
SQLT complements TKPROF and SQL Monitor. DB360 complements AWR. When it comes to diagnostics, either for one SQL or for an entire Database, having at our disposal additional diagnostics in the context of our analysis improves our chances to do a diligent job, while reducing the time it would take to assemble those pieces manually; all with the extra benefit of having these extra diagnostics pieces consistent to a point in time. So, I hope you continue enjoying SQLT, and in the near future DB360!
Exadata Optimizations and SQLTXPLAIN Courses
I will be delivering a couple of courses soon. One in January and the second in February. I will keep posting upcoming Training and Conferences on a new link at the right margin of this blog.
Exadata Optimizations Jan 13-14
This 2-day “Exadata Optimizations” course is for Developers and DBAs new to Exadata who need to ramp up quickly. As the name implies, its focus is on Exadata Optimizations. We talk about Smart Scans, Storage Indexes, Smart Flash Cache, Hybrid Columnar Compression (HCC) and Parallel Execution (PX). This course is hands-on, with a fair amount of demos and labs.
SQLTXPLAIN (SQLT) Feb 20-21
This “SQL Tuning with SQLTXPLAIN” 2-day course shows how to use SQLT to actually do SQL Tuning. We go over the yin-yang of the CBO, meaning: Plan Flexibility versus Plan Stability. We use SQLT for labs and we also go over some real-life SQL Tuning cases. If you are currently using SQLT, you are welcome to bring a SQLT Report to class and we can review it there.
Conclusion
New year, new resolutions. I will be investing part of my time sharing knowledge through formal courses and conferences. These days it is hard to find the time and budget to keep our knowledge on the edge, but again and again I see that many of our daily struggles could be mitigated by some concise technical training. So I encourage you to add some training to your list of resolutions for this new year; or at the very least, to get and read some fresh books.
Happy New Year 2014!