I delivered my two sessions at the Great Lakes Oracle Users Group today. It was a great experience! I had the honor of filling both sessions to capacity, and both were scheduled in the largest room of the 5 concurrent tracks! I estimate that in each session I had more than 50% of the total audience. It is very rewarding being able to share some knowledge with such a great crowd. I was asked if I would do a half-day pre-conference workshop next year. I would certainly do it if I can.
Anyways, with the conference behind me, I am sharing both presentations here. For the one on Adaptive Cursor Sharing, if you want to perform the labs we did today, please post your request here and I will find a way to share those demo scripts.
- SQL Tuning made much easier with SQLTXPLAIN (SQLT)
- Using Adaptive Cursor Sharing (ACS) to produce multiple Optimal Plans
Scripts are now uploaded into acs_demo. Download this "doc" file and change its type to "zip" before opening it.
What is new in SQLT? A couple of new stand-alone utilities. Both can be used without having to install SQLT, and both are only for 11g+. Look for sqlt/utl/mon and sqlt/utl/spm. The former monitors executions taking longer than 5 seconds (it requires the Oracle Tuning Pack and uses the SQL Monitor report). The latter facilitates the use of SQL Plan Management (SPM), especially for plan migrations. For a complete log of changes to SQLT, review the changes log here: SQLT Changes Log as of 22.214.171.124. The most recent set is this:
126.96.36.199 May 10, 2013
- BUG: Fix TRCA parsing errors due to missing CURSOR information: invalid call at line 27 "1" "1" "" "2".
- ENH: List of Cursors to include now “is_obsolete” flag.
- ENH: Improved performance on SQLT TC restore script. This SQL was taking too long to execute: UPDATE SQLI$_STATTAB_TEMP SET C5 = :B2 WHERE STATID = :B1 AND TYPE IN ('C', 'I', 'T').
- ENH: Traces generated by SQLT are now limited to 200 MB per file.
- ENH: New stand-alone set of scripts to migrate a SQL Plan Baseline. Look for readme in sqlt/utl/spm.
- ENH: New stand-alone monitoring tool to capture executions of SQL statements taking more than 5 seconds. Look for readme in sqlt/utl/mon.
188.8.131.52 April 5, 2013
- BUG: SQL Tuning Advisor was invoked only when SQL existed in memory. Now it is invoked also if SQL resides only in AWR.
- BUG: File sqlt_instructions.html had links with XPREXT tag instead of XTRACT.
- ENH: TKPROF may error out under some circumstances. SQLT now displays a message to “ignore this error”.
- ENH: Improved performance of SQLT COMPARE when number of partitions is high.
- ENH: Collection of dba_sql_patches only happens on 11g+. It used to error out on 10g.
- ENH: Following error shows now on log file but not in MAIN html report:
sqlt$a: *** i:DBMS_SPACE.CREATE_INDEX_COST: ORA-01031: insufficient privileges
sqlt$a: *** i:index_name:XIE1_PCRD_APPROVER
- ENH: Best and Worst plans now consider last those plans with no fetches. Before this change it was possible to see killed executions (fetches 0) ranking as good plans.
- ENH: Purge of TRCA global temporary tables uses now a TRUNCATE instead of a DELETE.
- ENH: SQLT produces now a SQLT_installation_logs_archive.zip with all installation logs.
- ENH: Include Product Component Version in installation and main methods logs.
- ENH: SQLT log zip includes now alert.log and spfile.
- ENH: SQLDX (SQL Dynamic eXtractor) includes now details about tables accessed by execution plans of SQL provided.
184.108.40.206 March 5, 2013
- ENH: New script sqlt/utl/sqltq.sql to generate “q” script that contains binds and sql text for given SQL_ID.
- ENH: Script sqlt/utl/coe_gen_sql_patch.sql inputs now CBO Hints text in addition to SQL_ID.
- ENH: New main methods XPREXT and XPREXC, similar to XTRACT and XECUTE but disabling several SQLT features as per sqlt/run/sqltcommon11.sql in order to improve SQLT performance.
220.127.116.11 March 1, 2013
- BUG: Following SQLT Parameters were not updatable: sql_monitor_reports, awr_reports, addm_reports, sta_time_limit_secs
Workaround: Execute this update before using the API to change the value: update SQLTXPLAIN.sqli$_parameter set low_value = 0, high_value = 9999 where name = 'addm_reports';
- BUG: Eliminate duplicate entries on views: sqlt$_gv$act_sess_hist_pl_v and sqlt$_dba_act_sess_hist_pl_v.
- BUG: Incorrect observation “Table rebuild candidate.” when getting error on DBMS_SPACE.CREATE_TABLE_COST.
- BUG: Incorrect observation “Index coalesce candidate.” when getting error on DBMS_SPACE.CREATE_INDEX_COST.
- BUG: Trap this error on 10g:
ORA-00942: table or view does not exist
sqlt$a: *** d:INSERT INTO sqlt$_gv$cell_state (statement_id, statid, begin_end_flag) SELECT :statement_id, :statid, :begin_end_flag FROM gv$cell_state
- ENH: Remove health-check references to Bug 3620168, which was fixed on 10.2.
- ENH: COUNT(*) on Tables referenced by SQL includes now a FULL(t) Hint. This is to improve performance of COUNT.
- ENH: SQLT now sets TRCA to skip extents capture (by setting TRCA tool parameter “capture_extents” to N during SQLT installation). This is to improve XECUTE performance.
- ENH: New SQLT parameter "tcb_export_pkg_body" controls whether the 11.2+ Test Case Builder exports the bodies of the packages referenced in the SQL. Default is FALSE.
- ENH: View DBA_HIST_PARAMETER is now excluded from SQLT repository export file. This is to improve performance of export and to reduce size of zip.
- ENH: Value of parameter “_db_file_exec_read_count” is now displayed in MAIN and COMPARE reports.
- ENH: 11g PLSQL_CODE_TYPE is set to NATIVE by default (to improve performance). 10g defaults back to INTERPRETED.
- ENH: SQLT XPLORE now iterates "_optimizer_max_permutations" on values: 100, 2000, 40000, 79999 and 80000.
- ENH: SQL Patches are now displayed on the MAIN report.
- ENH: New script sqlt/utl/flush_cursor.sql flushes one cursor out of the shared pool.
- ENH: New script sqlt/utl/coe_gen_sql_patch.sql generates a SQL Patch for one SQL with some diagnostic CBO Hints. It also turns on event 10053 trace for the same SQL.
Execution Plans have a strong dependency on CBO statistics. In corner cases, a small change in schema object statistics creates havoc in an Execution Plan. If you strongly suspect the only change to your environment was a CBO statistics gathering, you may consider restoring them to a prior version and testing the effect on your SQL Execution Plan. If you consider doing this, try it first on an actual "test" environment. You could use SQLT XHUME for that, but that is more of an advanced SQLT method… A topic that I will leave for some other day.
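As a rough sketch of what such a restore looks like (the schema SCOTT and table EMP are placeholders; by default DBMS_STATS retains statistics history for 31 days):

```sql
-- Hypothetical example: restore SCOTT.EMP statistics as they were 3 days ago.
BEGIN
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'SCOTT',
    tabname         => 'EMP',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '3' DAY,
    no_invalidate   => FALSE); -- invalidate dependent cursors so plans re-parse
END;
/
```

Remember that restoring statistics this way does not touch pending statistics, and test it outside production first.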
Anyways, if you want to see what I call "Statistics Versions", which are prior statistics together with Current and Pending statistics, you may want to make use of some of the SQLT views, available under sqlt/install/sqcvw.sql; and also copied into this sqlt_views.doc file (just change the file type to *.txt after download). Then search for these views almost at the top of the file:
You do not have to install or use SQLT in order to benefit from these views. Simply grab them from the shared file and create your own version. Then you can monitor fluctuations or do whatever you need to do with this information.
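Even without SQLT or these views, the data dictionary records when statistics changed. A simple sketch against DBA_HIST-style dictionary view DBA_TAB_STATS_HISTORY (SCOTT.EMP is a placeholder table):

```sql
-- When were statistics on SCOTT.EMP (re)gathered? (hypothetical table)
SELECT table_name, partition_name, stats_update_time
  FROM dba_tab_stats_history
 WHERE owner = 'SCOTT'
   AND table_name = 'EMP'
 ORDER BY stats_update_time DESC;
```

This tells you when the statistics changed; the SQLT views add the actual prior values so you can see what changed.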
SQL Plan Management (SPM) has been available since the first release of 11g. As you know, SPM is the new technology that provides Plan Stability with some extra Plan Control and Management features. Maria Colgan has done an excellent job documenting SPM functionality in 4 of her popular blog postings:
- Creating SQL plan baselines
- SPM Aware Optimizer
- Evolving SQL Plan Baselines
- User Interfaces and Other Features
A question that I often get is: How do I move this good plan from system A into system B? To me, this translates into: How do I migrate an Execution Plan? And if source and target systems are 11g, the answer is: Use SQL Plan Management (SPM).
Migrating a Plan using SPM
Assuming both source and target systems are on 11g, I suggest one of the two approaches below. If the source is 10g and the target is 11g, then the 2nd approach below would work. In both cases the objective is to create a SQL Plan Baseline (SPB) in the target system out of a known plan from the source system.
Option 1: Create SPB on source then migrate SPB into target
- Create SQL Plan Baseline (SPB) in Source
- From Memory; or
- From AWR (requires Diagnostics Pack license)
- Package & Export SPB from Source
- Import & Restore SPB into Target
Cons: Generates a SPB in Source system
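The steps of Option 1 map onto documented DBMS_SPM APIs. A minimal sketch (the staging table name, SQL_ID, plan hash value and SQL handle are placeholders you supply):

```sql
-- Source: create the SPB from a plan in memory
VAR n NUMBER
EXEC :n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&&sql_id.', plan_hash_value => &&phv.);

-- Source: create a staging table and pack the baseline into it
EXEC DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'SPM_STGTAB');
EXEC :n := DBMS_SPM.PACK_STGTAB_BASELINE(table_name => 'SPM_STGTAB', sql_handle => '&&sql_handle.');

-- Move SPM_STGTAB to the target with Data Pump (expdp/impdp), then on Target:
EXEC :n := DBMS_SPM.UNPACK_STGTAB_BASELINE(table_name => 'SPM_STGTAB');
```

To load from AWR instead of memory you would go through a SQL Tuning Set, which is essentially Option 2 below.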
Option 2: Create SQL Tuning Set (STS) on source, migrate STS into target, promote STS into SPB in target
- Create SQL Tuning Set (STS) in Source (requires Tuning Pack license)
- From Memory; or
- From AWR (requires Diagnostics Pack license)
- Package & Export STS from Source
- Import & Restore STS into Target
- Create SPB from STS in Target
Pros: No SPB is created in Source system
Cons: Requires license for SQL Tuning Pack
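Option 2 can be sketched with the DBMS_SQLTUNE and DBMS_SPM APIs (STS and staging table names, and the SQL_ID, are placeholders):

```sql
-- Source: create an STS and load the plan of one SQL_ID from memory
EXEC DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'MY_STS');
DECLARE
  c DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN c FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                   basic_filter   => 'sql_id = ''&&sql_id.''',
                   attribute_list => 'ALL')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'MY_STS', populate_cursor => c);
END;
/

-- Source: stage and pack the STS, then move the staging table with Data Pump
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_STGTAB');
EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'MY_STS', staging_table_name => 'STS_STGTAB');

-- Target: restore the STS and promote its plan(s) into a SQL Plan Baseline
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(sqlset_name => '%', replace => TRUE, staging_table_name => 'STS_STGTAB');
VAR n NUMBER
EXEC :n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'MY_STS');
```

For a 10g source you would use DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY instead of SELECT_CURSOR_CACHE to pull the plan from AWR.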
How can SQLTXPLAIN (SQLT) help?
For quite some time, SQLT has been generating an STS for each Plan Hash Value (PHV) of the SQL being analyzed. The STS for each PHV created on the source system is also stored inside the SQLT repository and included in the export of that repository. By doing this every time, options 1 and 2 above are simplified. If we want to promote a Plan into a SPB in the source system, we only have to execute an API that takes the Plan from the STS and creates the SPB; the dynamic readme included with SQLT has the exact command. And if we want to create a SPB on a target system out of a SQLT taken from a source system, we restore the SQLT repository into the target system, then restore the STS out of the SQLT repository, and last create the SPB out of the STS. All these steps are clearly documented in the SQLT dynamic readme, including exact commands. There is one caveat, though: you need SQLT in the source and you must restore its repository in the target…
Stand-alone scripts to Migrate a Plan using SPM
Options 1 and 2 above list the steps to take a plan from a source system and implement it as a SPB in a target system. The question is: how exactly do I perform each of the steps? Yes, there are APIs for each step, but some are a bit difficult to use. That is why I have created a set of scripts that pretty much facilitate each of the steps. No magic here, only some time savings. If you want to use these scripts, look for SQLT directory sqlt/utl/spm, which will be available with SQLT 18.104.22.168 on May 10, 2013. If you need these scripts before May 10, then please send me an email or post a comment here.
When an Oracle user process performs poorly, meaning slower than business requirements, the most common way to start its diagnostics is by turning SQL Trace on, then proceeding to review the trace file that is created. This process is usually referred to as "tracing" and can be accomplished by several methods, including DBMS_MONITOR and ALTER SESSION commands.
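For instance (a sketch; the SID and SERIAL# values are placeholders you supply), SQL Trace can be turned on for your own session with ALTER SESSION, or for another session with DBMS_MONITOR:

```sql
-- Own session: event 10046 level 12 (waits and binds)
ALTER SESSION SET tracefile_identifier = 'my_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the slow workload, then:
ALTER SESSION SET EVENTS '10046 trace name context off';

-- Another session, identified by its SID and SERIAL# (placeholders):
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => &&sid., serial_num => &&serial., waits => TRUE, binds => TRUE);
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => &&sid., serial_num => &&serial.);
```

Either way, the resulting trace file lands in the diagnostic trace directory, ready for TKPROF or TRCA.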
Back to the question that motivated this Blog entry: What is the difference between TKPROF and Trace Analyzer? Both input a trace file generated by SQL Trace (also known as EVENT 10046) and both produce a summary out of the input trace.
TKPROF
It has been around since early releases of Oracle. It is included as part of the database software. It is fast. It produces an output report in plain text format. It performs number crunching in memory. It does not have to connect to a database (it can if you want to include Explain Plans in its report). It provides a nice and clean summary of performance metrics for each SQL statement included in the trace file.
Trace Analyzer TRCANLZR (TRCA)
It is an add-on tool and not part of the Oracle database product. It is free to download and use from My Oracle Support (MOS), previously known as Metalink (under note 224270.1). It takes longer to execute than TKPROF, but it provides more diagnostics details. It connects to a database (the same one where the trace was produced, or some other). It relies on a database to store the parsed trace, which helps if you want to do some data mining on the contents of the trace. Its output is HTML as well as text, and it is easier to read than TKPROF's, especially regarding the Execution Plan. Bottom line: it is like TKPROF on steroids, as per Eddie Awad's Blog. See sample below:
Both TKPROF and TRCA are great tools. Keep in mind that there are also some excellent tools external to Oracle in this area. If you need more advanced functionality than that provided by TKPROF or TRCA, I encourage you to explore these external tools. Personally, I have evaluated Method R and I liked it very much. Also, if you want to learn more about the contents of a SQL Trace, you may be interested in this book: Optimizing Oracle Performance by Cary Millsap and Jeff Holt, published by O'Reilly Media.
In case you have never seen one, you can browse this SQLT XTRACT Report. This sample is a PDF file. The actual report is generated as HTML, which eases navigation.
There is this SQL that is constantly executed by many users. It executes in under a second, thousands of times per day. Every once in a while the same SQL takes several seconds, but these incidents are uncommon. Tracing would cause significant overhead, since there is no known way to predict when an execution may take longer. In this scenario, the concern is finding the root cause of these elusive spikes in the performance of this SQL.
Use the SQL Monitor report to watch for any SQL that takes longer than a few seconds, then produce a comprehensive report that brings some initial light to these spikes. The challenge is to capture this information 24×7 without imposing significant overhead.
Implement a script that loops over the SQL Monitor views and captures any SQL that took, or is taking, more than a few seconds. Then review these reports, searching by sql_text.
1. Open a new session and execute mon_repository.sql followed by mon_capture.sql. The latter will loop indefinitely.
2. On a second session execute mon_reports.sql every so often.
3. Review table v_sql_monitor by sql_text, then read the corresponding report in the zip.
REM $Header: mon_repository.sql 22.214.171.124.1 2013/04/24 carlos.sierra $

DROP TABLE v_sql_monitor;

CREATE TABLE v_sql_monitor (
  sql_id             VARCHAR2(13),
  key                NUMBER,
  sql_exec_start     DATE,
  sql_exec_id        NUMBER,
  status             VARCHAR2(19),
  first_refresh_time DATE,
  last_refresh_time  DATE,
  username           VARCHAR2(30),
  capture_date       DATE,
  report_date        DATE,
  sql_text           VARCHAR2(2000),
  mon_report         CLOB,
  PRIMARY KEY (sql_id, key));
REM $Header: mon_capture.sql 126.96.36.199.1 2013/04/24 carlos.sierra $

DECLARE
  l_mon_report CLOB;
BEGIN
  LOOP
    INSERT INTO v_sql_monitor (
      sql_id, key, sql_exec_start, sql_exec_id, status,
      first_refresh_time, last_refresh_time, sql_text, username )
    SELECT v.sql_id, v.key, v.sql_exec_start, v.sql_exec_id, v.status,
           v.first_refresh_time, v.last_refresh_time, v.sql_text, v.username
      FROM v$sql_monitor v
     WHERE v.process_name = 'ora'
       AND v.sql_text IS NOT NULL
       AND UPPER(v.sql_text) NOT LIKE 'BEGIN%'
       AND UPPER(v.sql_text) NOT LIKE 'DECLARE%'
       AND (v.status LIKE 'DONE%'
         OR (v.status = 'EXECUTING'
        AND (v.last_refresh_time - v.first_refresh_time) > 1/24/60 /* 1 min */))
       AND NOT EXISTS (
           SELECT NULL
             FROM v_sql_monitor t
            WHERE t.sql_id = v.sql_id
              AND t.key = v.key);

    FOR i IN (SELECT t.*, t.ROWID row_id
                FROM v_sql_monitor t
               WHERE t.capture_date IS NULL)
    LOOP
      l_mon_report := DBMS_SQLTUNE.REPORT_SQL_MONITOR (
        sql_id         => i.sql_id,
        sql_exec_start => i.sql_exec_start,
        sql_exec_id    => i.sql_exec_id,
        report_level   => 'ALL',
        type           => 'ACTIVE' );

      UPDATE v_sql_monitor
         SET mon_report = l_mon_report,
             capture_date = SYSDATE
       WHERE ROWID = i.row_id;
    END LOOP;

    COMMIT;
    DBMS_LOCK.SLEEP(60); -- sleep 1 min
  END LOOP;
END;
/
REM $Header: mon_reports.sql 188.8.131.52.1 2013/04/24 carlos.sierra $

SET ECHO OFF FEED OFF VER OFF SHOW OFF HEA OFF LIN 2000 NUM 20 NEWP NONE PAGES 0 LONG 2000000 LONGC 2000 SQLC MIX TAB ON TRIMS ON TI OFF TIMI OFF ARRAY 100 NUMF "" SQLP SQL> SUF sql BLO . RECSEP OFF APPI OFF AUTOT OFF SERVEROUT ON SIZE UNL;

SPO reports_driver.sql;
PRO SET ECHO OFF FEED OFF VER OFF SHOW OFF HEA OFF LIN 2000 NUM 20 NEWP NONE PAGES 0 LONG 2000000 LONGC 2000 SQLC MIX TAB ON TRIMS ON TI OFF TIMI OFF ARRAY 100 NUMF "" SQLP SQL> SUF sql BLO . RECSEP OFF APPI OFF AUTOT OFF SERVEROUT ON SIZE UNL;;

BEGIN
  FOR i IN (SELECT t.sql_id, t.key, t.ROWID row_id
              FROM v_sql_monitor t
             WHERE t.report_date IS NULL)
  LOOP
    DBMS_OUTPUT.PUT_LINE('SPO sql_id_'||i.sql_id||'_key_'||i.key||'.html;');
    DBMS_OUTPUT.PUT_LINE('SELECT mon_report FROM v_sql_monitor WHERE sql_id = '''||i.sql_id||''' AND key = '||i.key||';');
    DBMS_OUTPUT.PUT_LINE('SPO OFF;');
    DBMS_OUTPUT.PUT_LINE('UPDATE v_sql_monitor SET report_date = SYSDATE WHERE ROWID = '''||i.row_id||''';');
    DBMS_OUTPUT.PUT_LINE('HOS zip -m mon_reports sql_id_'||i.sql_id||'_key_'||i.key||'.html');
  END LOOP;
END;
/

PRO COMMIT;;
PRO SET TERM ON ECHO OFF FEED 6 VER ON SHOW OFF HEA ON LIN 80 NUM 10 NEWP 1 PAGES 14 LONG 80 LONGC 80 SQLC MIX TAB ON TRIMS OFF TI OFF TIMI OFF ARRAY 15 NUMF "" SQLP SQL> SUF sql BLO . RECSEP WR APPI OFF SERVEROUT OFF AUTOT OFF;;
SPO OFF;
@reports_driver.sql
HOS zip -m mon_reports reports_driver.sql
HOS unzip -l mon_reports

SET TERM ON ECHO OFF FEED 6 VER ON SHOW OFF HEA ON LIN 80 NUM 10 NEWP 1 PAGES 14 LONG 80 LONGC 80 SQLC MIX TAB ON TRIMS OFF TI OFF TIMI OFF ARRAY 15 NUMF "" SQLP SQL> SUF sql BLO . RECSEP WR APPI OFF SERVEROUT OFF AUTOT OFF;