Some of you may not know what the OakTable is. I quote here a very precise description right from the OakTable's main page: <<< The OakTable network is a network for the Oracle scientist, who believes in better ways of administering and developing Oracle based systems. The network is an informal organisation, which will get together at various Oracle events. >>> To me, it is a lot more than that, since its members are a very collaborative bunch of smart and dedicated folks. A common denominator I have perceived so far is an honest desire to apply scientific methods to the complex issues we encounter in the Oracle world. If you are serious about learning Oracle and its intricacies, you may want to do as I do: "feast on the crumbs from the OakTable".
Oaktable World is a conference that happens to coincide with OOW. This year I have the honor of speaking at Oaktable World. According to the agenda, I speak on Tuesday, September 24 at 10am. My session is at the same time as Jonathan Lewis's, so I am prepared to speak to myself (unless, lucky me, you happen to not know who JL is). Anyway, I will be speaking about Adaptive Cursor Sharing (ACS) and how it actually works. I will do some live demos, and my goal is that all participants (if any) fully understand how and when ACS produces multiple optimal execution plans for a given SQL.
If you plan to attend any of the Oaktable World sessions, please be sure to register. Space is very limited.
I will be attending as many of the other Oaktable World sessions as I can; otherwise I will be at the Enkitec booth #501 in Moscone South, in case you want to stop by and say hello.
Ric Van Dyke from Hotsos asked me yesterday if I knew of a PL/SQL function that takes a SQL_TEXT as input and outputs a SQL_ID. I kind of remembered that Tanel Poder had written something in this regard, and I also recalled seeing an API under DBMS_UTILITY that would provide the SQL_ID given a SQL_TEXT. When I researched both, I was only able to get from Tanel a function to generate the SQL HASH_VALUE out of a given SQL_ID, and from DBMS_UTILITY an API to get the SQL HASH_VALUE out of a SQL_TEXT provided as a VARCHAR2. But I could not find one written in PL/SQL to compute the SQL_ID given the SQL_TEXT. Still, Slavik Markovich gave me the insight I needed to build a PL/SQL function that computes the SQL_ID out of the SQL text; Slavik's own function is written in Python.
So, thanks to Ric for his challenging question/request, and to Tanel and Slavik for their valuable insight, I was encouraged to develop this function. Feel free to use it! I have seen the same question asked a few times in the past, so at least I know we have such a function now!
GRANT EXECUTE ON SYS.DBMS_CRYPTO TO csierra;

CREATE OR REPLACE FUNCTION compute_sql_id (sql_text IN CLOB)
RETURN VARCHAR2
IS
  BASE_32 CONSTANT VARCHAR2(32) := '0123456789abcdfghjkmnpqrstuvwxyz';
  l_raw_128 RAW(128);
  l_hex_32 VARCHAR2(32);
  l_low_16 VARCHAR2(16);
  l_q3 VARCHAR2(8);
  l_q4 VARCHAR2(8);
  l_low_16_m VARCHAR2(16);
  l_number NUMBER;
  l_idx INTEGER;
  l_sql_id VARCHAR2(13);
BEGIN
  /* use md5 algorithm on sql_text and produce 128 bit hash */
  l_raw_128 := SYS.DBMS_CRYPTO.hash(TRIM(CHR(0) FROM sql_text)||CHR(0), SYS.DBMS_CRYPTO.hash_md5);
  l_hex_32 := RAWTOHEX(l_raw_128); /* 32 hex characters */
  l_low_16 := SUBSTR(l_hex_32, 17, 16); /* we only need lower 16 */
  l_q3 := SUBSTR(l_low_16, 1, 8); /* 3rd quarter (8 hex characters) */
  l_q4 := SUBSTR(l_low_16, 9, 8); /* 4th quarter (8 hex characters) */
  /* need to reverse order of each of the 4 pairs of hex characters */
  l_q3 := SUBSTR(l_q3, 7, 2)||SUBSTR(l_q3, 5, 2)||SUBSTR(l_q3, 3, 2)||SUBSTR(l_q3, 1, 2);
  l_q4 := SUBSTR(l_q4, 7, 2)||SUBSTR(l_q4, 5, 2)||SUBSTR(l_q4, 3, 2)||SUBSTR(l_q4, 1, 2);
  /* assemble lower 16 back after reversing order on each quarter */
  l_low_16_m := l_q3||l_q4;
  /* convert to number */
  SELECT TO_NUMBER(l_low_16_m, 'xxxxxxxxxxxxxxxx') INTO l_number FROM DUAL;
  /* 13 pieces base-32 (5 bits each) make 65 bits. we do have 64 bits */
  FOR i IN 1 .. 13
  LOOP
    l_idx := TRUNC(l_number / POWER(32, (13 - i))); /* index on BASE_32 */
    l_sql_id := l_sql_id||SUBSTR(BASE_32, (l_idx + 1), 1); /* stitch 13 characters */
    l_number := l_number - (l_idx * POWER(32, (13 - i))); /* for next piece */
  END LOOP;
  RETURN l_sql_id;
END compute_sql_id;
/
SHOW ERRORS;

-- test
SET SERVEROUT ON;
SELECT compute_sql_id('select ''Slavik'' from dual') FROM DUAL;
-- you should get 29schpgjyfxux
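For reference, the same computation can be sketched in Python (in the spirit of Slavik's original function), using only the standard hashlib module. The little-endian word handling below mirrors the byte-pair reversal done in the PL/SQL function above:

```python
import hashlib

# same base-32 alphabet as the PL/SQL version (no 'e', 'i', 'l', 'o')
BASE_32 = '0123456789abcdfghjkmnpqrstuvwxyz'

def compute_sql_id(sql_text: str) -> str:
    # MD5 of the statement text plus a trailing NUL character
    digest = hashlib.md5(sql_text.encode() + b'\x00').digest()
    # SQL_ID comes from the low 64 bits of the hash; each 4-byte word is
    # little-endian, which is what the byte-pair reversal accomplishes
    msb = int.from_bytes(digest[8:12], 'little')
    lsb = int.from_bytes(digest[12:16], 'little')
    n = (msb << 32) | lsb
    # 13 base-32 pieces of 5 bits each, most significant first
    return ''.join(BASE_32[(n >> (5 * i)) & 31] for i in range(12, -1, -1))

print(compute_sql_id("select 'Slavik' from dual"))  # 29schpgjyfxux
```

As a side note, the low 32 bits of that same number (lsb above) should correspond to the HASH_VALUE that V$SQL reports for the statement, which is what Tanel's function and the DBMS_UTILITY API return.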
SQLTXPLAIN is a SQL tuning tool widely used by the Oracle community. Available through My Oracle Support (MOS) under document 215187.1, this free tool is available for download to anyone with MOS access. It has expedited the resolution of SQL tuning issues thousands of times, and many Oracle DBAs and developers benefit from its use on a daily basis.
Stelios Charalambides has done an excellent job writing a book on this topic. In his book, Stelios covers many aspects of SQLTXPLAIN and some related topics. I highly recommend getting a copy if you want to learn more about SQLTXPLAIN. It is available at Amazon and many other retailers.
The new two-day SQLTXPLAIN workshop offered by Enkitec (an Oracle Platinum business partner and my employer) is a completely new course that interleaves "how to use SQLTXPLAIN effectively" with important related SQL tuning topics such as Plan Flexibility and Plan Stability. This hands-on workshop offers participants the unique opportunity to fully understand the contents of SQLTXPLAIN and its vast output through an interactive session. About half the time is dedicated to short guided labs, while the other half uses presentations and demos. This workshop is packed with content; it was a real challenge to fit so much material into only two days, but I am very pleased with the result: an intensive, hands-on knowledge-transfer workshop on SQLTXPLAIN and SQL tuning!
The first session of this workshop is scheduled for November 7-8 in Dallas, Texas. I expect this pilot session to fill up fast. Additional sessions, including onsite ones, will be offered during 2014. I hope to meet many of you face to face on November 7!
SQLTXPLAIN maintains a repository of metadata associated with the SQL statement being analyzed. A subset of this SQLT repository is automatically exported every time one of the main methods (like XTRACT) is used. A question I sometimes get is: what do you do with this SQLT repository? Well, in most cases: nothing. That is because most of the diagnostics we need are already included in the main report. But in some very particular cases we may want to dig into the SQLT repository. For example, in a recent case I wanted to understand some patterns in a particular wait event that happened to become "slower" after the application of an Exadata patch. Then, when IO Resource Manager (IORM) was changed from Basic to Auto, these waits became "fast" again. How to measure that? How to observe any trends and correlate them with the times when the patch was applied and when IORM was adjusted? Fortunately, SQLT had been used, and it had captured ASH data out of AWR before it aged out, so I was able to use it to "see" the trends below.
Now the question becomes: how do I produce this kind of graph? Well, with Excel it is easy; we just need to bring the relevant piece of the SQLT repository into Excel. For that, I first import the whole SQLT repository for a given sqlt_sNNNNN_*.zip file into my local SQLT repository (the command is spelled out in sqlt_sNNNNN_readme.html under "import SQLT repository"), then I use SQL Developer to export the output of a query into XLS (or CSV). In this case, the SQL I used was the one below.
If you try to use the SQLT repository for some similar data analysis and you struggle with it, please let me know and I will help you out.
WITH
awr_ash AS (
SELECT DISTINCT -- two or more statements can include same snaps
       sample_time,
       sql_plan_hash_value,
       sql_plan_line_id,
       wait_class,
       event,
       time_waited,
       ROUND(time_waited / 1e3) time_waited_ms,
       current_obj#,
       machine,
       delta_time,
       delta_read_io_requests,
       delta_write_io_requests,
       delta_read_io_bytes,
       delta_write_io_bytes,
       delta_interconnect_io_bytes,
       ROUND(delta_read_io_requests / (delta_time / 1e6)) read_io_requests_per_sec,
       ROUND(delta_write_io_requests / (delta_time / 1e6)) write_io_requests_per_sec,
       ROUND(delta_read_io_bytes / (delta_time / 1e6)) read_io_bytes_per_sec,
       ROUND(delta_write_io_bytes / (delta_time / 1e6)) write_io_bytes_per_sec,
       ROUND(delta_interconnect_io_bytes / (delta_time / 1e6)) interconnect_io_bytes_per_sec
  FROM sqlt$_dba_hist_active_sess_his
 WHERE statement_id IN (90959, 90960, 90962, 90963)
   AND session_state = 'WAITING'
   AND time_waited > 0
   AND delta_time > 0
   AND wait_class IS NOT NULL
),
awr_ash_grp AS (
SELECT TRUNC(sample_time) sample_day,
       event,
       COUNT(*) samples,
       ROUND(AVG(time_waited_ms)) avg_time_waited_ms,
       ROUND(AVG(read_io_requests_per_sec)) read_io_requests_per_sec,
       ROUND(AVG(write_io_requests_per_sec)) write_io_requests_per_sec,
       ROUND(AVG(read_io_bytes_per_sec)) read_io_bytes_per_sec,
       ROUND(AVG(write_io_bytes_per_sec)) write_io_bytes_per_sec,
       ROUND(AVG(interconnect_io_bytes_per_sec)) interconnect_io_bytes_per_sec
  FROM awr_ash
 GROUP BY TRUNC(sample_time), event
),
cell_smart_reads AS (
SELECT sample_day,
       TO_CHAR(sample_day, 'DD-MON-YYYY') sample_date,
       samples,
       avg_time_waited_ms,
       read_io_requests_per_sec,
       read_io_bytes_per_sec,
       ROUND(read_io_bytes_per_sec / (1024 * 1024)) read_io_mb_per_sec,
       interconnect_io_bytes_per_sec,
       ROUND(interconnect_io_bytes_per_sec / (1024 * 1024), 1) interconnect_io_mb_per_sec
  FROM awr_ash_grp
 WHERE event = 'cell smart table scan'
)
SELECT sample_date,
       samples,
       avg_time_waited_ms,
       read_io_requests_per_sec,
       read_io_mb_per_sec,
       interconnect_io_mb_per_sec
  FROM cell_smart_reads
 ORDER BY sample_day;
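If Excel is not at hand, the per-day averaging that the awr_ash_grp block of the query performs can also be done directly on the exported CSV with a few lines of Python. This is just a sketch; the column names and sample rows below are hypothetical stand-ins for an actual SQL Developer export:

```python
import csv
import io
from collections import defaultdict

# hypothetical extract: a few per-sample rows exported from the SQLT repository as CSV
raw = io.StringIO("""sample_day,event,time_waited_ms
01-AUG-2013,cell smart table scan,12
01-AUG-2013,cell smart table scan,18
02-AUG-2013,cell smart table scan,45
""")

# group samples by (day, event), as GROUP BY TRUNC(sample_time), event does
groups = defaultdict(list)
for row in csv.DictReader(raw):
    groups[(row['sample_day'], row['event'])].append(float(row['time_waited_ms']))

# per-group average, rounded like ROUND(AVG(time_waited_ms)) in the SQL
avg_ms = {key: round(sum(waits) / len(waits)) for key, waits in groups.items()}
for (day, event), avg in sorted(avg_ms.items()):
    print(day, event, avg)
```

From there the daily averages can be fed to any plotting tool to reproduce the trend graph.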
Mauro Pagano is doing a great job keeping SQLT well taken care of! Version 12.1.01 is now available for download from MOS (Metalink) doc_id 215187.1. With a monthly release cycle, it is always a good idea to download the latest version at least every quarter. This latest version includes some adjustments for 12c and some other changes as seen below. Here is the list of changes for the past 3 months. The saga continues!
12.1.01 August 19, 2013
- BUG: Incorrect date wrapping in report “Non-Default or Modified Session Parameters”
- BUG: SQLHC Table Summary reporting section was skipping tables if they had no index defined on them
- BUG: The logic in “Newest Value” column in Modified System Parameters needed to be adjusted; the value was sometimes incorrect
- BUG: CBO Environment section was reporting a parameter many times if that param had been changed more than once
- ENH: SQLT and SQLHC work on 12c!
- ENH: Improved SQLHC HC on corrupt histograms for 12c
- ENH: SQLT can now create user SQLTXADMIN in presence of password complexity verification function
- ENH: Stat table creation now supports Unicode charset in both SQLT TC and SQLT TCX
- ENH: Importing SQLT TC now handles IOT remapping
- ENH: Fix for 12c
sqlt$a: *** d:ORA-12899: value too large for column "SQLTXPLAIN"."SQLT$_DBA_TAB_COLS"."LOW_VALUE" (actual: 45, maximum: 40).
- ENH: Added “version date” to SQLHC to make it easier to recognize if an old version is used
- ENH: Added “user” to SQLHC reporting the db login user
- ENH: SQLDX is now executed after SQLT/SQLHC completes; this helps when SQLDX takes a long time to run, since the zip file for SQLT/SQLHC is ready to be moved even while SQLDX is still running
- ENH: Added NUM_CPUs, NUM_COREs and NUM_CPU_SOCKETs info from V$OSSTAT to SQLT/SQLHC to help with multicore/multisocket CPUs
- ENH: Added Historical SQL Statistics – Total section to SQLHC
- ENH: SQLHC Exec now captures session statistics for the SQL analyzed
- ENH: New HC on Mviews not used in the SQL/plan that are defined on top of tables involved in the SQL/plan
- ENH: New column “If Offloadable” and “IO Saved %” on Exadata in Performance Statistics section
- ENH: New HC on Rewrite Equivalences defined by the owner of the objects involved in the SQL/plan
126.96.36.199 July 15, 2013
- BUG: Fixed issue with XPREXT and XPREXC where ASH was not turned off completely
- ENH: New health-check on plans with same PHV but predicates with different ordering
- ENH: SQLHC and SQLHCXEC now provide information about DBMS_STATS preferences
- ENH: SQLT shows when a parameter value has been changed (when this info is available in AWR)
- ENH: New link to Table Statistics Extensions directly available in the top menu
188.8.131.52 June 10, 2013
- ENH: New health-check on SDREADTIM/MREADTIM in Exadata
- ENH: Added SQLT parameter tcb_sampling_percent to control TCB sampling data percentage
- ENH: If no SQL is identified using the input string as HASH_VALUE then a search on PLAN_HASH_VALUE is performed
- ENH: Introduced new parameter c_ash_hist_days to split AWR data collection from ASH data collection and to limit the amount of info extracted from ASH
- ENH: Added scale information for TIMESTAMP datatype in Captured Binds sections
- ENH: ASH reports are now generated 1 per hour/instance starting from the interval where the SQL was most impacting
- ENH: SQLDX (SQL Dynamic eXtractor) collects only up to 10k rows per table/view
I need a large table to do some testing of SQLTXPLAIN on Exadata. The seeded SALES table from the SH sample schema is not big enough for the test I want to do: SH.SALES has fewer than 1M rows, and I need a few billion. So I could either fabricate my own data or expand the SH.SALES table. I opted for the latter so I could join this big table with some dimension tables from the same sample schema.
This is the script I used. It is still running; I expect 31B rows out of it. I am sharing it here in case you need a big sales table.
CREATE TABLE sales_big AS
WITH
p AS (SELECT prod_id FROM sh.products WHERE ROWNUM <= 50), -- max 50
c AS (SELECT cust_id FROM sh.customers WHERE ROWNUM <= 5000), -- max 50000
t AS (SELECT time_id FROM sh.times WHERE ROWNUM <= 500), -- max 1000
h AS (SELECT channel_id FROM sh.channels WHERE ROWNUM <= 5), -- max 5
r AS (SELECT promo_id FROM sh.promotions WHERE ROWNUM <= 50) -- max 500
-- 50 * 5000 * 500 * 5 * 50 = 31,250,000,000
SELECT p.prod_id,
       c.cust_id,
       t.time_id,
       h.channel_id,
       r.promo_id,
       ROUND(ABS(DBMS_RANDOM.NORMAL) * 100) quantity_sold,
       ROUND(DBMS_RANDOM.VALUE(0, 1000) * ABS(DBMS_RANDOM.NORMAL), 2) amount_sold
  FROM p, c, t, h, r;
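As a quick sanity check on the expected size: the Cartesian join of the five CTEs simply multiplies their ROWNUM caps. Here is the arithmetic in Python, with random.gauss standing in for DBMS_RANDOM.NORMAL (an approximation for illustration, not Oracle's generator):

```python
import random

# ROWNUM caps from the CTEs: products, customers, times, channels, promotions
caps = [50, 5000, 500, 5, 50]
rows = 1
for n in caps:
    rows *= n
print(f'{rows:,}')  # 31,250,000,000 rows in the Cartesian join

# one hypothetical fact row, mimicking the two DBMS_RANDOM expressions
quantity_sold = round(abs(random.gauss(0, 1)) * 100)
amount_sold = round(random.uniform(0, 1000) * abs(random.gauss(0, 1)), 2)
```

Tuning any one cap scales the final row count proportionally, which makes it easy to dial the table down for a smaller test environment.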