Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘SQLTXPLAIN (SQLT)’ Category

YASTS: Yet Another SQL Tuning Script: planx.sql

Having SQLTXPLAIN and SQLHC available, WHY do I need yet another way to display execution plans?

The new script planx.sql reports execution plans for one SQL_ID from RAC and AWR. It is lightweight and installs nothing. It produces a list of performance metrics for the given SQL out of gv$sqlstats, gv$sqlstats_plan_hash, gv$sql and dba_hist_sqlstat. It also displays execution plans from gv$sql_plan_statistics_all and dba_hist_sql_plan. It is RAC aware, and it reports on io_saved when executed on Exadata.

Most stand-alone lightweight scripts I have seen only report plans from the connected RAC node. This script reports from all RAC nodes. The AWR piece is optional: if your site does not have a Diagnostics Pack license, you can say so when executing this script, and all access to AWR data is simply skipped. Output is plain text and it executes in seconds.
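As a sketch of typical usage (the parameter order shown, license flag first and SQL_ID second, as well as the SQL_ID value itself, are illustrative; check the script header for the exact calling convention):

```sql
-- From SQL*Plus, connected as a user that can read GV$ and (optionally) AWR views.
-- First parameter: Diagnostics Pack license? [Y|N]; second: the SQL_ID of interest.
-- The SQL_ID below is hypothetical.
SQL> @planx.sql Y f995z9antmhxn
```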

I will be using planx.sql as my first step in the analysis of queries performing slowly. If I need more, I will then use SQLHC or SQLTXPLAIN. This planx.sql script, as well as some others, is beginning to populate my new shared directory of “free” scripts. The link is at the right of the screen, and also here. Quite often I write small scripts to do my job; now they will have a new home there. A readme provides a one-line description of each script.


The new planx.sql is an alternative to plain DBMS_XPLAN.DISPLAY_CURSOR. It displays plans from all RAC nodes and, optionally, from AWR. It also reports relevant performance metrics for all recorded execution plans. It is fast and installs nothing.
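For comparison, the plain DBMS_XPLAN.DISPLAY_CURSOR alternative mentioned above only reports from the cursor cache of the instance you are connected to. A typical call looks like this (the SQL_ID shown is hypothetical):

```sql
-- Plan and execution statistics from the local cursor cache only
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id          => 'f995z9antmhxn',  -- hypothetical SQL_ID
         cursor_child_no => NULL,             -- NULL means all child cursors
         format          => 'ALLSTATS LAST'));
```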


Written by Carlos Sierra

October 9, 2013 at 6:06 pm

SQL Tuning with SQLTXPLAIN 2-day Workshop

SQLTXPLAIN is a SQL Tuning tool widely used by the Oracle community. Available through My Oracle Support (MOS) under document 215187.1, this free tool can be downloaded and used by anyone with MOS access. It has helped expedite the resolution of SQL Tuning issues thousands of times, and many Oracle DBAs and Developers benefit from its use on a daily basis.

Stelios Charalambides has done an excellent job writing a book on this topic. In his book Stelios covers many aspects of SQLTXPLAIN and some related topics. I highly recommend getting a copy of this book if you want to learn more about SQLTXPLAIN. It is available at Amazon and many other retailers.

The new 2-day SQLTXPLAIN Workshop offered by Enkitec (an Oracle Platinum business partner and my employer) is a completely new course that interleaves “how to use SQLTXPLAIN effectively” with important related SQL Tuning topics such as Plan Flexibility and Plan Stability. This hands-on workshop offers participants the unique opportunity to fully understand the contents of SQLTXPLAIN and its vast output through an interactive session. About half the time is dedicated to short guided labs, while the other half uses presentations and demos. This workshop is packed with content. It was a real challenge packaging so much material into only two days, but I am very pleased with the result. It became an intensive 2-day knowledge-transfer, hands-on workshop on SQLTXPLAIN and SQL Tuning!

The first session of this workshop is scheduled for November 7-8 in Dallas, Texas. I expect this pilot session to fill up fast. Other public and onsite sessions will be offered during 2014. I hope to meet many of you face to face on November 7!

About SQLT Repository

SQLTXPLAIN maintains a repository of metadata associated with the SQL statement being analyzed. A subset of this SQLT repository is automatically exported every time one of the main methods (like XTRACT) is used. A question I sometimes get is: what do you do with this SQLT repository? Well, in most cases: nothing. That is because most of the diagnostics we need are already included in the main report. But in some very particular cases we may want to dig into the SQLT repository. For example, in a recent case I wanted to understand some patterns about a particular wait event that happened to become “slower” after the application of an Exadata patch. Then, when IO Resource Manager (IORM) was changed from Basic to Auto, these waits became “fast” again. How to measure that? How to observe any trends and correlate them to the times when the patch was applied and when IORM was adjusted? Fortunately SQLT had been used, and it had captured ASH data out of AWR before it aged out, so I was able to use it to “see” the trends below.

[Screen shot 2013-08-23: daily trend of the IO metrics for the “cell smart table scan” wait event]

Now the question becomes: how do I produce this kind of graph? Well, with Excel it is easy; we just need to bring the relevant piece of the SQLT repository into Excel. For that, I first import the whole SQLT repository for a given sqlt_sNNNNN_*.zip file into my local SQLT repository. The command is spelled out in the sqlt_sNNNNN_readme.html under “import SQLT repository”. Then I use SQL Developer to export the output of a query into XLS (or CSV). In this case the SQL I used was the one below.

If you try to use the SQLT repository for some similar data analysis and you struggle with it, please let me know and I will help you out.

WITH awr_ash AS (
SELECT DISTINCT -- two or more statements can include same snaps
 sample_time,
 event,
 ROUND(time_waited / 1e3) time_waited_ms,
 ROUND(delta_read_io_requests / (delta_time / 1e6)) read_io_requests_per_sec,
 ROUND(delta_write_io_requests / (delta_time / 1e6)) write_io_requests_per_sec,
 ROUND(delta_read_io_bytes / (delta_time / 1e6)) read_io_bytes_per_sec,
 ROUND(delta_write_io_bytes / (delta_time / 1e6)) write_io_bytes_per_sec,
 ROUND(delta_interconnect_io_bytes / (delta_time / 1e6)) interconnect_io_bytes_per_sec
 FROM sqlt$_dba_hist_active_sess_his
 WHERE statement_id IN (90959, 90960, 90962, 90963)
 AND session_state = 'WAITING'
 AND time_waited > 0
 AND delta_time > 0
 AND wait_class IS NOT NULL ),
awr_ash_grp AS (
SELECT TRUNC(sample_time) sample_day,
 event,
 COUNT(*) samples,
 ROUND(AVG(time_waited_ms)) avg_time_waited_ms,
 ROUND(AVG(read_io_requests_per_sec)) read_io_requests_per_sec,
 ROUND(AVG(write_io_requests_per_sec)) write_io_requests_per_sec,
 ROUND(AVG(read_io_bytes_per_sec)) read_io_bytes_per_sec,
 ROUND(AVG(write_io_bytes_per_sec)) write_io_bytes_per_sec,
 ROUND(AVG(interconnect_io_bytes_per_sec)) interconnect_io_bytes_per_sec
 FROM awr_ash
 GROUP BY TRUNC(sample_time), event ),
cell_smart_reads AS (
SELECT sample_day,
 TO_CHAR(sample_day, 'DD-MON-YYYY') sample_date,
 ROUND(read_io_bytes_per_sec / (1024 * 1024)) read_io_mb_per_sec,
 ROUND(interconnect_io_bytes_per_sec / (1024 * 1024), 1) interconnect_io_mb_per_sec
 FROM awr_ash_grp
 WHERE event = 'cell smart table scan' )
SELECT sample_date,
 read_io_mb_per_sec,
 interconnect_io_mb_per_sec
 FROM cell_smart_reads
 ORDER BY sample_day;

Written by Carlos Sierra

August 23, 2013 at 2:17 pm

SQLTXPLAIN 12.1.01 is now available

Mauro Pagano is doing a great job keeping SQLT well taken care of! Version 12.1.01 is now available for download out of MOS (Metalink) doc id 215187.1. With a monthly release cycle, it is always a good idea to download the latest version at least every quarter. This latest version includes some adjustments for 12c and some other changes, as seen below. Here is the list of changes for the past 3 months. The saga continues!

12.1.01 August 19, 2013

  1. BUG: Incorrect date wrapping in report “Non-Default or Modified Session Parameters”
  2. BUG: SQLHC Table Summary reporting section was skipping tables if they had no index defined on them
  3. BUG: The logic in “Newest Value” column in Modified System Parameters needed to be adjusted; the value was sometimes incorrect
  4. BUG: CBO Environment section was reporting a parameter many times if that parameter had been changed more than once
  5. ENH: SQLT and SQLHC work on 12c!
  6. ENH: Improved SQLHC HC on corrupt histograms for 12c
  7. ENH: SQLT can now create user SQLTXADMIN in presence of password complexity verification function
  8. ENH: Stat table creation now supports Unicode charset in both SQLT TC and SQLT TCX
  9. ENH: Importing SQLT TC now handles IOT remapping
  10. ENH: Fix for 12c error:
    sqlt$a: *** d:ORA-12899: value too large for column "SQLTXPLAIN"."SQLT$_DBA_TAB_COLS"."LOW_VALUE" (actual: 45, maximum: 40).
  11. ENH: Added “version date” to SQLHC to make it easier to recognize if an old version is used
  12. ENH: Added “user” to SQLHC reporting the db login user
  13. ENH: SQLDX is now executed after SQLT/SQLHC completes; this helps when SQLDX takes a long time to run, since the zip file for SQLT/SQLHC is ready to be moved even while SQLDX is still running
  14. ENH: Added NUM_CPUS, NUM_CPU_CORES and NUM_CPU_SOCKETS info from V$OSSTAT to SQLT/SQLHC to help with multicore/multisocket CPUs
  15. ENH: Added Historical SQL Statistics – Total section to SQLHC
  16. ENH: SQLHC Exec now captures session statistics for the SQL analyzed
  17. ENH: New HC on MViews that are defined on top of tables involved in the SQL/plan but are not used in the SQL/plan
  18. ENH: New columns “If Offloadable” and “IO Saved %” on Exadata in the Performance Statistics section
  19. ENH: New HC on Rewrite Equivalences defined by the owner of the objects involved in the SQL/plan

July 15, 2013

  1. BUG: Fixed issue with XPREXT and XPREXC where ASH was not turned off completely
  2. ENH: New health-check on plans with same PHV but predicates with different ordering
  3. ENH: SQLHC and SQLHCXEC now provide information about DBMS_STATS preferences
  4. ENH: SQLT shows when a parameter value has been changed (when this info is available in AWR)
  5. ENH: New link to Table Statistics Extensions directly available in the top menu

June 10, 2013

  1. ENH: New health-check on SREADTIM/MREADTIM in Exadata
  2. ENH: Added SQLT parameter tcb_sampling_percent to control TCB sampling data percentage
  3. ENH: If no SQL is identified using the input string as HASH_VALUE then a search on PLAN_HASH_VALUE is performed
  4. ENH: Introduced new parameter c_ash_hist_days to split AWR data collection from ASH data collection and to limit the amount of info extracted from ASH
  5. ENH: Added scale information for TIMESTAMP datatype in Captured Binds sections
  6. ENH: ASH reports are now generated 1 per hour/instance starting from the interval where the SQL was most impacting
  7. ENH: SQLDX (SQL Dynamic eXtractor) collects only up to 10k rows per table/view
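Regarding item 14 of the August changes: the three CPU counters come straight from V$OSSTAT, so you can inspect the same values the tool now collects with a simple query:

```sql
-- CPU topology as exposed by the OS to the database instance
SELECT stat_name, value
  FROM v$osstat
 WHERE stat_name IN ('NUM_CPUS', 'NUM_CPU_CORES', 'NUM_CPU_SOCKETS');
```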

Written by Carlos Sierra

August 20, 2013 at 11:00 am


Carlos Sierra is joining Enkitec soon

In a few days I will be joining Enkitec. Sure, I will miss Oracle after 17 good years there. I made some really good friends and I learned a lot. I had the opportunity to build tools I thought were needed by the Oracle community, and I am glad they are being deployed all over the world. I am also pleased to know all these tools will continue helping our SQL Tuning community with no disruption.

A question I am often asked is: WHY Enkitec? I could elaborate on an answer, but to keep it short I would simply say “because I like the idea of contributing to an expanding team of outstanding professionals in the Oracle space”. I also value the close relationship between Enkitec and Oracle. I see this as a win-win.

At Enkitec I will be doing a lot of stuff. My main focus at the beginning will be database performance in general and SQL Tuning in particular. I foresee the possibility of offering some SQL Tuning hands-on workshops using SQLTXPLAIN, SQLHC and some other cool tools and techniques. I also expect some additional duties and new areas of expertise to develop, which I embrace as new challenges and opportunities.

Overall, I feel very excited and ready to start walking this new career path. What can I say… Just when I thought life was good it simply got better!

You know how to reach me (here) or by email.

Cheers — Carlos

Written by Carlos Sierra

July 26, 2013 at 11:00 am

SQL Tuning 101 and Sushi

A question that I hear often is: “how do I tune a query?”. It comes in several flavors, but what I usually read between the lines is: I am new to this “sql tuning” thing, and I have been asked to improve the performance of this query, but I have no clue where to start.

Knowing next to nothing about SQL Tuning is quite common for most DBAs. Even seasoned DBAs may stay away from SQL Tuning, mostly because they feel out of their comfort zone. In addition, I think SQL Tuning is like Sushi: you either love it or hate it! And just like Sushi, most would avoid it simply because they haven’t tried it. By the way, I am a Sushi lover, but that is another story…

SQL Tuning 101

So, if you are at square one, and you are serious about learning SQL Tuning, where do you start? There are a couple dozen well-recognized names in this space of SQL Tuning. Look at my blog roll to get some idea. The problem with this list is that most of these “gurus” walk on water, and their very simple explanations require tons of knowledge in related topics. Not bad if you are traveling the road from intermediate to advanced, but a newbie gets lost in about 5 seconds. There is also the risk of falling for all the misinformation you find on the internet, written by some adventurous self-proclaimed “experts”.

I strongly suggest starting by reading the Concepts reference manual for your Oracle release, followed by the SQL Reference. Feeling comfortable writing SQL is a must. You don’t know SQL? Then learn it first, and move into SQL Tuning later. If you have some Development experience, that helps a lot. If you are a seasoned Developer, you are halfway there already.

After reading the Concepts reference manual and becoming proficient in SQL, get yourself a copy of the “Oracle SQL High-Performance Tuning (2nd Edition)” book written by Guy Harrison more than 10 years ago. Disregard the fact that the book is old. I like this book because it starts with the basics; it assumes nothing. I learned a lot from it when I first read it a long time ago. If you search for it on Amazon, just be aware the front-cover picture is incorrect, but the actual book is correct.

Another book I suggest is the recent one written by Stelios Charalambides. The title is “Oracle SQL Tuning with Oracle SQLTXPLAIN”. I like this one because it introduces SQLTXPLAIN the way you would Sushi: starting with a California Roll, then Sushi, and last Sashimi. You also get to learn how SQLTXPLAIN, a FREE tool, can help you digest your dinner better… I mean, your SQL Tuning.

Once you gain some experience doing real SQL Tuning, you can move on to reading endorsed blogs and books from the Sushi Masters. I hope you give it a try, and I hope you also get to love it!

Written by Carlos Sierra

July 18, 2013 at 6:15 am

SQLTXPLAIN under new administration

With great pleasure I am announcing here that SQLTXPLAIN (SQLT) is “under new administration”. What does it mean? In short, almost nothing to you. Under the covers, it means you will start seeing a new (and younger) face associated with SQLT. My good friend Mauro Pagano from Oracle has graciously accepted the role of being the first/main contact for this tool. He has also accepted the responsibility of supporting, maintaining and enhancing it. Mauro won’t be alone on this task. Another good friend of mine, Abel Macias, will back up Mauro when needed.

Support of SQLT usually goes like this: if someone from Oracle Support requests you to provide a SQLT report, the engineer owning the Service Request (SR) is your main point of contact, including for questions/concerns/issues installing SQLT. That same engineer can reach out to Mauro or Abel for assistance if needed. On the other hand, if you are a business partner of Oracle or you are using SQLT on your own terms, and you have a question/concern/issue about a SQLT installation, then refer to Metalink (MOS) note 215187.1, where you can see who the main contact for SQLT is and how to email him/her. As of today that person is Mauro.

A bit about Mauro and Abel: I have worked with both for several years now, and besides being super qualified, I have learned that each of us usually has a different technical view on a given issue. So, a brainstorm between 2 or all 3 of us usually provides a much better outcome than using only one brain… Anyway, Mauro and Abel have been instrumental to the success of SQLT, always providing great ideas which translate into very useful features. SQLT is my baby, but it is also theirs! 😉

Of course I will continue advising on SQLT. That includes frequent communication with Mauro and Abel, but it also means continuing to help the Oracle community, especially when it comes to “how to make sense of the output”. So, SQLT is “under new administration”, but it should mean almost nothing different to you.

Keep using SQLT and keep providing your valuable feedback! SQLT is what it is today thanks to the great ideas we constantly receive and implement!

Written by Carlos Sierra

July 17, 2013 at 6:06 am