Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘Testing’ Category

Forcing a “Nested Loop only” Execution Plan


Sometimes you do what you have to do. So here I confess doing something I usually avoid: forcing an Execution Plan (which is not the same as using a more conventional method for Plan stability).

This is a case on an 11.2.0.3.0 base release where the application vendor sets the optimizer to 9i and tweaks other CBO parameters in questionable ways; as expected, some queries produce suboptimal plans, and you are called to help without changing the obvious.

There is a family of queries from an ad-hoc query generator that permits users to issue queries without a set of selective predicates. These queries join several large tables and their performance is poor (as expected as well!). On top of that, all these queries include the /*+ FIRST_ROWS */ CBO Hint and the questionable DISTINCT keyword. Note: it is quite common for developers to throw in a DISTINCT keyword “to avoid duplicates” where the mere existence of duplicates would be an indication of an application bug; so “why fix it if I can hide it, right?”.

There is one caveat, though: these queries include a generic predicate “rownum <= :b1”, and the value passed defaults to 5000, so the users’ rationale is “if I only want the first X rows, my query should return fast”. This highlights yet another questionable practice, since it is hard to imagine a user scrolling through 5000 rows and making any sense of such a large set, especially when the full “filtered” set would be several million rows long. So the original problem is questionable in several ways. Nevertheless, sometimes we are called to do more than just give advice. And no, we are not allowed to slap hands 😉
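To make the pattern concrete, here is a sketch of the general shape of these generated queries; the table and column names below are invented for illustration, and only the overall structure (the FIRST_ROWS Hint, the DISTINCT, the multi-table join and the rownum filter) reflects the real query family:

-- Hypothetical example of the query family (all object names are made up):
SELECT /*+ FIRST_ROWS */ DISTINCT
       o.order_id, c.customer_name, i.item_description
  FROM big_orders o, big_customers c, big_order_items i
 WHERE o.customer_id = c.customer_id
   AND i.order_id    = o.order_id
   AND ROWNUM       <= :b1;  -- bind value defaults to 5000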

The good news is that we can use this extra predicate on rownum and make these queries return the first X rows really fast; and I mean less than 5 seconds instead of an hour or more! And if users want not 5000 but 500 or even 50 rows, then we can be in the sub-second range!

You may be thinking FIRST_ROWS optimization, and that was my first try. Unfortunately, on 11.2.0.3.0, even after reversing all the suboptimal CBO parameters at the session level, I would consistently get an Execution Plan with a few Hash Joins and a large Cost; and if I were to force a Nested Loop Plan, its cost would be several orders of magnitude larger, so the CBO would not pick it! Nevertheless, such a “Nested Loops only” Execution Plan would fulfill the user’s expectations, regardless of the validity of the initial request. And yes, CBO statistics are OK; not perfect, but simply OK. One more piece of info: this is not Exadata! (If it were Exadata, most probably these same Execution Plans with Full Table Scans and Hash Joins would simply fly!)

So, my issue became: how do I force an Execution Plan that only contains Nested Loops? If I could do that, then the COUNT STOPKEY operation could halt execution once the first X rows had been fetched (a Hash Join does not allow me to do that). Remember: these tables have literally millions of rows. I could pepper these queries with a ton of CBO Hints and I would get my desired “Nested Loops only” Execution Plan… but that would be a lot of work and tricky at best.
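For contrast, forcing that shape by hand would mean embedding join-order and join-method Hints into every statement the generator produces, something like the sketch below (same invented names as above); this is exactly the maintenance burden I wanted to avoid:

-- Hinting each generated query by hand (hypothetical names); LEADING pins
-- the join order and USE_NL forces Nested Loops for each join:
SELECT /*+ FIRST_ROWS LEADING(o c i) USE_NL(c) USE_NL(i) */ DISTINCT
       o.order_id, c.customer_name, i.item_description
  FROM big_orders o, big_customers c, big_order_items i
 WHERE o.customer_id = c.customer_id
   AND i.order_id    = o.order_id
   AND ROWNUM       <= :b1;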

SQL Patch to the rescue

I could have used a SQL Profile, but I think this dirty trick of suppressing Hash Joins and Sort Merge Joins is better served with a SQL Patch. I also thought of Siebel: they tweak CBO parameters as well, and they suppress Hash Joins, but they change System- and Session-level parameters… Since I wanted my change to be very localized, a SQL Patch could provide just what I needed.

Under the Downloads section in the margin of this page, there is a “cscripts” link that includes the sqlpch.sql script. I used this script and passed the string below as the second parameter (the first parameter is the SQL_ID). With a SQL Patch generated this way, I could systematically produce a “Nested Loops only” Execution Plan for these few queries. I did not have to change the original SQL, change the CBO environment at the System or Session level, or restrict the query generator, and I did not have to “educate” the users to avoid such unbounded queries.

OPT_PARAM("_optimizer_sortmerge_join_enabled" "FALSE") OPT_PARAM("_hash_join_enabled" "FALSE")
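For reference, sqlpch.sql does the heavy lifting, but a SQL Patch with that hint text can also be created by hand. The sketch below uses SYS.DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH, which is the call that applies on 11.2 (12.2 and later expose the documented DBMS_SQLDIAG.CREATE_SQL_PATCH instead); the SQL text and patch name here are placeholders, not the real application query:

-- Minimal sketch, assuming 11.2: attach the two OPT_PARAM Hints to one
-- specific statement (sql_text must match that statement exactly).
BEGIN
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'SELECT /*+ FIRST_ROWS */ DISTINCT ... FROM ... WHERE ... AND ROWNUM <= :b1',  -- placeholder
    hint_text => 'OPT_PARAM("_optimizer_sortmerge_join_enabled" "FALSE") OPT_PARAM("_hash_join_enabled" "FALSE")',
    name      => 'nl_only_patch');  -- hypothetical patch name
END;
/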

Conclusion

I have to concede doing something questionable, in this case using a SQL Patch to force a desired Execution Plan instead of fixing the obvious, simply because that was the shortest path to alleviate the user’s pain.

I consider the technique above a temporary workaround and not a solution to the actual issue. In this case the right way to handle the issue would be to:

  1. Have the application vendor certify their application to the latest release of the database and reset all CBO related parameters, plus
  2. Have the application vendor remove CBO Hints and DISTINCT keyword from queries, plus
  3. Configure the ad-hoc query generator to restrict users from executing queries without selective predicates, then
  4. Tune those outlier queries that may still need some work to perform as per business requirements, and possibly
  5. Educate the users to provide as many selective predicates as possible

Anyways, the potential of using a SQL Patch to tweak an Execution Plan in mysterious ways is quite powerful, and something we may want to keep in the back of our minds for a rainy day…
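If you do use this trick, it is worth verifying that the patch is actually being picked up, and knowing how to back it out once a proper fix lands. A quick sketch (the patch name is the hypothetical one from the creation example above):

-- List existing SQL Patches:
SELECT name, status, created FROM dba_sql_patches;

-- When a patch is applied, DBMS_XPLAN reports it in the Note section
-- (a line along the lines of "SQL patch ... used for this statement"):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));

-- Drop the patch once the underlying issues are addressed:
EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'nl_only_patch');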

Written by Carlos Sierra

August 8, 2015 at 8:30 am

SQL Tuning with SQLTXPLAIN 2-day Workshop


SQLTXPLAIN is a SQL Tuning tool widely used by the Oracle community. Available through My Oracle Support (MOS) under document 215187.1, this free tool can be downloaded and used by anyone with MOS access. It has helped expedite the resolution of SQL Tuning issues thousands of times, and many Oracle DBAs and Developers benefit from its use on a daily basis.

Stelios Charalambides has done an excellent job writing a book on this topic. In his book, Stelios covers many aspects of SQLTXPLAIN and some related topics. I highly recommend getting a copy of this book if you want to learn more about SQLTXPLAIN. It is available at Amazon and many other retailers.

The new 2-day SQLTXPLAIN Workshop offered by Enkitec (an Oracle Platinum business partner and my employer) is a completely new course that interleaves “how to use SQLTXPLAIN effectively” with important related SQL Tuning topics such as Plan Flexibility and Plan Stability. This hands-on workshop offers participants a unique opportunity to fully understand the contents of SQLTXPLAIN and its vast output through an interactive session. About half the time is dedicated to short guided labs, while the other half uses presentations and demos. This workshop is packed with content; it was a real challenge to fit so much info into only two days, but I am very pleased with the result. It became an intensive, hands-on, 2-day knowledge-transfer workshop on SQLTXPLAIN and SQL Tuning!

The first session of this workshop is scheduled for November 7-8 in Dallas, Texas. I expect this pilot session to fill up fast. Other sessions, including onsite ones, will be offered during 2014. I hope to meet many of you face to face on November 7!

Creating a big Sales table


I need a large table to do some testing of SQLTXPLAIN on Exadata. The seeded SALES table out of the SH sample schema is not big enough for the test I want to do: SH.SALES has less than 1M rows and I need a few billion. So, I could either fabricate my own data or expand the SH.SALES table. I opted for the latter so I could join this big table with some dimension tables out of the same sample schema.

This is the script I used. It is still running; I expect 31B rows out of it. I am sharing it here in case you need a big sales table.

CREATE TABLE sales_big AS
WITH
p AS (SELECT prod_id FROM sh.products WHERE ROWNUM <= 50), -- max 50
c AS (SELECT cust_id FROM sh.customers WHERE ROWNUM <= 5000), -- max 50000
t AS (SELECT time_id FROM sh.times WHERE ROWNUM <= 500), -- max 1000
h AS (SELECT channel_id FROM sh.channels WHERE ROWNUM <= 5), -- max 5
r AS (SELECT promo_id FROM sh.promotions WHERE ROWNUM <= 50) -- max 500
-- 50 * 5000 * 500 * 5 * 50 = 31,250,000,000
SELECT p.prod_id,
 c.cust_id,
 t.time_id,
 h.channel_id,
 r.promo_id,
 ROUND(ABS(DBMS_RANDOM.NORMAL) * 100) quantity_sold,
 ROUND(DBMS_RANDOM.VALUE(0, 1000) * ABS(DBMS_RANDOM.NORMAL), 2) amount_sold
 FROM p, c, t, h, r;
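Once the CTAS completes, a couple of optional sanity checks (not part of the original script, just a suggestion):

-- Confirm the row count matches the expected 31,250,000,000
SELECT COUNT(*) FROM sales_big;

-- Gather statistics so the CBO has fresh numbers for the new table
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'SALES_BIG');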

Written by Carlos Sierra

August 12, 2013 at 5:36 pm

Posted in Exadata, Testing