Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘OUG’ Category

About AWR, SQLT and DB360


This blog post answers a question I often get asked:

Can I use SQLTXPLAIN (SQLT) instead of AWR?

The answer is: it depends. If you are doing SQL tuning and you already know the SQL_ID, then you may want to use SQLT XTRACT (MOS 215187.1) directly on the SQL_ID of concern. But even in that case, keep in mind that SQLT accesses AWR data, so your environment must have a valid license for the Oracle Diagnostics Pack. In fact, when you install SQLT, it asks whether you have the Oracle Tuning Pack, the Oracle Diagnostics Pack or neither. Depending on how you respond, SQLT does or does not access the set of views and APIs included in those Oracle Packs. That means you can configure SQLT to access AWR data or not.
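
For reference, a typical SQLT XTRACT run looks roughly like the sketch below; the exact script location, parameters and prompts are documented in MOS 215187.1, and the SQL_ID shown is only a hypothetical placeholder:

-- sketch only; run from the sqlt/run directory as a user authorized to use SQLT,
-- and see MOS 215187.1 for the authoritative steps
SQL> START sqltxtract.sql 0w5uu5kngqqty
-- SQLT prompts for the SQLTXPLAIN password and produces a zip file with the
-- diagnostics (execution plans, CBO statistics, metadata) for that one SQL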

What is the difference between AWR and SQLT?

In short, the difference between these two is scope. You use AWR to diagnose database performance, while you use SQLT to diagnose one SQL. Sometimes I explain this difference by saying: “AWR is to an entire database what SQLT is to one SQL”. One is for system-wide performance; the other is very much centered on one SQL at a time.

Why does SQLT exist?

I envisioned SQLT in the late ’90s, when I was a road warrior fighting fires in the area of performance, and in particular SQL performance. I found back then that Oracle-provided tools like TKPROF were excellent, but I always needed something more, like knowing the state of the CBO statistics, tables and indexes, etc.

These days, my good friend Mauro Pagano from Oracle is keeping the ball rolling. He keeps SQLT in constant motion, making it a better tool with every new version. So, I would say this: SQLT fills some gaps that I, and many others, consider important in order to be diligent in root cause analysis for cases where a SQL performs poorly.

What is DB360?

As SQLT brings to the table several pieces of information that we need for SQL tuning, and which are not available using out-of-the-box tools like TKPROF or SQL Monitoring, the new DB360 tool does something similar for the entire database: it complements what AWR provides by producing a report with meaningful information about an entire database. DB360 installs nothing on the database, and produces an HTML report with sections such as Configuration, Sizing, CBO Statistics, Performance Trends, etc.

Is DB360 a licensed product?

No. The DB360 tool belongs to Enkitec. It is not yet available to the Oracle community, but it will be soon. Same as with SQLT, if you have an Oracle Tuning or Diagnostics Pack, then when you execute DB360 you get to see in your DB360 report the pieces of information generated from views and APIs covered by those Oracle Packs; otherwise you get only the pieces which require no Oracle Pack license. Aside from the requirement to limit your use of DB360 as per your Oracle Pack license, DB360 itself will be available to the Oracle community for free, with no strings attached, same as SQLT.

Why are SQLT and DB360 free to download and use?

These tools are simply a contribution to the Oracle community. “Sharing tools is like sharing knowledge: it makes our IT experience more pleasurable”. The only payback comes when you share back with the Oracle community some of your knowledge, or some of your tools and scripts. If you have been a speaker at an Oracle Users Group, then you may relate to the gratifying experience of sharing what you know with others. At RMOUG these past 3 days, I have had the opportunity to experience once again this special sense of community, always eager to share and to learn from each other.

Conclusion

SQLT complements TKPROF and SQL Monitor. DB360 complements AWR. When it comes to diagnostics, either for one SQL or for an entire database, having additional diagnostics at our disposal in the context of our analysis improves our chances of doing a diligent job, while reducing the time it would take to assemble those pieces manually; all with the extra benefit of having these extra diagnostics pieces consistent to a point in time. So, I hope you continue enjoying SQLT and, in the near future, DB360!


Written by Carlos Sierra

February 7, 2014 at 6:07 pm

Speaking at the SLOUG tomorrow


Kerry and I will be speaking at the Saint Louis Oracle Users Group. The name of the event is Big Bash 2013. We will talk about problem solving, plan stability and plan flexibility. We expect to have a large audience, and we look forward to meeting old friends and making new ones. Oracle Users Groups are nice gatherings, and every group has its own charisma. This is my first time speaking at SLOUG, and actually my first time in Saint Louis. Flying today and returning tomorrow late at night. Feeling excited!

Written by Carlos Sierra

October 17, 2013 at 7:32 am

Posted in Conferences, OUG

IOUG COLLABORATE 13


I will be speaking at the Independent Oracle Users Group (IOUG) “COLLABORATE 13” conference next week (April 8-11, 2013). I will deliver two sessions:

  1. SQL Tuning made easier with SQLTXPLAIN (SQLT): Monday, April 8 between 3:45 and 4:45 pm. Mile High Ballroom 3A.
  2. How to improve SQL Performance with new SQL Health Check Tool: Wednesday, April 10 between 1:00 and 2:00 pm. Mile High Ballroom 3A.

This time I will demo the SQLT installation and the use of the SQLT XTRACT method. For SQLHC I will demo an actual execution. So, I will spend less time on PPTs and more time showing how to use these tools and how to navigate their output. Feedback from prior events asked for more demo time…

Traveling to Denver in 3 hours (Saturday morning) and returning home (Orlando) on Thursday night. If you are attending this IOUG COLLABORATE 13 event, I would love to meet you!

Also, if you have ANY questions about SQLTXPLAIN (SQLT) or SQL Health-Check (SQLHC) keep in mind I am just one email away!

Written by Carlos Sierra

April 6, 2013 at 7:07 am

Interesting case where a full table scan is chosen instead of an index


While I was presenting at the Hotsos Symposium 2013, I showed a real-life case where the CBO was choosing a full table scan instead of an index range scan. The root cause was a suspicious “NVL(column, value) = value” predicate seen in the execution plan, while the actual SQL contained only “column = value”. So the question became: where was the NVL coming from? The selection of index versus full scan was a simple side effect of the “function(column) = value” predicate.

On 11g there is a new optimization that improves the performance of DDL operations adding columns to an existing table with data, where the new column is declared with a constant default value and made not nullable. In other words, all existing rows are logically expected to be updated with the default value for this new column. If the table is large, the ALTER TABLE ADD COLUMN operation would take a long time. This optimization instead records just the metadata for the new column (default value and not null constraint) and returns in no time, regardless of the size of the table.

What was confusing in this real-life case was the fact that the filter predicate looked like “NVL(USER2.SOME_FLAG,'N')='Y'” while the SOME_FLAG default value was “Y” instead of “N” according to DBA_TAB_COLS. How was that possible? At some point we suspected the column default value had changed from “N” to “Y”, but that was mere speculation. So how do we prove or disprove that the unexpected “N” in the filter predicate was the product of changing the default value of the affected column?

I created a small test case in order to reproduce the observations from the real-life case. I created a table with data on my 11g test database. Afterwards, I added a new column that would use this new optimization (having a constant default and being made not nullable in the same DDL). Last, I modified the default value of the same column with another DDL. Keep in mind that when the column was created, only the metadata was recorded and the actual rows were not updated (this is the optimization).

DROP TABLE t1;
CREATE TABLE t1 AS
WITH v1 AS (SELECT ROWNUM n FROM dual CONNECT BY LEVEL <= 10000)
SELECT ROWNUM id,
       DBMS_RANDOM.STRING('U', 40) c1
  FROM v1, v1
 WHERE ROWNUM <= 10000;

-- rows are not updated. only metadata is recorded. (first DDL)
ALTER TABLE t1 ADD (c2 VARCHAR2(3) DEFAULT 'NO' NOT NULL);

-- data default is recorded. (second DDL)
ALTER TABLE t1 MODIFY (c2 DEFAULT 'YES');
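
-- Optional sanity check (added sketch): after the second DDL the dictionary
-- reports the new default 'YES' for c2, matching the DBA_TAB_COLS observation
-- from the real-life case above.
SELECT column_name, data_default
  FROM user_tab_cols
 WHERE table_name = 'T1' AND column_name = 'C2';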

SELECT COUNT(*) FROM t1 WHERE c2 = 'NO';

/*
 COUNT(*)
----------
 10000 <<<<<<<<<<<<<<<<< expecting 0!
*/

SET PAGES 100 LIN 300;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'PREDICATE'));

/*
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    23 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   | 11342 | 34026 |    23   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(NVL("C2",'NO')='NO') <<<< expecting (NVL("C2",'YES')='NO')!
*/

I was expecting a different result, and I thought this was a bug. I presented the issue to Mauro Pagano for a peer review, and he explained that the results were expected (and correct). If we had not had the DDL optimization, at the time of the first ALTER TABLE we would have updated all rows with the value “NO”. Then at the second DDL we would have updated no rows, but future rows would get the value “YES” if c2 was not explicitly assigned a value. It made sense, and the optimization mimics the same behavior. So, rows that existed before the first DDL have no actual value for c2, but the stored metadata assigns “NO” when they are accessed. Rows created between DDL 1 and 2 acquire the default value “NO” if needed (and are physically stored with that “NO” value). And rows created after the 2nd DDL get the value “YES” if needed (and are physically stored with that “YES” value). So, only rows from before DDL 1 still have an internal NULL but an external “NO”, thanks to the initial metadata stored by the DDL optimization.

Conclusion

The DDL optimization improves the performance of the ALTER TABLE ADD COLUMN command, but it introduces a filter predicate that disables the possible use of a normal index on the same column, affecting the performance of queries with such filter predicates. Possible solutions include:

  1. Adding such columns without the optimization: add the column (DDL), update all rows to the default value (DML), add the default value to the column (DDL), then add the not null constraint (DDL); see the sketch after this list
  2. Issuing ALTER SESSION SET "_add_col_optim_enabled" = FALSE; before the column is created
  3. Creating a function-based index on the same expression seen in the filter predicate
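
A minimal sketch of workarounds 1 and 3, reusing the test table t1 from above (the column c3 and the index name are hypothetical, for illustration only):

-- Workaround 1: add the column without triggering the DDL optimization,
-- at the cost of physically updating every existing row.
ALTER TABLE t1 ADD (c3 VARCHAR2(3));       -- add column with no default (DDL)
UPDATE t1 SET c3 = 'NO';                   -- physically store the value (DML)
COMMIT;
ALTER TABLE t1 MODIFY (c3 DEFAULT 'NO');   -- default applies to future rows (DDL)
ALTER TABLE t1 MODIFY (c3 NOT NULL);       -- add the not null constraint (DDL)

-- Workaround 3: index the exact expression seen in the filter predicate,
-- so the CBO can consider an index despite the injected NVL.
CREATE INDEX t1_c2_fbi ON t1 (NVL(c2, 'NO'));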

Written by Carlos Sierra

March 12, 2013 at 8:59 am

Posted in OUG, Plan Stability

Hotsos Symposium 2013


Today is the 3rd and last day of the Hotsos Symposium 2013. I will stay the extra day so I can attend Maria Colgan’s 1-day class. I am delighted to be here. This annual event is awesome. I will certainly apply to speak next year!

In my opinion, most sessions have been of good quality. Topics are interesting, and having 2 sessions at the same time allows me to pick one out of two instead of one out of many. Of course, there are times when I would like to attend both!

Since this is my second year, it feels like coming back home. I know more people than last year, including other speakers. My kudos to Hotsos for keeping up the quality of this event.

I would love to see more Oracle attendees, especially Support managers, so they can better sense what the Oracle community needs… Anyway, I am happy to see some Oracle faces here… Since many of us work from home these days, participating in events like this helps to reestablish relationships that would otherwise fade over time.

Yesterday, we paid a visit to Enkitec’s offices. Veronica showed us the office and we ate some pizza there. The office is modern, stylish and cool. It invites you to work! One thing I noticed is the natural synergy among employees. I find this close networking of smart guys so important for achieving better results. It encourages constant collaboration, and it helps explain to me the success of this firm. It brought to my memory my first years at Oracle, when we used to gather in the aisles and talk about anything, including work! Anyway, it was nice to go and visit…

My to-do list after this Symposium grows:

  1. I noticed some misconceptions about statistics gathering. I have to chase down some Metalink notes which are causing this trouble…
  2. Without knowing it, some speakers have created some “enhancement requests” for SQLT, so I have to incorporate some of these soon…
  3. I need to review some SQL tuning complex cases that some participants want to discuss with me. I will review and provide some advice…
  4. I expect to receive some emails with enhancements to either the CBO or the database. I will log them to corresponding groups…
  5. Oracle Support would benefit from a refresher on “diagnostics collection” for SQL tuning issues. That means another round of internal training. Maybe short remote sessions…
  6. I have a list of at least half a dozen topics that could be of interest for future conferences. I need to polish this list and draft some abstracts. Then I will review with some OUG’s key players to sense interest.
  7. I need to purchase (and read) like 3-4 books.
  8. I need to evaluate the possibility of writing a new tool. It would be a holistic review of CBO statistics (including schema, system, data dictionary and fixed objects). The tool would perform many health-checks and return a set of observations. A natural companion could be the dynamic generation of a second script to actually implement those observations. Statistics are still a hot topic, and I noticed very diverse opinions: some are good and some are simply wrong.
  9. We need to investigate whether setting STATISTICS_LEVEL to ALL can be improved in terms of performance on EXADATA.
  10. I need to learn STRACE and determine its real value for Support. Then review existing Metalink notes and update them accordingly.
  11. Big data and fuzzy data mining seem to be hot. Performance of data mining may become a concern… I have to research.
  12. Method-R and Delphix have excellent products. I need to learn more about them since I expect their adoption by the Oracle community will make them more visible to all of us. I may write a Metalink note on the former so Support can make better sense of it.
  13. I have to document in this blog more about the many ramifications of SQLT, especially in the areas of plan stability and plan migration. It seems to be an eternal concern. SQLT provides a lot of help in this area, but I think we need better explanations of how to exploit these capabilities.

Ok. Time for a coffee and my daily email race… So far, email is winning by far…

Written by Carlos Sierra

March 6, 2013 at 7:53 am

SQLTXPLAIN (SQLT) Show and Tell at Hotsos Symposium


I will have the honor of speaking at the Hotsos Symposium 2013 next week. My session is called “Understanding SQLTXPLAIN (SQLT) Main Report by Navigating through Some Samples”. It is scheduled for Tuesday, March 5 at 8:30 am in room A.

I plan to cover my PPT in no more than 15 minutes, so I can dedicate 45 minutes to ad-hoc demos and simultaneous Q&A. SQLT is quite big, so in 45 minutes I cannot cover the full content of the main SQLT report, but I will for sure cover the parts we use most often.

If you have never seen SQLT, this is a good opportunity to see what all the buzz is about. If you know SQLT, then you can learn even more, and ask ANY question you may have about it. I would love to see you there! And please, please: ask as many questions as you may have about this tool and how to use it to diagnose SQL statements performing poorly.

About Hotsos:

One more thing: if you are a newbie in SQL tuning or overall Oracle performance, you may want to know more about Hotsos and their annual Symposium. In my opinion, this conference is the best of the best in the area of Oracle performance. I attended for the first time last year (2012), when I was accepted as a speaker. I was in awe the whole time! So much knowledge in such a short time! I learned more in 4 days than in any class I have taken in my 17 years at Oracle. I also learned that most speakers have some really valuable nuggets to share with our Oracle community, and most important: speakers are so approachable! So, if you are attending for the first time, I hope you enjoy this symposium as much as I do. If you cannot attend this year, I hope you can attend next year! It is worth every cent and every minute of your time…

Written by Carlos Sierra

March 1, 2013 at 8:22 am

Posted in OUG, SQLTXPLAIN (SQLT)

How to flush one Cursor out of the Shared Pool


During my 2-day SQL Tuning session at NoCOUG last week, someone asked me how to flush just one cursor out of the Shared Pool. We were playing with Adaptive Cursor Sharing (ACS), and between tests we were simply flushing the entire Shared Pool. On 11g there is an API that allows us to flush just one cursor. I wrote the small script flush_cursor.sql below, and I also placed it inside SQLTXPLAIN (SQLT) under directory sqlt/utl. Feel free to use it and share it as needed:

REM Flushes one cursor out of the shared pool. Works on 11g+
REM To create DBMS_SHARED_POOL, run the DBMSPOOL.SQL script.
REM The PRVTPOOL.PLB script is automatically executed after DBMSPOOL.SQL runs.
REM These scripts are not run as part of standard database creation.
SPO flush_cursor_&&sql_id..txt;
PRO *** before flush ***
SELECT inst_id, loaded_versions, invalidations, address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT inst_id, child_number, plan_hash_value, executions, is_shareable
FROM gv$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
BEGIN
  FOR i IN (SELECT address, hash_value
              FROM gv$sqlarea
             WHERE sql_id = '&&sql_id.')
  LOOP
    SYS.DBMS_SHARED_POOL.PURGE(i.address||','||i.hash_value, 'C');
  END LOOP;
END;
/
PRO *** after flush ***
SELECT inst_id, loaded_versions, invalidations, address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT inst_id, child_number, plan_hash_value, executions, is_shareable
FROM gv$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
UNDEF sql_id;
SPO OFF;
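
A minimal usage sketch (assuming execute privilege on SYS.DBMS_SHARED_POOL, which the purge requires; the SQL_ID below is a hypothetical placeholder, and the script prompts for it):

SQL> CONN / AS SYSDBA
SQL> @flush_cursor.sql
Enter value for sql_id: 0w5uu5kngqqty

The “before flush” and “after flush” sections of the spooled report let you verify that the cursor and its children are gone from gv$sqlarea and gv$sql.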

Written by Carlos Sierra

February 28, 2013 at 2:21 pm