Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘SQL Tuning’ Category

Carlos Sierra is joining Enkitec soon

with 29 comments

In a few days I will be joining Enkitec. I will surely miss Oracle after 17 good years there. I made some really good friends and I learned a lot. I had the opportunity to build tools I thought were needed by the Oracle community, and I am glad they are being deployed all over the world. I am also pleased to know all these tools will continue helping our SQL Tuning community with no disruption.

A question I am often asked is: WHY Enkitec? I could elaborate on an answer but to keep it short, I would simply say “because I like the idea of contributing to an expanding team of outstanding professionals in the Oracle space”. I also value the close relationship between Enkitec and Oracle. I see this as a win-win.

At Enkitec I will be doing a lot of stuff. My main focus at the beginning will be database performance in general and SQL Tuning in particular. I foresee the possibility of offering some SQL Tuning hands-on workshops using SQLTXPLAIN, SQLHC and some other cool tools and techniques. I also expect some additional duties and new areas of expertise to develop, which I embrace as new challenges and opportunities.

Overall, I feel very excited and ready to start walking this new career path. What can I say… Just when I thought life was good it simply got better!

You know how to reach me (here) or by email.

Cheers — Carlos

Written by Carlos Sierra

July 26, 2013 at 11:00 am

Gathering CBO Statistics in Oracle Applications

with 17 comments

When we refer to Oracle Applications we usually think E-Business Suite (EBS), but there are other large Oracle Applications that require some special consideration when it comes to CBO Statistics gathering: Siebel and PeopleSoft (PSFT). There are some others, but I would say EBS, Siebel and PSFT make up the bulk of them all.

Gathering Stats on EBS

This is easy, since the only supported method is FND_STATS. That means using plain DBMS_STATS or ANALYZE is simply not supported. FND_STATS is a wrapper on top of DBMS_STATS that adds some extra functionality. The most important addition is controlling which columns get Histograms. The set of columns EBS requires to have Histograms is small and well controlled. FND_STATS can be executed from within the application (Concurrent Request, i.e. a batch job) or directly from SQL*Plus.

If you are using EBS, then make sure FND_STATS is the only active method to gather statistics. In other words, automatic jobs to gather statistics using DBMS_STATS, as well as custom scripts, must be disabled.
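A minimal sketch of what an FND_STATS call looks like from SQL*Plus (the AR schema and table names are just examples, and parameter defaults vary by EBS release, so check the FND_STATS package specification in your instance):

```sql
-- Example only: gather statistics for one EBS schema using FND_STATS
-- (FND_STATS decides internally which columns get Histograms).
BEGIN
  fnd_stats.gather_schema_stats(schemaname => 'AR');
END;
/

-- Or for a single table:
BEGIN
  fnd_stats.gather_table_stats(ownname => 'AR', tabname => 'RA_CUSTOMER_TRX_ALL');
END;
/
```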

Gathering Stats on Siebel

Siebel documents how CBO statistics must be gathered in a white paper titled “Performance Tuning Guidelines for Siebel CRM Applications on Oracle Database”. This paper is authored by James Qiu, Paul Blokhin and Mehdi Gerami (Oracle Development), and its latest version is dated December 2010. It was rewritten back in 2010 with the contribution of Abel Macias, Dave Henriquez and myself. Out of this paper we created the coe_siebel_stats.sql and coe_siebel_profile.sql scripts. Both are well documented. Please refer to Metalink (MOS) note 781927.1 to download the latest version of the white paper and the actual scripts.

If you are using Siebel on databases 10.2 or higher, you really want to follow the white paper provided in MOS 781927.1 and implement coe_siebel_stats.sql. We have noticed that since its inception a couple of years ago, the number of performance issues reported in Siebel applications has been reduced significantly.

Gathering Stats on PSFT

For PeopleTools 8.49 and higher, on databases 10.2.0.4 and higher, I recommend the use of Metalink (MOS) note 1322888.1 (Improving Statistics in Oracle RDBMS for PeopleSoft Enterprise). The author of this note is John Houghton, who works for the Center of Expertise (CoE) on the PSFT side of the house. In this MOS note you will find a zip file containing the actual pscbo_stats.sql script, plus a couple of documents explaining how to use it. There is some implementation effort, but once pscbo_stats.sql is in place your CBO statistics are gathered following best practices for PSFT. I had the opportunity to collaborate with John in the creation and testing of this script. It is currently on its 2nd major release.

If you are on PSFT 8.49 or higher, you want to implement pscbo_stats.sql out of MOS 1322888.1. Feedback on the results of using this tool has been positive in most cases.

Conclusion

The Cost-based Optimizer (CBO) requires good statistics as input in order to produce an optimal plan. In a perfect world we would use DBMS_STATS with no parameters. Still, some specific application design characteristics, like very skewed data or the use of small tables and temporary tables, require a specific setup to gather statistics tailored to the needs of these applications. FND_STATS, coe_siebel_stats.sql and pscbo_stats.sql are all wrappers on top of DBMS_STATS that make fine-grained use of DBMS_STATS parameters to gather statistics better suited to their application. For EBS the use of FND_STATS is a must; for the other two, the use of their corresponding scripts is not a must but is strongly recommended.

Written by Carlos Sierra

July 25, 2013 at 7:23 am

SQL Tuning 101 and Sushi

with 3 comments

A question that I hear often is: “how do I tune a query?”. It comes in several flavors, but what I usually read between the lines is: I am new to this “sql tuning” thing, and I have been asked to improve the performance of this query, but I have no clue where to start.

Knowing next to nothing about SQL Tuning is quite common for most DBAs. Even seasoned DBAs may stay away from SQL Tuning, mostly because they feel out of their comfort zone. In addition, I think SQL Tuning is like Sushi: You either love it or hate it! And just like Sushi, most would avoid it simply because they haven’t tried it. By the way, I am a Sushi lover but that is another story…

SQL Tuning 101

So, if you are at square one, and you are serious about learning SQL Tuning, where do you start? There are a couple dozen well-recognized names in the SQL Tuning space. Look at my blog roll to get some idea. The problem with this list is that most of the “gurus” walk on water, and their very simple explanations require tons of knowledge in related topics. Not bad if you are traveling the road from intermediate to advanced, but a newbie gets lost in like 5 seconds. There is also the risk of falling for all the misinformation you find on the internet, written by some adventurous self-proclaimed “experts”.

I strongly suggest starting by reading the Concepts reference manual for your Oracle release, followed by the SQL Reference. Feeling comfortable writing SQL is a must. You don’t know SQL? Then learn it first, and move into SQL Tuning afterwards. If you have some Development experience, that helps a lot. If you are a seasoned Developer you are already halfway there.

After reading the Concepts reference manual and becoming proficient in SQL, get yourself a copy of the book “Oracle SQL High-Performance Tuning (2nd Edition)”, written by Guy Harrison more than 10 years ago. Disregard the fact that the book is old. I like this book because it starts with the basics. It assumes nothing. I learned a lot from it when I first read it a long time ago. If you search for it on Amazon, just be aware the front-cover picture is incorrect, but the actual book is correct.

Another book I suggest is the recent one written by Stelios Charalambides, titled “Oracle SQL Tuning with Oracle SQLTXPLAIN”. I like this one because it introduces SQLTXPLAIN like you would Sushi: starting with a California Roll, then Sushi and last Sashimi. Also, you get to learn about SQLTXPLAIN and how this FREE tool can help you digest your dinner better… I mean your SQL Tuning.

Once you gain some experience doing some real SQL Tuning, you can move on to reading endorsed blogs and books from the Sushi Masters. I hope you give it a try, and I hope you also get to love it!

Written by Carlos Sierra

July 18, 2013 at 6:15 am

Can We Simplify The Process of SQL Tuning?

with 3 comments

Just came back from vacation.

A few weeks ago Stelios Charalambides and I wrote a short article in the RMOUG SQL>UPDATE Newsletters Online. The title of this article is: “Can We Simplify The Process of SQL Tuning?”. You can read it here if interested. You may also want to check Stelios’ blog.

Written by Carlos Sierra

July 9, 2013 at 8:24 am

Posted in SQL Tuning

Oracle Queries are taking different Plans

with 4 comments

I recently saw a post with the title “Oracle Queries are taking different Plans”, and since there was a lot of guessing I suggested using SQLTXPLAIN to find the root cause. For some reason or another the thread is still open after 20 days, and the guessing continues. Since the problem of “finding the root cause of a change in an execution plan” is old and still common, I decided to talk about it here.

In Oracle, execution plans do change for one SQL over time; that is a fundamental function of the Cost-based Optimizer (CBO). Not only is this normal, it is expected and desired in most circumstances. What is not desired is when a SQL that had a good performing plan suddenly changes to one that we consider sub-optimal in terms of performance. This should be the concern and focus. So WHY does the CBO suddenly generate such a plan? The possible reasons are many, but I list here the most common ones I can think of:

  • Schema object statistics changes
  • Small sample sizes when gathering CBO schema statistics (mostly on 10g)
  • Lack of statistics in some columns referenced by predicates
  • Inconsistent values of bind variables referenced by SQL
  • System wide changes like CBO parameters or system statistics
  • Index modifications including new and dropped indexes
  • Invalid or invisible indexes
  • Large SQL text with many predicates where several plans with similar cost are possible
  • Inconsistent ramp-up process of Adaptive Cursor Sharing (ACS)
  • SQL Profiles generated by the SQL Tuning Advisor (STA)
  • An index changing its blevel (for example from blevel 2 to 3)
  • Volatile values in some columns (for example value “Pending” exists in table column for a short time)
  • Asynchronous gathering of stats on volatile data instead of gathering right between load and consumption

The list above is not comprehensive. It only includes what came to my mind while writing it, so I may have forgotten several cases. And of course there are always some corner cases like special conditions activating uncommon heuristics, or simply hitting a CBO bug.

Guessing WHY a plan has changed is very time consuming and requires multiple queries against the metadata in the database. Since most of the metadata we are interested in is dynamic by nature, we base our guess work on an ever-changing environment. Of course the root cause becomes elusive, and we may give up for some time, or jump to false conclusions.
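For example, instead of guessing we can at least see WHEN the plan changed and how each plan performed. A sketch using AWR (requires a Diagnostics Pack license; the substitution variable &sql_id is the SQL of interest):

```sql
-- One row per snapshot and plan: spot the moment the plan flipped
SELECT snap_id, plan_hash_value,
       ROUND(elapsed_time_delta / NULLIF(executions_delta, 0) / 1e6, 3) AS secs_per_exec
  FROM dba_hist_sqlstat
 WHERE sql_id = '&sql_id'
 ORDER BY snap_id;

-- Display every plan stored in AWR for that SQL
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));
```

Queries like these tell us WHEN the plan changed, but not WHY; for the WHY we need a consistent capture of the environment around the SQL.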

This is where SQLTXPLAIN (MOS 215187.1) or its younger and lighter sister SQLHC (MOS 1366133.1) come to the rescue. These free tools, especially SQLTXPLAIN (SQLT), capture the static and dynamic environment around the one SQL we want to analyze.

By having a consistent view of all elements affecting the execution plan, it becomes much easier to find the root cause of a plan change. And since SQLT also includes current and past details about parameters, statistics and bind variable values, we can even correlate those values to prior execution plans. So I encourage you to try SQLTXPLAIN on a SQL next time you are confronted by the old question: “WHY Oracle Queries are taking different Plans?”.

I am aware SQLTXPLAIN is intimidating. Its output is exuberant, but the point is that it includes what we need to find the root cause of most, if not all, SQL Tuning cases where a plan has changed.

I have presented this SQLT tool at several conferences during the past year or so. I think it is appropriate now to put together a 3-day workshop to master this tool… Just an idea at this time.

Great Lakes Oracle User Group 2013 Conference

with 4 comments

I delivered my two sessions at the Great Lakes Oracle Users Group today. It was a great experience! I had the honor of filling both sessions to capacity, and both were scheduled in the largest room out of 5 concurrent tracks! I estimate that in each session I had more than 50% of the total audience. It is very rewarding being able to share some knowledge with such a great crowd. I was asked if I would do a half-day pre-conference workshop next year. I certainly would if I can.

Anyways, with the conference behind, I am sharing here both presentations. For the one in Adaptive Cursor Sharing, if you want to perform the labs we did today, please post your request here and I will find a way to share those demo scripts.

  1. SQL Tuning made much easier with SQLTXPLAIN (SQLT)
  2. Using Adaptive Cursor Sharing (ACS) to produce multiple Optimal Plans

Scripts are now uploaded into acs_demo. Download this “doc” file and change its type to “zip” before opening it.

Written by Carlos Sierra

May 15, 2013 at 5:10 pm

Migrating an Execution Plan using SQL Plan Management

with 2 comments

SQL Plan Management (SPM) has been available since the first release of 11g. As you know, SPM is the new technology that provides Plan Stability with some extra Plan Control and Management features. Maria Colgan has done an excellent job documenting the SPM functionality in 4 of her popular blog postings:

  1. Creating SQL plan baselines
  2. SPM Aware Optimizer
  3. Evolving SQL Plan Baselines
  4. User Interfaces and Other Features

A question that I often get is: How do I move this good plan from system A into system B? To me, this translates into: How do I migrate an Execution Plan? And if source and target systems are 11g, the answer is: Use SQL Plan Management (SPM).

Migrating a Plan using SPM

Assuming both source and target systems are on 11g, I suggest one of the two approaches below. If the source is 10g and the target is 11g, then only the 2nd approach would work. In both cases the objective is to create a SQL Plan Baseline (SPB) in the target system out of a known plan from the source system.

Option 1: Create SPB on source then migrate SPB into target

Steps:

  1. Create SQL Plan Baseline (SPB) in Source
    1. From Memory; or
    2. From AWR (requires Diagnostics Pack license)
  2. Package & Export SPB from Source
  3. Import & Restore SPB into Target

Pros: Simple

Cons: Generates a SPB in Source system
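As a hedged sketch, the three steps of Option 1 map to DBMS_SPM calls roughly as follows (the staging table name SPM_STGTAB and the substitution variables are made up; the staging table itself travels from source to target via Data Pump or export/import):

```sql
-- 1. Source: create the SPB from a plan in the cursor cache
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => '&sql_id',
         plan_hash_value => &plan_hash_value);
END;
/

-- 2. Source: package the SPB into a staging table, then export that table
BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'SPM_STGTAB');
END;
/
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.PACK_STGTAB_BASELINE(
         table_name => 'SPM_STGTAB',
         sql_handle => '&sql_handle');
END;
/

-- 3. Target: after importing SPM_STGTAB, restore the SPB
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.UNPACK_STGTAB_BASELINE(table_name => 'SPM_STGTAB');
END;
/
```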

Option 2: Create SQL Tuning Set (STS) on source, migrate STS into target, promote STS into SPB in target

Steps:

  1. Create SQL Tuning Set (STS) in Source (requires Tuning Pack license)
    1. From Memory; or
    2. From AWR (requires Diagnostics Pack license)
  2. Package & Export STS from Source
  3. Import & Restore STS into Target
  4. Create SPB from STS in Target

Pros: No SPB is created in Source system

Cons: Requires license for SQL Tuning Pack
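The equivalent sketch for Option 2 uses DBMS_SQLTUNE for the STS legs (names are again made up; the staging table moves between systems with Data Pump):

```sql
-- 1. Source: create an STS and load the plan(s) of one SQL from memory
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS_MIGRATE');
END;
/
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN cur FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                   'sql_id = ''&sql_id''',
                   NULL, NULL, NULL, NULL, NULL, NULL, 'ALL')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'STS_MIGRATE', populate_cursor => cur);
END;
/

-- 2. Source: package the STS into a staging table, then export that table
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_STGTAB');
  DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
    sqlset_name => 'STS_MIGRATE', staging_table_name => 'STS_STGTAB');
END;
/

-- 3. Target: after importing STS_STGTAB, restore the STS
BEGIN
  DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
    sqlset_name => '%', replace => TRUE, staging_table_name => 'STS_STGTAB');
END;
/

-- 4. Target: create the SPB out of the STS
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'STS_MIGRATE');
END;
/
```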

How can SQLTXPLAIN (SQLT) help?

For quite some time now, SQLT has been generating an STS for each Plan Hash Value (PHV) of the SQL being analyzed. The STS for each PHV created on the source system is also stored inside the SQLT repository and included in the export of this repository. By doing this every time, options 1 and 2 above are simplified. If we want to promote a Plan into an SPB in the source system, we only have to execute an API that takes the Plan from the STS and creates the SPB. The dynamic readme included with SQLT has the exact command. And if we want to create an SPB on a target system from a SQLT taken on a source system, we have to restore the SQLT repository into the target system, then restore the STS out of the SQLT repository, and last create the SPB out of the STS. All these steps are clearly documented in the SQLT dynamic readme, including exact commands. There is one caveat though: you need SQLT on the source system, and you must restore its repository on the target…

Stand-alone scripts to Migrate a Plan using SPM

Options 1 and 2 above list the steps to take a plan from a source system and use it to implement an SPB in a target system. The question is: How exactly do I perform each of the steps? Yes, there are APIs for each step, but some are a bit difficult to use. That is WHY I have created a set of scripts that pretty much facilitate each of the steps. No magic here, only some time savings. If you want to use these scripts, look for SQLT directory sqlt/utl/spm, which will be available with SQLT 11.4.5.8 on May 10, 2013. If you need these scripts before May 10, then please send me an email or post a comment here.

Written by Carlos Sierra

May 2, 2013 at 8:02 am

How SQLTXPLAIN came to be…

with 8 comments

For those of you who like stories, this is the story of how SQLTXPLAIN came to be. I wrote it as a foreword for Stelios Charalambides’ book on SQLTXPLAIN. If you want to learn more about SQLTXPLAIN you may want to consider reading Stelios’ book.

*****************************************************

Foreword by Carlos Sierra – author of the SQLTXPLAIN tool.

You are about to read this book on the subject of SQL Tuning using SQLTXPLAIN (also referred to as SQLT for short). Chances are you know a bit about what this SQLT tool offers but not much about the story behind it. This foreword provides some background on how this tool became what it is today. I hope you enjoy this behind-the-scenes look at the motivations and events that slowly came together over the years as I’ve worked on the tool you’re about to learn.

In the late 1990s I was a “road warrior” like many consultants back then. I was an Oracle “field support engineer” with pretty good expertise in manufacturing and SQL tuning, and I had chosen to be part of the Short-term Assignments Team. (We had two options back then, short or long-term assignments!). I used to travel a lot from site to site for these assignments.

Manufacturing and SQL Tuning was a good combination for someone like me, willing to go on site and handle some of the big fires affecting large manufacturing corporations using Oracle ERP. Life was intense, and it was good! After several week-long assignments I started noticing some patterns: when trying to diagnose a SQL statement performing poorly, there were many areas to look at, and trace/tkprof would not give me everything I needed to effectively diagnose SQL tuning issues promptly! Through my assignments, I developed a set of flexible scripts. That is how the legendary coe_xplain.sql came to life.

Karl Daday, my manager at the time, supported my endeavors and actually encouraged me to build my own set of tools so I could deliver results in a one-week time frame. Thus, I used coe_xplain.sql on pretty much every performance assignment and always with good results.  Over time I was glad I developed this little tool, since most of my enhancement requests for tkprof are still pending.

One day, while I was happily using my coe_xplain.sql personal script, as well as some other scripts I had developed, a talented DBA at one of the corporations I was working with at the time asked me if he could keep my set of tools for later use. My concern over leaving behind my toys was that I knew from experience that anything you code will follow you your entire life, which is both a reward and a curse. If your code has only a few bugs, you do fine. Otherwise, you feel haunted and hunted for decades!

Nevertheless, I decided to share my coe_xplain.sql and other small tools, with the understanding that their users would take them “as is.” A year later, the same DBA asked me if I would publish my coe_xplain.sql script on Metalink (now known as My Oracle Support), so he could keep getting new versions if I decided to keep it updated. This was a turning point for me. Making a tool available on Metalink back in 2001 meant only one thing to me: I was willing to commit to keeping this tool as free of bugs as possible, and I would have to react to enhancement requests even if that meant declining them all. I knew back then (as I know today) that I always have a hard time saying “no” when I actually can and want to say “yes.” So after coe_xplain.sql was published on Metalink, I quickly started getting questions like: “Can you add this little tiny functionality to your otherwise nice script?”

Late in 2002 the coe_xplain.sql script had become a large script, and I decided it was time to actually upgrade it to PL/SQL. That would mean rewriting the entire functionality but using one PL/SQL package instead of several SQL statements embedded into one large script. In those days I was still part of a team named the “Center of Expertise.” That is why coe_xplain.sql had that prefix “coe_”. Today there are many teams within Oracle sharing the same “CoE” name, so I feel its meaning is somewhat diluted. (Something similar happened to the “BDE” team, which means “Bug Diagnostics and Escalations.” That’s the reason why some of my scripts had and still have the prefix “bde_”.)

I decided it was time to upgrade coe_xplain.sql to something more robust. I no longer wanted to name my scripts after the team I was working for. So in 2002, with its second major version, coe_xplain.sql became SQLTXPLAIN, and I published it on Metalink (My Oracle Support) under note 215187.1, which is still its location today. The name SQLTXPLAIN is loosely derived from “SQL Tuning and Explain Plan”. I had searched the Internet and had not found any references to the SQLTXPLAIN name. I was trying to avoid collisions with other tool names, products, or companies, and as of today I have succeeded, at least in regard to this naming!

SQLTXPLAIN was rapidly adopted by many Oracle ERP teams within Oracle Support and gradually through Oracle Development. Most of the SQLT enhancement requests I used to get in those days were E-Business Suite (EBS) specific, so SQLTXPLAIN became mostly a tool to diagnose SQL statements performing poorly within EBS. From 2002 all the way to 2006, this tool was installed inside the application schema, which was APPS for EBS. So SQLTXPLAIN used to have strong EBS dependencies. It was created inside an application schema, creating and dropping objects there. But don’t panic! It no longer works like this.

Those years between 2002 and 2006 were difficult for me at Oracle. I was extremely busy developing internal applications. I was also getting my master’s degree in computer science, so I had a double challenge on my hands. And that’s not to mention that my wife and I were raising four teenagers! Thus, SQLTXPLAIN was practically frozen until 2007.

I didn’t mention it before, so I will now: SQLTXPLAIN was never an official project with Oracle. SQLTXPLAIN up until 2007 was my weekend pet project (a big one!). I spent pretty much all my personal free time developing and maintaining this tool. It was my baby and still is today. I am very lucky that my wife Kelly Santana, who also works for Oracle, has been so very supportive during all these busy years. Without her patience and understanding I would have had to abandon SQLTXPLAIN and taken it off Metalink years ago!

Late in 2006 I moved to the Server Technologies Center of Expertise (ST CoE) within Oracle. I had been at the Applications CoE before, and now I was getting into the ST CoE. That was “the other side of the house,” “the dark side” as some call it, or simply “the other side.” I took with me all my EBS expertise and all my tools. Steve Franchi, one of the best managers I have had at Oracle, gave me the green light to keep maintaining my tools, so I was going to be able to work on SQLTXPLAIN during my regular working hours. That was a huge incentive for me, so I was very happy in my new home within Oracle. Until then, most of my scripts had actually been worked on only at night and on the weekend. That’s the problem when your hobby and your job blend so smoothly into one; and on top of that, you have the privilege of working from home. So today I would say I take care of SQLTXPLAIN pretty much around the clock, every day of the week. But of course I still sleep and eat!

Once I joined the ST CoE, I was on a mission of my own: I wanted the Server Technologies Support group to get to know SQLTXPLAIN and actually adopt it as much as EBS, so I requested a meeting with one of the legends and leaders at the performance support team. This is how I got to meet Abel Macias. When I asked Abel about using SQLTXPLAIN for ST performance issues, his answer was brusque. I don’t recall his exact words, but the meaning was something like this: “We won’t use it, because it sucks big time.”

Abel is known for his sometimes no-nonsense (but correct) answers. He is a very humble human being, yet way too direct in his observations, which can be intimidating. After my initial shock, I asked him why he thought that way. This opened the door to a short but significant list of deficiencies SQLTXPLAIN had. I walked away demoralized but with a new challenge in front of me: I would take all that constructive criticism and turn it into big enhancements to SQLTXPLAIN. After several weeks of hard work (and way too many cups of coffee), I met Abel again and showed him all those “adjustments” to SQLTXPLAIN. Then and only then did he like it, and Abel and I became friends! Today, we ride our bikes together for miles. And even if we don’t want to, we frequently have short but meaningful conversations on SQL Tuning and SQLTXPLAIN while we enjoy our trail rides, or while having a beer… or two.

With Abel’s input, SQLT became application independent, installed in its own schema, and RAC aware. Since the name SQLTXPLAIN was so unique, it became the name of the application schema into which all the tool objects would be installed. Everything looked fine, but there was something bothering me, and it was the fact that SQLTXPLAIN had been heavily modified and was requiring more and more maintenance. It was time for a third full rewrite. Since the code was already big, I knew I would need a lot of dedicated time with no interruptions at all. I decided to take advantage of all my accumulated vacation time, together with a few holidays; and during December 2009 and January 2010 I pretty much lived in my man-cave developing the third major version of SQLT.

I finished with all the details and testing in April 2010. SQLT now had several packages, and it was getting close to 100,000 lines of code, all developed with two fingers (I still type with two fingers). Some people think SQLT is developed and maintained by a large team. Sorry to disappoint you if you think this way: SQLT was conceived as a tool for someone in support to make his/her life easier, and it is now somehow shared and used by many others working on the complex task of SQL Tuning. Still, the spirit of this tool remains unchanged: a tool to help diagnose a SQL statement that performs poorly (or that generates wrong results).

SQLTXPLAIN, on its third major version from April 2010, was rapidly adopted by the Server Technologies performance team within support. Since joining the ST CoE I have also had the pleasure of developing two internal one-week SQL Tuning workshops. After delivering them dozens of times to more than 700 Oracle professionals around the world, I have collected and incorporated hundreds of enhancements to SQLT that were communicated verbally during class.

From all the good and smart people providing me with ideas on how to enhance SQLT, I want to mention two by name, who in addition to Abel have made exceptional contributions to the feature set of the tool: Mauro Pagano and Mark Jefferys. Mauro, with his constant insight, inspired SQLTXPLAIN’s XPLORE method, which uses brute force analysis to narrow possible culprits of a regression after an upgrade. Mark, with his strong math and science background, showed me several mistakes I had made, especially around statistics and test case creation. With all sincerity, I think SQLT is what it is today thanks to all the constructive feedback I constantly get from very smart people. Sometimes the feedback is brutal, but for the most part it is just asking me to incorporate a thing here or there, or to fix a small bug. As enhancement examples I would mention some big ones: adding Siebel and PeopleSoft awareness in SQLT, and all the health-checks it performs.

I consider myself a very lucky guy to have had the opportunity to work for such a great company as Oracle since 1996; I have also had some great managers who have been very supportive in taking what I believe is the right approach to diagnose difficult SQL Tuning issues. I do not regret a single minute of my personal time that I have dedicated to this SQLTXPLAIN tool. At the beginning it was by me and for me. Now it is from me and many other bright minds to basically anybody who wants to get his/her hands into some serious SQL Tuning in an Oracle database.

When Stelios Charalambides asked my opinion about him writing a book in which SQLTXPLAIN was a central part, I felt honored and happy. Many people have asked me: “When can we have a book on SQLTXPLAIN?” If you are one of those people, please enjoy what Stelios has developed during his own personal time, and thank him for this book! I truly hope you get to enjoy SQLTXPLAIN and even SQL Tuning.

I always say that SQL Tuning is like sushi, you either love it or you hate it! I hope that as you get to learn more, you’ll fall in love with it as I did back in the 1990s. Also keep in mind that with SQL Tuning you never finish learning. Good foundations, curiosity, perseverance, and experience—these are just some of the ingredients to one day feeling comfortable doing SQL Tuning. Cheers!

Written by Carlos Sierra

April 18, 2013 at 7:49 am

A couple of new cool Blogs in the area of Oracle Performance!

with 2 comments

Abel Macias and Stelios Charalambides are now blogging! Abel is one of the true gurus in the area of Oracle performance (both server and SQL Tuning), and Stelios is well versed in Oracle performance and Oracle in general. Both work for Oracle helping out on the Exadata team. And both know SQLTXPLAIN inside and out. Abel has been a key contributor to the creation of SQLTXPLAIN, and Stelios has written a book on SQLTXPLAIN. Please take some time to browse their new blogs. You may want to bookmark them or subscribe for future reference. Please join me in welcoming both! I am so happy to see more Oracle professionals opening up to share their treasured knowledge through blogs of their own!

Written by Carlos Sierra

April 17, 2013 at 11:07 am

How does a Cursor become Bind Aware?

with 7 comments

In 11g we have Adaptive Cursor Sharing (ACS) as a mechanism to allow the generation of multiple optimal execution plans for a query. Before ACS we would only have one optimal execution plan in memory for a query, given an optimizer environment. There were some exceptions to the rule, like having different plans among RAC nodes, or different plans among child cursors when they were not shared. But in essence, before 11g we had one plan in memory for each query.

With ACS we have “bind sensitive” and “bind aware” cursors. The former stage only means “being monitored”, while the latter means: the selectivity of predicates during a soft parse determines which optimal plan to use among the ones in memory. So ACS requires two mechanisms: one to control how to change state from “sensitive” to “aware”, and another to select a plan in memory as per the values of the binds.

Becoming Bind Aware

Column v$sql_cs_statistics.rows_processed seems to drive this mechanism. This column is updated during a hard parse, and it is a fuzzy representation of the amount of data the query manipulates during its execution (correlated to actual rows in the execution plan).

For small values of v$sql_cs_statistics.rows_processed we increment by one v$sql_cs_histogram.bucket_id(0). For medium values we increment by one v$sql_cs_histogram.bucket_id(1). And for large values we increment v$sql_cs_histogram.bucket_id(2). Observations on a test case built on 11.2.0.3 seem to indicate that the boundaries for these 3 buckets are: less than 1,000; between 1,000 and 1,000,000; and more than 1,000,000.

Cursor Sharing histogram buckets 0-2 are updated on every execution of a bind sensitive query. They represent the data volume manipulated on every execution. If there are significant variances, then the query would benefit from ACS and it becomes Bind Aware.
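Both the state of the cursor and its buckets can be watched directly, for example (sketch; the substitution variable &sql_id is the SQL being observed):

```sql
-- One row per child cursor and bucket: watch a cursor ramp up towards bind aware
SELECT s.child_number, s.is_bind_sensitive, s.is_bind_aware,
       h.bucket_id, h."COUNT" AS executions
  FROM v$sql s, v$sql_cs_histogram h
 WHERE s.sql_id = '&sql_id'
   AND h.sql_id = s.sql_id
   AND h.child_number = s.child_number
 ORDER BY s.child_number, h.bucket_id;
```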

I have observed a couple of scenarios where the values stored on these 3 buckets cause the cursor to become bind aware:

  1. When two of the buckets have the same value, and this value is not zero
  2. When the smallest and the largest buckets (0 and 2) have a value other than zero

Keep in mind that the observations above are not comprehensive, and they may change in future releases. But they give you some clues as to WHY in some cases your query seems to be erratic in regards to ACS: WHY sometimes it stays bind sensitive and sometimes it becomes bind aware. Basically, ACS requires a ramp-up process. If your SQL executes mostly in one of the 3 volume ranges (as per the bucket boundaries), it is possible that it sometimes becomes bind aware but not always. In such cases you may want to use the /*+ bind_aware */ CBO Hint. See this Optimizer Blog.

To help you insert the /*+ bind_aware */ CBO Hint into an application that cannot be easily modified, you may want to use sqlt/utl/coe_gen_sql_patch.sql, which is part of SQLTXPLAIN (MOS/Metalink 215187.1). You can also create a SQL Patch as documented in this other Optimizer Blog.

Script sqlt/utl/coe_gen_sql_patch.sql implements a SQL Patch adding, by default, 3 CBO Hints to your SQL: GATHER_PLAN_STATISTICS, MONITOR and BIND_AWARE. These 3 Hints help to force “bind aware” status, and also enable diagnostics on your query. When you execute sqlt/utl/coe_gen_sql_patch.sql you can change the CBO Hints you want in your SQL Patch. I recommend you only pass diagnostic Hints and not access path modifiers.
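For reference, the manual SQL Patch route described in that Optimizer Blog looks roughly like this on 11g (hedged sketch: DBMS_SQLDIAG_INTERNAL is an internal package, the SQL text below is hypothetical, and the text must match the application SQL exactly):

```sql
-- Create a SQL Patch that injects the BIND_AWARE hint without touching the application code
BEGIN
  DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH(
    sql_text  => 'SELECT * FROM emp WHERE deptno = :b1',
    hint_text => 'BIND_AWARE',
    name      => 'bind_aware_patch');
END;
/
```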

Written by Carlos Sierra

March 29, 2013 at 7:04 am