Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘General’ Category


eSP

with 2 comments

Enkitec’s Sizing and Provisioning (eSP) is a new internal tool designed and developed with Oracle Engineered Systems in mind. Thanks to the experience and insights of Randy Johnson, Karl Arao and Frits Hoogland, what began as a pet project for some of us became, over time, a robust APEX/PL/SQL application, developed by Christoph Ruepprich and myself, and ready to debut at Oracle Open World 2014.

This post is about eSP: what it does, and how it helps with the sizing and provisioning of Oracle Engineered Systems, or I would rather say, any system where Oracle runs.

We used to size Engineered Systems using a complex and very useful spreadsheet developed by Randy Johnson and Karl Arao. Now it is eSP’s turn to take the next step and move this effort forward into a more scalable application that sits on top of one of our Exadata machines.

Sizing an Engineered System

Sizing a system can be quite challenging, especially when the current system is composed of several hosts with multiple databases of diverse use, size, version, workload, etc. The new target system may also bring some complexities; as the number of possible configurations grows, finding the right choice becomes harder. Then we also have the challenges of disk redundancy, recovery areas, and the potential benefits of offloading with smart scans, to mention just some of the added complexities.

At a very high level, sizing a system is about three entities: Resources, Capacity and Utilization. Resources define what I call “demand”, which is basically the set of computational resources from your original system, made of one or many databases and instances over some hosts. Capacity, which I also call “supply”, is the set of possible target systems with their multiple configurations: in other words, Engineered Systems, or any other hardware capable of hosting Oracle databases. Utilization, which I may also refer to as “allocation”, is where the magic and the challenge reside. It is a clever and unbiased mapping between databases and configurations, then between instances and nodes. This mapping has to consider, at the very least, CPU footprint, memory for SGA and PGA, database disk space, and throughput in terms of IOPS and MBPS. Additional constraints, as mentioned before, include redundancy and offloading, among others. CPU can be a bit tricky, since each CPU make and model has its own characteristics, so mapping them requires the use of SPEC.
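Since a core on one CPU model is not a core on another, the CPU part of the mapping has to be normalized before demand can be placed on a target. A minimal sketch of the idea in Python, where the per-core rate figures are invented for illustration (the real numbers come from the published SPEC results for the exact make and model):

```python
# Sketch: translate CPU demand between hosts using per-core SPEC rates.
# The rate figures below are invented for illustration; the real numbers
# must be looked up in the published SPEC results for each make and model.

def cores_needed(source_cores_used, source_rate_per_core, target_rate_per_core):
    """Cores consumed on the source host expressed as equivalent cores
    on the target host, scaled by per-core SPEC rate."""
    return source_cores_used * source_rate_per_core / target_rate_per_core

# Example: 16 busy cores on an older host rated 30 per core, moving to a
# newer host rated 60 per core, need only half as many target cores.
print(cores_needed(16, 30.0, 60.0))
```

The same normalization works in the other direction, which is what makes an unbiased comparison between candidate configurations possible.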

Another challenge a sizing tool has to consider is the variability of the Resources. The question becomes: do we see the Resources as a worst-case scenario, or shall we rather consider them as time series? In other words, do we compute and use peaks, or do we observe the use of Resources over time, then develop some methods to aggregate them consistently as time series? If we decide to use a reduced set of data points, do we use peaks or percentiles? If the latter, which percentile is well balanced: 99.9, 99, 95 or maybe 90? How conservative are those values? There are so many questions, and the answer to most of them, as you may guess, is: “it all depends”.
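The peaks-versus-percentiles question can be made concrete with a short Python sketch. The utilization samples below are made up; the point is only to show how a single short spike separates the peak from the high percentiles:

```python
# Sketch: summarize a resource time series (e.g. CPU utilization samples)
# by its peak and by a few high percentiles. The samples are made up;
# one short spike shows how much more conservative the peak can be.
import statistics

samples = [12, 15, 14, 18, 22, 30, 25, 17, 16, 20,
           19, 21, 23, 24, 26, 13, 28, 27, 29, 95]

peak = max(samples)
# quantiles(n=100) returns the 1st..99th percentile cut points
pctls = statistics.quantiles(samples, n=100, method='inclusive')
p99, p95, p90 = pctls[98], pctls[94], pctls[89]

print(peak, p99, p95, p90)
```

Sizing to the peak here provisions for a spike that lasted one sample; sizing to the 95th percentile risks undersizing for it. Which summary is “well balanced” is exactly the judgment call the tool has to expose rather than hide.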

How eSP Works

Without getting into the technical details, I can say that eSP is an APEX application with a repository on an Oracle database. It takes as input collected “Requirements” from the databases to be sized, then processes these Requirements and prepares them to be “allocated” into one or more defined hardware configurations. The process is for the most part automated, meaning this: we execute a tool or script on the set of hosts where the databases reside, then upload the output of these collectors into eSP, and we are ready to plan and apply “what-if” scenarios. Having an Exadata system as our work engine allows the eSP application to scale quite well. A “what-if” scenario takes only as long as it takes to navigate the APEX pages, while all the computations are done in sub-seconds behind the scenes, thanks to Exadata!

Once we load the Resources from the eSP collector script, or from the eAdam tool, we can start playing with the metadata. Since eSP’s set of known Configurations (Capacity) includes current Engineered Systems (X4), allocating Configurations is a matter of seconds; mapping databases and instances becomes the next step. eSP contains an auto-“allocate” algorithm for databases and instances, where we can choose between a “balanced” allocation or a “dense” one, with several density factors to choose from (100%, 90%, 80%, 70%, 60% and 50%). With all these automated options, we can try multiple sizing and allocation possibilities in seconds, regardless of whether we are sizing and provisioning for one database or a hundred of them.
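eSP’s actual allocation algorithm is internal, but the general idea of a “dense” allocation capped by a density factor can be sketched as a simple first-fit placement. Everything below (node names, capacities, instance footprints) is hypothetical:

```python
# Sketch of a "dense" first-fit allocation: place database instances on
# nodes without exceeding a density factor of each node's CPU and memory.
# This illustrates the general idea only; it is not eSP's algorithm.

def allocate(instances, nodes, density=0.8):
    """instances: list of (name, cpu_cores, mem_gb).
    nodes: list of (name, cpu_cores, mem_gb).
    Returns a dict mapping node name -> list of placed instance names."""
    used = {n: [0.0, 0.0] for n, _, _ in nodes}   # cpu, mem consumed
    placement = {n: [] for n, _, _ in nodes}
    for iname, icpu, imem in instances:
        for nname, ncpu, nmem in nodes:
            if (used[nname][0] + icpu <= ncpu * density
                    and used[nname][1] + imem <= nmem * density):
                used[nname][0] += icpu
                used[nname][1] += imem
                placement[nname].append(iname)
                break
        else:  # no node had room under the chosen density factor
            raise ValueError(f"no node can host {iname}")
    return placement

# Hypothetical nodes and instance footprints:
nodes = [("node1", 24, 256), ("node2", 24, 256)]
instances = [("erp1", 10, 100), ("crm1", 8, 80), ("dwh1", 12, 120)]
print(allocate(instances, nodes, density=0.8))
```

Lowering the density factor trades consolidation for headroom: at 50% the same instances would spread over more nodes, which is the balanced-versus-dense choice described above.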

eSP and OOW

The Enkitec Sizing and Provisioning (eSP) tool is an internal application that we created to help our customers size their next system or systems in a sensible manner. The methods we implemented are transparent and unbiased. We are bringing eSP to Oracle Open World 2014. I will personally demo eSP at our assigned booth, #111 at Moscone South. I will be in and out of the booth, so if you are interested in a demo please let me know, or contact your Enkitec/Accenture representative. We do prefer appointments, but walk-ins are welcome. Hope to see you at OOW!

Written by Carlos Sierra

September 21, 2014 at 5:40 pm

Posted in eAdam, edb360, Exadata, General, OOW

A healthy way to do an Oracle database health-check

with 12 comments

Q: How do I do an Oracle database health-check?

A: It all depends. (hint: we can use this answer for most Oracle-related questions in general, and Performance-related ones in particular, but don’t try it at home).

This seems like quite a broad question, and yes, it is. Of course there are many ways to proceed with a database health-check, so in this post I’d rather talk about what I think is a healthy way to approach an Oracle database health-check.

  1. Start with the basics: listen to the users of this database. If nobody complains, then most probably you will have to define the scope yourself. In any case, go on.
  2. Gather environment information. This includes the understanding of the architecture used, the databases on such architecture and the applications on those databases. Also learn who is who: Users, DBAs and Developers.
  3. Gather metrics. I am referring to OS metrics (CPU, IO and Memory), and also database metrics (AWR) together with alert logs. When gathering these metrics focus on time periods where the pain has been reported, and slice the time as small as possible (i.e. AWR reports for each time slice captured, avoiding the 6-24 hours time frame common mistake).
  4. Let the combination of complaints (concerns) and findings from the metrics guide you to the next step. This is where most people get lost. So don’t panic, and don’t just dive into whatever you see as contention in your metrics. Keep in mind that the most important metric of all is “user response time”, so anything affecting it must be on your priority list.
  5. There are many more things to check, but they are more in the configuration and sound practices arena. For example: redundancy on control files, archive mode, backups, non-default parameters, PX setup, memory setup, etc. For these, creating a check list would help.
  6. At some point you will have many leads and you will start to lose focus. Do some yoga or go for a walk, then make an A, B, C list with what is really important, what is kind-of and what is mere style.
  7. You are not an expert on every aspect of the database (nobody is). So, do not pretend you can do everything yourself. Rely on your peers and/or contacts. Reach out for help in those areas where you feel insecure (feeling insecure is a good thing, much better than feeling secure without any solid foundation).
  8. Once you think you have it nailed, go to your peers, colleagues, boss(es), friends, partner, or strangers if necessary, and solicit a review of your findings and recommendations. Accept feedback. This is key. Maybe what you thought was sound makes absolutely no sense to someone with more experience, or simply with a different view.
  9. Reconsider everything. Avoid the pitfall of assuming that what you have learned in your double-digit years of experience can be applied to every case. For example, if you have done mostly SQL Tuning, don’t expect every issue to be SQL Tuning. Health-checks are like fortune cookies: you never know what you will get.
  10. Last but not least: Learn from your new experience, practice listening to others, use your common sense, exercise your knowledge, and work as a team member. Add the word “collaboration” to your daily work and maybe one day you will learn you are not alone.
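For point 3 above, the slicing itself is mechanical: walk the problem window in small steps and produce one report per step. A small Python sketch of just the slicing, with a hypothetical one-hour window cut into 15-minute slices (the reports themselves would still come from your AWR tooling):

```python
# Sketch: cut a reporting window into small time slices so each AWR (or
# Statspack) report covers a short interval, instead of one long 6-24
# hour average that dilutes any spike. Window and slice size are examples.
from datetime import datetime, timedelta

def slices(begin, end, minutes=15):
    """Yield (slice_begin, slice_end) pairs covering [begin, end)."""
    step = timedelta(minutes=minutes)
    t = begin
    while t < end:
        yield t, min(t + step, end)
        t += step

# Hypothetical problem window: 9:00-10:00 on the day the pain was reported.
window = list(slices(datetime(2013, 11, 1, 9, 0), datetime(2013, 11, 1, 10, 0)))
for b, e in window:
    print(b.strftime("%H:%M"), "-", e.strftime("%H:%M"))
```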

Cheers — Carlos

Written by Carlos Sierra

November 1, 2013 at 7:27 am

SQLTXPLAIN is now available

with 12 comments

What is new in SQLT? A couple of new stand-alone utilities. Both can be used without having to install SQLT, and both are for 11g+ only. Look for sqlt/utl/mon and sqlt/utl/spm. The former is to monitor executions taking longer than 5 seconds (it requires the Oracle Tuning Pack and uses the SQL Monitor report). The latter is to facilitate the use of SQL Plan Management (SPM), especially for plan migrations. For a complete log of changes to SQLT you can review the changes log here: SQLT Changes Log. The most recent set of changes is this:

May 10, 2013

  1. BUG: Fix TRCA parsing errors due to missing CURSOR information: invalid call at line 27 “1” “1” “” “2”.
  2. ENH: List of Cursors now includes the “is_obsolete” flag.
  3. ENH: Improved performance of the SQLT TC restore script. This SQL was taking too long to execute: UPDATE SQLI$_STATTAB_TEMP SET C5 = :B2 WHERE STATID = :B1 AND TYPE IN ('C', 'I', 'T').
  4. ENH: Traces generated by SQLT are now limited to 200 MB per file.
  5. ENH: New stand-alone set of scripts to migrate a SQL Plan Baseline. Look for the readme in sqlt/utl/spm.
  6. ENH: New stand-alone monitoring tool to capture executions of SQL statements taking more than 5 seconds. Look for the readme in sqlt/utl/mon.

April 5, 2013

  1. BUG: SQL Tuning Advisor was invoked only when SQL existed in memory. Now it is invoked also if SQL resides only in AWR.
  2. BUG: File sqlt_instructions.html had links with XPREXT tag instead of XTRACT.
  3. ENH: TKPROF may error out under some circumstances. SQLT now displays a message to “ignore this error”.
  4. ENH: Improved performance of SQLT COMPARE when number of partitions is high.
  5. ENH: Collection of dba_sql_patches only happens on 11g+. It used to error out on 10g.
  6. ENH: Following error shows now on log file but not in MAIN html report:
    sqlt$a: *** i:DBMS_SPACE.CREATE_INDEX_COST: ORA-01031: insufficient privileges
    sqlt$a: *** i:index_name:XIE1_PCRD_APPROVER
  7. ENH: Best and Worst plans now rank plans with no fetches last. Before this change it was possible to see killed executions (fetches = 0) ranking as good plans.
  8. ENH: Purge of TRCA global temporary tables now uses a TRUNCATE instead of a DELETE.
  9. ENH: SQLT now produces a zip with all installation logs.
  10. ENH: Include Product Component Version in installation and main methods logs.
  11. ENH: SQLT log zip now includes alert.log and spfile.
  12. ENH: SQLDX (SQL Dynamic eXtractor) now includes details about tables accessed by the execution plans of the SQL provided.

March 5, 2013

  1. ENH: New script sqlt/utl/sqltq.sql to generate a “q” script that contains the binds and SQL text for a given SQL_ID.
  2. ENH: Script sqlt/utl/coe_gen_sql_patch.sql now accepts CBO Hints text in addition to a SQL_ID.
  3. ENH: New min methods XPREXT and XPREXC, similar to XTRACT and XECUTE but with several SQLT features disabled as per sqlt/run/sqltcommon11.sql, in order to improve SQLT performance.

March 1, 2013

  1. BUG: The following SQLT Parameters were not updatable: sql_monitor_reports, awr_reports, addm_reports, sta_time_limit_secs.
    Workaround: Execute this update before using the API to change the value: update SQLTXPLAIN.sqli$_parameter set low_value = 0, high_value = 9999 where name = 'addm_reports';
  2. BUG: Eliminate duplicate entries on views: sqlt$_gv$act_sess_hist_pl_v and sqlt$_dba_act_sess_hist_pl_v.
  3. BUG: Incorrect observation “Table rebuild candidate.” when getting error on DBMS_SPACE.CREATE_TABLE_COST.
  4. BUG: Incorrect observation “Index coalesce candidate.” when getting error on DBMS_SPACE.CREATE_INDEX_COST.
  5. BUG: Trap this error on 10g:
    ORA-00942: table or view does not exist
    sqlt$a: *** d:INSERT INTO sqlt$_gv$cell_state (statement_id, statid, begin_end_flag) SELECT :statement_id, :statid, :begin_end_flag FROM gv$cell_state
  6. ENH: Remove health-check references to Bug 3620168, which was fixed on 10.2.
  7. ENH: COUNT(*) on Tables referenced by SQL includes now a FULL(t) Hint. This is to improve performance of COUNT.
  8. ENH: SQLT now sets TRCA to skip extents capture (by setting TRCA tool parameter “capture_extents” to N during SQLT installation). This is to improve XECUTE performance.
  10. ENH: New SQLT parameter “tcb_export_pkg_body” controls whether the 11.2+ Test Case Builder exports the bodies of the packages referenced in the SQL. Default is FALSE.
  10. ENH: View DBA_HIST_PARAMETER is now excluded from SQLT repository export file. This is to improve performance of export and to reduce size of zip.
  11. ENH: Value of parameter “_db_file_exec_read_count” is now displayed in MAIN and COMPARE reports.
  12. ENH: 11g PLSQL_CODE_TYPE is set to NATIVE by default (to improve performance). 10g defaults back to INTERPRETED.
  13. ENH: SQLT XPLORE now iterates “_optimizer_max_permutations” over the values 100, 2000, 40000, 79999 and 80000.
  14. ENH: SQL Patches are now displayed on the MAIN report.
  15. ENH: New script sqlt/utl/flush_cursor.sql flushes one cursor out of the shared pool.
  16. ENH: New script sqlt/utl/coe_gen_sql_patch.sql generates a SQL Patch for one SQL with some diagnostics CBO Hints. It also turns on 10053 trace for the same SQL.

Written by Carlos Sierra

May 10, 2013 at 10:43 am

Posted in General

Browsing Schema Objects CBO Statistics Versions

leave a comment »

Execution Plans have a strong dependency on CBO Statistics. In corner cases, a small change to the Schema Object statistics creates havoc in an Execution Plan. If you strongly suspect the only change to your environment was a CBO Statistics gathering, you may consider restoring them to a prior version and testing the effect on your SQL Execution Plan. If you consider doing this, you would rather try it on an actual “test” environment. You could use SQLT XHUME for that, but that is more of an advanced SQLT method… a topic that I will leave for some other day.

Anyway, if you want to see what I call “Statistics Versions”, which are prior Statistics together with Current and Pending Statistics, you may want to make use of some of SQLT’s views, available under sqlt/install/sqcvw.sql and also copied into this sqlt_views.doc file (just change the file type to *.txt after download). Then search for these views near the top of the file:

  1. sqlt$_dba_tab_stats_vers_v
  2. sqlt$_dba_ind_stats_vers_v
  3. sqlt$_dba_col_stats_vers_v
  4. sqlt$_dba_hgrm_stats_vers_v

You do not have to install or use SQLT in order to benefit from these views. Simply grab them from the shared file and create your own versions. Then you can monitor fluctuations or do whatever you need to do with this information.

Written by Carlos Sierra

May 8, 2013 at 7:52 am

Posted in General

A couple of new cool Blogs in the area of Oracle Performance!

with 2 comments

Abel Macias and Stelios Charalambides are now blogging! Abel is one of those true gurus in the area of Oracle performance (both server and SQL Tuning), and Stelios is well versed in Oracle performance and Oracle in general. Both work for Oracle, helping out on the Exadata team, and both know SQLTXPLAIN inside and out. Abel has been a key contributor to the creation of SQLTXPLAIN, and Stelios has written a book on SQLTXPLAIN. Please take some time to browse their new blogs. You may want to bookmark them or subscribe for future reference. Please join me in welcoming them both! I am so happy to see more Oracle professionals opening up to share their treasured knowledge through blogs of their own!

Written by Carlos Sierra

April 17, 2013 at 11:07 am

2-day open SQL Tuning Class in Pleasanton, CA this month!

leave a comment »

Written by Carlos Sierra

February 8, 2013 at 6:15 am

Posted in General

SQL Tuning Workshop I (updated schedule of sessions)

leave a comment »

I opened three sessions of the internal SQL Tuning Workshop I.

  • June 11-14, Orlando, FL
  • July 9-13, Toronto, Canada
  • July 30-August 3, Colorado Springs, CO

This intermediate to advanced class includes these topics:

  • Explaining the Explain Plan
  • Statistics Foundations
  • CBO Fundamentals I
  • Statistics Best Practices
  • CBO Fundamentals II
  • Extended Statistics
  • Dynamic Sampling
  • Event 10053 Tracing the CBO
  • Event 10046 Tracing the Execution
  • SQLT Foundations
  • SQLT Intermediate
  • Real-life Case Studies

I still have some seats available for Orlando and Colorado Springs. For registration, please follow the instructions at the internal link.

Written by Carlos Sierra

May 29, 2012 at 2:43 pm

Welcome to my blog on Oracle SQL Tuning!

with one comment

It has been a while since I first thought of starting a blog on Oracle SQL Tuning. Over the past decade or so I have accumulated a lot of practical knowledge in this interesting area. Even though I keep sharing this knowledge with my fellow co-workers at Oracle, I always feel like I should expand my horizon to the larger community out there, dealing with the same kinds of issues that I do, re-discovering the hard way many things that we already know.

After an enlightening moment while participating as a speaker at the awesome Hotsos Symposium 2012, I decided to open up and reach out. I have been learning from others through their valuable blogs. Now it is time to pay back, and share with whoever is interested in Oracle SQL Tuning, whatever I have learned over the years.

Nothing that I post is Oracle confidential. I blog about what I have learned through others, and through carefully crafted models that demonstrate how things actually work. I also blog about the tools that I have created over the years, and which I have made available to the Oracle community through My Oracle Support (MOS).

This is a friendly place. If you come here, I hope it is because you want to learn something new, or you want to share something you know. I will make mistakes, and I expect you to let me know when what I write seems to be wrong. I do my best to minimize the number of mistakes I make, but I am far from perfect.

With all that said, I truly welcome you to this place.

Written by Carlos Sierra

April 1, 2012 at 12:58 pm

Posted in General