Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘General’ Category

Finding SQL with Performance changing over time

I upgraded my database a couple of weeks ago and now my users complain their application is slower. They do not provide specifics, but they “feel” it is running slower. Sound familiar?

Every once in a while I get a request that goes like this: “how can I find if some SQL on my database is performing worse over time?”

It is hard to deal with the ambiguity of a problem like “finding SQL that performs worse or better over time”. But if you simplify the problem and consider, for example, “Elapsed Time per Execution”, then you can easily produce a script like the one below, which returns a short list of SQL statements that seem to have either regressed or improved over time. It applies linear regression to the ratio between “Elapsed Time per Execution” and its median per SQL.
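
A note on the sign convention: the regression is computed on the ratio (y) over days ago (x), so a SQL statement that was slow several days ago and is fast today yields a positive slope; that is why the script labels positive slopes “IMPROVING”. A toy query with made-up values shows this (REGR_SLOPE comes out positive, about 0.375):

SELECT REGR_SLOPE(ratio, days_ago) slope
  FROM (SELECT 5 days_ago, 2.0 ratio FROM DUAL UNION ALL -- slow 5 days ago
        SELECT 3, 1.0 FROM DUAL UNION ALL
        SELECT 1, 0.5 FROM DUAL)                         -- fast yesterday
/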

So, if you suspect some of your SQL may have regressed and need a hand identifying it, try the script below. It is now part of a small collection of scripts that you can download and use for free from the cscripts link on the right-hand side of this page, under “Downloads”.

----------------------------------------------------------------------------------------
--
-- File name:   sql_performance_changed.sql
--
-- Purpose:     Lists SQL Statements with Elapsed Time per Execution changing over time
--
-- Author:      Carlos Sierra
--
-- Version:     2014/10/31
--
-- Usage:       Lists statements that have changed their elapsed time per execution over
--              some history.
--              Uses the ratio between "elapsed time per execution" and the median of
--              this metric for SQL statements within the sampled history, and using
--              linear regression identifies those that have changed the most; in other
--              words, those where the absolute slope of the linear regression is largest.
--              Positive slopes are considered "improving" while negative ones are
--              considered "regressing".
--
-- Example:     @sql_performance_changed.sql
--
-- Notes:       Developed and tested on 11.2.0.3.
--
--              Requires an Oracle Diagnostics Pack License since AWR data is accessed.
--
--              To further investigate poorly performing SQL use sqltxplain.sql or sqlhc 
--              (or planx.sql or sqlmon.sql or sqlash.sql).
--             
---------------------------------------------------------------------------------------
--
SPO sql_performance_changed.txt;
DEF days_of_history_accessed = '31';
DEF captured_at_least_x_times = '10';
DEF captured_at_least_x_days_apart = '5';
DEF med_elap_microsecs_threshold = '1e4';
DEF min_slope_threshold = '0.1';
DEF max_num_rows = '20';

SET lin 200 ver OFF;
COL row_n FOR A2 HEA '#';
COL med_secs_per_exec HEA 'Median Secs|Per Exec';
COL std_secs_per_exec HEA 'Std Dev Secs|Per Exec';
COL avg_secs_per_exec HEA 'Avg Secs|Per Exec';
COL min_secs_per_exec HEA 'Min Secs|Per Exec';
COL max_secs_per_exec HEA 'Max Secs|Per Exec';
COL plans FOR 9999;
COL sql_text_80 FOR A80;

PRO SQL Statements with "Elapsed Time per Execution" changing over time

WITH
per_time AS (
SELECT h.dbid,
       h.sql_id,
       SYSDATE - CAST(s.end_interval_time AS DATE) days_ago,
       SUM(h.elapsed_time_total) / SUM(h.executions_total) time_per_exec
  FROM dba_hist_sqlstat h, 
       dba_hist_snapshot s
 WHERE h.executions_total > 0 
   AND s.snap_id = h.snap_id
   AND s.dbid = h.dbid
   AND s.instance_number = h.instance_number
   AND CAST(s.end_interval_time AS DATE) > SYSDATE - &&days_of_history_accessed. 
 GROUP BY
       h.dbid,
       h.sql_id,
       SYSDATE - CAST(s.end_interval_time AS DATE)
),
avg_time AS (
SELECT dbid,
       sql_id, 
       MEDIAN(time_per_exec) med_time_per_exec,
       STDDEV(time_per_exec) std_time_per_exec,
       AVG(time_per_exec)    avg_time_per_exec,
       MIN(time_per_exec)    min_time_per_exec,
       MAX(time_per_exec)    max_time_per_exec       
  FROM per_time
 GROUP BY
       dbid,
       sql_id
HAVING COUNT(*) >= &&captured_at_least_x_times. 
   AND MAX(days_ago) - MIN(days_ago) >= &&captured_at_least_x_days_apart.
   AND MEDIAN(time_per_exec) > &&med_elap_microsecs_threshold.
),
time_over_median AS (
SELECT h.dbid,
       h.sql_id,
       h.days_ago,
       (h.time_per_exec / a.med_time_per_exec) time_per_exec_over_med,
       a.med_time_per_exec,
       a.std_time_per_exec,
       a.avg_time_per_exec,
       a.min_time_per_exec,
       a.max_time_per_exec
  FROM per_time h, avg_time a
 WHERE a.dbid = h.dbid -- join on dbid as well, in case AWR holds data for more than one database
   AND a.sql_id = h.sql_id
),
ranked AS (
SELECT RANK () OVER (ORDER BY ABS(REGR_SLOPE(t.time_per_exec_over_med, t.days_ago)) DESC) rank_num,
       t.dbid,
       t.sql_id,
       CASE WHEN REGR_SLOPE(t.time_per_exec_over_med, t.days_ago) > 0 THEN 'IMPROVING' ELSE 'REGRESSING' END change,
       ROUND(REGR_SLOPE(t.time_per_exec_over_med, t.days_ago), 3) slope,
       ROUND(AVG(t.med_time_per_exec)/1e6, 3) med_secs_per_exec,
       ROUND(AVG(t.std_time_per_exec)/1e6, 3) std_secs_per_exec,
       ROUND(AVG(t.avg_time_per_exec)/1e6, 3) avg_secs_per_exec,
       ROUND(MIN(t.min_time_per_exec)/1e6, 3) min_secs_per_exec,
       ROUND(MAX(t.max_time_per_exec)/1e6, 3) max_secs_per_exec
  FROM time_over_median t
 GROUP BY
       t.dbid,
       t.sql_id
HAVING ABS(REGR_SLOPE(t.time_per_exec_over_med, t.days_ago)) > &&min_slope_threshold.
)
SELECT LPAD(ROWNUM, 2) row_n,
       r.sql_id,
       r.change,
       TO_CHAR(r.slope, '990.000MI') slope,
       TO_CHAR(r.med_secs_per_exec, '999,990.000') med_secs_per_exec,
       TO_CHAR(r.std_secs_per_exec, '999,990.000') std_secs_per_exec,
       TO_CHAR(r.avg_secs_per_exec, '999,990.000') avg_secs_per_exec,
       TO_CHAR(r.min_secs_per_exec, '999,990.000') min_secs_per_exec,
       TO_CHAR(r.max_secs_per_exec, '999,990.000') max_secs_per_exec,
       (SELECT COUNT(DISTINCT p.plan_hash_value) FROM dba_hist_sql_plan p WHERE p.dbid = r.dbid AND p.sql_id = r.sql_id) plans,
       REPLACE((SELECT DBMS_LOB.SUBSTR(s.sql_text, 80) FROM dba_hist_sqltext s WHERE s.dbid = r.dbid AND s.sql_id = r.sql_id), CHR(10)) sql_text_80
  FROM ranked r
 WHERE r.rank_num <= &&max_num_rows.
 ORDER BY
       r.rank_num
/

SPO OFF;

Once you get the output of the script above, you can use the one below to list the actual time series for one SQL statement of interest:

----------------------------------------------------------------------------------------
--
-- File name:   one_sql_time_series.sql
--
-- Purpose:     Performance History for one SQL
--
-- Author:      Carlos Sierra
--
-- Version:     2014/10/31
--
-- Usage:       Script sql_performance_changed.sql lists SQL statements whose performance
--              has improved or regressed over some history.
--              This script one_sql_time_series.sql lists the performance time series for
--              one of those SQL statements.
--
-- Parameters:  SQL_ID
--
-- Example:     @one_sql_time_series.sql
--
-- Notes:       Developed and tested on 11.2.0.3.
--
--              Requires an Oracle Diagnostics Pack License since AWR data is accessed.
--
--              To further investigate poorly performing SQL use sqltxplain.sql or sqlhc 
--              (or planx.sql or sqlmon.sql or sqlash.sql).
--             
---------------------------------------------------------------------------------------
--
SPO one_sql_time_series.txt;
SET lin 200 ver OFF;

COL instance_number FOR 9999 HEA 'Inst';
COL end_time HEA 'End Time';
COL plan_hash_value HEA 'Plan|Hash Value';
COL executions_total FOR 999,999 HEA 'Execs|Total';
COL rows_per_exec HEA 'Rows Per Exec';
COL et_secs_per_exec HEA 'Elap Secs|Per Exec';
COL cpu_secs_per_exec HEA 'CPU Secs|Per Exec';
COL io_secs_per_exec HEA 'IO Secs|Per Exec';
COL cl_secs_per_exec HEA 'Clus Secs|Per Exec';
COL ap_secs_per_exec HEA 'App Secs|Per Exec';
COL cc_secs_per_exec HEA 'Conc Secs|Per Exec';
COL pl_secs_per_exec HEA 'PLSQL Secs|Per Exec';
COL ja_secs_per_exec HEA 'Java Secs|Per Exec';

SELECT h.instance_number,
       TO_CHAR(CAST(s.end_interval_time AS DATE), 'YYYY-MM-DD HH24:MI') end_time,
       h.plan_hash_value, 
       h.executions_total,
       TO_CHAR(ROUND(h.rows_processed_total / h.executions_total), '999,999,999,999') rows_per_exec,
       TO_CHAR(ROUND(h.elapsed_time_total / h.executions_total / 1e6, 3), '999,990.000') et_secs_per_exec,
       TO_CHAR(ROUND(h.cpu_time_total / h.executions_total / 1e6, 3), '999,990.000') cpu_secs_per_exec,
       TO_CHAR(ROUND(h.iowait_total / h.executions_total / 1e6, 3), '999,990.000') io_secs_per_exec,
       TO_CHAR(ROUND(h.clwait_total / h.executions_total / 1e6, 3), '999,990.000') cl_secs_per_exec,
       TO_CHAR(ROUND(h.apwait_total / h.executions_total / 1e6, 3), '999,990.000') ap_secs_per_exec,
       TO_CHAR(ROUND(h.ccwait_total / h.executions_total / 1e6, 3), '999,990.000') cc_secs_per_exec,
       TO_CHAR(ROUND(h.plsexec_time_total / h.executions_total / 1e6, 3), '999,990.000') pl_secs_per_exec,
       TO_CHAR(ROUND(h.javexec_time_total / h.executions_total / 1e6, 3), '999,990.000') ja_secs_per_exec
  FROM dba_hist_sqlstat h, 
       dba_hist_snapshot s
 WHERE h.sql_id = '&sql_id.'
   AND h.executions_total > 0 
   AND s.snap_id = h.snap_id
   AND s.dbid = h.dbid
   AND s.instance_number = h.instance_number
 ORDER BY
       h.sql_id,
       h.instance_number,
       s.end_interval_time,
       h.plan_hash_value
/

SPO OFF;

Written by Carlos Sierra

November 2, 2014 at 4:22 am

eSP

Enkitec’s Sizing and Provisioning (eSP) is a new internal tool designed and developed with Oracle Engineered Systems in mind. Thanks to the experience and insights of Randy Johnson, Karl Arao and Frits Hoogland, what began as a pet project for some of us became, over time, a robust APEX/PLSQL application, developed by Christoph Ruepprich and myself, and ready to debut at Oracle Open World 2014.

This post is about eSP: what it does, and how it helps with the sizing and provisioning of Oracle Engineered Systems, or rather, of any system where Oracle runs.

We used to size Engineered Systems using a complex and very useful spreadsheet developed by Randy Johnson and Karl Arao. Now it is eSP’s turn to take the next step and move this effort forward into a more scalable application that sits on top of one of our Exadata machines.

Sizing an Engineered System

Sizing a system can be quite challenging, especially when the current system is composed of several hosts with multiple databases of diverse use, size, version, workload, etc. The new target system may also bring some complexities: as the number of possible configurations grows, finding the right choice becomes harder. Then we also have the challenge of disk redundancy, recovery areas, and the potential benefits of offloading with smart scans, to mention just a few.

At a very high level, sizing a system is about three entities: Resources, Capacity and Utilization. Resources define what I call “demand”, which is basically the set of computational resources of your original system, made up of one or many databases and instances over some hosts. Capacity, which I also call “supply”, is the set of possible target systems with their multiple Configurations; in other words, Engineered Systems, or any other hardware capable of hosting Oracle databases. Utilization, which I may also refer to as “allocation”, is where the magic and the challenge reside. It is a clever and unbiased mapping between databases and configurations, then between instances and nodes. This mapping has to consider, at the very least, CPU footprint, memory for SGA and PGA, database disk space, and throughput in terms of IOPS and MBPS. Additional constraints, as mentioned before, include redundancy and offloading, among others. CPU can be a bit tricky since each CPU make and model has its own characteristics, so mapping them requires the use of SPEC benchmark figures.
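
To illustrate just the CPU piece: demand expressed as busy cores on different CPU models can be normalized by a per-core SPEC rate before being compared against a target configuration. The sketch below is a self-contained, made-up example; the table names and figures are invented for illustration and are not eSP’s actual repository design:

WITH db_cpu_demand AS ( -- hypothetical demand per database
  SELECT 'FINDB' db_name, 'Xeon E5-2699 v3' cpu_model, 12.5 avg_busy_cores FROM DUAL UNION ALL
  SELECT 'HRDB', 'SPARC T5', 6 FROM DUAL
),
cpu_spec_rates AS ( -- invented per-core SPEC rate figures
  SELECT 'Xeon E5-2699 v3' cpu_model, 41 spec_rate_per_core FROM DUAL UNION ALL
  SELECT 'SPARC T5', 23 FROM DUAL
)
SELECT d.db_name,
       d.avg_busy_cores,
       ROUND(d.avg_busy_cores * s.spec_rate_per_core) normalized_cpu_demand
  FROM db_cpu_demand d,
       cpu_spec_rates s
 WHERE s.cpu_model = d.cpu_model
/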

Another challenge a sizing tool has to consider is the variability of the Resources. The question becomes: do we see the Resources as a worst-case scenario, or shall we rather consider them as time series? In other words, do we compute and use peaks, or do we observe the use of Resources over time and then develop some method to aggregate them consistently as time series? If we decide to use a reduced set of data points, do we use peaks or percentiles? If the latter, which percentile is well balanced: 99.9, 99, 95 or maybe 90? How conservative are those values? There are so many questions, and the answer for most of them, as you may guess, is: “it all depends”.
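
AWR can actually provide both views of the same metric. The sketch below contrasts the peak with a few percentiles of one common system metric; since it reads AWR, it requires an Oracle Diagnostics Pack license:

SELECT MAX(maxval) peak,
       PERCENTILE_DISC(0.99) WITHIN GROUP (ORDER BY average) p99,
       PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY average) p95,
       PERCENTILE_DISC(0.90) WITHIN GROUP (ORDER BY average) p90
  FROM dba_hist_sysmetric_summary
 WHERE metric_name = 'Host CPU Utilization (%)'
/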

How eSP Works

Without getting into the technical details, I can say that eSP is an APEX application with a repository on an Oracle database. It takes as input the “Requirements” collected from the databases to be sized, then processes these Requirements and prepares them to be “allocated” into one or more defined hardware configurations. The process is for the most part automated, meaning this: we execute a tool or script on the set of hosts where the databases reside, upload the output of these collectors into eSP, and we are ready to plan and apply “what-if” scenarios. Having an Exadata system as our work engine allows this eSP application to scale quite well. A “what-if” scenario takes as long as it takes to navigate the APEX pages, while all the computations are done in sub-seconds behind the scenes, thanks to Exadata!

Once we load the Resources from the eSP collector script, or from the eAdam tool, we can start playing with the metadata. Since eSP’s set of known Configurations (Capacity) includes current Engineered Systems (X4), allocating Configurations is a matter of seconds, and mapping databases and instances becomes the next step. eSP contains an auto-allocate algorithm for databases and instances, where we can choose between a “balanced” allocation and a “dense” one, with several density factors to choose from (100%, 90%, 80%, 70%, 60% and 50%). With all these automated options, we can try multiple sizing and allocation possibilities in seconds, regardless of whether we are sizing and provisioning for one database or a hundred of them.

eSP and OOW

The Enkitec Sizing and Provisioning (eSP) tool is an internal application that we created to help our customers size their next System or Systems in a sensible manner. The methods we implemented are transparent and unbiased. We are bringing eSP to Oracle Open World 2014. I will personally demo eSP at our assigned booth, #111 at Moscone South. I will be on and off the booth, so if you are interested in a demo please let me know, or contact your Enkitec/Accenture representative. We do prefer appointments, but walk-ins are welcome. Hope to see you at OOW!

Written by Carlos Sierra

September 21, 2014 at 5:40 pm

Posted in eAdam, edb360, Exadata, General, OOW

A healthy way to do an Oracle database health-check

Q: How do I do an Oracle database health-check?

A: It all depends. (Hint: we can use this answer for most Oracle-related questions in general, and performance-related ones in particular, but don’t try it at home.)

This seems like quite a broad question, and yes, it is. Of course there are many ways to proceed with a database health-check, so in this post I’d rather talk about what I think is a healthy way to approach one.

  1. Start with the basics: listen to the users of this database. If nobody complains, then most probably you will have to define the scope yourself. In any case, go on.
  2. Gather environment information. This includes the understanding of the architecture used, the databases on such architecture and the applications on those databases. Also learn who is who: Users, DBAs and Developers.
  3. Gather metrics. I am referring to OS metrics (CPU, IO and Memory) and also database metrics (AWR), together with the alert logs. When gathering these metrics, focus on the time periods where the pain has been reported, and slice the time as small as possible (i.e. one AWR report for each time slice captured, avoiding the common mistake of a single 6-24 hour time frame).
  4. Let the combination of complaints (concerns) and findings in the metrics guide you to the next step. This is where most get lost, so don’t panic, and don’t just dive into whatever looks like contention in your metrics. Keep in mind that the most important metric of all is “user response time”, so anything affecting it must be on your priority list.
  5. There are many more things to check, but they are more in the configuration and sound-practices arena. For example: redundancy of control files, archive mode, backups, non-default parameters, PX setup, memory setup, etc. For these, creating a checklist would help (see the small sketch after this list).
  6. At some point you will have many leads and you will start to lose focus. Do some yoga or go for a walk, then make an A, B, C list of what is really important, what is kind-of important, and what is mere style.
  7. You are not an expert on every aspect of the database (nobody is). So, do not pretend you can do everything yourself. Rely on your peers and/or contacts. Reach out for help in those areas where you feel insecure (feeling insecure is a good thing, much better than feeling secure without any solid foundation).
  8. Once you think you have it nailed, go to your peers, colleagues, boss(es), friends, partner, or strangers if necessary, and solicit a review of your findings and recommendations. Accept feedback. This is key. Maybe what you thought was sound makes absolutely no sense to someone with more experience, or simply with a different view.
  9. Reconsider everything. Avoid the pitfall of assuming that what you have learned in your double-digit years of experience can be applied to every case. For example, if you have done mostly SQL tuning, don’t expect every issue to be SQL tuning. Health-checks are like fortune cookies: you never know what you will get.
  10. Last but not least: Learn from your new experience, practice listening to others, use your common sense, exercise your knowledge, and work as a team member. Add the word “collaboration” to your daily work and maybe one day you will learn you are not alone.
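
Regarding the checklist in step 5, here is a minimal sketch of the kind of quick queries it could start with (assuming SELECT privileges on these V$ views):

SELECT name FROM v$controlfile;           -- expect more than one, on separate disks
SELECT log_mode FROM v$database;          -- ARCHIVELOG for most production databases
SELECT name, value                        -- review every non-default parameter
  FROM v$parameter
 WHERE isdefault = 'FALSE'
 ORDER BY name;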

Cheers — Carlos

Written by Carlos Sierra

November 1, 2013 at 7:27 am

SQLTXPLAIN 11.4.5.8 is now available

What is new in SQLT? A couple of new stand-alone utilities. Both can be used without having to install SQLT, and both are only for 11g+. Look for sqlt/utl/mon and sqlt/utl/spm. The former monitors executions taking longer than 5 seconds (it requires the Oracle Tuning Pack and uses the SQL Monitor report). The latter facilitates the use of SQL Plan Management (SPM), especially for plan migrations. For a complete history of changes to SQLT you can review the change log here: SQLT Changes Log as of 11.4.5.8. The most recent set is this:
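
The sqlt/utl/spm scripts themselves are not reproduced here, but for reference, the underlying 11g+ API for capturing a plan into a SQL Plan Baseline is DBMS_SPM. A minimal sketch (the sql_id below is a placeholder):

SET SERVEROUTPUT ON;
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- load the plan(s) of one SQL from the cursor cache into a SQL Plan Baseline
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '0w2qpuc6u2zsp');
  DBMS_OUTPUT.PUT_LINE(l_plans || ' plan(s) loaded');
END;
/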

11.4.5.8 May 10, 2013

  1. BUG: Fix TRCA parsing errors due to missing CURSOR information: invalid call at line 27 “1” “1” “” “2”.
  2. ENH: List of Cursors to include now “is_obsolete” flag.
  3. ENH: Improved performance on SQLT TC restore script. This SQL was taking too long to execute: UPDATE SQLI$_STATTAB_TEMP SET C5 = :B2 WHERE STATID = :B1 AND TYPE IN (‘C’, ‘I’, ‘T’).
  4. ENH: Traces generated by SQLT are now limited to 200 MB per file.
  5. ENH: New stand-alone set of scripts to migrate a SQL Plan Baseline. Look for readme in sqlt/utl/spm.
  6. ENH: New stand-alone monitoring tool to capture executions of SQL statements taking more than 5 seconds. Look for readme in sqlt/utl/mon.

11.4.5.7 April 5, 2013

  1. BUG: SQL Tuning Advisor was invoked only when SQL existed in memory. Now it is invoked also if SQL resides only in AWR.
  2. BUG: File sqlt_instructions.html had links with XPREXT tag instead of XTRACT.
  3. ENH: TKPROF may error out under some circumstances. SQLT now displays a message to “ignore this error”.
  4. ENH: Improved performance of SQLT COMPARE when number of partitions is high.
  5. ENH: Collection of dba_sql_patches only happens on 11g+. It used to error out on 10g.
  6. ENH: Following error shows now on log file but not in MAIN html report:
    sqlt$a: *** i:DBMS_SPACE.CREATE_INDEX_COST: ORA-01031: insufficient privileges
    sqlt$a: *** i:index_name:XIE1_PCRD_APPROVER
  7. ENH: Best and Worst plans now consider last those plans with no fetches. Before this change it was possible to see killed executions (fetches 0) ranking as good plans.
  8. ENH: Purge of TRCA global temporary tables uses now a TRUNCATE instead of a DELETE.
  9. ENH: SQLT produces now a SQLT_installation_logs_archive.zip with all installation logs.
  10. ENH: Include Product Component Version in installation and main methods logs.
  11. ENH: SQLT log zip includes now alert.log and spfile.
  12. ENH: SQLDX (SQL Dynamic eXtractor) includes now details about tables accessed by execution plans of SQL provided.

11.4.5.6 March 5, 2013

  1. ENH: New script sqlt/utl/sqltq.sql to generate “q” script that contains binds and sql text for given SQL_ID.
  2. ENH: Script sqlt/utl/coe_gen_sql_patch.sql inputs now CBO Hints text in addition to SQL_ID.
  3. ENH: New min methods XPREXT and XPREXC similar to XTRACT and XECUTE but disabling several SQLT features as per sqlt/run/sqltcommon11.sql in order to improve SQLT performance.

11.4.5.5 March 1, 2013

  1. BUG: The following SQLT parameters were not updatable: sql_monitor_reports, awr_reports, addm_reports, sta_time_limit_secs
    Workaround: Execute this update before using API to change value: update SQLTXPLAIN.sqli$_parameter set low_value = 0, high_value = 9999 where name = ‘addm_reports’;
  2. BUG: Eliminate duplicate entries on views: sqlt$_gv$act_sess_hist_pl_v and sqlt$_dba_act_sess_hist_pl_v.
  3. BUG: Incorrect observation “Table rebuild candidate.” when getting error on DBMS_SPACE.CREATE_TABLE_COST.
  4. BUG: Incorrect observation “Index coalesce candidate.” when getting error on DBMS_SPACE.CREATE_INDEX_COST.
  5. BUG: Trap this error on 10g:
    ORA-00942: table or view does not exist
    sqlt$a: *** d:INSERT INTO sqlt$_gv$cell_state (statement_id, statid, begin_end_flag) SELECT :statement_id, :statid, :begin_end_flag FROM gv$cell_state
  6. ENH: Remove health-check references to Bug 3620168, which was fixed on 10.2.
  7. ENH: COUNT(*) on Tables referenced by SQL includes now a FULL(t) Hint. This is to improve performance of COUNT.
  8. ENH: SQLT now sets TRCA to skip extents capture (by setting TRCA tool parameter “capture_extents” to N during SQLT installation). This is to improve XECUTE performance.
  9. ENH: New SQLT parameter “tcb_export_pkg_body” controls whether the 11.2+ Test Case Builder exports the bodies of the packages referenced in the SQL. Default is FALSE.
  10. ENH: View DBA_HIST_PARAMETER is now excluded from SQLT repository export file. This is to improve performance of export and to reduce size of zip.
  11. ENH: Value of parameter “_db_file_exec_read_count” is now displayed in MAIN and COMPARE reports.
  12. ENH: 11g PLSQL_CODE_TYPE is set to NATIVE by default (to improve performance). 10g defaults back to INTERPRETED.
  13. ENH: SQLT XPLORE now iterates “_optimizer_max_permutations” on values: 100, 2000, 40000, 79999 and 80000.
  14. ENH: SQL Patches are now displayed on the MAIN report.
  15. ENH: New script sqlt/utl/flush_cursor.sql flushes one cursor out of the shared pool.
  16. ENH: New script sqlt/utl/coe_gen_sql_patch.sql generates a SQL Patch for one SQL with some diagnostics CBO Hints. It also turns trace 10053 for same SQL.

Written by Carlos Sierra

May 10, 2013 at 10:43 am

Posted in General

Browsing Schema Objects CBO Statistics Versions

Execution Plans have a strong dependency on CBO Statistics. In corner cases, a small change in the Schema Object statistics can wreak havoc on an Execution Plan. If you strongly suspect that the only change to your environment was a CBO Statistics gathering, you may consider restoring them to a prior version and testing the effect on your SQL Execution Plan. If you consider doing this, you’d rather try it on an actual “test” environment first. You could use SQLT XHUME for that, but that is more of an advanced SQLT method… a topic that I will leave for some other day.
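
For reference, the restore itself is done with DBMS_STATS. A minimal sketch, where the schema, table and 3-day window are placeholders:

BEGIN
  -- restore one table's statistics to what they were 3 days ago
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'MYSCHEMA',
    tabname         => 'MYTABLE',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '3' DAY);
END;
/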

Anyway, if you want to see what I call “Statistics Versions”, which are prior Statistics together with Current and Pending Statistics, you may want to make use of some of the SQLT views, available under sqlt/install/sqcvw.sql and also copied into this sqlt_views.doc file (just change the file type to *.txt after download). Then search for these views near the top of the file:

  1. sqlt$_dba_tab_stats_vers_v
  2. sqlt$_dba_ind_stats_vers_v
  3. sqlt$_dba_col_stats_vers_v
  4. sqlt$_dba_hgrm_stats_vers_v

You do not have to install or use SQLT in order to benefit from these views. Simply grab their definitions from the shared file and create your own versions. Then you can monitor fluctuations, or do whatever you need to do with this information.
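
If you only need to see when statistics versions were captured, the data dictionary already offers a simpler (less detailed) view. For example, with placeholder owner and table names:

SELECT owner, table_name, stats_update_time
  FROM dba_tab_stats_history
 WHERE owner = 'MYSCHEMA'
   AND table_name = 'MYTABLE'
 ORDER BY stats_update_time
/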

Written by Carlos Sierra

May 8, 2013 at 7:52 am

Posted in General

A couple of new cool Blogs in the area of Oracle Performance!

Abel Macias and Stelios Charalambides are now blogging! Abel is one of those true gurus in the area of Oracle performance (both server and SQL tuning), and Stelios is well versed in Oracle performance and Oracle in general. Both work for Oracle, helping out on the Exadata team, and both know SQLTXPLAIN inside and out. Abel has been a key contributor to the creation of SQLTXPLAIN, and Stelios has written a book on it. Please take some time to browse their new blogs; you may want to bookmark them or subscribe for future reference. Please join me in welcoming them both! I am so happy to see more Oracle professionals opening up to share their treasured knowledge through blogs of their own!

Written by Carlos Sierra

April 17, 2013 at 11:07 am

2-day open SQL Tuning Class in Pleasanton, CA this month!

Written by Carlos Sierra

February 8, 2013 at 6:15 am

Posted in General