Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘Exadata’ Category

Smart Scans efficiency chart for Oracle Engineered Systems

with 4 comments

If you manage an Oracle Engineered System you may wonder how well your Smart Scans are performing. Are you taking full advantage of Exadata Optimizations? If so, how do you measure them?

Uwe Hesse explains some important Exadata statistics well. For some time now, eDB360 has included a report on Smart Scan efficiency, which is nothing but a Google Chart on top of the relevant statistics.

The sample chart below is from a data warehouse (DW) application. It shows in blue that bytes eligible for offloading are around 95%, which denotes a large amount of the desired full scans. We see in red that between 80 and 95% of the expected I/O did not hit the network, i.e. it was saved (somewhat incorrectly referred to as “IO Saved”, since what is actually saved is the network traffic). And in yellow we see that between 30 and 45% of the I/O was entirely avoided thanks to Storage Indexes. So, with 80-95% of the expected data not going through the network and 30-45% of the I/O entirely eliminated, I could conclude that Exadata Optimizations are well applied on this DW.

[Screenshot: Smart Scans efficiency chart]

If you have SQL*Plus access to an Engineered System and want to produce a chart like this, simply download and execute the free eDB360 tool. This tool installs nothing on your database!
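If you just want a rough idea of these ratios before running the tool, they can be approximated from V$SYSSTAT. The query below is only a minimal sketch (not the eDB360 report itself), assuming the usual Exadata statistics; the formulas are my approximation of the metrics described above, while eDB360 computes equivalent ratios per AWR snapshot to build the time series.

-- Minimal sketch (not the eDB360 report): approximate Smart Scan efficiency
-- ratios from cumulative V$SYSSTAT counters since instance startup.
SELECT ROUND(100 * eligible / NULLIF(total_read, 0), 1)                AS pct_eligible_for_offload,   -- "blue"
       ROUND(100 * (eligible - interconnect) / NULLIF(eligible, 0), 1) AS pct_io_saved,               -- "red"
       ROUND(100 * storage_index / NULLIF(total_read, 0), 1)           AS pct_saved_by_storage_index  -- "yellow"
  FROM (SELECT SUM(CASE name WHEN 'physical read total bytes' THEN value END) AS total_read,
               SUM(CASE name WHEN 'cell physical IO bytes eligible for predicate offload' THEN value END) AS eligible,
               SUM(CASE name WHEN 'cell physical IO interconnect bytes returned by smart scan' THEN value END) AS interconnect,
               SUM(CASE name WHEN 'cell physical IO bytes saved by storage index' THEN value END) AS storage_index
          FROM v$sysstat);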

Written by Carlos Sierra

November 8, 2016 at 10:26 am

Posted in edb360, Exadata, Tools

eSP

with 2 comments

Enkitec’s Sizing and Provisioning (eSP) is a new internal tool designed and developed with Oracle Engineered Systems in mind. Thanks to the experience and insights of Randy Johnson, Karl Arao and Frits Hoogland, what began as a pet project for some of us became, over time, an actual robust APEX/PLSQL application, developed by Christoph Ruepprich and myself, and ready to debut at Oracle Open World 2014.

This posting is about eSP: what it does, and how it helps with the sizing and provisioning of Oracle Engineered Systems, or I would rather say, any System where Oracle runs.

We used to size Engineered Systems using a complex and very useful spreadsheet developed by Randy Johnson and Karl Arao. Now it is eSP’s turn to take the next step and move this effort forward into a more scalable application that sits on top of one of our Exadata machines.

Sizing an Engineered System

Sizing a System can be quite challenging, especially when the current system is composed of several hosts with multiple databases of diverse use, size, versions, workloads, etc. The new target system may also bring some complexities: as the number of possible configurations grows, finding the right choice becomes harder. Then we also have the challenges of disk redundancy, recovery areas, and the potential benefits of offloading with its smart scans, just to mention a few.

At a very high level, Sizing a System is about 3 entities: Resources, Capacity and Utilization. Resources define what I call “demand”, which is basically the set of computational resources from your original System, made of one or many databases and instances over some hosts. Capacity, which I also call “supply”, is the set of possible target Systems with their multiple Configurations; in other words, Engineered Systems, or any other hardware capable of hosting Oracle databases. Utilization, which I may also refer to as “allocation”, is where the magic and the challenge reside. It is a clever and unbiased mapping between databases and configurations, then between instances and nodes. This mapping has to consider, at the very least, CPU footprint, memory for SGA and PGA, database disk space, and throughput in terms of IOPS and MBPS. Additional constraints, as mentioned before, include redundancy and offloading among others. CPU can be a bit tricky since each CPU make and model has its own characteristics, so mapping them requires the use of SPEC.
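To make the CPU-mapping idea concrete, here is a hypothetical sketch only; the table and column names below are made up for illustration and are not the eSP repository schema. The demand of each source host is scaled by the ratio of per-core SPEC rates between its CPU model and the target CPU model.

-- Hypothetical sketch of the CPU normalization idea; source_hosts and
-- spec_ratings are illustrative names, not eSP's actual schema.
SELECT h.host_name,
       h.avg_busy_cores,
       ROUND(h.avg_busy_cores * src.spec_rate_per_core / tgt.spec_rate_per_core, 2) AS cores_needed_on_target
  FROM source_hosts h
  JOIN spec_ratings src ON src.cpu_model = h.cpu_model
  JOIN spec_ratings tgt ON tgt.cpu_model = '&target_cpu_model'
 ORDER BY h.host_name;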

Another challenge a Sizing tool has to consider is the variability of the Resources. The question becomes: do we see the Resources as a worst-case scenario, or shall we rather consider them as time series? In other words, do we compute and use peaks, or do we observe the use of Resources over time and then develop some method to aggregate them consistently as time series? If we decide to use a reduced set of data points, do we use peaks or percentiles? If the latter, which percentile is well balanced: 99.9, 99, 95 or maybe 90? How conservative are those values? There are so many questions, and the answer to most of them, as you may guess, is: “it all depends”.
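As a small illustration of the peak-versus-percentile question, the following sketch compares both aggregations over a CPU time series. It assumes a hypothetical per-snapshot table cpu_usage_per_snap(snap_time, busy_cores) prepared beforehand from the collected data; it is not eSP’s code.

-- Peak vs. percentile aggregation of a CPU time series (illustrative only;
-- cpu_usage_per_snap is a hypothetical table, not part of eSP).
SELECT MAX(busy_cores)                                           AS peak,
       PERCENTILE_DISC(0.999) WITHIN GROUP (ORDER BY busy_cores) AS pctl_99_9,
       PERCENTILE_DISC(0.99)  WITHIN GROUP (ORDER BY busy_cores) AS pctl_99,
       PERCENTILE_DISC(0.95)  WITHIN GROUP (ORDER BY busy_cores) AS pctl_95,
       PERCENTILE_DISC(0.90)  WITHIN GROUP (ORDER BY busy_cores) AS pctl_90
  FROM cpu_usage_per_snap;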

How eSP Works

Without getting into the technical details, I can say that eSP is an APEX application with a repository on an Oracle database. It takes as input the “Requirements” collected from the databases to be sized, then processes these Requirements and prepares them to be “Allocated” into one or more defined hardware configurations. The process is for the most part automated, meaning this: we execute a tool or script on the set of hosts where the databases reside, then upload the output of these collectors into eSP, and we are ready to plan and apply “what-if” scenarios. Having an Exadata System as our work engine allows this eSP application to scale quite well. A “what-if” scenario takes as long as it takes to navigate the APEX pages, while all the computations are done in sub-seconds behind the scenes, thanks to Exadata!

Once we load the Resources from the eSP collector script, or from the eAdam tool, we can start playing with the metadata. Since eSP’s set of known Configurations (Capacity) includes current Engineered Systems (X4), allocating Configurations is a matter of seconds, and mapping databases and instances becomes the next step. eSP contains an auto-allocate algorithm for databases and instances, where we can choose between a “balanced” allocation or a “dense” one, with several density factors to choose from (100%, 90%, 80%, 70%, 60% and 50%). With all these automated options, we can try multiple sizing and allocation possibilities in seconds, regardless of whether we are Sizing and Provisioning for one database or a hundred of them.

eSP and OOW

The Enkitec Sizing and Provisioning (eSP) tool is an internal application that we created to help our customers Size their next System or Systems in a sensible manner. The methods we implemented are transparent and unbiased. We are bringing eSP to Oracle Open World 2014. I will personally demo eSP at our assigned booth, #111 at Moscone South. I will be on and off the booth, so if you are interested in a demo please let me know, or contact your Enkitec/Accenture representative. We do prefer appointments, but walk-ins are welcome. Hope to see you at OOW!

Written by Carlos Sierra

September 21, 2014 at 5:40 pm

Posted in eAdam, edb360, Exadata, General, OOW

Meet “edb360”: a free tool that provides a 360-degree view of an Oracle database

with 39 comments

Simply put: edb360 is a new free tool that provides a 360-degree view of an Oracle database.

What is “edb360“?

This “edb360” tool is the product of a collaborative effort of some very smart guys, and me. Special thanks to Frits Hoogland, Karl Arao, Randy Johnson, Martin Bach, Kyle Hailey, Tanel Poder, Alex Fatkulin, Mauro Pagano, Abel Macias, Jon Adams and Jack Agustin. These guys helped me to envision edb360, some directly and some indirectly, but their help and shared knowledge motivated me to develop edb360 and make it available today.

The edb360 tool started as a quick and dirty “script” to gather basic information about a database without knowing anything about it beforehand. The first rule for edb360 was: it has to install nothing in the database. The second rule became: it has to provide some insight about a database.

The output is presented, for the most part, in 3 formats: HTML, Text and Comma-Separated Values (CSV). Why? HTML and Text can be easily used to consolidate important findings into a Word report. Sometimes HTML is more useful and sometimes Text is better. Then CSV is used to produce charts out of Performance Trends. Some people (me included) can visualize trends more easily with a graph.

What about other tools?

Of course there are wonderful tools that can help in this arena, like Oracle Enterprise Manager (OEM) or Oracle’s Automatic Workload Repository (AWR). So why not use those tools? Well, if I had access to OEM, or I knew beforehand which time intervals I want to analyze with AWR, then I would not have a strong need for edb360. The reality we consultants face when we are getting acquainted with a system is that we are not given any access to the database of interest (usually production). And asking for a server account feels like asking for Coke’s secret formula: we simply cannot poke the database at our own will, and that is understandable. So, what is our second best? Please run this script that installs nothing and generates a zip file with some metadata from your system. The script is plain text and its output is also plain text (HTML, text and CSV files). So, any DBA or System Administrator can validate that no customer confidential data is extracted or exposed. A win-win!

If the system we want to understand is an Exadata system, we can also request an Exacheck output; if it is not an Exadata system but a RAC cluster, there is Raccheck. These two tools, available through My Oracle Support (MOS), make a good companion for edb360. In other words, edb360 is not a replacement for the other two, but more of an add-on or companion.

Why is edb360 free?

Why not? Often I get asked: why do you give away the tools and scripts you develop? The answer is simple: tools, scripts, white papers, blog entries like this one, in my mind they all represent the same thing: sharing knowledge with our Oracle community. I wish for a community where knowledge (and tools) flow for all to benefit. Let’s say the personal time I invest building tools and scripts kind of makes up for my lame blog postings. 😉

What is the catch?

No catch. Just be aware that edb360 makes use of some DBA_HIST views and ASH data, and those are part of the Oracle Diagnostics Pack. So when executing the tool it will ask you to indicate whether your site has those licenses. Your answer determines the scope of the output. If you specify that you have a license for the Oracle Diagnostics Pack, then your edb360 output includes pieces from AWR and ASH; otherwise AWR and ASH are not accessed.

About versions, feedback and support

For the most part, I am committed to maintaining this tool as my personal time permits. That means I can only work on it during odd hours and not every day. Nothing different from SQLTXPLAIN during the first few years of its existence, so I am not scared. Also keep in mind that edb360 is a work in progress, and version v1405 is the first one I feel comfortable sharing with the community. In other words, it is far from perfect and I foresee it growing in multiple directions.

If you like this tool and want to enhance its output, get SQLHC from MOS 1366133.1 and place the sqlhc.sql script into the same edb360/sql directory. By doing so, you will also get 3 SQL health-checks. In other words, edb360 is SQLHC aware.

Conclusion

If you like free tools and have a use for this edb360 tool, you might as well download it and give it a shot. Nothing to lose (besides a few minutes of your spare time). A sample output is also available under the same link above.

Life is Good!

Written by Carlos Sierra

February 19, 2014 at 7:34 pm

Exadata Optimizations and SQLTXPLAIN Courses

leave a comment »

I will be delivering a couple of courses soon: one in January and the second in February. I will keep posting upcoming Training and Conferences on a new link at the right margin of this blog.

Exadata Optimizations Jan 13-14

This 2-day “Exadata Optimizations” course is for Developers and DBAs who are new to Exadata and need to ramp up quickly. As the name implies, its focus is on Exadata Optimizations. We talk about Smart Scans, Storage Indexes, Smart Flash Cache, Hybrid Columnar Compression (HCC) and Parallel Execution (PX). This course is hands-on, with a fair amount of demos and labs.

SQLTXPLAIN (SQLT) Feb 20-21

This 2-day “SQL Tuning with SQLTXPLAIN” course shows how to use SQLT to actually do SQL Tuning. We go over the yin-yang of the CBO, meaning: Plan Flexibility versus Plan Stability. We use SQLT for labs and we also go over some real-life SQL Tuning cases. If you are currently using SQLT, you are welcome to bring a SQLT Report to class and we can review it there.

Conclusion

New year, new resolutions. I will be investing part of my time sharing knowledge through formal courses and conferences. These days it is hard to find the time and budget to keep our knowledge on the edge, but again and again I see that many of our daily struggles could be mitigated by some concise technical training. So I encourage you to add some training to your list of resolutions for this new year; or at the very least, to get and read some fresh books.

Happy New Year 2014!

Written by Carlos Sierra

December 27, 2013 at 1:24 pm

Counting rows fast

with one comment

A friend of mine asked me last night basically this: “How is it that SQLTXPLAIN counts rows?”. In particular, he was referring to the use of the SAMPLE clause of the SELECT statement. Look at this excerpt from a SQLT log:

SQL_ID a9x1kc4ymyhkz
--------------------
SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e4
FROM "XYPZ"."INSTRUMENT" SAMPLE (.01) t

SQL_ID 025v6k1032t69
--------------------
SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e5
FROM "XYPZ"."POSITION_COMPOSITION" SAMPLE (.001) t

SQL_ID 8rby3340xpd9k
--------------------
SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e5
FROM "XYPZ"."POSITION_EVENT" SAMPLE (.001) t

WHY is it that SQLT has to count rows?

SQLT has to count rows so it can report, side by side, DBA_TABLES.NUM_ROWS and COUNT(*) for each Table. This is an easy way to see if your statistics are way off, and this mechanism existed in SQLT well before DBA_TAB_MODIFICATIONS came into existence. Actually, SQLT uses both methods to health-check how stale your Table statistics are.
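For one table, the idea is simply this (an illustrative query, not SQLT’s code; the SH schema and SALES table are just examples):

-- Compare the stored statistic with an actual count for one table
-- (illustration only; schema and table name are examples).
SELECT t.num_rows,
       c.cnt AS count_star,
       ROUND(100 * (c.cnt - t.num_rows) / NULLIF(c.cnt, 0), 1) AS pct_difference
  FROM dba_tables t,
       (SELECT /*+ FULL(s) PARALLEL */ COUNT(*) cnt FROM sh.sales s) c
 WHERE t.owner = 'SH'
   AND t.table_name = 'SALES';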

The conundrum here is: “I use SQLT because I want to diagnose a performance issue on a QUERY on top of large Tables, but I do not want SQLT to take a long time just to produce a COUNT(*) of my Tables…”.

Fast versus Precise

In Performance tuning there is always a trade-off: you want X but you sacrifice Y. Counting rows is no different. Do you want it faster? Then you sacrifice precision. The SAMPLE clause of the SELECT statement allows you to do exactly that (syntax below):

SAMPLE [ BLOCK ] ( sample_percent ) [ SEED ( seed_value ) ]

So, if you specify a 10% sample size then you have to multiply the COUNT(*) by 10. If you sample 1% you multiply the COUNT(*) by 100. In large Tables, if you sample, let’s say, 0.1%, your multiplier becomes 1,000, which is the same as 1e3 (10**3 or 10^3, depending on where you went to school). The sample size can be as small as 0.000001 and as large as 100 (but without including 100 itself). It represents a probability more than an actual sample size.

The optional BLOCK clause simply says: sample blocks instead of rows. And the optional SEED clause tries to provide some consistency in the result of the count when you use the same value for two executions of the exact same count. This SEED clause takes a value between 0 and 4,294,967,295.
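For example, the following estimates the row count of a large table by sampling 0.1% of its blocks and multiplying by 1,000; SEED makes the result repeatable across executions (SALES_BIG is just an example table name):

-- Estimate rows by sampling 0.1% of the blocks and extrapolating.
-- BLOCK samples whole blocks (faster, less precise); SEED makes it repeatable.
SELECT COUNT(*) * 1e3 AS estimated_rows
  FROM sales_big SAMPLE BLOCK (0.1) SEED (42);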

How SQLT counts rows

SQLT has over 40 tool parameters. One of them is count_star_threshold, with a seeded (default) value of 10,000.

SQLT includes a small algorithm (below) that determines the SAMPLE size according to the estimated size of the Table itself, by looking at its statistics as per DBA_TABLES.NUM_ROWS. No statistics? Then it skips the sample and does a normal full scan. If the Table is expected to be smaller than count_star_threshold, it also does a full scan, capped at 10x this threshold. After that, it uses a sample size inversely proportional to the Table size: the bigger the Table, the smaller the sample.

SQLT also forces a full Table scan and invokes Parallel Execution (PX) as a method to expedite the count. This count can be really fast on Exadata systems as you can imagine.

 /* -------------------------
 *
 * private perform_count_star
 *
 * called by: sqlt$i.common_calls and sqlt$i.remote_xtract
 *
 * ------------------------- */
PROCEDURE perform_count_star (p_statement_id IN NUMBER)
IS
  l_sql    VARCHAR2(32767);
  l_number NUMBER;
  l_count  NUMBER;
BEGIN
  write_log('=> perform_count_star');

  IF sqlt$a.get_param_n('count_star_threshold') = 0 THEN
    write_log('skip "count_star" as per corresponding parameter');
  ELSE
    FOR i IN (SELECT owner, table_name, num_rows, source
                FROM &&tool_administer_schema..sqlt$_dba_all_tables_v
               WHERE statement_id = p_statement_id
               ORDER BY owner, table_name)
    LOOP
      -- choose a capped plain count or a sample size according to the
      -- estimated table size as per DBA_TABLES.NUM_ROWS
      IF i.num_rows IS NULL THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*)
FROM "'||i.owner||'"."'||i.table_name||'" t WHERE ROWNUM <= :number';
        l_number := sqlt$a.get_param_n('count_star_threshold');
      ELSIF i.num_rows < sqlt$a.get_param_n('count_star_threshold') THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*)
FROM "'||i.owner||'"."'||i.table_name||'" t WHERE ROWNUM <= :number';
        l_number := sqlt$a.get_param_n('count_star_threshold') * 10;
      ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e1) THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e1
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
        l_number := 1e1;
      ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e2) THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e2
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
        l_number := 1e0;
      ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e3) THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e3
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
        l_number := 1/1e1;
      ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e4) THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e4
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
        l_number := 1/1e2;
      ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e5) THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e5
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
        l_number := 1/1e3;
      ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e6) THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e6
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
        l_number := 1/1e4;
      ELSIF i.num_rows < (sqlt$a.get_param_n('count_star_threshold') * 1e7) THEN
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e7
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
        l_number := 1/1e5;
      ELSE
        l_sql := 'SELECT /*+ FULL(t) PARALLEL */ COUNT(*) * 1e8
FROM "'||i.owner||'"."'||i.table_name||'" SAMPLE (:number) t';
        l_number := 1/1e6;
      END IF;

      -- replace the placeholder with the computed ROWNUM cap or sample size
      l_sql := REPLACE(l_sql, ':number', l_number);
      write_log('num_rows='||i.num_rows||' sql='||l_sql);
      l_count := NULL;

      BEGIN
        EXECUTE IMMEDIATE l_sql INTO l_count;
        write_log(l_count||' rows counted');
      EXCEPTION
        WHEN OTHERS THEN
          write_log('** '||SQLERRM);
          write_log(l_sql||' failed with error above. Process continues.');
      END;

      -- store the (extrapolated) count in the SQLT repository
      IF l_count IS NOT NULL THEN
        IF i.source = 'DBA_TABLES' THEN
          UPDATE &&tool_repository_schema..sqlt$_dba_tables
             SET count_star = l_count
           WHERE statement_id = p_statement_id
             AND owner = i.owner
             AND table_name = i.table_name;
        ELSIF i.source = 'DBA_OBJECT_TABLES' THEN
          UPDATE &&tool_repository_schema..sqlt$_dba_object_tables
             SET count_star = l_count
           WHERE statement_id = p_statement_id
             AND owner = i.owner
             AND table_name = i.table_name;
        END IF;
      END IF;
    END LOOP;

    COMMIT;
  END IF;

  write_log('<= perform_count_star');
END perform_count_star;

Conclusion

Counting rows is like counting beans: you can count them one at a time, or you can take some shortcuts. If you are willing to sacrifice some precision for the sake of gaining performance, consider using the SAMPLE clause of the SELECT statement.

YASTS: Yet Another SQL Tuning Script: planx.sql

with 6 comments

Having SQLTXPLAIN and SQLHC available, WHY do I need yet another way to display execution plans?

The new planx.sql script reports execution plans for one SQL_ID from RAC and AWR. It is lightweight and installs nothing. It produces a list of performance metrics for a given SQL out of gv$sqlstats, gv$sqlstats_plan_hash, gv$sql and dba_hist_sqlstat. It also displays execution plans from gv$sql_plan_statistics_all and dba_hist_sql_plan. It is RAC aware. It also reports on io_saved when executed on Exadata.

Most stand-alone, lightweight scripts I have seen only report plans from the connected RAC node. This script reports from all RAC nodes. The AWR piece is optional: if your site does not have a Diagnostics Pack license you can say so when executing this script, and all access to AWR data is simply skipped. Output is plain text and it executes in seconds.
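To give an idea of what “RAC aware” means here, the sketch below is the kind of GV$ query such a script runs (it is not planx.sql itself); &sql_id is just a SQL*Plus substitution placeholder:

-- Cumulative metrics per instance and per plan for one SQL_ID, from all
-- RAC nodes (illustrative sketch only, not planx.sql).
SELECT inst_id,
       plan_hash_value,
       executions,
       ROUND(elapsed_time / 1e6, 3) AS elapsed_secs,
       buffer_gets,
       disk_reads
  FROM gv$sqlstats
 WHERE sql_id = '&sql_id'
 ORDER BY inst_id, plan_hash_value;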

I will be using this planx.sql as my first step in the analysis of queries performing slowly. If I need more, then I will use SQLHC or SQLTXPLAIN. This planx.sql script, as well as some others, is beginning to populate my new shared directory of “free” scripts. The link is at the right of the screen, and also here. Quite often I write small scripts to do my job; now they will have a new home there. A readme provides a one-line description of each script.

Conclusion

The new planx.sql is an alternative to plain DBMS_XPLAN.DISPLAY_CURSOR. It displays plans from all RAC nodes and, optionally, from AWR. It also reports relevant performance metrics for all recorded execution plans. It is fast and installs nothing.

Written by Carlos Sierra

October 9, 2013 at 6:06 pm

SQL Tuning with SQLTXPLAIN 2-day Workshop

with 6 comments

SQLTXPLAIN is a SQL Tuning tool widely used by the Oracle community. Available through My Oracle Support (MOS) under document 215187.1, this free tool can be downloaded and used by anyone with MOS access. It has helped thousands of times to expedite the resolution of SQL Tuning issues, and many Oracle DBAs and Developers benefit from its use on a daily basis.

Stelios Charalambides has done an excellent job writing a book on this topic. In his book Stelios covers many aspects of SQLTXPLAIN and some related topics. I highly recommend getting a copy of this book if you want to learn more about SQLTXPLAIN. It is available at Amazon and many other retailers.

The new 2-day SQLTXPLAIN Workshop offered by Enkitec (an Oracle Platinum business partner and my employer) is a completely new course that interleaves “how to use SQLTXPLAIN effectively” with important and related SQL Tuning topics such as Plan Flexibility and Plan Stability. This hands-on workshop offers participants the unique opportunity to fully understand the contents of SQLTXPLAIN and its vast output through an interactive session. About half the time is dedicated to short guided labs, while the other half uses presentations and demos. This workshop is packed with content; it was a real challenge packing so much info into only two days, but I am very pleased with the result. It became a 2-day intensive, hands-on knowledge-transfer workshop on SQLTXPLAIN and SQL Tuning!

The first session of this workshop is scheduled for November 7-8 in Dallas, Texas. I expect this pilot session to fill up fast. Other sessions, including onsite ones, will be offered during 2014. I hope to meet many of you face to face on November 7!

Creating a big Sales table

with 2 comments

I need a large Table to do some testing of SQLTXPLAIN on Exadata. The seeded SALES Table in the SH sample schema is not big enough for the test I want to do. This SH.SALES table has less than 1M rows and I need a few billion. So, I could either fabricate my own data or expand the SH.SALES table. I opted for the latter so I could join this big table with some dimension tables out of the same sample schema.

This is the script that I used. It is still running. I expect 31B rows out of it. Sharing here in case you need a big sales table.

CREATE TABLE sales_big AS
WITH
p AS (SELECT prod_id FROM sh.products WHERE ROWNUM <= 50), -- max 50
c AS (SELECT cust_id FROM sh.customers WHERE ROWNUM <= 5000), -- max 50000
t AS (SELECT time_id FROM sh.times WHERE ROWNUM <= 500), -- max 1000
h AS (SELECT channel_id FROM sh.channels WHERE ROWNUM <= 5), -- max 5
r AS (SELECT promo_id FROM sh.promotions WHERE ROWNUM <= 50) -- max 500
-- 50 * 5000 * 500 * 5 * 50 = 31,250,000,000
SELECT p.prod_id,
 c.cust_id,
 t.time_id,
 h.channel_id,
 r.promo_id,
 ROUND(ABS(DBMS_RANDOM.NORMAL) * 100) quantity_sold,
 ROUND(DBMS_RANDOM.VALUE(0, 1000) * ABS(DBMS_RANDOM.NORMAL), 2) amount_sold
 FROM p, c, t, h, r;

Written by Carlos Sierra

August 12, 2013 at 5:36 pm

Posted in Exadata, Testing