Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning


eSP

with 2 comments

Enkitec's Sizing and Provisioning (eSP) is a new internal tool designed and developed with Oracle Engineered Systems in mind. Thanks to the experience and insights of Randy Johnson, Karl Arao and Frits Hoogland, what began as a pet project for some of us became over time a robust APEX/PL/SQL application, developed by Christoph Ruepprich and myself, and ready to debut at Oracle Open World 2014.

This posting is about eSP: what it does, and how it helps with the sizing and provisioning of Oracle Engineered Systems, or I would rather say, any system where Oracle runs.

We used to size Engineered Systems using a complex and very useful spreadsheet developed by Randy Johnson and Karl Arao. Now it is eSP's turn to take the next step and move this effort forward into a more scalable application that sits on top of one of our Exadata machines.

Sizing an Engineered System

Sizing a system can be quite challenging, especially when the current system is composed of several hosts with multiple databases of diverse use, size, version, workload, etc. The new target system may also bring some complexities; as the number of possible configurations grows, finding the right choice becomes harder. Then we also have the challenges of disk redundancy, recovery areas, and the potential benefits of offloading with smart scans, just to mention some of the added complexities.

At a very high level, sizing a system is about 3 entities: Resources, Capacity and Utilization. Resources define what I call "demand", which is basically the set of computational resources from your original system, made of one or many databases and instances over some hosts. Capacity, which I also call "supply", is the set of possible target systems with their multiple configurations; in other words, Engineered Systems, or any other hardware capable of hosting Oracle databases. Utilization, which I may also refer to as "allocation", is where the magic and the challenge reside. It is a clever and unbiased mapping between databases and configurations, then between instances and nodes. This mapping has to consider, at the very least, CPU footprint, memory for SGA and PGA, database disk space, and throughput in terms of IOPS and MBPS. Additional constraints, as mentioned before, include redundancy and offloading among others. CPU can be a bit tricky since each CPU make and model has its own characteristics, so mapping them requires the use of SPEC benchmark figures.

Another challenge a sizing tool has to consider is the variability of the Resources. The question becomes: do we see the Resources as a worst-case scenario, or shall we rather consider them as time series? In other words, do we compute and use peaks, or do we observe the use of Resources over time, then develop some methods to aggregate them consistently as time series? If we decide to use a reduced set of data points, do we use peaks or percentiles? If the latter, which percentile is well balanced: 99.9, 99, 95 or maybe 90? How conservative are those values? There are so many questions, and the answer for most of them, as you may guess, is: "it all depends".
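
To make the percentile question concrete, here is a minimal sketch that contrasts the peak against several percentiles of host CPU utilization, straight from AWR (assuming a Diagnostics Pack license; eSP's collectors are of course more elaborate than this one query):

    -- peak vs. percentiles of host CPU utilization over the available AWR history
    SELECT MAX(value) AS peak,
           PERCENTILE_CONT(0.999) WITHIN GROUP (ORDER BY value) AS pctl_999,
           PERCENTILE_CONT(0.99)  WITHIN GROUP (ORDER BY value) AS pctl_99,
           PERCENTILE_CONT(0.95)  WITHIN GROUP (ORDER BY value) AS pctl_95,
           PERCENTILE_CONT(0.90)  WITHIN GROUP (ORDER BY value) AS pctl_90
      FROM dba_hist_sysmetric_history
     WHERE metric_name = 'Host CPU Utilization (%)';

On a spiky workload the peak can sit far above even the 99.9th percentile, which is exactly why the choice of aggregation method matters so much for sizing.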

How eSP Works

Without getting into the technical details, I can say that eSP is an APEX application with a repository on an Oracle database. It inputs collected "Requirements" from the databases to be sized, then processes these Requirements and prepares them to be "Allocated" into one or more defined hardware configurations. The process is for the most part automated, meaning this: we execute some tool or script on the set of hosts where the databases reside, then upload the output of these collectors into eSP, and we are ready to plan and apply "what-if" scenarios. Having an Exadata system as our work engine allows this eSP application to scale quite well. A "what-if" scenario takes as long as it takes to navigate the APEX pages, while all the computations are done in sub-seconds behind the scenes, thanks to Exadata!

Once we load the Resources from the eSP collector script, or from the eAdam tool, we can start playing with the metadata. Since eSP's set of known Configurations (Capacity) includes current Engineered Systems (X4), allocating Configurations is a matter of seconds; mapping databases and instances becomes the next step. eSP contains an auto-"allocate" algorithm for databases and instances, where we can choose between a "balanced" allocation and a "dense" one, with several density factors to choose from (100%, 90%, 80%, 70%, 60% and 50%). With all these automated options, we can try multiple sizing and allocation possibilities in seconds, regardless of whether we are sizing and provisioning for one database or a hundred of them.

eSP and OOW

The Enkitec Sizing and Provisioning (eSP) tool is an internal application that we created to help our customers size their next system or systems in a sensible manner. The methods we implemented are transparent and unbiased. We are bringing eSP to Oracle Open World 2014. I will personally demo eSP at our assigned booth, #111 at Moscone South. I will be on and off the booth, so if you are interested in a demo please let me know, or contact your Enkitec/Accenture representative. We do prefer appointments, but walk-ins are welcome. Hope to see you at OOW!

Written by Carlos Sierra

September 21, 2014 at 5:40 pm

Posted in eAdam, edb360, Exadata, General, OOW

eDB360 includes now an optional staging repository

with 3 comments

eDB360 has always worked under the premise "no installation required", and that is still the case today; it is part of its fundamental essence: give me a 360-degree view of my Oracle database with no installation whatsoever. With that in mind, this free tool helps sites that have gone to the cloud as well as those with "on-premises" databases, and in both cases not installing anything certainly expedites diagnostics collection. With eDB360, you simply connect to SQL*Plus with an account that can select from the catalog, execute the set of scripts behind eDB360, and bingo! You get to understand what is going on with your database just by navigating the HTML output. With such functionality, we can remotely diagnose a database, and even elaborate a full health-check of it. After all, that is how we successfully use it every day, saving us hundreds of hours of metadata gathering and cross-reference analysis.
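
As a minimal sketch of such an account (names and password are hypothetical; any existing user able to select from the catalog will do):

    -- a read-only account for diagnostics collection
    CREATE USER edb360_reader IDENTIFIED BY "Ch4ngeMe";
    GRANT CREATE SESSION, SELECT_CATALOG_ROLE TO edb360_reader;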

Starting with release v1706, eDB360 also supports an optional staging repository of the 26 AWR views listed below. Why? The answer is simple: improved performance! This can be quite significant on large databases with hundreds of active sessions, with frequent snapshots, or with a long history on AWR. We have seen cases where years of data are "stuck" in AWR, especially in older releases of the database. Of course, cleaning up the outdated AWR history (and corresponding statistics) is highly recommended, but in the meantime executing edb360 on such databases may lead to long execution times and frustration, sometimes taking days for what should take only a few hours.

  1. dba_hist_active_sess_history
  2. dba_hist_database_instance
  3. dba_hist_event_histogram
  4. dba_hist_ic_client_stats
  5. dba_hist_ic_device_stats
  6. dba_hist_interconnect_pings
  7. dba_hist_memory_resize_ops
  8. dba_hist_memory_target_advice
  9. dba_hist_osstat
  10. dba_hist_parameter
  11. dba_hist_pgastat
  12. dba_hist_resource_limit
  13. dba_hist_service_name
  14. dba_hist_sga
  15. dba_hist_sgastat
  16. dba_hist_sql_plan
  17. dba_hist_sqlstat
  18. dba_hist_sqltext
  19. dba_hist_sys_time_model
  20. dba_hist_sysmetric_history
  21. dba_hist_sysmetric_summary
  22. dba_hist_sysstat
  23. dba_hist_system_event
  24. dba_hist_tbspc_space_usage
  25. dba_hist_wr_control
  26. dba_hist_snapshot

Thus, if you are contemplating executing eDB360 on a large database, and the pre-check script edb360-master/sql/awr_ash_pre_check.sql shows that eDB360 might take over 24 hours, then while you clean up your AWR repository you can use the eDB360 staging repository as a workaround to speed up eDB360 execution. The use of this optional staging repository is very simple; just look inside the edb360-master/repo directory for instructions. And as always, shoot me an email or comment here if you have any questions.
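
To illustrate the idea only (the actual scripts and instructions live under edb360-master/repo), staging amounts to materializing each of the 26 AWR views once, so the thousands of eDB360 queries then read plain tables instead of repeatedly hitting the underlying WRH$ structures. A hand-rolled sketch for one of the views, with a hypothetical table name:

    -- one-time materialization of an AWR view into a staging table (illustrative only)
    CREATE TABLE stage_dba_hist_osstat AS
    SELECT * FROM dba_hist_osstat;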

Written by Carlos Sierra

February 19, 2017 at 8:43 pm

eDB360 takes long to execute!

leave a comment »

eDB360 provides a lot of insight about an Oracle database. It executes thousands of SQL statements querying GV$ and DBA views from Oracle's data dictionary. It was designed to impose very little load on the system where it executes; thus it consumes only one session and avoids parallel execution. On a system where the state of historical metrics is "normal", eDB360 may take one or two hours to execute. On the other hand, when the volume of historical metrics is abnormally "large", eDB360 may execute for many hours, up to the default threshold of 24, then quit. No worries here; it can be restarted pretty much where it left off…

If you are considering executing eDB360 on one of your databases, please first execute a lightweight "pre-check", which is basically a small script that reports on the state of Active Session History (ASH) stored in the Automatic Workload Repository (AWR). Bear in mind that ASH and AWR are part of the Oracle Diagnostics Pack, so consider such a "pre-check" only if you have a license for that Oracle pack on the target database.

AWR ASH Pre-Check

You can execute the awr_ash_pre_check.sql script stand-alone; it is available as free software on GitHub within edb360-master or cscripts-master. If downloading eDB360, you can find awr_ash_pre_check.sql under the edb360-master/sql directory, and if you download the cscripts set, you can find awr_ash_pre_check.sql under cscripts-master/sql.

This pre-check script reports on the following potential concerns (a couple of sample queries in the same spirit follow the list):

  1. Retention period of AWR, frequency of Snapshots, AWR Baselines, and similar.
  2. Age of CBO Statistics on the main table behind DBA_HIST_ACTIVE_SESS_HISTORY (WRH$_ACTIVE_SESSION_HISTORY) and its partitions.
  3. State of CBO Statistics for WRH$_ACTIVE_SESSION_HISTORY segments: Are the statistics Locked? Are they Stale?
  4. Size of WRH$_ACTIVE_SESSION_HISTORY segments, with range of Snapshots per segment (partition). Are they outside the desired retention window?
  5. Creation date and last DDL date for WRH$_ACTIVE_SESSION_HISTORY segments. So expected partition splitting can be verified.
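
As promised above, here are a couple of minimal queries in the same spirit as the script (the script itself is more thorough):

    -- concern 1: AWR snapshot interval and retention
    SELECT snap_interval, retention FROM dba_hist_wr_control;

    -- concern 4: total size of the ASH table and its partitions, in GB
    SELECT ROUND(SUM(bytes)/POWER(2,30), 1) AS gb
      FROM dba_segments
     WHERE segment_name = 'WRH$_ACTIVE_SESSION_HISTORY';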

Take for example the result below. It shows that the current ASH repository contains 320.6 days of history, even though in this case only 30 were requested (displayed on an upper part of the report, not shown here). We also see ASH is 16.4 GB in size, where normally 1-2 GB are plenty. We can conclude the last partition split (or simply the last DDL) is 289.7 days old, where normally only a few days old is expected. This pre-check sample results in a ballpark estimate of 127 hours to execute eDB360. In other words, if we had one month of history (as the current AWR configuration requests) instead of 320.6 days, then the ASH repository would be less than 10% of its current size, and eDB360 would be estimated to execute in about 13 hours instead of 127. Keep in mind this "pre-check" provides a ballpark estimate, so an actual execution of eDB360 could take more or less than the estimated time.

[Screenshot: sample output of awr_ash_pre_check.sql]

What to do if eDB360 is estimated to run for several days, like in the sample above? What I recommend is to fix ASH first. This process requires reading and digesting at least the 3 My Oracle Support (MOS) notes below (a purge sketch follows the list), but it is not as bad as it sounds. You want to follow this path anyway, since any diagnostics tool that relies on ASH from AWR would benefit from the much-needed cleanup.

  1. WRH$_ACTIVE_SESSION_HISTORY Does Not Get Purged Based Upon the Retention Policy (Doc ID 387914.1)
  2. Bug 14084247 – ORA-1555 or ORA-12571 Failed AWR purge can lead to continued SYSAUX space use (Doc ID 14084247.8)
  3. Manually Purge the Optimizer Statistics and AWR Snapshots to Reduce Space Usage of SYSAUX Tablespace (Doc ID 1965061.1)
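
Once you have digested those notes, most of the cleanup revolves around the documented DBMS_WORKLOAD_REPOSITORY API. A minimal sketch (the retention value and snapshot IDs are hypothetical; follow the notes above before purging anything):

    -- set AWR retention to 30 days (the parameter is expressed in minutes)
    EXEC dbms_workload_repository.modify_snapshot_settings(retention => 30*24*60);

    -- purge an old range of snapshots (IDs are hypothetical)
    EXEC dbms_workload_repository.drop_snapshot_range(low_snap_id => 100, high_snap_id => 5000);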

Diagnosing eDB360 taking long

Let's say you executed an older version of eDB360 and it took long (newer versions, starting with v1620, perform an automatic pre-check, and if execution is estimated to last more than 8 hours, they pause and ask for confirmation before executing). If you executed an older version of eDB360, or simply your execution took longer than expected, review the following files, included in the directory from where you executed eDB360 (edb360-master) or stored within the edb360_*.zip file generated by eDB360 as it executed.

  1. awr_ash_pre_check_<database_name>.txt
  2. verify_stats_wr_sys_<database_name>.txt
  3. 00002_edb360_<NNNNNN>_log.txt
  4. 00003_edb360_<NNNNNN>_log2.txt
  5. 00004_edb360_<NNNNNN>_log3.txt
  6. 00005_edb360_<NNNNNN>_tkprof_sort.txt

If you find that one of the SQL statements from eDB360 is taking more than a few minutes to execute, first suspect outdated CBO statistics on the data dictionary objects behind such a query. Or, if the volume of data accessed by such a query is large, suspect AWR data not getting purged. In either case take action accordingly; for the former, a statistics refresh sketch follows.
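
A minimal sketch of the kind of statistics refresh that usually helps in the first case (documented DBMS_STATS procedures; schedule them in a quiet window):

    -- refresh CBO statistics on data dictionary objects and on fixed (X$) objects
    EXEC dbms_stats.gather_dictionary_stats;
    EXEC dbms_stats.gather_fixed_objects_stats;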

If you cannot make sense of the information provided by these diagnostics files above, then contact eDB360 author at carlos.sierra.usa@gmail.com.

Keep in mind that execution time for eDB360 really depends on the state of several metrics stored in AWR, where ASH is the most relevant piece but not the only one. Think outdated statistics and outdated metrics!

Written by Carlos Sierra

November 23, 2016 at 5:47 pm

Posted in General

SQL Monitoring without MONITOR Hint

leave a comment »

I recently got this question:

<<<Is there a way that I can generate SQL MONITORING report for a particular SQL_ID ( This SQL is generated from application code so I can’t add “MONITOR”  hint) from command prompt ? If yes can you please help me through this ?>>>

Since this question is of general interest, I’d rather respond here:

As you know, SQL Monitoring starts automatically for a SQL that executes a parallel (PX) plan, or when its serial execution has consumed over 5 seconds of combined CPU and I/O time.

If you want to force SQL Monitoring on a SQL statement, without modifying the SQL text itself, I suggest you create a SQL Patch for it. But before you do, please be aware that SQL Monitoring requires the Oracle Tuning Pack.

How to turn on SQL Monitoring for a SQL that executes serially and takes less than 5 seconds, without modifying the application that issues such SQL

Use SQL Patch with the MONITOR Hint. An easy way to do that is by using the free sqlpch.sql script provided as part of the cscripts (see right-hand side of this blog under Downloads).

To use the sqlpch.sql script, pass your SQL_ID as parameter #1, and pass "GATHER_PLAN_STATISTICS MONITOR" (without the double quotes) as parameter #2.

This sqlpch.sql script will create a SQL Patch for your SQL, which will produce SQL Monitoring (and the collection of A-Rows) for every execution of your SQL.

Be aware there is some overhead involved, so after you are done with your analysis drop the SQL Patch.

Script sqlpch.sql shows the name of the SQL Patch it creates (look at its spool file), and it gives you the command to drop such SQL Patch.
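
If you are curious what this does conceptually, here is a minimal sketch using the documented DBMS_SQLDIAG.CREATE_SQL_PATCH function available in 12.2 and later (the SQL_ID and patch name are hypothetical; on earlier releases sqlpch.sql takes care of the equivalent calls for you):

    DECLARE
      l_patch_name VARCHAR2(128);
    BEGIN
      -- create a SQL Patch that injects the hints into every execution of this SQL_ID
      l_patch_name := dbms_sqldiag.create_sql_patch(
                        sql_id    => 'abcd1234wxyz9',
                        hint_text => 'GATHER_PLAN_STATISTICS MONITOR',
                        name      => 'force_monitor_patch');
    END;
    /
    -- once the analysis is done, drop the patch to remove the overhead
    EXEC dbms_sqldiag.drop_sql_patch(name => 'force_monitor_patch');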

For the actual analysis and diagnostics of your SQL (after you have executed it with the SQL Patch in place) use the free tool SQLd360.

And for more details about sqlpch.sql and other uses of this script please refer to this entry on my blog.

Written by Carlos Sierra

February 29, 2016 at 10:16 am

Forcing a “Nested Loop only” Execution Plan

with 5 comments

Sometimes you do what you have to do. So here I confess doing something I usually avoid: forcing an Execution Plan (which is not the same as using a more conventional method for Plan stability).

This is a case on an 11.2.0.3.0 base release where the application vendor sets the optimizer to 9i compatibility and tweaks other CBO parameters in questionable ways; then some queries produce suboptimal plans (as expected), and you are called to help without changing the obvious.

There is a family of queries from an ad-hoc query generator that permits users to issue queries without a set of selective predicates. These queries join several large tables and their performance is poor (as expected as well!). On top of that, all these queries include the /*+ FIRST_ROWS */ CBO hint and the questionable DISTINCT keyword. Note: it is quite common for developers to throw in a DISTINCT keyword "to avoid duplicates", where the mere existence of duplicates would be an indication of an application bug; so "why fix it if I can hide it, right?".

There is one caveat, though: these queries include a generic predicate "rownum <= :b1", and the value passed defaults to 5000, so the users' rationale is "if I only want the first X rows, my query should return fast". This highlights yet another questionable practice, since it is hard to imagine a user scrolling through 5000 rows and making any sense of such a large set, especially when the full "filtered" set would be several million rows long. So the original problem is questionable in several ways. Nevertheless, sometimes we are called to help besides providing advice. And no, we are not allowed to slap hands 😉

The good news is that we can use this extra predicate on rownum and make these queries return the first X rows really fast; and I mean less than 5 seconds instead of one hour or more! And if users want not 5000 but 500 or even 50 rows, then we can be in the sub-second range!

You may be thinking FIRST_ROWS optimization, and that was my first try. Unfortunately, on 11.2.0.3.0, even reversing all the suboptimal CBO parameters at the session level, I would consistently get an Execution Plan with a few Hash Joins and a large cost; and if I were to force a Nested Loops plan, the cost would be several orders of magnitude larger, so the CBO would not pick it! Nevertheless, such a "Nested Loops only" Execution Plan would fulfill the user's expectations, regardless of the validity of the initial request. And yes, CBO statistics are OK, not perfect but simply OK. One more piece of info: this is not Exadata! (If it were Exadata, most probably these same Execution Plans with full table scans and Hash Joins would simply fly!)

So my issue became: how do I force an Execution Plan that only contains Nested Loops? If I could do that, then the COUNT STOPKEY operation could help me halt my SQL execution once I fetched the first X rows (a Hash Join does not allow me to do that). Remember: these tables have literally millions of rows. I could pepper these queries with a ton of CBO hints and I would get my desired "Nested Loops only" Execution Plan… But that would be a lot of work and tricky at best.

SQL Patch to the rescue

I could have used a SQL Profile, but I think this dirty trick of suppressing Hash Joins and Sort Merge Joins is better served with a SQL Patch. I also thought of Siebel: they tweak CBO parameters as well, and they suppress Hash Joins, but they change system- and session-level parameters… Since I wanted my change to be very localized, a SQL Patch could provide just what I needed.

Under the Downloads section on the margin of this page, there is a "cscripts" link that includes the sqlpch.sql script. I used this script and passed as the second parameter the string below (the 1st parameter is the SQL_ID). With a SQL Patch generated this way, I could systematically produce a "Nested Loops only" Execution Plan for these few queries. I did not have to change the original SQL, nor change the CBO environment at the system or session level, nor restrict the query generator, and I did not have to "educate" the users to avoid such unbounded queries.

OPT_PARAM("_optimizer_sortmerge_join_enabled" "FALSE") OPT_PARAM("_hash_join_enabled" "FALSE")
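
Following the parameter convention described above (SQL_ID first, hint string second), the invocation looks something like this; the SQL_ID is hypothetical and the exact quoting may vary with your SQL*Plus setup:

    SQL> @sqlpch.sql abcd1234wxyz9 'OPT_PARAM("_optimizer_sortmerge_join_enabled" "FALSE") OPT_PARAM("_hash_join_enabled" "FALSE")'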

Conclusion

I have to concede doing something questionable: in this case, using a SQL Patch to force a desired Execution Plan instead of fixing the obvious, simply because that was the shortest path to alleviate the user's pain.

I consider this technique a temporary workaround and not a solution to the actual issue. In this case the right way to handle the issue would be:

  1. Have the application vendor certify their application to the latest release of the database and reset all CBO related parameters, plus
  2. Have the application vendor remove CBO Hints and DISTINCT keyword from queries, plus
  3. Configure the ad-hoc query generator to restrict users from executing queries without selective predicates, then
  4. Tune those outlier queries that may still need some work to perform as per business requirements, and possibly
  5. Educate the users to provide as many selective predicates as possible

Anyways, the potential of using a SQL Patch to tweak an Execution Plan in mysterious ways is quite powerful, and something we may want to keep in the back of our minds for a rainy day…

Written by Carlos Sierra

August 8, 2015 at 8:30 am

eDB360: 360-degree comprehensive report on an Oracle database

with 14 comments

Overview: eDB360 is a free tool that provides a 360-degree view of an Oracle database on the Cloud or on-premises. Its output can be used as a foundation for a database health-check, a performance evaluation, or to collect the resource utilization needed for a sizing and provisioning plan. Of course, it can also be used simply to get a glance of an entire Oracle database.

With eDB360, a user with limited access can acquire a good understanding of an Oracle database without having to log into the server directly. This capability is of great value to developers, system administrators, 3rd party consultants, or any remote user with limited access to an Oracle database.

The eDB360 tool is database-centric. Notice that in the case of RAC databases, it only needs to be executed on one node and it gets information about all of them. eDB360 works on Oracle 10g to 12c databases, on Linux and UNIX systems. For Windows systems you may want to first install UNIX Utilities (UnxUtils) and a zip program, else a few OS commands may not work properly.

Each execution of eDB360 may take up to 24 hours, so it is recommended to execute it overnight or over a weekend. Test eDB360 on a lower environment before executing it on Production. Since eDB360 execution time is a frequent concern, a blog post explains how to diagnose cases where eDB360 takes long to execute.

For other FAQ or more information about eDB360, check the links below or simply Google: eDB360.

Note: To execute this eDB360 tool, the database should be licensed to use at least the Oracle Diagnostics Pack (else the output is very small and of little use). If the database is licensed for both the Tuning and the Diagnostics packs, then respond with a "T" to the first execution parameter. This tool has an optional second parameter, which allows you to specify a custom configuration filename. This second parameter is seldom needed, so when asked to enter such a filename, simply hit the "enter" or "return" key, unless of course you want to customize an aspect of eDB360 (advanced mode).

Links:

  1. Download latest version v1707 (Mar 6, 2017)
  2. Sample Output
  3. Readme included in tool
  4. Blog post: eDB360 takes long to execute
  5. Frequently asked questions (FAQ)

Instructions steps (a sample session follows the list):

  1. Download edb360-master.zip from the link above.
  2. Copy edb360-master.zip to the server where eDB360 will be executed, and unzip it there.
  3. Navigate to the edb360-master directory and connect to SQL*Plus (use an account that can read from the data dictionary).
  4. Execute SQL script sql/awr_ash_pre_check.sql and review its output, especially the last page. Then decide whether to continue with eDB360 or to first remediate the findings reported.
  5. Execute SQL script edb360.sql. Pass value "T" as the first parameter if the database has the Tuning pack license, or "D" if it has the Diagnostics pack license (if it has both licenses then pass "T"). For most cases, simply hit "enter" to skip the 2nd parameter, which is optional and is used to configure eDB360.
  6. Provide to the requestor the compressed output file: edb360_<NNNNNN>_<NNNNNN>_YYYYMMDD_HH24MI.zip.
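
Put together, a typical session looks something like this (the account name is hypothetical, and you simply hit "enter" when edb360.sql prompts for the optional second parameter):

    $ unzip edb360-master.zip
    $ cd edb360-master
    $ sqlplus edb360_reader       # any account that can select from the catalog
    SQL> @sql/awr_ash_pre_check.sql
    SQL> @edb360.sql T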

Written by Carlos Sierra

July 3, 2015 at 2:16 pm


SQLTXPLAIN under new administration

with 3 comments

During my 17 years at Oracle, I developed several tools and scripts. The largest and most widely used is SQLTXPLAIN. It is available through My Oracle Support (MOS) under document_id 215187.1.

SQLTXPLAIN, also known as SQLT, is a tool for SQL diagnostics, including performance and wrong results. I am the original developer and author, but since the very early stages of its development, this tool has encapsulated the expertise of many bright engineers, DBAs, developers and others, who constantly helped to improve it on every new release by providing valuable feedback. SQLT is then nothing but the collection of many good ideas from many people. I was just the lucky guy who decided to build something useful for the Oracle SQL tuning community.

When I decided to join Enkitec back in 2013, I asked Mauro Pagano to look after my baby (I mean SQLT), and sure enough he did an excellent job. Mauro fixed most of my bugs, as he jokes, and also incorporated some of his own :-). Mauro kept SQLT in good shape and was able to continue improving it with every new release. Now Mauro also works for Enkitec, so SQLT has a new owner and custodian at Oracle.

Abel Macias is the new owner of SQLT, and as such he keeps busy maintaining and enhancing this tool among his other duties at Oracle. So, if you have enhancement requests or positive feedback, please reach out to Abel at his Oracle account: abel.macias@oracle.com. If you come across some of my other tools and scripts and they show my former Oracle account (carlos.sierra@oracle.com), please reach out to Abel and he might be able to route your concern or question.

Since one of my hobbies is to build free software that I also consume, my current efforts are on eDB360, eAdam and eSP. The most popular and openly available is eDB360, which basically gives you a 360-degree view of a database without installing anything. Mauro is also building something cool in his own free time: the new SQLd360 tool, which is already available on the web (search for SQLd360). This SQLd360 tool, similar to eDB360, provides a 360-degree view, but instead of a database its focus is one SQL. And similarly to eDB360, it installs nothing on the database. Both are available as "free software" for anyone to download and use. That is the nice part: everyone likes free! (Especially if it is any good.)

What is the difference between SQLd360 and SQLT?

Both are exceptional tools. And both can be used for SQL Tuning and for SQL diagnostics. The main differences in my opinion are these:

  1. SQLT has it all. It is huge and it covers pretty much all corners. So, for SQL Tuning, SQLTXPLAIN is "THE" tool.
  2. SQLd360, on the other hand, is smaller, newer and faster to execute. It gives me what is most important and most commonly used.
  3. SQLT requires installing a couple of schemas and hundreds of objects. SQLd360 installs nothing!
  4. To download SQLT you need to log in to MOS. In contrast, SQLd360 is wide open (free software license), and no login is needed.
  5. Oracle Support requires SQLT, and Oracle Engineers are not yet exposed to SQLd360.
  6. SQLd360 uses Google charts (as does eDB360), which enhance the readability of large data sets, like time series for example. Thus SQLd360 output is quite readable.
  7. eDB360 calls SQLd360 on SQL of interest (large database consumers), so in that sense SQLd360 enhances eDB360. But SQLd360 can also be used stand-alone.

If you ask me which one I would recommend, I would answer: both! If you can use both, that is better than using just one. Each of these two tools (SQLT and SQLd360) has pros and cons compared to the other. But at the end both are great tools. And thanks to Abel Macias, SQLT continues its lifecycle with frequent enhancements. And thanks to Mauro, we now have a new kid on the block! I would say we have a win-win for our large Oracle community!

Written by Carlos Sierra

March 18, 2015 at 12:37 pm

Some eDB360 Frequently Asked Questions

with 6 comments

As I get frequent questions about eDB360 I decided to post some FAQ here. First, some links to prior posts:

  1. Video: Introducing the eDB360 Tool
  2. What to do if edb360 takes long to run
  3. eDB360
  4. What is new with EDB360?
  5. How to execute eAdam and eDB360
  6. Meet "edb360": a free tool that provides a 360-degree view of an Oracle database
  7. Presentations

Q: When is the best time to execute edb360?

A: At the end of a regular working day. Since it may take several hours to execute, some customers schedule it for a Friday evening.

Q: How long will edb360 take to execute on my production environment?

A: I don’t know. Best way to predict this is by executing edb360 in a lower but similar environment, like QA or TEST.

Q: Are there any risks in executing edb360 on a production environment?

A: Not that I know of. Think of edb360 as a long sequence of SQL statements against DBA and GV$ views. Are they going to break anything? In theory, no. So far I haven't seen edb360 break any environment, and if it did, most probably it would be due to a database bug. The most conservative approach is to try it first on a lower environment.

Q: Does edb360 install anything on my database?

A: No.

Q: Does edb360 change the state of my database?

A: No.

Q: My edb360 seems to be hanging. How can I tell for sure?

A: Use any normal method: OEM, query long operations, or look at V$SQL for active SQL (a sample query follows). If you see edb360 is "stuck" on one SQL for, let's say, more than one hour, you may want to report this to me. You can also find the SQL, kill the session, comment that SQL out of the edb360 code, and retry. When I have seen this, it happens to be either a known database bug, or lack of good statistics on the schema objects behind the particular query.
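
For reference, a minimal query I would use to spot the "stuck" session (assuming access to GV$SESSION; filter further by username or module as needed):

    -- active user sessions, longest-running current call first
    SELECT inst_id, sid, serial#, sql_id, last_call_et
      FROM gv$session
     WHERE status = 'ACTIVE'
       AND type = 'USER'
     ORDER BY last_call_et DESC;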

Q: Why is the output of edb360 so extensive?

A: We don't know what we don't know. Since edb360 is used for Oracle database health-checks, trying to minimize the number of round trips to collect diagnostics, this tool collects many pieces of information. Sometimes we use most of it, sometimes a small subset. It all depends on what we see.

Q: How do I read edb360?

A: Personally, I read everything. It takes me at least one day for an initial review. When I look at it for the 2nd or 3rd time, my search is narrower and deeper. While I look at it the 1st time, I take some light notes, else I forget what I have seen so far.

Q: Can I use edb360 on any Oracle database?

A: Yes, as long as it is 10g, 11g or 12c. It works on any platform, but it works better on Linux and UNIX. It works for any application, including EBS, PSFT, Siebel, SAP and any custom one.

Q: Can I use edb360 on a non-Oracle database?

A: No.

Q: Is edb360 certified to run on 12c?

A: Not yet. It should work fine on single-tenant databases, but I haven't tested it on 12c yet. I will soon.

Q: Can I use edb360 for sizing and/or capacity?

A: Yes and no. edb360 captures sizing metadata using the eSP collector, but the eSP application for Sizing and Provisioning is Enkitec's (now Accenture's) proprietary software. So we can use edb360 for sizing, but only internally and when our client asks for it.

Q: Is edb360 free software?

A: Yes. Look for the license file included with the tool.

Q: Where do I get the latest version of this tool?

A: Look at the column at the right of this entry. There is a section titled "Download". That link will take you to a GitHub location where you can download the latest version.

Q: Can I use edb360 for SQL Tuning?

A: Actually, for that you may want to get SQLd360, which is a work in progress by Mauro Pagano. In the meantime you can use planx.sql, sqlmon.sql or sqlash.sql (under my cscripts), or download SQLTXPLAIN and/or SQLHC from MOS.

Q: Can I share edb360 tool with others?

A: Absolutely! I developed this tool thinking of our Oracle community.

Q: I have some ideas to improve edb360. How can I provide them?

A: Send me an email, or post them here. Every so often I get some good ideas, which I put on my list, and one day I get to them. Keep in mind that I work on edb360 in my free personal time, so some ideas take longer to implement than others.

Q: Where can I learn more about edb360?

A: Look at the links at the top. There is a 1-hour presentation that goes over what this tool does. There is also a fresh sample output available.

Written by Carlos Sierra

January 7, 2015 at 2:56 pm

Posted in edb360

Video: Introducing the eDB360 Tool

with one comment

Some of you have asked whether the "Introducing the eDB360 Tool" session at Oaktable World 2014 was recorded. It was, and thanks to Kyle Hailey both the video and the slides are now available. Just go to the agenda of this event and click on the corresponding link. There you will also find video and/or slides for all the other sessions.

Thanks Kyle for making this possible!

Written by Carlos Sierra

November 1, 2014 at 4:22 am

Posted in Conferences, OakTable

eAdam

with 9 comments

Enkitec’s Oracle AWR Data Mining Tool

eAdam is a free tool that extracts a subset of data and metadata from an Oracle database with the objective of performing some data mining on a separate staging Oracle database. The data extracted is relevant to Performance Evaluation projects. Most of the data eAdam extracts is licensed by Oracle under the Diagnostics Pack, and some under the Tuning Pack. Therefore, in order to use this eAdam tool, the source database must be licensed to use both Oracle packs (Tuning and Diagnostics).

To a point, eAdam is similar to eDB360; both access the data dictionary in order to produce reports. The key difference is that eDB360 generates all the reports (after doing some intensive processing) at the source database, while eAdam simply extracts a set of flat files into a TAR file, using a very lightweight script, delaying all the intensive processing to a later time and a separate staging system. This feature can be very attractive for busy systems, where the amount of processing of any external monitoring tool needs to be minimized.

On the source system, eAdam only needs to execute a short script to extract the data and metadata of interest, producing a dense TAR file. On a staging system, eAdam does the heavy lifting, requiring the creation of a repository, the load of this repository and finally the computation of meaningful reports. The processing of the TAR file into the staging system is usually performed by the requestor, using a lower-level database, or a remote one.

The list of objects eAdam extracts as flat files from the source database includes the following:

dba_hist_active_sess_history
dba_hist_database_instance
dba_hist_event_histogram
dba_hist_osstat
dba_hist_parameter
dba_hist_pgastat
dba_hist_sga
dba_hist_sgastat
dba_hist_snapshot
dba_hist_sql_plan
dba_hist_sqlstat
dba_hist_sqltext
dba_hist_sys_time_model
dba_hist_sysstat
gv$active_session_history
gv$log
gv$sql_monitor
gv$sql_plan_monitor
gv$sql_plan_statistics_all
gv$sql
gv$system_parameter2
v$controlfile
v$datafile
v$tempfile

eAdam works on 10gR2, 11gR2, and higher releases of Oracle, and it can be used on Linux or UNIX platforms. It has not been tested on Windows. An eAdam sample output is available at this Dropbox location; after downloading the sample output, look for the 0001_eadam36_N_dbname_index.html file and start browsing.

Instructions – Source Database

Download the tool, uncompress the master ZIP file, and look for the file eadam-master/source_system/eadam_extract.sql. Review and execute this single short script, connecting to the source database as SYS or a DBA. Locate the TAR file produced, and send it to the requestor.

Be aware that the TAR file produced by the extraction process can be large, so be sure you execute this extract script from a directory with at least 10 GB of free space. Common sizes of this TAR file range between 100 MB and 1 GB. Execution time for this extraction process may exceed 1 hour, depending on the size of the data dictionary.
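
A sample extraction session (a sketch; paths are illustrative and the exact TAR file name varies):

    $ cd eadam-master/source_system
    $ df -h .              # confirm at least 10 GB free before extracting
    $ sqlplus / as sysdba
    SQL> @eadam_extract.sql
    SQL> exit
    $ ls -lh *.tar         # locate the TAR file and send it to the requestor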

Instructions – Staging Database

Be sure you have both the eAdam tool (eadam-master.zip) and the TAR file produced on a source system. Your staging database can be of equal, higher or lower release level than the source, but equal or higher is recommended. The Platform can be the same or different.

To install, load and report on the staging database, proceed with the following steps (a sample session follows the list):

  1. Create on the staging system a file directory available to Oracle for read and write. Most probably you want to connect to the OS as oracle and create a new directory such as /home/oracle/eadam-master. Put the content of the eadam-master.zip file in there.
  2. Create the eAdam repository on the staging database. This step is needed only one time. Follow the instructions from eadam_readme.txt. Basically you need to execute eadam-master/stage_system/eadam_install.sql connected as SYS. This script asks for 4 parameters: tablespace names for permanent and temporary schema objects, and the username and password of the new eAdam account. For the username I recommend eadam, but you can use any valid name.
  3. Load the data contained in the TAR file into the database. To do this, first copy the TAR file into the eadam-master/stage_system sub-directory, then execute the stage_system/eadam_load.sql script while on the stage_system sub-directory, connecting as SYS. This script asks for 4 parameters. Pass first the directory path of your stage_system sub-directory, for example /home/oracle/eadam-master/stage_system (this sub-directory must contain the TAR file). Pass next the username and password of the eadam account as you created them. Pass last the name of the TAR file to be loaded into the database.
  4. The load process performs some data transformations and produces at the end an output similar to eDB360 but smaller in content. After you review the eAdam output, you may decide to generate new output for shorter time series; in such a case use eadam-master/stage_system/eadam_report.sql, connecting as the eadam user. This reporting process asks for 3 parameters. Pass the EADAM_SEQ_ID which identifies your particular load (a list of values is displayed), then pass the range of dates using the format YYYY-MM-DD/HH24:MI, for example 2014-07-27/17:33.
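
A sample staging session following the steps above (the directory, account and parameter values are hypothetical; each script prompts for the parameters described in the corresponding step):

    $ cd /home/oracle/eadam-master/stage_system
    $ sqlplus / as sysdba
    SQL> @eadam_install.sql
    SQL> @eadam_load.sql
    SQL> exit
    $ sqlplus eadam
    SQL> @eadam_report.sql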

Download

EADAM @ GitHub is available as free software. You can see its eadam_readme.txt, license.txt or any other piece of the tool before downloading it. Use this link eadam-master.zip to actually download eAdam as a compressed file.

Feedback

Please post your feedback about this eAdam tool at this blog, or send an email directly to the tool author: Carlos Sierra.

Written by Carlos Sierra

July 27, 2014 at 6:25 pm