eDB360 provides a lot of insight about an Oracle database. It executes thousands of SQL statements querying GV$ and DBA views from Oracle's data dictionary. It was designed to impose very little load on the system where it executes, thus it consumes only one session and avoids parallel execution. On a system where the state of historical metrics is "normal", eDB360 may take one or two hours to execute. On the other hand, when the volume of historical metrics is abnormally "large", eDB360 may execute for many hours, up to the default threshold of 24 hours, then quit. No worries here: it can be restarted pretty much where it left off.
If you are considering executing eDB360 on one of your databases, please first execute a lightweight "pre-check", which is basically a small script that reports on the state of Active Session History (ASH) stored in the Automatic Workload Repository (AWR). Bear in mind that ASH and AWR are part of the Oracle Diagnostics Pack, so consider this "pre-check" only if you have a license for that pack on the target database.
AWR ASH Pre-Check
You can execute the awr_ash_pre_check.sql script stand-alone; it is available as free software on GitHub within edb360-master or cscripts-master. If downloading eDB360, you can find awr_ash_pre_check.sql under the edb360-master/sql directory, and if you download the cscripts set, you can find awr_ash_pre_check.sql under cscripts-master/sql.
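Invocation is a one-liner from SQL*Plus; a minimal sketch, assuming you connect with an account that can read the DBA_HIST views and run from the directory where the download was unzipped:

```sql
-- from the edb360-master (or cscripts-master) directory,
-- connected as a privileged user (e.g. SYSDBA):
SQL> @sql/awr_ash_pre_check.sql
```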
This pre-check script reports on the following potential concerns:
- Retention period of AWR, frequency of Snapshots, AWR Baselines, and similar.
- Age of CBO Statistics on the main table behind DBA_HIST_ACTIVE_SESS_HISTORY (WRH$_ACTIVE_SESSION_HISTORY) and its partitions.
- State of CBO Statistics for WRH$_ACTIVE_SESSION_HISTORY segments: Are the statistics Locked? Are they Stale?
- Size of WRH$_ACTIVE_SESSION_HISTORY segments, with the range of Snapshots per segment (partition). Are they outside the desired retention window?
- Creation date and last DDL date for WRH$_ACTIVE_SESSION_HISTORY segments, so expected partition splitting can be verified.
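If you only want a quick look before running the full pre-check, a couple of the underlying dictionary queries can be approximated as follows (a sketch only; the actual script is more thorough):

```sql
-- requested AWR retention and snapshot interval
SELECT retention, snap_interval
  FROM dba_hist_wr_control;

-- actual size of the ASH table behind DBA_HIST_ACTIVE_SESS_HISTORY
SELECT SUM(bytes)/POWER(2,30) AS gb
  FROM dba_segments
 WHERE segment_name = 'WRH$_ACTIVE_SESSION_HISTORY';
```

If the size reported by the second query is far larger than the retention in the first query would justify, that is the symptom the pre-check flags.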
Take for example the result below. It shows that the current ASH repository contains 320.6 days of history, even though in this case only 30 were requested (displayed in an upper part of the report, not shown here). We also see ASH is 16.4 GB in size, where 1-2 GB is normally plenty. We can conclude the last partition split (or simply last DDL) is 289.7 days old, where normally only a few days old is expected. This pre-check sample results in a ballpark estimate of 127 hours to execute eDB360. In other words, if we had one month of history (as the current AWR configuration requests) instead of 320.6 days, then the ASH repository would be less than 10% of its current size and eDB360 would be estimated to execute in about 13 hours instead of 127. Keep in mind this "pre-check" provides a ballpark estimate, so an actual execution of eDB360 may take more or less than the estimated time.
What to do if eDB360 is estimated to run for several days, as in the sample above? What I recommend is to fix ASH first. This process requires reading and digesting at least the 3 My Oracle Support (MOS) notes below, but it is not as bad as it sounds. You want to follow this path anyway, since any diagnostics tool that relies on ASH from AWR would benefit from the much-needed cleanup.
- WRH$_ACTIVE_SESSION_HISTORY Does Not Get Purged Based Upon the Retention Policy (Doc ID 387914.1)
- Bug 14084247 – ORA-1555 or ORA-12571 Failed AWR purge can lead to continued SYSAUX space use (Doc ID 14084247.8)
- Manually Purge the Optimizer Statistics and AWR Snapshots to Reduce Space Usage of SYSAUX Tablespace (Doc ID 1965061.1)
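Once digested, the gist of those notes is a manual purge of the out-of-retention snapshots. A hedged sketch using the documented DBMS_WORKLOAD_REPOSITORY API (the snapshot IDs below are hypothetical; determine yours from DBA_HIST_SNAPSHOT first, and read the notes above before purging anything):

```sql
-- find the snapshot range older than the desired retention window (e.g. 31 days)
SELECT MIN(snap_id), MAX(snap_id)
  FROM dba_hist_snapshot
 WHERE end_interval_time < SYSDATE - 31;

-- then drop that range (example IDs only)
EXEC DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(low_snap_id => 100, high_snap_id => 5000);
```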
Diagnosing a long-running eDB360
Let's say you executed an older version of eDB360 and it took a long time (newer versions, starting with v1620, perform an automatic pre-check, and if the execution is estimated to last more than 8 hours, they pause and ask for confirmation before executing). If you executed an older version of eDB360, or simply your execution took longer than expected, review the following files, included in the directory from which you executed eDB360 (edb360-master) or stored within the edb360_*.zip file generated by eDB360 as it executed.
If you find that one of the SQL statements from eDB360 is taking more than a few minutes to execute, first suspect outdated CBO statistics on the Data Dictionary objects behind that query. Or, if the volume of data accessed by that query is large, suspect AWR data not getting purged. In either case, take action accordingly.
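For the first case, refreshing dictionary and fixed-object statistics is usually the quickest remedy; a sketch using two documented DBMS_STATS procedures (run with DBA privileges, preferably in a maintenance window):

```sql
-- refresh statistics on data dictionary objects (DBA_/WRH$ tables)
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

-- refresh statistics on fixed objects (X$ tables behind GV$ views)
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
```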
If you cannot make sense of the information provided by the diagnostics files above, then contact the eDB360 author at email@example.com.
Keep in mind that execution time for eDB360 really depends on the state of several metrics stored in AWR, where ASH is the most relevant piece but not the only one. Think outdated statistics and outdated metrics!
If you manage an Oracle Engineered System you may wonder how well your Smart Scans are performing. Are you taking full advantage of Exadata Optimizations? If so, how do you measure them?
The sample chart below is from a data warehouse (DW) application. It shows in blue that bytes eligible for offloading are around 95%, which denotes a large amount of desired full scans. We see in red that between 80% and 95% of the expected I/O did not hit the network, i.e. it was saved (somewhat incorrectly referred to as "IO Saved", since what is saved is the network traffic). And in yellow we see that between 30% and 45% of the I/O was entirely avoided thanks to Storage Indexes. So, with 80-95% of the expected data not going through the network and 30-45% of the I/O entirely eliminated, I can conclude that Exadata Optimizations are well applied on this DW.
If you have SQL*Plus access to an Engineered System and want to produce a chart like this, simply download and execute the free eDB360 tool. This tool installs nothing on your database!
eDB360 has two execution parameters. The first one specifies whether the database is licensed to use the Tuning or Diagnostics pack (or neither). The second parameter is optional; if entered, it specifies the name of a custom configuration file that allows you to change the behavior of eDB360. With this optional configuration file you can make changes such as: reduce the scope of the output to maybe a single column, a section, or even a range of sections. You can also change the time window from the default (last 31 days) to a larger window (if your AWR retention period allows) or a smaller one. You can also specify a particular range of dates, or include additional report formats such as text, csv or xml.
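Putting the two parameters together, an invocation might look like this (a sketch: the pack parameter values shown here, T for Tuning, D for Diagnostics and N for none, plus the custom file name, should be verified against the readme of the version you download):

```sql
-- default configuration, Tuning pack licensed
SQL> @edb360.sql T

-- same, but with a custom configuration file under edb360-master/sql
SQL> @edb360.sql T custom_config_01.sql
```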
If you are an experienced user of eDB360, you may then want to familiarize yourself with the configuration file sql/edb360_00_config.sql (listed below) and the sample custom configuration file provided (sql/custom_config_01.sql). You can then assemble your own configuration file as a subset of the DEF values from sql/edb360_00_config.sql, and provide the name of your custom configuration script as the second execution parameter for eDB360. Your configuration file should reside under edb360-master/sql, and the filename is case sensitive.
-- edb360 configuration file. for those cases where you must change edb360 functionality

/*************************** ok to modify (if really needed) ****************************/

-- section to report. null means all (default)
-- report column, or section, or range of columns or range of sections i.e. 3, 3-4, 3a, 3a-4c, 3-4c, 3c-4
DEF edb360_sections = '';

-- edb360 trace
DEF sql_trace_level = '1';

-- history days (default 31)
DEF edb360_conf_days = '31';

-- range of dates below supersedes history days when values are other than YYYY-MM-DD
DEF edb360_conf_date_from = 'YYYY-MM-DD';
DEF edb360_conf_date_to = 'YYYY-MM-DD';

-- working hours are defined between these two HH24MM values (i.e. 7:30AM and 7:30PM)
DEF edb360_conf_work_time_from = '0730';
DEF edb360_conf_work_time_to = '1930';

-- working days are defined between 1 (Sunday) and 7 (Saturday) (default Mon-Fri)
DEF edb360_conf_work_day_from = '2';
DEF edb360_conf_work_day_to = '6';

-- maximum time in hours to allow edb360 to execute (default 24 hrs)
DEF edb360_conf_max_hours = '24';

-- include GV$ACTIVE_SESSION_HISTORY (default N)
DEF edb360_conf_incl_ash_mem = 'N';

-- include GV$SQL_MONITOR (default N)
DEF edb360_conf_incl_sql_mon = 'N';

-- include GV$SYSSTAT (default Y)
DEF edb360_conf_incl_stat_mem = 'Y';

-- include GV$PX and GV$PQ (default Y)
DEF edb360_conf_incl_px_mem = 'Y';

-- include DBA_SEGMENTS on queries with no filter on segment_name (default Y)
-- note: some releases of Oracle produce suboptimal plans when no segment_name is passed
DEF edb360_conf_incl_segments = 'Y';

-- include DBMS_METADATA calls (default Y)
-- note: some releases of Oracle take very long to generate metadata
DEF edb360_conf_incl_metadata = 'Y';

/**************************** not recommended to modify *********************************/

-- excluding report types reduces usability while providing marginal performance gain
DEF edb360_conf_incl_html = 'Y';
DEF edb360_conf_incl_xml = 'N';
DEF edb360_conf_incl_text = 'N';
DEF edb360_conf_incl_csv = 'N';
DEF edb360_conf_incl_line = 'Y';
DEF edb360_conf_incl_pie = 'Y';

-- excluding awr reports substantially reduces usability with minimal performance gain
DEF edb360_conf_incl_awr_rpt = 'Y';
DEF edb360_conf_incl_addm_rpt = 'Y';
DEF edb360_conf_incl_ash_rpt = 'Y';
DEF edb360_conf_incl_tkprof = 'Y';

-- top sql to execute further diagnostics (range 0-128)
DEF edb360_conf_top_sql = '48';
DEF edb360_conf_top_cur = '4';
DEF edb360_conf_top_sig = '4';
DEF edb360_conf_planx_top = '48';
DEF edb360_conf_sqlmon_top = '0';
DEF edb360_conf_sqlash_top = '0';
DEF edb360_conf_sqlhc_top = '0';
DEF edb360_conf_sqld360_top = '16';
DEF edb360_conf_sqld360_top_tc = '0';

/************************************ modifications *************************************/

-- If you need to modify any parameter create a new custom configuration file with a
-- subset of the DEF above, and place on same edb360-master/sql directory; then when
-- you execute edb360.sql, pass on second parameter the name of your configuration file
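For example, a minimal custom configuration file that narrows the report to section 3 and the last 7 days need contain only the DEF values that differ from the defaults (the file name is illustrative; place it under edb360-master/sql and pass its name as the second execution parameter):

```sql
-- custom_config_01.sql (example): a subset of the DEFs from edb360_00_config.sql
DEF edb360_sections = '3';
DEF edb360_conf_days = '7';
```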
Every so often I need to gain an understanding of an application represented by one or several schemas. In such cases I look at the data model if one exists; otherwise I browse the schema objects, including tables, views and PL/SQL libraries, and extending into synonyms, triggers, sequences, indexes, materialized views and types.
I created a small free tool that installs nothing on the database and generates a zip file with the most common DDL metadata related to one or multiple application schemas. This new free tool is meta360 and can be found on the right margin of my blog. To use it, just download and unzip it on your server, navigate to the main meta360-master directory and, connecting as a user with access to DBMS_METADATA for the schema(s) you want to extract, execute one of the four scripts below. All parameters are case sensitive. A tool configuration file and a readme.txt are included.
- SQL> @sql/get_top_N_schemas.sql
- SQL> @sql/get_schema.sql <SCHEMA>
- SQL> @sql/get_table.sql <SCHEMA> <TABLE_NAME>
- SQL> @sql/get_object.sql <SCHEMA> <OBJECT_NAME> <OBJECT_TYPE>
I recently got this question:
<<<Is there a way that I can generate SQL MONITORING report for a particular SQL_ID ( This SQL is generated from application code so I can’t add “MONITOR” hint) from command prompt ? If yes can you please help me through this ?>>>
Since this question is of general interest, I’d rather respond here:
As you know, SQL Monitoring starts automatically on a SQL that executes a PX plan, or when its Serial execution has consumed over 5 seconds on CPU or I/O.
If you want to force SQL Monitoring on a SQL statement, without modifying the SQL text itself, I suggest you create a SQL Patch for it. But before you do, please be aware that SQL Monitoring requires the Oracle Tuning Pack.
How to turn on SQL Monitoring for a SQL that executes serially, takes less than 5 seconds, and cannot be modified in the application that issues it
Use SQL Patch with the MONITOR Hint. An easy way to do that is by using the free sqlpch.sql script provided as part of the cscripts (see right-hand side of this blog under Downloads).
To use sqlpch.sql script, pass as parameter #1 your SQL_ID and for parameter #2 pass “GATHER_PLAN_STATISTICS MONITOR” (without the double quotes).
This sqlpch.sql script will create a SQL Patch for your SQL, which will produce SQL Monitoring (and the collection of A-Rows) for every execution of your SQL.
Be aware there is some overhead involved, so after you are done with your analysis drop the SQL Patch.
Script sqlpch.sql shows the name of the SQL Patch it creates (look at its spool file), and it gives you the command to drop such SQL Patch.
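If you prefer to create and drop the patch yourself rather than through sqlpch.sql, DBMS_SQLDIAG exposes the underlying calls. A sketch assuming Oracle 12.2 or later, where CREATE_SQL_PATCH accepts a SQL_ID directly (the SQL_ID and patch name below are hypothetical):

```sql
-- create a SQL Patch that injects the hints for a given SQL_ID
DECLARE
  l_name VARCHAR2(128);
BEGIN
  l_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
    sql_id    => 'abcd1234efgh5',                    -- your SQL_ID here
    hint_text => 'GATHER_PLAN_STATISTICS MONITOR',
    name      => 'my_monitor_patch');
END;
/

-- when done with the analysis, drop it to remove the overhead
EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'my_monitor_patch');
```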
For the actual analysis and diagnostics of your SQL (after you have executed it with the SQL Patch in place), use the free SQLd360 tool.
And for more details about sqlpch.sql and other uses of this script please refer to this entry on my blog.
With permission of the Northern California Oracle Users Group (NoCOUG) I am reproducing a warm interview on SQLTXPLAIN and SQLd360. During this interview Mauro Pagano and myself talk about the history behind these two free tools and how the former has evolved into the latter. You can find the full transcript of the interview here: YesSQL(T). If you want to read the entire free online NoCOUG Journal, you will discover other cool articles.
Anyways, I am glad Iggy Fernandez invited us, first to participate in this interview, and second to collaborate on the meeting planned for January. At that meeting Mauro and I will conduct a full-day workshop on "Practical SQL Tuning" (January 28) in Northern California. We hope to see many of you guys there, and please bring questions and case studies.