Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘Active Session History’ Category

eDB360 new features (March 2017)

with 2 comments

As many of you know, eDB360 is a free tool that provides a 360-degree view of an Oracle database without any installation. A new version is made available roughly once a month, but occasionally a large number of enhancements is implemented at once. This new release v1708 (March 25, 2017) includes several new features recently requested by users of the tool, thus the need to blog about what is new:

  1. Reducing the scope of eDB360 is now possible without having to generate a custom configuration file. Prior to this version, if a user wanted to generate output for, let's say, AWR reports only (section 7a), the tool needed a custom.sql file with the line DEF edb360_sections = '7a';. Then we would pass to edb360.sql, as the 2nd execution parameter, the name of this custom configuration file (too cumbersome!). Starting with v1708, we can directly pass to edb360.sql the section that we desire (e.g. SQL> @edb360 T 7a). This 2nd parameter still accepts the name of a custom configuration file (legacy functionality), but now it also accepts a column, a section, a list of columns or a list of sections; for example: 7a, 7, 7a-7b, 1-4 and 3 are all valid values (see the sample invocations after this list).
  2. A couple of reports were added to section 3h: “SQL in logon storms” and “SQL executed row-by-row”. The former identifies SQL statements that are seen frequently on very short-lived sessions (based on ASH), and the latter presents a list of SQL statements with a large number of executions and a small number of rows processed.
  3. eDB360 now extracts ASH from eAdam for the top 16 SQL_ID (as per the SQLd360 list) + top 12 SNAP_ID (as per AWR MAX from column 7a). This means eDB360 now includes a TAR file with raw ASH data for both SQL statements of interest and AWR periods of interest (both according to what eDB360 considers important). Using eAdam is easy, so when the content of eDB360 does not include a very specific aggregation of ASH data that we need, or when we have to understand the sequence of some ASH samples, for example, we can restore this eAdam data on any Oracle database and data mine it.
  4. Some reports on section 2b now show totals at the bottom, summing some of the numeric values. Other reports may follow in future releases.
  5. The RMAN section now includes a new report: “Blocks with Corruption or Non-logged”.
  6. Added a Load Profile (Per Sec, Per Txn and Count) as per DBA_HIST_SYSMETRIC_SUMMARY. This Load Profile resembles what we see at the top of an AWR report, but it is computed for the entire period of diagnostics (31 days by default). It shows max values, average, median and several percentiles. With this new report on section 1a, we can glance over it and discover in minutes some areas of further interest; for example, logons per second being too high, just to mention one.
  7. There is a new section 4i with “Waits Count vs. Average Latency for top 24 Wait Events”. With this set of 24 reports (one for each of the top wait events) we can observe whether patterns in the number of waits relate to patterns in the latency of that wait event; for example, we are able to see if an increase in the number of waits for db file sequential read correlates to an increase in the average latency of that wait event. We can also observe cases where the latency of a wait event cannot be explained by the load on the current database, thus hinting at an external influence.
  8. Fixed “ORA-01476: divisor is equal to zero” on planx at DBA_HIST_SQLSTAT.
  9. Added AWR DIFF reports for RAC and per instance. These are computed comparing MAX reports to MEDIAN reports, and they help to quickly identify large differences in load. These new AWR DIFF reports are regulated by configuration parameter edb360_conf_incl_addm_rpt (enabled by default). They exist on 11gR2 and higher.
  10. Added the ASH Analytics Active report for 12c. This new ASH report is regulated by configuration parameter edb360_conf_incl_ash_analy_rpt (enabled by default). This applies to 12c and higher.
  11. The name of the database is now part of the main filename. Some users requested to include the database name as part of the main zip file since they use eDB360 periodically on several databases. This new feature is regulated by configuration parameter edb360_conf_incl_dbname_file (disabled by default).
  12. At completion, the main eDB360 zip file can now be automatically moved to a location other than the standard SQL*Plus working directory. All output files are still generated in the local SQL*Plus directory from where the edb360.sql script is executed (i.e. the edb360-master directory), but at the completion of the execution the consolidated output zip file is moved to the location specified by a new parameter. This new feature is regulated by configuration parameter edb360_move_directory (disabled by default).
  13. Added new report on “Database and Schema Triggers” under column 3h. This new report can be used to see potential LOGON or other global triggers. For triggers on specific tables, refer to SQLd360 which is automatically included on eDB360 for top SQL.
  14. All queries executed by eDB360 to generate its output were modified. The new format is q'[query]'. The reason for this change is to improve the readability of the code.
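
For reference, here are sample invocations of the new section-scoping parameter described in item 1, using the valid values listed there (the 1st parameter remains the Oracle Pack license, 'T' in these examples):

SQL> @edb360 T 7a     -- a single column (AWR reports only)
SQL> @edb360 T 7      -- an entire section
SQL> @edb360 T 7a-7b  -- a list of columns
SQL> @edb360 T 1-4    -- a list of sections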

Always download and use the latest version of this tool. For questions or feedback email me. And I hope you get to enjoy eDB360 as much as I do!


Written by Carlos Sierra

March 25, 2017 at 6:51 pm

eDB360 includes now an optional staging repository

with 6 comments

eDB360 has always worked under the premise “no installation required”, and that is still the case today – it is part of its fundamental essence: give me a 360-degree view of my Oracle database with no installation whatsoever. With that in mind, this free tool helps sites that have gone to the cloud, as well as those with “on-premises” databases; and in both cases not installing anything certainly expedites diagnostics collection. With eDB360, you simply connect to SQL*Plus with an account that can select from the catalog, then execute the set of scripts behind eDB360 and bingo!, you get to understand what is going on with your database just by navigating the html output. With such functionality, we can remotely diagnose a database, and even elaborate a full health-check of it. After all, that is how we successfully use it every day, saving us hundreds of hours of metadata gathering and cross-reference analysis.

Starting with release v1706, eDB360 also supports an optional staging repository of the 26 AWR views listed below. Why? The answer is simple: improved performance! This can be quite significant on large databases with hundreds of active sessions, with frequent snapshots, or with a long history on AWR. We have seen cases where years of data are “stuck” on AWR, especially in older releases of the database. Of course cleaning up the outdated AWR history (and corresponding statistics) is highly recommended, but in the meantime trying to execute edb360 on such databases may lead to long execution hours and frustration, sometimes taking days for what should take only a few hours.

  1. dba_hist_active_sess_history
  2. dba_hist_database_instance
  3. dba_hist_event_histogram
  4. dba_hist_ic_client_stats
  5. dba_hist_ic_device_stats
  6. dba_hist_interconnect_pings
  7. dba_hist_memory_resize_ops
  8. dba_hist_memory_target_advice
  9. dba_hist_osstat
  10. dba_hist_parameter
  11. dba_hist_pgastat
  12. dba_hist_resource_limit
  13. dba_hist_service_name
  14. dba_hist_sga
  15. dba_hist_sgastat
  16. dba_hist_sql_plan
  17. dba_hist_sqlstat
  18. dba_hist_sqltext
  19. dba_hist_sys_time_model
  20. dba_hist_sysmetric_history
  21. dba_hist_sysmetric_summary
  22. dba_hist_sysstat
  23. dba_hist_system_event
  24. dba_hist_tbspc_space_usage
  25. dba_hist_wr_control
  26. dba_hist_snapshot

Thus, if you are contemplating executing eDB360 on a large database, and the pre-check script edb360-master/sql/awr_ash_pre_check.sql shows that eDB360 might take over 24 hours, then while you clean up your AWR repository you can use the eDB360 staging repository as a workaround to speed up eDB360 execution. The use of this optional staging repository is very simple: just look inside the edb360-master/repo directory for instructions. And as always, shoot me an email or comment here if there are any questions.
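
If you want a rough sense of the AWR/ASH volume the tool would have to traverse, beyond what the pre-check reports, a couple of simple dictionary queries can help; this is just an illustrative sketch and not part of eDB360 itself:

-- from the edb360-master directory, the pre-check mentioned above:
SQL> @sql/awr_ash_pre_check.sql

-- quick size gauges (illustrative only):
SELECT COUNT(*) ash_rows
  FROM dba_hist_active_sess_history;
SELECT MIN(begin_interval_time) oldest_snap,
       MAX(end_interval_time) newest_snap,
       COUNT(*) snapshots
  FROM dba_hist_snapshot;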

Written by Carlos Sierra

February 19, 2017 at 8:43 pm

edb360 taking a long time

with 2 comments

In most cases edb360 takes less than one hour to execute. But I often hear of cases where it takes a lot longer than that. In a corner case it was taking several days and had to be killed.

So the question is: WHY does edb360 take that long?

Well, edb360 executes thousands of SQL statements sequentially (intentionally). Many of these queries read data from AWR, and in particular from ASH. So, let's say your ASH historical table has 2 billion rows, and on top of that you have not gathered statistics on the AWR tables in years; the CBO then under-estimates cardinality and tends to use index access and nested loops. In such extreme cases you may end up with suboptimal execution plans that expect to return a few rows, but actually read a couple of billion rows using index access operations and nested loops. A query like this may take hours to complete!
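
A quick way to check how stale the statistics on the underlying AWR tables are is a dictionary query like the one below; this is an illustrative sketch, not part of edb360 itself (the first log file described in the steps below reports this as well):

SELECT table_name, num_rows, last_analyzed
  FROM dba_tab_statistics
 WHERE owner = 'SYS'
   AND table_name LIKE 'WRH$%'
 ORDER BY last_analyzed NULLS FIRST;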

As of version v1515, edb360 has a shortcut algorithm that ends an execution after 8 hours. So you may get an incomplete output, but it ends normally and the partial output can actually be used. This is not a solution but a workaround for those long executions.

How to troubleshoot edb360 taking long?

Steps:

1. Review files 00002_edb360_dbname_log.txt, 00003_edb360_dbname_log2.txt, 00004_edb360_dbname_log3.txt and 00005_edb360_dbname_tkprof_sort.txt. The first log shows the state of the statistics for the AWR tables. If the stats are old, then gather them fresh with script edb360/sql/gather_stats_wr_sys.sql.

2. If the number of rows in WRH$_ACTIVE_SESSION_HISTORY as per 00002_edb360_dbname_log.txt is in the several millions, then you may not be purging data periodically. There are some known bugs and some blog posts in this regard. Review MOS 387914.1 and proceed accordingly. Execute the query below to validate the age of your ASH data:

SELECT TRUNC(sample_time, 'MM'), COUNT(*)
  FROM dba_hist_active_sess_history
 GROUP BY TRUNC(sample_time, 'MM')
 ORDER BY TRUNC(sample_time, 'MM')
 /
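
If this query confirms months or years of old samples, then after reviewing MOS 387914.1 you will typically want to reduce the AWR retention and drop the outdated snapshot ranges. Below is a hedged sketch using the documented DBMS_WORKLOAD_REPOSITORY API; the retention and SNAP_ID values are placeholders to adjust to your case:

BEGIN
  -- keep 31 days of history (retention is expressed in minutes)
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 31 * 24 * 60);
  -- drop an old range of snapshots (replace 100 and 5000 with your own SNAP_IDs)
  DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(low_snap_id => 100, high_snap_id => 5000);
END;
/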

3. If your edb360 version (first line of its readme) is older than 1 month, download and use the latest version: https://github.com/carlos-sierra/edb360/archive/master.zip (the link is also provided on the right-hand side of this blog under downloads).

4. Consider suppressing the text and/or CSV reports, each for an estimated gain of about 20%. Keep in mind that when suppressing reports, you start losing some functionality. To suppress, let's say, the text and CSV reports, place the following two commands at the end of script edb360/sql/edb360_00_config.sql:

DEF edb360_conf_incl_text = 'N';
DEF edb360_conf_incl_csv = 'N';

5. If after going through steps 1-4 above edb360 still takes longer than a few hours, feel free to email the author at carlos.sierra.usa@gmail.com and provide the 4 files from step 1.

Written by Carlos Sierra

May 6, 2015 at 8:19 pm

How to identify SQL performing poorly on an APEX application?

with 3 comments

Oracle Application Express (APEX) is a great tool to rapidly develop applications on top of an Oracle database. While developing an internal application we noticed that some pages were slow, meaning taking a few seconds to refresh. Suspecting there was some poorly performing SQL behind those pages, we tried to generate a SQL Trace so we could review the generated SQL. Well, there is no out-of-the-box instrumentation to turn SQL Trace ON from an APEX page… Thus our challenge became: How can we identify suspected SQL performing poorly, when such SQL is generated by an APEX page?

Using ASH

Active Session History (ASH) requires an Oracle Diagnostics Pack License. If your site has such a License, and you need to identify poorly performing SQL generated by APEX, you may want to use the find_apex.sql script below. It asks for an application user and for the APEX session (a list is provided in both cases). It outputs a list of poorly performing SQL indicating the APEX page of origin, the SQL_ID and the SQL text. With the SQL_ID you can use some other tool to gather additional diagnostics details, including the Execution Plan; for that you may want to use planx.sql, sqlmon.sql or sqlash.sql. Note that the find_apex.sql script also references sqld360.sql, but this new tool is not yet available, so use one of the other 3 suggestions for the time being (or SQLHC/SQLT).

To find poorly performing SQL, script find_apex.sql uses ASH instead of SQL Trace. If the action on a page takes more than a second, then most probably ASH will capture the poorly performing SQL delaying the page.

Script

----------------------------------------------------------------------------------------
--
-- File name: find_apex.sql
--
-- Purpose: Finds APEX poorly performing SQL for a given application user and session
--
-- Author: Carlos Sierra
--
-- Version: 2014/09/03
--
-- Usage: Inputs APEX application user and session id, and outputs list of poorly
-- performing SQL statements for further investigation with other tools.
--
-- Example: @find_apex.sql
--
-- Notes: Developed and tested on 11.2.0.3.
--
-- Requires an Oracle Diagnostics Pack License since ASH data is accessed.
--
-- To further investigate poorly performing SQL use sqld360.sql
-- (or planx.sql or sqlmon.sql or sqlash.sql).
--
---------------------------------------------------------------------------------------
--
WHENEVER SQLERROR EXIT SQL.SQLCODE;
ACC confirm_license PROMPT 'Confirm with "Y" that your site has an Oracle Diagnostics Pack License: '
BEGIN
  IF NOT '&&confirm_license.' = 'Y' THEN
    RAISE_APPLICATION_ERROR(-20000, 'You must have an Oracle Diagnostics Pack License in order to use this script.');
  END IF;
END;
/
WHENEVER SQLERROR CONTINUE;
--
COL seconds FOR 999,990;
COL appl_user FOR A30;
COL min_sample_time FOR A25;
COL max_sample_time FOR A25;
COL apex_session_id FOR A25;
COL page FOR A4;
COL sql_text FOR A80;
--
SELECT COUNT(*) seconds,
       SUBSTR(client_id, 1, INSTR(client_id, ':') - 1) appl_user,
       MIN(sample_time) min_sample_time,
       MAX(sample_time) max_sample_time
  FROM gv$active_session_history
 WHERE module LIKE '%/APEX:APP %'
 GROUP BY
       SUBSTR(client_id, 1, INSTR(client_id, ':') - 1)
HAVING SUBSTR(client_id, 1, INSTR(client_id, ':') - 1) IS NOT NULL
 ORDER BY
       1 DESC, 2
/
--
ACC appl_user PROMPT 'Enter application user: ';
--
SELECT MIN(sample_time) min_sample_time,
       MAX(sample_time) max_sample_time,
       SUBSTR(client_id, INSTR(client_id, ':') + 1) apex_session_id,
       COUNT(*) seconds
  FROM gv$active_session_history
 WHERE module LIKE '%/APEX:APP %'
   AND SUBSTR(client_id, 1, INSTR(client_id, ':') - 1) = TRIM('&&appl_user.')
 GROUP BY
       SUBSTR(client_id, INSTR(client_id, ':') + 1)
 ORDER BY
       1 DESC
/
--
ACC apex_session_id PROMPT 'Enter APEX session ID: ';
--
SELECT COUNT(*) seconds,
       SUBSTR(h.module, INSTR(h.module, ':', 1, 2) + 1) page,
       h.sql_id,
       SUBSTR(s.sql_text, 1, 80) sql_text
  FROM gv$active_session_history h,
       gv$sql s
 WHERE h.module LIKE '%/APEX:APP %'
   AND SUBSTR(h.client_id, 1, INSTR(h.client_id, ':') - 1) = TRIM('&&appl_user.')
   AND SUBSTR(h.client_id, INSTR(h.client_id, ':') + 1) = TRIM('&&apex_session_id.')
   AND s.sql_id = h.sql_id
   AND s.inst_id = h.inst_id
   AND s.child_number = h.sql_child_number
 GROUP BY
       SUBSTR(h.module, INSTR(h.module, ':', 1, 2) + 1),
       h.sql_id,
       SUBSTR(s.sql_text, 1, 80)
 ORDER BY
       1 DESC, 2, 3
/
--
PRO Use sqld360.sql (or planx.sql or sqlmon.sql or sqlash.sql) on SQL_ID of interest

Note

This script, as well as some others, is now available on GitHub.

Written by Carlos Sierra

September 4, 2014 at 5:29 pm

eAdam

with 11 comments

Enkitec’s Oracle AWR Data Mining Tool

eAdam is a free tool that extracts a subset of data and metadata from an Oracle database with the objective of performing some data mining on a separate staging Oracle database. The data extracted is relevant to performance evaluation projects. Most of the data eAdam extracts is licensed by Oracle under the Diagnostics Pack, and some under the Tuning Pack. Therefore, in order to use this eAdam tool, the source database must be licensed to use both Oracle Packs (Tuning and Diagnostics).

To a point, eAdam is similar to eDB360; both access the Data Dictionary in order to produce some reports. The key difference is that eDB360 generates all the reports (after doing some intensive processing) at the source database, while eAdam simply extracts a set of flat files into a TAR file, using a very light-weight script, delaying all the intensive processing for a later time and on a separate staging system. This feature can be very attractive for busy systems where the amount of processing of any external monitoring tool needs to be minimized.

On the source system, eAdam only needs to execute a short script to extract the data and metadata of interest, producing a dense TAR file. On the staging system, eAdam does the heavy lifting: creating a repository, loading this repository, and finally computing meaningful reports. The processing of the TAR file on the staging system is usually performed by the requestor, using a lower-level database, or a remote one.

The list of objects eAdam extracts as flat files from the source database includes the following:

dba_hist_active_sess_history
dba_hist_database_instance
dba_hist_event_histogram
dba_hist_osstat
dba_hist_parameter
dba_hist_pgastat
dba_hist_sga
dba_hist_sgastat
dba_hist_snapshot
dba_hist_sql_plan
dba_hist_sqlstat
dba_hist_sqltext
dba_hist_sys_time_model
dba_hist_sysstat
gv$active_session_history
gv$log
gv$sql_monitor
gv$sql_plan_monitor
gv$sql_plan_statistics_all
gv$sql
gv$system_parameter2
v$controlfile
v$datafile
v$tempfile

eAdam works on 10gR2, 11gR2, and higher releases of Oracle, and it can be used on Linux or UNIX platforms. It has not been tested on Windows. An eAdam sample output is available at this Dropbox location; after downloading the sample output, look for the 0001_eadam36_N_dbname_index.html file and start browsing.

Instructions – Source Database

Download the tool, uncompress the master ZIP file, and look for file eadam-master/source_system/eadam_extract.sql. Review and execute this single short script, connecting to the source database as SYS or a DBA. Locate the TAR file produced, and send it to the requestor.

Be aware that the TAR file produced by the extraction process can be large, so be sure you execute this extract script from a directory with at least 10 GB of free space. Common sizes of this TAR file range between 100 MB and 1 GB. Execution time for this extraction process may exceed 1 hour, depending on the size of the Data Dictionary.
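
For illustration, the source-side flow is roughly the following; the working directory is hypothetical, and the TAR file name will differ on your system:

-- connected as SYS or a DBA, from a directory with at least 10 GB free:
SQL> @eadam-master/source_system/eadam_extract.sql
-- then locate the TAR file produced in the working directory and send it to the requestor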

Instructions – Staging Database

Be sure you have both the eAdam tool (eadam-master.zip) and the TAR file produced on a source system. Your staging database can be of equal, higher or lower release level than the source, but equal or higher is recommended. The Platform can be the same or different.

To install, load and report on the staging database, proceed with the following steps:

  1. Create on the staging system a file directory available to Oracle for read and write. Most probably you will want to connect to the OS as oracle and create a new directory like /home/oracle/eadam-master. Put the content of the eadam-master.zip file in there.
  2. Create the eAdam repository on the staging database. This step is needed only one time. Follow the instructions in eadam_readme.txt. Basically you need to execute eadam-master/stage_system/eadam_install.sql connected as SYS. This script asks for 4 parameters: tablespace names for permanent and temporary schema objects, and the username and password of the new eAdam account. For the username I recommend eadam, but you can use any valid name.
  3. Load the data contained in the TAR file into the database. To do this, first copy the TAR file into the eadam-master/stage_system sub-directory, then execute the stage_system/eadam_load.sql script while in the stage_system sub-directory, connecting as SYS. This script asks for 4 parameters. Pass first the directory path of your stage_system sub-directory, for example /home/oracle/eadam-master/stage_system (this sub-directory must contain the TAR file). Pass next the username and password of your eadam account as you created them. Pass last the name of the TAR file to be loaded into the database.
  4. The load process performs some data transformations and at the end it produces an output similar to eDB360 but smaller in content. After you review the eAdam output, you may decide to generate new output for shorter time series; in such a case use eadam-master/stage_system/eadam_report.sql, connecting as the eadam user. This reporting process asks for 3 parameters. Pass the EADAM_SEQ_ID which identifies your particular load (a list of values is displayed), then pass the range of dates using format YYYY-MM-DD/HH24:MI, for example 2014-07-27/17:33. A sample end-to-end session is sketched below.
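
Putting steps 1 to 4 together, a typical staging session looks roughly like this; paths are placeholders, and each script prompts for its parameters as described above:

-- one time only, connected as SYS, from /home/oracle/eadam-master/stage_system:
SQL> @eadam_install.sql
-- load the TAR file (as SYS); prompts: directory path, eadam username, password, TAR file name:
SQL> @eadam_load.sql
-- optionally report on a shorter time series, connected as the eadam user:
SQL> @eadam_report.sql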

Download

EADAM @ GitHub is available as free software. You can see its eadam_readme.txt, license.txt or any other piece of the tool before downloading it. Use this link eadam-master.zip to actually download eAdam as a compressed file.

Feedback

Please post your feedback about the eAdam tool on this blog, or send an email directly to the tool author: Carlos Sierra.

Written by Carlos Sierra

July 27, 2014 at 6:25 pm

Why use SQLTXPLAIN

leave a comment »

Every so often I see on a distribution list a posting that starts like this: “I upgraded my application from database release X to release Y and now many queries are performing poorly, can you tell why?”

Like everyone else on a distribution list, my first impulse is to make an educated guess permeated by a prior set of experiences. The intentions are always good, but the process is painful and time consuming. Many of us have seen this kind of question, and many of us have good hunches. Still, I think our eagerness to help blinds us a bit. The right thing to do is to step back and analyze the facts, and I mean all the diagnostics supporting the observation.

What is needed to diagnose a SQL Tuning issue?

The list is large, but I will enumerate some of the most important pieces:

  1. SQL Text
  2. Version of the database (before and after upgrade)
  3. Database parameters (before and after)
  4. State of the CBO Statistics (before and after)
  5. Changes on Histograms
  6. Basics about the architecture (CPUs, memory, etc.)
  7. Values of binds if SQL has them
  8. Index comparison, including state (visible? usable?)
  9. Execution Plan (before and after)
  10. Plan stability? (Stored Outlines, Profiles, SQL Plan Management)
  11. Performance history as per evidence on AWR or StatsPack
  12. Trace from Event 10053 to understand the CBO
  13. Trace from Event 10046 level 8 or 12 to review Waits
  14. Active Session History (ASH) if 10046 is not available

I could keep adding bullets to the list, but I think you get the point: there are simply too many things to check! And each takes some time to collect. More importantly, the state of the system changes over time, so you may need to re-collect the same diagnostics more than once.

SQLTXPLAIN to the rescue

SQLT, or SQLTXPLAIN, has been available on MetaLink (now MOS) under note 215187.1 for over a decade. In short, SQLT collects all the diagnostics listed above and a lot more. That is WHY Oracle Support uses it every day. It simply saves a lot of time! So, I always encourage fellow Oracle users to make use of this FREE tool and expedite their own SQL Tuning analysis. When time permits, I volunteer to help on an analysis. So, if you get to read this, and you want to help yourself by using SQLT but feel intimidated by this little monster, please give it a try and contact me for assistance. If I can help, I will; if I cannot, I will let you know.

Conclusion

It is fun to guess WHY a SQL is not performing as expected, and trying different guesses is educational, but very time consuming. If you want to actually find root causes before trying to fix your SQL, you may want to collect relevant diagnostics. SQLT is there to help, and if installing this tool is not something you can do in the short term, then consider SQL Health-Check (SQLHC).

 

What is new with EDB360?

with 9 comments

Many things, but most important is that it got bigger and better. This EDB360 free tool provided by Enkitec is maturing over time. Its core function has not changed, though: to present a 360-degree view of a database (10g or higher).

EDB360 is a nice complement to other tools like Exacheck, Raccheck or Oracheck. It has some additional benefits, like taking a snapshot of a system to then be analyzed offline or simply to preserve this snapshot as a baseline.

Keep in mind that EDB360 does not install anything on the database, nor does it change any data in it. In some cases, where direct access to the database server is not an option, having the capability of executing EDB360 through a SQL*Plus client connection is a big plus.

I use EDB360 as a starting place to perform a whole database health-check.

Since pictures tell more than words, please find four below. The first two are about the new entries on the EDB360 main menu (the menu is a tad bigger than what you see in these two pictures, and its content is dynamic). The last two pictures are just a sample of the charts that are now part of EDB360.

EDB360 execution parameters changed from 4 to 6:

  1. Oracle Pack License: If your site has the Tuning Pack, then enter ‘T’, else if your site has the Diagnostics Pack enter ‘D’, else enter ‘N’.
  2. Days of History to consider. If you entered ‘T’ or ‘D’ on the first parameter, then specify in this 2nd parameter up to how many days of history you want EDB360 to use. By default it uses 31, assuming your AWR history is at least that big.
  3. Do you want HTML Reports? By default it is ‘Y’.
  4. Do you want Text Reports? Defaults to ‘Y’.
  5. Do you want CSV Files? Defaults to ‘Y’.
  6. Do you want Charts? Defaults to ‘Y’.

Once you log into SQL*Plus while on top of the edb360 directory, simply execute script edb360.sql and pass all 6 parameters, one by one or all of them inline. For example: @edb360 T 31 Y Y Y Y
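
Spelled out with annotations (the second combination is illustrative, to show how the last 4 flags can trim the output):

SQL> @edb360 T 31 Y Y Y Y  -- Tuning Pack, 31 days, HTML, text, CSV and charts
SQL> @edb360 D 14 Y N N Y  -- Diagnostics Pack, 14 days, HTML and charts only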

[Screenshot] EDB360 Dynamic Menu (part 1)

[Screenshot] EDB360 Dynamic Menu (part 2)

[Screenshot] CPU Load per Instance (sample chart)

[Screenshot] ASH Top Modules and Actions for Cluster (sample chart)

If you have downloaded EDB360 before, then I encourage you to download and test the new version. If you have never used it, I hope you find this tool useful.

Written by Carlos Sierra

April 2, 2014 at 10:04 am