Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘Scripts’ Category

“ORA-00997: illegal use of LONG datatype” on a CTAS querying a view with a LONG column


Working on the new eDB360 repository I came across this “ORA-00997: illegal use of LONG datatype” error while trying to do a CTAS on the following DBA views:

dba_constraints
dba_ind_partitions
dba_ind_subpartitions
dba_tab_cols
dba_tab_columns
dba_tab_partitions
dba_tab_subpartitions
dba_triggers
dba_views

All these views include at least one LONG column, which raises ORA-00997 when trying to do something like: CREATE TABLE edb360.dba#constraints AS SELECT * FROM dba_constraints.
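One manual workaround (and the basis for the script further below) is to spell out the column list and wrap each LONG column with TO_LOB, which is only allowed inside CTAS and INSERT … SELECT statements. A minimal sketch for one of these views, listing only a few of its columns and reusing the owner/table naming convention above:

CREATE TABLE edb360.dba#views AS
SELECT owner, view_name, text_length, TO_LOB(text) AS text
  FROM dba_views;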

I found several blogs explaining the reason and some providing hints, like using the TO_LOB function. Based on that I created a new stand-alone script that takes 4 parameters and performs the CTAS I need. The 4 parameters are:

1: owner of source table/view
2: source table/view name
3: owner of target table
4: target table name

I am making this free script available for others to use at will.

-- How to solve ORA-00997: illegal use of LONG datatype while copying tables
-- parameters
-- 1: owner of source table/view
-- 2: source table/view name
-- 3: owner of target table
-- 4: target table name
-- sample: @repo_edb360_create_one sys dba_constraints edb360 dba#constraints

DEF owner_source = '&1.';
DEF table_source = '&2.';
DEF owner_target = '&3.';
DEF table_target = '&4.';

DECLARE
  l_list_ddl VARCHAR2(32767);
  l_list_sel VARCHAR2(32767);
  l_list_ins VARCHAR2(32767);
BEGIN
  FOR i IN (SELECT column_name, data_type, data_length
              FROM dba_tab_columns
             WHERE owner = UPPER(TRIM('&&owner_source.'))
               AND table_name = UPPER(TRIM('&&table_source.'))
             ORDER BY column_id)
  LOOP
    l_list_ddl := l_list_ddl||','||i.column_name||' '||REPLACE(i.data_type,'LONG','CLOB');
    l_list_ins := l_list_ins||','||i.column_name;
    IF i.data_type IN ('VARCHAR2', 'CHAR', 'RAW') THEN
      l_list_ddl := l_list_ddl||'('||i.data_length||')';
    END IF;
    IF i.data_type = 'LONG' THEN
      l_list_sel := l_list_sel||',TO_LOB('||i.column_name||')';
    ELSE
      l_list_sel := l_list_sel||','||i.column_name;
    END IF;
  END LOOP;
  EXECUTE IMMEDIATE 'CREATE TABLE &&owner_target..&&table_target. ('||TRIM(',' FROM l_list_ddl)||') COMPRESS';
  EXECUTE IMMEDIATE 'INSERT /*+ APPEND */ INTO &&owner_target..&&table_target. ('||TRIM(',' FROM l_list_ins)||') SELECT '||TRIM(',' FROM l_list_sel)||' FROM &&owner_source..&&table_source.';
  EXECUTE IMMEDIATE 'COMMIT';
END;
/

Written by Carlos Sierra

April 3, 2017 at 11:51 am

Posted in edb360, Scripts

eDB360 new features (March 2017)


As many of you know, eDB360 is a free tool that provides a 360-degree view of an Oracle database without any installation. A new version is made available roughly once per month, but occasionally a large number of enhancements are implemented at once. This new release v1708 (March 25, 2017) includes several new features requested recently by some users of the tool, thus the need to blog about what is new:

  1. Reducing the scope of eDB360 is now possible without having to generate a custom configuration file. Prior to this version, if a user wanted to generate output for, let’s say, AWR reports only (section 7a), the tool needed a custom.sql file with the line DEF edb360_sections = '7a';. Then we would pass the name of this custom configuration file to edb360.sql as the 2nd execution parameter (too cumbersome!). Starting with v1708, we can directly pass to edb360.sql the section we desire (i.e. SQL> @edb360 T 7a). This 2nd parameter can still take the name of a custom configuration file (legacy functionality), but it now also accepts a column, a section, a range of columns or a range of sections; for example: 7a, 7, 7a-7b, 1-4 and 3 are all valid values (see the invocation example after this list).
  2. A couple of reports were added to section 3h: “SQL in logon storms” and “SQL executed row-by-row”. The former identifies SQL statements seen frequently on very short-lived sessions (based on ASH), and the latter presents a list of SQL statements with a large number of executions and a small number of rows processed.
  3. eDB360 now extracts ASH from eAdam for the top 16 SQL_ID (as per the SQLd360 list) + the top 12 SNAP_ID (as per AWR MAX from column 7a). What this means is that eDB360 now includes a tar file with raw ASH data for both SQL statements of interest and AWR periods of interest (both according to what eDB360 considers important). Using eAdam is easy, so when the content of eDB360 does not include a very specific aggregation of ASH data that we need, or when we have to understand the sequence of some ASH samples for example, we can restore this eAdam data on any Oracle database and data mine it.
  4. Some reports in section 2b now show totals at the bottom, i.e. the SUM of some numeric values. Other reports may follow in future releases.
  5. The RMAN section now includes a new report “Blocks with Corruption or Non-logged”.
  6. Added a Load Profile (Per Sec, Per Txn and Count) as per DBA_HIST_SYSMETRIC_SUMMARY. This Load Profile resembles what we see at the top of an AWR report, but it is computed for the entire period of diagnostics (31 days by default). It shows max values, average, median and several percentiles. With this new report in section 1a, we can glance over it and discover in minutes some areas of further interest, for example logons per second being too high, just to mention one.
  7. There is a new section 4i with “Waits Count v.s. Average Latency for top 24 Wait Events”. With this set of 24 reports (one for each of the top wait events) we can observe whether patterns in the number of waits relate to patterns in the latency of that wait event; for example, we are able to see if an increase in the number of waits for db file sequential reads correlates with an increase in average latency for that wait event. We can also observe cases where the latency of a wait event cannot be explained by the load on the current database, thus hinting at an external influence.
  8. Fixed “ORA-01476: divisor is equal to zero” on planx at DBA_HIST_SQLSTAT.
  9. Added AWR DIFF reports for RAC and per instance. These are computed by comparing MAX reports to MEDIAN reports, and they help to quickly identify large differences in load. These new AWR DIFF reports are regulated by configuration parameter edb360_conf_incl_addm_rpt (enabled by default). They exist on 11gR2 and higher.
  10. Added the ASH Analytics Active report for 12c. This new ASH report is regulated by configuration parameter edb360_conf_incl_ash_analy_rpt (enabled by default). This applies to 12c and higher.
  11. The name of the database is now part of the main filename. Some users requested that the database name be included in the name of the main zip file since they use eDB360 periodically on several databases. This new feature is regulated by configuration parameter edb360_conf_incl_dbname_file (disabled by default).
  12. At completion, the main eDB360 zip file can now be automatically moved to a location other than the standard SQL*Plus working directory. All output files are still generated in the local SQL*Plus directory from where the edb360.sql script is executed (i.e. the edb360-master directory), but at the end of the execution the consolidated output zip file is moved to the location specified by a new parameter. This new feature is regulated by configuration parameter edb360_move_directory (disabled by default).
  13. Added a new report on “Database and Schema Triggers” under column 3h. This new report can be used to spot potential LOGON or other global triggers. For triggers on specific tables, refer to SQLd360, which is automatically included in eDB360 for top SQL.
  14. All queries executed by eDB360 to generate its output were modified. The new format is q'[query]'. The reason for this change is to improve the readability of the code.
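For reference, the new section-scoping invocation described in item 1 looks like this from SQL*Plus; the first parameter is the licensing flag (T for Tuning Pack in these examples) and the last form shows the legacy custom-configuration-file call:

SQL> @edb360 T 7a                    -- column 7a only
SQL> @edb360 T 1-4                   -- sections 1 through 4
SQL> @edb360 T custom_config_01.sql  -- legacy: name of a custom configuration file

And regarding item 14, the q'[...]' alternative quoting syntax avoids having to double every single quote inside a literal, e.g. q'[it's easier to read]' versus 'it''s easier to read'.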

Always download and use the latest version of this tool. For questions or feedback email me. And I hope you get to enjoy eDB360 as much as I do!

Written by Carlos Sierra

March 25, 2017 at 6:51 pm

eDB360 includes now an optional staging repository


eDB360 has always worked under the premise “no installation required”, and that is still the case today – it is part of its fundamental essence: give me a 360-degree view of my Oracle database with no installation whatsoever. With that in mind, this free tool helps sites that have gone to the cloud, as well as those with “on-premises” databases; and in both cases not installing anything certainly expedites diagnostics collections. With eDB360, you simply connect to SQL*Plus with an account that can select from the catalog, then execute the set of scripts behind eDB360 and, bingo!, you get to understand what is going on with your database just by navigating the html output. With such functionality, we can remotely diagnose a database, and even produce a full health-check of it. After all, that is how we successfully use it every day, saving us hundreds of hours of metadata gathering and cross-reference analysis.

Starting with release v1706, eDB360 also supports an optional staging repository of the 26 AWR views listed below. Why? The answer is simple: improved performance! This can be quite significant on large databases with hundreds of active sessions, with frequent snapshots, or with a long history in AWR. We have seen cases where years of data are “stuck” in AWR, especially in older releases of the database. Of course cleaning up the outdated AWR history (and corresponding statistics) is highly recommended, but in the meantime trying to execute eDB360 on such databases may lead to long execution times and frustration, sometimes taking days for what should take only a few hours.

  1. dba_hist_active_sess_history
  2. dba_hist_database_instance
  3. dba_hist_event_histogram
  4. dba_hist_ic_client_stats
  5. dba_hist_ic_device_stats
  6. dba_hist_interconnect_pings
  7. dba_hist_memory_resize_ops
  8. dba_hist_memory_target_advice
  9. dba_hist_osstat
  10. dba_hist_parameter
  11. dba_hist_pgastat
  12. dba_hist_resource_limit
  13. dba_hist_service_name
  14. dba_hist_sga
  15. dba_hist_sgastat
  16. dba_hist_sql_plan
  17. dba_hist_sqlstat
  18. dba_hist_sqltext
  19. dba_hist_sys_time_model
  20. dba_hist_sysmetric_history
  21. dba_hist_sysmetric_summary
  22. dba_hist_sysstat
  23. dba_hist_system_event
  24. dba_hist_tbspc_space_usage
  25. dba_hist_wr_control
  26. dba_hist_snapshot

Thus, if you are contemplating executing eDB360 on a large database, and the pre-check script edb360-master/sql/awr_ash_pre_check.sql shows that eDB360 might take over 24 hours, then while you clean up your AWR repository you can use the eDB360 staging repository as a workaround to speed up eDB360 execution. The use of this optional staging repository is very simple: just look inside the edb360-master/repo directory for instructions. And as always, shoot me an email or comment here if you have any questions.
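Running the pre-check is straightforward (from the edb360-master directory, the same place you run edb360.sql from). Conceptually, the staging repository boils down to materializing these AWR views into plain tables, in the spirit of the repo_edb360_create_one.sql script shown in the first post above; the sketch below is only illustrative (the table name and the 31-day filter are mine), the actual repository scripts live under edb360-master/repo:

SQL> @sql/awr_ash_pre_check.sql

-- illustrative only: a staged copy of one AWR view, limited to the default 31-day window
CREATE TABLE edb360.dba_hist#snapshot AS
SELECT * FROM dba_hist_snapshot
 WHERE begin_interval_time > SYSTIMESTAMP - INTERVAL '31' DAY;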

Written by Carlos Sierra

February 19, 2017 at 8:43 pm

How to configure eDB360?


eDB360 has two execution parameters. The first one specifies whether the database is licensed to use the Tuning or Diagnostics packs (or none). The second parameter is optional; if entered, it specifies the name of a custom configuration file that allows you to change the behavior of eDB360. With this optional configuration file you can make changes such as reducing the scope of the output to a single column, a section, or even a range of sections. You can also change the time window from the default (last 31 days) to a larger window (if your AWR retention period allows) or a smaller one. You can also specify a particular range of dates, or include additional report formats such as text, csv or xml.

If you are an experienced user of eDB360, you may then want to familiarize yourself with the configuration file sql/edb360_00_config.sql (listed below) and the sample custom configuration file provided (sql/custom_config_01.sql). Then you may want to assemble your own configuration file as a subset of the DEF values from sql/edb360_00_config.sql, and provide the name of your custom configuration script as the second execution parameter for eDB360. Your configuration file should reside under edb360-master/sql, and the filename is case sensitive.

-- edb360 configuration file. for those cases where you must change edb360 functionality

/*************************** ok to modify (if really needed) ****************************/

-- section to report. null means all (default)
-- report column, or section, or range of columns or range of sections i.e. 3, 3-4, 3a, 3a-4c, 3-4c, 3c-4
DEF edb360_sections = '';

-- edb360 trace
DEF sql_trace_level = '1';

-- history days (default 31)
DEF edb360_conf_days = '31';

-- the range of dates below supersedes history days when the values are other than YYYY-MM-DD
DEF edb360_conf_date_from = 'YYYY-MM-DD';
DEF edb360_conf_date_to = 'YYYY-MM-DD';

-- working hours are defined between these two HH24MI values (i.e. 7:30AM and 7:30PM)
DEF edb360_conf_work_time_from = '0730';
DEF edb360_conf_work_time_to = '1930';

-- working days are defined between 1 (Sunday) and 7 (Saturday) (default Mon-Fri)
DEF edb360_conf_work_day_from = '2';
DEF edb360_conf_work_day_to = '6';

-- maximum time in hours to allow edb360 to execute (default 24 hrs)
DEF edb360_conf_max_hours = '24';

-- include GV$ACTIVE_SESSION_HISTORY (default N)
DEF edb360_conf_incl_ash_mem = 'N';

-- include GV$SQL_MONITOR (default N)
DEF edb360_conf_incl_sql_mon = 'N';

-- include GV$SYSSTAT (default Y)
DEF edb360_conf_incl_stat_mem = 'Y';

-- include GV$PX and GV$PQ (default Y)
DEF edb360_conf_incl_px_mem = 'Y';

-- include DBA_SEGMENTS on queries with no filter on segment_name (default Y)
-- note: some releases of Oracle produce suboptimal plans when no segment_name is passed
DEF edb360_conf_incl_segments = 'Y';

-- include DBMS_METADATA calls (default Y)
-- note: some releases of Oracle take very long to generate metadata
DEF edb360_conf_incl_metadata = 'Y';

/**************************** not recommended to modify *********************************/

-- excluding report types reduces usability while providing marginal performance gain
DEF edb360_conf_incl_html = 'Y';
DEF edb360_conf_incl_xml  = 'N';
DEF edb360_conf_incl_text = 'N';
DEF edb360_conf_incl_csv  = 'N';
DEF edb360_conf_incl_line = 'Y';
DEF edb360_conf_incl_pie  = 'Y';

-- excluding awr reports substantially reduces usability with minimal performance gain
DEF edb360_conf_incl_awr_rpt = 'Y';
DEF edb360_conf_incl_addm_rpt = 'Y';
DEF edb360_conf_incl_ash_rpt = 'Y';
DEF edb360_conf_incl_tkprof = 'Y';

-- top sql to execute further diagnostics (range 0-128)
DEF edb360_conf_top_sql = '48';
DEF edb360_conf_top_cur = '4';
DEF edb360_conf_top_sig = '4';
DEF edb360_conf_planx_top = '48';
DEF edb360_conf_sqlmon_top = '0';
DEF edb360_conf_sqlash_top = '0';
DEF edb360_conf_sqlhc_top = '0';
DEF edb360_conf_sqld360_top = '16';
DEF edb360_conf_sqld360_top_tc = '0';

/************************************ modifications *************************************/

-- If you need to modify any parameter, create a new custom configuration file with a
-- subset of the DEF above, and place it in the same edb360-master/sql directory; then,
-- when you execute edb360.sql, pass the name of your configuration file as the second parameter
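For example, a minimal custom configuration file (the filename and values below are purely illustrative; keep only the DEFs you actually want to override, save it under edb360-master/sql, and remember the filename is case sensitive) could look like this:

-- my_custom_config.sql (illustrative)
DEF edb360_sections = '7a';
DEF edb360_conf_days = '7';
DEF edb360_conf_incl_text = 'Y';

Then execute eDB360 passing the file name as the second parameter: SQL> @edb360 T my_custom_config.sql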

Written by Carlos Sierra

April 24, 2016 at 8:30 pm

Posted in edb360, Scripts, Tools

Getting DDL metadata for an application schema(s)


Every so often I need to gain an understanding of an application represented by one or several schemas. In such cases I look at the data model if one exists; otherwise I browse the schema objects, including tables, views and PL/SQL libraries, and extend into synonyms, triggers, sequences, indexes, materialized views and types.

I created a free small tool that installs nothing on the database and generates a zip file with the most common DDL metadata related to one or multiple application schemas. This free new tool is meta360 and can be found on the right margin of my blog. To use it, just download and unzip it on your server, navigate to the main meta360-master directory and, connecting as a user with access to DBMS_METADATA for the schema(s) you want to extract, execute one of the four scripts below. All parameters are case sensitive. A tool configuration file and a readme.txt are included.

  1. SQL> @sql/get_top_N_schemas.sql
  2. SQL> @sql/get_schema.sql <SCHEMA>
  3. SQL> @sql/get_table.sql <SCHEMA> <TABLE_NAME>
  4. SQL> @sql/get_object.sql <SCHEMA> <OBJECT_NAME> <OBJECT_TYPE>
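As the DBMS_METADATA requirement suggests, a single-object pull is conceptually equivalent to the query below (the HR.EMPLOYEES names are just an example, not part of the tool):

SET LONG 1000000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMPLOYEES', 'HR') FROM dual;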

Written by Carlos Sierra

March 20, 2016 at 7:31 pm

Posted in Scripts, Tools

SQL Monitoring without MONITOR Hint


I recently got this question:

<<<Is there a way that I can generate SQL MONITORING report for a particular SQL_ID ( This SQL is generated from application code so I can’t add “MONITOR”  hint) from command prompt ? If yes can you please help me through this ?>>>

Since this question is of general interest, I’d rather respond here:

As you know, SQL Monitoring starts automatically on a SQL that executes a PX plan, or when its Serial execution has consumed over 5 seconds on CPU or I/O.
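As for pulling the report itself from the command prompt: once a statement has been monitored, the report can be spooled from SQL*Plus with DBMS_SQLTUNE; the spool name and the &&sql_id. substitution variable below are just placeholders:

SET LONG 2000000 LONGCHUNKSIZE 2000000 PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON
SPOOL sql_monitor_&&sql_id..html
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '&&sql_id.', type => 'ACTIVE', report_level => 'ALL') FROM dual;
SPOOL OFF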

If you want to force SQL Monitoring on a SQL statement, without modifying the SQL text itself, I suggest you create a SQL Patch for it. But before you do, please be aware that SQL Monitoring requires the Oracle Tuning Pack.

How to turn on SQL Monitoring for a SQL that executes Serial, takes less than 5 seconds, and without modifying the application that issues such SQL

Use SQL Patch with the MONITOR Hint. An easy way to do that is by using the free sqlpch.sql script provided as part of the cscripts (see right-hand side of this blog under Downloads).

To use the sqlpch.sql script, pass your SQL_ID as parameter #1 and “GATHER_PLAN_STATISTICS MONITOR” (without the double quotes) as parameter #2.

This sqlpch.sql script will create a SQL Patch for your SQL, which will produce SQL Monitoring (and the collection of A-Rows) for every execution of your SQL.

Be aware there is some overhead involved, so after you are done with your analysis drop the SQL Patch.

Script sqlpch.sql shows the name of the SQL Patch it creates (look at its spool file), and it gives you the command to drop such SQL Patch.
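If you prefer to see what such a patch amounts to without the script, a rough manual equivalent on 12.2 and later (earlier releases expose this through DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH instead) is sketched below; the patch name and the &&sql_id. substitution variable are placeholders:

SET SERVEROUTPUT ON
DECLARE
  l_name VARCHAR2(128);
BEGIN
  l_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
              sql_id      => '&&sql_id.',
              hint_text   => 'GATHER_PLAN_STATISTICS MONITOR',
              name        => 'force_monitor_patch',
              description => 'force SQL Monitoring without changing the SQL text');
  DBMS_OUTPUT.PUT_LINE('created SQL patch: '||l_name);
END;
/
-- once the analysis is done, drop the patch to remove the overhead
EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'force_monitor_patch')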

For the actual analysis and diagnostics of your SQL (after you have executed it with the SQL Patch in place), use the free tool SQLd360.

And for more details about sqlpch.sql and other uses of this script please refer to this entry on my blog.

Written by Carlos Sierra

February 29, 2016 at 10:16 am

Forcing a “Nested Loop only” Execution Plan


Sometimes you do what you have to do. So here I confess doing something I usually avoid: forcing an Execution Plan (which is not the same as using a more conventional method for Plan stability).

This is a case on an 11.2.0.3.0 base release where the application vendor sets the optimizer to 9i and tweaks other CBO parameters in questionable ways; some queries then produce suboptimal plans (as expected), and you are called to help without changing the obvious.

There is a family of queries from an ad-hoc query generator that permits users to issue queries without a set of selective predicates. These queries join several large tables and their performance is poor (as expected as well!). On top of that, all these queries include the /*+ FIRST_ROWS */ CBO Hint and the questionable DISTINCT keyword. Note: it is quite common for developers to throw in a DISTINCT keyword “to avoid duplicates” where the mere existence of duplicates would be an indication of an application bug; so “why fix it if I can hide it, right?”.

There is one caveat though: these queries include a generic predicate “rownum <= :b1”, and the value passed defaults to 5000, so the users’ rationale is “if I only want the first X rows my query should return fast”. This highlights yet another questionable practice, since it is hard to imagine a user scrolling through 5000 rows and making any sense of such a large set, especially when the full “filtered” set would be several million rows long. So the original problem is questionable in several ways. Nevertheless, sometimes we are called to help besides providing advice. And no, we are not allowed to slap hands 😉

The good news is that we can use this extra predicate on rownum and make these queries return the first X rows really fast; and I mean less than 5 seconds instead of an hour or more! And if users want not 5000 but 500 or even 50 rows, then we can be in the sub-second range!

You may be thinking FIRST_ROWS optimization, and that was my first try. Unfortunately, on 11.2.0.3.0, even after reversing all the suboptimal CBO parameters at the session level, I would consistently get an Execution Plan with a few Hash Joins and a large Cost; and if I were to force a Nested Loops Plan, its cost would be several orders of magnitude larger, so the CBO would not pick it! Nevertheless, such a “Nested Loops only” Execution Plan would fulfill the user’s expectations, regardless of the validity of the initial request. And yes, the CBO statistics are OK, not perfect but simply OK. One more piece of info: this is not Exadata! (if it were Exadata, most probably these same Execution Plans with full table scans and Hash Joins would simply fly!).

So, my issue became: how do I force an Execution Plan that only contains Nested Loops? If I could do that, then the COUNT STOPKEY operation could help me halt my SQL execution once I fetched the first X rows (a Hash Join does not allow me to do that). Remember: these tables have literally millions of rows. I could pepper these queries with a ton of CBO Hints and I would get my desired “Nested Loops only” Execution Plan… but that would be a lot of work and tricky at best.

SQL Patch to the rescue

I could have used a SQL Profile, but I think this dirty trick of suppressing Hash Joins and Sort Merge Joins is better served with a SQL Patch. I also thought of Siebel: they tweak CBO parameters as well, and they suppress Hash Joins, but they change System- and Session-level parameters… Since I wanted my change to be very localized, a SQL Patch could provide just what I needed.

Under the Downloads section on the margin of this page, there is a “cscripts” link that includes the sqlpch.sql script. I used this script and passed as the second parameter the following string (the 1st parameter is the SQL_ID). With a SQL Patch generated this way, I could systematically produce a “Nested Loops only” Execution Plan for these few queries. I did not have to change the original SQL, nor change the CBO environment at the System or Session level, nor restrict the query generator, and I did not have to “educate” the users to avoid such unbounded queries.

OPT_PARAM("_optimizer_sortmerge_join_enabled" "FALSE") OPT_PARAM("_hash_join_enabled" "FALSE")
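Once the patch is in place, a quick way to confirm both the plan shape and that the patch is being picked up is DBMS_XPLAN; the SQL_ID and child number below are placeholders:

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&&sql_id.', 0, 'TYPICAL'));
-- expect only NESTED LOOPS joins plus a COUNT STOPKEY step for the rownum predicate,
-- and a Note at the bottom stating that a SQL patch was used for this statement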

Conclusion

I have to concede doing something questionable, in this case using a SQL Patch to force a desired Execution Plan instead of fixing the obvious, simply because that was the shortest path to alleviate the user’s pain.

I consider this technique above a temporary work-around and not a solution to the actual issue. In this case the right way to handle this issue would be:

  1. Have the application vendor certify their application to the latest release of the database and reset all CBO related parameters, plus
  2. Have the application vendor remove CBO Hints and DISTINCT keyword from queries, plus
  3. Configure the ad-hoc query generator to restrict users from executing queries without selective predicates, then
  4. Tune those outlier queries that may still need some work to perform as per business requirements, and possibly
  5. Educate the users to provide as many selective predicates as possible

Anyway, the potential of using a SQL Patch to tweak an Execution Plan in mysterious ways is quite powerful, and something we may want to keep in the back of our minds for a rainy day…

Written by Carlos Sierra

August 8, 2015 at 8:30 am