Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Autonomous Online Index Rebuild for Oracle Multitenant

with 7 comments

First things first: Do not rebuild Oracle indexes! … Unless you have to.

If you are even considering rebuilding indexes in an autonomous manner, please stop now, and first spend some time reading some of the many things Richard Foote has to say on his highly recommended blog.

A little side story: Many, many years ago, as I was supporting EBS at Oracle, one day I got a call from a guy with an Aussie accent (even today I think it was Richard). He called Support simply to set the record straight (for me): I should not be promoting dropping, recreating or rebuilding indexes with a couple of nice notes/scripts I wrote for the delight of my then EBS customers. This guy educated me back then: such practices were no longer needed, as Oracle automatically reused index blocks after massive deletions (I was being a bad hombre!)… And I agreed with him. This was back before Y2K, and I still agree with him today. But (there is always a but), there are some corner cases where we still have to rebuild indexes ourselves. And that is the reason for my story today.

Why would you even want to automate index rebuild?

Since my background is Oracle performance, and “if all you have is a hammer, everything looks like a nail”, I am a bit biased in that direction, so I would have to answer: because there are some valid performance reasons. There is also a lesser reason about disk space savings, which is not so relevant in these days of cheap storage… Nevertheless, I will briefly justify these two (performance and space) below, under my specific corner case. Maybe you have a similar case of your own to consider…


We have this key custom OLTP application installed on 30+ servers and 700+ PDBs. It executes high-rate simple queries that, due to current application constraints, often produce “optimal” execution plans with persistent fast full index scan operations, or some very long index range scans. The data is a time-based progressive window which contains anything between the last two hours and up to the last few days of rapidly changing values (intense on inserts and deletes). All indexes are appended with a monotonically increasing numeric column seeded from a sequence.

If you are thinking “why do fast full index scans on OLTP queries in the first place?” I would have to ask you to bear with me on that question for a few months (we do need to make some improvements on the application data model, but that is a story for a day in the far future yet to come). For now, we are stuck with large indexes growing on their far right (like many citizens today), past every value of their leading column(s), and we have to do something about them! One more thing worth mentioning: the number of reads exceeds by far the number of writes or deletes.

Take for example the output of script indexes_2b_shrunk.sql (below). We can see that the 4th index down, WORKXXXXXXXTANCES_PK, has a current size of 114.9 MB (you may need to scroll to the right using a hidden bar at the bottom of the report), but if it were rebuilt, its size would go down to 43 MB, reducing its space by 62.6%. This script basically lists indexes that consume more than 10 MB of disk space, and with potential savings of at least 25% if they were to be rebuilt.
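The savings figure is simple arithmetic over the current and estimated sizes; here is a quick sketch of that math (Python, purely illustrative; the sizes are the ones shown in the report below):

```python
def rebuild_savings_percent(current_mb: float, estimated_mb: float) -> float:
    """Percent of space recovered if an index shrinks from current_mb
    to estimated_mb when rebuilt."""
    return round(100.0 * (current_mb - estimated_mb) / current_mb, 1)

# WORKXXXXXXXTANCES_PK from the report below: 114.9 MB now, 43.0 MB if rebuilt
print(rebuild_savings_percent(114.9, 43.0))  # prints 62.6
```

The same formula reproduces every SAVING % column value in the report, which is all the script does with the estimate returned by the API.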

The output of the similar script indexes_2b_shrunk_fast_full_scan.sql (which I executed but omitted from this post) includes the same index because, in addition to the considerations of space and savings, such index is referenced by at least one execution plan as a full scan. Then, if I can reduce its size by 62.6%, at least some SQL performing full scans on it would benefit, performance-wise. This second script is identical to the first one with one exception: it only includes indexes for which at least one recent execution plan references them with full scans.

These two scripts referenced above take no action on the database – they simply list indexes that could be rebuilt as per their current size and estimated space savings (and whether they are referenced by full scans, in the case of the latter). By the way, the estimated size if rebuilt is pretty accurate. Just saying.

please wait...
TABLE NAME               OWNER.INDEX NAME                          SAVING %   CURRENT SIZE  ESTIMATED SIZE  COMMAND
------------------------ --------------------------------------- ---------- ------------- --------------  --------------------------------------------------------------------
XXXXINSTANCES            XXXXXXXXXXXX.XXXXINSTANCES_PK              50.6 %       647.7 MB       320.0 MB  ALTER INDEX xxxxxxxxxxxx.xxxxinstances_pk REBUILD ONLINE;
XXXXTRANSACTIONKEYS      XXXXXXXXXXXX.XXXXTRANSACTIONKEYS_PK        95.9 %    18,391.6 MB       752.0 MB  ALTER INDEX xxxxxxxxxxxx.xxxxtransactionkeys_pk REBUILD ONLINE;
HISTOXXXXXXXXIGNMENT     XXXXXXXXXXXX.HISTOXXXXXXXXIGNMENT_PK       55.1 %       783.8 MB       352.0 MB  ALTER INDEX xxxxxxxxxxxx.histxxxxxxxignment_pk REBUILD ONLINE;
WORKXXXXXXXTANCES        XXXXXXXXXXXX.WORKXXXXXXXTANCES_PK          62.6 %       114.9 MB        43.0 MB  ALTER INDEX xxxxxxxxxxxx.workxxxxxxxtances_pk REBUILD ONLINE;
XXXXTRANSACTIONKEYS      XXXXXXXXXXXX.XXXXTRANSACTIONKEYS_AK        96.9 %    23,896.1 MB       752.0 MB  ALTER INDEX xxxxxxxxxxxx.xxxxtransactionkeys_ak REBUILD ONLINE;
XXXXGCWORKSPACE          XXXXXXXXXXXX.XXXXGCWORKSPACE_PK            31.3 %       745.1 MB       512.0 MB  ALTER INDEX xxxxxxxxxxxx.xxxxgcworkspace_pk REBUILD ONLINE;
WORKXXXXXXXTANCESINDEX   XXXXXXXXXXXX.WORKXXXXXXXTANCESINDEX_PK     51.8 %        70.5 MB        34.0 MB  ALTER INDEX xxxxxxxxxxxx.workxxxxxxxtancesindex_pk REBUILD ONLINE;


Take the same output above, and look now at the 5th index down, XXXXTRANSACTIONKEYS_AK, currently consuming 23,896.1 MB of space. This is on an X5-2 with NVMe SSD, where space is kind of at a premium. If we can reduce the size of this index by 96.9% to only 752 MB, even if there are no execution plans referencing this index with a full scan, we would simply recover close to 23 GB of space! This is good news for operations DBAs who get paged when a tablespace is running short (kind of annoying, especially at night!).

Of course there are two follow-up questions here, which I may just mention and leave for future blog posts: 1) do we really need to append every index with a monotonically increasing numeric column based on a sequence? (most probably a “no”); and 2) considering that most leading columns have moderate cardinality and very wide values, would index compression help to reduce index size? (spoiler alert: definitely a “yes”).

Autonomous Online Index Rebuild for Oracle Multitenant

Script iod_indexes_rebuild_online.sql is what I use today to automate index rebuilds, through a weekly Oracle Enterprise Manager (OEM) job executed in 30+ 12c Oracle multitenant CDBs (with 700+ PDBs). This script rebuilds indexes that are referenced by full scans, are larger than 10 MB, and have an estimated space reduction of at least 25%. It excludes one particular table that for a very sticky reason needs to be avoided (an explicit lock that produces kind of an ugly concurrency contention)… The script also includes the indexes of a known big table, even if its indexes are not referenced by full scans. Both the one exclusion and the one inclusion are clearly marked in the script. You may need your own exclusions and inclusions (like I do at my Thanksgiving dinner), in case you decide to write your own script based on this one.

This rebuild script is written in such a way that it can be executed at the root container level, either through OEM or SQL*Plus, or at the PDB level, also from OEM or SQL*Plus.

Output of the autonomous online index rebuild script is recorded in three places: a spool file directed to the /tmp directory, a trace file, and the alert log. The content of each varies. The shortest is in the alert log, so we can see that we ran something in the event we need to audit that timeframe. The longest is the trace, which records accurate times and messages. The edited output below is from a sample trace file. In it we can see how an index with a size of 21,973.1 MB was reduced to 809.8 MB (a 96% reduction). Such index took 36 seconds to be rebuilt online. The estimated time is rubbish, but the estimated size is very accurate in my opinion.

+++ DBE IOD +++ iod_indexes_rebuild_online.sql begin
+++ DBE IOD +++ host:
+++ DBE IOD +++ trace: /u01/app/oracle/diag/rdbms/xxxx1xx2_c/xxxx1xx2/trace/xxxx1xx2_ora_40444_iod_indexes_rebuild_online.trc
+++ DBE IOD +++ report: /tmp/iod_indexes_rebuild_online_2017-10-06T20-11-36.txt
*** MODULE NAME:(IOD_INDEXES_REBUILD_ONLINE (15) XXXXX_XX) 2017-10-06 20:32:32.618
*** CONTAINER ID:(15) 2017-10-06 20:32:32.618
CON_NAME:XXXXX_XX CON_ID:15 DB_NAME:xxxx1xx2 DB_UNIQUE_NAME:XXXX1XX2_C SERVER_HOST:iod-db-xxxx-99999 TIME:2017-10-06T20:32:32
*** 2017-10-06 20:32:45.307
*** ACTION NAME:(XXXXTRANSACTIONKEYS_PK {COST}) 2017-10-06 20:34:46.527
------------------------------ ------------------------------------------------------- -------------------- --------------------  ---------- ----------- --------------------  ---------- -----------
XXXXTRANSACTIONKEYS            XXXXXWFR2XX2.XXXXTRANSACTIONKEYS_PK                              21,973.1 MB             880.0 MB      96.0 %       11.0s
*** 2017-10-06 20:37:22.683
*** ACTION NAME:(XXXXTRANSACTIONKEYS_PK {DONE}) 2017-10-06 20:37:22.683

XXXXTRANSACTIONKEYS            XXXXXWFR2XX2.XXXXTRANSACTIONKEYS_PK                              21,973.1 MB             880.0 MB      96.0 %       11.0s             809.8 MB      96.3 %       36.0s
*** ACTION NAME:(HISTORICALASSIGNMENT_PK {COST}) 2017-10-06 20:37:22.688
TOTAL                          PDB NAME                                                 SIZE BEFORE REBUILD       ESTIMATED SIZE  EST SAVING EST SECONDS   SIZE AFTER REBUILD  ACT SAVING ACT SECONDS
------------------------------ ------------------------------------------------------- -------------------- --------------------  ---------- ----------- --------------------  ---------- -----------
TOTAL                          XXXXX_XX                                                         53,101.1 MB           2,585.0 MB      95.1 %       28.0s           2,505.2 MB      95.3 %      101.0s
+++ DBE IOD +++ DB_INDEXES:63 DB_BEFORE:130,111.0MB(127.1GB) DB_AFTER:12,608.1MB(12.3GB) DB_SAVING:90.3%
+++ DBE IOD +++ report: /tmp/iod_indexes_rebuild_online_2017-10-06T20-11-36.txt
+++ DBE IOD +++ trace: /u01/app/oracle/diag/rdbms/xxxx1xx2_c/xxxx1xx2/trace/xxxx1xx2_ora_40444_iod_indexes_rebuild_online.trc
+++ DBE IOD +++ host:
+++ DBE IOD +++ iod_indexes_rebuild_online.sql end
+++ DBE IOD +++ duration: 8301 seconds

Companion scripts

Besides the two “change-nothing” indexes_2b_shrunk*.sql scripts, and the actual rebuild script, there are two more that are part of the set and can be handy.

The first one is iod_indexes_rebuild_online_mon.sql, and it helps to monitor progress when executed in a separate SQL*Plus session of its own. This monitoring script executes a query; if you simply keep entering the “/” command, it keeps telling you the PDB and index that is getting rebuilt (or just ended, or is next). This is kind of “are we there yet?”, but for some reason sometimes I just need to know which PDB and particular index I am working on, just in case I see something like a suspicious spike in my OEM window. I do recommend this at least during the first couple of cycles.

The second one is iod_indexes_rebuild_online_kill_job.sql, and before you freak out and close this page, let me state that the “kill” script does nothing but interrupt the execution of the rebuild script in a nice manner, meaning between the rebuild of two indexes, or in other words: after it completes the index getting rebuilt. This is kind of important so we do not end up with half-baked indexes which may need to get recreated. I am planning on never having to use the “kill” script, but if I do, I already have an OEM job ready with it: this is like a “poison pill”. A word of unsolicited advice here: always prepare for the worst, and hope for the best!


Before you rush to implement these scripts, please review them extensively, justify their use, and fully digest them. Make them yours, customize them as needed, then test, test and test again. Only then consider implementing an “autonomous index rebuild job” on a production environment. Best wishes!


Written by Carlos Sierra

October 7, 2017 at 4:42 pm

Poor man’s script to summarize reasons WHY cursors are not shared

with 5 comments

Having a large number of child cursors can affect parsing performance, as hinted by Abel Macias on his blog post about Diagnosis of a High Version Count (HVC). In his post, Abel also refers to a note on MOS which includes a script that dives into the reasons WHY our cursors are not getting shared. For deep dives in this area, I strongly suggest reading his post and using the referenced script provided at MOS.

Besides longer parse times and potential library cache contention, manifested by some waits (such as on mutexes), there is another side effect that may bite us: the CBO may produce a different plan when a SQL statement is hard-parsed while creating a new child cursor. This latter side effect can be critical for transactional applications with SLAs depending on very short latencies for some queries.

This post is about a poor man’s script that, with no installation whatsoever, lists an aggregated summary of the reasons why our cursors are not shared, including child cursor counts and distinct SQL_ID counts for each reason (see sample output below). I had to write such a script since in our environments we cannot simply run diagnostics scripts that create objects in the database, such as the one provided by MOS.
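The script (shown further below) leans on one simple idea: unpivot the Y/N flag columns of V$SQL_SHARED_CURSOR into (reason, value) pairs, keep the ‘Y’ ones, and aggregate per reason. Here is a sketch of that same transformation in Python, over hypothetical rows (the reason names are real columns of the view; the data is made up):

```python
from collections import defaultdict

# hypothetical V$SQL_SHARED_CURSOR rows: one per child cursor,
# with a Y/N flag column per reason for not sharing
rows = [
    {"sql_id": "a1b2c3", "BIND_MISMATCH": "Y", "OPTIMIZER_MISMATCH": "N"},
    {"sql_id": "a1b2c3", "BIND_MISMATCH": "Y", "OPTIMIZER_MISMATCH": "Y"},
    {"sql_id": "d4e5f6", "BIND_MISMATCH": "N", "OPTIMIZER_MISMATCH": "Y"},
]

cursors = defaultdict(int)    # child cursor count per reason
sql_ids = defaultdict(set)    # distinct SQL_IDs per reason

for row in rows:
    # "unpivot": turn each flag column into a (reason, value) pair
    for reason, value in row.items():
        if reason != "sql_id" and value == "Y":
            cursors[reason] += 1
            sql_ids[reason].add(row["sql_id"])

# same three columns as the report: cursors, sql_ids, reason_not_shared
for reason in sorted(cursors, key=lambda r: (-cursors[r], -len(sql_ids[r]))):
    print(f"{cursors[reason]:>10} {len(sql_ids[reason]):>10} {reason}")
```

The SQL version gets all ~60 reason columns for free by generating the UNPIVOT list from DBA_TAB_COLUMNS, which is exactly what the script below automates.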

   CURSORS    SQL_IDS REASON_NOT_SHARED
---------- ---------- -----------------------------
    226916       7826 ROLL_INVALID_MISMATCH
     29387        105 BIND_EQUIV_FAILURE
     21794       4027 HASH_MATCH_FAILED
     11588       2134 OPTIMIZER_MISMATCH
     11027        413 BIND_LENGTH_UPGRADEABLE
     11008        384 BIND_MISMATCH
     10125       2697 USE_FEEDBACK_STATS
      4540        109 OPTIMIZER_MODE_MISMATCH
      1652         72 PURGED_CURSOR
      1245         81 BIND_UACS_DIFF
      1062        316 LANGUAGE_MISMATCH
       771        103 LOAD_OPTIMIZER_STATS
       500         52 STATS_ROW_MISMATCH
       238         86 MV_QUERY_GEN_MISMATCH
        94          3 MULTI_PX_MISMATCH
        28          4 AUTH_CHECK_MISMATCH
        23          1 INSUFF_PRIVS

Once I get to see some reasons for not sharing, some responsible for a large number of child cursors (and distinct SQL_IDs), then I can search on MOS as Abel suggested. Ideally, if you are interested in plan stability, you may want to reduce the number of times the CBO is tasked with creating a new child cursor (and potentially a new execution plan).

In the sample output above, top of our list is ROLL_INVALID_MISMATCH, causing 226,916 child cursors in as many as 7,826 SQL statements. This particular reason for not sharing cursors is due to a persistent gathering of schema object statistics with the explicit request to invalidate cursors. Since we want to promote plan stability, we would need to suspend such aggressive gathering of CBO statistics and validate that reason ROLL_INVALID_MISMATCH is reduced.

Anyway, the free script used is below. Enjoy it!

*** edited *** a new version of the script is now available (below). Thanks to stewashton for his input.

-- sql_shared_cursor.sql
SPO all_reasons.sql
SELECT CASE WHEN ROWNUM = 1 THEN '( ' ELSE ', ' END||column_name
  FROM dba_tab_columns
 WHERE table_name = 'V_$SQL_SHARED_CURSOR'
   AND owner = 'SYS'
   AND data_type = 'VARCHAR2'
   AND data_length = 1
/
SPO OFF;
GET all_reasons.sql
I )
I )
I WHERE value = 'Y'
I GROUP BY reason_not_shared
I ORDER BY cursors DESC, sql_ids DESC, reason_not_shared
0 ( value FOR reason_not_shared IN 
0 FROM v$sql_shared_cursor UNPIVOT
0 SELECT COUNT(*) cursors, COUNT(DISTINCT sql_id) sql_ids, reason_not_shared
L
PRO please wait
/
!rm all_reasons.sql

Written by Carlos Sierra

September 1, 2017 at 1:01 pm

Script to identify index rebuild candidates on 12c

with 9 comments

Some time back I blogged about an easy way to estimate the size of an index. It turns out there is an API that also uses the plan_table under the hood in order to estimate what the size of an index would be if it were rebuilt. Such API is DBMS_SPACE.CREATE_INDEX_COST.

The script below uses DBMS_SPACE.CREATE_INDEX_COST, and when executed on 12c connected into a PDB, it outputs a list of indexes with enlarged space which, if rebuilt, would shrink by at least 25% of their current size. This script depends on the accuracy of the CBO statistics.

Once you are comfortable with the output, you may even consider automating its execution using OEM. I will post in a few days a way to do that using DBMS_SQL. In the meantime, here I share the stand-alone version.

-- File name: indexes_2b_shrunk.sql
-- Purpose: List of candidate indexes to be shrunk (rebuild online)
-- Author: Carlos Sierra
-- Version: 2017/07/12
-- Usage: Execute on PDB
-- Example: @indexes_2b_shrunk.sql
-- Notes: Execute connected into a PDB.
-- Consider then:

-- select only those indexes with an estimated space saving percent greater than 25%
VAR savings_percent NUMBER;
EXEC :savings_percent := 25;
-- select only indexes with current size (as per CBO stats) greater than 1 MB
VAR minimum_size_mb NUMBER;
EXEC :minimum_size_mb := 1;


COL report_date NEW_V report_date;
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD"T"HH24-MI-SS') report_date FROM DUAL;
SPO /tmp/indexes_2b_shrunk_&&report_date..txt;

SET SERVEROUT ON SIZE UNLIMITED;
DECLARE
l_used_bytes NUMBER;
l_alloc_bytes NUMBER;
l_percent NUMBER;
BEGIN
DBMS_OUTPUT.PUT_LINE(
RPAD('OWNER.INDEX_NAME', 35)||' '||
LPAD('SAVING %', 10)||' '||
LPAD('CURRENT SIZE', 20)||' '||
LPAD('ESTIMATED SIZE', 20));
DBMS_OUTPUT.PUT_LINE(
RPAD('-', 35, '-')||' '||
LPAD('-', 10, '-')||' '||
LPAD('-', 20, '-')||' '||
LPAD('-', 20, '-'));
FOR i IN (SELECT x.owner, x.index_name, SUM(s.leaf_blocks) * TO_NUMBER(p.value) index_size,
REPLACE(DBMS_METADATA.GET_DDL('INDEX',x.index_name,x.owner),CHR(10),CHR(32)) ddl
FROM dba_ind_statistics s, dba_indexes x, dba_users u, v$parameter p
WHERE u.oracle_maintained = 'N'
AND x.owner = u.username
AND x.tablespace_name NOT IN ('SYSTEM','SYSAUX')
AND x.index_type LIKE '%NORMAL%'
AND x.table_type = 'TABLE'
AND x.status = 'VALID'
AND x.temporary = 'N'
AND x.dropped = 'NO'
AND x.visibility = 'VISIBLE'
AND x.segment_created = 'YES'
AND x.orphaned_entries = 'NO'
AND = 'db_block_size'
AND s.owner = x.owner
AND s.index_name = x.index_name
GROUP BY
x.owner, x.index_name, p.value
HAVING
SUM(s.leaf_blocks) * TO_NUMBER(p.value) > :minimum_size_mb * POWER(2,20)
ORDER BY
index_size DESC)
LOOP
DBMS_SPACE.CREATE_INDEX_COST(i.ddl, l_used_bytes, l_alloc_bytes);
IF i.index_size * (100 - :savings_percent) / 100 > l_alloc_bytes THEN
l_percent := 100 * (i.index_size - l_alloc_bytes) / i.index_size;
DBMS_OUTPUT.PUT_LINE(
RPAD(i.owner||'.'||i.index_name, 35)||' '||
LPAD(TO_CHAR(ROUND(l_percent, 1), '990.0')||' % ', 10)||' '||
LPAD(TO_CHAR(ROUND(i.index_size / POWER(2,20), 1), '999,999,990.0')||' MB', 20)||' '||
LPAD(TO_CHAR(ROUND(l_alloc_bytes / POWER(2,20), 1), '999,999,990.0')||' MB', 20));
END IF;
END LOOP;
END;
/
SPO OFF;


And if you want to try the DBMS_SPACE.CREATE_INDEX_COST API by itself, you can also grab the estimated size of the index after calling this API, using the query below. But the API already returns that value!

SELECT EXTRACTVALUE(VALUE(d), '/info') estimated_index_size_bytes
  FROM TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE((SELECT other_xml
          FROM plan_table
         WHERE other_xml LIKE '%index_size%')), '/*/info'))) d
 WHERE EXTRACTVALUE(VALUE(d), '/info/@type') = 'index_size';

Written by Carlos Sierra

July 12, 2017 at 11:03 am

Posted in Index Rebuild

One Maintenance Window for all PDBs… Really?

leave a comment »

Using the default Maintenance Window for automatic tasks such as statistics gathering and space advisor is just fine… except when the number of PDBs is large (in our case over 100 PDBs in one DB). Starting resource-intensive tasks regulated by the Maintenance Window causes CPU to spike when all PDBs start their weekday window at 10 PM…

Why should I care that my CPU spikes at 100% when all 100+ PDBs start their Maintenance Window at 10PM? At that time I should be sleeping, right? Well, if your shop is a quasi 24×7, then such spikes can be noticeable to your users, and possibly affect SLAs.

The solution is to stagger the start time of the Maintenance Window across your PDBs. For example, start one PDB at 10 PM and another at 10:30, and so on. If you want to start them all within a 4-hour interval, and you have 100 PDBs, that means you can start 25 per hour. You may also want to reduce the window duration for each PDB from the default of 4 hours to, let’s say, 1 hour. So you could start all within 4 hours, and have the last one close its window one hour later; thus the total time for all Maintenance Windows would be 5 hours, and you would have flattened the CPU spike.
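The stagger arithmetic above can be sketched as follows (Python, illustrative only; start times are expressed as minutes after midnight, and the even spread mirrors the rank-based offset the script later in this post computes per PDB):

```python
def staggered_starts(pdb_count: int, first_start_min: int, spread_hours: int) -> list:
    """Minute-of-day at which each PDB opens its maintenance window,
    spreading all starts evenly across the given interval.
    Assumes more than one PDB (mirrors the slot.count > 1 guard)."""
    step = spread_hours * 60 / (pdb_count - 1)
    return [round(first_start_min + i * step) for i in range(pdb_count)]

# 100 PDBs, first window at 22:00 (minute 1320), spread over 4 hours
starts = staggered_starts(100, 22 * 60, 4)
print(len([m for m in starts if m < 23 * 60]))  # prints 25: ~25 PDBs start per hour
```

With a 1-hour duration per PDB, the last window (opening 4 hours after the first) closes one hour later, giving the 5-hour total mentioned above.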

In our case, since we are a 24×7 shop, and since in some cases one of the tasks executed during the Maintenance Window invalidates a cursor that renders a new execution plan (at times waking the on-call DBA), we decided to start our Maintenance Windows at 8 AM PST (15:00 UTC), and stagger all PDBs to start their window within 4 hours (between 15:00 and 19:00 UTC), each with a duration of 1 hour maximum. On weekends we can start at the same time, but allow all PDBs to start their window within 5 hours and with a maximum duration of 2 hours. So on weekends we can open the first Maintenance Window at 8 AM PST and the last one at 1 PM, while allowing each PDB to take up to 2 hours. This is not the ideal solution, but it is much better than opening the gate for 100+ PDBs all at the same time.

The fun part is how to apply such a change to all databases in all regions, where basically the databases on each server would have a Maintenance Window that starts at a different time for each of the PDBs. We ended up with the script below, which takes some variables to specify the start time, the lapse of time within which PDBs can start their Maintenance Window, and the duration of it per PDB. This script is written in such a way that it can be executed using OEM targeting all databases. Feel free to use it in case you are dealing with PDBs and see CPU spikes caused by the Maintenance Window, and such spikes affect user experience.

-- File name: pdb_windows.sql
-- Purpose: Sets staggered maintenance windows for all PDBs in order to avoid CPU
-- spikes caused by all PDBs starting their maintenance window at the same
-- time
-- Author: Carlos Sierra
-- Version: 2017/07/12
-- Usage: Execute as SYS on CDB
-- Example: @pdb_windows.sql
-- Notes:
-- Stagger PDBs maintenance windows to start at different times in order to
-- reduce CPU spikes, for example: all PDBs start their maintenance window
-- between 8AM PST and 10AM PST (2 hours), and each window lasts 4 hours,
-- so all maintenance tasks are executed between 8AM and 2PM PST.
-- Notice that in such example, 1st window opens at 8AM and last one opens
-- at 10AM. First closes at 12NOON and last at 2PM
-- Be aware that default cursor invalidation may happen a few hours (3 or 4)
-- after a maintenance window closes. Then, since most PDBs process their
-- tasks fast, actual cursor invalidation may happen between 11AM and 1PM.

COL report_date NEW_V report_date;
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD"T"HH24-MI-SS') report_date FROM DUAL;
SPO /tmp/pdb_window_&&report_date..txt;

-- hour of the day (military format) when first maintenance window for a PDB may open during week days
-- i.e. if dbtimezone is UCT and we want to open 1st at 8AM PST we set to 15
VAR weekday_start_hh24 NUMBER;
EXEC :weekday_start_hh24 := 15;
-- for how many hours we want to open maintenance windows for PDBs during week days
-- i.e. if we want to open windows during a 2 hours interval we set to 2
VAR weekday_hours NUMBER;
EXEC :weekday_hours := 2;
-- how long we want the maintenance window to last for each PDB during week days
-- i.e. if we want each PDB to have a window of 4 hours then we set to 4
VAR weekday_duration NUMBER;
EXEC :weekday_duration := 4;

-- hour of the day (military format) when first maintenance window for a PDB may open during weekends
-- i.e. if dbtimezone is UCT and we want to open 1st at 8AM PST we set to 15
VAR weekend_start_hh24 NUMBER;
EXEC :weekend_start_hh24 := 15;
-- for how many hours we want to open maintenance windows for PDBs during weekends
-- i.e. if we want to open windows for 2 hours interval we set to 2
VAR weekend_hours NUMBER;
EXEC :weekend_hours := 2;
-- how long we want the maintenance window to last for each PDB during weekends
-- i.e. if we want each PDB to have a window of 8 hours then we set to 8
VAR weekend_duration NUMBER;
EXEC :weekend_duration := 8;

-- PL/SQL block to be executed on each PDB
VAR v_cursor CLOB;
BEGIN
:v_cursor := q'[
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW', 'repeat_interval', 'freq=daily; byday=MON; byhour='||:b_weekdays_hh24||'; byminute='||:b_weekdays_mi||'; bysecond='||:b_weekdays_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekday_duration, 2, '0')||':00:00'));
DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW', 'repeat_interval', 'freq=daily; byday=TUE; byhour='||:b_weekdays_hh24||'; byminute='||:b_weekdays_mi||'; bysecond='||:b_weekdays_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekday_duration, 2, '0')||':00:00'));
DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW', 'repeat_interval', 'freq=daily; byday=WED; byhour='||:b_weekdays_hh24||'; byminute='||:b_weekdays_mi||'; bysecond='||:b_weekdays_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekday_duration, 2, '0')||':00:00'));
DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'repeat_interval', 'freq=daily; byday=THU; byhour='||:b_weekdays_hh24||'; byminute='||:b_weekdays_mi||'; bysecond='||:b_weekdays_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekday_duration, 2, '0')||':00:00'));
DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW', 'repeat_interval', 'freq=daily; byday=FRI; byhour='||:b_weekdays_hh24||'; byminute='||:b_weekdays_mi||'; bysecond='||:b_weekdays_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekday_duration, 2, '0')||':00:00'));
DBMS_SCHEDULER.SET_ATTRIBUTE('SATURDAY_WINDOW', 'repeat_interval', 'freq=daily; byday=SAT; byhour='||:b_weekends_hh24||'; byminute='||:b_weekends_mi||'; bysecond='||:b_weekends_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('SATURDAY_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekend_duration, 2, '0')||':00:00'));
DBMS_SCHEDULER.SET_ATTRIBUTE('SUNDAY_WINDOW', 'repeat_interval', 'freq=daily; byday=SUN; byhour='||:b_weekends_hh24||'; byminute='||:b_weekends_mi||'; bysecond='||:b_weekends_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('SUNDAY_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekend_duration, 2, '0')||':00:00'));
-- Weeknight window - for compatibility only - KEEP IT DISABLED
DBMS_SCHEDULER.SET_ATTRIBUTE('WEEKNIGHT_WINDOW', 'repeat_interval', 'freq=daily; byday=MON,TUE,WED,THU,FRI; byhour='||:b_weekdays_hh24||'; byminute='||:b_weekdays_mi||'; bysecond='||:b_weekdays_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('WEEKNIGHT_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekday_duration, 2, '0')||':00:00'));
--DBMS_SCHEDULER.ENABLE('WEEKNIGHT_WINDOW'); KEEP IT DISABLED - Weeknight window - for compatibility only
-- Weekend window - for compatibility only - KEEP IT DISABLED
DBMS_SCHEDULER.SET_ATTRIBUTE('WEEKEND_WINDOW', 'repeat_interval', 'freq=daily; byday=SAT,SUN; byhour='||:b_weekends_hh24||'; byminute='||:b_weekends_mi||'; bysecond='||:b_weekends_ss);
DBMS_SCHEDULER.SET_ATTRIBUTE('WEEKEND_WINDOW', 'duration', TO_DSINTERVAL('+000 '||LPAD(:b_weekend_duration, 2, '0')||':00:00'));
--DBMS_SCHEDULER.ENABLE('WEEKEND_WINDOW'); KEEP IT DISABLED - Weekend window - for compatibility only
END;
]';
END;
/
PRINT v_cursor;

DECLARE
l_cursor_id INTEGER;
l_rows_processed INTEGER;
BEGIN
l_cursor_id := DBMS_SQL.OPEN_CURSOR;
FOR i IN (WITH
pdbs AS ( -- list of PDBs ordered by CON_ID with enumerator as rank_num
SELECT con_id, name, RANK () OVER (ORDER BY con_id) rank_num FROM v$containers WHERE con_id <> 2
),
slot AS ( -- PDBs count
SELECT MAX(rank_num) count FROM pdbs
),
start_time AS (
SELECT con_id, name,
(TRUNC(SYSDATE) + (:weekday_start_hh24 / 24) + ((pdbs.rank_num - 1) * :weekday_hours / (slot.count - 1) / 24)) weekdays,
(TRUNC(SYSDATE) + (:weekend_start_hh24 / 24) + ((pdbs.rank_num - 1) * :weekend_hours / (slot.count - 1) / 24)) weekends
FROM pdbs, slot
WHERE slot.count > 1
)
SELECT con_id, name,
TO_CHAR(weekdays, 'HH24') weekdays_hh24, TO_CHAR(weekdays, 'MI') weekdays_mi, TO_CHAR(weekdays, 'SS') weekdays_ss,
TO_CHAR(weekends, 'HH24') weekends_hh24, TO_CHAR(weekends, 'MI') weekends_mi, TO_CHAR(weekends, 'SS') weekends_ss
FROM start_time)
LOOP
DBMS_OUTPUT.PUT_LINE('PDB:'||||' weekdays:'||i.weekdays_hh24||':'||i.weekdays_mi||':'||i.weekdays_ss||' duration:'||:weekday_duration||'h weekends:'||i.weekends_hh24||':'||i.weekends_mi||':'||i.weekends_ss||' duration:'||:weekend_duration||'h');
DBMS_SQL.PARSE(c => l_cursor_id, statement => :v_cursor, language_flag => DBMS_SQL.NATIVE, container =>;
DBMS_SQL.BIND_VARIABLE(c => l_cursor_id, name => 'b_weekdays_hh24', value => i.weekdays_hh24);
DBMS_SQL.BIND_VARIABLE(c => l_cursor_id, name => 'b_weekdays_mi', value => i.weekdays_mi);
DBMS_SQL.BIND_VARIABLE(c => l_cursor_id, name => 'b_weekdays_ss', value => i.weekdays_ss);
DBMS_SQL.BIND_VARIABLE(c => l_cursor_id, name => 'b_weekday_duration', value => :weekday_duration);
DBMS_SQL.BIND_VARIABLE(c => l_cursor_id, name => 'b_weekends_hh24', value => i.weekends_hh24);
DBMS_SQL.BIND_VARIABLE(c => l_cursor_id, name => 'b_weekends_mi', value => i.weekends_mi);
DBMS_SQL.BIND_VARIABLE(c => l_cursor_id, name => 'b_weekends_ss', value => i.weekends_ss);
DBMS_SQL.BIND_VARIABLE(c => l_cursor_id, name => 'b_weekend_duration', value => :weekend_duration);
l_rows_processed := DBMS_SQL.EXECUTE(c => l_cursor_id);
END LOOP;
DBMS_SQL.CLOSE_CURSOR(c => l_cursor_id);
END;
/
SPO OFF;



Written by Carlos Sierra

July 5, 2017 at 6:16 pm

Posted in OEM, PDB

How to execute some SQL in all Pluggable Databases (PDBs)

leave a comment »

If you are on 12c (and you should be) and your database is truly multitenant, you may need to execute some SQL in all PDBs. OEM is awesome when it comes to executing a job in a set of databases, and if such a job is a SQL script then you can write it to do the same SQL in each PDB out of the set, as long as it is not a standby. I have seen SQL scripts doing that, making use of dynamic SQL and generating “ALTER SESSION SET CONTAINER = xxx” commands, spooled to a text file, then executing such text file trusting its content. This approach works fine but is not very clean, and it opens the door to some issues with the spooled file. I won’t get into the details, but one is security…

In order to avoid using a spool file with dynamic SQL, and having the parent script trust that such a dynamic script is indeed available and legit, what I am proposing here is the use of oldie DBMS_SQL, still doing dynamic SQL but self-contained. Look at the sample code below: this script simply enables those tasks out of the Auto Task set provided by Oracle. Of course this script assumes that someone has disabled one or more of the three tasks in some PDBs… or maybe in all of them, and that some databases out of the farm may have the same issue… So if in this example we just want to enable all 3 tasks in all PDBs for all databases in the farm, then we could schedule the sample script as an OEM job, and execute it every once in a while.

Anyway, enjoy the sample script and consider using it for other purposes. Nothing new, just a simple and clean case of using DBMS_SQL on multitenant, while avoiding having to execute a script generated by another script, and all the headaches that such an action may cause when something goes wrong.

COL report_date NEW_V report_date;
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD"T"HH24-MI-SS') report_date FROM DUAL;
SPO /tmp/change_all_pdbs_&&report_date..txt;

VAR v_cursor CLOB;
BEGIN
  :v_cursor := q'[
BEGIN
  FOR i IN (SELECT client_name, operation_name 
              FROM dba_autotask_operation 
             WHERE status = 'DISABLED'
             ORDER BY 1, 2)
  LOOP
    DBMS_OUTPUT.PUT_LINE('ENABLE CLIENT_NAME:'||i.client_name||' OPERATION:'||i.operation_name);
    DBMS_AUTO_TASK_ADMIN.ENABLE
      ( client_name => i.client_name
      , operation   => NULL
      , window_name => NULL
      );
  END LOOP;
END;
]';
END;
/
PRINT v_cursor;

  l_cursor_id INTEGER;
  l_rows_processed INTEGER;
  l_cursor_id := DBMS_SQL.OPEN_CURSOR;
  FOR i IN (SELECT name 
              FROM v$containers 
             WHERE con_id > 2 
               AND open_mode = 'READ WRITE'
             ORDER BY 1)
      ( c             => l_cursor_id
      , statement     => :v_cursor
      , language_flag => DBMS_SQL.NATIVE
      , container     =>
      l_rows_processed := DBMS_SQL.EXECUTE(c => l_cursor_id);
  DBMS_SQL.CLOSE_CURSOR(c => l_cursor_id);
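Once the script has run, a quick way to confirm the result from the root container is to query the CDB-level view. This check is not part of the script above, just a hypothetical follow-up:

```sql
-- Confirm no auto task operation remains disabled in any container
SELECT con_id, client_name, operation_name, status
  FROM cdb_autotask_operation
 WHERE status = 'DISABLED'
 ORDER BY 1, 2, 3;
-- no rows expected once all tasks are enabled
```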


Written by Carlos Sierra

July 3, 2017 at 4:18 pm

Posted in OEM, PDB, Scripts

eAdam 3.0


Source: eAdam 3.0

Written by Carlos Sierra

April 16, 2017 at 7:10 pm

Posted in General

eDB360 meets eAdam 3.0 – when two heads are better than one!

with one comment

Version v1711 of eDB360 invites eAdam 3.0 to the party. What does that mean? We recently learned that eDB360 v1706 introduced the eDB360 repository, which materialized the content of 26 core DBA_HIST views into a staging repository made of heap tables, in order to expedite the execution of eDB360 on a database with an enlarged AWR. New version v1711 expands that list from 26 views to a total of 219, and these 219 views now include DBA_HIST, DBA, GV$ and V$ views.

Expanding the existing eDB360 repository 8.4x, from 26 to 219 views, is not what is key in version v1711. The real impact of the latest version of eDB360 is that it now benefits from eAdam 3.0, providing the same benefits of the 219 views of the eDB360 heap-tables repository, but using External Tables, which are easily transported from a source database to a target database. This simple fact opens many doors.

Using the eAdam 3.0 repository from eDB360, we can now extract the metadata from a production database, then restore it on a staging database where we can produce the eDB360 report for the source database. We could also use this new external repository for finer-granularity data mining, capacity planning, sizing for a potential hardware refresh, provisioning tools, estimating candidate segments for partitioning or for offloading into Hadoop, etc.
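As a sketch of the data-mining idea, and assuming the restored eAdam repository exposes tables named after their source views (check the repository readme for the actual names), one could profile the source workload offline with queries like this:

```sql
-- Hypothetical example: top wait events by sampled sessions, mined offline
-- from the restored dba_hist_active_sess_history data, with zero impact on
-- the source database
SELECT event, COUNT(*) samples
  FROM dba_hist_active_sess_history
 WHERE session_state = 'WAITING'
 GROUP BY event
 ORDER BY samples DESC
 FETCH FIRST 10 ROWS ONLY;
```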

With the new external-tables eAdam 3.0 repository, we could easily build a larger permanent heap-table repository for multiple databases (multiple tenants), or for multiple time versions of the same database. Thus, now that eDB360 has met eAdam 3.0, the combination of the two enables multiple innovative future features (custom, or to be packaged and shipped with eDB360).

eDB360 recap

eDB360 is a free tool that gives a 360-degree view of an Oracle database. It installs nothing on the database, and it runs on 10g to 12c Oracle databases. It is designed for Linux and UNIX, but runs well on Windows too (you may want to first install the UnxUtils UNIX utilities and a zip program, otherwise a few OS commands may not work properly on Windows). This repository-less mode is the default way to use eDB360, and it is the right method for most cases. But now, in addition to the default use, eDB360 can also make use of one of two repositories.

eDB360 takes time to execute (several hours), and it is designed to time out after 24 hours by default. The first reason for long execution times is the intentional serial-processing method used, with sequential execution of query after query, while consuming little resources. Such serial execution, plus the fact that the tool commonly runs on large databases where the state of AWR is suboptimal, causes executions to take long. We often discover that source AWR repositories lack the expected partitioning, and in many cases they hold years of data instead of the expected 8 to 31 days. Therefore, the nature of serial execution combined with enlarged and suboptimal AWR repositories usually causes eDB360 to execute for many more hours than expected. With 8 to 31 days of AWR data, and when such a reasonable history is well partitioned, eDB360 usually executes in less than 6 hours.
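To check whether a source AWR repository suffers from the enlarged-history problem described above, its retention and actual snapshot range can be inspected before running eDB360. This is a generic check, not part of the tool itself:

```sql
-- AWR snapshot interval and configured retention
SELECT snap_interval, retention FROM dba_hist_wr_control;

-- Actual history kept: compare against the expected 8 to 31 days
SELECT MIN(begin_interval_time) oldest,
       MAX(end_interval_time)   newest,
       COUNT(*)                 snapshots
  FROM dba_hist_snapshot;
```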

To overcome the undesired extended execution times of eDB360, and besides the obvious (partition and purge AWR), the tool provides the capability to execute in multiple threads, splitting its content by column. And now, in addition to that divide-and-conquer approach, eDB360 provides two repositories with different objectives in mind:

  1. eDB360 repository: use this method to create a staging repository based on heap tables inside the source database. This method helps to expedite the execution of eDB360 by at least 10x. The repository heap tables are created and consumed inside the same database.
  2. eAdam3 repository: use this method to generate a repository based on external tables on the source database. Such an external-tables repository can easily be moved to a remote target system, allowing you to efficiently generate the eDB360 report there. This method helps to reduce computations on the source database, and it enables potential data mining on the external repository without any resource impact on the source database. It also helps to build other functions on top of the 219-table repository.

Views included on both eDB360 and eAdam3 repositories:

  1. dba_2pc_neighbors
  2. dba_2pc_pending
  3. dba_all_tables
  4. dba_audit_mgmt_config_params
  5. dba_autotask_client
  6. dba_autotask_client_history
  7. dba_cons_columns
  8. dba_constraints
  9. dba_data_files
  10. dba_db_links
  11. dba_extents
  12. dba_external_tables
  13. dba_feature_usage_statistics
  14. dba_free_space
  15. dba_high_water_mark_statistics
  16. dba_hist_active_sess_history
  17. dba_hist_database_instance
  18. dba_hist_event_histogram
  19. dba_hist_ic_client_stats
  20. dba_hist_ic_device_stats
  21. dba_hist_interconnect_pings
  22. dba_hist_memory_resize_ops
  23. dba_hist_memory_target_advice
  24. dba_hist_osstat
  25. dba_hist_parameter
  26. dba_hist_pgastat
  27. dba_hist_resource_limit
  28. dba_hist_seg_stat
  29. dba_hist_service_name
  30. dba_hist_sga
  31. dba_hist_sgastat
  32. dba_hist_snapshot
  33. dba_hist_sql_plan
  34. dba_hist_sqlstat
  35. dba_hist_sqltext
  36. dba_hist_sys_time_model
  37. dba_hist_sysmetric_history
  38. dba_hist_sysmetric_summary
  39. dba_hist_sysstat
  40. dba_hist_system_event
  41. dba_hist_tbspc_space_usage
  42. dba_hist_wr_control
  43. dba_ind_columns
  44. dba_ind_partitions
  45. dba_ind_statistics
  46. dba_ind_subpartitions
  47. dba_indexes
  48. dba_jobs
  49. dba_jobs_running
  50. dba_lob_partitions
  51. dba_lob_subpartitions
  52. dba_lobs
  53. dba_obj_audit_opts
  54. dba_objects
  55. dba_pdbs
  56. dba_priv_audit_opts
  57. dba_procedures
  58. dba_profiles
  59. dba_recyclebin
  60. dba_registry
  61. dba_registry_hierarchy
  62. dba_registry_history
  63. dba_registry_sqlpatch
  64. dba_role_privs
  65. dba_roles
  66. dba_rsrc_consumer_group_privs
  67. dba_rsrc_consumer_groups
  68. dba_rsrc_group_mappings
  69. dba_rsrc_io_calibrate
  70. dba_rsrc_mapping_priority
  71. dba_rsrc_plan_directives
  72. dba_rsrc_plans
  73. dba_scheduler_job_log
  74. dba_scheduler_jobs
  75. dba_scheduler_windows
  76. dba_scheduler_wingroup_members
  77. dba_segments
  78. dba_sequences
  79. dba_source
  80. dba_sql_patches
  81. dba_sql_plan_baselines
  82. dba_sql_plan_dir_objects
  83. dba_sql_plan_directives
  84. dba_sql_profiles
  85. dba_stat_extensions
  86. dba_stmt_audit_opts
  87. dba_synonyms
  88. dba_sys_privs
  89. dba_tab_cols
  90. dba_tab_columns
  91. dba_tab_modifications
  92. dba_tab_partitions
  93. dba_tab_privs
  94. dba_tab_statistics
  95. dba_tab_subpartitions
  96. dba_tables
  97. dba_tablespace_groups
  98. dba_tablespaces
  99. dba_temp_files
  100. dba_triggers
  101. dba_ts_quotas
  102. dba_unused_col_tabs
  103. dba_users
  104. dba_views
  105. gv$active_session_history
  106. gv$archive_dest
  107. gv$archived_log
  108. gv$asm_disk_iostat
  109. gv$database
  110. gv$dataguard_status
  111. gv$event_name
  112. gv$eventmetric
  113. gv$instance
  114. gv$instance_recovery
  115. gv$latch
  116. gv$license
  117. gv$managed_standby
  118. gv$memory_current_resize_ops
  119. gv$memory_dynamic_components
  120. gv$memory_resize_ops
  121. gv$memory_target_advice
  122. gv$open_cursor
  123. gv$osstat
  124. gv$parameter
  125. gv$pga_target_advice
  126. gv$pgastat
  127. gv$pq_slave
  128. gv$pq_sysstat
  129. gv$process
  130. gv$process_memory
  131. gv$px_buffer_advice
  132. gv$px_process
  133. gv$px_process_sysstat
  134. gv$px_session
  135. gv$px_sesstat
  136. gv$resource_limit
  137. gv$result_cache_memory
  138. gv$result_cache_statistics
  139. gv$rsrc_cons_group_history
  140. gv$rsrc_consumer_group
  141. gv$rsrc_plan
  142. gv$rsrc_plan_history
  143. gv$rsrc_session_info
  144. gv$rsrcmgrmetric
  145. gv$rsrcmgrmetric_history
  146. gv$segstat
  147. gv$services
  148. gv$session
  149. gv$session_blockers
  150. gv$session_wait
  151. gv$sga
  152. gv$sga_target_advice
  153. gv$sgainfo
  154. gv$sgastat
  155. gv$sql
  156. gv$sql_monitor
  157. gv$sql_plan
  158. gv$sql_shared_cursor
  159. gv$sql_workarea_histogram
  160. gv$sysmetric
  161. gv$sysmetric_summary
  162. gv$sysstat
  163. gv$system_parameter2
  164. gv$system_wait_class
  165. gv$temp_extent_pool
  166. gv$undostat
  167. gv$waitclassmetric
  168. gv$waitstat
  169. v$archive_dest_status
  170. v$archived_log
  171. v$ash_info
  172. v$asm_attribute
  173. v$asm_client
  174. v$asm_disk
  175. v$asm_disk_stat
  176. v$asm_diskgroup
  177. v$asm_diskgroup_stat
  178. v$asm_file
  179. v$asm_template
  180. v$backup
  181. v$backup_set_details
  182. v$block_change_tracking
  183. v$cell_config
  184. v$cell_state
  185. v$controlfile
  186. v$database
  187. v$database_block_corruption
  188. v$datafile
  189. v$flashback_database_log
  190. v$flashback_database_stat
  191. v$instance
  192. v$io_outlier
  193. v$iostat_file
  194. v$kernel_io_outlier
  195. v$lgwrio_outlier
  196. v$log
  197. v$log_history
  198. v$logfile
  199. v$mystat
  200. v$nonlogged_block
  201. v$option
  202. v$parallel_degree_limit_mth
  203. v$parameter
  204. v$pdbs
  205. v$recovery_area_usage
  206. v$recovery_file_dest
  207. v$restore_point
  208. v$rman_backup_job_details
  209. v$rman_output
  210. v$segstat
  211. v$spparameter
  212. v$standby_log
  213. v$sys_time_model
  214. v$sysaux_occupants
  215. v$system_parameter2
  216. v$tablespace
  217. v$tempfile
  218. v$thread
  219. v$version

Instructions to use eDB360 and eAdam3 repositories

Both repositories are implemented under the edb360-master/repo subdirectory. Each has a readme file, which explains how to create the repository, consume it and drop it. The eAdam3 repository also includes instructions on how to clone an external-table-based eAdam repository into a heap-table-based eDB360 repository.

Executing eDB360 on the eDB360 repository is faster than executing it on the eAdam3 repository, and it also avoids new bug 25802477. This new Oracle database bug afflicts compressed external tables like the ones used by the eAdam3 repository.

If you use eDB360 or eAdam3 repositories and have questions or concerns, do not hesitate to contact the author.


Written by Carlos Sierra

April 10, 2017 at 2:41 pm