Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

Archive for the ‘Scripts’ Category

Free script to very quickly and cheaply estimate the size of an index if it were to be rebuilt


A good friend of mine recently asked me if edb360 included a section showing indexes that would benefit from a rebuild. I replied “not yet” and basically committed to adding something. The topic of the need for, versus the implications of, rebuilding an index has recently been discussed on Richard Foote’s Blog. In my opinion, if you want to know more about indexes, Richard’s blog is one of the first stops. To my surprise and delight, I learned that we can use a little trick with the EXPLAIN PLAN FOR command to have the CBO estimate what the size of an index would be if we were to create (or rebuild) it. In that blog posting Richard explains and demonstrates how it can be done.

My blog posting today is about encapsulating this cool method to quickly and cheaply estimate the size of an index if it were to be rebuilt, and putting it into a free script for the Oracle community to use. So, feel free to use the script below, but before you jump to the conclusion that this or that index should be rebuilt, read first about whether doing so is actually needed. You may want to include in your reading Richard Foote’s numerous postings on this topic, and also a couple of Oracle MOS notes: 989093.1 and 989186.1.
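
To show the trick itself before wrapping it in a script, here is a minimal sketch (my_table, my_column and my_index are just placeholder names): explain the CREATE INDEX DDL, then read the estimate the CBO records in the OTHER_XML column of PLAN_TABLE:

EXPLAIN PLAN SET STATEMENT_ID = 'EST_IDX' FOR
CREATE INDEX my_index ON my_table (my_column);

-- the CBO stores its estimate (in bytes) as an "index_size" entry inside OTHER_XML
SELECT EXTRACTVALUE(VALUE(d), '/info') estimated_bytes
  FROM plan_table pt,
       TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE(pt.other_xml), '/*/info'))) d
 WHERE pt.statement_id = 'EST_IDX'
   AND pt.other_xml IS NOT NULL
   AND EXTRACTVALUE(VALUE(d), '/info/@type') = 'index_size';

The script below automates exactly this, using DBMS_METADATA to obtain the DDL of each existing index.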

----------------------------------------------------------------------------------------
--
-- File name:   estimate_index_size.sql
--
-- Purpose:     Reports indexes whose actual size exceeds their estimated size by over 1 MB
--
-- Author:      Carlos Sierra
--
-- Version:     2014/07/18
--
-- Description: Script to very quickly and cheaply estimate the size of an index if it
--              were to be rebuilt. It uses EXPLAIN PLAN FOR CREATE INDEX technique.
--              It can be used on a single index, or all the indexes on a table, or
--              a particular application schema, or all application schemas. It does not
--              lock indexes and only updates the plan_table, which is usually a global
--              temporary table.
--
-- Usage:       Connect to SQL*Plus as SYS or DBA account and execute without parameters.
--              It will ask for optional schema owner, table name and index name. If all
--              3 are given null values then it acts on all application schemas. It
--              generates a simple text report with the indexes having an actual size
--              of at least 1 MB over their estimated size.
--
-- Example:     @estimate_index_size.sql
--
-- Notes:       Developed and tested on 11.2.0.3.
--
--              Inspired by blog posts from Richard Foote and Connor McDonald:
--              http://richardfoote.wordpress.com/2014/04/24/estimate-index-size-with-explain-plan-i-cant-explain/#comment-116966
--              http://connormcdonald.wordpress.com/2012/05/30/index-size/
--
--              If considering index rebuilds based on the output of this script, read
--              first Richard Foote's numerous blog postings about this topic. Bottom
--              line: there are only a few cases where you actually need to manually
--              rebuild an index.
--
--              This method to estimate the size of an index is far from perfect, so please
--              scrutinize this script before using it. You may also want to read
--              Oracle MOS notes: 989093.1 and 989186.1 on this topic.
--
---------------------------------------------------------------------------------------
--
SPO estimate_index_size.txt;
UNDEF owner table_name index_name exclusion_list exclusion_list2;
DEF exclusion_list = "('ANONYMOUS','APEX_030200','APEX_040000','APEX_SSO','APPQOSSYS','CTXSYS','DBSNMP','DIP','EXFSYS','FLOWS_FILES','MDSYS','OLAPSYS','ORACLE_OCM','ORDDATA','ORDPLUGINS','ORDSYS','OUTLN','OWBSYS')";
DEF exclusion_list2 = "('SI_INFORMTN_SCHEMA','SQLTXADMIN','SQLTXPLAIN','SYS','SYSMAN','SYSTEM','TRCANLZR','WMSYS','XDB','XS$NULL')";
VAR random1 VARCHAR2(30);
VAR random2 VARCHAR2(30);
EXEC :random1 := DBMS_RANDOM.string('A', 30);
EXEC :random2 := DBMS_RANDOM.string('X', 30);
DELETE plan_table WHERE statement_id IN (:random1, :random2);

SET SERVEROUT ON;
DECLARE
  sql_text CLOB;
BEGIN
  FOR i IN (SELECT idx.owner, idx.index_name
              FROM dba_indexes idx,
                   dba_tables tbl
             WHERE idx.owner = NVL(UPPER(TRIM('&&owner.')), idx.owner) -- optional schema owner name
               AND idx.table_name = NVL(UPPER(TRIM('&&table_name.')), idx.table_name) -- optional table name
               AND idx.index_name = NVL(UPPER(TRIM('&&index_name.')), idx.index_name) -- optional index name
               AND idx.owner NOT IN &&exclusion_list. -- exclude non-application schemas
               AND idx.owner NOT IN &&exclusion_list2. -- exclude more non-application schemas
               AND idx.index_type IN ('NORMAL', 'FUNCTION-BASED NORMAL', 'BITMAP', 'NORMAL/REV') -- exclude domain and lob
               AND idx.status != 'UNUSABLE' -- only valid indexes
               AND idx.temporary = 'N'
               AND tbl.owner = idx.table_owner
               AND tbl.table_name = idx.table_name
               AND tbl.last_analyzed IS NOT NULL -- only tables with statistics
               AND tbl.num_rows > 0 -- only tables with rows as per statistics
               AND tbl.blocks > 128 -- skip small tables
               AND tbl.temporary = 'N')
  LOOP
    BEGIN
      sql_text := 'EXPLAIN PLAN SET STATEMENT_ID = '''||:random1||''' FOR '||REPLACE(DBMS_METADATA.get_ddl('INDEX', i.index_name, i.owner), CHR(10), ' ');
      -- cbo estimates index size based on explain plan for create index ddl
      EXECUTE IMMEDIATE sql_text;
      -- index owner and name do not fit on statement_id, thus using object_owner and object_name, using statement_id as processing state
      DELETE plan_table WHERE statement_id = :random1 AND (other_xml IS NULL OR NVL(DBMS_LOB.instr(other_xml, 'index_size'), 0) = 0);
      UPDATE plan_table SET object_owner = i.owner, object_name = i.index_name, statement_id = :random2 WHERE statement_id = :random1;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(i.owner||'.'||i.index_name||': '||SQLERRM);
        DBMS_OUTPUT.PUT_LINE(DBMS_LOB.substr(sql_text));
    END;
  END LOOP;
END;
/
SET SERVEROUT OFF;

WITH
indexes AS (
SELECT pt.object_owner,
       pt.object_name,
       TO_NUMBER(EXTRACTVALUE(VALUE(d), '/info')) estimated_bytes
  FROM plan_table pt,
       TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE(pt.other_xml), '/*/info'))) d
 WHERE pt.statement_id = :random2
   AND pt.other_xml IS NOT NULL -- redundant
   AND DBMS_LOB.instr(pt.other_xml, 'index_size') > 0 -- redundant
   AND EXTRACTVALUE(VALUE(d), '/info/@type') = 'index_size' -- grab index_size type
),
segments AS (
SELECT owner, segment_name, SUM(bytes) actual_bytes
  FROM dba_segments
 WHERE owner = NVL(UPPER(TRIM('&&owner.')), owner) -- optional schema owner name
   AND segment_name = NVL(UPPER(TRIM('&&index_name.')), segment_name) -- optional index name
   AND owner NOT IN &&exclusion_list. -- exclude non-application schemas
   AND owner NOT IN &&exclusion_list2. -- exclude more non-application schemas
   AND segment_type LIKE 'INDEX%'
 GROUP BY
       owner,
       segment_name
HAVING SUM(bytes) > POWER(2, 20) -- only indexes with actual size > 1 MB
),
list_bytes AS (
SELECT (s.actual_bytes - i.estimated_bytes) actual_minus_estimated,
       s.actual_bytes,
       i.estimated_bytes,
       i.object_owner,
       i.object_name
  FROM indexes i,
       segments s
 WHERE i.estimated_bytes > POWER(2, 20) -- only indexes with estimated size > 1 MB
   AND s.owner = i.object_owner
   AND s.segment_name = i.object_name
)
SELECT ROUND(actual_minus_estimated / POWER(2, 20)) actual_minus_estimated,
       ROUND(actual_bytes / POWER(2, 20)) actual_mb,
       ROUND(estimated_bytes / POWER(2, 20)) estimated_mb,
       object_owner owner,
       object_name index_name
  FROM list_bytes
 WHERE actual_minus_estimated > POWER(2, 20) -- only differences > 1 MB
 ORDER BY
       1 DESC,
       object_owner,
       object_name
/

DELETE plan_table WHERE statement_id IN (:random1, :random2);
UNDEF owner table_name index_name exclusion_list exclusion_list2;
SPO OFF;

Written by Carlos Sierra

July 18, 2014 at 9:45 am

SQLTXPLAIN PL/SQL Public APIs to execute XTRACT from 3rd party tools


Many tools offer Public APIs, which expose some functionality to other tools. SQLTXPLAIN also contains some Public APIs. They are provided by the package SQLTXADMIN.SQLT$E. I would say the most relevant one is XTRACT_SQL_PUT_FILES_IN_DIR. This blog post is about this Public API and how it can be used by other tools to execute a SQLT XTRACT from PL/SQL instead of SQL*Plus.

Imagine a tool that deals with SQL statements, and with the click of a button it invokes SQLTXTRACT on a SQL of interest; after a few minutes, most files created by SQLTXTRACT show up in a pre-defined OS directory. Implementing this SQLT functionality in an external tool is extremely easy, as you will see below.

Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR inputs a SQL_ID and two optional parameters: a tag to identify output files, and a directory name. Only the SQL_ID parameter is mandatory, but I recommend passing values for all 3.

I used “Q1” as a tag to be included in all output files. And I used staging directory “FROG_DIR” at the database layer, which points to “/home/oracle/frog” at the OS layer.

In the sample below, I show how to use this Public API for a particular SQL_ID “8u0n7w1jug5dg”. I call this API from SQL*Plus, but keep in mind that if I were to call it from within a tool’s PL/SQL library, the method would be the same.

Another consideration is that Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR may take several minutes to execute, so you may want to “queue” the request using a Task or a Job within the database. What is important in this blog post is to explain and show how this Public API works.

SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR parameters:

The code snippet below shows the API parameters. Notice this API is overloaded, so it may return the STATEMENT_ID or nothing. This STATEMENT_ID is the 5-digit number you see on each SQLT execution.


CREATE OR REPLACE PACKAGE &&tool_administer_schema..sqlt$e AUTHID CURRENT_USER AS
/* $Header: 215187.1 sqcpkge.pks 12.1.03 2013/10/10 carlos.sierra mauro.pagano $ */

  /*************************************************************************************/

  /* -------------------------
   *
   * public xtract_sql_put_files_in_dir
   *
   * executes sqlt xtract on a single sql then
   * puts all generated files into an os directory,
   * returning the sqlt statement id.
   *
   * ------------------------- */
  FUNCTION xtract_sql_put_files_in_dir (
    p_sql_id_or_hash_value IN VARCHAR2,
    p_out_file_identifier  IN VARCHAR2 DEFAULT NULL,
    p_directory_name       IN VARCHAR2 DEFAULT 'SQLT$STAGE' )
  RETURN NUMBER;

  /* -------------------------
   *
   * public xtract_sql_put_files_in_dir (overload)
   *
   * executes sqlt xtract on a single sql then
   * puts all generated files into an os directory.
   *
   * ------------------------- */
  PROCEDURE xtract_sql_put_files_in_dir (
    p_sql_id_or_hash_value IN VARCHAR2,
    p_out_file_identifier  IN VARCHAR2 DEFAULT NULL,
    p_directory_name       IN VARCHAR2 DEFAULT 'SQLT$STAGE' );

Staging Directory

To implement Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR in your tool, you first need to create and test a staging directory where the API will write files. This directory needs to be accessible to the “oracle” account, so I show below how to create sample directory “frog” while connected to the OS as “oracle”.

Since the API uses UTL_FILE, it is important that “oracle” can write into it, so be sure to test this UTL_FILE write after you create the directory and before you test Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR.

Use code snippet provided below to test the UTL_FILE writing into this new staging OS directory.

Creating "frog" OS directory connected to OS as "oracle"

Creating “frog” OS directory connected to OS as “oracle”

Creating FROG_DIR database directory and providing access to SQLTXADMIN
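
These two steps would roughly look like this (the OS path comes from the example above, and the grant to SQLTXADMIN is what the caption describes; adjust names to your environment):

-- at the OS, connected as "oracle": mkdir /home/oracle/frog
-- then at the database layer, connected as a DBA:
CREATE OR REPLACE DIRECTORY frog_dir AS '/home/oracle/frog';
GRANT READ, WRITE ON DIRECTORY frog_dir TO sqltxadmin;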

Testing a simple WRITE to FROG_DIR


DECLARE
  out_file_type UTL_FILE.file_type;
BEGIN
  -- open a test file for write in the new staging directory
  out_file_type :=
  UTL_FILE.FOPEN (
     location     => 'FROG_DIR',
     filename     => 'Test1.txt',
     open_mode    => 'WB',
     max_linesize => 32767 );
  -- write something and close, so the write itself is verified
  UTL_FILE.PUT_RAW(out_file_type, UTL_RAW.CAST_TO_RAW('test'));
  UTL_FILE.FCLOSE(out_file_type);
END;
/

Executing SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR

In your tool, you can call this SQLT Public API from PL/SQL. You may want to use a Task or Job, since the API may take several minutes to execute and you do not want the user to simply wait until SQLT completes.

Execution of Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR
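
A minimal sketch of the call shown above, using the SQL_ID, tag and directory from this example, would be the following (the function overload returns the SQLT STATEMENT_ID):

SET SERVEROUT ON;
DECLARE
  l_statement_id NUMBER;
BEGIN
  -- execute SQLT XTRACT and place its files into FROG_DIR
  l_statement_id := SQLTXADMIN.sqlt$e.xtract_sql_put_files_in_dir (
    p_sql_id_or_hash_value => '8u0n7w1jug5dg',
    p_out_file_identifier  => 'Q1',
    p_directory_name       => 'FROG_DIR' );
  DBMS_OUTPUT.PUT_LINE('statement_id: '||l_statement_id);
END;
/

If you prefer not to block the caller, the same anonymous block can be submitted as a database Job or Task, as mentioned earlier.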

Reviewing the output of SQLT XTRACT for SQL_ID “8u0n7w1jug5dg”

Conclusion

Public API SQLTXADMIN.SQLT$E.XTRACT_SQL_PUT_FILES_IN_DIR is available for any 3rd party tool to use. If SQLT has been pre-installed on a system where your tool executes, then calling this API as shown above will generate a set of SQLT files in a pre-defined staging OS directory.

If the system where you install your tool does not have SQLT pre-installed, your tool can direct its users to download and install SQLT out of My Oracle Support (MOS) under document 215187.1.

Once you generate all these SQLT XTRACT files into an OS staging directory, you may want to zip them, or make them visible to your tool user. If the latter, then show the “main” html report.

SQLT is an Oracle community tool hosted at Oracle MOS under 215187.1. This tool is not supported, but if you have a question or struggle while implementing this Public API, feel free to shoot me an email or post your question/concern on this blog.

Written by Carlos Sierra

June 30, 2014 at 9:29 am

Creating a SQL Plan Baseline out of a modified SQL


While delivering a session at KSCOPE 14, I was asked a very good question: Can I create a SQL Plan Baseline for an Original SQL out of a Modified SQL? In other words, query Q1, which performs poorly, has SQL_ID1 and SQL_HANDLE1 with a poorly performing Execution Plan with Hash Value PHV1. With a small modification to this query, like adding a CBO Hint or removing one, we obtain query Q2, which performs well, and has SQL_ID2, SQL_HANDLE2 and PHV2. So what we want is to associate PHV2 with SQL_ID1. The question is: how do we do that with an API?

Maria Colgan explains in this blog post how to manually do exactly what is requested above. So the question becomes: can we do this manual procedure with an easy-to-use API?

Script coe_load_sql_baseline.sql below inputs SQL_ID1, SQL_ID2 and PHV2, which correspond to the Original SQL_ID, the Modified SQL_ID and the Plan Hash Value from the Modified SQL (it could have more than one plan); then it produces a SQL Plan Baseline for the Original SQL out of the Execution Plan of the Modified SQL. By doing this we can “fix” a query performing poorly by attaching to it a well-performing Execution Plan that was produced by making a small temporary change to the Original SQL, like adding or removing a small set of CBO Hints. So we can generate good Execution Plans without actually changing the application.

Script coe_load_sql_baseline.sql is provided inside SQLT (MOS 215187.1) under the sqlt/utl directory. Always use the latest version. As of June 2014 the latest version is the one below. This script does not require SQLT to be installed.
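
At its core, the script relies on one standard DBMS_SPM call, shown here in isolation as a minimal sketch (the SQL_ID and plan hash value are placeholders for the Modified SQL, and :original_sql_text must first be populated with the full text of the Original SQL, which the script obtains from gv$sqlarea or dba_hist_sqltext):

VAR original_sql_text CLOB;
-- populate :original_sql_text with the text of the Original SQL, then:
DECLARE
  plans NUMBER;
BEGIN
  -- load the plan of the Modified SQL, but attach it to the text of the Original SQL
  plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
    sql_id          => '1234567890abc',      -- Modified SQL_ID (placeholder)
    plan_hash_value => 1234567890,           -- PHV2 of the Modified SQL (placeholder)
    sql_text        => :original_sql_text ); -- text of the Original SQL
  DBMS_OUTPUT.PUT_LINE('Plans Loaded: '||plans);
END;
/

The full script wraps this call with validations, renames the new Baseline, and packs it into a staging table for transport.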

SPO coe_load_sql_baseline.log;
SET DEF ON TERM OFF ECHO ON FEED OFF VER OFF HEA ON LIN 2000 PAGES 100 LONG 8000000 LONGC 800000 TRIMS ON TI OFF TIMI OFF SERVEROUT ON SIZE 1000000 NUM 20 SQLP SQL>;
SET SERVEROUT ON SIZE UNL;
REM
REM $Header: 215187.1 coe_load_sql_baseline.sql 11.4.5.8 2013/05/10 carlos.sierra $
REM
REM Copyright (c) 2000-2013, Oracle Corporation. All rights reserved.
REM
REM AUTHOR
REM   carlos.sierra@oracle.com
REM
REM SCRIPT
REM   coe_load_sql_baseline.sql
REM
REM DESCRIPTION
REM   This script loads a plan from a modified SQL into the SQL
REM   Plan Baseline of the original SQL.
REM   If a good performing plan only reproduces with CBO Hints
REM   then you can load the plan of the modified version of the
REM   SQL into the SQL Plan Baseline of the original SQL.
REM   In other words, the original SQL can use the plan that was
REM   generated out of the SQL with hints.
REM
REM PRE-REQUISITES
REM   1. Have in cache or AWR the text for the original SQL.
REM   2. Have in cache the plan for the modified SQL
REM      (usually with hints).
REM
REM PARAMETERS
REM   1. ORIGINAL_SQL_ID (required)
REM   2. MODIFIED_SQL_ID (required)
REM   3. PLAN_HASH_VALUE (required)
REM
REM EXECUTION
REM   1. Connect into SQL*Plus as user with access to data dictionary
REM      and privileges to create SQL Plan Baselines. Do not use SYS.
REM   2. Execute script coe_load_sql_baseline.sql passing first two
REM      parameters inline or until requested by script.
REM   3. Provide plan hash value of the modified SQL when asked.
REM
REM EXAMPLE
REM   # sqlplus system
REM   SQL> START coe_load_sql_baseline.sql gnjy0mn4y9pbm b8f3mbkd8bkgh
REM   SQL> START coe_load_sql_baseline.sql;
REM
REM NOTES
REM   1. This script works on 11g or higher.
REM   2. For a similar script for 10g use coe_load_sql_profile.sql,
REM      which uses custom SQL Profiles instead of SQL Baselines.
REM   3. For possible errors see coe_load_sql_baseline.log
REM   4. Use a DBA user but not SYS. Do not connect as SYS as the staging
REM      table cannot be created in SYS schema and you will receive an error:
REM      ORA-19381: cannot create staging table in SYS schema
REM
SET TERM ON ECHO OFF;
PRO
PRO Parameter 1:
PRO ORIGINAL_SQL_ID (required)
PRO
DEF original_sql_id = '&1';
PRO
PRO Parameter 2:
PRO MODIFIED_SQL_ID (required)
PRO
DEF modified_sql_id = '&2';
PRO
WITH
p AS (
SELECT DISTINCT plan_hash_value
  FROM gv$sql_plan
 WHERE sql_id = TRIM('&&modified_sql_id.')
   AND other_xml IS NOT NULL ),
m AS (
SELECT plan_hash_value,
       SUM(elapsed_time)/SUM(executions) avg_et_secs
  FROM gv$sql
 WHERE sql_id = TRIM('&&modified_sql_id.')
   AND executions > 0
 GROUP BY
       plan_hash_value )
SELECT p.plan_hash_value,
       ROUND(m.avg_et_secs/1e6, 3) avg_et_secs
  FROM p, m
 WHERE p.plan_hash_value = m.plan_hash_value
 ORDER BY
       avg_et_secs NULLS LAST;
PRO
PRO Parameter 3:
PRO PLAN_HASH_VALUE (required)
PRO
DEF plan_hash_value = '&3';
PRO
PRO Values passed to coe_load_sql_baseline:
PRO ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PRO ORIGINAL_SQL_ID: "&&original_sql_id."
PRO MODIFIED_SQL_ID: "&&modified_sql_id."
PRO PLAN_HASH_VALUE: "&&plan_hash_value."
PRO
WHENEVER SQLERROR EXIT SQL.SQLCODE;
SET TERM OFF ECHO ON;

-- trim parameters
COL original_sql_id NEW_V original_sql_id FOR A30;
COL modified_sql_id NEW_V modified_sql_id FOR A30;
COL plan_hash_value NEW_V plan_hash_value FOR A30;
SELECT TRIM('&&original_sql_id.') original_sql_id, TRIM('&&modified_sql_id.') modified_sql_id, TRIM('&&plan_hash_value.') plan_hash_value FROM DUAL;

-- open log file
SPO coe_load_sql_baseline_&&original_sql_id..log;
GET coe_load_sql_baseline.log;
.

-- get user
COL connected_user NEW_V connected_user FOR A30;
SELECT USER connected_user FROM DUAL;

VAR sql_text CLOB;
VAR plan_name VARCHAR2(30);
EXEC :sql_text := NULL;
EXEC :plan_name := NULL;

-- get sql_text from memory
BEGIN
  SELECT REPLACE(sql_fulltext, CHR(00), ' ')
    INTO :sql_text
    FROM gv$sqlarea
   WHERE sql_id = TRIM('&&original_sql_id.')
     AND ROWNUM = 1;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting original sql_text from memory: '||SQLERRM);
    :sql_text := NULL;
END;
/

-- get sql_text from awr
BEGIN
  IF :sql_text IS NULL OR NVL(DBMS_LOB.GETLENGTH(:sql_text), 0) = 0 THEN
    SELECT REPLACE(sql_text, CHR(00), ' ')
      INTO :sql_text
      FROM dba_hist_sqltext
     WHERE sql_id = TRIM('&&original_sql_id.')
       AND sql_text IS NOT NULL
       AND ROWNUM = 1;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting original sql_text from awr: '||SQLERRM);
    :sql_text := NULL;
END;
/

-- sql_text as found
SELECT :sql_text FROM DUAL;

-- check if sql_text for original sql is available
SET TERM ON;
BEGIN
  IF :sql_text IS NULL THEN
    RAISE_APPLICATION_ERROR(-20100, 'SQL_TEXT for original SQL_ID &&original_sql_id. was not found in memory (gv$sqlarea) or AWR (dba_hist_sqltext).');
  END IF;
END;
/

-- check phv is found
DECLARE
  l_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO l_count
    FROM gv$sql
   WHERE sql_id = TRIM('&&modified_sql_id.')
     AND plan_hash_value = TO_NUMBER(TRIM('&&plan_hash_value.'));

   IF l_count = 0 THEN
     RAISE_APPLICATION_ERROR(-20110, 'PHV &&plan_hash_value. for modified SQL_ID &&modified_sql_id. was not found in memory (gv$sql).');
   END IF;
END;
/

SET ECHO OFF;
DECLARE
  plans NUMBER;
  description VARCHAR2(500);
  sys_sql_handle VARCHAR2(30);
  sys_plan_name VARCHAR2(30);
BEGIN
  -- create sql_plan_baseline for original sql using plan from modified sql
  plans :=
  DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
    sql_id          => TRIM('&&modified_sql_id.'),
    plan_hash_value => TO_NUMBER(TRIM('&&plan_hash_value.')),
    sql_text        => :sql_text );
  DBMS_OUTPUT.PUT_LINE('Plans Loaded: '||plans);

  -- find handle and plan_name for sql_plan_baseline just created
  SELECT sql_handle, plan_name
    INTO sys_sql_handle, sys_plan_name
    FROM dba_sql_plan_baselines
   WHERE creator = USER
     AND origin = 'MANUAL-LOAD'
     AND created = ( -- past 1 minute only
  SELECT MAX(created) max_created
    FROM dba_sql_plan_baselines
   WHERE creator = USER
     AND origin = 'MANUAL-LOAD'
     AND created > SYSDATE - (1/24/60));
  DBMS_OUTPUT.PUT_LINE('sys_sql_handle: "'||sys_sql_handle||'"');
  DBMS_OUTPUT.PUT_LINE('sys_plan_name: "'||sys_plan_name||'"');

  -- update description of new sql_plan_baseline
  description := UPPER('original:'||TRIM('&&original_sql_id.')||' modified:'||TRIM('&&modified_sql_id.')||' phv:'||TRIM('&&plan_hash_value.')||' created by coe_load_sql_baseline.sql');
  plans :=
  DBMS_SPM.ALTER_SQL_PLAN_BASELINE (
    sql_handle      => sys_sql_handle,
    plan_name       => sys_plan_name,
    attribute_name  => 'description',
    attribute_value => description );
  DBMS_OUTPUT.PUT_LINE(plans||' plan(s) modified description: "'||description||'"');

  -- update plan_name of new sql_plan_baseline
  :plan_name := UPPER(TRIM('&&original_sql_id.')||'_'||TRIM('&&modified_sql_id.'));
  :plan_name := sys_plan_name; -- avoids ORA-38141: SQL plan baseline SQL_PLAN_64b0jqr2t1h3558b5ab4d does not exist
  IF :plan_name <> sys_plan_name THEN
    plans :=
    DBMS_SPM.ALTER_SQL_PLAN_BASELINE (
      sql_handle      => sys_sql_handle,
      plan_name       => sys_plan_name,
      attribute_name  => 'plan_name',
      attribute_value => :plan_name );
    DBMS_OUTPUT.PUT_LINE(plans||' plan(s) modified plan_name: "'||:plan_name||'"');
  END IF;

  -- drop baseline staging table for original sql (if one exists)
  BEGIN
    DBMS_OUTPUT.PUT_LINE('dropping staging table "STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'))||'"');
    EXECUTE IMMEDIATE 'DROP TABLE STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'));
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('staging table "STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'))||'" did not exist');
  END;

  -- create baseline staging table for original sql
  DBMS_OUTPUT.PUT_LINE('creating staging table "STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'))||'"');
  DBMS_SPM.CREATE_STGTAB_BASELINE (
    table_name  => 'STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.')),
    table_owner => '&&connected_user.' );

  -- packs new baseline for original sql
  DBMS_OUTPUT.PUT_LINE('packaging new sql baseline into staging table "STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.'))||'"');
  plans :=
  DBMS_SPM.PACK_STGTAB_BASELINE (
     table_name  => 'STGTAB_BASELINE_'||UPPER(TRIM('&&original_sql_id.')),
     table_owner => '&&connected_user.',
     sql_handle  => sys_sql_handle,
     plan_name   => :plan_name );
  DBMS_OUTPUT.PUT_LINE(plans||' plan(s) packaged');
END;
/

-- display details of new sql_plan_baseline
SET ECHO ON;
REM
REM SQL Plan Baseline
REM ~~~~~~~~~~~~~~~~~
REM
SELECT signature, sql_handle, plan_name, enabled, accepted, fixed--, reproduced (avail on 11.2.0.2)
  FROM dba_sql_plan_baselines WHERE plan_name = :plan_name;
SELECT description
  FROM dba_sql_plan_baselines WHERE plan_name = :plan_name;
SET ECHO OFF;
PRO
PRO ****************************************************************************
PRO * Enter &&connected_user. password to export staging table STGTAB_BASELINE_&&original_sql_id.
PRO ****************************************************************************
HOS exp &&connected_user. tables=&&connected_user..STGTAB_BASELINE_&&original_sql_id. file=STGTAB_BASELINE_&&original_sql_id..dmp statistics=NONE indexes=N constraints=N grants=N triggers=N
PRO
PRO If you need to implement this SQL Plan Baseline on a similar system,
PRO import and unpack using these commands:
PRO
PRO imp &&connected_user. file=STGTAB_BASELINE_&&original_sql_id..dmp tables=STGTAB_BASELINE_&&original_sql_id. ignore=Y
PRO
PRO SET SERVEROUT ON;;
PRO DECLARE
PRO   plans NUMBER;;
PRO BEGIN
PRO   plans := DBMS_SPM.UNPACK_STGTAB_BASELINE('STGTAB_BASELINE_&&original_sql_id.', '&&connected_user.');;
PRO   DBMS_OUTPUT.PUT_LINE(plans||' plan(s) unpackaged');;
PRO END;;
PRO /
PRO
SPO OFF;
HOS zip -m coe_load_sql_baseline_&&original_sql_id. coe_load_sql_baseline_&&original_sql_id..log STGTAB_BASELINE_&&original_sql_id..dmp coe_load_sql_baseline.log
HOS zip -d coe_load_sql_baseline_&&original_sql_id. coe_load_sql_baseline.log
WHENEVER SQLERROR CONTINUE;
SET DEF ON TERM ON ECHO OFF FEED 6 VER ON HEA ON LIN 80 PAGES 14 LONG 80 LONGC 80 TRIMS OFF TI OFF TIMI OFF SERVEROUT OFF NUM 10 SQLP SQL>;
SET SERVEROUT OFF;
UNDEFINE 1 2 3 original_sql_id modified_sql_id plan_hash_value
CL COL
PRO
PRO coe_load_sql_baseline completed.

 

Written by Carlos Sierra

June 24, 2014 at 10:32 am

Skipping ACS ramp-up using a SQL Patch


As I prepared for one of my sessions at ODTUG Kscope14, I came across the typical situation of having a SQL for which I wanted to produce multiple optimal Execution Plans on an 11g environment. As you may know, with Adaptive Cursor Sharing (ACS) this is possible and automatic, but the problem is that sometimes the ACS ramp-up process causes some suboptimal Execution Plans. If you want to skip this ACS ramp-up process, let’s say for a SQL that is part of a business-critical transaction and which is known to have unstable Plans, then you may want to create a SQL Patch with the BIND_AWARE Hint. Maria Colgan explained this method in this blog post. What I present here is a script I use, so I can easily implement SQL Patches for some SQL where I just need to inject one or two CBO Hints, like this BIND_AWARE. I use SQL Profiles or SQL Plan Management when I need to provide CBO Hints that affect access paths or join order, but if I just need something like skipping ACS ramp-up or a Hint to produce a SQL Monitor report, then I’d rather use a SQL Patch.
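
In essence, the method boils down to a single call, which the script below wraps with parameter handling, cursor flushing and before/after reporting (the SQL text and patch name here are just placeholders):

VAR sql_text CLOB;
EXEC :sql_text := 'SELECT /* placeholder: exact text of the SQL to patch */ * FROM dual';
BEGIN
  -- connected as SYS: create a SQL Patch that injects the BIND_AWARE Hint
  SYS.DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH (
    sql_text    => :sql_text,
    hint_text   => 'BIND_AWARE',
    name        => 'sqlpch_example',
    category    => 'DEFAULT',
    description => '/*+ BIND_AWARE */' );
END;
/
-- to remove it later: EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'sqlpch_example');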

The script below asks for a SQL_ID and for a short list of CBO Hints to include. By default it includes these 3: “GATHER_PLAN_STATISTICS MONITOR BIND_AWARE”. Execute this script connected as SYS.

----------------------------------------------------------------------------------------
--
-- File name:   sqlpch.sql
--
-- Purpose:     Create Diagnostics SQL Patch for one SQL_ID
--
-- Author:      Carlos Sierra
--
-- Version:     2013/12/28
--
-- Usage:       This script inputs two parameters: Parameter 1 is the SQL_ID, and Parameter 2
--              is the set of Hints for the SQL Patch (defaults to GATHER_PLAN_STATISTICS
--              MONITOR BIND_AWARE).
--
-- Example:     @sqlpch.sql f995z9antmhxn BIND_AWARE
--
--  Notes:      Developed and tested on 11.2.0.3 and 12.0.1.0
--             
---------------------------------------------------------------------------------------
SPO sqlpch.txt;
DEF def_hint_text = 'GATHER_PLAN_STATISTICS MONITOR BIND_AWARE';
SET DEF ON TERM OFF ECHO ON FEED OFF VER OFF HEA ON LIN 2000 PAGES 100 LONG 8000000 LONGC 800000 TRIMS ON TI OFF TIMI OFF SERVEROUT ON SIZE 1000000 NUMF "" SQLP SQL>;
SET SERVEROUT ON SIZE UNL;
COL hint_text NEW_V hint_text FOR A300;
SET TERM ON ECHO OFF;
PRO
PRO Parameter 1:
PRO SQL_ID (required)
PRO
DEF sql_id_1 = '&1';
PRO
PRO Parameter 2:
PRO HINT_TEXT (default: &&def_hint_text.)
PRO
DEF hint_text_2 = '&2';
PRO
PRO Values passed:
PRO ~~~~~~~~~~~~~
PRO SQL_ID   : "&&sql_id_1."
PRO HINT_TEXT: "&&hint_text_2." (default: "&&def_hint_text.")
PRO
SET TERM OFF ECHO ON;
SELECT TRIM(NVL(REPLACE('&&hint_text_2.', '"', ''''''), '&&def_hint_text.')) hint_text FROM dual;
WHENEVER SQLERROR EXIT SQL.SQLCODE;

-- trim sql_id parameter
COL sql_id NEW_V sql_id FOR A30;
SELECT TRIM('&&sql_id_1.') sql_id FROM DUAL;

VAR sql_text CLOB;
VAR sql_text2 CLOB;
EXEC :sql_text := NULL;
EXEC :sql_text2 := NULL;

-- get sql_text from memory
DECLARE
  l_sql_text VARCHAR2(32767);
BEGIN -- 10g see bug 5017909
  FOR i IN (SELECT DISTINCT piece, sql_text
              FROM gv$sqltext_with_newlines
             WHERE sql_id = TRIM('&&sql_id.')
             ORDER BY 1, 2)
  LOOP
    IF :sql_text IS NULL THEN
      DBMS_LOB.CREATETEMPORARY(:sql_text, TRUE);
      DBMS_LOB.OPEN(:sql_text, DBMS_LOB.LOB_READWRITE);
    END IF;
    l_sql_text := REPLACE(i.sql_text, CHR(00), ' '); -- removes NUL characters
    DBMS_LOB.WRITEAPPEND(:sql_text, LENGTH(l_sql_text), l_sql_text); 
  END LOOP;
  -- if found in memory then sql_text is not null
  IF :sql_text IS NOT NULL THEN
    DBMS_LOB.CLOSE(:sql_text);
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting sql_text from memory: '||SQLERRM);
    :sql_text := NULL;
END;
/

SELECT :sql_text FROM DUAL;

-- get sql_text from awr
DECLARE
  l_sql_text VARCHAR2(32767);
  l_clob_size NUMBER;
  l_offset NUMBER;
BEGIN
  IF :sql_text IS NULL OR NVL(DBMS_LOB.GETLENGTH(:sql_text), 0) = 0 THEN
    SELECT sql_text
      INTO :sql_text2
      FROM dba_hist_sqltext
     WHERE sql_id = TRIM('&&sql_id.')
       AND sql_text IS NOT NULL
       AND ROWNUM = 1;
  END IF;
  -- if found in awr then sql_text2 is not null
  IF :sql_text2 IS NOT NULL THEN
    l_clob_size := NVL(DBMS_LOB.GETLENGTH(:sql_text2), 0);
    l_offset := 1;
    DBMS_LOB.CREATETEMPORARY(:sql_text, TRUE);
    DBMS_LOB.OPEN(:sql_text, DBMS_LOB.LOB_READWRITE);
    -- store in clob as 64 character pieces 
    WHILE l_offset < l_clob_size
    LOOP
      IF l_clob_size - l_offset > 64 THEN
        l_sql_text := REPLACE(DBMS_LOB.SUBSTR(:sql_text2, 64, l_offset), CHR(00), ' ');
      ELSE -- last piece
        l_sql_text := REPLACE(DBMS_LOB.SUBSTR(:sql_text2, l_clob_size - l_offset + 1, l_offset), CHR(00), ' ');
      END IF;
      DBMS_LOB.WRITEAPPEND(:sql_text, LENGTH(l_sql_text), l_sql_text);
      l_offset := l_offset + 64;
    END LOOP;
    DBMS_LOB.CLOSE(:sql_text);
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting sql_text from awr: '||SQLERRM);
    :sql_text := NULL;
END;
/

SELECT :sql_text2 FROM DUAL;
SELECT :sql_text FROM DUAL;

-- validate sql_text
BEGIN
  IF :sql_text IS NULL THEN
    RAISE_APPLICATION_ERROR(-20100, 'SQL_TEXT for SQL_ID &&sql_id. was not found in memory (gv$sqltext_with_newlines) or AWR (dba_hist_sqltext).');
  END IF;
END;
/

PRO generate SQL Patch for SQL "&&sql_id." with CBO Hints "&&hint_text."
SELECT loaded_versions, invalidations, address, hash_value
FROM v$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT child_number, plan_hash_value, executions, is_shareable
FROM v$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;

-- drop prior SQL Patch
WHENEVER SQLERROR CONTINUE;
PRO ignore errors
EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'sqlpch_&&sql_id.');
WHENEVER SQLERROR EXIT SQL.SQLCODE;

-- create SQL Patch
PRO you have to connect as SYS
BEGIN
  SYS.DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH (
    sql_text    => :sql_text,
    hint_text   => '&&hint_text.',
    name        => 'sqlpch_&&sql_id.',
    category    => 'DEFAULT',
    description => '/*+ &&hint_text. */'
  );
END;
/

-- flush cursor from shared_pool
PRO *** before flush ***
SELECT inst_id, loaded_versions, invalidations, address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT inst_id, child_number, plan_hash_value, executions, is_shareable
FROM gv$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
PRO *** flushing &&sql_id. ***
BEGIN
  FOR i IN (SELECT address, hash_value
              FROM gv$sqlarea WHERE sql_id = '&&sql_id.')
  LOOP
    DBMS_OUTPUT.PUT_LINE(i.address||','||i.hash_value);
    BEGIN
      SYS.DBMS_SHARED_POOL.PURGE (
        name => i.address||','||i.hash_value,
        flag => 'C'
      );
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END;
  END LOOP;
END;
/
PRO *** after flush ***
SELECT inst_id, loaded_versions, invalidations, address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT inst_id, child_number, plan_hash_value, executions, is_shareable
FROM gv$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;

WHENEVER SQLERROR CONTINUE;
SET DEF ON TERM ON ECHO OFF FEED 6 VER ON HEA ON LIN 80 PAGES 14 LONG 80 LONGC 80 TRIMS OFF TI OFF TIMI OFF SERVEROUT OFF NUMF "" SQLP SQL>;
SET SERVEROUT OFF;
PRO
PRO SQL Patch "sqlpch_&&sql_id." will be used on next parse.
PRO To drop SQL Patch on this SQL:
PRO EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'sqlpch_&&sql_id.');
PRO
UNDEFINE 1 2 sql_id_1 sql_id hint_text_2 hint_text
CL COL
PRO
PRO sqlpch completed.
SPO OFF;

 

 

Written by Carlos Sierra

June 19, 2014 at 5:14 pm

How to upgrade Dynamic Sampling on a Query already using an imported SQL Profile


Problem

I have this query that references a couple of Global Temporary Tables (GTT). These GTT have no CBO Statistics, thus Dynamic Sampling (DS) is used on them. The problem is that the default value of 2 at the instance level samples only 64 blocks, and these GTT are large. Testing with DS on larger samples provides better quality dynamic statistics and a better performing Execution Plan for this particular SQL.

This SQL already uses an imported SQL Profile generated by coe_xfr_sql_profile.sql out of SQLT (MOS 215187.1) under the sqlt/utl directory. This SQL Profile contains CBO Hints created out of the Outline Data contained in the other_xml column of the Plan. Still, a higher level of DS is needed for these two GTT in order to produce a better performing Plan. Setting DS at the instance level or session level is not an option. Modifying the query is not an option. So the question is: “how to embed a dynamic sampling hint in a SQL that already has a SQL Profile based on an Outline”?

Solution

In the script generated by coe_xfr_sql_profile.sql, add one extra Hint with the DS level that is needed. Look at the sample below, where one extra Hint has been added right below DB_VERSION. Since the two GTT have no CBO Statistics, they invoke DS, which would now use level 6 instead of 2.

h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 6)]',
q'[ALL_ROWS]',

 

 

Written by Carlos Sierra

May 21, 2014 at 7:24 am

Why using SQLTXPLAIN


Every so often I see on a distribution list a posting that starts like this: “I upgraded my application from database release X to release Y and now many queries are performing poorly, can you tell why?”

Like everyone else on a distribution list, my first impulse is to make an educated guess based on a prior set of experiences. The intentions are always good, but the process is painful and time consuming. Many of us have seen this kind of question, and many of us have good hunches. Still, I think our eagerness to help blinds us a bit. The right thing to do is to step back and analyze the facts, and I mean all the diagnostics supporting the observation.

What is needed to diagnose a SQL Tuning issue?

The list is large, but I will enumerate some of the most important pieces:

  1. SQL Text
  2. Version of the database (before and after upgrade)
  3. Database parameters (before and after)
  4. State of the CBO Statistics (before and after)
  5. Changes on Histograms
  6. Basics about the architecture (CPUs, memory, etc.)
  7. Values of binds if SQL has them
  8. Indexes compare, including state (visible?, usable?)
  9. Execution Plan (before and after)
  10. Plan stability? (Stored Outlines, Profiles, SQL Plan Management)
  11. Performance history as per evidence on AWR or StatsPack
  12. Trace from Event 10053 to understand the CBO
  13. Trace from Event 10046 level 8 or 12 to review Waits
  14. Active Session History (ASH) if 10046 is not available

I could keep adding bullets to the list, but I think you get the point: There are simply too many things to check! And each takes some time to collect. More importantly, the state of the system changes over time, so you may need to re-collect the same diagnostics more than once.

SQLTXPLAIN to the rescue

SQLT, or SQLTXPLAIN, has been available on MetaLink (now MOS) under note 215187.1 for over a decade. In short, SQLT collects all the diagnostics listed above and a lot more. That is WHY Oracle Support uses it every day. It simply saves a lot of time! So, I always encourage fellow Oracle users to make use of this FREE tool and expedite their own SQL Tuning analysis. When time permits, I do volunteer to help on an analysis. So, if you get to read this, and you want to help yourself while using SQLT but feel intimidated by this little monster, please give it a try and contact me for assistance. If I can help, I will; if I cannot, I will let you know.

Conclusion

It is fun to guess WHY a SQL is not performing as expected, and trying different guesses is educational but very time consuming. If you want to actually find root causes before trying to fix your SQL, you may want to collect relevant diagnostics. SQLT is there to help, and if installing this tool is not something you can do in the short term, then consider SQL Health-Check SQLHC.

 

What is new with EDB360?


Many things, but most important is that it got bigger and better. This free EDB360 tool is maturing over time. Its core function has not changed though, which is to present a 360-degree view of a database (10g or higher).

EDB360 is a nice complement to other tools like Exacheck, Raccheck or Oracheck. It has some additional benefits, like taking a snapshot of a system to then be analyzed offline or simply to preserve this snapshot as a baseline.

Keep in mind that EDB360 does not install anything on the database, nor does it change any data in it. In some cases, where direct access to the database server is not an option, having the capability of executing EDB360 through a SQL*Plus client connection is a big plus.

I use EDB360 as a starting place to perform a whole database health-check.

Since pictures tell more than words, please find 4 below. The first two are about the new entries on the EDB360 main menu (the menu is a tad bigger than what you see in these two pictures, and its content is dynamic). The last two pictures are just a sample of the charts that are now part of EDB360.

EDB360 execution parameters changed from 4 to 6:

  1. Oracle Pack License: If your site has the Tuning Pack, then enter ‘T’, else if your site has the Diagnostics Pack enter ‘D’, else enter ‘N’.
  2. Days of History to consider. If you entered ‘T’ or ‘D’ for the first parameter, then specify in this 2nd parameter up to how many days of history you want EDB360 to use. By default it uses 31, assuming your AWR history is at least that big.
  3. Do you want HTML Reports? By default it is ‘Y’.
  4. Do you want Text Reports? Defaults to ‘Y’.
  5. Do you want CSV Files? Defaults to ‘Y’.
  6. Do you want Charts? Defaults to ‘Y’.

Once you log into SQL*Plus while on top of the edb360 directory, simply execute script edb360.sql and pass all 6 parameters one by one or all of them inline. For example: @edb360 T 31 Y Y Y Y

EDB360 Dynamic Menu (part 1)

EDB360 Dynamic Menu (part 2)

CPU Load per Instance (sample chart)

ASH Top Modules and Actions for Cluster (sample chart)

If you have downloaded EDB360 before, then I encourage you to download and test the new version. If you have never used it, I hope you find this tool useful.

Written by Carlos Sierra

April 2, 2014 at 10:04 am

Meet: eAdam – Enkitec’s free AWR data mining tool


You recently learned about eDB360, and now eAdam? What is this eAdam tool? Before you continue reading, please be aware that eAdam reads data from AWR, thus you must have a license for the Oracle Diagnostics Pack in order to use this new eAdam tool.

Introduction

New eAdam is a free tool to perform data mining on performance-related historical data recorded by AWR. The main characteristics of eAdam are:

  • Installs nothing on the Source database (usually Production)
  • Extracts AWR performance related data as plain text flat files (no export or data pump binary files)
  • Uploads extracted AWR data into a Staging database of the same or a different platform and release
  • Data mining is performed on the Staging database instead of Production

How does eAdam work?

It is better to explain eAdam by functions. So I would say eAdam has the following 4 modules:

  1. AWR extraction from Source (Production)
  2. eAdam installation on Staging system
  3. Loading into eAdam Stage a set of AWR files extracted from Source
  4. AWR data mining on eAdam Stage

AWR extraction from Source (Production)

This is the simplest part. You just need to execute a simple and short script on the Source system (usually Production). This script extracts into flat files the content of the following AWR views, then compresses them into a TAR file. The list below may expand over time as new eAdam versions become available. A rough illustration of what extracting one of these views looks like follows the list.

DBA_HIST_ACTIVE_SESS_HISTORY
DBA_HIST_DATABASE_INSTANCE
DBA_HIST_DATAFILE
DBA_HIST_DLM_MISC
DBA_HIST_EVENT_HISTOGRAM
DBA_HIST_FILESTATXS
DBA_HIST_IOSTAT_DETAIL
DBA_HIST_IOSTAT_FILETYPE
DBA_HIST_IOSTAT_FUNCTION
DBA_HIST_OSSTAT
DBA_HIST_PGASTAT
DBA_HIST_SERVICE_STAT
DBA_HIST_SGA
DBA_HIST_SGASTAT
DBA_HIST_SNAPSHOT
DBA_HIST_SQL_PLAN
DBA_HIST_SQLSTAT
DBA_HIST_SQLTEXT
DBA_HIST_SYS_TIME_MODEL
DBA_HIST_SYSSTAT
DBA_HIST_SYSTEM_EVENT
DBA_HIST_TEMPFILE
DBA_HIST_TEMPSTATXS
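
Purely as an illustration (this is not the actual eAdam extract script; the delimiter and file name are made up), dumping one of these views to a flat file from SQL*Plus could look like this:

SET HEAD OFF PAGES 0 FEED OFF TRIMS ON LIN 32767;
SPO dba_hist_osstat.txt;
-- one delimited line per row of the AWR view
SELECT snap_id||'<;>'||dbid||'<;>'||instance_number||'<;>'||stat_id||'<;>'||stat_name||'<;>'||value
  FROM dba_hist_osstat;
SPO OFF;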

eAdam installation on Staging system

You install eAdam once and then use it multiple times. If you download a newer version, just install it on top of the prior one, so you get the eAdam delta. eAdam should be installed on a Staging database and not in Production or UAT. Pretty much any database could be your Staging database (QA or any other lower environment). It could even be a database on your laptop, for example. Your Staging database does not have to be the same platform or database release as the Source.

To install eAdam you simply execute another script. It creates a schema (you provide the name and password), and this script creates the eAdam repository on your Staging database.

Loading into eAdam Stage a set of AWR files extracted from Source

You can load into eAdam as many TAR files as you want. Each set is identified within eAdam with a sequence key. So your eAdam repository can contain AWR data from different systems, and they could be from the same or different platforms and database releases. The data model of your eAdam repository is determined by your Staging database release, so it is ideal that your Staging database is of an equal or higher release than your Sources, but this is not mandatory.

To load a TAR file with AWR data into your Staging eAdam repository, you execute another script that asks for the TAR file name, produces a set of External Tables, and then uploads the AWR data from the temporary External Tables into permanent staging Tables (a generic sketch of this external-table technique follows the list below):

DBA_HIST_ACTIVE_SESS_HIST_S
DBA_HIST_DATABASE_INSTANC_S
DBA_HIST_DATAFILE_S
DBA_HIST_DLM_MISC_S
DBA_HIST_EVENT_HISTOGRAM_S
DBA_HIST_FILESTATXS_S
DBA_HIST_IOSTAT_DETAIL_S
DBA_HIST_IOSTAT_FILETYPE_S
DBA_HIST_IOSTAT_FUNCTION_S
DBA_HIST_OSSTAT_S
DBA_HIST_PGASTAT_S
DBA_HIST_SERVICE_STAT_S
DBA_HIST_SGASTAT_S
DBA_HIST_SGA_S
DBA_HIST_SNAPSHOT_S
DBA_HIST_SQLSTAT_S
DBA_HIST_SQLTEXT_S
DBA_HIST_SQL_PLAN_S
DBA_HIST_SYSSTAT_S
DBA_HIST_SYSTEM_EVENT_S
DBA_HIST_SYS_TIME_MODEL_S
DBA_HIST_TEMPFILE_S
DBA_HIST_TEMPSTATXS_S
DBA_HIST_XTR_CONTROL_S
DBA_TAB_COLUMNS_S
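
Purely to illustrate the external-table-then-insert mechanics (generic Oracle syntax, not the actual eAdam DDL; the directory object, column list, delimiter and file name are all hypothetical), loading one of these flat files could look like this:

CREATE TABLE dba_hist_osstat_ext (
  snap_id         NUMBER,
  dbid            NUMBER,
  instance_number NUMBER,
  stat_id         NUMBER,
  stat_name       VARCHAR2(64),
  value           NUMBER )
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY eadam_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '<;>' )
  LOCATION ('dba_hist_osstat.txt') );

-- then the flat-file rows are copied into the matching permanent staging table
INSERT INTO dba_hist_osstat_s SELECT * FROM dba_hist_osstat_ext;
COMMIT;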

AWR Data mining on eAdam Stage

Once your AWR is available inside eAdam, you can perform all the Data Mining you may need. A sample script that produces several CSV files out of your data is provided. This sample script is automatically executed at the end of your upload, so you get a set of CSV files that can be used on Excel or any other tool that reads CSV files. I use Excel, where I can easily generate Charts out of the CSV files created by the sample script. That means I can easily visualize trends out of performance data without having access to the Source (Production) environment.

To produce the sample CSV files, eAdam provides a set of views on top of its own repository. This set of views will evolve over time as new releases become available. As of the 1st release we provide the following views:

ASH_AAS_APPLICATION_V1
ASH_AAS_CLUSTER_V1
ASH_AAS_COMMIT_V1
ASH_AAS_CONCURRENCY_V1
ASH_AAS_ON_CPU_V1
ASH_AAS_OTHER_V1
ASH_AAS_SCHEDULER_V1
ASH_AAS_TOTAL_V1
ASH_AAS_USER_IO_V1
ASH_INST_V1
ASH_RAC_V1
EVENT_HISTOGRAM_INST_V1
EVENT_HISTOGRAM_IO_RAC_V1
EVENT_HISTOGRAM_RAC_V1
EVENT_HISTOGRAM_RAC_V2
OSSTAT_BUSY_TIME_PERC_V1
OSSTAT_DELTA_V1
OSSTAT_INST_V1
OSSTAT_LOAD_V1
OSSTAT_RAC_V1
SYSTEM_EVENT_DELTA_V1
SYSTEM_EVENT_INST_V1
SYSTEM_EVENT_NON_IDLE_V1
SYSTEM_EVENT_RAC_V1
SYS_TIME_MODEL_DB_CPU_V3
SYS_TIME_MODEL_DB_TIME_V3
SYS_TIME_MODEL_DB_WAIT_V3
SYS_TIME_MODEL_DELTA_V1
SYS_TIME_MODEL_INST_V1
SYS_TIME_MODEL_RAC_V1

FAQ

Q1: Where can I download eAdam?

A1: From the Enkitec web page. Click on the “Products” tab. The tool will be available on March 7, 2014.

Q2: Is it really free?

A2: Yes. And before you ask what is the catch: “there is no catch”. Just be aware you must have an Oracle Diagnostics Pack license in order to access AWR data, and this eAdam tool is not an exception. Besides that, eAdam is free to download and use.

Q3: I need some extra functionality. How do I get it?

A3: If you need something that eAdam does not provide out of the box, of course you can extend its functionality directly. If the addition is something of general interest, you can submit an “Enhancement Request” (actually an email, or a comment on this post). But if you want something more advanced and of particular use, you can contact Enkitec for a quote for this customization on top of eAdam (for example an Apex application).

Q4: Can I share this eAdam tool or its output?

A4: Sure you can. Just credit Enkitec for the tool. In other words, use it any way you want, but please honor authorship and ownership.

Q5: Who “owns” eAdam?

A5: Enkitec owns this new tool. Carlos Sierra is the author of eAdam, but the vision and some critical components were provided by: Frits Hoogland, Karl Arao and Randy Johnson. So eAdam is the product of a collaboration effort of some geeks working for Enkitec.

Conclusion

Enkitec is providing this eAdam tool for AWR Data Mining for free. Having an Oracle Diagnostics Pack license is a must before using this tool. Besides that, feel free to use this tool at will, and perform all your AWR Data Mining outside the Source system, which is very important for a Production environment. This eAdam is very resource-conscious on the Source system, and it empowers anyone to do performance analysis without having direct access to the Source database.

Having an AWR repository created with eAdam enables many possibilities, like having baselines for particular processes, or comparing performance between different time intervals (pre and post an application upgrade, for example) or between two different systems (UAT and Production, for example). If you already have a set of scripts to do data mining on DBA_HIST views, you can easily convert them to use the matching eAdam Staging tables, so you would no longer be constrained to connect to the live system.

Performing Data Mining on entities like ASH as stored by AWR is like digging in a gold mine. There is so much the database wants to tell you. You just need this kind of tool to listen carefully and find what is important.

Written by Carlos Sierra

March 5, 2014 at 7:29 am


Meet “edb360”: a free tool that provides a 360-degree view of an Oracle database


Simply put: edb360 is a new free tool that provides a 360-degree view of an Oracle database.

What is “edb360“?

This “edb360” tool is the product of a collaborative effort of some very smart guys, and me. Special thanks to Frits Hoogland, Karl Arao, Randy Johnson, Martin Bach, Kyle Hailey, Tanel Poder, Alex Fatkulin, Mauro Pagano, Abel Macias, Jon Adams and Jack Agustin. These guys helped me to envision edb360, some directly and some indirectly, but their help and shared knowledge motivated me to develop edb360 and make it available today.

The edb360 tool started as a quick and dirty “script” to gather basic information about a database without knowing anything about it beforehand. The first rule for edb360 was: it has to install nothing in the database. The second rule became: it has to provide some insight about a database.

The output is presented for the most part in 3 formats: HTML, Text and Comma-separated Values (CSV). Why? HTML and Text can be easily used to consolidate important findings into a Word report. Sometimes HTML is more useful and sometimes Text is better. Then CSV is used to produce charts out of Performance Trends. Some people can visualize trends more easily with a graph (me included).

What about other tools?

Of course there are wonderful tools that can help in this arena, like Oracle Enterprise Manager (OEM) or Oracle’s Automatic Workload Repository (AWR). So why not use those tools? Well, if I had access to OEM or I knew beforehand which time intervals I want to analyze with AWR, then I would not have a strong need to use edb360. The reality that we consultants face when we are getting acquainted with a system is that we are not given any access to the database of interest (usually production). And asking for a server account feels like asking for Coke’s secret formula: we simply cannot poke the database at our own will, and that is understandable. So, what is our second best? Please run this script that installs nothing and generates a zip file with some metadata from your system. The script is plain text and its output is also plain text (html, text and csv files). So, any DBA or System Administrator can validate that no customer confidential data is extracted or exposed. A win-win!

If the system we want to understand is an Exadata system, we can also request an Exacheck output; if it is not an Exadata system but a RAC cluster, there is Raccheck. These two tools, available through My Oracle Support (MOS), make a good companion for edb360. In other words, edb360 is not a replacement for the other two but more of an add-on or companion.

Why is edb360 free?

Why not? Often I get asked: why do you give away the tools and scripts you develop? The answer is simple: tools, scripts, white papers, blog entries like this, in my mind they all represent the same thing: sharing knowledge with our Oracle community. I wish for a community where knowledge (and tools) flows for all to benefit. Let’s say the personal time I invest building tools and scripts kind of makes up for my lame blog postings. 😉

What is the catch?

No catch. Just be aware that edb360 makes use of some DBA_HIST views and ASH data, and those are part of the Oracle Diagnostics Pack. So when executing the tool it will ask you to indicate whether your site has those licenses. Your answer determines the scope of the output. So if you specify you have a license for the Oracle Diagnostics Pack, then your edb360 output includes pieces from AWR and ASH; otherwise AWR and ASH are not accessed.

About versions, feedback and support

For the most part, I am committed to maintaining this tool as my personal time permits. That means I can only work on it during odd hours and not every day. Nothing different from SQLTXPLAIN during the first few years of its existence, so I am not scared. Keep in mind also that this edb360 is a work in progress, and version v1405 is the first one I feel comfortable sharing with the community. In other words, it is far from perfect and I foresee it growing in multiple directions.

If you like this tool, and want to enhance its output, get SQLHC from MOS 1366133.1, and place the sqlhc.sql script into the same db360/sql directory. By doing so, you will also get 3 SQL health-checks. In other words, edb360 is SQLHC aware.

Conclusion

If you like free tools and have a use for this edb360 tool, you might as well download it and give it a shot. Nothing to lose (besides a few minutes of your spare time). A sample output is also available under the same link above.

Life is Good!

Written by Carlos Sierra

February 19, 2014 at 7:34 pm

About AWR, SQLT and DB360


This blog posting is about answering this first question below, which I often get asked:

Can I use SQLTXPLAIN (SQLT) instead of AWR?

The answer is: it depends. If you are doing SQL Tuning and you already know the SQL_ID, then you may want to use SQLT XTRACT (MOS 215187.1) directly on the SQL_ID of concern. But even in that case, keep in mind that SQLT accesses AWR data, so your environment must have a valid license to use the Oracle Diagnostics Pack. In fact, when you install SQLT, it asks if you have the Oracle Tuning Pack, the Oracle Diagnostics Pack or none. Depending on how you respond, SQLT accesses (or not) the set of views and APIs included in those Oracle Packs. That means you can configure SQLT to access AWR data or not.

What is the difference between AWR and SQLT?

In short, the difference between these two is the scope. You use AWR to diagnose database performance, while you use SQLT to diagnose one SQL. Sometimes I explain this difference by saying: “AWR is to an entire database what SQLT is to one SQL”. One is for system-wide performance, the other is very centric to one SQL at a time.

Why SQLT exists?

I envisioned SQLT in the late 90’s when I was a road warrior fighting fires in the area of performance, and in particular SQL performance. I found back then that Oracle-provided tools like TKPROF were excellent, but I always needed something more, like knowing the state of the CBO Statistics, Tables and Indexes, etc.

These days, my good friend Mauro Pagano from Oracle is keeping the ball rolling. He is keeping SQLT in constant motion, making it a better tool with every new version. So, I would say this: SQLT is filling some gaps that I, and many others, consider important in order to be diligent on root cause analysis for cases where a SQL performs poorly.

What is DB360?

As SQLT brings to the table several pieces of information that we need for SQL Tuning, and which are not available using out-of-the-box tools like TKPROF or SQL Monitoring, the new DB360 tool is doing something similar for the entire database: it complements what AWR provides by producing a report with meaningful information about an entire database. This DB360 is a tool that installs nothing on the database, and produces an HTML report with sections such as Configuration, Sizing, CBO Statistics, Performance Trends, etc.

Is DB360 a licensed product?

No. This DB360 tool belongs to Enkitec. It is not yet available to the Oracle community, but it will be soon. Same as with SQLT, if you have an Oracle Tuning or Diagnostics Pack, then when you execute DB360 you get to see in your DB360 report some pieces of information generated out of views and APIs covered by those Oracle Packs; otherwise you get only the pieces which require no Oracle Pack license. Besides the restriction to limit your use of DB360 as per your Oracle Pack license, DB360 itself will be available to the Oracle community for free, and with no strings attached, same as SQLT.

Why are SQLT and DB360 free to download and use?

These tools are simply a contribution to the Oracle community. “Sharing tools is like sharing knowledge: it makes our IT experience more pleasurable”. The only payback comes when you share back with the Oracle community some of your knowledge, or some of your tools and scripts. If you have been a speaker at an Oracle Users Group, then you may relate to this gratifying experience of sharing with others what you know. At RMOUG these past 3 days, I have had the opportunity to experience once again this special sense of community, which is always eager to share and to learn from each other.

Conclusion

SQLT complements TKPROF and SQL Monitor. DB360 complements AWR. When it comes to diagnostics, either for one SQL or for an entire database, having at our disposal additional diagnostics in the context of our analysis improves our chances of doing a diligent job, while reducing the time it would take to assemble those pieces manually; all with the extra benefit of having these extra diagnostic pieces consistent to a point in time. So, I hope you continue enjoying SQLT and, in the near future, DB360!

Written by Carlos Sierra

February 7, 2014 at 6:07 pm