Carlos Sierra's Tools and Tips

Tools and Tips for Oracle Performance and SQL Tuning

eAdam

with 9 comments

Enkitec’s Oracle AWR Data Mining Tool

eAdam is a free tool that extracts a subset of data and metadata from an Oracle database, with the objective of performing data mining on a separate staging Oracle database. The data extracted is relevant to performance evaluation projects. Most of the data eAdam extracts is licensed by Oracle under the Diagnostics Pack, and some under the Tuning Pack; therefore, in order to use this eAdam tool, the source database must be licensed to use both Oracle Packs (Tuning and Diagnostics).

To a point, eAdam is similar to eDB360; both access the Data Dictionary in order to produce reports. The key difference is that eDB360 generates all its reports (after doing some intensive processing) at the source database, while eAdam simply extracts a set of flat files into a TAR file, using a very lightweight script, delaying all the intensive processing for a later time and a separate staging system. This feature can be very attractive for busy systems, where the amount of processing done by any external monitoring tool needs to be minimized.

On the source system, eAdam only needs to execute a short script to extract the data and metadata of interest, producing a dense TAR file. On a staging system, eAdam does the heavy lifting: creating a repository, loading the repository, and finally computing meaningful reports. The processing of the TAR file on the staging system is usually performed by the requestor, using a lower-environment or remote database.

The list of objects eAdam extracts as flat files from the source database includes the following (a sketch of the flat-file extraction follows the list):

dba_hist_active_sess_history
dba_hist_database_instance
dba_hist_event_histogram
dba_hist_osstat
dba_hist_parameter
dba_hist_pgastat
dba_hist_sga
dba_hist_sgastat
dba_hist_snapshot
dba_hist_sql_plan
dba_hist_sqlstat
dba_hist_sqltext
dba_hist_sys_time_model
dba_hist_sysstat
gv$active_session_history
gv$log
gv$sql_monitor
gv$sql_plan_monitor
gv$sql_plan_statistics_all
gv$sql
gv$system_parameter2
v$controlfile
v$datafile
v$tempfile
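
For illustration, this is roughly how one of these flat-file extracts could be produced in SQL*Plus. This is a minimal sketch only: the real eadam_extract.sql covers all the views above and uses its own delimiter and formatting, so the '<~>' delimiter and file names here are hypothetical.

SET TERM OFF HEA OFF PAGES 0 FEED OFF TRIMS ON LIN 32767;
SPO dba_hist_osstat.txt;
-- one delimited line per row of the view
SELECT snap_id||'<~>'||dbid||'<~>'||instance_number||'<~>'||
       stat_id||'<~>'||stat_name||'<~>'||value
  FROM dba_hist_osstat;
SPO OFF;
-- repeat for each view, then bundle the flat files
HOST tar -cf eadam_extract.tar dba_hist_osstat.txt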

eAdam works on Oracle 10gR2, 11gR2, and higher releases, and it can be used on Linux or UNIX platforms; it has not been tested on Windows. An eAdam sample output is available at this Dropbox location; after downloading the sample output, look for the 0001_eadam36_N_dbname_index.html file and start browsing.

Instructions – Source Database

Download the tool, uncompress the master ZIP file, and look for the file eadam-master/source_system/eadam_extract.sql. Review and execute this single, short script, connecting to the source database as SYS or as a DBA account. Locate the TAR file produced, and send it to the requestor.
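
For reference, a typical extraction session looks like this (a sketch; it assumes the ZIP was uncompressed under /home/oracle, so adjust paths to your environment):

$ cd /home/oracle/eadam-master/source_system
$ sqlplus / as sysdba
SQL> START eadam_extract.sql
SQL> EXIT
$ ls -l *.tar   -- locate the TAR file produced in this directory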

Be aware that the TAR file produced by the extraction process can be large, so be sure to execute the extract script from a directory with at least 10 GB of free space. Common sizes of this TAR file range between 100 MB and 1 GB. Execution time for the extraction may exceed 1 hour, depending on the size of the Data Dictionary.

Instructions – Staging Database

Be sure you have both the eAdam tool (eadam-master.zip) and the TAR file produced on the source system. Your staging database can be of an equal, higher, or lower release level than the source (equal or higher is recommended), and the platform can be the same or different.

To install, load and report on the staging database, proceed with the following steps (a sample session follows the list):

  1. Create on the staging system a file directory available to Oracle for read and write. Most probably you will want to connect to the OS as oracle and create a new directory such as /home/oracle/eadam-master. Put the content of the eadam-master.zip file in there.
  2. Create the eAdam repository on the staging database. This step is needed only once. Follow the instructions in eadam_readme.txt. Basically, you execute eadam-master/stage_system/eadam_install.sql connected as SYS. This script asks for 4 parameters: the tablespace names for permanent and temporary schema objects, and the username and password of the new eAdam account. For the username I recommend eadam, but you can use any valid name.
  3. Load the data contained in the TAR file into the database. First copy the TAR file into the eadam-master/stage_system sub-directory, then execute the stage_system/eadam_load.sql script from within that sub-directory, connecting as SYS. This script asks for 4 parameters: first the directory path of your stage_system sub-directory, for example /home/oracle/eadam-master/stage_system (this sub-directory must contain the TAR file); next the username and password of the eadam account as you created them; and last the name of the TAR file to be loaded into the database.
  4. The load process performs some data transformations and produces at the end an output similar to eDB360, but smaller in content. After you review the eAdam output, you may decide to generate new output for shorter time series; in that case use eadam-master/stage_system/eadam_report.sql, connecting as the eadam user. This reporting process asks for 3 parameters: the EADAM_SEQ_ID, which identifies your particular load (a list of values is displayed), then the range of dates using the format YYYY-MM-DD/HH24:MI, for example 2014-07-27/17:33.
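
Putting steps 2 to 4 together, a sample staging session might look like this. This is a sketch: the tablespace names, the eadam password, the TAR file name and the EADAM_SEQ_ID are placeholders, and each script prompts for its own parameters as described above.

$ cd /home/oracle/eadam-master/stage_system
$ sqlplus / as sysdba
SQL> START eadam_install.sql
-- prompts: permanent tablespace, temporary tablespace, username (eadam), password
SQL> START eadam_load.sql
-- prompts: /home/oracle/eadam-master/stage_system, eadam, password, TAR file name
SQL> CONNECT eadam/<password>
SQL> START eadam_report.sql
-- prompts: EADAM_SEQ_ID, then a date range like 2014-07-27/17:33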

Download

EADAM @ GitHub is available as free software. You can review its eadam_readme.txt, license.txt, or any other piece of the tool before downloading it. Use this link, eadam-master.zip, to download eAdam as a compressed file.

Feedback

Please post your feedback about the eAdam tool on this blog, or send an email directly to the tool author: Carlos Sierra.

Written by Carlos Sierra

July 27, 2014 at 6:25 pm

How to execute eAdam and eDB360

with one comment

Both the eAdam and eDB360 tools are available at the Enkitec web site under the Products tab. You can download a free copy from there, or maybe an Enkitec consultant gave you a newer version to execute. In any case, the question is: “how do I execute these tools?”

Executing eAdam

  1. Take the eadam.zip file and unzip it on your server
  2. Navigate to the top eadam directory, which contains a readme.txt and a sql subdirectory
  3. Connect to SQL*Plus as SYS, a DBA account, or a user with access to the Data Dictionary views
  4. Execute START sql/eadam_01_xtr.sql using the default values for the input parameters. Be aware this tool reads from DBA_HIST views, so you can only use it if you have a license for the Oracle Diagnostics or Tuning Packs

    eadam execution sample (screenshot)
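In the absence of the screenshot, the session goes roughly like this (a sketch; just accept the defaults when prompted):

$ cd eadam
$ sqlplus / as sysdba
SQL> START sql/eadam_01_xtr.sql
SQL> EXIT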

Executing eDB360

  1. Take the edb360.zip file and unzip it on your server
  2. Navigate to the top edb360 directory, which contains a readme.txt, an edb360.sql script, a run_edb360.sh script and a sql subdirectory
  3. Connect to SQL*Plus as SYS, a DBA account, or a user with access to the Data Dictionary views
  4. Execute START edb360.sql passing values to the input parameters. The first one asks about the Oracle License Pack you have: respond with a “T” if you have the Tuning Pack license, with a “D” if you have the Diagnostics Pack, or with an “N” if you have neither. For the other parameters, about the output format, just hit enter or pass a “Y”

    edb360 execution sample (screenshot)
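Again as a sketch of what the sample shows, assuming a Diagnostics Pack license:

$ cd edb360
$ sqlplus / as sysdba
SQL> START edb360.sql
-- first parameter: T (Tuning), D (Diagnostics) or N (none); here: D
-- remaining parameters: hit enter or pass Y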

About the Output

Both of these tools generate output proportional to the size of the selected objects in the Data Dictionary and Dynamic Views. So it is common for the output to be large, meaning between 100 MB and 1 or 2 GB. I strongly recommend executing these tools from a staging directory with plenty of space: plan for 10 GB, even if the tools end up consuming a lot less.

Another consideration is the time these tools take to execute. They extract everything sequentially and in serial execution, so they do not impose a significant load on your system. But since the amount of metadata they extract is large, they may take more than one hour to execute. I have seen cases where they take 2 or 3 hours, so plan accordingly.

The output of each of these two tools is a compressed file. eAdam produces just a set of CSV files, which will be loaded by the requestor into a stand-alone staging Oracle database for data mining and reporting. eDB360 produces HTML, text and CSV files. The HTML set is the one used the most, while the CSV set is used with Excel or a similar tool to produce charts.

Conclusion

Executing eAdam or eDB360 is easy, but you need access to the Data Dictionary views and a large staging area on disk. So before executing these tools, plan to provision both space and time.

Written by Carlos Sierra

March 12, 2014 at 10:52 am

Enkitec’s free AWR data mining tool (eAdam) is now available!

with 2 comments

As of today, you can download this free tool from our Enkitec web page. Just select “eAdam” under the Products tab.

The next version of eAdam will incorporate the GV$ views equivalent to the currently extracted DBA_HIST set. The reason for this enhancement is to handle read-only databases (Data Guard – DG). So views like GV$ACTIVE_SESSION_HISTORY will be available for data mining within the eAdam staging repository.

Anyway, I hope you enjoy this new tool. Feel free to provide constructive feedback on this blog, or by sending an email to carlos.sierra@enkitec.com.

Written by Carlos Sierra

March 6, 2014 at 3:40 pm

Posted in eAdam

Meet: eAdam – Enkitec’s free AWR data mining tool

with 9 comments

You recently learned about eDB360, and now eAdam? What is this eAdam tool? Before you continue reading, please be aware that eAdam reads data from AWR, thus you must have a license for the Oracle Diagnostics Pack in order to use this new eAdam tool.

Introduction

eAdam is a new, free tool to perform data mining on the performance-related historical data recorded by AWR. The main characteristics of eAdam are:

  • Installs nothing on the Source database (usually Production)
  • Extracts AWR performance data as plain-text flat files (no export or Data Pump binary files)
  • Uploads the extracted AWR data into a Staging database of the same or a different platform and release
  • Performs the data mining on the Staging database instead of Production

How does eAdam work?

It is better to explain eAdam by function. So I would say eAdam has the following 4 modules:

  1. AWR extraction from Source (Production)
  2. eAdam installation on Staging system
  3. Loading into eAdam Stage a set of AWR files extracted from Source
  4. AWR data mining on eAdam Stage

AWR extraction from Source (Production)

This is the simplest part. You just need to execute a simple and short script on a Source system (usually Production). This script extracts the content of the following AWR views into flat files, then compresses them into a TAR file. The list below may expand over time as new eAdam versions become available.

DBA_HIST_ACTIVE_SESS_HISTORY
DBA_HIST_DATABASE_INSTANCE
DBA_HIST_DATAFILE
DBA_HIST_DLM_MISC
DBA_HIST_EVENT_HISTOGRAM
DBA_HIST_FILESTATXS
DBA_HIST_IOSTAT_DETAIL
DBA_HIST_IOSTAT_FILETYPE
DBA_HIST_IOSTAT_FUNCTION
DBA_HIST_OSSTAT
DBA_HIST_PGASTAT
DBA_HIST_SERVICE_STAT
DBA_HIST_SGA
DBA_HIST_SGASTAT
DBA_HIST_SNAPSHOT
DBA_HIST_SQL_PLAN
DBA_HIST_SQLSTAT
DBA_HIST_SQLTEXT
DBA_HIST_SYS_TIME_MODEL
DBA_HIST_SYSSTAT
DBA_HIST_SYSTEM_EVENT
DBA_HIST_TEMPFILE
DBA_HIST_TEMPSTATXS

eAdam installation on Staging system

You install eAdam once and then use it multiple times. If you download a newer version, just install it on top of the prior one, so you get the eAdam delta. eAdam should be installed on a Staging database and not in Production or UAT. Pretty much any database could be your Staging database (QA or any other lower environment); it could even be a database on your laptop. Your Staging database does not have to be the same platform or database release as the Source.

To install eAdam you simply execute another script. You provide a schema name and password, and this script creates the eAdam repository on your Staging database.

Loading into eAdam Stage a set of AWR files extracted from Source

You can load into eAdam as many TAR files as you want. Each set is identified within eAdam by a sequence key, so your eAdam repository can contain AWR data from different systems, from the same or different platforms and database releases. The data model of your eAdam repository is determined by your Staging database release, so ideally your Staging database is of an equal or higher release than your Sources, but this is not mandatory.

To load a TAR file with AWR data into your Staging eAdam repository, you execute another script that asks for the TAR name; it creates a set of external tables, then uploads the AWR data from these temporary external tables into the following permanent staging tables (an illustrative external-table sketch follows the list):

DBA_HIST_ACTIVE_SESS_HIST_S
DBA_HIST_DATABASE_INSTANC_S
DBA_HIST_DATAFILE_S
DBA_HIST_DLM_MISC_S
DBA_HIST_EVENT_HISTOGRAM_S
DBA_HIST_FILESTATXS_S
DBA_HIST_IOSTAT_DETAIL_S
DBA_HIST_IOSTAT_FILETYPE_S
DBA_HIST_IOSTAT_FUNCTION_S
DBA_HIST_OSSTAT_S
DBA_HIST_PGASTAT_S
DBA_HIST_SERVICE_STAT_S
DBA_HIST_SGASTAT_S
DBA_HIST_SGA_S
DBA_HIST_SNAPSHOT_S
DBA_HIST_SQLSTAT_S
DBA_HIST_SQLTEXT_S
DBA_HIST_SQL_PLAN_S
DBA_HIST_SYSSTAT_S
DBA_HIST_SYSTEM_EVENT_S
DBA_HIST_SYS_TIME_MODEL_S
DBA_HIST_TEMPFILE_S
DBA_HIST_TEMPSTATXS_S
DBA_HIST_XTR_CONTROL_S
DBA_TAB_COLUMNS_S
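
For illustration only, the load script generates external tables conceptually similar to this sketch. The real DDL, column lists and field delimiter are produced by eadam_load.sql; the directory, table names and '<~>' delimiter here are hypothetical.

CREATE OR REPLACE DIRECTORY eadam_dir AS '/home/oracle/eadam-master/stage_system';

CREATE TABLE dba_hist_osstat_x (  -- temporary external table over one flat file
  snap_id         NUMBER,
  dbid            NUMBER,
  instance_number NUMBER,
  stat_id         NUMBER,
  stat_name       VARCHAR2(64),
  value           NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY eadam_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '<~>'  -- hypothetical delimiter
  )
  LOCATION ('dba_hist_osstat.txt')
);

-- move the rows into the permanent staging table
INSERT INTO dba_hist_osstat_s SELECT * FROM dba_hist_osstat_x;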

AWR Data mining on eAdam Stage

Once your AWR is available inside eAdam, you can perform all the data mining you may need. A sample script that produces several CSV files out of your data is provided; it is automatically executed at the end of your upload, so you get a set of CSV files that can be used with Excel or any other tool that reads CSV files. I use Excel, where I can easily generate charts out of the CSV files created by the sample script. That means I can easily visualize trends in performance data without having access to the Source (Production) environment.
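
For example, an approximation of average active sessions (AAS) per hour can be mined from the staged ASH with a query along these lines. This is a sketch, assuming the staging table keeps the DBA_HIST_ACTIVE_SESS_HISTORY columns, and that AWR samples ASH roughly every 10 seconds.

SELECT TRUNC(CAST(sample_time AS DATE), 'HH24') sample_hour,
       ROUND(COUNT(*) * 10 / 3600, 1) aas  -- each AWR ASH sample represents ~10s
  FROM dba_hist_active_sess_hist_s
 GROUP BY TRUNC(CAST(sample_time AS DATE), 'HH24')
 ORDER BY 1;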

To produce the sample CSV files, eAdam provides a set of views on top of its own repository. This set of views will evolve over time as new releases become available. As of the first release, the following views are provided (a sample CSV extraction sketch follows the list):

ASH_AAS_APPLICATION_V1
ASH_AAS_CLUSTER_V1
ASH_AAS_COMMIT_V1
ASH_AAS_CONCURRENCY_V1
ASH_AAS_ON_CPU_V1
ASH_AAS_OTHER_V1
ASH_AAS_SCHEDULER_V1
ASH_AAS_TOTAL_V1
ASH_AAS_USER_IO_V1
ASH_INST_V1
ASH_RAC_V1
EVENT_HISTOGRAM_INST_V1
EVENT_HISTOGRAM_IO_RAC_V1
EVENT_HISTOGRAM_RAC_V1
EVENT_HISTOGRAM_RAC_V2
OSSTAT_BUSY_TIME_PERC_V1
OSSTAT_DELTA_V1
OSSTAT_INST_V1
OSSTAT_LOAD_V1
OSSTAT_RAC_V1
SYSTEM_EVENT_DELTA_V1
SYSTEM_EVENT_INST_V1
SYSTEM_EVENT_NON_IDLE_V1
SYSTEM_EVENT_RAC_V1
SYS_TIME_MODEL_DB_CPU_V3
SYS_TIME_MODEL_DB_TIME_V3
SYS_TIME_MODEL_DB_WAIT_V3
SYS_TIME_MODEL_DELTA_V1
SYS_TIME_MODEL_INST_V1
SYS_TIME_MODEL_RAC_V1
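
The sample CSV files are essentially spooled queries against these views, and you can produce your own the same way. A minimal sketch (COLSEP gives a crude but Excel-friendly CSV; check the actual view columns in your repository):

SET HEA ON PAGES 50000 LIN 32767 TRIMS ON FEED OFF COLSEP ',';
SPO ash_aas_total.csv;
SELECT * FROM ash_aas_total_v1;
SPO OFF;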

FAQ

Q1: Where can I download eAdam?

A1: From the Enkitec web page. Click on the “Products” tab. The tool will be available on March 7, 2014.

Q2: Is it really free?

A2: Yes. And before you ask what the catch is: there is no catch. Just be aware you must have an Oracle Diagnostics Pack license in order to access AWR data, and this eAdam tool is no exception. Besides that, eAdam is free to download and use.

Q3: I need some extra functionality. How do I get it?

A3: If you need something that eAdam does not provide out of the box, of course you can extend its functionality directly. If the addition is something of general interest, you can submit an “enhancement request” (an email, actually, or a comment on this post). But if you want something more advanced and of particular use, you can contact Enkitec for a quote for a customization on top of eAdam (for example, an APEX application).

Q4: Can I share this eAdam tool or its output?

A4: Sure you can. Just credit Enkitec for the tool. In other words, use it any way you want, but please honor authorship and ownership.

Q5: Who “owns” eAdam?

A5: Enkitec owns this new tool. Carlos Sierra is the author of eAdam, but the vision and some critical components were provided by Frits Hoogland, Karl Arao and Randy Johnson. So eAdam is the product of a collaborative effort of some geeks working for Enkitec.

Conclusion

Enkitec is providing this eAdam tool for AWR data mining for free. Having an Oracle Diagnostics Pack license is a must before using this tool. Besides that, feel free to use this tool at will, and perform all your AWR data mining outside the Source system, which is very important for a Production environment. eAdam is very resource-conscious on the Source system, and it empowers anyone to do performance analysis without having direct access to the Source database.

Having an AWR repository created with eAdam enables many possibilities, like keeping baselines for particular processes, or comparing performance between different time intervals (pre and post an application upgrade, for example) or between two different systems (UAT and Production, for example). If you already have a set of scripts to do data mining on DBA_HIST views, you can easily convert them to use the matching eAdam staging tables, so you would no longer be constrained to connect to the live system.

Performing data mining on entities like ASH as stored by AWR is like digging in a gold mine. There is so much the database wants to tell you. You just need this kind of tool to listen carefully and find what is important.

Written by Carlos Sierra

March 5, 2014 at 7:29 am

Posted in Active Session History, ASH, AWR, eAdam, edb360, Scripts

SQLTXPLAIN under new administration

with 3 comments

During my 17 years at Oracle, I developed several tools and scripts. The largest and most widely used is SQLTXPLAIN. It is available through My Oracle Support (MOS) under document_id 215187.1.

SQLTXPLAIN, also known as SQLT, is a tool for SQL diagnostics, including performance and wrong results. I am the original developer and author, but since the very early stages of its development, this tool has encapsulated the expertise of many bright engineers, DBAs, developers and others, who constantly helped to improve it on every new release by providing valuable feedback. SQLT is then nothing but the collection of many good ideas from many people. I was just the lucky guy who decided to build something useful for the Oracle SQL tuning community.

When I decided to join Enkitec back in 2013, I asked Mauro Pagano to look after my baby (I mean SQLT), and sure enough he did an excellent job. Mauro fixed most of my bugs, as he jokes, and also incorporated some of his own :-). Mauro kept SQLT in good shape and was able to continue improving it on every new release. Now Mauro also works for Enkitec, so SQLT has a new owner and custodian at Oracle.

Abel Macias is the new owner of SQLT, and as such he stays busy maintaining and enhancing this tool among other duties at Oracle. So, if you have enhancement requests, or positive feedback, please reach out to Abel at his Oracle account: abel.macias@oracle.com. If you come across some of my other tools and scripts, and they show my former Oracle account (carlos.sierra@oracle.com), please reach out to Abel; he may be able to route your concern or question.

Since one of my hobbies is building free software that I also consume, my current efforts are on eDB360, eAdam and eSP. The most popular and openly available is eDB360, which basically gives you a 360-degree view of a database without installing anything. Meanwhile, Mauro is also building something cool in his own free time: the new SQLd360 tool, which is already available on the web (search for SQLd360). This SQLd360 tool, similar to eDB360, provides a 360-degree view, but its focus is one SQL instead of a whole database. And, like eDB360, it installs nothing on the database. Both are available as “free software” for anyone to download and use. That is the nice part: everyone likes free! (especially if it is any good).

What is the difference between SQLd360 and SQLT?

Both are exceptional tools. And both can be used for SQL Tuning and for SQL diagnostics. The main differences in my opinion are these:

  1. SQLT has it all. It is huge and it covers pretty much all corners. So, for SQL tuning, SQLTXPLAIN is “THE” tool.
  2. SQLd360, on the other hand, is smaller, newer and faster to execute. It gives me what is most important and most commonly used.
  3. SQLT requires installing a couple of schemas and hundreds of objects. SQLd360 installs nothing!
  4. To download SQLT you need to log in to MOS. In contrast, SQLd360 is wide open (free software license), and no login is needed.
  5. Oracle Support requires SQLT, and Oracle engineers are not yet exposed to SQLd360.
  6. SQLd360 uses Google Charts (as eDB360 does), which enhance the readability of large data sets, like time series for example. Thus SQLd360 output is quite readable.
  7. eDB360 calls SQLd360 for SQL of interest (large database consumers), so in that sense SQLd360 enhances eDB360. But SQLd360 can also be used stand-alone.

If you ask me which one I would recommend, I would answer: both! If you can use both, that is better than using just one. Each of these two tools (SQLT and SQLd360) has pros and cons compared to the other, but in the end both are great tools. And thanks to Abel Macias, SQLT continues its lifecycle with frequent enhancements. And thanks to Mauro, we now have a new kid on the block! I would say we have a win-win for our large Oracle community!

Written by Carlos Sierra

March 18, 2015 at 12:37 pm

Some eDB360 Frequently Asked Questions

with 6 comments

As I get frequent questions about eDB360 I decided to post some FAQ here. First, some links to prior posts:

  1. Video: Introducing the eDB360 Tool
  2. What to do if edb360 takes long to run
  3. eDB360
  4. What is new with EDB360?
  5. How to execute eAdam and eDB360
  6. Meet “edb360”: a free tool that provides a 360-degree view of an Oracle database
  7. Presentations

Q: When is the best time to execute edb360?

A: At the end of a regular working day. Since it may take several hours to execute, some customers schedule it for a Friday evening.

Q: How long will edb360 take to execute on my production environment?

A: I don’t know. Best way to predict this is by executing edb360 in a lower but similar environment, like QA or TEST.

Q: Are there any risks on executing edb360 on a production environment?

A: None that I know of. Think of edb360 as a long sequence of SQL statements against DBA and GV$ views. Are they going to break anything? In theory, no. So far I haven't seen edb360 break any environment, and if it did, it would most probably be a database bug. The most conservative approach is to try it first on a lower environment.

Q: Does edb360 install anything on my database?

A: No.

Q: Does edb360 change the state of my database?

A: No.

Q: My edb360 seems to be hanging. How can I tell for sure?

A: Use any normal method: OEM, query long operations, look at V$SQL for active SQL. If you see edb360 is “stuck” on one SQL for, let's say, more than one hour, you may want to report this to me. You can also find the SQL, kill the session, comment that SQL out of the edb360 code, and retry. When I have seen this, it has been either a known database bug, or a lack of good statistics on the schema objects behind the particular query.
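
A generic query along these lines (my sketch, not part of edb360) shows what each active session has been running and for how long; adjust the final filter to single out the session running the tool:

SELECT s.inst_id, s.sid, s.sql_id, s.last_call_et seconds_in_call, q.sql_text
  FROM gv$session s, gv$sql q
 WHERE q.inst_id = s.inst_id
   AND q.sql_id = s.sql_id
   AND q.child_number = s.sql_child_number
   AND s.status = 'ACTIVE'
   AND s.type = 'USER'
   AND s.sid <> SYS_CONTEXT('USERENV', 'SID'); -- exclude this monitoring session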

Q: Why is the output of edb360 so extensive?

A: We don’t know what we don’t know. Since edb360 is used for Oracle database health-checks, trying to minimize the number of round trips to collect diagnostics, this tool collects many pieces of information. Sometimes we use most of it, sometimes a small subset. It all depends on what we see.

Q: How do I read edb360?

A: Personally, I read everything. It takes me at least one day for an initial review. When I look at it for the 2nd or 3rd time, my search is narrower and deeper. While I look at it the 1st time I take some light notes, else I forget what I have seen so far.

Q: Can I use edb360 on any Oracle database?

A: Yes, as long as it is 10g, 11g or 12c. It works on any platform, but it works better on Linux and UNIX. It works for any application, including EBS, PSFT, Siebel, SAP and any custom one.

Q: Can I use edb360 on a non-Oracle database?

A: No.

Q: Is edb360 certified to run on 12c?

A: Not yet. It should work fine on single-tenant databases, but I haven’t tested it on 12c yet. I will soon.

Q: Can I use edb360 for sizing and/or capacity?

A: Yes and no. edb360 captures sizing metadata using the eSP collector. The eSP application for Sizing and Provisioning is proprietary to Enkitec (now Accenture). So we can use edb360 for sizing, but only internally and when our client asks for it.

Q: Is edb360 free software?

A: Yes. Look for the license file included with the tool.

Q: Where do I get the latest version of this tool?

A: Look at the column to the right of this entry. There is a section with a “Download” title; that link will take you to a GitHub location where you can download the latest version.

Q: Can I use edb360 for SQL Tuning?

A: Actually, for that you may want to get SQLd360, which is a work in progress by Mauro Pagano. In the meantime you can use planx.sql, sqlmon.sql, sqlash.sql (under my cscripts), or download SQLTXPLAIN and/or SQLHC from MOS.

Q: Can I share edb360 tool with others?

A: Absolutely! I developed this tool thinking of our Oracle community.

Q: I have some ideas to improve edb360. How can I provide them?

A: Send me an email, or post them here. Every so often I get some good ideas, which I put on my list, and one day I get to them. Keep in mind that I work on edb360 in my free personal time, so some ideas take longer to implement than others.

Q: Where can I learn more about edb360?

A: Look at the links at the top. There is a 1-hour presentation that goes over what this tool does. There is also a fresh sample output available.

Written by Carlos Sierra

January 7, 2015 at 2:56 pm

Posted in edb360

eSP

with 2 comments

Enkitec’s Sizing and Provisioning (eSP) is a new internal tool designed and developed with Oracle Engineered Systems in mind. Thanks to the experience and insights of Randy Johnson, Karl Arao and Frits Hoogland, what began as a pet project for some of us became over time an actual, robust APEX/PLSQL application, developed by Christoph Ruepprich and myself, and ready to debut at Oracle Open World 2014.

This posting is about eSP: what it does, and how it helps with the sizing and provisioning of Oracle Engineered Systems, or, I would rather say, any system where Oracle runs.

We used to size Engineered Systems using a complex and very useful spreadsheet developed by Randy Johnson and Karl Arao. Now it is eSP’s turn to take the next step and move this effort forward into a more scalable application that sits on top of one of our Exadata machines.

Sizing an Engineered System

Sizing a System can be quite challenging, especially when the current system is composed of several hosts with multiple databases of diverse use, size, versions, workloads, etc. The new target system may also bring some complexities: as the number of possible configurations grows, finding the right choice becomes harder. Then we also have the challenges of disk redundancy, recovery areas, and the potential benefits of offloading with its smart scans, just to mention some of the added complexities.

At a very high level, sizing a System is about 3 entities: Resources, Capacity and Utilization. Resources define what I call “demand”: basically the set of computational resources of your original System, made of one or many databases and instances over some hosts. Capacity, which I also call “supply”, is the set of possible target Systems with their multiple Configurations; in other words, Engineered Systems or any other hardware capable of hosting Oracle databases. Utilization, which I may also refer to as “allocation”, is where the magic and the challenge reside. It is a clever and unbiased mapping between databases and configurations, then between instances and nodes. This mapping has to consider, at the very least, CPU footprint, memory for SGA and PGA, database disk space, and throughput in terms of IOPS and MBPS. Additional constraints, as mentioned before, include redundancy and offloading among others. CPU can be a bit tricky since each CPU make and model has its own characteristics, so mapping them requires the use of SPEC.

Another challenge a sizing tool has to consider is the variability of the Resources. The question becomes: do we see the Resources as a worst-case scenario, or shall we rather consider them as time series? In other words, do we compute and use peaks, or do we observe the use of Resources over time, then develop some methods to aggregate them consistently as time series? If we decide to use a reduced set of data points, do we use peaks or percentiles? If the latter, which percentile is well balanced: 99.9, 99, 95 or maybe 90? How conservative are those values? There are so many questions, and the answer to most of them, as you may guess, is: “it all depends”.
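
To make the percentile question concrete: given a hypothetical table of CPU utilization samples, the candidate data points compare like this. This is only a sketch of the idea; eSP's actual aggregation methods are internal.

SELECT MAX(cpu_busy_pct)                                            peak,
       PERCENTILE_CONT(0.999) WITHIN GROUP (ORDER BY cpu_busy_pct) p999,
       PERCENTILE_CONT(0.99)  WITHIN GROUP (ORDER BY cpu_busy_pct) p99,
       PERCENTILE_CONT(0.95)  WITHIN GROUP (ORDER BY cpu_busy_pct) p95,
       PERCENTILE_CONT(0.90)  WITHIN GROUP (ORDER BY cpu_busy_pct) p90
  FROM cpu_samples;  -- hypothetical time series of host CPU utilization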

How eSP Works

Without getting into the technical details, I can say that eSP is an APEX application with a repository on an Oracle database. It inputs collected “Requirements” from the databases to be sized, then processes these Requirements and prepares them to be “Allocated” into one or more defined hardware configurations. The process is for the most part automated, meaning this: we execute a tool or script on the set of hosts where the databases reside, then upload the output of these collectors into eSP, and we are ready to plan and apply “what-if” scenarios. Having an Exadata System as our work engine allows this eSP application to scale quite well. A “what-if” scenario takes as long as it takes to navigate the APEX pages, while all the computations are done in sub-seconds behind the scenes, thanks to Exadata!

Once we load the Resources from the eSP collector script, or from the eAdam tool, we can start playing with the metadata. Since eSP’s set of known Configurations (Capacity) includes current Engineered Systems (X4), allocating Configurations is a matter of seconds; mapping databases and instances becomes the next step. eSP contains an auto-allocate algorithm for databases and instances, where we can choose between a “balanced” allocation or a “dense” one, with several density factors to choose from (100%, 90%, 80%, 70%, 60% and 50%). With all these automated options, we can try multiple sizing and allocation possibilities in seconds, regardless of whether we are sizing and provisioning for one database or a hundred of them.

eSP and OOW

The Enkitec Sizing and Provisioning (eSP) tool is an internal application that we created to help our customers size their next System or Systems in a sensible manner. The methods we implemented are transparent and unbiased. We are bringing eSP to Oracle Open World 2014. I will personally demo eSP at our assigned booth, #111 at Moscone South. I will be on and off the booth, so if you are interested in a demo please let me know, or contact your Enkitec/Accenture representative. We prefer appointments, but walk-ins are welcome. Hope to see you at OOW!

Written by Carlos Sierra

September 21, 2014 at 5:40 pm

Posted in eAdam, edb360, Exadata, General, OOW

Free script to generate a Line Chart on HTML

with 14 comments

Performance metrics are easier to digest if visualized through line charts. OEM, eDB360, eAdam and other tools use them. If you already have a SQL statement that provides the performance metrics you care about, and just need to generate a line chart for them, you can easily create a CSV file and open it with MS-Excel. But if you want to build an HTML report out of your SQL, that is a bit harder, unless you use existing technologies. Tools like eDB360 and eAdam use Google Charts as a mechanism to easily generate such charts. A peer asked me if we could have this functionality stand-alone, and that challenged me to create and share it.

HTML Line Chart (sample output)
The HTML line chart report above was created with the script line_chart.sql shown below. The actual chart, which includes zoom functionality in HTML, can be downloaded from this Dropbox location. Feel free to use this line_chart.sql script as a template to display your performance metrics. It can display several series in one chart (the example above shows only one), and by reviewing the code below you will find out how easy it is to adjust it to your own needs. The chart above was created using a simple query against the Oracle sample schema SH, but the actual use could be performance metrics or any other application time series.

Script

SET TERM OFF HEA OFF LIN 32767 NEWP NONE PAGES 0 FEED OFF ECHO OFF VER OFF LONG 32000 LONGC 2000 WRA ON TRIMS ON TRIM ON TI OFF TIMI OFF ARRAY 100 NUM 20 SQLBL ON BLO . RECSEP OFF;
PRO
DEF report_title = "Line Chart Report";
DEF report_abstract_1 = "<br>This line chart is an aggregate per month.";
DEF report_abstract_2 = "<br>It can be by day or any other slice size.";
DEF report_abstract_3 = "";
DEF report_abstract_4 = "";
DEF chart_title = "Amount Sold over 4 years";
DEF xaxis_title = "Sales between 1998-2001";
--DEF vaxis_title = "Amount Sold per Hour";
--DEF vaxis_title = "Amount Sold per Day";
DEF vaxis_title = "Amount Sold per Month";
DEF vaxis_baseline = ", baseline:2200000";
DEF chart_foot_note_1 = "<br>1) Drag to Zoom, and right click to reset Chart.";
DEF chart_foot_note_2 = "<br>2) Some other note.";
DEF chart_foot_note_3 = "";
DEF chart_foot_note_4 = "";
DEF report_foot_note = "This is a sample line chart report.";
PRO
SPO line_chart.html;
PRO <html>
PRO <!-- $Header: line_chart.sql 2014-07-27 carlos.sierra $ -->
PRO <head>
PRO <title>line_chart.html</title>
PRO
PRO <style type="text/css">
PRO body   {font:10pt Arial,Helvetica,Geneva,sans-serif; color:black; background:white;}
PRO h1     {font-size:16pt; font-weight:bold; color:#336699; border-bottom:1px solid #cccc99; margin-top:0pt; margin-bottom:0pt; padding:0px 0px 0px 0px;}
PRO h2     {font-size:14pt; font-weight:bold; color:#336699; margin-top:4pt; margin-bottom:0pt;}
PRO h3     {font-size:12pt; font-weight:bold; color:#336699; margin-top:4pt; margin-bottom:0pt;}
PRO pre    {font:8pt Monaco,"Courier New",Courier,monospace;}
PRO a      {color:#663300;}
PRO table  {font-size:8pt; border-collapse:collapse; empty-cells:show; white-space:nowrap; border:1px solid #cccc99;}
PRO li     {font-size:8pt; color:black; padding-left:4px; padding-right:4px; padding-bottom:2px;}
PRO th     {font-weight:bold; color:white; background:#0066CC; padding-left:4px; padding-right:4px; padding-bottom:2px;}
PRO td     {color:black; background:#fcfcf0; vertical-align:top; border:1px solid #cccc99;}
PRO td.c   {text-align:center;}
PRO font.n {font-size:8pt; font-style:italic; color:#336699;}
PRO font.f {font-size:8pt; color:#999999; border-top:1px solid #cccc99; margin-top:30pt;}
PRO </style>
PRO
PRO <script type="text/javascript" src="https://www.google.com/jsapi"></script>
PRO <script type="text/javascript">
PRO google.load("visualization", "1", {packages:["corechart"]})
PRO google.setOnLoadCallback(drawChart)
PRO
PRO function drawChart() {
PRO var data = google.visualization.arrayToDataTable([
/* add below more columns if needed (modify 3 places) */
PRO ['Date Column', 'Number Column 1']
/****************************************************************************************/
WITH
my_query AS (
/* query below selects one date_column and a small set of number_columns */
SELECT --TRUNC(time_id, 'HH24') date_column /* preserve the column name */
       --TRUNC(time_id, 'DD') date_column /* preserve the column name */
       TRUNC(time_id, 'MM') date_column /* preserve the column name */
       , SUM(amount_sold) number_column_1 /* add below more columns if needed (modify 3 places) */
  FROM sh.sales
 GROUP BY
       --TRUNC(time_id, 'HH24') /* aggregate per hour, but it could be any other */
       --TRUNC(time_id, 'DD') /* aggregate per day, but it could be any other */
       TRUNC(time_id, 'MM') /* aggregate per month, but it could be any other */
/* end of query */
)
/****************************************************************************************/
/* no need to modify the date column below, but you may need to add some number columns */
SELECT ', [new Date('||
       TO_CHAR(q.date_column, 'YYYY')|| /* year */
       ','||(TO_NUMBER(TO_CHAR(q.date_column, 'MM')) - 1)|| /* month - 1 */
       --','||TO_CHAR(q.date_column, 'DD')|| /* day */
       --','||TO_CHAR(q.date_column, 'HH24')|| /* hour */
       --','||TO_CHAR(q.date_column, 'MI')|| /* minute */
       --','||TO_CHAR(q.date_column, 'SS')|| /* second */
       ')'||
       ','||q.number_column_1|| /* add below more columns if needed (modify 3 places) */
       ']'
  FROM my_query q
 ORDER BY
       date_column
/
/****************************************************************************************/
PRO ]);
PRO
PRO var options = {
PRO backgroundColor: {fill: '#fcfcf0', stroke: '#336699', strokeWidth: 1},
PRO explorer: {actions: ['dragToZoom', 'rightClickToReset'], maxZoomIn: 0.1},
PRO title: '&&chart_title.',
PRO titleTextStyle: {fontSize: 16, bold: false},
PRO focusTarget: 'category',
PRO legend: {position: 'right', textStyle: {fontSize: 12}},
PRO tooltip: {textStyle: {fontSize: 10}},
PRO hAxis: {title: '&&xaxis_title.', gridlines: {count: -1}},
PRO vAxis: {title: '&&vaxis_title.' &&vaxis_baseline., gridlines: {count: -1}}
PRO }
PRO
PRO var chart = new google.visualization.LineChart(document.getElementById('chart_div'))
PRO chart.draw(data, options)
PRO }
PRO </script>
PRO </head>
PRO <body>
PRO <h1>&&report_title.</h1>
PRO &&report_abstract_1.
PRO &&report_abstract_2.
PRO &&report_abstract_3.
PRO &&report_abstract_4.
PRO <div id="chart_div" style="width: 900px; height: 500px;"></div>
PRO <font class="n">Notes:</font>
PRO <font class="n">&&chart_foot_note_1.</font>
PRO <font class="n">&&chart_foot_note_2.</font>
PRO <font class="n">&&chart_foot_note_3.</font>
PRO <font class="n">&&chart_foot_note_4.</font>
PRO <pre>
L
PRO </pre>
PRO <br>
PRO <font class="f">&&report_foot_note.</font>
PRO </body>
PRO </html>
SPO OFF;
SET HEA ON LIN 80 NEWP 1 PAGES 14 FEED ON ECHO OFF VER ON LONG 80 LONGC 80 WRA ON TRIMS OFF TRIM OFF TI OFF TIMI OFF ARRAY 15 NUM 10 NUMF "" SQLBL OFF BLO ON RECSEP WR;
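
To try it, run the script from SQL*Plus as a user with SELECT access on the SH sample schema, then open the generated file in a browser:

$ sqlplus <user>/<password>
SQL> START line_chart.sql
SQL> EXIT
-- then open line_chart.html in your browser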
Written by Carlos Sierra

July 28, 2014 at 2:34 pm