Archive for the ‘TRCANLZR (TRCA)’ Category
SQL Tuning with SQLTXPLAIN 2-day Workshop
SQLTXPLAIN is a SQL Tuning tool widely used by the Oracle community. Distributed through My Oracle Support (MOS) under document 215187.1, this free tool is available for download and use by anyone with MOS access. It has helped expedite the resolution of SQL Tuning issues thousands of times, and many Oracle DBAs and Developers benefit from its use on a daily basis.
Stelios Charalambides has done an excellent job writing a book on this topic. In his book Stelios covers many aspects of SQLTXPLAIN and some related topics. I highly recommend getting a copy of this book if you want to learn more about SQLTXPLAIN. It is available at Amazon and many other retailers.
The new 2-day SQLTXPLAIN Workshop offered by Enkitec (an Oracle Platinum business partner and my employer) is a completely new course that interleaves “how to use SQLTXPLAIN effectively” with important related SQL Tuning topics such as Plan Flexibility and Plan Stability. This hands-on workshop offers participants the unique opportunity to fully understand the contents of SQLTXPLAIN and its vast output through an interactive session. About half the time is dedicated to short guided labs, while the other half uses presentations and demos. This workshop is packed with content. It was a real challenge to fit so much material into only two days, but I am very pleased with the result. It became a two-day, intensive, hands-on knowledge-transfer workshop on SQLTXPLAIN and SQL Tuning!
The first session of this workshop is scheduled for November 7-8 in Dallas, Texas. I expect this pilot session to fill up fast. Additional public and onsite sessions will be offered during 2014. I hope to meet many of you face to face on November 7!
SQLTXPLAIN under new administration
With great pleasure I am announcing here that SQLTXPLAIN (SQLT) is “under new administration”. What does this mean? In short, almost nothing to you. Under the covers, it means you will start seeing a new (and younger) face associated with SQLT. My good friend Mauro Pagano from Oracle has graciously accepted the role of being the first/main contact for this tool. He has also accepted the responsibility of supporting, maintaining and enhancing it. Mauro won’t be alone in this task. Another good friend of mine, Abel Macias, will back up Mauro when needed.
Support of SQLT usually goes like this: if someone from Oracle Support requests you to provide a SQLT report, the engineer owning the Service Request (SR) is your main point of contact, including for questions, concerns or issues installing SQLT. That same engineer can reach out to Mauro or Abel for assistance if needed. On the other hand, if you are an Oracle business partner or you are using SQLT on your own terms, and you have a question, concern or issue about a SQLT installation, then refer to Metalink (MOS) note 215187.1, where you can see who the main contact for SQLT is and how to email him/her. As of today that person is Mauro.
A bit about Mauro and Abel: I have worked with both for several years now, and besides being super qualified, I have learned that each of us usually has a different technical view on a given issue. So, a brainstorm between two or all three of us usually provides a much better outcome than using only one brain… Anyway, Mauro and Abel have been instrumental to the success of SQLT, always providing great ideas which translate into very useful features. SQLT is my baby, but it is also theirs! 😉
Of course I will continue advising on SQLT. That includes frequent communication with Mauro and Abel, but it also means continuing to help the Oracle community, especially when it comes to “how to make sense of the output”. So, SQLT is “under new administration”, but it should mean almost nothing different to you.
Keep using SQLT and keep providing your valuable feedback! SQLT is what it is today thanks to the great ideas we constantly receive and implement!
Differences between TKPROF and Trace Analyzer TRCANLZR (TRCA)
When an Oracle user process performs poorly, meaning slower than business requirements, the most common way to start its diagnosis is by turning SQL Trace on, then proceeding to review the trace file that is created. This process is usually referred to as “tracing” and can be accomplished by several methods, including the DBMS_MONITOR package and ALTER SESSION commands.
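For quick reference, here is a minimal sketch of both approaches (the v$diag_info query works on 11g and later); adjust it to your own session and needs:

-- Option 1: DBMS_MONITOR, tracing the current session (waits and binds included)
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => NULL, serial_num => NULL, waits => TRUE, binds => TRUE);
-- ... execute the slow process here ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

-- Option 2: plain ALTER SESSION (basic tracing, no wait events captured)
ALTER SESSION SET SQL_TRACE = TRUE;
-- ... execute the slow process here ...
ALTER SESSION SET SQL_TRACE = FALSE;

-- Locate the trace file just produced (11g onwards)
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';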
Back to the question that motivated this Blog entry: What is the difference between TKPROF and Trace Analyzer? Both input a trace file generated by SQL Trace (also known as EVENT 10046) and both produce a summary out of the input trace.
TKPROF
It has been around since early releases of Oracle. It is included as part of the database software. It is fast. It produces an output report in plain text format. It performs number crunching in memory. It does not have to connect to a database (it can if you want to include Explain Plans in its report). It provides a nice and clean summary of performance metrics for each SQL statement included in the trace file.
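A typical TKPROF invocation looks more or less like this sketch; the trace and output file names are hypothetical, and the sort, sys and explain options are all optional:

tkprof ORCL_ora_12345.trc ORCL_ora_12345.prf sort=prsela,exeela,fchela sys=no explain=scott/tiger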
Trace Analyzer TRCANLZR (TRCA)
It is an add-on tool and not part of the Oracle database product. It is free to download and use from My Oracle Support (MOS), previously known as Metalink (under note 224270.1). It takes longer to execute than TKPROF. It provides more diagnostics details than TKPROF. It connects to a database (the same one where the trace was produced, or some other). It relies on a database to store the parsed trace, which can help if you want to do some data mining on the contents of the trace. Its output is HTML as well as text, and it is easier to read than TKPROF output, especially regarding the Execution Plan. Bottom line: it is like TKPROF on steroids, as per Eddie Awad’s blog.
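For reference, a typical TRCA execution looks something like the sketch below (the file name and credentials are hypothetical; the exact steps are documented in MOS note 224270.1): connect as a user granted the TRCA_USER_ROLE and pass the name of the trace to analyze.

cd trca/run
sqlplus scott/tiger
SQL> START trcanlzr.sql ORCL_ora_12345.trc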
Conclusions
Both TKPROF and TRCA are great tools. Keep in mind that there are also some excellent tools external to Oracle in this area. If you need more advanced functionality than what TKPROF or TRCA provide, I encourage you to explore these external tools. Personally, I have evaluated Method R and I liked it very much. Also, if you want to learn more about the contents of a SQL Trace you may be interested in this book: Optimizing Oracle Performance by Cary Millsap and Jeff Holt, published by O’Reilly Media.
SQLTXPLAIN and TRCANLZR use PL/SQL NATIVE compilation mode
Starting with SQLT/TRCA release 11.4.5.4, both tools switched the PL/SQL compilation mode of their libraries from INTERPRETED to NATIVE. This change was introduced to reduce overhead in the PL/SQL code used by these two tools.
Some customers report that when installing SQLT or TRCA they get PL/SQL errors similar to the ones below. That means those systems are not correctly set up to benefit from the NATIVE compilation mode for PL/SQL. If you get similar errors you have two options: configure your system so it can benefit from this PL/SQL optimization, or revert SQLT/TRCA to INTERPRETED mode.
Typical errors observed:
LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0      PLS-00923: native compilation failed: C compilation or linking failed:spdzj.c:Could not invoke program /opt/Prod/SUNW/bin/cc:No such file or directory

LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0      PLS-00801: internal error [spdzj.c:child process terminated due to signal:]
If you want to revert SQLT to the INTERPRETED PL/SQL compilation mode, follow these steps:
- Modify sqlt/install/sqcommon1.sql, removing the two dashes “--” at the beginning of this line (27): --DEF plsql_code_type = 'INTERPRETED';
- Re-install SQLT by executing sqlt/install/sqcreate.sql
- If the problem persists, perform a fresh install by executing sqlt/install/sqdrop.sql followed by sqlt/install/sqcreate.sql
Be aware that you may have to perform these steps above every time you upgrade your SQLT tool until you address the root cause of your PL/SQL configuration issue.
To perform the same on TRCA, look for the script trca/install/tacommon1.sql and modify its line 27, removing the two dashes “--” at the beginning of the line so it becomes: DEF plsql_code_type = 'INTERPRETED';. Then proceed to install the tool with trca/install/tacreate.sql as usual.
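If instead you prefer to address the root cause, it helps to verify how the libraries were actually compiled and what the instance default is. The queries below are just a sketch of that verification; the owner filter assumes the standard SQLT and TRCA schema names:

-- Instance-wide default compilation mode
SHOW PARAMETER plsql_code_type

-- Compilation mode of the SQLT/TRCA libraries
SELECT owner, name, type, plsql_code_type
  FROM dba_plsql_object_settings
 WHERE owner IN ('SQLTXPLAIN', 'SQLTXADMIN', 'TRCANLZR', 'TRCADMIN')
 ORDER BY owner, name;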
About TKPROF, Trace Analyzer and Method-R Profiler
When trying to diagnose a process performing poorly, a SQL Trace is usually the starting point. First we enable EVENT 10046 at level 1, 4, 8 or 12. Next we execute the process of concern. A SQL Trace is produced and we have our result, right? Well, not quite. There are some caveats. We need to be sure we included wait events in the trace file, so level 8 or 12 is required. But once we have the raw SQL Trace: how do we analyze it? Here I talk about three tools which take a SQL Trace as input and output some sort of profile.
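As a reminder of the syntax, this is the kind of command I mean; level 8 captures wait events and level 12 captures waits plus bind values:

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... execute the process of concern ...
ALTER SESSION SET EVENTS '10046 trace name context off';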
TKPROF
TKPROF has been around for more than 20 years! It is part of the product, meaning: no extra cost. It is fast. And it provides the basics. For each SQL statement, we get execution metrics, and we can identify which statements are the ones consuming the most service (CPU) or wait time. I would say that TKPROF is always a good starting point when we are tasked with the analysis of a process performing poorly.
TKPROF has some deficiencies, though: it may core-dump with large files, and sometimes it is hard to read, especially with a complex row source plan. On the other hand, TKPROF is fast and does not require connecting back to the database where the Trace was produced. If I only care about CPU time and wait events associated only with database calls (non-idle), I can get a good picture of my issue by using plain old TKPROF. If wait events between database calls (idle) are my concern, I feel a bit lost with TKPROF.
Trace Analyzer
Trace Analyzer, also known as TRCANLZR or TRCA for short, has been available for about 10 years. It started as an Oracle internal tool, which provided a bit more than TKPROF. Over time it became more precise with the accounting of the 3 times: CPU Time, Elapsed Time and Response Time. TRCA works according to this simple Time model:
- Elapsed Time = “CPU Time” + “non-idle Waits” + “some un-accounted for time”
- Response Time = “Elapsed Time” + “idle Waits” + “some un-accounted for time”
Since a SQL Trace contains time stamps, the total Response Time can easily be computed. Also, within the Trace we have a good record of the database calls’ Elapsed and CPU Time, and all waits are also measured. So, TRCA simply computes those un-accounted for times and provides all sorts of summaries, which mimic TKPROF’s well-known format while adding some desired missing parts.
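To make the model concrete, here is a small worked example with made-up numbers:

- Suppose a trace spans a Response Time of 100s; the database calls report 60s of Elapsed Time, of which 35s is CPU Time and 20s are non-idle Waits, and the idle Waits between calls add up to 38s.
- Un-accounted for time within the calls = 60 - 35 - 20 = 5s
- Un-accounted for time between the calls = 100 - 60 - 38 = 2s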
TRCA expands its output by gathering some interesting pieces out of the data dictionary, like CBO statistics and indexes. Does this mean it has to connect to a database? Yes, it does. It includes a technique that allows you to process a SQL Trace on a system other than the one that originated it, but it requires some steps like taking a small snapshot of the source data dictionary and placing it within the TRCA repository on the target system. Not very complex, but not everybody knows how to do this…
Overall, I would say TRCA is a good tool that provides more than TKPROF, takes longer to execute, and is free to use.
Method-R Profiler
When I learned about the Method-R Profiler some years ago, I decided to avoid looking at its output. The reason was simple: I did not want to be tempted to replicate some cool functionality into TRCA. I wanted TRCA to be what it is without any influence from a tool other than TKPROF. Nevertheless, a few months ago, while reviewing a performance issue, the only file provided was the output of the Method-R Profiler. I was surprised to learn how intuitive it was to read. It was obvious to me that the Method-R Profiler was designed by someone who really masters the topic of performance analysis. Method-R’s focus is WHERE the time is. To me, that is the only right approach for this kind of tool. We all want to find quickly and easily WHERE the time is consumed. It could be service time (CPU) or some wait event (non-idle or idle).
I had the pleasure to chat with Cary Millsap from Method-R during RMOUG 2013. We talked about a few things, including Method-R Profiler and Trace Analyzer. Cary offered me a temporary license for evaluation, which I accepted. I tested Method-R Profiler using a SQL Trace from one of my typical test scripts. I totally like Method-R. It gives me what I need to diagnose a process performing poorly. Its output is accurate, clear and clean.
Pros:
- Best content. It clearly identifies WHERE the time is spent. So we can focus correctly where we need to.
- Easy to install. I installed it on my laptop and it was as simple as downloading a file. I thought I was missing something, but not at all. Installation took me no time at all!
- Installs nothing in the database. This means I can analyze Trace files on a system other than their source.
- No need to connect to a database.
- Output is clean and easy to read.
Cons:
- It is a licensed product. That means $. But in my opinion, its license prices are peanuts compared to the benefit of having the right tool when needed.
Conclusion
Performance tools are like car insurance. They provide peace of mind while we expect never to use them. As with car insurance, the question is: which one do we get? Do we decide by cost only, or by looking at the possible benefit in case we actually have to use it? When dealing with performance issues, I personally prefer to have the best tools available in the market. And I’d rather get to know them well BEFORE I need to actually use them in a production issue. Having that in mind, I strongly suggest you invest some time getting acquainted with your options, then be sure you have the right tools handy, just in case you face a high-priority issue requiring your expertise (and a quick solution).
SQL Tuning Workshop at the NoCOUG
I am delivering a two-day SQL Tuning Workshop at the Northern California Oracle Users Group (NoCOUG) in Pleasanton, California. Today is the 2nd and last day of this class. Yesterday we discussed: Cost-based Optimizer (CBO) Statistics, Event 10053, Adaptive Cursor Sharing (ACS) and SQL Plan Management (SPM). Today we will discuss some SQL Tuning tools: Event 10046, Trace Analyzer TRCANLZR, SQLTXPLAIN (SQLT), SQL Health-Check (SQLHC) and SQLT Test Case (TC). There are around 35 participants and I would say the level of the class is between intermediate and advanced. Most participants are seasoned DBAs and some have a good level of expertise as Developers. Abel Macias is participating in this session as my co-instructor. He is bringing to class his extensive expertise in SQL Tuning. It is always a challenge to keep the attention of such a large and diverse group, but I would say the level of participation has been quite high. I appreciate all the effort from several contributors of the NoCOUG who have made this training possible. My special thanks to Gwen Shapira, Randy Samberg, Iggy Fernandez and Eric Hutchinson. It is very rewarding to see how Oracle Users Groups are putting together events like this where we can openly share knowledge with our Oracle users community.
I will speak at the actual NoCOUG Conference this Thursday. I will talk about the content of SQLTXPLAIN (SQLT) main report, which is quite comprehensive. I will go over some samples and I will allocate plenty of time for questions during and after my presentation. These days SQLT is widely used within Oracle and now by several enterprise Oracle customers.
Stelios Charalambides’ book about SQL Tuning and SQLTXPLAIN will be released on March 20. The title is Oracle SQL Tuning with Oracle SQLTXPLAIN. This book can now be pre-ordered on Amazon. If you want to learn more about how to use SQLTXPLAIN to diagnose SQL Tuning issues you may want to get a copy of this book.
After NoCOUG I will spend one week at the office, where I will deliver a live Webinar on SQL Tuning to our Oracle Partners. Then my next conference is the popular Hotsos Symposium on March 3-7. In my humble opinion, this annual Hotsos event is the single most relevant one in the area of Oracle Performance. Thus I am very excited to have the opportunity to speak there. A real honor!
2-day open SQL Tuning Class in Pleasanton, CA this month!
I will be delivering a two-day SQL Tuning class on February 19-20 in Pleasanton, CA. It is available to registered Oracle Users thanks to the help from NoCOUG. Space is limited, so if you want to attend you need to hurry.
This class covers:
- CBO statistics (fundamentals and good practices);
- Tracing with Events 10046 and 10053;
- Plan Stability with Adaptive Cursor Sharing (ACS) and SQL Plan Management (SPM);
- SQL Tuning tools: Trace Analyzer TRCANLZR (TRCA), SQLTXPLAIN (SQLT) and SQL Health-Check (SQLHC); and
- SQL Tuning Test Cases (creation and implementation).
This is the first time this 2-day class is offered outside Oracle. Its content is an extract of the internal SQL Tuning Workshops I have developed and delivered to over 700 Oracle Engineers around the globe.
This is not an Oracle University (OU) class, which means the scope is more about the actual trade of day-to-day SQL Tuning and less about product features. So if you have taken a SQL Tuning OU class, then this 2-day session would be a nice addition.
Another thing that you should expect from this class is lots of participation. In my sessions I always encourage participants to bring to the table their current SQL Tuning issues or concerns, so we can openly analyze and discuss them during class. The environment is friendly, welcoming and open.
I hope you can join us on February 19-20.
SQLTXPLAIN 11.4.5.0 is now available. SQLTXADMIN was born on November 21, 2012!
So, what is new in version 11.4.5.0?
There are some functional enhancements and a couple of small fixes. What really changed is the security model supporting SQLT. Now we have two users: SQLTXPLAIN and SQLTXADMIN. The legacy SQLTXPLAIN owns all SQLT Tables, Indexes and Sequences, while SQLTXADMIN owns all SQLT Packages and Views. Why? It has to do with restricting access to a system as much as possible. SQLTXPLAIN used to have several system privileges needed by its set of PL/SQL libraries. Once we split this account into two, we can strip SQLTXPLAIN of all its system privileges and assign a subset of them to the new SQLTXADMIN.
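If you are curious how this split looks on your own system after installing 11.4.5.0, a couple of generic dictionary queries (a simple sketch, nothing SQLT-specific) show it clearly:

-- Objects owned by each SQLT account
SELECT owner, object_type, COUNT(*) objects
  FROM dba_objects
 WHERE owner IN ('SQLTXPLAIN', 'SQLTXADMIN')
 GROUP BY owner, object_type
 ORDER BY owner, object_type;

-- System privileges remaining on each account
SELECT grantee, privilege
  FROM dba_sys_privs
 WHERE grantee IN ('SQLTXPLAIN', 'SQLTXADMIN')
 ORDER BY grantee, privilege;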
Why this change?
We needed SQLTXPLAIN with no system privileges since any SQLT user needs to provide the SQLTXPLAIN password every time SQLT is used. This latter requirement supports the export of the SQLTXPLAIN repository for the SQL being analyzed, which in turn is needed for Test Case (TC) creation. We also needed those system privileges to extract a Test Case Builder (TCB) and to perform several other “admin” tasks SQLT does in the background.
SQLTXADMIN now has the system privileges listed in the “Security Model” section of the sqlt_instructions.html file that comes within the sqlt.zip file from MOS 215187.1. SQLTXPLAIN only has the CREATE SESSION system privilege, and it is no longer granted SQLT_USER_ROLE. So, if you try to execute SQLT connecting as SQLTXPLAIN you will get this error:
BEGIN
*
ERROR at line 1:
ORA-20109: User "SQLTXPLAIN" lacks required "SQLT_USER_ROLE" role.
Request to your DBA: "GRANT SQLT_USER_ROLE TO SQLTXPLAIN;".
ORA-06512: at line 3
Do you want to grant SQLT_USER_ROLE to SQLTXPLAIN?
The answer is NO. What you want instead is to execute SQLT connected as the application user that generated the SQL statement you want to analyze. Be sure this application user has been granted the SQLT_USER_ROLE, or you will get the same error. If for some reason you cannot use the application user or you cannot grant SQLT_USER_ROLE to it, then and only then execute SQLT connecting as SYS or any DBA account.
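In practice the flow looks something like this sketch; SCOTT and the SQL_ID are hypothetical, and sqltxtract.sql is just one of the main methods under sqlt/run:

-- as a DBA, once:
GRANT SQLT_USER_ROLE TO scott;

-- then, connected as the application user:
CONNECT scott/tiger
START sqlt/run/sqltxtract.sql 0w2qpuc6u2zsp
-- (the tool will prompt for the SQLTXPLAIN password)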
Also be aware that SQLTXADMIN is created locked and with a random password. There is no need to unlock this account or reset the password. Of course SYS could do that, but it is neither needed nor wanted by SQLT. Think of SQLTXADMIN as the guardian of the system privileges SQLT needs. SQLT uses those system privileges from within the PL/SQL libraries owned by SQLTXADMIN, mostly acting on top of SQLTXPLAIN objects.
Regarding SQLT_USER_ROLE.
This SQLT_USER_ROLE has also been stripped of system privileges, preserving only two grants: ADVISOR and SELECT_CATALOG_ROLE. If granting those privileges to the application user through this role is a concern at your corporation, you can always revoke this role from the application user when SQLT is not in use, or you can execute SQLT as SYS or any other DBA account.
What happens when you connect and execute SQLT as SYS or DBA instead of the application user that issued the SQL?
Well, you get almost everything, but what you don’t get is the plan produced by the EXPLAIN PLAN FOR command, which basically shows you the plan you would get if bind peeking were disabled. Not a big issue in most cases. So, if you don’t want to grant your application users a couple of system privileges then don’t do it, and execute SQLT connecting as any DBA. You will also lose the EVENT 10053 Trace.
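For context, the plan SQLT obtains through EXPLAIN PLAN FOR is the same one you can produce manually, as in the sketch below with a made-up query; since EXPLAIN PLAN does not peek at binds, it reflects the plan you would get with bind peeking disabled:

EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = :b1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);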
What about Trace Analyzer?
Trace Analyzer (MOS 224270.1), also known as TRCANLZR or TRCA for short, went through the same process. So TRCA 11.4.5.0 is also split into two accounts: TRCANLZR and TRCADMIN. The legacy TRCANLZR owns Tables, Indexes and Sequences, while TRCADMIN owns Views and Packages. The same concept as the SQLTXPLAIN/SQLTXADMIN division of functions, but on a smaller scale. TRCA users must now be granted the new TRCA_USER_ROLE before executing the trcanlzr.sql script to analyze a Trace.
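Similar to SQLT, the flow is: a DBA grants the role once, then the user analyzing the trace runs the script. A minimal sketch with a hypothetical user and trace file:

-- as a DBA, once:
GRANT TRCA_USER_ROLE TO scott;

-- then, connected as that user, from the trca/run directory:
CONNECT scott/tiger
START trcanlzr.sql ORCL_ora_12345.trc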
How does this new “Security Model” affect SQLT or TRCA functionality?
The new security model is expected to have no effect on the functionality of these two tools. The only thing you may notice is that some operations that could be done connecting as SQLTXPLAIN can no longer be done from this account. This means some operations, like creating a SQL Plan Baseline for a SQL statement, now require a qualified account other than SQLTXPLAIN. But for the most part, like executing any of the main methods, a user should not notice any functional difference after 11.4.5.0 is applied.