About TKPROF, Trace Analyzer and Method-R Profiler
When trying to diagnose a process performing poorly, a SQL Trace is usually the starting point. First we enable event 10046 at level 1, 4, 8 or 12. Next we execute the process of concern. A SQL Trace is produced and we have our result, right? Well, not quite. There are some caveats. We need to be sure we included wait events in the trace file, so level 8 or 12 is required. But once we have the raw SQL Trace: how do we analyze it? Here I discuss three tools that take a SQL Trace as input and produce some sort of profile.
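As a quick reference, enabling and disabling this kind of trace at session level typically looks like the sketch below (standard Oracle syntax; this is a hedged illustration, not part of any of the tools discussed here):

```sql
-- enable extended SQL Trace in the current session
-- level 1 = basic, 4 = + bind values, 8 = + wait events, 12 = + binds and waits
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... execute the process of concern here ...

-- disable the trace
ALTER SESSION SET EVENTS '10046 trace name context off';
```

The resulting trace file lands in the user dump destination of the database server, which is where the tools below pick it up.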
TKPROF
TKPROF has been around for more than 20 years! It is part of the product, meaning: no extra cost. It is fast. And it provides the basics. For each SQL statement we get execution metrics, and we can identify which statements consume the most service (CPU) or wait time. I would say that TKPROF is always a good starting point when we are tasked with the analysis of a process performing poorly.
TKPROF has some deficiencies, though: it may core-dump with large files, and sometimes it is hard to read, especially with a complex row source plan. On the other hand, TKPROF is fast and does not require connecting back to the database where the Trace was produced. If I only care about CPU time and wait events associated with database calls (non-idle), I can get a good picture of my issue by using plain old TKPROF. If wait events between database calls (idle) are my concern, I feel a bit lost with TKPROF.
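For the record, a typical TKPROF invocation is a one-liner. The file names below are placeholders, and the sort keys shown (elapsed time during parse, execute and fetch) are just one common choice for surfacing the top SQL:

```
# sort the report by elapsed time spent parsing, executing and fetching;
# sys=no excludes recursive SYS statements from the report
tkprof mydb_ora_12345.trc mydb_ora_12345.prf sort=prsela,exeela,fchela sys=no
```

No database connection is needed, which is exactly the portability advantage mentioned above.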
Trace Analyzer
Trace Analyzer, also known as TRCANLZR or TRCA for short, has been available for about 10 years. It started as an Oracle internal tool that provided a bit more than TKPROF. Over time it became more precise in its accounting of the three times: CPU Time, Elapsed Time and Response Time. TRCA works according to this simple Time model:
- Elapsed Time = “CPU Time” + “non-idle Waits” + “some un-accounted for time”
- Response Time = “Elapsed Time” + “idle Waits” + “some un-accounted for time”
Since a SQL Trace contains time stamps, the total Response Time can be easily computed. The Trace also holds a good record of database call Elapsed and CPU Times, and all waits are measured as well. So TRCA simply computes those un-accounted-for times and provides all sorts of summaries, which mimic TKPROF’s well-known format while adding some desired missing pieces.
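The arithmetic behind those summaries is straightforward. This small Python sketch (purely illustrative, not part of TRCA) shows how the two un-accounted-for components fall out of the time model once Response Time, Elapsed Time, CPU Time and waits are known:

```python
def trca_time_model(response_time, elapsed_time, cpu_time,
                    non_idle_waits, idle_waits):
    """Compute the two un-accounted-for components of the TRCA time model.

    Elapsed Time  = CPU Time + non-idle waits + un-accounted (within calls)
    Response Time = Elapsed Time + idle waits + un-accounted (between calls)
    All arguments are in seconds.
    """
    unaccounted_within = elapsed_time - cpu_time - non_idle_waits
    unaccounted_between = response_time - elapsed_time - idle_waits
    return unaccounted_within, unaccounted_between

# Hypothetical numbers: 10s response, 6s elapsed, 4s CPU,
# 1.5s non-idle waits, 3.5s idle waits
within, between = trca_time_model(10.0, 6.0, 4.0, 1.5, 3.5)
print(within, between)  # 0.5s within calls, 0.5s between calls
```

Large un-accounted-for values are themselves a diagnostic clue (for example, CPU starvation or trace-file truncation), which is why TRCA reports them explicitly instead of hiding them.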
TRCA expands its output by gathering some interesting pieces out of the data dictionary, like CBO statistics and indexes. Does that mean it has to connect to a database? Yes, it does. It includes a technique that allows processing a SQL Trace on a system other than the one that originated it, but this requires some steps, like taking a small snapshot of the source data dictionary and placing it within the TRCA repository on the target system. Not very complex, but not everybody knows how to do this…
Overall, I would say TRCA is a good tool that provides more than TKPROF, takes longer to execute, and is free to use.
Method-R Profiler
When I learned about Method-R Profiler some years ago, I decided to avoid looking at its output. The reason was simple: I did not want to be tempted to replicate some cool functionality into TRCA. I wanted TRCA to be what it is without any influence from any tool other than TKPROF. Nevertheless, a few months ago, while reviewing a performance issue, the only file provided was the output of Method-R Profiler. I was surprised to learn how intuitive it was to read. It was obvious to me that Method-R Profiler was designed by someone who really masters the topic of performance analysis. Method-R’s focus is WHERE the time is. To me, that is the only right approach for this kind of tool. We all want to find quickly and easily WHERE the time is consumed. It could be service time (CPU) or some wait event (non-idle or idle).
I had the pleasure to chat with Cary Millsap from Method-R during RMOUG 2013. We talked about a few things, including Method-R Profiler and Trace Analyzer. Cary offered me a temporary license for evaluation, which I accepted. I tested Method-R Profiler using a SQL Trace from one of my typical test scripts. I totally like Method-R. It gives me what I need to diagnose a process performing poorly. Its output is accurate, clear and clean.
Pros:
- Best content. It clearly identifies WHERE the time is spent, so we can focus exactly where we need to.
- Easy to install. I installed it on my laptop and it was as simple as downloading a file. I thought I was missing something, but not at all. Installation took me no time at all!
- Installs nothing in the database. This means I can analyze Trace files on a system other than their source.
- No need to connect to a database.
- Output is clean and easy to read.
Cons:
- It is a licensed product. That means $. But in my opinion, its license prices are peanuts compared to the benefit of having the right tool when needed.
Conclusion
Performance tools are like car insurance. They provide peace of mind while we expect never to use them. As with car insurance, the question is: which one do we get? Do we decide by cost only, or by looking at the possible benefit in case we actually have to use it? When dealing with performance issues, I personally prefer to have the best tools available in the market. And I’d rather get to know them well BEFORE I need to actually use them in a production issue. With that in mind, I strongly suggest you invest some time getting acquainted with your options, then be sure you have the right tools handy, just in case you face a high-priority issue requiring your expertise (and a quick solution).
SQLTXPLAIN capabilities – free webinar for Oracle partners (Wednesday 27)
This Wednesday, February 27 at 8 am Pacific Time (11 am ET), I will deliver a one-hour webinar to Oracle partners. The topic is SQLTXPLAIN capabilities and how this tool helps diagnose SQL statements performing poorly. Details are here. The format is 45 minutes of presentation and demo, followed by 15 minutes for a Q&A session.
SQL Tuning Workshop at the NoCOUG
I am delivering a two-day SQL Tuning Workshop at the Northern California Oracle Users Group (NoCOUG) in Pleasanton, California. Today is the 2nd and last day of this class. Yesterday we discussed: Cost-based Optimizer (CBO) Statistics, Event 10053, Adaptive Cursor Sharing (ACS) and SQL Plan Management (SPM). Today we will discuss some SQL Tuning tools: Event 10046, Trace Analyzer TRCANLZR, SQLTXPLAIN (SQLT), SQL Health-Check (SQLHC) and SQLT Test Case (TC). There are around 35 participants, and I would say the level of the class is between intermediate and advanced. Most participants are seasoned DBAs, and some have a good level of expertise as Developers. Abel Macias is participating in this session as my co-instructor. He is bringing to class his extensive expertise in SQL Tuning. It is always a challenge to keep the attention of such a large and diverse group, but I would say the level of participation has been quite high. I appreciate all the effort from the several contributors of the NoCOUG who have made this training possible. My special thanks to Gwen Shapira, Randy Samberg, Iggy Fernandez and Eric Hutchinson. It is very rewarding to see how Oracle Users Groups are putting together events like this, where we can openly share knowledge with our Oracle users community.
I will speak at the actual NoCOUG Conference this Thursday. I will talk about the content of the SQLTXPLAIN (SQLT) main report, which is quite comprehensive. I will go over some samples, and I will allocate plenty of time for questions during and after my presentation. These days SQLT is widely used within Oracle, and now also by several enterprise Oracle customers.
Stelios Charalambides’ book about SQL Tuning and SQLTXPLAIN will be released on March 20. The title is Oracle SQL Tuning with Oracle SQLTXPLAIN. The book can now be pre-ordered on Amazon. If you want to learn more about how to use SQLTXPLAIN to diagnose SQL Tuning issues, you may want to get a copy.
After NoCOUG I will spend one week at the office, where I will deliver a live Webinar on SQL Tuning to our Oracle Partners. Then my next conference is the popular Hotsos Symposium on March 3-7. In my humble opinion, this annual Hotsos event is the single most relevant one in the area of Oracle Performance, so I am very excited to have the opportunity to speak there. A real honor!
RMOUG Training Days February 13 (day 3)
The RMOUG Training Days are over now. Yesterday was the 3rd and last day. What an experience! I enjoyed great sessions and learned a few things, like how to gather System Statistics for Exadata (thanks to Karen Morton). More importantly, I had the opportunity to appreciate the value of networking. When I hear that an Oracle User Group (OUG) is a great place for networking, I usually shrug, since I am not someone looking for business opportunities, fame, or name building. Well, networking is a lot more than that. Let me explain:
During my first presentation yesterday, my laptop crashed and with it my local database. I needed my local database for both of my presentations, and it was crucial for the second. Of course I had a plan B and even a plan C, but that was not the point, since I wanted to stick to plan A, which was the best. My database needed a recovery, and I do not do core Database Administration (DBA) work on a daily basis. In fact, I had never recovered a database myself! So there I was, between sessions and struggling to recover my database. As a self-sufficient, seasoned Oracle guy, I tried myself, but I was walking in circles. Having established a friendly relationship with Oracle’s partner Enkitec (an Oracle-centric consulting firm), I felt comfortable reaching out for help. So I asked Veronica Stigers if she knew who might assist me in recovering my database. She joked with me and immediately referred me to Alex Fatkulin, who was working on his laptop sitting on the floor right there at the Enkitec kiosk. Alex learned how lost I was when he asked me if my database was in archivelog mode and I just stared back at him… Nevertheless, he took control of my laptop and did his magic. Alex recovered my database in less than 2 minutes! So Alex made my day and I owe him big time. My lesson learned here is that events like RMOUG really facilitate networking, and that networking means a lot more than business opportunities. It meant having the courage to reach out for help, and actually finding someone willing to help. I thank Enkitec’s awesome resources, and I especially thank Veronica and Alex for extending a helping hand, and for doing so as good old friends! I know that yesterday we established a lasting relationship, which I treasure already. So, to me, networking means: making new good friends!
Now that RMOUG is over, my attention moves to the Northern California Oracle Users Group (NoCOUG), which is next week! Looking forward to having the opportunity to share some knowledge there and to do some more “networking”…
As time permits, I will be writing more about Execution Plan Stability, and SQL Tuning tools.
RMOUG Training Days February 12 (day 2)
The second day is over now. I attended several interesting sessions. The one I liked the most was about the role of women in technology. I am so glad RMOUG now provides this kind of session! I think we need to encourage women to fill roles that are still dominated by men, like speaking at conferences. If you consider that 40% of DBAs are women while only 11% of speakers are, you get to understand something can be improved here…
Today, I will deliver two one-hour sessions:
- How to improve SQL Performance with new SQL Health Check Tool. Wednesday, February 13 at 8:30 am. Room 4a.
- How to create in 5 minutes a SQL Tuning Test Case using SQLTXPLAIN. Wednesday, February 13 at 1:30 pm. Room 403.
I am so happy to participate as a speaker at the RMOUG! My reward is simple: to see participants nodding and smiling when a concept clicks, or when they get an answer to a question that has been boiling in their heads for some time. After 17 years at Oracle, and having learned from so many smart guys, I am lucky to be in a position where I can answer many questions. Still, I say “I do not know” when asked something I do not know, but when it comes to SQL Tuning, I feel pretty comfortable with most subtopics. I notice great interest in Plan Stability, and in SQL Plan Management in particular. I will be offering sessions in this area within SQL Tuning…
Looking forward to a great third and last day of this event today!
RMOUG Training Days February 11 (day 1)
First day of the RMOUG Training Days February 11-13, 2013 is over. I had the honor of presenting Using SQLTXPLAIN (SQLT) to Diagnose SQL Statements Performing Poorly. I had 14 participants, and we were able to go into the details of several topics raised by this small group. We covered all the material I had prepared. We also covered SQL Plan Management (SPM) in a live demo branched from the SQLT core presentation and demos. The most rewarding part for me was watching the smiles on the faces of most participants. I think everyone got something out of this session.
I wanted to share my personal experience at this great conference. In short, I am having a blast! From being picked up at the airport by Prashant Khanolkar and sharing the ride with Cary Millsap and Faun deHerry; to speaking at the pre-conference event; and sharing a beer (or two) with other speakers. Just being here at this RMOUG event has been an awesome experience that I highly recommend to all! Last night, I had the chance to socialize with well-recognized personalities, including: Kerry Osborne, Maria Colgan, Karl Arao, Kellyn Pot’Vin, Kyle Hailey, Jeff Jacobs, Andy Colvin, Faun deHerry and Scott Spendolini. I also had the chance to meet, or meet for the second time: Tim Gorman, Mogens Nørgaard, Ken Johnson, David Stokes, Lisa Cocklin, George Trujillo and Galo Balda. I met a lot more nice people, but I am not good enough with names to remember them all, especially this early in the morning…
It feels like yesterday when I participated in the Hotsos Symposium in March 2012. I was so intimidated just by thinking that some of the Oracle performance key players were there, and that they might actually sit in my session… I feel deep respect for most of the professionals who gather and are regulars at these events. What I learned at Hotsos 2012 was that most of these “big shots” were quite approachable and down to earth. It was a big relief, since I had been learning from them for years through their books and blogs. For some reason I always thought they would be too busy to talk to someone like me… How wrong I was! That Hotsos 2012 event, almost a year ago, was the inspiration I needed to submit topics and try to participate in other similar events. RMOUG has had the same friendly, open, welcoming environment I discovered at Hotsos. My kudos to the RMOUG organizers for making this event an awesome experience! As long as Oracle is willing to sponsor my travel expenses, I will keep submitting topics to the RMOUG annual meeting… and Hotsos, and NoCOUG, and ECO, and some others…
Now, it is time for some workout before enjoying a busy day at the RMOUG!
2-day open SQL Tuning Class in Pleasanton, CA this month!
I will be delivering a two-day SQL Tuning class on February 19-20 in Pleasanton, CA. It is available to registered Oracle Users thanks to help from the NoCOUG. Space is limited, so if you want to attend you need to hurry.
This class covers:
- CBO statistics (fundamentals and good practices);
- Tracing with Events 10046 and 10053;
- Plan Stability with Adaptive Cursor Sharing (ACS) and SQL Plan Management (SPM);
- SQL Tuning tools: Trace Analyzer TRCANLZR (TRCA), SQLTXPLAIN (SQLT) and SQL Health-Check (SQLHC); and
- SQL Tuning Test Cases (creation and implementation).
This is the first time this 2-day class is offered outside Oracle. Its content is an extract of the internal SQL Tuning Workshops I have developed and delivered to over 700 Oracle Engineers around the globe.
This is not an Oracle University (OU) class, which means the scope is more about the actual trade of day-to-day SQL Tuning and less about product features. So if you have taken a SQL Tuning OU class, then this 2-day session would be a nice addition.
Another thing that you should expect from this class is lots of participation. In my sessions I always encourage participants to bring to the table their current SQL Tuning issues or concerns, so we can openly analyze and discuss them during class. The environment is friendly, welcoming and open.
I hope you can join us on February 19-20.
Materialized Views Health Check and Diagnostics Reports
About mvhcdr.sql
I uploaded to My Oracle Support (MOS) under document_id 1517362.1 a new script mvhcdr.sql to aid in the diagnosis of issues regarding Materialized Views (MV). This script installs nothing in the database, and it produces a zip file with 5 zip files within. The output includes HTML and CSV files from all DBA views related to Materialized Views, as well as Metadata for the MVs and their Logs. The output also includes some Health-Checks like: MV Logs with Outdated Statistics, MV Logs larger than their Master Table, Complete Refresh Candidates and Old Materialized Views. A list of Materialized Views capabilities is also included within the set of output reports, as well as Logs per MV and MVs per Log with some important attributes. CBO Statistics, actual row counts, segment sizes in blocks, indexes, partitions and similar related metadata are included as well. The output of mvhcdr.sql is quite comprehensive. You can find an output sample under the same MOS note 1517362.1.
The main reason for this new script is to help Oracle Support collect diagnostics information around Materialized Views. It can also be used by any Oracle user who may want to analyze the health of the MVs on a system, or to compare their setup to a similar database. Another use is simply to document all the MVs on a database, together with their related metadata.
You can download mvhcdr.sql from its own MOS note, or from SQLTXPLAIN 11.4.5.4 or higher. If the latter, you can find mvhcdr.sql under directory sqlt/utl.
This new script mvhcdr.sql can be used on 10g and 11g databases. You execute it connected as SYS. It has a couple of pre-requisites and just one execution parameter. The output is generated under the local directory from which you executed SQL*Plus.
Pre-requisites
Tables PLAN_TABLE and MV_CAPABILITIES_TABLE must be available. If they are not, create them with the following scripts:
- PLAN_TABLE: $ORACLE_HOME/rdbms/admin/catplan.sql; or $ORACLE_HOME/rdbms/admin/utlxplan.sql;
- MV_CAPABILITIES_TABLE: $ORACLE_HOME/rdbms/admin/utlxmv.sql;
The MV_CAPABILITIES_TABLE is used to generate and report MV Capabilities as the name implies. The PLAN_TABLE is used as a staging repository for the duration of the execution of mvhcdr.sql.
Execution Parameters
- Materialized View name or prefix (optional, defaults to %)
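Putting the pre-requisites and the execution together, a session could look like the sketch below. The `?` in SQL*Plus expands to ORACLE_HOME; how exactly the MV name/prefix parameter is supplied (on the command line or via a prompt) is a detail I am hedging on, so check the MOS note for the exact invocation:

```
$ cd /my/working/dir                 # output zip files are created here
$ sqlplus / as sysdba
SQL> @?/rdbms/admin/utlxplan.sql     -- creates PLAN_TABLE if missing
SQL> @?/rdbms/admin/utlxmv.sql       -- creates MV_CAPABILITIES_TABLE if missing
SQL> @mvhcdr.sql
```

Since the script installs nothing, there is nothing to clean up afterwards besides the generated files.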
Deep-dive on SQLTXPLAIN offered externally for the first time
The Rocky Mountain Oracle Users Group (RMOUG) Training Days is approaching fast: February 11-13, 2013. I hope you are attending this relevant Oracle event. There are so many good sessions that it is hard to decide which ones to attend!
For the first time, I will be conducting a 4-hour deep-dive external session on SQLTXPLAIN (SQLT). The title is Using SQLTXPLAIN (SQLT) to Diagnose SQL Statements Performing Poorly. If you want to know SQLT from the basics to an intermediate level, consider attending this 4-hour session. It will be offered on Monday, February 11 from 1 to 5 pm at the Colorado Convention Center in Denver, Colorado (USA). I love interactive sessions, so I will encourage voluntary participation and open discussions.
In addition to the 4-hour deep-dive on SQLT, I will also speak on Wednesday, February 13 at 8:30 am about How to Improve SQL Performance with New SQL Health Check Tool. This is a 1-hour seminar with plenty of time for Q&A.
If you attend the event and want to discuss any issue with regard to SQLTXPLAIN (SQLT), SQL Health-Check (SQLHC) or simply SQL Tuning, please approach me. I will be there to help, share, listen, learn, discuss, etc.
