About TKPROF, Trace Analyzer and Method-R Profiler
When trying to diagnose a poorly performing process, a SQL Trace is usually the starting point. First we enable event 10046 at level 1, 4, 8 or 12. Next we execute the process of concern. A SQL Trace is produced and we have our result, right? Well, not quite. There are some caveats. We need to be sure wait events are included in the trace file, so level 8 or 12 is required. But once we have the raw SQL Trace: how do we analyze it? I talk here about three tools that take a SQL Trace as input and produce some sort of profile.
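As a refresher, one common way to enable event 10046 on our own session is with ALTER SESSION; level 12 captures both bind values and wait events. This is only a sketch: the trace file name and directory depend on your environment, and other mechanisms (such as DBMS_MONITOR) exist as well.

```sql
-- Level 1: basic SQL trace; 4: adds bind values; 8: adds waits; 12: binds + waits
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... execute the process of concern here ...

-- Turn tracing off so the trace file is closed cleanly
ALTER SESSION SET EVENTS '10046 trace name context off';
```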
TKPROF has been around for more than 20 years! It is part of the product, meaning: no extra cost. It is fast. And it provides the basics. For each SQL statement we get execution metrics, and we can identify which statements are consuming the most service (CPU) or wait time. I would say that TKPROF is always a good starting point when we are tasked with the analysis of a poorly performing process.
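A typical invocation looks like the one below. This is a sketch with a hypothetical trace file name; sys=no skips recursive SYS statements, and sort=exeela,fchela orders statements by execute and fetch elapsed time so the heaviest ones appear first.

```sh
# Input trace, output report; skip SYS recursive SQL, sort by elapsed time
tkprof orcl_ora_12345.trc report.txt sys=no sort=exeela,fchela
```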
TKPROF has some deficiencies, though: it may core-dump with large files, and sometimes it is hard to read, especially with a complex row source plan. On the other hand, TKPROF is fast and does not require connecting back to the database where the trace was produced. If I only care about CPU time and wait events associated with database calls (non-idle), I can get a good picture of my issue by using plain old TKPROF. If wait events between database calls (idle) are my concern, I feel a bit lost with TKPROF.
Trace Analyzer, also known as TRCANLZR or TRCA for short, has been available for about 10 years. It started as an Oracle internal tool that provided a bit more than TKPROF. Over time it became more precise in its accounting of the three times: CPU Time, Elapsed Time and Response Time. TRCA works according to this simple time model:
- Elapsed Time = “CPU Time” + “non-idle Waits” + “some un-accounted for time”
- Response Time = “Elapsed Time” + “idle Waits” + “some un-accounted for time”
Since a SQL Trace contains timestamps, the total Response Time can be easily computed. Also within the trace we have a good record of database call Elapsed and CPU Time, and all waits are measured as well. So, TRCA simply computes the un-accounted for times and provides all sorts of summaries, which mimic TKPROF’s well-known format while adding some desired missing pieces.
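The arithmetic behind the model above can be sketched in a few lines. This is only an illustration with made-up numbers, not TRCA code; times are in seconds.

```python
def unaccounted_in_elapsed(elapsed, cpu, non_idle_waits):
    """Elapsed Time = CPU Time + non-idle waits + un-accounted for time."""
    return elapsed - (cpu + non_idle_waits)

def unaccounted_in_response(response, elapsed, idle_waits):
    """Response Time = Elapsed Time + idle waits + un-accounted for time."""
    return response - (elapsed + idle_waits)

# Hypothetical totals from a trace: 100s wall clock (Response Time),
# 60s of database-call Elapsed Time, 35s CPU, 20s non-idle waits,
# and 30s of idle waits between calls.
print(unaccounted_in_elapsed(60.0, 35.0, 20.0))    # → 5.0
print(unaccounted_in_response(100.0, 60.0, 30.0))  # → 10.0
```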
TRCA expands its output by gathering some interesting pieces out of the data dictionary, like CBO statistics and indexes. Does this mean it has to connect to a database? Yes, it does. It includes a technique that allows processing a SQL Trace on a system other than the one that originated it, but this requires some steps, like taking a small snapshot of the source data dictionary and placing it within the TRCA repository on the target system. Not very complex, but not everybody knows how to do this…
Overall, I would say TRCA is a good tool that provides more than TKPROF, takes longer to execute, and is free to use.
When I learned about Method-R Profiler some years ago, I decided to avoid looking at its output. The reason was simple: I did not want to be tempted to replicate some cool functionality into TRCA. I wanted TRCA to be what it is without any influence from some tool other than TKPROF. Nevertheless, a few months ago, while reviewing a performance issue, the only file provided was the output of Method-R Profiler. I was surprised to learn how intuitive it was to read. It was obvious to me that Method-R Profiler was designed by someone who really masters the topic of performance analysis. Method-R’s focus is WHERE the time goes. To me, that is the only right approach for this kind of tool. We all want to find quickly and easily WHERE the time is consumed. It could be service time (CPU) or some wait event (non-idle or idle).
I had the pleasure to chat with Cary Millsap from Method-R during RMOUG 2013. We talked about a few things, including Method-R Profiler and Trace Analyzer. Cary offered me a temporary license for evaluation, which I accepted. I tested Method-R Profiler using a SQL Trace from one of my typical test scripts. I totally like Method-R. It gives me what I need to diagnose a process performing poorly. Its output is accurate, clear and clean.
- Best content. It clearly identifies WHERE the time is spent, so we can focus our efforts exactly where they are needed.
- Easy to install. I installed it on my laptop and it was as simple as downloading a file. I thought I was missing something, but not at all. Installation took me no time at all!
- Installs nothing in the database. This means I can analyze Trace files on a system other than their source.
- No need to connect to a database.
- Output is clean and easy to read.
- It is a licensed product. That means $. But in my opinion, its license prices are peanuts compared to the benefit of having the right tool when needed.
Performance tools are like car insurance. They provide peace of mind while we hope never to use them. As with car insurance, the question is: which one do we get? Do we decide by cost alone, or by looking at the possible benefit in case we actually have to use it? When dealing with performance issues, I personally prefer to have the best tools available in the market. And I’d rather get to know them well BEFORE I need to use them in a production issue. With that in mind, I strongly suggest you invest some time getting acquainted with your options, then make sure you have the right tools handy, just in case you face a high-priority issue requiring your expertise (and a quick solution).