Why do you need to gather CBO Statistics?
As I help a peer with a SQL Tuning engagement, I face the frequent case of: “We do not want to gather CBO schema object statistics because we don’t want our Execution Plans to change”. Well, the bad news is that not gathering stats only gives you a false sense of safety, because your Execution Plans can change anyway. The reason has to do with Predicates referencing values out of range. Typical cases include ranges of dates, or columns seeded with values out of a sequence (surrogate keys). Most applications use both.

Example: a predicate that references the last X days of data. Imagine that the date column in this predicate has statistics with outdated low and high values; let’s say the high value corresponds to the last time we gathered stats, several months ago. Starting with 10g, the CBO uses a heuristic for such cases: the cardinality of the Predicate is computed from the low/high range in the stats and how far outside that range the value in the Predicate falls. In short, the estimated cardinality changes over time, as the Predicate on the last X days of data drifts further away from the stale high value, one day after the next. At some point the CBO may decide on a different Plan (with a lower cost), and the performance of such a SQL may indeed change abruptly. Then we scratch our heads and repeat to ourselves: but we did not gather statistics, why did the Plan change?
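To see whether you are in this situation, a quick check like the sketch below helps; the schema, table and column names (APP.SALES.SALE_DATE) are just placeholders for illustration:

```sql
-- Hypothetical predicate on "the last 7 days of data":
--   SELECT ... FROM app.sales WHERE sale_date >= SYSDATE - 7;
--
-- Check when the column stats were last gathered; if LAST_ANALYZED is months
-- old, the stored HIGH_VALUE lags far behind SYSDATE and the CBO starts
-- decaying the estimated cardinality for this predicate.
SELECT column_name,
       last_analyzed,
       num_distinct,
       num_nulls,
       histogram
  FROM dba_tab_col_statistics
 WHERE owner       = 'APP'
   AND table_name  = 'SALES'
   AND column_name = 'SALE_DATE';
```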
So, if you understand the rationale above, then you would agree that not updating CBO schema stats does not offer any real Plan Stability. So, my recommendation is simple: have reasonable CBO statistics and live with the possibility that some Plans will change (they would change anyway, even if you do not gather stats). Always keep this in mind:
The CBO has better chances to produce optimal Plans if we provide reasonable CBO statistics.
Now the good news: if you have some business-critical SQL statements and you want them to have stable Plans, then Oracle already provides SQL Plan Management, which is designed exactly for Plan Stability. So, instead of gambling every day, hoping your Plans will not change because you preserve outdated stats, face reality: gather stats, and create SQL Plan Baselines for those few SQL statements that may prove to have an otherwise unstable Plan and are indeed critical for your business. On 10g you can use SQL Profiles instead.
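As a minimal sketch on 11g, something like this loads the current Plan of a critical statement from the cursor cache into a SQL Plan Baseline (the SQL_ID and Plan Hash Value are placeholders):

```sql
-- Load one known-good plan from the cursor cache into a SQL Plan Baseline.
-- SQL_ID and PLAN_HASH_VALUE below are placeholders for your critical SQL.
SET SERVEROUTPUT ON
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '0abc123def456g',
               plan_hash_value => 1234567890,
               fixed           => 'NO',
               enabled         => 'YES');
  DBMS_OUTPUT.PUT_LINE('plans loaded: ' || l_plans);
END;
/
```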
On 10g and 11g, just let the automatic job that gathers CBO schema statistics do its part. In most cases, that is good enough. If you have transient data, for example ETL tasks, then you may want the process workflow to gather stats on the particular Tables as soon as the data is loaded or transformed, and before it is consumed. The trick is simple: “have the stats represent the data at all times”. At the same time, there is no need to overdo the stats; just take care when the change in the data is significant.
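Such an ETL step could look roughly like the sketch below; the schema and table names are placeholders, and the parameters are just a reasonable starting point:

```sql
-- Gather stats on the just-loaded table before downstream SQL consumes it.
-- ETL.STG_ORDERS is a placeholder name.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'ETL',
    tabname          => 'STG_ORDERS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE,       -- include indexes
    no_invalidate    => FALSE);     -- let dependent cursors re-parse
END;
/
```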
Custom SQL Profile and Plan Stability on 10g
SQLTXPLAIN includes, in its sqlt/utl sub-directory, a script named coe_xfr_sql_profile.sql. This script takes a SQL_ID as input, asks you to choose from a list of known Plan Hash Values (PHV), and after a few seconds it outputs another script that contains the Outline of that PHV together with the SQL Text for the given SQL_ID.
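A typical run looks roughly like this; the SQL_ID and PHV are placeholders, and the exact prompts and generated file name may vary with your SQLT version:

```sql
-- Connect as a DBA-privileged user, then (SQL_ID and PHV are placeholders;
-- the script prompts for them if you omit the parameters):
SQL> START sqlt/utl/coe_xfr_sql_profile.sql 0abc123def456g 1234567890

-- It generates a stand-alone script, named after the SQL_ID and PHV, that you
-- can review, edit if needed, and execute on this or another system:
SQL> START coe_xfr_sql_profile_0abc123def456g_1234567890.sql
```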
These are some of the things we can do with the script generated by coe_xfr_sql_profile.sql for your SQL_ID/PHV:
- If we execute the generated script with no modifications on the same system, it creates a Custom SQL Profile with the Outline of the Plan for the given SQL_ID/PHV. This Plan is independent of CBO Statistics, meaning your Plan stays stable even if you gather new statistics (or delete them).
- If we execute this generated script on a similar system (one with the same schema objects), it creates the Custom SQL Profile from the source system on the target system. That means we have basically migrated the PHV to another system. Think of this: “DEV exhibits a good-performing Plan. PROD only generates a poorly-performing Plan. You want to quickly migrate the good Plan from DEV into PROD”.
- You can only get a good-performing Plan if you place a CBO Hint in your SQL, but you cannot modify the application. So, you execute a modified version of your SQL which includes the CBO Hint, then execute coe_xfr_sql_profile.sql for this modified SQL. When you open the generated script, you remove the CBO Hint from the SQL Text section (the layout of the generated script is sketched after this list). The newly generated and modified script then contains the original SQL without the Hint and the Outline of the Plan with the Hint. When executed on either the source system or a target system, it creates a Custom SQL Profile that associates the original SQL with the good-performing Plan.
- Your original SQL contains a CBO Hint, or a set of Hints, and it produces a poorly-performing Plan. When you remove them you notice you get a good-performing Plan, so you execute coe_xfr_sql_profile.sql for the modified SQL (the one with the Hints removed). Then you open the generated script and replace the SQL Text in it with the original one (with the Hints). This generated and modified script, when executed on the source or a target system, creates a Custom SQL Profile for the original SQL with the good-performing Plan.
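To make those edits concrete: the generated script is essentially a call to DBMS_SQLTUNE.IMPORT_SQL_PROFILE with two clearly marked pieces, the SQL Text and the Outline Hints. The skeleton below is a simplified sketch of that layout, not the verbatim output; the SQL, Hints and names are placeholders:

```sql
-- Simplified skeleton of the script generated by coe_xfr_sql_profile.sql.
-- The real script contains more detail; SQL, hints and names are placeholders.
DECLARE
  sql_txt CLOB;
  h       SYS.SQLPROF_ATTR;
BEGIN
  -- SQL Text section (this is where you would remove or restore a CBO Hint):
  sql_txt := q'[
SELECT /* original SQL text goes here */ *
  FROM app.sales
 WHERE sale_date >= SYSDATE - 7
]';
  -- Outline section: the full set of Hints for the chosen PHV:
  h := SYS.SQLPROF_ATTR(
         q'[BEGIN_OUTLINE_DATA]',
         q'[INDEX_RS_ASC(@"SEL$1" "SALES"@"SEL$1" ("SALES"."SALE_DATE"))]',
         q'[END_OUTLINE_DATA]');
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => sql_txt,
    profile     => h,
    name        => 'coe_0abc123def456g_1234567890',
    description => 'Custom SQL Profile for 0abc123def456g PHV 1234567890',
    category    => 'DEFAULT',
    validate    => TRUE,
    replace     => TRUE,
    force_match => FALSE);  -- TRUE would also match SQL with different literals
END;
/
```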
If, after you create a Custom SQL Profile with coe_xfr_sql_profile.sql, you change your mind and want to drop that Profile, you will find the drop command in the same generated script that created your Profile.
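That drop command is just a short call like this one (the Profile name is a placeholder):

```sql
-- Drop the Custom SQL Profile created above (name is a placeholder).
BEGIN
  DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'coe_0abc123def456g_1234567890');
END;
/
```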
The coe_xfr_sql_profile.sql script uses the API DBMS_SQLTUNE.IMPORT_SQL_PROFILE, which is not documented.
A word of caution:
SQL Profiles are basically a set of CBO Hints associated with a SQL Text through a “signature”, which is a hash computed on the SQL Text. If a Hint from the Profile cannot be applied, it does not invalidate the entire Profile. In other words, the CBO will still try to apply as many Hints as possible. Imagine that one of the Hints refers to an Index and that Index is dropped: the Hint will be silently ignored. This may produce sub-optimal Plans.
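After any such change in the environment, it is worth verifying that the statement still picks up the Profile and produces the Plan you expect; a quick check like the sketch below helps (the SQL_ID is a placeholder):

```sql
-- Confirm the statement is still using the Profile and the expected Plan.
SELECT sql_id, child_number, plan_hash_value, sql_profile
  FROM v$sql
 WHERE sql_id = '0abc123def456g';
```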
Another consideration is the effect of a Virtual Private Database (VPD), which appends dynamic predicates to your SQL. In this case, the Plan produced with the Profile may not be optimal.
If your database is 11g or higher, you would rather use SQL Plan Management (SPM), which provides better Plan control functionality.