• xx_perf_watcher for monitoring waits during short time intervals

    A while ago I worked on a performance troubleshooting case where frequent, short-lived degradations of IO performance on NetApp storage were suspected to be the root cause. The problem was getting proof: the averages of IO service time were not alarming enough. I decided to write a tool that could monitor wait times for any DB wait event over short intervals, e.g. to get measurements of db file sequential/scattered read performance every second.

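    The tool itself is covered in the full post; as a rough illustration of the idea (not the actual xx_perf_watcher code), the following PL/SQL sketch samples v$system_event twice, one second apart, and reports the average db file sequential read time for just that interval (it assumes execute privilege on DBMS_LOCK and select access to v$system_event):

      -- set serveroutput on in SQL*Plus to see the result
      declare
        -- cumulative statistics for a single wait event
        l_waits1 number; l_time1 number;  -- first sample:  total_waits, time_waited_micro
        l_waits2 number; l_time2 number;  -- second sample
      begin
        select total_waits, time_waited_micro into l_waits1, l_time1
          from v$system_event where event = 'db file sequential read';

        dbms_lock.sleep(1);  -- one-second measurement interval

        select total_waits, time_waited_micro into l_waits2, l_time2
          from v$system_event where event = 'db file sequential read';

        -- average single-block read time (ms) during that one second only
        if l_waits2 > l_waits1 then
          dbms_output.put_line(round((l_time2 - l_time1) / (l_waits2 - l_waits1) / 1000, 2) || ' ms');
        else
          dbms_output.put_line('no waits in the interval');
        end if;
      end;
      /
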
    Read on →

  • How to find tables containing lots of unused space?

    Have you ever wondered why a table segment consumes as much space as it does, and how one can tell whether the space allocated to each segment is actually used for storing data rather than being mostly empty? Those questions bothered me from time to time, and I was looking for a method that would not require licensing any packs (like the Diagnostic Pack for Segment Advisor, because it requires AWR) and would not do lots of IO by scanning the segments. In the end I found a simple solution for this…

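    The solution itself is in the full post; one possible illustration (not necessarily the method described there) is to compare the space allocated to a segment with a rough estimate of the space its rows need, based only on optimizer statistics, so no extra licensing and no segment scanning is involved. Accuracy obviously depends on how fresh the statistics are:

      -- rough estimate: num_rows * avg_row_len vs. allocated bytes
      select s.owner, s.segment_name,
             round(s.bytes / 1024 / 1024)                    allocated_mb,
             round(t.num_rows * t.avg_row_len / 1024 / 1024) estimated_data_mb
        from dba_segments s
        join dba_tables   t on t.owner = s.owner and t.table_name = s.segment_name
       where s.segment_type = 'TABLE'
         and t.num_rows is not null
       order by s.bytes - t.num_rows * t.avg_row_len desc;
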
    Read on →

  • How to back up a table properly before installation of an online data-fix?

    Do you give the functional support and maintenance teams direct read-write access to production databases? We try not to, and one of the reasons is to test every change before it’s deployed to production. This in turn means that DBAs are the ones who apply all the changes, including small data-fixes. We have procedures for how data-fix changes have to be delivered to the DBAs for installation (scripts along with at least approximate information on how many rows should be changed, as well as some evidence that the script has been tested on a pre-prod database).

    Last week we got a request to install a data-fix (replace some values of one column in a particular table) on a running production database and to create a backup of the table before making the changes. It seems a very simple approach can be used:

    • Create a backup of the table using a CTAS statement (create table as select)
    • Run the provided update statement
    • Commit

    …but if you think about it, it’s not so straightforward after all (the naive version is sketched below)…

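    A naive version of that plan, with a hypothetical table T and a hypothetical data-fix standing in for the real ones, would look roughly like this (the full post explains what is wrong with it):

      create table t_backup as select * from t;  -- 1. CTAS backup

      update t                                   -- 2. the provided data-fix
         set some_col = 'NEW_VALUE'
       where some_col = 'OLD_VALUE';

      commit;                                    -- 3. commit
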
    Read on →

  • Can an autoextensible data file grow bigger than dba_data_files.maxbytes? Sure!

    Yesterday I got some alarms about a tablespace running out of free space; a closer look revealed that those were false alarms and there was still enough free space to let segments grow. That is probably a well-known thing for you, but I didn’t know that “bytes” in DBA_DATA_FILES can get larger than “maxbytes”. That was the case here: the script was checking “maxbytes” for autoextensible datafiles even though the real size (“bytes”) was larger. Continue reading to find out how one can get into this kind of situation.

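    How it happened in my case is in the full post; one way to reproduce such a state (not necessarily the scenario described there) is to resize a datafile manually past its autoextend MAXSIZE, since MAXSIZE only limits automatic extension. A throwaway sketch, assuming DBA privileges and a hypothetical /tmp path:

      create tablespace t_demo
        datafile '/tmp/t_demo01.dbf' size 10m
        autoextend on next 10m maxsize 20m;

      -- a manual resize is not restricted by MAXSIZE
      alter database datafile '/tmp/t_demo01.dbf' resize 30m;

      -- bytes is now larger than maxbytes for this file
      select file_name, bytes, maxbytes
        from dba_data_files
       where tablespace_name = 'T_DEMO';

      drop tablespace t_demo including contents and datafiles;
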
    Read on →

  • Why is the context index on the MV so large?

    I’m not saying the context indexes on Materialized Views are always large, but sometimes they can be. Today we found an issue in a production database where the token information table (named DR$<index_name>$I) of a context index was ~1.7 GB in size, but the MV table segment the index was created on was only ~8 MB. I decided to dig in and find out what the hell was going on there. After a short while it was discovered that there were ~500 sets of token information in the index table for each row in the materialized view. The following demo illustrates how that happened:

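    The demo is in the full post; the size comparison that started the investigation can be reproduced with an ordinary dba_segments query (MY_MV and MY_MV_CTX_IDX are hypothetical names for the MV and its context index):

      -- compare the MV segment with the Oracle Text internal tables of its context index
      select segment_name, segment_type, round(bytes / 1024 / 1024, 1) size_mb
        from dba_segments
       where segment_name = 'MY_MV'
          or segment_name like 'DR$MY_MV_CTX_IDX$%'
       order by bytes desc;
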
    Read on →