How to Find Users having DBA Role in Oracle

The default DBA role is automatically created during Oracle Database installation. It contains most database system privileges, so it should be granted only to actual database administrators. To find out which users have been granted the DBA role, query the DBA_ROLE_PRIVS view in the SYS schema.


The DBA role does not include the SYSDBA or SYSOPER system privileges. These are special administrative privileges that allow an administrator to perform basic database administration tasks, such as creating the database and starting up and shutting down the instance.

This view tells you the grantee, the granted role, whether the role was granted with the ADMIN OPTION, and whether it is the grantee's default role:

DBA_ROLE_PRIVS describes the roles granted to all users and roles in the database.

USER_ROLE_PRIVS describes the roles granted to the current user.

SQL> desc dba_role_privs
Name          Null?    Type
------------- -------- ------------
GRANTEE                VARCHAR2(30)
GRANTED_ROLE  NOT NULL VARCHAR2(30)
ADMIN_OPTION           VARCHAR2(3)
DEFAULT_ROLE           VARCHAR2(3)

GRANTEE:        Name of the user or role receiving the grant
GRANTED_ROLE:   Granted role name
ADMIN_OPTION:   Indicates whether the grant was with the ADMIN OPTION (YES) or not (NO)
DEFAULT_ROLE:   Indicates whether the role is designated as a DEFAULT ROLE for the user (YES) or not (NO)

Using the below query you can find users having the DBA role:
SQL> select * from dba_role_privs where granted_role='DBA';

GRANTEE   GRANTED_ROLE ADM DEF
--------- ------------ --- ---
SYS       DBA          YES YES
SYSTEM    DBA          YES YES

There are many situations where you may want to revoke the DBA role from a user for security reasons. The above query will help you find the users holding DBA privileges.
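Note that a user can also hold DBA indirectly, through another role that was itself granted DBA. A hedged sketch of a hierarchical query over dba_role_privs that catches such nested grants (whether any intermediate roles exist depends on your database):

```sql
-- Walk role grants downward from DBA: any grantee returned here
-- ultimately holds the DBA role, directly or via another role.
SELECT DISTINCT grantee
FROM   dba_role_privs
START WITH granted_role = 'DBA'
CONNECT BY PRIOR grantee = granted_role;
```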

dbca UnsatisfiedLinkError exception loading native library java.lang.UnsatisfiedLinkError

The below error appears while creating an Oracle 11g/12c database using DBCA.

UnsatisfiedLinkError exception loading native library: njni11
java.lang.UnsatisfiedLinkError: /home/u01/app/oracle/product/11.2.0/db_1/lib/ cannot open shared object file: No such file or directory
Exception in thread "main" java.lang.UnsatisfiedLinkError: get
        ... (truncated stack frames) ...
at oracle.sysman.assistants.util.NetworkUtils.<init>(
at oracle.sysman.assistants.util.step.StepContext.<init>(
at oracle.sysman.assistants.dbca.backend.Host.<init>(
at oracle.sysman.assistants.dbca.ui.UIHost.<init>(
at oracle.sysman.assistants.dbca.ui.InteractiveHost.<init>(
at oracle.sysman.assistants.dbca.Dbca.getHost(
at oracle.sysman.assistants.dbca.Dbca.execute(
at oracle.sysman.assistants.dbca.Dbca.main(

The Oracle binaries install successfully, but you will get the above error during SID creation.


The prerequisites have not been met. Possible causes and fixes:

  1. Check that the correct Oracle binaries were downloaded for your OS version and architecture.
  2. "sqlplus: error while loading shared libraries": this also means the prerequisites have not been met. Make sure the "gcc" package has been installed.

We suggest installing the required Oracle RPM packages and then starting the installation again.

List of supported Oracle 11G RPM packages:

rpm -Uvh binutils-2*x86_64*
rpm -Uvh glibc-2*x86_64* nss-softokn-freebl-3*x86_64*
rpm -Uvh glibc-2*i686* nss-softokn-freebl-3*i686*
rpm -Uvh compat-libstdc++-33*x86_64*
rpm -Uvh glibc-common-2*x86_64*
rpm -Uvh glibc-devel-2*x86_64*
rpm -Uvh glibc-devel-2*i686*
rpm -Uvh glibc-headers-2*x86_64*
rpm -Uvh elfutils-libelf-0*x86_64*
rpm -Uvh elfutils-libelf-devel-0*x86_64*
rpm -Uvh gcc-4*x86_64*
rpm -Uvh gcc-c++-4*x86_64*
rpm -Uvh ksh-*x86_64*
rpm -Uvh libaio-0*x86_64*
rpm -Uvh libaio-devel-0*x86_64*
rpm -Uvh libaio-0*i686*
rpm -Uvh libaio-devel-0*i686*
rpm -Uvh libgcc-4*x86_64*
rpm -Uvh libgcc-4*i686*
rpm -Uvh libstdc++-4*x86_64*
rpm -Uvh libstdc++-4*i686*
rpm -Uvh libstdc++-devel-4*x86_64*
rpm -Uvh make-3.81*x86_64*
rpm -Uvh numactl-devel-2*x86_64*
rpm -Uvh sysstat-9*x86_64*
rpm -Uvh compat-libstdc++-33*i686*
rpm -Uvh compat-libcap*
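As a quick sanity check before rerunning DBCA, a small shell sketch (package names taken from the list above; adjust for your release) that reports any prerequisite package rpm does not know about:

```shell
# Report any missing prerequisite package.
for pkg in binutils glibc glibc-devel glibc-headers gcc gcc-c++ ksh \
           libaio libaio-devel libgcc libstdc++ libstdc++-devel \
           make sysstat; do
  rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg"
done
```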

List of supported Oracle 12C RPM packages:

yum install binutils -y
yum install compat-libcap1 -y
yum install compat-libstdc++-33 -y
yum install compat-libstdc++-33.i686 -y
yum install gcc -y
yum install gcc-c++ -y
yum install glibc -y
yum install glibc.i686 -y
yum install glibc-devel -y
yum install glibc-devel.i686 -y
yum install ksh -y
yum install libgcc -y
yum install libgcc.i686 -y
yum install libstdc++ -y
yum install libstdc++.i686 -y
yum install libstdc++-devel -y
yum install libstdc++-devel.i686 -y
yum install libaio -y
yum install libaio.i686 -y
yum install libaio-devel -y
yum install libaio-devel.i686 -y
yum install libXext -y
yum install libXext.i686 -y
yum install libXtst -y
yum install libXtst.i686 -y
yum install libX11 -y
yum install libX11.i686 -y
yum install libXau -y
yum install libXau.i686 -y
yum install libxcb -y
yum install libxcb.i686 -y
yum install libXi -y
yum install libXi.i686 -y
yum install make -y
yum install sysstat -y
yum install unixODBC -y
yum install unixODBC-devel -y


How to convert scn to a timestamp in Oracle

Oracle has built-in functions for converting an SCN to a timestamp and a timestamp to an SCN: SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN. SCN_TO_TIMESTAMP takes as an argument a number that evaluates to a system change number (SCN) and returns the approximate timestamp associated with that SCN. This is useful any time you want to know the timestamp associated with an SCN.


  • The SCN is particularly important when you are doing RMAN recovery.
  • It is used in archivelog gap resolution and in recovery of a physical standby database.
  • It can be used in a Data Pump export parameter file via FLASHBACK_SCN to ensure a consistent copy of the database as of that point in time.
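For instance, a hypothetical Data Pump invocation pinned to a specific SCN (the directory, dump file names, and the SCN itself are placeholders; take the SCN from v$database first):

```shell
# Hypothetical example: export a consistent image of the database
# as of SCN 4426538972 (substitute your own current_scn value).
expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=consistent.dmp \
      LOGFILE=consistent.log FLASHBACK_SCN=4426538972
```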


Check the current SCN of the database using the below query.
SQL>select current_scn from v$database;


To get the timestamp value from the current scn number.
SQL> select scn_to_timestamp(4426538972) as timestamp from dual;

23-SEP-18 03.22.44.000000000 PM

To get the scn number from the timestamp.
SQL> select timestamp_to_scn(to_timestamp('23/09/2018 15:22:44','DD/MM/YYYY HH24:MI:SS')) as scn from dual;
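As a quick sanity check, the two functions can be composed: converting the current SCN to a timestamp and back should land close to the original value (the mapping is approximate, with roughly three seconds of granularity):

```sql
-- Round-trip sketch: SCN -> timestamp -> SCN.
SELECT current_scn,
       timestamp_to_scn(scn_to_timestamp(current_scn)) AS scn_roundtrip
FROM   v$database;
```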



How to stop Materialized view Auto Refresh in Oracle

A materialized view is a table segment or database object that contains the results of a query. A materialized view created with automatic refresh cannot be altered to stop refreshing. In order to disable the refresh, you must break the dbms_job that was created to refresh the view.


Mviews are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots.

Steps to Disable Automatic Refresh of Materialized View.

STEP 1. Connect as mview owner and execute the below query.
select * from user_jobs where broken ='N';

STEP 2. In the WHAT column for the mview refresh job you will see the refresh call for your mview (typically a dbms_refresh.refresh call).

STEP 3. Take the JOB_ID from the JOB column and execute the below procedure; the mview will stop refreshing automatically:
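The procedure itself was not preserved in this post; a hedged sketch using dbms_job (the job number 123 is a placeholder for your JOB value):

```sql
-- Mark the refresh job broken so it stops running automatically.
BEGIN
  dbms_job.broken(job => 123, broken => TRUE);  -- 123 = your JOB value
  COMMIT;
END;
/
```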


If you want the mview to start refreshing again just run the job.
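A sketch of re-enabling the refresh (again, 123 stands in for your JOB value); a successful dbms_job.run also clears the broken flag:

```sql
-- Run the refresh job once; the job resumes its normal schedule.
BEGIN
  dbms_job.run(job => 123);  -- 123 = your JOB value
END;
/
```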


That's all about disabling a materialized view's automatic refresh. If you want to stop a materialized view from auto refreshing, just run the above procedure and check the status of the job.

How to Flush a Single SQL Statement from the Shared Pool

Sometimes you need to flush a single SQL plan from the shared pool, which is a much better idea than flushing every plan from memory. After baselining or fixing a bad plan, DBAs often want to kick the old plan out of memory to confirm that the optimizer is picking the correct one.


I have seen that many people simply flush the whole shared pool using ALTER SYSTEM FLUSH SHARED_POOL, which is not a good idea. If you are struggling to fix the bad plan of a single query, it is advisable to flush only that sql_id out of memory.

STEP 1: Find the ADDRESS and HASH_VALUE of the particular sql_id.
SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID='251fjyn5fj84q';

ADDRESS          HASH_VALUE
---------------- ----------
00000005DGEC9DE0  257655674

STEP 2: Purge sql plan from the shared pool by passing the above values.
SQL> exec DBMS_SHARED_POOL.PURGE ('00000005DGEC9DE0, 257655674', 'C');

PL/SQL procedure successfully completed.

Here, 'C' tells the procedure that the object being purged is a cursor.
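As a convenience, steps 1 and 2 can be combined. A sketch that builds the PURGE call directly from v$sqlarea (the sql_id is the one from this example):

```sql
-- Generate the purge command for a given sql_id, then run its output.
SELECT 'exec DBMS_SHARED_POOL.PURGE (''' || address || ', '
       || hash_value || ''', ''C'');' AS purge_cmd
FROM   v$sqlarea
WHERE  sql_id = '251fjyn5fj84q';
```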

STEP 3: Check if the plan still exist in the memory. If no rows selected then plan has been flushed out from the memory for that sql_id.

SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID='251fjyn5fj84q';

no rows selected

In the above example, we used a V$ view to find cursor information. In a RAC environment, query gv$sqlarea instead, in case the SQL statement you are looking for was executed on an instance other than the one you are currently logged into.

So this is all about flushing a single SQL statement from the shared pool. If you are facing such performance issues, you can use the above method to flush the plan for a particular sql_id from memory.


Best ways to Describe Table in Vertica

There are various ways to describe a table in Vertica; some of them are listed below.


List table definition using \d:
=>  \d table_name;
=> \d schema_name.table_name;

It lists the column details along with constraint information such as Not Null, Primary Key, and Foreign Key:

oraadmin=> \d P1_orahow.u_acct_trans;
List of Fields by Tables            

  Schema  |        Table |         Column         |     Type    

 P1_orahow | u_acct_trans| act_id                 | numeric(19,0)
 P1_orahow | u_acct_trans| sfl_id                 | int          
 P1_orahow | u_acct_trans| sts_id                 | int          
 P1_orahow | u_acct_trans| act_record_number      | int          
 P1_orahow | u_acct_trans| act_record_address     | numeric(19,0)
 P1_orahow | u_acct_trans| act_record_length      | int          
 P1_orahow | u_acct_trans| act_record_type        | varchar(255)
 P1_orahow | u_acct_trans| act_duplicate_hashcode | int          
 P1_orahow | u_acct_trans| act_dup_fl             | char(1)      
 P1_orahow | u_acct_trans| act_filename           | varchar(255)
 P1_orahow | u_acct_trans| act_cust_id            | varchar(255)
 P1_orahow | u_acct_trans| act_acct_id            | varchar(255)
 P1_orahow | u_acct_trans| act_subs_id            | numeric(19,0)
 P1_orahow | u_acct_trans| act_re_name            | varchar(255)
 P1_orahow | u_acct_trans| act_item_type_name     | varchar(255)
 P1_orahow | u_acct_trans| act_item_type_code     | varchar(255)
 P1_orahow | u_acct_trans| act_event_time         | timestamp    
 P1_orahow | u_acct_trans| act_charge             | numeric(19,0)
 P1_orahow | u_acct_trans| act_price_plan_code    | varchar(255)
 P1_orahow | u_acct_trans| act_price_plan_name    | varchar(255)
 P1_orahow | u_acct_trans| act_event_inst_id      | varchar(255)
 P1_orahow | u_acct_trans| act_msisdn             | varchar(255)

List high level Table definition:
oraadmin=> \dt u_acct_trans;

                          List of tables

    Schema     |         Name         | Kind  |  Owner   | Comment
 P1_orahow      | u_acct_trans | table | oraadmin  |
 P1_orahow_test | u_acct_trans | table | ora_test |

(2 rows)

Describe using the v_catalog.columns system table (select whichever columns you need):
SELECT column_name,
       data_type,
       is_nullable
FROM   v_catalog.columns
WHERE  table_schema='P1_orahow'
       AND table_name='u_acct_trans'
ORDER  BY ordinal_position;

List table definition using table export:
oraadmin=> SELECT EXPORT_TABLES('', 'schema_name.table_name');
oraadmin=> SELECT EXPORT_TABLES('', 'P1_orahow.u_acct_trans');

List table definition by table export, which will give you the full create statement, including projections:
oraadmin=> SELECT export_objects('', 'schema_name.table_name');
oraadmin=> SELECT export_objects('', 'P1_orahow.u_acct_trans');

List all tables in Public Schema:
oraadmin-> \dt public.*;
                        List of tables

 Schema |           Name           | Kind  |  Owner  | Comment


 public | abc                      | table | oraadmin |
 public | cdr_offpeak_ts_temp      | table | oraadmin |
 public | cdr_percall_ts_temp      | table | oraadmin |
 public | gprs_ts_temp             | table | oraadmin |
 public | ipdr_ts_temp             | table | oraadmin |
 public | nik_cdr_offpeak_summary  | table | oraadmin |
 public | nik_cdr_per_call_summary | table | oraadmin |
 public | nik_gprs_cdr_summary     | table | oraadmin |
 public | nik_ipdr_summary         | table | oraadmin |
 public | nik_recharge_log_summary | table | oraadmin |
 public | recharge_log_ts_temp     | table | oraadmin |

(11 rows)


Script to Monitor RMAN Backup Status and Timings

As a DBA, you are often asked to check the status of RMAN backup jobs. There are many ways to monitor backup progress; you can use the below scripts to check the status of RMAN jobs such as full, incremental, and archivelog backups.

You can use views like v$rman_backup_job_details and v$session_longops to monitor currently executing RMAN jobs and the status of previously completed backups.


SQL> desc v$rman_backup_job_details
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 SESSION_KEY                            NUMBER
 SESSION_RECID                          NUMBER
 SESSION_STAMP                          NUMBER
 COMMAND_ID                             VARCHAR2(33)
 START_TIME                             DATE
 END_TIME                               DATE
 INPUT_BYTES                            NUMBER
 OUTPUT_BYTES                           NUMBER
 STATUS_WEIGHT                          NUMBER
 OBJECT_TYPE_WEIGHT                     NUMBER
 OUTPUT_DEVICE_TYPE                     VARCHAR2(17)
 AUTOBACKUP_COUNT                       NUMBER
 BACKED_BY_OSB                          VARCHAR2(9)
 AUTOBACKUP_DONE                        VARCHAR2(9)
 STATUS                                 VARCHAR2(69)
 INPUT_TYPE                             VARCHAR2(39)
 OPTIMIZED                              VARCHAR2(9)
 ELAPSED_SECONDS                        NUMBER
 COMPRESSION_RATIO                      NUMBER
 INPUT_BYTES_PER_SEC                    NUMBER
 OUTPUT_BYTES_PER_SEC                   NUMBER
 INPUT_BYTES_DISPLAY                    VARCHAR2(4000)
 OUTPUT_BYTES_DISPLAY                   VARCHAR2(4000)
 TIME_TAKEN_DISPLAY                     VARCHAR2(4000)

This script reports the status of current as well as completed backups (full, incremental, and archivelog):

col STATUS format a9
col INPUT_TYPE format a13
col hrs format 999.99
select SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600                   hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;

SESSION_KEY INPUT_TYPE    STATUS    START_TIME     END_TIME           HRS
----------- ------------- --------- -------------- -------------- -------
        585 ARCHIVELOG    COMPLETED 01/08/15 06:00 01/08/15 06:02     .03
        591 ARCHIVELOG    COMPLETED 01/08/15 12:00 01/08/15 12:01     .02
        596 ARCHIVELOG    COMPLETED 01/08/15 18:01 01/08/15 18:02     .03
        601 DB INCR       FAILED    01/08/15 20:00 01/09/15 01:47    5.79
        603 ARCHIVELOG    COMPLETED 01/09/15 06:00 01/09/15 06:07     .12
        608 ARCHIVELOG    COMPLETED 01/09/15 12:00 01/09/15 12:09     .16
        613 ARCHIVELOG    COMPLETED 01/09/15 15:07 01/09/15 15:25     .29

The below script reports the percentage of completion along with the SID and SERIAL#.
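The script itself was not preserved in this post; a sketch of the usual v$session_longops query that produces output like the sample below (the RMAN filter pattern is an assumption):

```sql
-- Progress of in-flight RMAN operations: work done so far vs. total work.
SELECT sid, serial#, context, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 2) AS "%_COMPLETE"
FROM   v$session_longops
WHERE  opname LIKE 'RMAN%'
  AND  totalwork != 0
  AND  sofar <> totalwork;
```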


       SID    SERIAL#    CONTEXT      SOFAR  TOTALWORK %_COMPLETE
---------- ---------- ---------- ---------- ---------- ----------
        22         31          1    8225460   18357770      45.21


Best Way to move all objects from one Tablespace to another in Oracle

As a DBA, there are many occasions where you need to move all database objects from one tablespace to another to reorganize them. Oversized, poorly organized tablespaces consume a lot of space, are difficult to manage, and cause extra overhead for Oracle. In such situations you need to move tables, indexes, and other database objects to a newly created tablespace.

Recently, as part of a maintenance activity, we observed that the USERS tablespace was consuming 363 GB of disk space, but dba_segments showed that the actual size of its objects was only 3 GB. We tried to resize the datafiles, but got the error below.

ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

Oracle throws this error because the datafile has data at its high water mark (HWM): some objects reside near the end of the file, and you cannot shrink a datafile below the HWM.
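To see how far each datafile can actually be shrunk, a hedged sketch that locates the highest allocated block (the HWM) per file via dba_extents (the tablespace name is this example's USERS, and an 8K block size is assumed):

```sql
-- Compare each datafile's current size with its high water mark.
SELECT f.file_name,
       CEIL(f.bytes / 1024 / 1024)                    AS current_mb,
       CEIL(NVL(e.hwm_block, 1) * 8192 / 1024 / 1024) AS hwm_mb  -- assumes 8K blocks
FROM   dba_data_files f
LEFT JOIN (SELECT file_id, MAX(block_id + blocks - 1) AS hwm_block
           FROM   dba_extents
           GROUP  BY file_id) e
  ON   e.file_id = f.file_id
WHERE  f.tablespace_name = 'USERS';
```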

For example, suppose you created a 120 GB tablespace and created two 60 GB tables inside it. The tablespace has now reached its high water mark, because its allocated extents extend all the way up to the HWM (120 GB).

What happens if you drop table1 and then try to resize the datafile?

In spite of 60 GB of free space inside the tablespace, the database does not allow you to resize the datafile, because the tablespace still contains data (table2) sitting near the HWM, and it throws "file contains used data beyond requested RESIZE value".

[diagram: datafile layout showing table1 followed by table2, with table2 sitting near the HWM]


As we saw, the actual data in the USERS tablespace is only 3.5 GB, but it occupies 363 GB of disk space. When we tried to resize the datafiles, Oracle threw ORA-03297 because of the HWM. To overcome this, you must move all objects from the USERS tablespace to a new tablespace.


You can move objects to a different tablespace in several ways: for example, using the ALTER TABLE ... MOVE command or the dbms_redefinition package. But the safest and easiest way is expdp/impdp with REMAP_TABLESPACE. If the tablespace contains clustered objects, you cannot use ALTER TABLE MOVE or similar scripts, so expdp is the only option.

Steps to Move objects into different tablespace using EXPDP:

STEP 1: Create a directory for the export dumpfile:
SQL> create or replace directory test_dir as '/data/oracle';
Directory created.

STEP 2: Grant read, write on the newly created directory.
SQL> grant read, write on directory test_dir to username;
Grant succeeded.

STEP 3: Export all tablespace objects using expdp.
nohup expdp \"/ as sysdba\" DIRECTORY=test_dir DUMPFILE=users.dmp LOGFILE=users.log TABLESPACES=USERS &

STEP 4: Import the objects into the newly created tablespace using remap_tablespace.
Please note that you must use table_exists_action=replace, otherwise the database will throw "object already exists" and skip the objects because of the default table_exists_action of skip.
nohup impdp \"/ as sysdba\" DIRECTORY=test_dir DUMPFILE=users.dmp table_exists_action=replace remap_tablespace=USERS:MTNGB1 LOGFILE=users.log &

Finally, verify the objects in both tablespaces and drop the tablespace that was consuming the huge space.
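A sketch of that final verification (tablespace names from this example); the old tablespace should come back empty before you drop it:

```sql
-- Nothing should remain in the old tablespace after the import.
SELECT owner, segment_name, segment_type
FROM   dba_segments
WHERE  tablespace_name = 'USERS';

-- Everything should now live in the new tablespace.
SELECT COUNT(*) FROM dba_segments WHERE tablespace_name = 'MTNGB1';
```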


How to Fix AHM not Advancing to Last Good Epoch in Vertica

Vertica advances the AHM at five-minute intervals to keep it equal to the Last Good Epoch (LGE). If projections are unrefreshed for some reason, the AHM does not advance. The AHM is never greater than the LGE.


Definition and concepts you must know before troubleshooting AHM lagging issue:

Ancient History Mark (AHM):
The ancient history mark (AHM) is the epoch prior to which historical data can be purged from physical storage.

An epoch is a 64-bit number that represents a logical timestamp for the data in Vertica. Every row has an implicitly stored column that records its committed epoch.

The epoch advances when the data is committed with a DML operation (INSERT, UPDATE, MERGE, COPY, or DELETE). The EPOCHS system table contains the date and time of each closed epoch and the corresponding epoch number of the closed epoch.

Using below query, you can check which time periods pertain to which epochs:
oradmin=> SELECT * FROM epochs;

Current Epoch (CE):
The current epoch is the open epoch that becomes the last epoch (LE) after a COMMIT operation. The current_epoch at the time of the COMMIT is the epoch for that DML.


Latest Epoch (LE):
The latest epoch is the most recently closed epoch. The current epoch after the COMMIT operation becomes the latest epoch.

Checkpoint Epoch (CPE):
The checkpoint epoch per projection is the latest epoch for which there is no data in the WOS. It is the point at which the projection can be recovered. The Tuple Mover moveout operation advances the projection CPE while moving the data from WOS to the ROS. You can see the projection checkpoint epochs in the PROJECTION_CHECKPOINT_EPOCHS system table.

Last Good Epoch (LGE):
The minimum checkpoint epoch across all the nodes is known as the last good epoch. The last good epoch refers to the most recent epoch that can be recovered in a manual recovery. The LGE consists of a snapshot of all the data on the disk. If the cluster shuts down abnormally, the data after the LGE is lost.
The Tuple Mover advances the CPE and sets a new LGE. If the Tuple Mover fails, the data does not move from the WOS to the ROS. Hence, the data does not advance the CPE and the LGE.

To see the cluster's last good epoch, you can use the following command:
oradmin=> SELECT get_last_good_epoch();
Last Good Epoch Does Not Advance
There are certain situations when the last good epoch does not advance. When the Tuple Mover moves data from the WOS to the ROS, the LGE advances and you see a result like the following (current epoch vs. last good epoch):

oradmin=> SELECT get_current_epoch() CE, get_last_good_epoch() LGE;
    CE   |   LGE
---------+---------
  731384 |  721381

If you do not see the LGE advance, check if there is data in the WOS:
oradmin=>SELECT sum(wos_used_bytes) from projection_storage ;

If there is data in the WOS, force a moveout operation:
oradmin=> SELECT do_tm_task('moveout');

Ancient History Mark Does Not Advance
If the difference between the Last Good Epoch (LGE) and the Ancient History Mark (AHM) is huge, you need to bring them closer together, as a large gap can add hours to data recovery.

You can check the LGE and AHM difference using below query:
oradmin=> select get_current_epoch(),get_last_good_epoch(),get_ahm_epoch(),(get_current_epoch()- get_last_good_epoch()) LGECEDiff,(get_last_good_epoch()-get_ahm_epoch()) LGEAHMDiff, get_ahm_time();

oradmin=> select get_current_epoch() CE,get_last_good_epoch() LGE,get_ahm_epoch () AHM,(get_current_epoch()- get_last_good_epoch()) CeLGDiff,(get_last_good_epoch()-get_ahm_epoch())LgeAHmDiff,get_expected_recovery_epoch();

INFO 4544:  Recovery Epoch Computation:

Node Dependencies:
011 - cnt: 9448
101 - cnt: 9448
110 - cnt: 9448
111 - cnt: 5
Nodes certainly in the cluster:

        Node 1(v_usagedb_node0002), epoch 1164674

        Node 2(v_usagedb_node0003), epoch 1164674

Filling more nodes to satisfy node dependencies:

Data dependencies fulfilled, remaining nodes LGEs don't matter:

        Node 0(v_usagedb_node0001), epoch 1164669

   CE    |   LGE   |  AHM   | CeLGDiff | LgeAHmDiff | get_expected_recovery_epoch
---------+---------+--------+----------+------------+-----------------------------
 1164675 | 1164669 | 797307 |        6 |     367362 |                     1164674

To sync AHM with LGE, execute the below command:
oradmin=> SELECT MAKE_AHM_NOW();

The above command performs the below operations:
  • Advances the epoch
  • Performs a moveout operation on all projections.
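A quick sketch to verify the result; after MAKE_AHM_NOW(), the AHM should report the same epoch as the (new) last good epoch:

```sql
-- AHM and LGE should now match.
SELECT get_ahm_epoch(), get_last_good_epoch(), get_current_epoch();
```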

How to Check the Size of Tables in Vertica

Vertica stores table data in compressed format. To get the size of a table in Vertica, you can use the queries below: the column_storage and projection_storage system tables report the size of a table in its compressed form.



Relevant projection_storage columns:
  anchor_table_name   - the associated table name for which information is listed
  anchor_table_schema - the associated table schema for which information is listed
  used_bytes          - the number of bytes of disk storage used by the projection

SELECT anchor_table_schema,
       anchor_table_name,
       SUM(used_bytes) / (1024*1024*1024) AS TABLE_SIZE_GB
FROM   v_monitor.projection_storage
GROUP  BY anchor_table_schema,
       anchor_table_name
ORDER  BY SUM(used_bytes) DESC;

To find the number of rows and bytes occupied by each table in the database:
SELECT t.table_name AS table_name,
SUM(ps.wos_row_count + ps.ros_row_count) AS row_count,
SUM(ps.wos_used_bytes + ps.ros_used_bytes) AS byte_count
FROM tables t
JOIN projections p ON t.table_id = p.anchor_table_id
JOIN projection_storage ps on p.projection_name = ps.projection_name
WHERE (ps.wos_used_bytes + ps.ros_used_bytes) > 500000
--and t.table_name='table_name'
GROUP BY t.table_name
ORDER BY byte_count DESC;

To find the size of a single table in the database:

SELECT anchor_table_schema,
       anchor_table_name,
       SUM(used_bytes) / (1024*1024*1024) AS TABLE_SIZE_GB
FROM   v_monitor.column_storage
WHERE  anchor_table_name = 'table_name'
GROUP  BY anchor_table_schema,
       anchor_table_name
ORDER  BY SUM(used_bytes) DESC;



