Streams AQ: enqueue blocked on low memory Wait Event in Oracle

In this post, we will discuss the wait event "Streams AQ: enqueue blocked on low memory", which was captured in an Oracle AWR report.

In one of our environments, a schema export backup was hanging. Before this issue, the export typically completed in one to two hours.

To resolve the issue, we first increased the SGA, but it did not help. So we checked the wait event the session was waiting on. Finally we checked the dynamic memory components and observed that the streams pool had shrunk, leaving the session blocked on low memory.

The Oracle Streams pool is a portion of memory in the System Global Area (SGA) that is used by Oracle Streams. The Oracle Streams pool stores enqueued messages in memory, and it provides memory for capture processes and apply processes.

The Oracle Streams pool size is managed automatically when the MEMORY_TARGET, MEMORY_MAX_TARGET, or SGA_TARGET initialization parameter is set to a nonzero value. If these parameters are all set to 0 (zero), then you can specify the size of the Oracle Streams pool in bytes using the STREAMS_POOL_SIZE initialization parameter.
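To check whether the parameter is set explicitly, you can run the following (a value of 0 means the pool is auto-tuned within SGA_TARGET/MEMORY_TARGET):

SQL> show parameter streams_pool_size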


SQL> select SID,WAIT_CLASS,EVENT from v$session where SADDR in (select SADDR from dba_datapump_sessions);

       SID WAIT_CLASS           EVENT
---------- -------------------- ------------------------------------------
        38 enqueue              Streams AQ: enqueue blocked on low memory



SQL> select component,current_size/1024/1024,last_oper_type,last_oper_time from v$sga_dynamic_components;

Here we could see that the streams pool had been shrunk, which is why the export hung.

SOLUTION:


As a workaround, explicitly set STREAMS_POOL_SIZE to a fixed (large enough) value, e.g. 150 MB (or 300 MB if needed), which will then be used as a minimum value:

CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=150m SCOPE=both;

And re-run the Export or Import Data Pump job.

If you cannot modify STREAMS_POOL_SIZE dynamically, then you need to set the value in the spfile and restart the database.

CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=150m SCOPE=spfile;
SHUTDOWN IMMEDIATE
STARTUP


NOTE:

If the problem is not fixed after implementing one of the above solutions, a fix for unpublished Bug 24560906 may also be needed before reporting the issue to Oracle Support.
A possible workaround for unpublished Bug 24560906 is:


Set the parameter below (it is a static parameter, so set it in the spfile), restart the database, and re-run the job.
alter system set "_disable_streams_pool_auto_tuning"=TRUE scope=spfile;
SHUTDOWN IMMEDIATE
STARTUP


EXPDP And IMPDP Slow Performance In 11gR2 and 12cR1 And Waits On Streams AQ: Enqueue Blocked On Low Memory (Doc ID 1596645.1)

MySQL: Can't start server: Bind on TCP/IP port: Address already in use resolved

In this article, we will discuss an issue we faced after changing the MySQL data directory.

In one of our environments, there was a requirement to install MySQL 5.6 server on CentOS Linux, and after installation we had to change the default data directory to a new location.

Everything looked fine, but the MySQL server was taking a long time to start. When we checked the log, we observed the error below.


18548 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
18548 [ERROR] Do you already have another mysqld server running on port: 3306 ?
18548 [ERROR] Aborting

Below are the error logs:

[oradb@orahowdb mysql]$  tail -100f /var/log/mysqld.log
tail: cannot open ‘/var/log/mysqld.log’ for reading: Permission denied
tail: no files remaining
[oradb@orahowdb mysql]$  sudo tail -100f /var/log/mysqld.log
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_CMP'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_LOCKS'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_TRX'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'InnoDB'
2018-05-23 20:18:10 17743 [Note] InnoDB: FTS optimize thread exiting.
2018-05-23 20:18:10 17743 [Note] InnoDB: Starting shutdown...
2018-05-23 20:18:12 17743 [Note] InnoDB: Shutdown completed; log sequence number 1602387
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'BLACKHOLE'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'ARCHIVE'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MRG_MYISAM'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MyISAM'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MEMORY'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'CSV'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'sha256_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'mysql_old_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'mysql_native_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'binlog'
2018-05-23 20:18:12 17743 [Note] /usr/sbin/mysqld: Shutdown complete

180523 20:18:12 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
180523 20:21:33 mysqld_safe Logging to '/var/log/mysqld.log'.
180523 20:21:33 mysqld_safe Starting mysqld daemon with databases from /DBdata/mysql
2018-05-23 20:21:33 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-05-23 20:21:33 0 [Note] /usr/sbin/mysqld (mysqld 5.6.40) starting as process 18548 ...
2018-05-23 20:21:33 18548 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)

2018-05-23 20:21:33 18548 [Warning] Buffered warning: Changed limits: table_open_cache: 431 (requested 2000)

2018-05-23 20:21:33 18548 [Note] Plugin 'FEDERATED' is disabled.
2018-05-23 20:21:33 18548 [Note] InnoDB: Using atomics to ref count buffer pool pages
2018-05-23 20:21:33 18548 [Note] InnoDB: The InnoDB memory heap is disabled
2018-05-23 20:21:33 18548 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-05-23 20:21:33 18548 [Note] InnoDB: Memory barrier is not used
2018-05-23 20:21:33 18548 [Note] InnoDB: Compressed tables use zlib 1.2.3
2018-05-23 20:21:33 18548 [Note] InnoDB: Using Linux native AIO
2018-05-23 20:21:33 18548 [Note] InnoDB: Using CPU crc32 instructions
2018-05-23 20:21:33 18548 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2018-05-23 20:21:33 18548 [Note] InnoDB: Completed initialization of buffer pool
2018-05-23 20:21:33 18548 [Note] InnoDB: Highest supported file format is Barracuda.
2018-05-23 20:21:33 18548 [Note] InnoDB: 128 rollback segment(s) are active.
2018-05-23 20:21:33 18548 [Note] InnoDB: Waiting for purge to start
2018-05-23 20:21:33 18548 [Note] InnoDB: 5.6.40 started; log sequence number 1602387
2018-05-23 20:21:33 18548 [Note] Server hostname (bind-address): '*'; port: 3306
2018-05-23 20:21:33 18548 [Note] IPv6 is available.
2018-05-23 20:21:33 18548 [Note]   - '::' resolves to '::';
2018-05-23 20:21:33 18548 [Note] Server socket created on IP: '::'.
2018-05-23 20:21:33 18548 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
2018-05-23 20:21:33 18548 [ERROR] Do you already have another mysqld server running on port: 3306 ?
2018-05-23 20:21:33 18548 [ERROR] Aborting


Finally, we checked the MySQL TCP/IP port (3306):

[root@orahowdb ~]# lsof -i TCP:3306
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  17915 mysql   10u  IPv6  55522      0t0  TCP *:mysql (LISTEN)


We can see that a mysqld process was already listening on port 3306, which was causing the problem. So we looked up the PID of that mysqld and killed the process.

[root@orahowdb ~]# ps -ef|grep mysql
root     17830     1  0 16:01 ?        00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/####
mysql    17915 17830  0 16:01 ?        00:00:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log
root     18869 15338  0 20:23 pts/2    00:00:00 sudo tail -100f /var/log/mysqld.log
root     18870 18869  0 20:23 pts/2    00:00:00 tail -100f /var/log/mysqld.log
root     20223 19766  0 20:31 pts/2    00:00:00 grep --color=auto mysql

[root@orahowdb ~]# netstat -lp | grep 3306

[root@blrsubjiradb ~]# lsof -i TCP:3306
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  17915 mysql   10u  IPv6  55522      0t0  TCP *:mysql (LISTEN)

[root@orahowdb ~]# kill -9 17915

[root@orahowdb ~]# lsof -i TCP:3306

[root@orahowdb ~]# ps -ef|grep mysql
root     18869 15338  0 20:23 pts/2    00:00:00 sudo tail -100f /var/log/mysqld.log
root     18870 18869  0 20:23 pts/2    00:00:00 tail -100f /var/log/mysqld.log
root     20242 19766  0 20:32 pts/2    00:00:00 grep --color=auto mysql
[root@orahowdb ~]#


After killing the stale process, we restarted the database normally.
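For reference, a typical restart on this kind of CentOS install (the service name may be mysql or mysqld depending on the RPM used):

[root@orahowdb ~]# service mysql start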

[RESOLVED]: ERROR 1558 (HY000): Column count of mysql.user is wrong. Expected 43, found 42. Created with MySQL 50556, now running 50640

In this article, we will discuss an issue we faced after installing a MySQL database server.

Recently there was a requirement to install a MySQL server and create a few databases. As required, we created the databases, but during user creation we faced the error below.

ERROR 1558 (HY000): Column count of mysql.user is wrong. Expected 43, found 42. Created with MySQL 50556, now running 50640. Please use mysql_upgrade to fix this error.


Below are the error logs and sequence of steps we followed to resolve this error:

[oradb@orahowdb mysql]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.6.40 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| oradb              |
| mysql              |
| performance_schema |
+--------------------+
6 rows in set (0.00 sec)

mysql> use oradb
Database changed
mysql>

mysql> create user orahow identified by '####';
ERROR 1558 (HY000): Column count of mysql.user is wrong. Expected 43, found 42. Created with MySQL 50556, now running 50640. Please use mysql_upgrade to fix this error.
mysql>

mysql> exit

[oradb@orahowdb mysql]$
[oradb@orahowdb mysql]$ mysql_upgrade -u root -p
Enter password:
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/DBdata/mysql/mysql.sock'
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/DBdata/mysql/mysql.sock'
Warning: Using a password on the command line interface can be insecure.
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.event                                        OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.host                                         OK
mysql.ndb_binlog_index                             OK
mysql.plugin                                       OK
mysql.proc                                         OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.servers                                      OK
mysql.slow_log                                     OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
Running 'mysql_fix_privilege_tables'...
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/DBdata/mysql/mysql.sock'
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/DBdata/mysql/mysql.sock'
Warning: Using a password on the command line interface can be insecure.
OK
Could not create the upgrade info file '/DBdata/mysql/mysql_upgrade_info' in the MySQL Servers datadir, errno: 13
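Note: errno 13 is "permission denied": mysql_upgrade could not write the mysql_upgrade_info file into the relocated datadir, although the privilege tables themselves were repaired (all checks returned OK above). A hedged fix, assuming the mysql user should own the new datadir:

[root@orahowdb ~]# chown -R mysql:mysql /DBdata/mysql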

[oradb@orahowdb mysql]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.6.40 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql>
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| oradb              |
| mysql              |
| performance_schema |
+--------------------+
6 rows in set (0.00 sec)

mysql>
mysql>
mysql> use oradb;
Database changed

mysql>
mysql> create user orahow identified by '####';
Query OK, 0 rows affected (0.00 sec)




How to Enable/Disable a Scheduled Job in Oracle

It is not an easy task to deal with too many jobs manually. To handle this scenario, Oracle Database provides advanced job scheduling capabilities through Oracle Scheduler. The DBMS_SCHEDULER package provides a collection of scheduling functions and procedures that are callable from any PL/SQL program.

Using the Scheduler, database administrators and application developers can easily control when and where various tasks run in the database environment. These tasks can be time-consuming and complicated, so the Scheduler helps improve their management and planning.


To disable a job that has been scheduled with DBMS_SCHEDULER, first identify the job name, the job status, and other related information.



To check the job status:
SQL> select job_name, owner, enabled from dba_scheduler_jobs;

SQL> select job_name, enabled from DBA_SCHEDULER_JOBS WHERE job_name = 'SCHEMA_MNTC_JOB';



To Disable a job:

SQL> execute dbms_scheduler.disable('owner.job');

SQL> exec dbms_scheduler.disable('SCHEMA_MNTC_JOB');

PL/SQL procedure successfully completed.



BEGIN
  DBMS_SCHEDULER.DISABLE('SCHEMA_MNTC_JOB');
END;
/



To enable a job:
SQL> exec dbms_scheduler.enable('SCHEMA_MNTC_JOB');

PL/SQL procedure successfully completed.




BEGIN
  DBMS_SCHEDULER.ENABLE('SCHEMA_MNTC_JOB');
END;
/



Again, you can check the job status using the query below:
SQL> select job_name, enabled from DBA_SCHEDULER_JOBS WHERE job_name = 'SCHEMA_MNTC_JOB';



How to Lock/Unlock Table Statistics in Oracle

There are a number of cases where you may want to lock table statistics: for example, a highly volatile or intermediate table where the volume of data changes drastically over a relatively short period of time, or a table that you do not want the automatic statistics job to analyze because you plan to analyze it later.

If a table is highly volatile, locking the statistics prevents the execution plan from changing and thus helps plan stability for some period of time. That is why many DBAs prefer to unlock the stats, gather the stats, and finally lock the stats again.
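A minimal sketch of that unlock/gather/lock cycle, reusing the example schema and table that appear later in this post:

SQL> exec dbms_stats.unlock_table_stats('SANCS', 'ORDER');
SQL> exec dbms_stats.gather_table_stats(ownname => 'SANCS', tabname => 'ORDER', estimate_percent => dbms_stats.auto_sample_size);
SQL> exec dbms_stats.lock_table_stats('SANCS', 'ORDER');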


How to check if table stats are locked:
SQL> SELECT stattype_locked FROM dba_tab_statistics WHERE table_name = '&TABLE_NAME' and owner = '&TABLE_OWNER';



If you try to gather statistics on a locked table, you will get the error below:

SQL> EXEC dbms_stats.gather_table_stats(ownname => 'SANCS', tabname => 'ORDER' , estimate_percent => dbms_stats.auto_sample_size);

ERROR at line 1:
ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 10640
ORA-06512: at "SYS.DBMS_STATS", line 10664
ORA-06512: at line 1


How to Lock Table Statistics?
SQL> exec dbms_stats.lock_table_stats('<schema>', '<table>');

 Example:
exec dbms_stats.lock_table_stats('SANCS', 'ORDER');

PL/SQL procedure successfully completed.



How to Unlock Table Statistics?
SQL> exec dbms_stats.unlock_table_stats('<schema>', '<Table>');



Example:

SQL> exec dbms_stats.unlock_table_stats('SANCS', 'ORDER');

PL/SQL procedure successfully completed.





Script to Start Oracle Database Automatically on Linux

From Oracle 10gR2 onward, RAC Clusterware automatically starts and stops the ASM and Oracle database instances and listeners, so the following procedure is not necessary there. But for a single instance where RAC is not being used, this script will let you automate the startup and shutdown of Oracle databases on Linux after a server reboot.

Both scripts are already installed in $ORACLE_HOME/bin and are called dbstart and dbshut. However, these scripts are not executed automatically after you reboot your server. I will explain how to configure them so that the Oracle services start automatically after a Linux server reboot.


How to Configure the Auto Startup Script?

Below are the changes you need to perform in order to automate this script.

STEP 1: First, make sure that any database instance you want to autostart is set to "Y" in the /etc/oratab file, as shown below.

# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third filed indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
ora11g:/u01/app/oracle/product/11.2.0/dbhome_1:Y
ora12c:/u01/app/oracle/product/11.2.0/dbhome_1:Y

The /etc/oratab file is normally created by running the root.sh script at the end of the installation. If you don't have the file, you can always create it manually (as the root user!).

STEP 2: Add ORACLE_HOME and PATH entries to your ~/.bash_profile.
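For example, matching the oratab entries shown above (adjust the SID and home to your installation):

export ORACLE_SID=ora11g
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH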


STEP 3: Next, we are going to create two scripts under /home/oracle/scripts: ora_start.sh and ora_stop.sh.

These scripts call dbstart and dbshut, and they also allow us to add further actions, for example starting the Enterprise Manager Database Control or any other services you might have.
You can also create a separate directory for these scripts.

$ su - oracle
$ vi /home/oracle/scripts/ora_start.sh

#!/bin/bash

# script to start the Oracle database, listener and dbconsole

. ~/.bash_profile

# start the listener and the database
$ORACLE_HOME/bin/dbstart $ORACLE_HOME

# start the Enterprise Manager db console
#$ORACLE_HOME/bin/emctl start dbconsole

exit 0


$ vi /home/oracle/scripts/ora_stop.sh
#!/bin/bash

# script to stop the Oracle database, listener and dbconsole

. ~/.bash_profile

# stop the Enterprise Manager db console
#$ORACLE_HOME/bin/emctl stop dbconsole

# stop the listener and the database
$ORACLE_HOME/bin/dbshut $ORACLE_HOME

exit 0


You can see that inside the scripts we source the .bash_profile of the user "oracle". This is needed to set the ORACLE_HOME and PATH environment variables.



STEP 4: Give execute permission to the scripts:
$  chmod u+x ora_start.sh ora_stop.sh




STEP 5: We will now create a wrapper script that can be registered as a service.
As root, create a file called "oracle" under /etc/init.d.

$ vi /etc/init.d/oracle
#!/bin/bash
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.

# Set ORA_OWNER to the user id of the owner of the
# Oracle database in ORA_HOME.

ORA_OWNER=oracle
RETVAL=0

case "$1" in
    'start')
        # Start the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su - $ORA_OWNER -c "/home/oracle/scripts/ora_start.sh"
        touch /var/lock/subsys/oracle
        ;;
    'stop')
        # Stop the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su - $ORA_OWNER -c "/home/oracle/scripts/ora_stop.sh"
        rm -f /var/lock/subsys/oracle
        ;;
    *)
        echo $"Usage: $0 {start|stop}"
        RETVAL=1
esac
exit $RETVAL



STEP 6: Grant the following permission on the script.
$ chmod 750 /etc/init.d/oracle

Note: Make sure the Oracle home paths are set in .bash_profile (see STEP 2).



STEP 7: To register this script as a service, run the following command:
$ chkconfig --add oracle



STEP 8: All done, check the script and database status by running “service oracle stop” or “service oracle start” from the command line.

$ service oracle stop
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.
Processing Database instance "oratst": log file /u01/app/oracle/product/11.2.0/db_1/shutdown.log

$ service oracle start
Processing Database instance "oratst": log file /u01/app/oracle/product/11.2.0/db_1/startup.log
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
Starting Oracle Enterprise Manager 11g Database Control ...... started.

After this, it's time for the final test: reboot your server and check whether your Oracle database starts automatically after the reboot.


Whenever the database server reboots, you will see that your configured databases are started automatically; there is no need to start and stop them manually. If you face any issue, you can contact us for support. We will be pleased to assist you.


How to Gather Statistics on Large Partitioned Tables in Oracle

It is difficult to gather stats on large partitioned tables, especially in domains like telecom where customers have to maintain call detail records in partitioned tables that are very big in size.

For such tables, we gather statistics on one partition, which we call the source partition, and copy those stats to the rest of the partitions, which we call the destination partitions.

For example: if you have 366 partitions, you can gather stats for any one partition, say P185, and then copy the stats to the rest of the partitions. Please note: choose a partition that actually contains data. There is no need to gather stats for every partition, because the partitions of such a table typically hold similarly distributed data, so the copied statistics remain representative.


STEPS TO MAINTAIN STATISTICS ON LARGE PARTITIONED TABLES



STEP 1: Gather stats for any one partition, say P185.

EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME' , PARTNAME => 'P185', estimate_percent => 10, method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true );

Note: Change TABLE_NAME and TABLE_OWNER to your own. You can increase the degree if free parallel servers are available.


STEP 2: Generate a script for the remaining partitions, as sketched below. Your source partition will be P185 and the destination partitions will be the rest of the remaining partitions.
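One hedged way to generate these COPY/LOCK statements for every remaining partition (using the same placeholder owner and table names as the examples below) is to spool them from dba_tab_partitions:

set pagesize 0 linesize 300 feedback off
spool copy_part_stats.sql
select 'exec DBMS_STATS.COPY_TABLE_STATS( ownname => ''TABLE_OWNER'', tabname => ''TABLE_NAME'', srcpartname => ''P185'', dstpartname => '''||partition_name||''', force => TRUE);'
from dba_tab_partitions
where table_owner = 'TABLE_OWNER'
  and table_name  = 'TABLE_NAME'
  and partition_name <> 'P185'
order by partition_name;
spool off

The same query with DBMS_STATS.LOCK_PARTITION_STATS generates the matching lock statements.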


STEP 3: After copying the statistics, you can lock the stats. Using the format below, you can generate the statements for all the partitions after making the necessary changes.


exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P001', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P001');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P002', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P002');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P003', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P003');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P004', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P004');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P005', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P005');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P006', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P006');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P007', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P007');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P008', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P008');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P009', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P009');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P010', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P010');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P011', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P011');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P012', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P012');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P013', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P013');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P014', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P014');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P015', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P015');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P016', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P016');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P017', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P017');


Please feel free to contact us for support in case of any difficulties. We will be pleased to help with your queries.


Oracle Expdp/Impdp - Datapump Interview Questions/FAQs

Q1. How to export only the DDL/metadata of a table?
Ans: You can use the CONTENT=METADATA_ONLY parameter during export.
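For example (the directory object, file names, and schema here are illustrative):

expdp system/<password> directory=DATA_PUMP_DIR dumpfile=ddl_only.dmp logfile=ddl_only.log schemas=SCOTT content=METADATA_ONLY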


Q2: Which memory area is used by the Data Pump process?
Ans: The streams pool (STREAMS_POOL_SIZE). If STREAMS_POOL_SIZE is 0, you will probably get a memory-related error. Check this parameter and set it to a minimum of 96M.

show parameter STREAMS_POOL_SIZE

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------
streams_pool_size                    big integer 96M
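If it is 0 or too small, a hedged starting point is:

SQL> alter system set streams_pool_size=96m scope=both;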



Q3: How to improve Data Pump performance so that export/import runs faster?
Ans:
  • Allocate streams_pool_size memory. The query below gives you the recommended setting for this parameter.
select 'ALTER SYSTEM SET STREAMS_POOL_SIZE='||(max(to_number(trim(c.ksppstvl)))+67108864)||' SCOPE=SPFILE;'
from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
where a.indx = b.indx and a.indx = c.indx and lower(a.ksppinm) in ('__streams_pool_size','streams_pool_size');

ALTER SYSTEM SET STREAMS_POOL_SIZE=XXXX MB SCOPE=SPFILE;

  • Use CLUSTER=N: in a RAC environment this can improve the speed of Data Pump API based operations.
  • Set PARALLEL_FORCE_LOCAL to TRUE, noting that PARALLEL_FORCE_LOCAL has a wider scope of effect than just Data Pump API based operations.
  • EXCLUDE=STATISTICS: excluding the generation and export of statistics at export time will shorten the time needed for any export operation. The DBMS_STATS.GATHER_DATABASE_STATS procedure can then be used at the target database once the import operation completes.
  • Use PARALLEL: if there is more than one CPU available, the environment is not already CPU, disk I/O, or memory bound, and multiple dump files will be used (ideally on different spindles) in the DUMPFILE parameter, then parallelism has the greatest potential to improve performance; see the sketch after this list.
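A hedged example combining several of these options (the file names and parallel degree are illustrative; %U generates numbered dump files so each worker writes its own file):

expdp system/<password> directory=DATA_PUMP_DIR dumpfile=exp_full_%U.dmp logfile=exp_full.log full=y parallel=4 exclude=statistics cluster=n
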
Q4: How to monitor the status of export/import Data Pump operations/jobs?
Ans: You can easily monitor the status from DBA_DATAPUMP_JOBS. You can use the queries below.


set linesize 200
set pagesize 200
col owner_name format a12
col job_name format a20
col operation format a12
col job_mode format a20
SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs
WHERE state = 'EXECUTING';

SELECT   w.sid, w.event, w.seconds_in_wait
FROM   V$SESSION s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w
WHERE   s.saddr = d.saddr AND s.sid = w.sid;

SELECT opname, sid, serial#, context, sofar, totalwork,
       ROUND(sofar/totalwork*100, 2) "%_COMPLETE"
FROM v$session_longops
WHERE opname IN
      (SELECT d.job_name
       FROM v$session s, v$process p, dba_datapump_sessions d
       WHERE p.addr = s.paddr
         AND s.saddr = d.saddr)
  AND opname NOT LIKE '%aggregate%'
  AND totalwork != 0
  AND sofar <> totalwork;


Q5: How to stop/start/kill Data Pump jobs?
Ans: Attach to the job and use the interactive commands:
expdp / as sysdba attach=job_name
export> status
export> stop_job
export> start_job
export> kill_job

You can also kill jobs with the ALTER SYSTEM KILL SESSION command. You will get the SID and SERIAL# from the queries above.

alter system kill session 'SID,SERIAL#' immediate;


Q6: How will you take a consistent export backup? What is the use of FLASHBACK_SCN?
Ans: To take a consistent export backup, you can use the method below:

SQL:
select to_char(current_scn) from v$database;

Expdp parfile content:
---------------------------

directory=OH_EXP_DIR 
dumpfile=exporahow_4apr_<yyyymmdd>.dmp 
logfile=exporahow_4apr_<yyyymmdd>.log 
schemas=ORAHOW 
flashback_scn=<<current_scn>>
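The export is then run with this parameter file, e.g. (the parfile name is illustrative):

expdp system/<password> parfile=exp_orahow.par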


Q7: How to drop constraints before import?
Ans: 

set feedback off;
spool /oradba/orahow/drop_constraints.sql;

select 'alter table SCHEMA_NAME.' || table_name || ' drop constraint ' || constraint_name || ' cascade;'
from dba_constraints where owner = 'SCHEMA_NAME'
  and not (constraint_type = 'C')
  order by table_name,constraint_name;

spool off;
exit;
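The spooled file is then executed to actually drop the constraints:

SQL> @/oradba/orahow/drop_constraints.sql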

Q8: I exported a metadata/DDL-only dump file from production, but during import on the test machine it consumed huge space, and we probably don't have that much disk space available. Why would DDL alone consume so much space?
Ans: Below is a snippet of the DDL of one table extracted from prod. As you can see, during table creation Oracle always allocates the INITIAL extent as shown below.

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
  STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

As you can see above, Oracle allocates about 1.25 GB (INITIAL 1342177280 bytes) for this one table even though its row count is zero.
To avoid this, set the DEFERRED_SEGMENT_CREATION parameter to TRUE so that a segment is not allocated until the first row is inserted (it defaults to TRUE in 11.2 and later).
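For example (the parameter is dynamic and affects only segments created afterwards):

SQL> alter system set deferred_segment_creation=TRUE scope=both;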


Q9: If you don't have sufficient disk space on the database server, how will you take the export? Or: how to export without a dump file?
Ans: You can use a network link (database link) for the export. You can use NETWORK_LINK by following these simple steps:

1. Create a TNS entry for the remote database in your tnsnames.ora file.
2. Test it with tnsping.
3. Create a database link to the remote database.
4. Specify the database link as NETWORK_LINK in your expdp or impdp syntax, as sketched below.
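A minimal sketch, with an illustrative link and schema name: running impdp on the target with NETWORK_LINK pulls the data straight over the database link, so no dump file is written at all (the DIRECTORY object is still needed for the log file):

impdp system/<password> directory=DATA_PUMP_DIR network_link=SOURCE_DB_LINK schemas=SCOTT logfile=imp_scott.log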

Q10: Tell me some of the parameters you have used during export?
Ans:

CONTENT               Specifies data to unload: (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dump files and log files.
DUMPFILE              List of destination dump files (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
TABLES                Identifies a list of tables to export - one schema only.
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.


Q11: You are getting an undo tablespace error during import; how will you avoid it?
Ans: With the legacy imp utility, we can use the COMMIT=Y option so that the import commits after each array insert instead of once per table, which keeps undo usage small.


Q12: Can we import an 11g dump file into a 10g database using Data Pump?
Ans: Yes, we can import from 11g into 10g using the VERSION option.
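The VERSION parameter must be set on the 11g side at export time, e.g. (names are illustrative):

expdp system/<password> directory=DATA_PUMP_DIR dumpfile=exp_for_10g.dmp logfile=exp_for_10g.log schemas=SCOTT version=10.2

The resulting dump file can then be imported into the 10.2 database with impdp.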
