How to Fix AHM not Advancing to Last Good Epoch in Vertica

Vertica advances the AHM at 5-minute intervals so that it stays equal to the Last Good Epoch (LGE). If projections remain unrefreshed for some reason, the AHM stops advancing. The AHM is never greater than the LGE.



Definitions and concepts you must know before troubleshooting an AHM lagging issue:


Ancient History Mark (AHM):
The ancient history mark (AHM) is the epoch prior to which historical data can be purged from physical storage.


Epoch:
An epoch is a 64-bit number that represents a logical time stamp for the data in Vertica. Every row has an implicitly stored column that records the committed epoch.

The epoch advances when the data is committed with a DML operation (INSERT, UPDATE, MERGE, COPY, or DELETE). The EPOCHS system table contains the date and time of each closed epoch and the corresponding epoch number of the closed epoch.
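
For example, you can inspect the committed epoch of individual rows by selecting the implicit epoch column (a sketch; store_sales is a hypothetical table name):
oradmin=> SELECT epoch, COUNT(*) FROM store_sales GROUP BY epoch ORDER BY epoch;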

Using the query below, you can check which time periods correspond to which epochs:
oradmin=> SELECT * FROM epochs;



Current Epoch (CE):
The current epoch is the open epoch that becomes the latest epoch (LE) after a COMMIT operation. The current_epoch at the time of the COMMIT is the epoch for that DML.
oradmin=> SELECT CURRENT_EPOCH FROM SYSTEM;

CURRENT_EPOCH
---------------
629415



Latest Epoch (LE):
The latest epoch is the most recently closed epoch. The current epoch after the COMMIT operation becomes the latest epoch.


Checkpoint Epoch (CPE):
The checkpoint epoch per projection is the latest epoch for which there is no data in the WOS. It is the point at which the projection can be recovered. The Tuple Mover moveout operation advances the projection CPE while moving the data from WOS to the ROS. You can see the projection checkpoint epochs in the PROJECTION_CHECKPOINT_EPOCHS system table.
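
For example, the following query (a sketch; verify the column names against the PROJECTION_CHECKPOINT_EPOCHS documentation for your Vertica version) lists the projections with the lowest checkpoint epochs, which are the ones most likely holding back the LGE:
oradmin=> SELECT node_name, projection_schema, projection_name, checkpoint_epoch
          FROM projection_checkpoint_epochs
          ORDER BY checkpoint_epoch ASC
          LIMIT 10;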


Last Good Epoch (LGE):
The minimum checkpoint epoch across all the nodes is known as the last good epoch. The last good epoch is the most recent epoch that can be recovered in a manual recovery. The LGE represents a snapshot of all the data on disk. If the cluster shuts down abnormally, the data committed after the LGE is lost.
The Tuple Mover advances the CPE and sets a new LGE. If the Tuple Mover fails, the data does not move from the WOS to the ROS, and hence the CPE and the LGE do not advance.


To see the cluster's last good epoch, you can use the following command:
oradmin=> SELECT GET_LAST_GOOD_EPOCH();



Last Good Epoch Does Not Advance
There are certain situations in which the last good epoch does not advance. When the Tuple Mover moves data from the WOS to the ROS, the LGE advances and you see a result like the following:

oradmin=> SELECT CURRENT_EPOCH, LAST_GOOD_EPOCH FROM SYSTEM;

 CURRENT_EPOCH | LAST_GOOD_EPOCH
---------------+-----------------
        731384 |          721381



If you do not see the LGE advance, check if there is data in the WOS:
oradmin=>SELECT sum(wos_used_bytes) from projection_storage ;


If there is data in the WOS, force a moveout operation:
oradmin=> SELECT do_tm_task('moveout');


Ancient History Mark Does Not Advance
If the difference between the last good epoch (LGE) and the ancient history mark (AHM) is large, you should bring them back in sync, because a large gap between them can add hours to data recovery.


You can check the difference between the LGE and the AHM using the queries below:
oradmin=> select get_current_epoch(),get_last_good_epoch(),get_ahm_epoch(),(get_current_epoch()- get_last_good_epoch()) LGECEDiff,(get_last_good_epoch()-get_ahm_epoch()) LGEAHMDiff, get_ahm_time();


oradmin=> select get_current_epoch() CE,get_last_good_epoch() LGE,get_ahm_epoch () AHM,(get_current_epoch()- get_last_good_epoch()) CeLGDiff,(get_last_good_epoch()-get_ahm_epoch())LgeAHmDiff,get_expected_recovery_epoch();



INFO 4544:  Recovery Epoch Computation:

Node Dependencies:
011 - cnt: 9448
101 - cnt: 9448
110 - cnt: 9448
111 - cnt: 5
Nodes certainly in the cluster:
        Node 1(v_usagedb_node0002), epoch 1164674
        Node 2(v_usagedb_node0003), epoch 1164674
Filling more nodes to satisfy node dependencies:
Data dependencies fulfilled, remaining nodes LGEs don't matter:
        Node 0(v_usagedb_node0001), epoch 1164669

   CE    |   LGE   |  AHM   | CeLGDiff | LgeAHmDiff | get_expected_recovery_epoch
---------+---------+--------+----------+------------+-----------------------------
 1164675 | 1164669 | 797307 |        6 |     367362 |                     1164674



To sync the AHM with the LGE, execute the following command:
oradmin=> SELECT MAKE_AHM_NOW();

The above command performs the following operations:
  • Advances the epoch
  • Performs a moveout operation on all projections
  • Sets the AHM to the new last good epoch
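
You can then confirm that the AHM has caught up with the LGE by re-running the epoch functions used earlier:
oradmin=> SELECT get_ahm_epoch(), get_last_good_epoch(), get_ahm_time();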

How to Check the Size of Tables in Vertica

Vertica stores table data in compressed format. To get the size of a table in Vertica, you can use the queries below. Using the column_storage and projection_storage system tables, you get the size of the table in its compressed form. You can check all the column definitions in the official Vertica documentation for the following system table:

v_monitor.projection_storage


COLUMN DEFINITION:
----------------------------------


ANCHOR_TABLE_NAME: VARCHAR
The associated table name for which information is listed.


ANCHOR_TABLE_SCHEMA: VARCHAR
The associated table schema for which information is listed.

USED_BYTES: INTEGER
The number of bytes of disk storage used by the projection.



SELECT anchor_table_schema,
anchor_table_name,
SUM(used_bytes) / (1024*1024*1024) AS TABLE_SIZE_GB
FROM   v_monitor.projection_storage
GROUP  BY anchor_table_schema,
anchor_table_name
order  by sum(used_bytes) desc;



SELECT ANCHOR_TABLE_NAME,
       PROJECTION_SCHEMA,
       ((SUM(USED_BYTES)) / 1024 / 1024 / 1024) AS TOTAL_SIZE
FROM   PROJECTION_STORAGE
WHERE  ANCHOR_TABLE_NAME = '&TABLE_NAME'
  AND  ANCHOR_TABLE_SCHEMA = '&TABLE_SCHEMA'
  AND  PROJECTION_NAME LIKE '&PROJECTION_NAME'
GROUP  BY PROJECTION_SCHEMA, ANCHOR_TABLE_NAME;



SELECT anchor_table_schema,
anchor_table_name,
SUM(used_bytes) / (1024*1024*1024) AS TABLE_SIZE_GB
FROM   v_monitor.column_storage
GROUP  BY anchor_table_schema,
anchor_table_name
order  by sum(used_bytes) desc;


How to Change MySQL Data Directory to a New Location on Linux

In this article we will explain how to change the default MySQL data directory to a different mount point or location on a CentOS/RHEL 7 server.

Sometimes there is a requirement to change the default MySQL data directory (/var/lib/mysql) to a new location based on the expected use of the database server. The default /var location might not be able to hold the incoming data, and over time the databases can run into I/O contention or a space crunch.

Thus it is a good practice to relocate the default MySQL data directory to a new location.

In this example, we are moving the data directory to the mount point /DBdata. You can also create and use a separate directory for this purpose.



Check the mount point's free space and create the new data directory
In this example we are using /DBdata as the new directory location.

[orahow@orahowdb ~]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/cl_clunode3-DBdata  500G   33M  500G   1% /DBdata
# mkdir directory_name
# chown -R mysql:mysql directory_name


STEP 1:  Check the current MySQL Data Directory Location
To identify the current data directory, log in to the MySQL server and run the command below.
[orahow@orahowdb ~]$ mysql -u root -p
Enter password:


mysql> select @@datadir;
+-----------------+
| @@datadir       |
+-----------------+
| /var/lib/mysql/ |
+-----------------+
1 row in set (0.00 sec)


You can also query the server variable directly to get the current data directory location, for example:
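mysql> SHOW VARIABLES LIKE 'datadir';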



STEP 2: Check the Current Status of MySQL Server:
[orahow@orahowdb ~]$ service mysql status
Redirecting to /bin/systemctl status mysql.service
● mysqld.service - MySQL Community Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-05-23 19:33:55 IST; 764ms ago
  Process: 13660 ExecStartPost=/usr/bin/mysql-systemd-start post (code=exited, status=0/SUCCESS)
  Process: 13644 ExecStartPre=/usr/bin/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
 Main PID: 13659 (mysqld_safe)
    Tasks: 12
   CGroup: /system.slice/mysqld.service
           ├─13659 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
           └─13825 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mysqld.log --pid-file=/var/run/...


STEP 3: To ensure data integrity, shut down the MySQL server before making changes to the data directory.
systemctl doesn't display the outcome of all service management commands, so if you want to be sure you've succeeded, use the following command:
[orahow@orahowdb ~]$ sudo systemctl stop mysqld
[orahow@orahowdb ~]$ service mysql status
[orahow@orahowdb ~]$ sudo systemctl status mysqld
.......
.......
May 23 19:34:11 orahow.oradb systemd[1]: Stopping MySQL Community Server...
May 23 19:35:34 orahow.oradb systemd[1]: Stopped MySQL Community Server.



STEP 4: Copy and Synchronize Default Directory (/var/lib/mysql) to New Location.
To copy the existing database directory to the new location, we will use the rsync (remote sync) command, which is the most commonly used command for copying and synchronizing files and directories remotely as well as locally on Linux/Unix systems.
[orahow@orahowdb ~]$ cd /var/lib/mysql
[orahow@orahowdb mysql]$ ls -lrt
total 116804
drwx------. 2 mysql mysql     4096 May 17 10:35 mysql
drwx------. 2 mysql mysql     4096 May 17 10:35 performance_schema
-rw-rw----. 1 mysql mysql       52 May 17 10:35 aria_log_control
-rw-rw----. 1 mysql mysql    16384 May 17 10:35 aria_log.00000001
-rw-r--r--  1 mysql mysql      276 May 23 15:33 RPM_UPGRADE_MARKER-LAST
-rw-r--r--  1 root  root       276 May 23 15:33 RPM_UPGRADE_HISTORY
-rw-rw----  1 mysql mysql 50331648 May 23 15:54 ib_logfile1
-rw-rw----  1 mysql mysql       56 May 23 15:54 auto.cnf
srwxrwxrwx  1 mysql mysql        0 May 23 16:01 mysql.sock
-rw-rw----. 1 mysql mysql 18874368 May 23 16:01 ibdata1
-rw-rw----  1 mysql mysql 50331648 May 23 16:01 ib_logfile0
-rw-r-----  1 mysql mysql        0 May 23 16:13 binlog.index


We have already shut down the MySQL server, so now we will copy the existing database directory to the new location with the rsync command. We will use the -a flag, which preserves the permissions and other directory properties, while -v provides verbose output so that you can follow the progress.

Note: Do not use a trailing slash on the source directory; otherwise rsync will dump the contents of the directory into the mount point instead of transferring them into a containing mysql directory.

[orahow@orahowdb DBdata]$ sudo rsync -av /var/lib/mysql /DBdata
sending incremental file list
mysql/
mysql/RPM_UPGRADE_HISTORY
mysql/RPM_UPGRADE_MARKER-LAST
mysql/aria_log.00000001
mysql/aria_log_control
mysql/auto.cnf
mysql/performance_schema/threads.frm

sent 120,636,427 bytes  received 1,962 bytes  80,425,592.67 bytes/sec
total size is 120,600,480  speedup is 1.00


Once the synchronization is complete, rename the current folder with a .bak extension and keep it until we’ve confirmed the move was successful.
[orahow@orahowdb ~]$ sudo mv /var/lib/mysql /var/lib/mysql_23rd_may_2018.bak


STEP 5: Configure and Point the Existing Data Directory to the New Location
By default, datadir is set to /var/lib/mysql in the /etc/my.cnf file. Edit this configuration file to point to the new data directory.
The necessary changes are marked below. Add a [client] block if it doesn't exist, and point the socket file to the new location.

[orahow@orahowdb ~]$ sudo vi /etc/my.cnf
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
#datadir=/var/lib/mysql
datadir=/DBdata/mysql
socket=/DBdata/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[client]
port=3306
socket=/DBdata/mysql/mysql.sock
Once done, save the configuration file (Esc, then :wq!) and exit.


STEP 6 — Finally Restart the MySQL server 
We have updated the configuration file to use the new location; now we are ready to start the MySQL server and verify the new directory location.
[orahow@orahowdb ~]$ sudo systemctl start mysqld
[orahow@orahowdb ~]$ sudo systemctl status mysqld


To make sure that the new data directory is indeed in use, start the MySQL monitor. Look at the value for the data directory again:

[orahow@orahowdb ~]$ mysql -u root -p
Enter password:
mysql> select @@datadir;
Output
+-----------------+
| @@datadir       |
+-----------------+
| /DBdata/mysql/  |
+-----------------+
1 row in set (0.01 sec)

Now that you’ve restarted MySQL and confirmed that it’s using the new location, take the opportunity to ensure that your database is fully functional. Once you’ve verified the integrity of any existing data, you can remove the backup data directory with sudo rm -Rf /var/lib/mysql_23rd_may_2018.bak.

Conclusion:
In this tutorial, we’ve moved the MySQL data directory to a new location. Although we have updated the configuration and restarted the services, sometimes there might be a port conflict, so check whether the MySQL port is already in use. If the port is already in use, find the process that is listening on the port, kill that process, and restart the MySQL server again.

Hidden SQL- Why v$sql is not displaying sql_fulltext in Oracle

Recently I came across a situation where I was not able to see the SQL text for my sql_id. The developer and application teams were struggling to fix the backlog piling up in the system during data loading. They had configured a piece of code called dbwritter (not the database writer process). This process picks files from the app server, loads them into a temporary table, and finally exchanges partitions with the main table.

For faster access, it was caching the temporary table into the keep cache before exchanging partitions with the main table. At some point the code was getting stuck, and we were struggling to find the SQL text that was blocked; every time the code got stuck there, it was not moving at all.


We were killing and restarting the whole process again and again, but every time the code got stuck at the same place. It displayed the sql_id, but it did not display the sql_fulltext.

We tried many views such as v$sql, v$session, v$sqlarea, and dba_hist_sqltext, but with no luck.

It was displaying the wait event "enq: TX - contention" but not showing any sql_text.


   SID Serial#                 A W   Sec in Wait Event                     SQL                       SQL_ID
------- ------- ------------------ - - ---------- ------------------------- ------------------------- -------------
   1637   28617             Y Y     159205 enq: TX - contention   - Not Available -  5dxybryysj4g7
                                               

Finally, after struggling through the code, I tried to fetch all the active sessions and their SQL text. From the active session output, we observed that one of the insert statements was using a hint, which was blocking the sessions.

We fetched the sql_fulltext from v$open_cursor, which earlier was not visible through v$session. This sql_id was putting the temporary table into the keep buffer cache.

Finally, we killed the session that was using the hint for the insert statement, and after that everything moved fine.

SQL> select * from v$open_cursor where sql_id like '5dxybryysj4g7';
alter TABLE XXXX storage (buffer_pool keep);


How insert statement works internally in oracle

In this post, we will see the flow and sequence of steps that Oracle follows internally for the execution of an insert statement. Below are the execution steps as they work in the Oracle architecture.





How does the insert query execution occur ?


  1. SQL*PLUS checks the syntax on client side.

  2. If the syntax is correct, the query is stamped as a valid sql statement, packaged into oci (oracle call interface) packets, and sent over the lan using tcp to the server.

  3. Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache keeps the recently executed sql statements along with their execution plan.

  6. In the library cache the server process will search from the mru (most recently used) end to the lru (least recently used) end for a match for the sql statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches with that of the query in library cache then server process need not generate an execution plan (soft parsing) but if no match is found then server process has to proceed with the generation of execution plan (hard parsing).

  7. Parsing is the process undertaken by oracle to generate an execution plan.

  8. The first step in parsing involves performing a semantic check. This is nothing but a check for the existence of the object and its structure in the database.

  9. This check is done by server process in the data dictionary cache. Here server process will ask for the definition of the object, if already available within the data dictionary cache, server process will process the check. If not available then server process will retrieve the required information from the system tablespace.

  10. After this in case of hard parsing the server process will approach the optimizer, who will read the sql statement and generate the execution plan of the query. The optimizer generates multiple execution plans during parsing.

  11. After generation of the e-plans by the optimizer, the server process will pick the best possible and most cost-effective e-plan and go to the library cache.

  12. In the library cache the server process will keep the e-plan in the library cache along with the original sql text.

  13. At this point in time the parsing ends and the execution of the sql statement will begin.

  14. After generation of e-plan server process will keep the plan in the library cache on the mru end.

  15. Thereafter the plan is picked up and execution of the insert operation will begin.

  16. Server process will bring empty blocks from the specific datafile of the tablespace in which the table exists, into which rows must be inserted.

  17. The blocks will be brought into database block buffers(or database buffer cache).

  18. The blocks will be containing no data.

  19. Then server process will bring an equal number of empty blocks from the rollback/undo tablespace. They will also be brought into the database block buffers.

  20. Server process will copy the address of the original data blocks of the userdata datafiles into the empty rollback/undo blocks.

  21. Then server process will bring a set of userdata blocks into the pga and the data will be added from the insert sql statement into user data blocks.

  22. After the insert operation is complete in the database buffer cache then dbwriter will write the data back to the respective datafiles after a certain time gap.

How update statement works internally in oracle

In this post, we will see the flow and sequence of steps that Oracle follows internally for the execution of an update statement.





How does the update query execution occur?


  1. SQL*PLUS checks the syntax on client side.

  2. If the syntax is correct, the query is stamped as a valid sql statement, packaged into oci (oracle call interface) packets, and sent over the lan using tcp to the server.

  3. Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache keeps the recently executed sql statements along with their execution plan.

  6. In the library cache the server process will search from the mru (most recently used) end to the lru (least recently used) end for a match for the sql statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches with that of the query in library cache then server process need not generate an execution plan (soft parsing) but if no match is found then server process has to proceed with the generation of execution plan (hard parsing).

  7. Parsing is the process undertaken by oracle to generate an execution plan.

  8. The first step in parsing involves performing a semantic check. This is nothing but a check for the existence of the object and its structure in the database.

  9. This check is done by server process in the data dictionary cache. Here server process will ask for the definition of the object, if already available within the data dictionary cache, server process will process the check. If not available then server process will retrieve the required information from the system tablespace.

  10. After this in case of hard parsing the server process will approach the optimizer, who will read the sql statement and generate the execution plan of the query. The optimizer generates multiple execution plans during parsing.

  11. After generation of the e-plans by the optimizer, the server process will pick the best possible and most cost-effective e-plan and go to the library cache.
  12. In the library cache the server process will keep the e-plan in the library cache along with the original sql text.

  13. At this point in time the parsing ends and the execution of the sql statement will begin.

  14. After generation of e-plan server process will keep the plan in the library cache on the mru end.

  15. Thereafter the plan is picked up by the server process and execution of the update will begin.

  16. Server process will bring the required blocks from the specific datafile of the table which has to be updated.

  17. The blocks will be brought into database block buffers(or database buffer cache).

  18. The blocks will be containing the original data of the table.

  19. Then server process will bring equal no of empty blocks from the undo tablespace and they will also be brought into the database block buffers(or database buffer cache).

  20. Server process will copy the original data from the userdata blocks into the empty rollback/undo blocks and create a before image.

  21. Then server process will bring a set of userdata blocks into the pga (program global area) and after performing filter operations the selected rows will be updated with new content.

  22. The above update process will continue until all the userdata blocks have been checked and updated.

  23. After the update operation is complete then dbwriter will write the data back to the respective datafiles after a certain time gap.



Network Wait: SQL*Net more data from client in awr report

At one of the sites, I came across a performance issue where a data loading task was taking longer than usual. After analysing the AWR report, we observed that network waits were high and the task was waiting on the event SQL*Net more data from client.

As you can see, below are AWR screenshots from two consecutive days, and on both days there were backlogs. But in the second AWR, where the average wait was 158 ms, processing was a little faster than in the first, where the average wait was 293 ms.







As observed in the AWR, the network wait occurs because the shadow process has received part of a call from the client process (for example, SQL*Plus, Pro*C, or JDBC) in the first network packet and is waiting for more data for the call to be complete. Examples are large SQL or PL/SQL blocks and insert statements with large amounts of data.

Possible causes include network latency problems, tcp_no_delay configuration issues, and large array inserts.
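
As a quick check, you can list the sessions currently waiting on this event with a query like the following (a minimal sketch against v$session):

SQL> SELECT sid, serial#, sql_id, seconds_in_wait
     FROM   v$session
     WHERE  event = 'SQL*Net more data from client';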

How select statement works internally in oracle

In this post we will discuss the order and flow of execution of a select statement in Oracle.

To write a select query against an Oracle database we require an Oracle client installation on the client system. The Oracle client is nothing but Oracle SQL*Plus. Whenever we give the username, password and host string to the SQL*Plus client, it takes the host string and looks up a file known as tnsnames.ora (Transparent Network Substrate).

This file is located at $ORACLE_HOME/network/admin/tnsnames.ora.
The Oracle client is also installed when we install Developer Forms or JDeveloper. The tnsnames file keeps the host string or alias, which points to a connect descriptor. The descriptor keeps the IP address of the Oracle server, the port number of the listener and the SID of the database. Using these details SQL*Plus dispatches the given username and password to the above address. The database authenticates the user and, if successful, a server process is initiated on the server side and a user process is initiated on the client side. After this a valid session is established between the client and the server. The user then types a query at the SQL prompt.






Below is the select query execution flow in Oracle:


  1.  SQL*PLUS checks the syntax on client side.

  2.  If the syntax is correct, the query is stamped as a valid sql statement, packaged into oci (oracle call interface) packets and sent via lan using tcp to the server.

  3.  Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache will keep the recently executed sql statements along with their execution plan.

  6.  In the library cache the server process will search from the mru (most recently used) end to the lru (least recently used) end for a match for the sql statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches with that of the query in library cache then server process need not generate an execution plan (soft parsing) but if no match is found then server process has to proceed with the generation of execution plan (hard parsing).

  7.  Parsing is the process undertaken by oracle to generate an execution plan.

  8.  The first step in parsing involves performing a semantic check. This is nothing but check for the existence of the object and its structure in the database.

  9.  This check is done by server process in the data dictionary cache. Here server process will ask for the definition of the object, if already available within the data dictionary cache, server process will process the check. If not available then server process will retrieve the required information from the system tablespace.

  10. After this in case of hard parsing the server process will approach the optimizer, who will read the sql statement and generate the execution plan of the query. the optimizer generates multiple execution plans during parsing.

  11. After generation of the e-plans by the optimizer, the server process will pick the best possible and most cost-effective e-plan and go to the library cache.

  12. In the library cache the server process will keep the e-plan in the library cache along with the original sql text.

  13. At this point in time the parsing ends and the execution of the sql statement will begin.

  14. Server Process will then go to the database buffer cache and checks whether the data required by the query is already available or not in the cache.

  15. If available that data can be returned to the client else it brings the data from the database files.

  16. If sorting and filtering is required  by the query then the pga is utilized along with the temporary tablespace for performing sort run.

  17. After sort run the data is returned to the client and sql*plus client will convert the given data to ascii format and display the data in a tabular format to the users.



How to Enable/Disable ARCHIVELOG Mode in Oracle 11g/12c

When you run the database in NOARCHIVELOG mode, you disable the archiving of the redo log. If you want to take online (hot) backups of the database using RMAN, your database must be in ARCHIVELOG mode.

A database backup, together with online and archived redo log files, guarantees that you can recover all committed transactions in the event of an OS or disk failure.
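
You can also quickly check the current log mode from the data dictionary (an alternative to the archive log list command used below):

SQL> SELECT log_mode FROM v$database;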


Below are the steps required to enable archive log mode on an Oracle 10g/11g or 12c database.


Verify the database log mode.

[oracle@orahow ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Feb 03 04:05:02 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     25
Current log sequence           27


From the above output you can see that your database is in No Archive Mode. Note that the archive destination is USE_DB_RECOVERY_FILE_DEST. You can determine the path by looking at the parameter DB_RECOVERY_FILE_DEST.


SQL> show parameter recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/app/oracle/flash_recovery
                                                 _area
db_recovery_file_dest_size           big integer 2728M


By default, archive logs will be written to the flash recovery area also called FRA. If you don't want to write archive logs to the FRA then you can set the parameter LOG_ARCHIVE_DEST_n to the new location where you wish to write the archive logs.


To set a new archive log destination, you can use the following command.

SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/orahow/arch' scope = both;

System altered.

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Current log sequence           28
SQL>


Now shut down the database and bring it back up in mount mode.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area  606806016 bytes
Fixed Size                  1376268 bytes
Variable Size             394268660 bytes
Database Buffers          205520896 bytes
Redo Buffers                5640192 bytes
Database mounted.
SQL>

Now set the database in archive log mode

SQL> alter database archivelog;

Database altered.

Finally open the database.
SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Next log sequence to archive   28
Current log sequence           28


From the above output you can see that database is in archive log mode and automatic archival is also enabled.

To confirm, you can switch the log file and see that an archived log is written to the archive log destination.


SQL> alter system switch logfile;

System altered.

SQL> host
[oracle@orahow ~]$ ls /u01/app/oracle/oradata/orahow/arch
1_28_812359664.dbf
[oracle@orahow ~]$ exit
exit


Disabling Archive Log Mode


The following are the steps required to disable archive log mode on an Oracle 10g/11g or 12c database.

Verify the database log mode.

[oracle@orahow ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Feb 03 04:05:02 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Next log sequence to archive   28
Current log sequence           28


SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> startup mount
ORACLE instance started.

Total System Global Area  606806016 bytes
Fixed Size                  1376268 bytes
Variable Size             394268660 bytes
Database Buffers          205520896 bytes
Redo Buffers                5640192 bytes
Database mounted.
SQL>


To disable archive log mode use the below command.

SQL> alter database noarchivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Current log sequence           28
SQL>
As you can see, ARCHIVELOG mode has been disabled.


Streams AQ: enqueue blocked on low memory Wait Event in Oracle

In this post, we will discuss the wait event "Streams AQ: enqueue blocked on low memory", which was captured in an Oracle AWR report.

In one environment there was a performance issue in which a schema export backup was in a hung state. Before this issue, the export normally completed in one to two hours.

To resolve this issue, we increased the SGA, but it didn't help. So we checked the wait event the query was waiting on. Finally we checked the dynamic memory components and observed that the streams pool had shrunk, due to which it was blocked on low memory.

The Oracle Streams pool is a portion of memory in the System Global Area (SGA) that is used by Oracle Streams. The Oracle Streams pool stores enqueued messages in memory, and it provides memory for capture processes and apply processes.

The Oracle Streams pool size is managed automatically when the MEMORY_TARGET, MEMORY_MAX_TARGET, or SGA_TARGET initialization parameter is set to a nonzero value. If these parameters are all set to 0 (zero), then you can specify the size of the Oracle Streams pool in bytes using the STREAMS_POOL_SIZE initialization parameter.
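
Before changing anything, you can check the currently configured Streams pool size, for example:

SQL> show parameter streams_pool_size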


SQL> select SID,WAIT_CLASS,EVENT from v$session where SADDR in (select SADDR from dba_datapump_sessions);

       SID    WAIT_CLASS           EVENT
---------- --------------------  ----------------------------------------
        38     enqueue                    Streams AQ: enqueue blocked on low memory



SQL> select component,current_size/1024/1024,last_oper_type,last_oper_time from v$sga_dynamic_components;





Here we can see that the streams pool had shrunk, due to which the job was in a hung state.

SOLUTION:


As a workaround, explicitly set streams_pool_size to a fixed (large enough) value, e.g. 150 MB (or 300 MB if needed), that will be used as a minimum value, e.g.:

CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=150m SCOPE=both;

And re-run the Export or Import Data Pump job.

If you cannot modify the STREAMS_POOL_SIZE dynamically, then you need to set the value in the spfile, and restart the database.

CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=150m SCOPE=spfile;
SHUTDOWN IMMEDIATE
STARTUP


NOTE:

If the problem is not fixed after implementing one of the above solutions, a fix for unpublished Bug 24560906 must be also installed before reporting the issue to Oracle Support.
Possible solutions for unpublished Bug 24560906 are:


Set the below parameter and restart the job.
alter system set "_disable_streams_pool_auto_tuning"=TRUE;
SHUTDOWN IMMEDIATE
STARTUP


EXPDP And IMPDP Slow Performance In 11gR2 and 12cR1 And Waits On Streams AQ: Enqueue Blocked On Low Memory (Doc ID 1596645.1)

MySQL: Can't start server: Bind on TCP/IP port: Address already in use resolved

In this article we will discuss the issues we faced after changing the MySQL data directory.

For one environment, there was a requirement to install MySQL 5.6 server on CentOS Linux, and after installation we had to change the default data directory to a new location.

Everything was fine, but during MySQL server startup it was taking a lot of time. When we checked the log, we observed the error below.


18548 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
18548 [ERROR] Do you already have another mysqld server running on port: 3306 ?
18548 [ERROR] Aborting

Below are the error logs:

[oradb@orahowdb mysql]$  tail -100f /var/log/mysqld.log
tail: cannot open ‘/var/log/mysqld.log’ for reading: Permission denied
tail: no files remaining
[oradb@orahowdb mysql]$  sudo tail -100f /var/log/mysqld.log
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_CMP'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_LOCKS'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_TRX'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'InnoDB'
2018-05-23 20:18:10 17743 [Note] InnoDB: FTS optimize thread exiting.
2018-05-23 20:18:10 17743 [Note] InnoDB: Starting shutdown...
2018-05-23 20:18:12 17743 [Note] InnoDB: Shutdown completed; log sequence number 1602387
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'BLACKHOLE'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'ARCHIVE'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MRG_MYISAM'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MyISAM'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MEMORY'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'CSV'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'sha256_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'mysql_old_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'mysql_native_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'binlog'
2018-05-23 20:18:12 17743 [Note] /usr/sbin/mysqld: Shutdown complete

180523 20:18:12 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
180523 20:21:33 mysqld_safe Logging to '/var/log/mysqld.log'.
180523 20:21:33 mysqld_safe Starting mysqld daemon with databases from /DBdata/mysql
2018-05-23 20:21:33 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-05-23 20:21:33 0 [Note] /usr/sbin/mysqld (mysqld 5.6.40) starting as process 18548 ...
2018-05-23 20:21:33 18548 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)

2018-05-23 20:21:33 18548 [Warning] Buffered warning: Changed limits: table_open_cache: 431 (requested 2000)

2018-05-23 20:21:33 18548 [Note] Plugin 'FEDERATED' is disabled.
2018-05-23 20:21:33 18548 [Note] InnoDB: Using atomics to ref count buffer pool pages
2018-05-23 20:21:33 18548 [Note] InnoDB: The InnoDB memory heap is disabled
2018-05-23 20:21:33 18548 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-05-23 20:21:33 18548 [Note] InnoDB: Memory barrier is not used
2018-05-23 20:21:33 18548 [Note] InnoDB: Compressed tables use zlib 1.2.3
2018-05-23 20:21:33 18548 [Note] InnoDB: Using Linux native AIO
2018-05-23 20:21:33 18548 [Note] InnoDB: Using CPU crc32 instructions
2018-05-23 20:21:33 18548 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2018-05-23 20:21:33 18548 [Note] InnoDB: Completed initialization of buffer pool
2018-05-23 20:21:33 18548 [Note] InnoDB: Highest supported file format is Barracuda.
2018-05-23 20:21:33 18548 [Note] InnoDB: 128 rollback segment(s) are active.
2018-05-23 20:21:33 18548 [Note] InnoDB: Waiting for purge to start
2018-05-23 20:21:33 18548 [Note] InnoDB: 5.6.40 started; log sequence number 1602387
2018-05-23 20:21:33 18548 [Note] Server hostname (bind-address): '*'; port: 3306
2018-05-23 20:21:33 18548 [Note] IPv6 is available.
2018-05-23 20:21:33 18548 [Note]   - '::' resolves to '::';
2018-05-23 20:21:33 18548 [Note] Server socket created on IP: '::'.
2018-05-23 20:21:33 18548 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
2018-05-23 20:21:33 18548 [ERROR] Do you already have another mysqld server running on port: 3306 ?
2018-05-23 20:21:33 18548 [ERROR] Aborting


Finally, we checked the mysql TCP/IP port[3306]:

[root@orahowdb ~]# lsof -i TCP:3306
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  17915 mysql   10u  IPv6  55522      0t0  TCP *:mysql (LISTEN)


We can see that mysqld is already listening on port 3306, which was causing the problem. Finally we checked the pid of mysqld and killed the process.

[root@orahowdb ~]# ps -ef|grep mysql
root     17830     1  0 16:01 ?        00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/####
mysql    17915 17830  0 16:01 ?        00:00:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log
root     18869 15338  0 20:23 pts/2    00:00:00 sudo tail -100f /var/log/mysqld.log
root     18870 18869  0 20:23 pts/2    00:00:00 tail -100f /var/log/mysqld.log
root     20223 19766  0 20:31 pts/2    00:00:00 grep --color=auto mysql

[root@orahowdb ~]# netstat -lp | grep 3306

[root@blrsubjiradb ~]# lsof -i TCP:3306
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  17915 mysql   10u  IPv6  55522      0t0  TCP *:mysql (LISTEN)

[root@orahowdb ~]# kill -9 17915

[root@orahowdb ~]# lsof -i TCP:3306

[root@orahowdb ~]# ps -ef|grep mysql
root     18869 15338  0 20:23 pts/2    00:00:00 sudo tail -100f /var/log/mysqld.log
root     18870 18869  0 20:23 pts/2    00:00:00 tail -100f /var/log/mysqld.log
root     20242 19766  0 20:32 pts/2    00:00:00 grep --color=auto mysql
[root@orahowdb ~]#


After killing the process, finally we restarted the database normally.

[RESOLVED]: ERROR 1558 (HY000): Column count of mysql.user is wrong. Expected 43, found 42. Created with MySQL 50556, now running 50640

In this article, we will discuss an issue we faced after MySQL database server installation.

Recently there was a requirement to install a MySQL server and to create a few MySQL databases. As per the requirement, we created the databases, but during user creation we faced the error below.

ERROR 1558 (HY000): Column count of mysql.user is wrong. Expected 43, found 42. Created with MySQL 50556, now running 50640. Please use mysql_upgrade to fix this error.


Below are the error logs and sequence of steps we followed to resolve this error:

[oradb@orahowdb mysql]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.6.40 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| oradb              |
| mysql              |
| performance_schema |
+--------------------+
6 rows in set (0.00 sec)

mysql> use oradb
Database changed
mysql>

mysql> create user orahow identified by '####';
ERROR 1558 (HY000): Column count of mysql.user is wrong. Expected 43, found 42. Created with MySQL 50556, now running 50640. Please use mysql_upgrade to fix this error.
mysql>

mysql> exit

[oradb@orahowdb mysql]$
[oradb@orahowdb mysql]$ mysql_upgrade -u root -p
Enter password:
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/DBdata/mysql/mysql.sock'
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/DBdata/mysql/mysql.sock'
Warning: Using a password on the command line interface can be insecure.
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.event                                        OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.host                                         OK
mysql.ndb_binlog_index                             OK
mysql.plugin                                       OK
mysql.proc                                         OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.servers                                      OK
mysql.slow_log                                     OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
Running 'mysql_fix_privilege_tables'...
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/DBdata/mysql/mysql.sock'
Warning: Using a password on the command line interface can be insecure.
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/DBdata/mysql/mysql.sock'
Warning: Using a password on the command line interface can be insecure.
OK
Could not create the upgrade info file '/DBdata/mysql/mysql_upgrade_info' in the MySQL Servers datadir, errno: 13

[oradb@orahowdb mysql]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.6.40 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql>
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| oradb             |
| mysql              |
| performance_schema |
+--------------------+
6 rows in set (0.00 sec)

mysql>
mysql>
mysql> use oradb;
Database changed

mysql>
mysql> create user orahow identified by '####';
Query OK, 0 rows affected (0.00 sec)




How to Enable/Disable a Scheduled Job in Oracle

It is not an easy task to manually deal with too many jobs. To overcome this scenario, Oracle Database provides advanced job scheduling capabilities through Oracle Scheduler. The DBMS_SCHEDULER package provides a collection of scheduling functions and procedures that are callable from any PL/SQL program.

Using Scheduler, database administrators and application developers can easily control when and where various tasks take place in the database environment. These tasks can be time consuming and complicated, so using the Scheduler can help them to improve the management and planning of these tasks.


To disable a job that has been scheduled with dbms_scheduler, first you need to identify the job_name, job status and other related information.



To check the job status:
SQL> select job_name, owner, enabled from dba_scheduler_jobs;

SQL> select job_name, enabled from DBA_SCHEDULER_JOBS WHERE job_name = 'SCHEMA_MNTC_JOB';



To Disable a job:

SQL> execute dbms_scheduler.disable('owner.job');

SQL> exec dbms_scheduler.disable('SCHEMA_MNTC_JOB');

PL/SQL procedure successfully completed.



BEGIN

  DBMS_SCHEDULER.DISABLE('SCHEMA_MNTC_JOB');

END;

/



To enable job:
SQL> exec dbms_scheduler.enable('SCHEMA_MNTC_JOB');

PL/SQL procedure successfully completed.




BEGIN

  DBMS_SCHEDULER.ENABLE('SCHEMA_MNTC_JOB');

END;

/



Again, you can check the job status using the query below:
SQL> select job_name, enabled from DBA_SCHEDULER_JOBS WHERE job_name = 'SCHEMA_MNTC_JOB';



How to Lock/Unlock Table Statistics in Oracle

There are a number of cases where you want to lock table statistics: for example, if you have a highly volatile or intermediate table, where the volume of data changes drastically over a relatively short period of time, or if you want a table not to be analyzed by the automatic statistics job so you can analyze it later.

If a table is highly volatile, locking its statistics prevents the execution plan from changing and thus helps plan stability for some period of time. That is why many DBAs prefer to unlock the stats, gather the stats, and finally lock the stats again.
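
The typical sequence looks like this (a sketch, using the same hypothetical SANCS.ORDER table as in the examples below):

SQL> exec dbms_stats.unlock_table_stats('SANCS', 'ORDER');
SQL> exec dbms_stats.gather_table_stats(ownname => 'SANCS', tabname => 'ORDER', estimate_percent => dbms_stats.auto_sample_size);
SQL> exec dbms_stats.lock_table_stats('SANCS', 'ORDER');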


How to check if table stats is locked:
SQL> SELECT stattype_locked FROM dba_tab_statistics WHERE table_name = '&TABLE_NAME' and owner = '&TABLE_OWNER';



If you try to gather statistics on a locked table, you will get the error below:

SQL> EXEC dbms_stats.gather_table_stats(ownname => 'SANCS', tabname => 'ORDER' , estimate_percent => dbms_stats.auto_sample_size);

ERROR at line 1:
ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 10640
ORA-06512: at "SYS.DBMS_STATS", line 10664
ORA-06512: at line 1


How to Lock Table Statistics?
SQL>exec dbms_stats.lock_table_stats('<schema>', '<Table>');

 Example:
exec dbms_stats.lock_table_stats('SANCS', 'ORDER');

PL/SQL procedure successfully completed.



How to Unlock Table Statistics?
SQL> exec dbms_stats.unlock_table_stats('<schema>', '<Table>');



Example:

SQL> exec dbms_stats.unlock_table_stats('SANCS', 'ORDER');

PL/SQL procedure successfully completed.





Script to Start Oracle Database Automatically on Linux

From Oracle 10gR2 onward, RAC Clusterware automatically starts and stops the ASM and Oracle database instances and listeners, so the following procedure is not necessary there. But for a single instance where RAC is not being used, this script will allow you to automate the startup and shutdown of Oracle databases on Linux after a server reboot.

Two scripts, called dbstart and dbshut, are already installed in $ORACLE_HOME/bin. However, these scripts are not executed automatically after you reboot your server. I will explain how to configure them so that Oracle services start automatically after a Linux server reboot.


How to Configure Auto Startup Script?

Below are the changes you need to perform in order to automate this script.

STEP 1: First, you need to make sure that any database instances you want to autostart are set to "Y" in the /etc/oratab file as shown below.

# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third filed indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
ora11g:/u01/app/oracle/product/11.2.0/dbhome_1:Y
ora12c:/u01/app/oracle/product/11.2.0/dbhome_1:Y

The /etc/oratab file is normally created by running the root.sh script at the end of the installation. If you don’t have the file, you can always add it to your system by creating it manually (with user root!).

STEP 2: Make an entry for ORACLE_HOME and PATH in your ~/.bash_profile.


STEP 3: Next, we are going to create two scripts under /home/oracle/scripts: ora_start.sh and ora_stop.sh.

These scripts will call dbstart and dbshut, and they also allow us to add some more actions, for example starting the Enterprise Manager Database Control or any other services you might have.
You can also create a separate directory for these scripts.

$ su - oracle
$ vi /home/oracle/scripts/ora_start.sh

#!/bin/bash


# script to start the Oracle database, listener and dbconsole


. ~/.bash_profile


# start the listener and the database

$ORACLE_HOME/bin/dbstart $ORACLE_HOME

# start the Enterprise Manager db console

#$ORACLE_HOME/bin/emctl start dbconsole

exit 0


$ vi /home/oracle/scripts/ora_stop.sh
#!/bin/bash

# script to stop the Oracle database, listener and dbconsole


. ~/.bash_profile


# stop the Enterprise Manager db console

#$ORACLE_HOME/bin/emctl stop dbconsole

# stop the listener and the database

$ORACLE_HOME/bin/dbshut $ORACLE_HOME

exit 0


You can see that inside the scripts, we are calling the .bash_profile file of the user "oracle". This is needed to set the ORACLE_HOME and PATH environment variables.



STEP 4: Give execute permission to the scripts:
$  chmod u+x ora_start.sh ora_stop.sh




STEP 5: We will now create a wrapper script that can be used to schedule as a service.
With user root, create a file called “oracle” under /etc/init.d.

$ vi /etc/init.d/oracle
#!/bin/bash
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.

# Set ORA_OWNER to the user id of the owner of the
# Oracle database in ORA_HOME.

ORA_OWNER=oracle
RETVAL=0

case "$1" in
    'start')
        # Start the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su - $ORA_OWNER -c "/home/oracle/scripts/ora_start.sh"
        touch /var/lock/subsys/oracle
        ;;
    'stop')
        # Stop the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su - $ORA_OWNER -c "/home/oracle/scripts/ora_stop.sh"
        rm -f /var/lock/subsys/oracle
        ;;
    *)
        echo $"Usage: $0 {start|stop}"
        RETVAL=1
esac
exit $RETVAL



STEP 6: Grant below permission for this script.
$ chmod 750 /etc/init.d/oracle

Note: Add the oracle home paths in .bash_profile



STEP 7: To create a service of this script, run the following command:
$ chkconfig --add oracle



STEP 8: All done. Check the script and database status by running "service oracle stop" or "service oracle start" from the command line.

$ service oracle stop
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
Stopping Oracle Enterprise Manager 11g Database Control …
… Stopped.
Processing Database instance “oratst”: log file /u01/app/oracle/product/11.2.0/db_1/shutdown.log

$ service oracle start
Processing Database instance “oratst”: log file /u01/app/oracle/product/11.2.0/db_1/startup.log
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
Starting Oracle Enterprise Manager 11g Database Control …… started.
After this, it’s time for the final test: reboot your server and check if your Oracle database is automatically started after the reboot.


Whenever the database server reboots, the configured databases will be started automatically; there is no need to start or stop them manually. If you run into any issue, you can contact us for support. We will be pleased to assist you.


How to Gather Statistics on Large Partitioned Tables in Oracle

It is difficult and time-consuming to gather statistics on large partitioned tables, especially in core domains such as telecom, where customers have to maintain call detail records in partitioned tables that are very big.

For such tables, we gather statistics on one partition, which we call the source partition, and copy those statistics to the rest of the partitions, which we call the destination partitions.

For example: if you have 366 partitions, you can gather statistics for any one partition, say P185, and then copy its statistics to the remaining partitions. Please note: choose a partition that actually contains data. There is no need to gather statistics on every partition, because Oracle distributes the data across partitions based on the partition key, so one representative partition is usually enough.


STEPS TO MAINTAIN STATISTICS ON LARGE PARTITIONED TABLES



STEP 1: Gather statistics for any one partition, say P185.

EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME' , PARTNAME => 'P185', estimate_percent => 10, method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true );

Note: Change TABLE_OWNER and TABLE_NAME to your own values. You can increase DEGREE if free parallel servers are available.


STEP 2: Generate a script for the remaining partitions, as shown below. The source partition is P185 and the destination partitions are all the remaining partitions; a sketch of a generator query follows.
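
A minimal sketch of such a generator query, assuming the partition names can be read from DBA_TAB_PARTITIONS and that P185 is the source partition (spool the output to a file and then run it):

set pagesize 0 linesize 400 heading off feedback off verify off trimspool on
spool copy_part_stats.sql

-- generate one COPY_TABLE_STATS and one LOCK_PARTITION_STATS call per destination partition
select 'exec DBMS_STATS.COPY_TABLE_STATS( ownname => ''TABLE_OWNER'', tabname => ''TABLE_NAME'', srcpartname => ''P185'', dstpartname => ''' || partition_name || ''', force => TRUE);'
       || chr(10) ||
       'exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => ''TABLE_OWNER'', tabname => ''TABLE_NAME'', partname => ''' || partition_name || ''');'
  from dba_tab_partitions
 where table_owner = 'TABLE_OWNER'
   and table_name  = 'TABLE_NAME'
   and partition_name <> 'P185'
 order by partition_name;

spool off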


STEP 3: After copying the statistics, you can lock them. Using the format below, you can generate the commands for all partitions after making the necessary changes.


exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P001', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P001');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P002', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P002');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P003', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P003');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P004', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P004');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P005', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P005');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P006', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P006');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P007', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P007');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P008', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P008');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P009', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P009');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P010', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P010');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P011', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P011');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P012', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P012');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P013', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P013');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P014', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P014');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P015', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P015');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P016', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P016');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P017', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P017');


Please feel free to contact us for support in case of any difficulties. We will be pleased to help with your queries.


Oracle Expdp/Impdp - Datapump Interview Questions/FAQs

Q1: How to export only the DDL/metadata of a table?
Ans: You can use the CONTENT=METADATA_ONLY parameter during export.
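
For example (a sketch; the directory object, schema, and file names below are placeholders):

expdp system directory=DATA_PUMP_DIR schemas=ORAHOW content=METADATA_ONLY dumpfile=orahow_ddl.dmp logfile=orahow_ddl.log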


Q2: Which memory area is used by the Data Pump process?
Ans: The Streams pool (STREAMS_POOL_SIZE). If STREAMS_POOL_SIZE is 0, you will probably get memory-related errors. Check this parameter and set it to a minimum of 96M.

show parameter STREAMS_POOL_SIZE

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------
streams_pool_size                    big integer 96M
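
If it is currently 0, a minimal change (assuming an spfile is in use) could be:

ALTER SYSTEM SET STREAMS_POOL_SIZE=96M SCOPE=BOTH;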



Q3: How to improve datapump performance so that export/import happens faster?
Ans:  
  • Allocate STREAMS_POOL_SIZE memory. The query below generates a recommended setting for this parameter.
select 'ALTER SYSTEM SET STREAMS_POOL_SIZE=' || (max(to_number(trim(c.ksppstvl))) + 67108864) || ' SCOPE=SPFILE;'
  from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
 where a.indx = b.indx and a.indx = c.indx
   and lower(a.ksppinm) in ('__streams_pool_size', 'streams_pool_size');

ALTER SYSTEM SET STREAMS_POOL_SIZE=<value generated above> SCOPE=SPFILE;

  • Use CLUSTER=N: in a RAC environment, this can improve the speed of Data Pump API based operations.
  • Set PARALLEL_FORCE_LOCAL to TRUE, keeping in mind that PARALLEL_FORCE_LOCAL has a wider scope of effect than just Data Pump API based operations.
  • Use EXCLUDE=STATISTICS: excluding the generation and export of statistics shortens the time needed for any export operation. The DBMS_STATS.GATHER_DATABASE_STATS procedure can then be run on the target database once the import has completed.
  • Use PARALLEL: if more than one CPU is available, the environment is not already CPU, I/O, or memory bound, and multiple dump files are used (ideally on different spindles) in the DUMPFILE parameter, then parallelism has the greatest potential to improve performance. A sample parfile combining several of these options follows this list.
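
As a sketch, an export parfile combining several of these options might look like this (the directory, schema, and file names are placeholders; %U lets Data Pump create multiple dump files for the parallel workers):

directory=OH_EXP_DIR
schemas=ORAHOW
dumpfile=orahow_%U.dmp
logfile=orahow_exp.log
parallel=4
cluster=N
exclude=STATISTICS
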
Q4: How to monitor the status of export/import (Data Pump) operations/jobs?
Ans: You can easily monitor the status from DBA_DATAPUMP_JOBS. You can use the queries below.


set linesize 200
set pagesize 200
col owner_name format a12
col job_name format a20
col operation format a12
col job_mode format a20
SELECT 
owner_name, 
job_name, 
operation, 
job_mode, 
state 
FROM 
dba_datapump_jobs
where 
state='EXECUTING';

SELECT   w.sid, w.event, w.seconds_in_wait
FROM   V$SESSION s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w
WHERE   s.saddr = d.saddr AND s.sid = w.sid;

SELECT 
OPNAME, 
SID, 
SERIAL#, 
CONTEXT, 
SOFAR, 
TOTALWORK,
    ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
FROM 
V$SESSION_LONGOPS
WHERE 
OPNAME in
(
select 
d.job_name
from 
v$session s, 
v$process p, 
dba_datapump_sessions d
where 
p.addr=s.paddr 
and 
s.saddr=d.saddr
)
AND 
OPNAME NOT LIKE '%aggregate%'
AND 
TOTALWORK != 0
AND 
SOFAR <> TOTALWORK;


Q5: How to stop/start/kill datapump jobs?
Ans: expdp "/ as sysdba" attach=job_name
export> status
export> stop_job
export> start_job
export> kill_job

You can also kill a job's sessions with the ALTER SYSTEM KILL SESSION command; the SID and SERIAL# come from the monitoring queries above.

alter system kill session 'SID,SERIAL#' immediate;


Q6: How will you take a consistent export backup? What is the use of FLASHBACK_SCN?
Ans: FLASHBACK_SCN exports all tables as they existed at the specified SCN, so the dump reflects a single consistent point in time. To take a consistent export backup, you can use the method below:

SQL:  
           select to_char(current_scn) from v$database;

Expdp parfile content:
---------------------------

directory=OH_EXP_DIR 
dumpfile=exporahow_4apr_<yyyymmdd>.dmp 
logfile=exporahow_4apr_<yyyymmdd>.log 
schemas=ORAHOW 
flashback_scn=<<current_scn>>


Q7: How to drop constraints before import?
Ans: 

set feedback off
set heading off
set pagesize 0
spool /oradba/orahow/drop_constraints.sql

select 'alter table SCHEMA_NAME.' || table_name || ' drop constraint ' || constraint_name || ' cascade;'
  from dba_constraints
 where owner = 'SCHEMA_NAME'
   and constraint_type <> 'C'
 order by table_name, constraint_name;

spool off
exit

Then run the generated script: SQL> @/oradba/orahow/drop_constraints.sql

Q8: I exported a metadata/DDL-only dump file from production, but during import on the test machine it consumes a huge amount of space, and we may not have that much disk available. Why does a DDL-only import consume so much space?
Ans: Below is a snippet of the DDL of one table extracted from production. As you can see, during table creation Oracle allocates the INITIAL extent specified in the DDL, as shown below.

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
  STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

As you can see above, Oracle allocates the INITIAL extent (here 1342177280 bytes, roughly 1.25 GB) for this table even when the row count is zero.
To avoid this, make sure the DEFERRED_SEGMENT_CREATION parameter is set to TRUE (it defaults to TRUE from 11.2 onwards), so that segments are created only when the first row is inserted.
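
Alternatively, as a sketch (directory and file names are placeholders), you can ask Data Pump to drop the segment and storage attributes at import time so the target database uses its own tablespace defaults:

impdp system directory=DATA_PUMP_DIR dumpfile=orahow_ddl.dmp logfile=orahow_imp.log transform=SEGMENT_ATTRIBUTES:N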


Q9: If you don't have sufficient disk space on the database server, how will you take the export? Or, how do you export without a dump file?
Ans: You can use a network link (database link). You can use NETWORK_LINK by following these simple steps:

Create a TNS entry for the remote database in your tnsnames.ora file.
Test it with tnsping <tns_alias>.
Create a database link to the remote database.
Specify the database link as NETWORK_LINK in your expdp or impdp command (a sketch follows below).
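
A minimal sketch (the link name, TNS alias, directory, and credentials are placeholders):

-- on the database where expdp/impdp will run
CREATE DATABASE LINK source_db CONNECT TO system IDENTIFIED BY password USING 'SOURCE_TNS';

-- export over the link (the dump file is still written on the local server)
expdp system directory=OH_EXP_DIR network_link=source_db schemas=ORAHOW dumpfile=orahow.dmp logfile=orahow_exp.log

-- or import directly over the link, with no dump file at all
impdp system directory=OH_EXP_DIR network_link=source_db schemas=ORAHOW logfile=orahow_imp.log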

Q10: Tell me some of the parameters you have used during export?
Ans:

CONTENT:         Specifies data to unload where the valid keywords are:
                             (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY       Directory object to be used for dumpfiles and logfiles.
DUMPFILE         List of destination dump files (expdat.dmp),
                             e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
TABLES                Identifies a list of tables to export - one schema only.
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.


Q11: You are getting an undo tablespace error during import. How will you avoid it?
Ans: The legacy imp utility supports the COMMIT=Y option to commit after each array insert. Data Pump (impdp) has no COMMIT parameter, so size the undo tablespace adequately or break the import into smaller pieces (for example, schema by schema or table by table).


Q12: Can we import an 11g dump file into a 10g database using Data Pump?
Ans: Yes, provided the export from 11g was taken with the VERSION parameter set to the target release (for example, VERSION=10.2).
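
For example, on the 11g source (a sketch with placeholder names):

expdp system directory=DATA_PUMP_DIR schemas=ORAHOW version=10.2 dumpfile=orahow_10g.dmp logfile=orahow_10g.log

The resulting dump file can then be imported into the 10.2 database with impdp.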
