Hidden SQL - Why v$sql is not displaying sql_fulltext in Oracle

Recently I came across a situation where I was not able to see the SQL text for my sql_id. The developers and application team were struggling to fix the backlog piling up in the system during data loading. They had configured a piece of code called dbwriter (not the database writer background process). This process picks up a file from the app server, loads it into a temporary table, and finally exchanges the partition with the main table.

For faster access, it was caching the temporary table into the keep cache before exchanging the partition with the main table, but at some point the code was getting stuck and not moving at all, and we were struggling to find the SQL text of the statement that was blocked.


We were killing and restarting the whole process again and again, but every time the code got stuck at the same place. It displayed the sql_id but not the sql_fulltext.

We tried many views such as v$sql, v$session, v$sqlarea and dba_hist_sqltext, but no luck.

It was displaying the wait event "enq: TX - contention" but not showing any SQL text.


    SID SERIAL# A W SEC_IN_WAIT EVENT                 SQL               SQL_ID
------- ------- - - ----------- --------------------- ----------------- -------------
   1637   28617 Y Y      159205 enq: TX - contention  - Not Available - 5dxybryysj4g7

Finally, after digging through the code, I tried to fetch all the active sessions along with their SQL text. From the active session output, we observed that one of the insert statements was using a hint, and because of that it was blocking the other sessions.

We fetched the SQL text from v$open_cursor, which v$session had not been showing earlier. This sql_id was putting the temporary table into the keep buffer cache.

Finally we killed the session that was running the hinted insert statement, and after that everything moved fine.

SQL> select * from v$open_cursor where sql_id like '5dxybryysj4g7';
alter TABLE XXXX storage (buffer_pool keep);
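
If the SID of the stuck session is known, v$open_cursor can also be joined to v$session to pull the cursor text for that session directly. The query below is only a minimal sketch; the bind variable :blocked_sid stands for the SID seen in the wait output and is illustrative. Note that v$open_cursor.sql_text holds only the first part of each statement, which was still enough here to identify the blocking alter table command.

SQL> select s.sid, s.serial#, s.event, o.sql_id, o.sql_text
       from v$session s
       join v$open_cursor o on o.sid = s.sid
      where s.sid = :blocked_sid;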


How insert statement works internally in Oracle

In this post, we will see the flow and sequence of steps which Oracle follows internally for the execution of an insert statement.





How does the insert query execution occur?


  1. SQL*PLUS checks the syntax on client side.

  2. If the syntax is correct, the query is stamped as a valid SQL statement, packaged into OCI (Oracle Call Interface) packets, and sent over the LAN using TCP to the server.

  3. Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache keeps the recently executed sql statements along with their execution plan.

  6. In the library cache the server process will search from the MRU (most recently used) end to the LRU (least recently used) end for a match for the SQL statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches that of a query already in the library cache, the server process need not generate an execution plan (soft parsing), but if no match is found then the server process has to proceed with the generation of an execution plan (hard parsing).

  7. Parsing is the process undertaken by oracle to generate an execution plan.

  8. The first step in parsing involves performing a semantic check. This is nothing but a check for the existence of the object and its structure in the database.

  9. This check is done by the server process in the data dictionary cache. Here the server process will ask for the definition of the object; if it is already available within the data dictionary cache, the server process will complete the check. If not available, the server process will retrieve the required information from the SYSTEM tablespace.

  10. After this, in the case of hard parsing, the server process will approach the optimizer, which will read the SQL statement and generate the execution plan of the query. The optimizer generates multiple execution plans during parsing.

  11. After the generation of the execution plans by the optimizer, the server process will pick the best, lowest-cost plan and go to the library cache.

  12. In the library cache the server process will keep the execution plan along with the original SQL text.

  13. At this point in time the parsing ends and the execution of the sql statement will begin.

  14. After generation of e-plan server process will keep the plan in the library cache on the mru end.

  15. Thereafter the plan is picked up and execution of the insert operation will begin.

  16. The server process will bring empty blocks from the specific datafile of the tablespace in which the table exists, into which the rows must be inserted.

  17. The blocks will be brought into database block buffers(or database buffer cache).

  18. The blocks will be containing no data.

  19. Then the server process will bring an equal number of empty blocks from the rollback/undo tablespace. They will also be brought into the database block buffers.

  20. Server process will copy the address of the original data blocks of the userdata datafiles into the empty rollback/undo blocks.

  21. Then the server process will bring a set of userdata blocks into the PGA, and the data from the insert SQL statement will be added into the userdata blocks.

  22. After the insert operation is complete in the database buffer cache, DBWR (the database writer) will write the data back to the respective datafiles after a certain time gap.
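
Whether step 6 results in a soft or a hard parse can be observed from the instance statistics. The query below is only a minimal sketch using v$sysstat; run it before and after executing a new insert statement and compare the deltas (on a busy system these counters move for all sessions, so the numbers are only indicative).

SQL> select name, value
       from v$sysstat
      where name in ('parse count (total)', 'parse count (hard)');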

How update statement works internally in Oracle

In this post, we will see the flow and sequence of steps which Oracle follows internally for the execution of an update statement.





How does the update query execution occur?


  1. SQL*PLUS checks the syntax on client side.

  2. If the syntax is correct, the query is stamped as a valid SQL statement, packaged into OCI (Oracle Call Interface) packets, and sent over the LAN using TCP to the server.

  3. Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache keeps the recently executed sql statements along with their execution plan.

  6. In the library cache the server process will search from the MRU (most recently used) end to the LRU (least recently used) end for a match for the SQL statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches that of a query already in the library cache, the server process need not generate an execution plan (soft parsing), but if no match is found then the server process has to proceed with the generation of an execution plan (hard parsing).

  7. Parsing is the process undertaken by oracle to generate an execution plan.

  8. The first step in parsing involves performing a semantic check. This is nothing but a check for the existence of the object and its structure in the database.

  9. This check is done by the server process in the data dictionary cache. Here the server process will ask for the definition of the object; if it is already available within the data dictionary cache, the server process will complete the check. If not available, the server process will retrieve the required information from the SYSTEM tablespace.

  10. After this, in the case of hard parsing, the server process will approach the optimizer, which will read the SQL statement and generate the execution plan of the query. The optimizer generates multiple execution plans during parsing.

  11. After the generation of the execution plans by the optimizer, the server process will pick the best, lowest-cost plan and go to the library cache.

  12. In the library cache the server process will keep the execution plan along with the original SQL text.

  13. At this point in time the parsing ends and the execution of the sql statement will begin.

  14. After generation of e-plan server process will keep the plan in the library cache on the mru end.

  15. Thereafter the plan is picked up by the server process and execution of the update will begin.

  16. The server process will bring the required blocks of the table to be updated from the specific datafile.

  17. The blocks will be brought into database block buffers(or database buffer cache).

  18. The blocks will be containing the original data of the table.

  19. Then the server process will bring an equal number of empty blocks from the undo tablespace, and they will also be brought into the database block buffers (or database buffer cache).

  20. Server process will copy the original data from the userdata blocks into the empty rollback/undo blocks and create a before image.

  21. Then server process will bring a set of userdata blocks into the pga (program global area) and after performing filter operations the selected rows will be updated with new content.

  22. The above update process will continue until all the userdata blocks have been checked and updated.

  23. After the update operation is complete, DBWR (the database writer) will write the data back to the respective datafiles after a certain time gap.
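
The undo described in steps 19 and 20 can be watched building up while an update runs. The query below is a minimal sketch: v$transaction.ses_addr joins to v$session.saddr, and used_ublk/used_urec show the undo blocks and records the transaction has generated so far.

SQL> select s.sid, s.serial#, t.used_ublk, t.used_urec
       from v$transaction t
       join v$session s on s.saddr = t.ses_addr;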



Network Wait: SQL*Net more data from client in AWR report

At one of the sites, I came across a performance issue where a data loading task was taking longer than usual. After analysing the AWR report, we observed that network waits were high and the task was waiting on the event SQL*Net more data from client.

Below are AWR screenshots from two consecutive days, and both days show backlogs. But in the second AWR, where the average wait is 158 ms, processing was a little faster than in the first, where the average wait was 293 ms.







As observed in the AWR, the network wait occurs because the shadow process has received part of a call from the client process (for example, SQL*Plus, Pro*C, or JDBC) in the first network packet and is waiting for more data for the call to be complete. Examples are large SQL or PL/SQL blocks and insert statements with large amounts of data.

The possible causes are network latency problems, TCP no-delay (Nagle) configuration issues, and large array inserts.
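
To see which sessions are currently accumulating this wait, v$session_event can be queried for the event name. A minimal sketch, assuming the event name exactly as it appears in the AWR report:

SQL> select sid, total_waits, time_waited, average_wait
       from v$session_event
      where event = 'SQL*Net more data from client'
      order by time_waited desc;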

How select statement works internally in Oracle

In this post we will discuss the order, or flow, of execution of a select statement in Oracle.

To write a select query against an Oracle database we require an Oracle client installation on the client system. The Oracle client here is nothing but Oracle SQL*Plus. Whenever we give the username, password and host string to the SQL*Plus client, it takes the host string and looks it up in a file known as tnsnames.ora (TNS stands for Transparent Network Substrate).

This file is located at $ORACLE_HOME/network/admin/tnsnames.ora.
The Oracle client is also installed when we install Developer Forms or JDeveloper. The tnsnames.ora file keeps the host string, or alias, which points to a connect descriptor. The descriptor keeps the IP address of the Oracle server, the port number of the listener and the SID of the database. Using these details SQL*Plus dispatches the given username and password to the above address. The database authenticates the user and, if successful, a server process is started on the server side and a user process on the client side. After this a valid session is established between the client and the server, and the user types a query at the SQL prompt.
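
For reference, a typical tnsnames.ora entry ties an alias to the listener address and database. The entry below is only an illustrative sketch; the alias ORAHOW, the host name, the port and the SID are placeholders, not values from this environment.

ORAHOW =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SID = orahow)
    )
  )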






Below is the select query execution flow in Oracle:


  1.  SQL*PLUS checks the syntax on client side.

  2.  If the syntax is correct, the query is stamped as a valid SQL statement, packaged into OCI (Oracle Call Interface) packets, and sent over the LAN using TCP to the server.

  3.  Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache will keep the recently executed sql statements along with their execution plan.

  6.  In the library cache the server process will search from the MRU (most recently used) end to the LRU (least recently used) end for a match for the SQL statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches that of a query already in the library cache, the server process need not generate an execution plan (soft parsing), but if no match is found then the server process has to proceed with the generation of an execution plan (hard parsing).

  7.  Parsing is the process undertaken by oracle to generate an execution plan.

  8.  The first step in parsing involves performing a semantic check. This is nothing but check for the existence of the object and its structure in the database.

  9.  This check is done by the server process in the data dictionary cache. Here the server process will ask for the definition of the object; if it is already available within the data dictionary cache, the server process will complete the check. If not available, the server process will retrieve the required information from the SYSTEM tablespace.

  10. After this, in the case of hard parsing, the server process will approach the optimizer, which will read the SQL statement and generate the execution plan of the query. The optimizer generates multiple execution plans during parsing.

  11. After the generation of the execution plans by the optimizer, the server process will pick the best, lowest-cost plan and go to the library cache.

  12. In the library cache the server process will keep the execution plan along with the original SQL text.

  13. At this point in time the parsing ends and the execution of the SQL statement will begin.

  14. The server process will then go to the database buffer cache and check whether the data required by the query is already available in the cache.

  15. If available, that data can be returned to the client; otherwise it brings the data in from the database files.

  16. If sorting or filtering is required by the query then the PGA is utilized along with the temporary tablespace for performing sort runs.

  17. After the sort run the data is returned to the client, and the SQL*Plus client will convert the given data to ASCII format and display it in a tabular format to the user.
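
Once a statement has been parsed and cached (steps 11 to 13), the plan that the server process stored in the library cache can be inspected from the same session. A minimal sketch using dbms_xplan; display_cursor(null, null) shows the plan of the last statement executed in the session.

SQL> select * from table(dbms_xplan.display_cursor(null, null));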



How to Enable/Disable ARCHIVELOG Mode in Oracle 11g/12c

When you run the database in NOARCHIVELOG mode, you disable the archiving of the redo log. If you want to take an online (hot) backup of the database using RMAN, then your database must be in ARCHIVELOG mode.

A database backup, together with online and archived redo log files, guarantees that you can recover all committed transactions in the event of an OS or disk failure.


Below are the steps required to enable archive log mode on an Oracle 10g/11g or 12c database.


Verify the database log mode.

[oracle@orahow ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Feb 03 04:05:02 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     25
Current log sequence           27


From the above output you can see that your database is in No Archive Mode. Note that the archive destination is USE_DB_RECOVERY_FILE_DEST. You can determine the path by looking at the parameter DB_RECOVERY_FILE_DEST.


SQL> show parameter recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/app/oracle/flash_recovery
                                                 _area
db_recovery_file_dest_size           big integer 2728M


By default, archive logs will be written to the flash recovery area, also called the FRA. If you don't want to write archive logs to the FRA, then you can set the parameter LOG_ARCHIVE_DEST_n to the new location where you wish to write the archive logs.


To set a new archive log destination, you can use the following command.

SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/orahow/arch' scope = both;

System altered.

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Current log sequence           28
SQL>


Now we shut down the database and bring it back up in mount mode.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area  606806016 bytes
Fixed Size                  1376268 bytes
Variable Size             394268660 bytes
Database Buffers          205520896 bytes
Redo Buffers                5640192 bytes
Database mounted.
SQL>

Now set the database in ARCHIVELOG mode.

SQL> alter database archivelog;

Database altered.

Finally open the database.
SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Next log sequence to archive   28
Current log sequence           28


From the above output you can see that database is in archive log mode and automatic archival is also enabled.
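
Besides archive log list, the log mode can also be confirmed from v$database (a quick sketch); at this point the query should return ARCHIVELOG.

SQL> select log_mode from v$database;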

To confirm this, you can switch the log file and check that an archive log is written to the archive log location.


SQL> alter system switch logfile;

System altered.

SQL> host
[oracle@orahow ~]$ ls /u01/app/oracle/oradata/orahow/arch
1_28_812359664.dbf
[oracle@orahow ~]$ exit
exit


Disabling Archive Log Mode


The following are the steps required to disable archive log mode on an Oracle 10g/11g or 12c database.

Verify the database log mode.

[oracle@orahow ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Feb 03 04:05:02 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Next log sequence to archive   28
Current log sequence           28


SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> startup mount
ORACLE instance started.

Total System Global Area  606806016 bytes
Fixed Size                  1376268 bytes
Variable Size             394268660 bytes
Database Buffers          205520896 bytes
Redo Buffers                5640192 bytes
Database mounted.
SQL>


To disable archive log mode use the below command.

SQL> alter database noarchivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Current log sequence           28
SQL>
As you can see, ARCHIVELOG mode has been disabled.


Streams AQ: enqueue blocked on low memory Wait Event in Oracle

In this post, we will discuss the wait event "Streams AQ: enqueue blocked on low memory" which was captured in the Oracle AWR report.

In one of the environments there was a performance issue in which a schema export backup was hanging. Before this issue, the export would normally complete in one to two hours.

To resolve this issue, we increased the SGA but it didn't help. So we checked the wait event on which the job was waiting. Finally we checked the dynamic memory components and observed that the streams pool had shrunk, due to which the job was blocked on low memory.

The Oracle Streams pool is a portion of memory in the System Global Area (SGA) that is used by Oracle Streams. The Oracle Streams pool stores enqueued messages in memory, and it provides memory for capture processes and apply processes.

The Oracle Streams pool size is managed automatically when the MEMORY_TARGET, MEMORY_MAX_TARGET, or SGA_TARGET initialization parameter is set to a nonzero value. If these parameters are all set to 0 (zero), then you can specify the size of the Oracle Streams pool in bytes using the STREAMS_POOL_SIZE initialization parameter.
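
To see which of these parameters governs the pool in a given instance, and how it is currently set, the values can be checked with a quick sketch like the one below.

SQL> show parameter memory_target
SQL> show parameter sga_target
SQL> show parameter streams_pool_size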


SQL> select SID,WAIT_CLASS,EVENT from v$session where SADDR in (select SADDR from dba_datapump_sessions);

       SID WAIT_CLASS           EVENT
---------- -------------------- ------------------------------------------
        38 enqueue              Streams AQ: enqueue blocked on low memory



SQL> select component,current_size/1024/1024,last_oper_type,last_oper_time from v$sga_dynamic_components;





Here we can see that the streams pool had shrunk, due to which the job was in a hung state.

SOLUTION:


As a workaround, explicitly set streams_pool_size to a fixed (large enough) value, e.g. 150 MB (or 300 MB if needed), that will be used as a minimum value:

CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=150m SCOPE=both;

And re-run the Export or Import Data Pump job.

If you cannot modify the STREAMS_POOL_SIZE dynamically, then you need to set the value in the spfile, and restart the database.

CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=150m SCOPE=spfile;
SHUTDOWN IMMEDIATE
STARTUP


NOTE:

If the problem is not fixed after implementing one of the above solutions, the fix for unpublished Bug 24560906 must also be installed before reporting the issue to Oracle Support.
Possible workarounds for unpublished Bug 24560906 are:


Set the below parameter, restart the database, and re-run the job.
alter system set "_disable_streams_pool_auto_tuning"=TRUE;
SHUTDOWN IMMEDIATE
STARTUP


EXPDP And IMPDP Slow Performance In 11gR2 and 12cR1 And Waits On Streams AQ: Enqueue Blocked On Low Memory (Doc ID 1596645.1)

MySQL: Can't start server: Bind on TCP/IP port: Address already in use resolved

In this article we will discuss the issues we faced after changing the MySQL data directory.

For one of the environments, there was a requirement to install MySQL 5.6 server on CentOS Linux, and after the installation we had to change the default data directory to a new location.

Everything was fine, but during MySQL server startup it was taking a lot of time. When we checked the log, we observed the below error.


18548 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
18548 [ERROR] Do you already have another mysqld server running on port: 3306 ?
18548 [ERROR] Aborting

Below are the error logs:

[oradb@orahowdb mysql]$  tail -100f /var/log/mysqld.log
tail: cannot open ‘/var/log/mysqld.log’ for reading: Permission denied
tail: no files remaining
[oradb@orahowdb mysql]$  sudo tail -100f /var/log/mysqld.log
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_CMP'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_LOCKS'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'INNODB_TRX'
2018-05-23 20:18:10 17743 [Note] Shutting down plugin 'InnoDB'
2018-05-23 20:18:10 17743 [Note] InnoDB: FTS optimize thread exiting.
2018-05-23 20:18:10 17743 [Note] InnoDB: Starting shutdown...
2018-05-23 20:18:12 17743 [Note] InnoDB: Shutdown completed; log sequence number 1602387
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'BLACKHOLE'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'ARCHIVE'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MRG_MYISAM'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MyISAM'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'MEMORY'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'CSV'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'sha256_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'mysql_old_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'mysql_native_password'
2018-05-23 20:18:12 17743 [Note] Shutting down plugin 'binlog'
2018-05-23 20:18:12 17743 [Note] /usr/sbin/mysqld: Shutdown complete

180523 20:18:12 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
180523 20:21:33 mysqld_safe Logging to '/var/log/mysqld.log'.
180523 20:21:33 mysqld_safe Starting mysqld daemon with databases from /DBdata/mysql
2018-05-23 20:21:33 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-05-23 20:21:33 0 [Note] /usr/sbin/mysqld (mysqld 5.6.40) starting as process 18548 ...
2018-05-23 20:21:33 18548 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)

2018-05-23 20:21:33 18548 [Warning] Buffered warning: Changed limits: table_open_cache: 431 (requested 2000)

2018-05-23 20:21:33 18548 [Note] Plugin 'FEDERATED' is disabled.
2018-05-23 20:21:33 18548 [Note] InnoDB: Using atomics to ref count buffer pool pages
2018-05-23 20:21:33 18548 [Note] InnoDB: The InnoDB memory heap is disabled
2018-05-23 20:21:33 18548 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-05-23 20:21:33 18548 [Note] InnoDB: Memory barrier is not used
2018-05-23 20:21:33 18548 [Note] InnoDB: Compressed tables use zlib 1.2.3
2018-05-23 20:21:33 18548 [Note] InnoDB: Using Linux native AIO
2018-05-23 20:21:33 18548 [Note] InnoDB: Using CPU crc32 instructions
2018-05-23 20:21:33 18548 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2018-05-23 20:21:33 18548 [Note] InnoDB: Completed initialization of buffer pool
2018-05-23 20:21:33 18548 [Note] InnoDB: Highest supported file format is Barracuda.
2018-05-23 20:21:33 18548 [Note] InnoDB: 128 rollback segment(s) are active.
2018-05-23 20:21:33 18548 [Note] InnoDB: Waiting for purge to start
2018-05-23 20:21:33 18548 [Note] InnoDB: 5.6.40 started; log sequence number 1602387
2018-05-23 20:21:33 18548 [Note] Server hostname (bind-address): '*'; port: 3306
2018-05-23 20:21:33 18548 [Note] IPv6 is available.
2018-05-23 20:21:33 18548 [Note]   - '::' resolves to '::';
2018-05-23 20:21:33 18548 [Note] Server socket created on IP: '::'.
2018-05-23 20:21:33 18548 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
2018-05-23 20:21:33 18548 [ERROR] Do you already have another mysqld server running on port: 3306 ?
2018-05-23 20:21:33 18548 [ERROR] Aborting


Finally, we checked which process was holding the MySQL TCP/IP port (3306):

[root@orahowdb ~]# lsof -i TCP:3306
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  17915 mysql   10u  IPv6  55522      0t0  TCP *:mysql (LISTEN)


We can see that a mysqld process is already listening on port 3306, which was causing the problem. Finally we checked the PID of that mysqld and killed the process.

[root@orahowdb ~]# ps -ef|grep mysql
root     17830     1  0 16:01 ?        00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/####
mysql    17915 17830  0 16:01 ?        00:00:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log
root     18869 15338  0 20:23 pts/2    00:00:00 sudo tail -100f /var/log/mysqld.log
root     18870 18869  0 20:23 pts/2    00:00:00 tail -100f /var/log/mysqld.log
root     20223 19766  0 20:31 pts/2    00:00:00 grep --color=auto mysql

[root@orahowdb ~]# netstat -lp | grep 3306

[root@blrsubjiradb ~]# lsof -i TCP:3306
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  17915 mysql   10u  IPv6  55522      0t0  TCP *:mysql (LISTEN)

[root@orahowdb ~]# kill -9 17915

[root@orahowdb ~]# lsof -i TCP:3306

[root@orahowdb ~]# ps -ef|grep mysql
root     18869 15338  0 20:23 pts/2    00:00:00 sudo tail -100f /var/log/mysqld.log
root     18870 18869  0 20:23 pts/2    00:00:00 tail -100f /var/log/mysqld.log
root     20242 19766  0 20:32 pts/2    00:00:00 grep --color=auto mysql
[root@orahowdb ~]#


After killing the stale process, we finally restarted the MySQL server normally.
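
For completeness, the restart and a quick verification looked roughly like the following. This is only a sketch assuming the SysV init script shipped with MySQL 5.6 on CentOS; on systemd-based systems the service commands differ.

[root@orahowdb ~]# service mysqld start
[root@orahowdb ~]# lsof -i TCP:3306              # confirm the new mysqld is listening
[root@orahowdb ~]# tail -50 /var/log/mysqld.log  # verify a clean startup against the new datadir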
