Friday, July 20, 2012

Useful UNIX Command For Oracle DBA

How to kill all similar processes with single command (in this case opmn)

ps -ef | grep opmn | grep -v grep | awk '{print $2}' | xargs -i kill -9 {}
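A safe dry run of the same pipeline shape on synthetic "ps"-style text: awk picks the PID column and xargs feeds each PID to a command, with "echo" standing in for "kill -9" so the sketch has no side effects. (Note: "xargs -i" is deprecated on GNU systems; "-I{}" is the current spelling.)

```shell
# Sample data is made up; awk prints field 2 (the PID column),
# xargs runs the command once per PID.
printf 'oracle 111 1 0 opmn\noracle 222 1 0 opmnctl\n' \
  | awk '{ print $2 }' \
  | xargs -I{} echo would-kill {}
# prints:
# would-kill 111
# would-kill 222
```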

Locating Files under a particular directory

find . -print |grep -i test.sql

 Using AWK in UNIX

To remove a specific column of output from a UNIX command – for example to determine the UNIX process Ids for all Oracle processes on server (second column)

ps -ef |grep -i oracle |awk '{ print $2 }'

Changing the standard prompt for Oracle Users

Edit the .profile for the oracle user

PS1="`hostname`*$ORACLE_SID:$PWD>"

 Display top 10 CPU consumers using the ps command

/usr/ucb/ps auxgw | head -11

 Show number of active Oracle dedicated connection users for a particular ORACLE_SID

ps -ef | grep $ORACLE_SID|grep -v grep|grep -v ora_|wc -l

 Display the number of CPU’s in Solaris

psrinfo -v | grep "Status of processor"|wc -l

Display the number of CPU’s in AIX

lsdev -C | grep Process|wc -l

Display RAM Memory size on Solaris

prtconf |grep -i mem

Display RAM memory size on AIX

First determine name of memory device

lsdev -C |grep mem

then assuming the name of the memory device is ‘mem0’

lsattr -El mem0

Swap space allocation and usage

Solaris : swap -s or swap -l

Aix : lsps -a

 Total number of semaphores held by all instances on server

ipcs -as | awk '{sum += $9} END {print sum}'
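The same awk idiom on synthetic input, to show the mechanics: the main block accumulates a column into "sum" on every line, and the END block prints the total once input is exhausted.

```shell
# Sum the second column of three sample lines.
printf 'a 1\nb 2\nc 3\n' | awk '{sum += $2} END {print sum}'
# prints 6
```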

View allocated RAM memory segments

ipcs -pmb

Manually deallocate shared memory segments

ipcrm -m ''

 Show mount points for a disk in AIX

lspv -l hdisk13

 Display amount of occupied space (in KB) for a file or collection of files in a directory or sub-directory

du -ks * | sort -n| tail
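Why "sort -n | tail": numeric sort puts the biggest sizes last, so tail leaves you with the top space consumers. A sketch on synthetic du-style output (file names are made up):

```shell
# Numeric sort orders by the leading size figure: 9, 20, 100.
printf '9 small.log\n100 big.dbf\n20 mid.trc\n' | sort -n | tail -1
# prints: 100 big.dbf
```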

Display total file space in a directory

du -ks .

 Cleanup any unwanted trace files more than seven days old

find . -name "*.trc" -mtime +7 -exec rm {} \;

 Locate Oracle files that contain certain strings

find . -print | xargs grep rollback

 Locate recently created UNIX files (in the past one day)

find . -mtime -1 -print
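A self-contained sketch in a throwaway directory: -mtime -1 matches files modified within the last 24 hours, and -type f keeps directories out of the output.

```shell
# Create a fresh file in a temp directory, find it, clean up.
d=$(mktemp -d)
touch "$d/fresh.trc"
find "$d" -type f -mtime -1 -print
rm -rf "$d"
```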

 Finding large files on the server (more than 100MB in size)

find . -size +204800 -print

(find's default size unit is a 512-byte block, so 100MB corresponds to +204800 blocks; +102400 would match files over 50MB.)

Crontab :

To submit a task every Tuesday (day 2) at 2:45PM

45 14 * * 2 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1

To submit a task to run every 15 minutes on weekdays (days 1-5)

0,15,30,45 * * * 1-5 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1

To submit a task to run every hour at 15 minutes past the hour on weekends (days 6 and 0)

15 * * * 0,6 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
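As a reminder, the crontab field order is minute, hour, day-of-month, month, day-of-week (0 = Sunday), then the command:

```
# min       hour  dom  month  dow  command
0,15,30,45  *     *    *      1-5  /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
```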


How to scale inserts in MySQL with Innodb Engine

Usually we expect inserts with a large number of threads to be faster, but this is not always true, due to MySQL limitations. The bottleneck might be the CPU, the I/O controller, or the OS, and MySQL cannot fully use all available cores/CPUs in a server; MySQL 5.0 (InnoDB), for example, is effectively limited to about 4 cores. While working on scaling inserts, we had two data processing clusters, each using 50 threads, for a total of 100 threads inserting data into a MySQL database (version 5.0.51). The issue: inserts were delayed by minutes and the backlog kept growing. After examining the InnoDB monitor status we found a long list of transactions waiting for the AUTO-INC lock. For example:


------- TRX HAS BEEN WAITING 9 SEC FOR THIS LOCK TO BE GRANTED:
TABLE LOCK table `alertxxxx/alertdata` trx id 0 685590474 lock mode AUTO-INC waiting
------------------
---TRANSACTION 0 685590465, ACTIVE 10 sec, process no 8457, OS thread id 1169045824 setting auto-inc lock

Why AUTO-INC lock?

When accessing the auto-increment counter, the InnoDB engine takes a special table-level AUTO-INC lock that it holds until the end of the current SQL statement, not until the end of the transaction. This effectively serializes all inserts into the same table. With single-row inserts it is normally not too bad, but it can prevent scalability with multiple inserting threads (Bug #16979). However, we can get better throughput (inserts per second) with fewer threads: after initially dropping the number of threads on both clusters by 50%, to 20 sessions each, the problem almost disappeared, and when we further reduced them to 10 sessions each, the problem disappeared entirely!

Beginning with MySQL 5.1.22, a new locking model was introduced for handling InnoDB auto-increment. There are good articles discussing this on the MySQL Site and Blog.
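The new model is controlled by the innodb_autoinc_lock_mode server option (5.1.22+); a my.cnf sketch showing the choices (mode 2 drops the table-level AUTO-INC lock for the best insert concurrency, at the cost of possibly non-consecutive auto-increment values, which is not safe with statement-based replication):

```
[mysqld]
# 0 = traditional: table-level AUTO-INC lock, pre-5.1.22 behaviour
# 1 = consecutive: the default in 5.1.22+
# 2 = interleaved: no AUTO-INC table lock, best concurrency, but values
#     may be non-consecutive; unsafe with statement-based replication
innodb_autoinc_lock_mode = 2
```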

Similarly, if you want fast insert performance, it can be worthwhile to load files (LOAD DATA INFILE) instead of running inserts one by one: it is 3 to 4 times faster.
If the goal is to load a huge amount of data that is already known at that time, it is probably the best option.

Optimization: We can't use file loading for all projects; some need to insert data row by row. For those, we can tune the following variables/configuration.

1. Optimize database structure - Design your tables to minimize their footprint on disk. This can yield large improvements by reducing the amount of data written to and read from disk. Smaller tables normally require less main memory while their contents are actively processed during query execution, and any space reduction for table data also results in smaller indexes that can be processed faster. Ensure columns have the right data types and require the least amount of storage; try to use the most precise type in all cases. For example, if an integer column holds values in the range 1 to 99999, MEDIUMINT UNSIGNED is the best type. For more information about optimizing database structure, see the MySQL Site. If you store large strings (TEXT) or BLOBs, compression may help.
Use the PROCEDURE ANALYSE() function to check the optimal data type for each column, but before changing a data type, check your future plans so the change does not create a problem later.



2. InnoDB flush method - O_DIRECT, if used, can help avoid double buffering between the InnoDB buffer pool and the operating system's filesystem cache. The MySQL reference manual explains this on the MySQL Site.

[Warning]
           O_DIRECT serializes writes on ext3. However, the impact can be lowered by using innodb_file_per_table.

3. InnoDB thread concurrency - Keep this option variable low (the default of 8 is usually fine); the correct value depends on environment and workload. This option variable is explained on the MySQL Site.

4. InnoDB buffer pool - InnoDB maintains a buffer pool for caching data and indexes in memory. Making the pool larger can improve performance by reducing the amount of disk I/O needed.

5. InnoDB log file size - The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O; but larger log files also mean slower recovery after a crash. As of MySQL 5.5, recovery times and the log flushing algorithm have been greatly improved, so in 5.5 you generally want larger log files.

6. Filesystem selection and disk issues - 'xfs' is known to perform very well with MySQL. Placing the redo logs, binary logs, and data files on different physical disks is also good practice, often with a bigger gain than server configuration changes. RAID 10 is recommended for best performance; more detail about disk issues can be found at the MySQL Site.
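Pulling items 2 through 5 together, a my.cnf sketch; the sizes shown are hypothetical and must be tuned to the actual server (the buffer pool is commonly 50-80% of RAM on a dedicated database host):

```
[mysqld]
innodb_flush_method       = O_DIRECT  # avoid double buffering (item 2)
innodb_thread_concurrency = 8         # keep low; workload dependent (item 3)
innodb_buffer_pool_size   = 4G        # hypothetical; size to available RAM (item 4)
innodb_log_file_size      = 256M      # larger = less flushing, slower recovery (item 5)
```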

Thursday, July 19, 2012

Replicate a Database with different Name in MySQL

It happened to me for the first time, so I was very excited about it, until I noticed that this option has been with us nearly from the very beginning (i.e. MySQL 4). We needed to replicate a database under a different name. I was expecting a big task :) but finally found the easy solution, and it is working now. :)

The Situation
Master Server contains db1
Slave Server contains db1 and db2
Slave Server needs db1 from the Master to replicate to db2

The Solution
On my.cnf on the slave server:
--replicate-rewrite-db=from_name->to_name

Thus, this situation would dictate the following:
--replicate-rewrite-db=db1->db2
Easy!

In replication there may be scenario like below
Part 1
What if the Master Server contained db1 and db2; and
the Slave needs only db1 from the Master to replicate to db2 on the Slave?

Part 2
We don't want to replicate both db1 and db2 to this slave, although we want to replicate both databases to another slave - this means that the filter for databases should reside on my.cnf on the slave...

Assuming that we are using replicate-rewrite-db=db1->db2:
Using replicate-do-db on the my.cnf file on the slave, which database would you use with that command?

db1 or db2?  It only takes a moment in time to find out... :)
 

Wednesday, July 18, 2012

Ronald Bradford blog: New security fixes for MySQL reported

6 new security fixes for Oracle MySQL have been detailed in the most current Oracle Critical Patch Update (CPU).

These are:

  • CVE-2012-1735 (5.5.23 and earlier)
  • CVE-2012-0540 (5.1.62 and earlier, 5.5.23 and earlier)
  • CVE-2012-1757 (5.5.23 and earlier )
  • CVE-2012-1756 (5.5.23 and earlier)
  • CVE-2012-1734 (5.1.62 and earlier, 5.5.23 and earlier )
  • CVE-2012-1689 (5.1.62 and earlier, 5.5.22 and earlier )

Oracle strongly recommends that customers apply CPU fixes as soon as possible. Unfortunately there is no easy description for MySQL users of what that really entails. There is a reference to the Critical Patch Update July 2012 Patch Delivery Document for Oracle Sun Products Suite, My Oracle Support Note 1446033.1, however all the information is behind having a support license. There appears to be no information easily available for the community users.

A full description of these CVEs can be found here. Unfortunately most say: "Vulnerability in the MySQL Server component of Oracle MySQL (subcomponent: Server). Supported versions that are affected are 5.5.23 and earlier. Easily exploitable vulnerability allows successful authenticated network attacks via multiple protocols. Successful attack of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server." This is effectively useless information.

There is external information that can be found at the National Vulnerability Database (not linked in the Oracle article). For example http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1735 however this does not provide any more meaningful information either.

There was a recent 5.5.25a released on 2012-07-05, however this, 5.5.25, 5.5.24, and the yet to be released 5.5.26 release notes provide no information about these security issues.

While security is important for a database and system administrator, on first inspection the information provided does not offer an easy way to assess the risk and take appropriate actions.

Tuesday, July 3, 2012

MySQL Cluster Installation

MySQL Cluster is a technology that enables clustering of in-memory databases in a shared-nothing system. The shared-nothing architecture allows the system to work with very inexpensive hardware, and without any specific requirements on hardware or software. It also does not have any single point of failure because each component has its own memory and disk.

A MySQL Cluster consists of a set of computers, each running one or more processes, which may include a MySQL server, a data node, a management server, and (possibly) specialized data access programs. All these programs work together to form a MySQL Cluster.

There are three types of cluster nodes, and in a minimal MySQL Cluster configuration, there will be at least three nodes, one of each of these types:

Management node (MGM node): The role of this type of node is to manage the other nodes within the MySQL Cluster.

Data node: This type of node stores cluster data. There are as many data nodes as there are replicas, times the number of fragments.

SQL node: This is a node that accesses the cluster data. An SQL node is typically started with the command mysqld --ndbcluster or by using mysqld with the ndbcluster option added to my.cnf.


NDB stands for “Network Database.”

The setup


1. The cluster setup has four nodes, each on a separate host, and each with a fixed network address on a typical Ethernet as shown here:

Node                     IP Address    Function
Management (MGM) node    192.168.0.1   ndb_mgmd, ndb_mgm
MySQL server (SQL) node  192.168.0.2   mysqld --ndbcluster
Data (NDBD) node "A"     192.168.0.3   ndbd
Data (NDBD) node "B"     192.168.0.4   ndbd

Package requirements:

192.168.0.2 - MySQL server (SQL) node
MySQL-server-5.1.11-0.glibc23
MySQL-shared-5.1.11-0.glibc23
MySQL-client-5.1.11-0.glibc23

192.168.0.1 - Management (MGM) node
MySQL-ndb-tools-5.1.11-0.glibc23
MySQL-ndb-management-5.1.11-0.glibc23

192.168.0.3 - Data (NDBD) node "A"
MySQL-ndb-tools-5.1.11-0.glibc23
MySQL-client-5.1.11-0.glibc23
MySQL-ndb-storage-5.1.11-0.glibc23
MySQL-server-5.1.11-0.glibc23 ( Optional )

192.168.0.4 - Data (NDBD) node "B"
MySQL-ndb-tools-5.1.11-0.glibc23
MySQL-client-5.1.11-0.glibc23
MySQL-ndb-storage-5.1.11-0.glibc23
MySQL-server-5.1.11-0.glibc23 ( Optional )

Step by Step configuration:

Each data node or SQL node requires a my.cnf file that provides two pieces of information:
- A connect string to locate the MGM node
- A line telling the MySQL server on this host to run in NDB mode

The my.cnf file for data and SQL nodes

For each data node and SQL node in our example setup (i.e. Data (NDBD) node "A" 192.168.0.3 and Data (NDBD) node "B" 192.168.0.4), my.cnf should look like this:

# Options for mysqld process:
[MYSQLD]
ndbcluster # run NDB engine
ndb-connectstring=192.168.0.1 # location of MGM node

# Options for ndbd process:
[MYSQL_CLUSTER]
ndb-connectstring=192.168.0.1 # location of MGM node

Now, you need to start the actual cluster:

The Management Node configuration:

You need to create a directory on the MGM node in which the configuration file will live, and then create the file itself.

# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini

# Options affecting ndbd processes on all data nodes:
[NDBD DEFAULT]
NoOfReplicas=2 # Number of replicas


# TCP/IP options:
[TCP DEFAULT]

# Management process options:
[NDB_MGMD]
hostname=192.168.0.1 # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node logfiles

# Options for data node "A":
[NDBD]
# (one [NDBD] section per data node)
hostname=192.168.0.3 # Hostname or IP address
DataDir=/var/lib/mysql-cluster # Directory for this data node's datafiles
BackupDataDir=/var/lib/mysql-cluster/backup
#DataMemory=256M # Set this as per your server h/w


# Options for data node "B":
[NDBD]
hostname=192.168.0.4 # Hostname or IP address
DataDir= /var/lib/mysql-cluster # Directory for this data node's datafiles
BackupDataDir=/var/lib/mysql-cluster/backup
#DataMemory = 256M # Set this as per your server h/w

# SQL node options:
[MYSQLD]
hostname=192.168.0.2 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[MYSQLD] # for API
[MYSQLD]
[MYSQLD]

Then, you need to start the cluster that has been configured; each cluster node process must be started separately.

It is recommended that the management node be started first, followed by the storage nodes, and then finally by any SQL nodes:

In MGM node (192.168.0.1) pass the following command:

# ndb_mgmd -f /var/lib/mysql-cluster/config.ini

-OR-

# cd /var/lib/mysql-cluster
# ndb_mgmd


Either way, ndb_mgmd must be able to find the configuration file: pass it with -f, or start the process from the directory containing config.ini.

Now, start the ndbd process in each data node A and B (192.168.0.3 & 192.168.0.4)

# ndbd --initial


Then, start mysql server process on the SQL node (192.168.0.2)

# service mysql start


Verify that SQL node is started with support of ndb cluster:

# mysql -u root -p

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8 to server version: 5.1.11-beta

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SHOW ENGINES\G
*************************** 10. row ***************************
Engine: ndbcluster
Support: YES
Comment: Clustered, fault-tolerant tables
Transactions: YES
XA: NO
Savepoints: NO
************************************************

If all has been setup correctly, the cluster should be operational.

Type the below mentioned command from Management Node:

# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.4 (Version: 5.1.11, Node group: 0)
id=3 @192.168.0.3 (Version: 5.1.11, Node group: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.1 (Version: 5.1.11)

[mysqld(API)] 3 node(s)
id=4 @192.168.0.2 (Version: 5.1.11)

ndb_mgm>




mysql> SHOW PROCESSLIST\G
*************************** 1. row ***************************
Id: 1
User: system user
Host:
db:
Command: Daemon
Time: 1
State: Waiting for event from ndbcluster
Info: NULL
*************************************************

How to shutdown and restart the cluster:

In MGM node use the below mentioned command:

# ndb_mgm -e shutdown



The above command causes the ndb_mgm, ndb_mgmd, and any ndbd processes to terminate gracefully.

To restart the cluster:

From MGM node (192.168.0.1)

# ndb_mgmd -f /var/lib/mysql-cluster/config.ini


From Data Node (192.168.0.3 & 192.168.0.4)

# ndbd (do not use --initial, as it is used only the first time)


From SQL node:

# service mysql start

Replication in MySQL


Replication allows databases on one MySQL server to be duplicated on another. Starting in version 3.23.15, MySQL supports one-way replication. One server acts as the master, while one or more other servers act as slaves. The master server keeps a binary log of updates. It also maintains an index file of binary logs to keep track of log rotation. Each slave, upon connecting, informs the master where it left off since the last successfully propagated update, catches up any updates that have occurred since then, and then blocks and waits for the master to notify it of new updates. When using replication, all updates to the replicated tables should be performed on the master server.



Benefits of Replication:



  • Robustness: is increased with a master/slave setup. In the event of problems, you can switch to the slave as a backup.

  • Speed: Extra speed is achieved by splitting the load of client queries between master and slave servers. SELECT queries may be sent to the slave, while queries that modify data should still be sent to the master, so that master and slave do not get out of synchronization.

  • System Administration: You can take non-disturbing backups of the system by backing up the slave instead of the master.



Replication Implementation Overview:



MySQL replication is based on the master server keeping track of all changes to your database (updates, deletes, etc.) in the binary logs. Therefore, to use replication, you must enable binary logging on the master server. Each slave server receives from the master the saved updates that the master has recorded in its binary log, so that the slave can execute the same updates on its copy of the data. Binary log is simply a record starting from the fixed point in time at which you enable binary logging. Any slaves that you set up will need copies of the databases on your master as they existed at the moment you enabled binary logging on the master. If you start your slaves with data that is not the same as what was on the master when the binary log was started, your slaves may fail.



After the slave has been set up with a copy of the master's data, it will simply connect to the master and wait for updates to process. If the master goes away or the slave loses connectivity with your master, it will keep trying to connect periodically until it is able to reconnect and resume listening for updates. Each slave keeps track of where it left off. The master server has no knowledge of how many slaves there are or which ones are up to date at any given time.



Replication Setup:



1.    Make sure you have a recent version of MySQL installed on the master and slaves, and that these versions are compatible according to the table shown below.



Master/ Slave replication compatibility between different versions of MySQL





                        Master           Master                     Master
                        3.23.33 and up   4.0.3 and up or any 4.1.x  5.0.0
Slave 3.23.33 and up    yes              no                         no
Slave 4.0.3 and up      yes              yes                        no
Slave 5.0.0             yes              yes                        yes



But it is recommended to use the same version for both the master and the slave.

2.    Make sure the [mysqld] section of the `my.cnf' file on the master host includes a log-bin option. The section should also have a server-id=master_id option, where master_id must be a positive integer value from 1 to 2^32 - 1.



[mysqld]
log-bin
server-id=1



If those options are not present, add them and restart the server.



3.    Take a full database backup from the master server using the option --master-data=2. Below is the command:



mysqldump -u <user> -p --all-databases --master-data=2 > fulldump.sql



4.    Set up an account on the master server that the slave server can use to connect. This account must be given the REPLICATION SLAVE privilege.



Suppose that your domain is mydomain.com and you want to create an account with a username of repl, such that slave servers can use the account to access the master server from any host in your domain using a password of slavepass. To create the account, use the GRANT statement:



mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.*
    -> TO repl@'%.mydomain.com' IDENTIFIED BY 'slavepass';







5.    Stop the server that is to be used as a slave server and add the following to its `my.cnf' file:



[mysqld]
server-id=2



The slave's server-id value, like the master's, must be a positive integer from 1 to 2^32 - 1. In addition, it is very important that the ID of the slave be different from the ID of the master. If you are setting up multiple slaves, each one must have a unique server-id value that differs from that of the master and from each of the other slaves.



6.    Start the slave server.



#   service mysql start





7.    Restore the database to slave using commands

mysql -u <user> -p < fulldump.sql

8.    Execute the following statement on the slave, replacing the values within <> with the actual values relevant to your system:



mysql> CHANGE MASTER TO
    ->     MASTER_HOST='<master_host>',
    ->     MASTER_USER='<replication_user>',
    ->     MASTER_PASSWORD='<replication_password>',
    ->     MASTER_LOG_FILE='<recorded_log_file_name>',
    ->     MASTER_LOG_POS=<recorded_log_position>;



This statement configures the slave with the connection parameters for the master.
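For illustration, the same statement with hypothetical values filled in (the host, user, and password here are made up; the log file name and position would be taken from the "-- CHANGE MASTER TO ..." comment that --master-data=2 writes into fulldump.sql):

```
mysql> CHANGE MASTER TO
    ->     MASTER_HOST='192.168.0.10',
    ->     MASTER_USER='repl',
    ->     MASTER_PASSWORD='slavepass',
    ->     MASTER_LOG_FILE='mysql-bin.000001',
    ->     MASTER_LOG_POS=107;
```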





9.    Start the slave threads.



mysql> START SLAVE;





If you have forgotten to set server-id for the master, slaves will not be able to connect to it.

If you have forgotten to set server-id for the slave, you will get the following error in its error log:



Warning: one should set server_id to a non-0 value if

master_host is set.  The server will not act as a slave.



You will also find error messages in the slave's error log if it is not able to replicate for any other reason. Once a slave is replicating, you will find in its data directory one file called `master.info' and another called `relay-log.info'. The slave uses these two files to keep track of how much of the master's binary log it has processed.